1.
Samsung Electronics
–
Samsung Electronics (Korean: 삼성전자; Hanja: 三星電子) is a South Korean multinational electronics company headquartered in Suwon, South Korea. Through an extremely complicated ownership structure that includes some circular ownership, it is the flagship division of the Samsung Group. It is the second-largest information technology company by revenue. Samsung Electronics has assembly plants and sales networks in 80 countries, and since 2012 Kwon Oh-hyun has served as the company's CEO. It is the world's largest manufacturer of mobile phones and smartphones, fueled by the popularity of its Samsung Galaxy line of devices; Samsung has been the world's largest television manufacturer since 2006 and the world's largest manufacturer of mobile phones since 2011. Samsung Electronics is a major part of the South Korean economy. Samsung Electric Industries was established as a subsidiary of the Samsung Group in 1969 in Suwon, and its early products were electronic and electrical appliances, including televisions, calculators, refrigerators, air conditioners and washing machines. In 1970, Samsung Group established another subsidiary, Samsung-NEC, jointly with Japan's NEC Corporation to manufacture home appliances. In 1974, the group expanded into the semiconductor business by acquiring Korea Semiconductor, one of the first chip-making facilities in the country at the time. The acquisition of Korea Telecommunications, a switching system producer, was completed at the start of the next decade, in 1980. By 1981, Samsung Electric Industries had manufactured over 10 million black-and-white televisions, and one year later Samsung became the third company in the world to develop a 64 kb DRAM. Samsung Electronics launched its first mobile phone in 1988 in the South Korean market. Sales were initially poor, and by the early 1990s Motorola held a market share of over 60 percent in the country's mobile phone market, compared to just 10 percent for Samsung. Samsung's mobile phone division also struggled with poor quality and inferior products until the mid-1990s, when Lee Kun-Hee decided that Samsung needed to change strategy. The company shelved the production of many under-selling product lines and instead pursued a process of designing and manufacturing components; in addition, Samsung outlined a 10-year plan to shrug off its image as a budget brand and to challenge Sony as the world's largest consumer electronics manufacturer. It was hoped that in this way Samsung would gain an understanding of how products are made, and this patient vertical-integration strategy of manufacturing components bore fruit for Samsung in the late 2000s. As part of its shift in strategy, the company devised a plan to sponsor major sporting events; one such sponsorship was for the 1998 Winter Olympics held in Nagano. Samsung had a number of technological breakthroughs, particularly in the field of memory, which are commonplace in most electrical products today. These include the world's first 64 Mb DRAM in 1992 and 256 Mb DRAM in 1994; in 2004, Samsung developed the world's first 8 Gb NAND memory chip, and a manufacturing deal was struck with Apple in 2005. In 2005, Samsung Electronics surpassed its Japanese rival Sony for the first time to become the world's twentieth-largest and most popular consumer brand, and in 2007 Samsung Electronics became the world's second-largest mobile-phone maker, overtaking Motorola for the first time.
2.
Samsung Galaxy J
–
The Samsung Galaxy J is a smartphone manufactured by Samsung that runs on the Android platform. The phone was developed for the Japanese carrier NTT DoCoMo and released in the fall of 2013. The Galaxy J is fitted with a 5-inch Full HD Super AMOLED display; unlike the Samsung Galaxy Note 3, it does not support USB 3.0 connectivity. The phone was released with Android 4.3 Jelly Bean. See also: List of Android devices, Samsung Galaxy Note 3, Nexus 5.
3.
Comparison of mobile phone standards
–
A comparison of mobile phone standards can be done in many ways. Global System for Mobile Communications (GSM) and IS-95 were the two most prevalent 2G mobile communication technologies in 2007; in 3G, the most prevalent technology was UMTS, with CDMA2000 in close contention. All radio access technologies have to solve the same problem: dividing the finite RF spectrum among multiple users as efficiently as possible. GSM uses TDMA and FDMA for user and cell separation; UMTS, IS-95 and CDMA2000 use CDMA. Time-division multiple access provides multiuser access by chopping up the channel into sequential time slices; each user of the channel takes turns to transmit and receive signals. In reality, only one user is using the channel at any specific moment. This is analogous to time-sharing on a computer server. Frequency-division multiple access provides multiuser access by separating the frequencies used; this is employed in GSM to separate cells, which then use TDMA to separate users within the cell. Code-division multiple access instead spreads each user's data across the whole channel using a pseudorandom code; the receiver undoes the randomization to collect the bits together and produce the original data. As the codes are pseudorandom and selected in such a way as to cause minimal interference to one another, multiple users can talk at the same time. This does add signal noise, forcing all users to use more power. Orthogonal frequency-division multiple access uses bundling of multiple small frequency bands that are orthogonal to one another to provide for separation of users. The users are multiplexed in the frequency domain by allocating specific sub-bands to individual users. This is often enhanced by also performing TDMA and changing the allocation periodically so that different users get different sub-bands at different times. A classic analogy illustrates the difference between TDMA and CDMA: imagine couples conversing in a single room, where the room represents the available bandwidth. TDMA: each speaker takes turns talking to a listener; the speaker talks for a short time and then stops to let another couple talk. There is never more than one speaker talking in the room; the drawback is that this limits the practical number of discussions in the room. CDMA: any speaker can talk at any time, but each couple uses a different language. Each listener can only understand the language of their partner. As more and more couples talk, the background noise gets louder, but because of the difference in languages, the conversations do not mix.
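To make the CDMA description above concrete, here is a minimal, self-contained Python sketch of spreading and despreading. It uses perfectly orthogonal Walsh codes as an idealization of the pseudorandom codes mentioned above; the user bits and code length are invented for illustration.

```python
# A minimal sketch of the CDMA idea described above: each user spreads its
# data with its own code, everyone transmits at once, and the receiver
# recovers a user's bit by correlating with that user's code. Orthogonal
# Walsh codes are used here for clarity; real systems use pseudorandom codes.

def walsh_codes(order):
    """Return 2**order mutually orthogonal +/-1 codes of length 2**order."""
    codes = [[1]]
    for _ in range(order):
        codes = [c + c for c in codes] + [c + [-x for x in c] for c in codes]
    return codes

codes = walsh_codes(2)               # 4 codes of length 4
bits = {0: +1, 1: -1, 2: +1}         # one data bit (+1/-1) per active user

# All users transmit simultaneously: the channel is the chip-by-chip sum
# of every user's bit multiplied by that user's spreading code.
chips = [sum(bits[u] * codes[u][i] for u in bits) for i in range(len(codes[0]))]

# Despreading: correlate the received chips with the wanted user's code.
# Orthogonality makes the other users' contributions cancel out.
for u in bits:
    correlation = sum(chips[i] * codes[u][i] for i in range(len(chips)))
    print(f"user {u}: sent {bits[u]:+d}, recovered {correlation // len(chips):+d}")
```

With ideal orthogonal codes the cross-talk cancels exactly; with the pseudorandom codes used in practice it only averages out, which is the added noise the paragraph above refers to.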
4.
GSM
–
As of 2014, GSM has become the de facto global standard for mobile communications, with over 90% market share, operating in over 219 countries and territories. The standard expanded over time to include data communications, first by circuit-switched transport, then by packet data transport via GPRS. Subsequently, the 3GPP developed third-generation UMTS standards, followed by fourth-generation LTE Advanced standards. GSM is a trademark owned by the GSM Association; it may also refer to the most common voice codec used on GSM networks. The decision to develop a continental standard eventually resulted in a unified, open, standard-based network which was larger than that in the United States. In February 1987, Europe produced the very first agreed GSM Technical Specification, and the MoU drew in mobile operators from across Europe to pledge to invest in new GSM networks to an ambitious common date. In 1989, the Groupe Spécial Mobile committee was transferred from CEPT to the European Telecommunications Standards Institute. In parallel, France and Germany signed a joint development agreement in 1984 and were joined by Italy and the UK in 1986. In 1986, the European Commission proposed reserving the 900 MHz spectrum band for GSM. The former Finnish prime minister Harri Holkeri made the world's first GSM call on July 1, 1991, calling Kaarina Suonio using a network built by Telenokia and Siemens and operated by Radiolinja. The following year, 1992, saw the sending of the first short messaging service (SMS) message. Work began in 1991 to expand the GSM standard to the 1800 MHz frequency band, and the first 1800 MHz network, called DCS 1800, became operational in the UK by 1993. Also that year, Telecom Australia became the first network operator to deploy a GSM network outside Europe, and the first practical hand-held GSM mobile phone became available. In 1995, fax, data and SMS messaging services were launched commercially; in the same year, the GSM Association was formed. Pre-paid GSM SIM cards were launched in 1996, and worldwide GSM subscribers passed 100 million in 1998. In 2000, the first commercial GPRS services were launched and the first GPRS-compatible handsets became available for sale. In 2001, the first UMTS network was launched, a 3G technology that is not part of GSM, and worldwide GSM subscribers exceeded 500 million. In 2002, the first Multimedia Messaging Service (MMS) was introduced and the first GSM network in the 800 MHz frequency band became operational. EDGE services first became operational in a network in 2003, and the number of worldwide GSM subscribers exceeded 1 billion in 2004. By 2005, GSM networks accounted for more than 75% of the cellular network market. In 2005, the first HSDPA-capable network also became operational; the first HSUPA network launched in 2007. Worldwide GSM subscribers exceeded three billion in 2008. GSM is a second-generation standard employing time-division multiple access (TDMA) spectrum sharing, issued by the European Telecommunications Standards Institute. GSM, for the first time, set a common standard for Europe for wireless networks. It was also adopted by many countries outside Europe, and this allowed subscribers to use other GSM networks that have roaming agreements with each other.
5.
High Speed Packet Access
–
A further improved 3GPP standard, Evolved High Speed Packet Access (HSPA+), was released late in 2008, with subsequent worldwide adoption beginning in 2010. The newer standard allows bit rates to reach as high as 337 Mbit/s in the downlink and 34 Mbit/s in the uplink; however, these speeds are rarely achieved in practice. The first HSPA specifications supported increased peak data rates of up to 14 Mbit/s in the downlink and 5.76 Mbit/s in the uplink. They also reduced latency and provided up to five times more capacity in the downlink. HSDPA was introduced with 3GPP Release 5, which also accompanies an improvement on the uplink providing a new bearer of 384 kbit/s; the previous maximum bearer was 128 kbit/s. As well as improving data rates, HSDPA also decreases latency, and so the round-trip time for applications. HSPA+, introduced in 3GPP Release 7, further increases data rates by adding 64QAM modulation, MIMO and Dual-Cell HSDPA operation; even higher speeds of up to 337.5 Mbit/s are possible with Release 11 of the 3GPP standards. The first phase of HSDPA was specified in 3GPP Release 5. Phase one introduces new basic functions and is aimed at achieving peak data rates of 14.0 Mbit/s with significantly reduced latency. The improvement in speed and latency reduces the cost per bit. Further new features are the High Speed Downlink Shared Channels, the adaptive modulation schemes QPSK and 16QAM, and the High Speed Medium Access protocol in the base station. The upgrade to HSDPA is often just an update for WCDMA networks. In general, voice calls are prioritized over data transfer. The following table is derived from table 5.1a of Release 11 of 3GPP TS 25.306 and shows maximum data rates of different device classes and the combination of features by which they are achieved. The per-cell, per-stream data rate is limited by the maximum number of bits of an HS-DSCH transport block received within an HS-DSCH TTI; for example, Category 10 can decode 27,952 bits / 2 ms = 13.976 Mbit/s. Categories 1–4 and 11 have inter-TTI intervals of 2 or 3. Dual-Cell and 2x2 MIMO each multiply the maximum data rate by 2, because multiple independent transport blocks are transmitted over different carriers or spatial streams, respectively. The data rates given in the table are rounded to one decimal point. Further UE categories were defined from 3GPP Release 7 onwards as Evolved HSPA and are listed in Evolved HSDPA UE Categories. As of 28 August 2009, 250 HSDPA networks had commercially launched mobile services in 109 countries; 169 HSDPA networks supported 3.6 Mbit/s peak downlink data throughput, and a growing number were delivering peak downlink rates of 21 Mbit/s and 28 Mbit/s. CDMA2000 EV-DO networks had the lead on performance, and Japanese providers were highly successful benchmarks for it.
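The per-category arithmetic quoted above (transport block size divided by the TTI, multiplied for Dual-Cell and MIMO) can be sketched as follows. The Category 10 transport-block figure is the one given in the text; the function name and parameter names are illustrative, not taken from the 3GPP specification.

```python
# A small sketch of the HSDPA peak-rate arithmetic described above.

def hsdpa_peak_rate(transport_block_bits, tti_ms=2.0, carriers=1, mimo_streams=1):
    """Peak rate = bits per TTI / TTI length, multiplied by the number of
    independent carriers (Dual-Cell) and spatial streams (MIMO)."""
    per_stream = transport_block_bits / (tti_ms / 1000.0)   # bit/s
    return per_stream * carriers * mimo_streams

cat10 = hsdpa_peak_rate(27952)                                   # figure from the text
dual_cell_mimo = hsdpa_peak_rate(27952, carriers=2, mimo_streams=2)

print(f"Cat 10: {cat10 / 1e6:.3f} Mbit/s")                       # ~13.976 Mbit/s
print(f"with Dual-Cell + 2x2 MIMO: {dual_cell_mimo / 1e6:.2f} Mbit/s")
```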
6.
LTE (telecommunication)
–
In telecommunications, Long-Term Evolution (LTE) is a standard for high-speed wireless communication for mobile phones and data terminals, based on the GSM/EDGE and UMTS/HSPA technologies. It increases the capacity and speed using a different radio interface together with core network improvements. The standard is developed by the 3GPP and is specified in its Release 8 document series, with minor enhancements described in Release 9. LTE is the upgrade path for carriers with both GSM/UMTS networks and CDMA2000 networks. The different LTE frequencies and bands used in different countries mean that only multi-band phones are able to use LTE in all countries where it is supported. LTE is commonly marketed as 4G LTE, but as specified in the 3GPP Release 8 and 9 document series it does not meet the technical criteria of a 4G wireless service; those requirements were set forth by the ITU-R organization in the IMT-Advanced specification. The LTE Advanced standard formally satisfies the ITU-R requirements to be considered IMT-Advanced; to differentiate LTE Advanced and WiMAX-Advanced from current 4G technologies, the ITU has defined them as "True 4G". LTE stands for Long Term Evolution and is a trademark owned by ETSI for the wireless data communications technology; however, other nations and companies do play an active role in the LTE project. The goal of LTE was to increase the capacity and speed of data networks using new DSP techniques. A further goal was the redesign and simplification of the network architecture to an IP-based system with significantly reduced transfer latency compared to the 3G architecture. The LTE wireless interface is incompatible with 2G and 3G networks. LTE was first proposed by NTT DoCoMo of Japan in 2004, and studies on the new standard officially commenced in 2005. Initially, CDMA operators planned to upgrade to rival standards called UMB and WiMAX. The evolution of LTE is LTE Advanced, which was standardized in March 2011; services were expected to commence in 2013. An additional evolution, known as LTE Advanced Pro, was approved in 2015. The LTE specification provides downlink peak rates of 300 Mbit/s and uplink peak rates of 75 Mbit/s. LTE has the ability to manage fast-moving mobiles and supports multi-cast and broadcast streams. LTE supports scalable carrier bandwidths, from 1.4 MHz to 20 MHz, and the simpler architecture results in lower operating costs. In 2004, NTT DoCoMo of Japan proposed LTE as the international standard. In September 2006, Siemens Networks showed, in collaboration with Nomor Research, the first live emulation of an LTE network to the media and investors; as live applications, two users streaming an HDTV video in the downlink and playing a game in the uplink were demonstrated.
7.
Form factor (mobile phones)
–
The form factor of a mobile phone is its size, shape, and style, as well as the layout and position of its major components. There are three major form factors – bar phones, flip phones, and sliders – as well as sub-categories of these forms. A bar phone takes the shape of a cuboid, usually with rounded corners and/or edges; the name is derived from its resemblance to a chocolate bar in size. This form factor is used by a variety of manufacturers, such as Nokia. Bar-type mobile phones commonly have the screen and keypad on a single face. The Samsung SPH-M620 has a unique bar style, offering different devices on either side of the bar: a phone on one side and a digital audio player on the other. Sony Ericsson also had a well-known "MarsBar" phone, model CM-H333. "Brick" is a slang term almost always used to refer to large, outdated bar-type phones, typically early mobile phones with large batteries and electronics. Such early mobile phones, such as the Motorola DynaTAC, have been displaced by newer models which offer greater portability thanks to smaller antennas. A slate or touchscreen phone is a subset of the bar form that, like a tablet computer, has few physical buttons, instead relying upon a touchscreen. The first commercially available touchscreen phone was the IBM Simon Personal Communicator; the iPhone, released in 2007, is largely responsible for the influence and achievement of this design as it is currently conceived. Since the mid-2010s, almost all smartphones come in touchscreen slate form. The phablet or smartlet is a subset of the slate/touchscreen form, which in turn is a subset of the bar form. A portmanteau of the words phone and tablet, phablets are a class of mobile device designed to combine or straddle the functions of a smartphone and tablet; phablets typically have screens that measure between 5.5 and 7 inches. A flip or clamshell phone consists of two or more sections that are connected by hinges, allowing the phone to flip open then fold closed in order to become more compact. When flipped open, the speaker and microphone are placed closer to the operator's ear and mouth; when flipped shut, the phone becomes smaller and more portable than when it is opened for use. Motorola was once the owner of a trademark on the term flip phone and was the manufacturer of the famed StarTAC flip phone. In 2010, Motorola introduced a different kind of flip phone with its Backflip smartphone: when closed, one side is the screen and the other is a physical QWERTY keyboard; the hinge is on a long edge of the phone instead of a short edge, and when flipped out the screen is above the keyboard. As "clamshell" came to be used as a generic term for this form factor, "flip phone" referred to phones that opened on the vertical axis.
8.
Smartphone
–
A smartphone is a mobile phone with an advanced mobile operating system that combines features of a personal computer operating system with other features useful for mobile or handheld use. Smartphones can access the Internet and can run a variety of third-party software components; they typically have a color display with a graphical user interface that covers more than 76% of the front surface. In 1999, the Japanese firm NTT DoCoMo released the first smartphones to achieve mass adoption within a country; smartphones became widespread in the late 2000s. Most of those produced from 2012 onward have high-speed mobile broadband 4G LTE and motion sensors, among other features. In the third quarter of 2012, one billion smartphones were in use worldwide, and global smartphone sales surpassed the sales figures for regular cell phones in early 2013. Devices that combined telephony and computing were first conceptualized by Nikola Tesla in 1909 and Theodore Paraskevakos in 1971, patented in 1974, and offered for sale beginning in 1993. Paraskevakos was the first to introduce the concepts of intelligence and data processing in telephones; his devices were installed at Peoples' Telephone Company in Leesburg, Alabama and were demonstrated to several telephone companies. The original and historic working models are still in the possession of Paraskevakos. The first mobile phone to incorporate PDA features was a prototype developed by Frank Canova in 1992 while at IBM and demonstrated that year at the COMDEX computer industry trade show; it included PDA features and other mobile applications such as maps and stock reports. A refined version was marketed to consumers in 1994 by BellSouth under the name Simon Personal Communicator. The Simon was the first commercially available device that could be properly referred to as a smartphone, although it was not called that in 1994. The term "smart phone" appeared in print as early as 1995. In the mid-to-late 1990s, many mobile phone users carried a separate dedicated PDA device, running early versions of operating systems such as Palm OS, BlackBerry OS or Windows CE/Pocket PC; these operating systems would later evolve into mobile operating systems. In March 1996, Hewlett-Packard released the OmniGo 700LX, a modified HP 200LX palmtop PC that supported a Nokia 2110 phone, with ROM-based software to support it. It had a 640×200 resolution, CGA-compatible, four-shade gray-scale LCD screen and could be used to place and receive calls; it was also 100% DOS 5.0 compatible, allowing it to run thousands of existing software titles, including early versions of Windows. In August 1996, Nokia released the Nokia 9000 Communicator, a cellular phone based on the Nokia 2110 with an integrated PDA based on the PEN/GEOS 3.0 operating system from Geoworks. The two components were attached by a hinge in what became known as a clamshell design, with the display above. The PDA provided e-mail, calendar, address book, calculator and notebook applications, as well as text-based Web browsing; when closed, the device could be used as a digital cellular phone. In June 1999, Qualcomm released the pdQ Smartphone, a CDMA digital PCS smartphone with an integrated Palm PDA. Subsequent landmark devices included the Ericsson R380 by Ericsson Mobile Communications, the first device marketed as a smartphone, which combined the functions of a mobile phone and a PDA, and the Kyocera 6035, introduced by Palm, Inc., which combined a PDA with a mobile phone, operated on the Verizon network, and supported limited Web browsing.
9.
Mobile operating system
–
A mobile operating system is an operating system for smartphones, tablets, or other mobile devices; the distinction is becoming blurred in some operating systems that are hybrids made for both uses. Mobile operating systems, or even just the smartphones running them, are now, as of late 2016, the most-used kind of operating system, with traditional desktop OSes now the minority (see usage share of operating systems). However, variations occur in popularity by region, while desktop-minority usage also applies only on some days in, for example, the US and UK. By the end of 2016, over 430 million smartphones had been sold, with 81.7 percent running Android, 17.9 percent running iOS, 0.3 percent running Windows Mobile and the other OSes covering 0.1 percent. Research has shown that these systems may contain a range of security vulnerabilities permitting malicious base stations to gain high levels of control over the mobile device. Mobile operating system milestones mirror the development of mobile phones and smartphones. 1994 – The first smartphone, the IBM Simon, has a touchscreen, email, and PDA features. 1996 – The Palm Pilot 1000 personal digital assistant is introduced with the Palm OS mobile operating system; the first Windows CE Handheld PC devices are introduced. 1999 – The Nokia S40 platform is introduced officially along with the Nokia 7110. 2000 – Symbian becomes the first modern mobile OS on a smartphone with the launch of the Ericsson R380. 2001 – The Kyocera 6035 is the first smartphone with Palm OS. 2002 – Microsoft's first Windows CE smartphones are introduced. 2005 – Nokia introduces Maemo OS on the first Internet tablet, the N770. 2007 – The Apple iPhone with iOS is introduced as an iPhone, mobile phone and Internet communicator; the Open Handset Alliance is formed by Google, HTC, Sony, Dell, Intel, Motorola, Samsung and LG. 2009 – Palm introduces webOS with the Palm Pre (by 2012, webOS devices were discontinued); Samsung announces the Bada OS with the introduction of the Samsung S8500. November – Windows Phone OS phones are released, but are not compatible with the prior Windows Mobile OS. July – MeeGo, a mobile Linux distribution combining Maemo and Moblin, is introduced with the Nokia N9, a collaboration of Nokia, Intel, and the Linux Foundation. September – Samsung, Intel, and the Linux Foundation announce that their efforts will shift from Bada and MeeGo to Tizen during 2011 and 2012. October – The Mer project is announced, based on a core for building products composed of Linux, HTML5, QML, and JavaScript. July – Mozilla announces that the formerly named Boot to Gecko is now Firefox OS, with several handset OEMs on board. September – Apple releases iOS 6. January – BlackBerry releases its new operating system for smartphones, BlackBerry 10.
10.
Android (operating system)
–
Android is a mobile operating system developed by Google, based on the Linux kernel and designed primarily for touchscreen mobile devices such as smartphones and tablets. In addition to touchscreen devices, Google has further developed Android TV for televisions and Android Auto for cars. Variants of Android are also used on notebooks, game consoles, digital cameras and other electronics. Beginning with the first commercial Android device in September 2008, the operating system has gone through multiple major releases, with the current version being 7.0 Nougat, released in August 2016. Android applications can be downloaded from the Google Play store, which features over 2.7 million apps as of February 2017. Android has been the best-selling OS on tablets since 2013, and runs on the vast majority of smartphones; in September 2015, Android had 1.4 billion monthly active users. Android is popular with technology companies that require a ready-made, low-cost and customizable operating system for high-tech devices. The success of Android has also made it a target for patent litigation. Android Inc. was founded in Palo Alto, California in October 2003 by Andy Rubin, Rich Miner, Nick Sears, and Chris White. Rubin described the Android project as having tremendous potential in developing smarter mobile devices that are aware of their owner's location. The early intentions of the company were to develop an operating system for digital cameras. Despite the past accomplishments of the founders and early employees, Android Inc. operated secretly, and that same year Rubin ran out of money; Steve Perlman, a friend of Rubin, brought him $10,000 in cash in an envelope. In July 2005, Google acquired Android Inc. for at least $50 million, and its key employees, including Rubin, Miner and White, joined Google as part of the acquisition. Not much was known about Android at the time, with Rubin having only stated that they were making software for mobile phones. At Google, the team led by Rubin developed a mobile device platform powered by the Linux kernel. Google marketed the platform to handset makers and carriers on the promise of providing a flexible, upgradeable system; Google had lined up a series of hardware component and software partners and signaled to carriers that it was open to various degrees of cooperation. Speculation about Google's intention to enter the mobile communications market continued to build through December 2006. In September 2007, InformationWeek covered an Evalueserve study reporting that Google had filed several patent applications in the area of mobile telephony. The first commercially available smartphone running Android was the HTC Dream, also known as the T-Mobile G1, announced on September 23, 2008. Since 2008, Android has seen numerous updates which have improved the operating system, adding new features. Each major release is named in alphabetical order after a dessert or sugary treat, with the first few Android versions being called Cupcake, Donut and Eclair. In 2010, Google launched its Nexus series of devices, a lineup in which Google partnered with different device manufacturers to produce new devices and introduce new Android versions.
11.
Android Marshmallow
–
Android Marshmallow is the sixth major version of the Android operating system. First released as a developer preview build on May 28, 2015, it was officially released on October 5, 2015. Marshmallow primarily focuses on improving the overall user experience of its predecessor. As of 6 March 2017, 31.3% of devices accessing Google Play run Android 6.0. The first developer preview build, codenamed Android M, was unveiled and released at Google I/O on May 28, 2015, for the Nexus 5 and Nexus 6 smartphones, the Nexus 9 tablet, and the Nexus Player set-top box; the second developer preview was released on July 9, 2015. On September 29, 2015, Google unveiled the launch devices for Marshmallow: the LG-produced Nexus 5X and the Huawei-produced Nexus 6P, alongside Google's own Pixel C tablet. Android 6.0.1, a patch featuring security fixes and support for Unicode 8.0 emoji, was released later. A new Assist API allows information from the currently open app, including text and on-screen content, to be made available to a designated assistant application. This system is used by the Google Search app feature Google Now on Tap: by holding the Home button or using a voice command, on-screen cards are generated which display information, suggestions, and actions related to the content. Direct Share allows Share menus to display recently used combinations of contacts as direct targets. Adoptable storage allows a newly inserted SD card or other secondary storage medium to be designated as either portable or internal storage. When designated as internal storage, the medium is reformatted with an encrypted ext4 file system, existing data are migrated to it, and normal operation of the device becomes dependent on the presence of the medium; apps and operating system functions will not function properly if the adopted storage device is removed. If the user loses access to the storage medium, the adopted storage can be "forgotten". Android Marshmallow introduces a redesigned application permissions model: apps are no longer automatically granted all of their specified permissions at installation time. An opt-in system is now used, in which users are prompted to grant or deny individual permissions to an application when they are needed for the first time; applications remember the grants, which can be revoked by the user at any time. The new permissions model is used only by applications developed for Marshmallow using its software development kit; older apps continue to use the previous all-or-nothing approach. Permissions can still be revoked for those apps, though this might prevent them from working properly, and a warning is displayed to that effect. Marshmallow also introduces new power-saving behavior for idle devices; in this state, network connectivity and background processing are restricted. Additionally, network access by apps is deferred if the user has not recently interacted with the app.
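As a rough illustration of the opt-in permission flow described above (this is a toy model, not the actual Android API), the behaviour could be sketched like this; the class and permission names are invented.

```python
# A toy model of Marshmallow-style runtime permissions: nothing is granted
# at install time, the user is prompted the first time a permission is
# actually needed, the grant is remembered, and it can be revoked later.

class RuntimePermissions:
    def __init__(self):
        self.granted = set()          # permissions the user has approved so far

    def request(self, permission, prompt_user):
        """Ask the user only if the permission has not been granted yet."""
        if permission in self.granted:
            return True
        if prompt_user(permission):   # dialog shown at first use
            self.granted.add(permission)
            return True
        return False                  # denied; the app must cope without it

    def revoke(self, permission):
        """The user may withdraw a grant at any time (e.g. from Settings)."""
        self.granted.discard(permission)

perms = RuntimePermissions()
print(perms.request("CAMERA", prompt_user=lambda p: True))    # True, prompt shown
print(perms.request("CAMERA", prompt_user=lambda p: False))   # True, no new prompt
perms.revoke("CAMERA")
print(perms.request("CAMERA", prompt_user=lambda p: False))   # False after revocation
```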
12.
Central processing unit
–
The computer industry has used the term "central processing unit" at least since the early 1960s. The form, design and implementation of CPUs have changed over the course of their history. Most modern CPUs are microprocessors, meaning they are contained on a single integrated circuit (IC) chip. An IC that contains a CPU may also contain memory, peripheral interfaces, and other components of a computer. Some computers employ a multi-core processor, which is a single chip containing two or more CPUs called cores; in that context, one can speak of such single chips as sockets. Array processors or vector processors have multiple processors that operate in parallel, and there also exists the concept of virtual CPUs, which are an abstraction of dynamically aggregated computational resources. Early computers such as the ENIAC had to be physically rewired to perform different tasks. Since the term CPU is generally defined as a device for software execution, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer. The idea of a stored-program computer was already present in the design of J. Presper Eckert and John William Mauchly's ENIAC, but was initially omitted so that the machine could be finished sooner. On June 30, 1945, before ENIAC was completed, mathematician John von Neumann distributed the paper entitled First Draft of a Report on the EDVAC; it was the outline of a stored-program computer that would eventually be completed in August 1949. EDVAC was designed to perform a number of instructions of various types. Significantly, the programs written for EDVAC were to be stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, namely the considerable time and effort required to reconfigure the machine for a new task; with von Neumann's design, the program that EDVAC ran could be changed simply by changing the contents of the memory. Early CPUs were custom designs used as part of a larger computer; however, this method of designing custom CPUs for a particular application has largely given way to the development of multi-purpose processors produced in large quantities. This standardization began in the era of discrete transistor mainframes and minicomputers and has accelerated with the popularization of the integrated circuit. The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of digital devices in modern life far beyond the limited application of dedicated computing machines; modern microprocessors appear in electronic devices ranging from automobiles to cellphones. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also utilized a stored-program design using punched paper tape rather than electronic memory. Relays and vacuum tubes were used as switching elements; a useful computer requires thousands or tens of thousands of switching devices, and the overall speed of a system is dependent on the speed of the switches. Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the Harvard Mark I failed very rarely. In the end, tube-based CPUs became dominant because the significant speed advantages they afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs; clock signal frequencies ranging from 100 kHz to 4 MHz were very common at this time. The design complexity of CPUs increased as various technologies facilitated building smaller and more reliable electronic devices.
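The stored-program idea described above, that a program kept in memory can be changed simply by changing memory contents, can be illustrated with a deliberately tiny simulator. The three-instruction machine below is invented for illustration and is not modeled on EDVAC's actual instruction set.

```python
# A minimal stored-program machine: the program lives in the same memory as
# the data, so "rewiring" is replaced by writing different values into memory.

def run(memory):
    acc, pc = 0, 0                           # accumulator and program counter
    while True:
        op, arg = memory[pc], memory[pc + 1]  # fetch the next instruction
        pc += 2
        if op == "LOAD":                      # acc <- memory[arg]
            acc = memory[arg]
        elif op == "ADD":                     # acc <- acc + memory[arg]
            acc += memory[arg]
        elif op == "HALT":                    # stop and return the result
            return acc

# Program and data share one memory: cells 0-5 hold code, cells 6-7 hold data.
memory = ["LOAD", 6, "ADD", 7, "HALT", 0, 40, 2]
print(run(memory))        # 42

memory[2] = "LOAD"        # changing memory contents changes the program itself
print(run(memory))        # 2
```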
13.
Graphics processing unit
–
GPUs are used in embedded systems, mobile phones, personal computers, workstations, and game consoles. In a personal computer, a GPU can be present on a video card or embedded on the motherboard. The term GPU was popularized by Nvidia in 1999, which marketed the GeForce 256 as "the world's first GPU", or graphics processing unit; it was presented as a single-chip processor with integrated transform, lighting and triangle setup/clipping engines. Rival ATI Technologies coined the term visual processing unit, or VPU, with the release of the Radeon 9700 in 2002. Arcade system boards have been using specialized graphics chips since the 1970s. In early video game hardware, the RAM for frame buffers was expensive, so video chips composited data together as the display was being scanned out on the monitor. Fujitsu's MB14241 video shifter was used to accelerate the drawing of sprite graphics for various 1970s arcade games from Taito and Midway, such as Gun Fight and Sea Wolf. The Namco Galaxian arcade system in 1979 used specialized graphics hardware supporting RGB color, multi-colored sprites and tilemap backgrounds. The Galaxian hardware was used during the golden age of arcade video games by game companies such as Namco, Centuri, Gremlin, Irem, Konami, Midway, Nichibutsu and Sega. In the home market, the Atari 2600 in 1977 used a video shifter called the Television Interface Adaptor. In the Atari 8-bit computers, 6502 machine code subroutines could be triggered on scan lines by setting a bit on a display list instruction, and the ANTIC chip also supported smooth vertical and horizontal scrolling independent of the CPU. Such display chips became some of the best known of what were called graphics processing units in the 1980s. The Williams Electronics arcade games Robotron: 2084, Joust and Sinistar contained custom blitter chips. In 1985, the Commodore Amiga featured a custom graphics chipset, with a blitter unit accelerating bitmap manipulation, line drawing, and area fill functions; also included was a coprocessor with its own instruction set, capable of manipulating graphics hardware registers in sync with the video beam. In 1986, Texas Instruments released the TMS34010, the first microprocessor with on-chip graphics capabilities; it could run general-purpose code, but it had a very graphics-oriented instruction set. In 1990–1992, this chip became the basis of the Texas Instruments Graphics Architecture (TIGA) Windows accelerator cards. In 1987, the IBM 8514 graphics system was released as one of the first video cards for IBM PC compatibles to implement fixed-function 2D primitives in electronic hardware. Fujitsu later competed with the FM Towns computer, released in 1989 with support for a full 16,777,216-color palette. In 1988, the first dedicated polygonal 3D graphics boards were introduced in arcades with the Namco System 21 and Taito Air System. In 1991, S3 Graphics introduced the S3 86C911, which its designers named after the Porsche 911 as an implication of the performance increase it promised. The 86C911 spawned a host of imitators, and by 1995 all major PC graphics chip makers had added 2D acceleration support to their chips. By this time, fixed-function Windows accelerators had surpassed expensive general-purpose graphics coprocessors in Windows performance, and throughout the 1990s 2D GUI acceleration continued to evolve. As manufacturing capabilities improved, so did the level of integration of graphics chips. Arcade systems such as the Sega Model 2 and Namco Magic Edge Hornet Simulator in 1993 were capable of hardware T&L years before it appeared in consumer graphics cards.
14.
Secure Digital
–
Secure Digital (SD) is a non-volatile memory card format developed by the SD Card Association (SDA) for use in portable devices. The standard was introduced in August 1999 through the joint efforts of SanDisk, Panasonic and Toshiba as an improvement over MultiMediaCards (MMC), and has become the industry standard. The three companies formed SD-3C, LLC, a company that licenses and enforces intellectual property rights associated with SD memory cards and SD host and ancillary products. The companies also formed the SD Association, a non-profit organization, in January 2000 to promote and create SD Card standards; the SDA today has about 1,000 member companies. The SDA uses several trademarked logos owned and licensed by SD-3C to enforce compliance with its specifications and assure users of compatibility. There are many combinations of form factors and device families. Secure Digital includes four card families available in three different sizes. The four families are the original Standard-Capacity (SDSC), the High-Capacity (SDHC), the eXtended-Capacity (SDXC), and the SDIO, which combines input/output functions with data storage. The three form factors are the original size, the mini size, and the micro size; electrically passive adapters allow a smaller card to fit and function in a device built for a larger card. The SD card's small footprint makes it an ideal storage medium for smaller, thinner devices. The second-generation Secure Digital card was developed to improve on the MultiMediaCard standard, which continued to evolve. Secure Digital changed the MMC design in several ways. Asymmetrical slots in the sides of the SD card prevent inserting it upside down. Most SD cards are 2.1 mm thick, compared to 1.4 mm for MMCs; the SD specification also defines a card called Thin SD with a thickness of 1.4 mm. The card's electrical contacts are recessed beneath the surface of the card, protecting them from contact with a user's fingers. The SD specification envisioned capacities and transfer rates exceeding those of MMC (for a comparison table, see below). While MMC uses a single pin for data transfers, the SD card added a four-wire bus mode for higher data rates. The SD card also added Content Protection for Recordable Media (CPRM) security circuitry for digital rights management content protection. Full-size SD cards do not fit into the slimmer MMC slots, and other issues also affect the ability to use one format in a host device designed for the other. The Secure Digital High Capacity (SDHC) format, announced in January 2006 and defined in version 2.0 of the SD specification, supports cards of larger capacity; the SDHC trademark is licensed to ensure compatibility. SDHC cards are physically and electrically identical to standard-capacity SD cards. Version 2.0 also introduces a High-speed bus mode for both SDSC and SDHC cards, which doubles the original Standard Speed clock to produce 25 MB/s. SDHC host devices are required to accept older SD cards; however, older host devices do not recognize SDHC or SDXC memory cards, although some devices can do so through a firmware upgrade.
15.
Battery (electricity)
–
An electric battery is a device consisting of one or more electrochemical cells with external connections provided to power electrical devices such as flashlights, smartphones, and electric cars. When a battery is supplying power, its positive terminal is the cathode and its negative terminal is the anode. The terminal marked negative is the source of electrons that will flow through an external circuit when one is connected. It is the movement of ions within the battery that allows current to flow out of the battery to perform work. Historically, the term "battery" specifically referred to a device composed of multiple cells; the usage has since evolved to include single-cell devices. Primary batteries are used once and discarded; the electrode materials are irreversibly changed during discharge. Common examples are the alkaline battery used for flashlights and a multitude of portable electronic devices. Secondary batteries can be discharged and recharged multiple times using an applied electric current; examples include the lead-acid batteries used in vehicles and the lithium-ion batteries used for portable electronics such as laptops and smartphones. According to a 2005 estimate, the battery industry generates US$48 billion in sales each year. Batteries have much lower specific energy than common fuels such as gasoline; this is somewhat offset by the higher efficiency of electric motors in producing mechanical work. The use of "battery" to describe a group of electrical devices dates to Benjamin Franklin. Alessandro Volta built and described the first electrochemical battery, the voltaic pile, in 1800; this was a stack of copper and zinc plates, separated by brine-soaked paper disks. Volta did not understand that the voltage was due to chemical reactions. Although early batteries were of value for experimental purposes, in practice their voltages fluctuated. A later design, the Daniell cell, consisted of a pot filled with a copper sulfate solution, in which was immersed an unglazed earthenware container filled with sulfuric acid. These wet cells used liquid electrolytes, which were prone to leakage and spillage if not handled correctly; many used glass jars to hold their components, which made them fragile and potentially dangerous. These characteristics made wet cells unsuitable for portable appliances. Near the end of the nineteenth century, the invention of dry cell batteries, which replaced the liquid electrolyte with a paste, made portable electrical devices practical. Batteries convert chemical energy directly to electrical energy; a battery consists of some number of voltaic cells.
16.
Ampere hour
–
An ampere hour (Ah) is a unit of electric charge equal to the charge transferred by a steady current of one ampere flowing for one hour, or 3,600 coulombs. The commonly seen milliampere hour (mAh) is one-thousandth of an ampere hour. The ampere hour is used in measurements of electrochemical systems such as electroplating. A milliampere second (mAs) is a unit of measure used in X-ray imaging and diagnostic imaging; this quantity is proportional to the total X-ray energy produced by a given X-ray tube operated at a particular voltage, and the same total dose can be delivered in different time periods depending on the X-ray tube current. An ampere hour is not a unit of energy. In a battery system, for example, accurate calculation of the energy delivered requires integration of the power delivered over the discharge interval. Generally, the battery voltage varies during discharge; an average or nominal value may be used to approximate the integration of power. The Faraday constant is the charge on one mole of electrons, approximately 26.8 ampere hours, and is used in electrochemical calculations. An AA-size dry cell has a capacity of about 2 to 3 ampere hours. Automotive batteries vary in capacity, but a large automobile propelled by an internal combustion engine would have about a 50 ampere hour battery. Since one ampere hour can produce 0.336 grams of aluminium from aluminium chloride, producing a ton of aluminium requires the transfer of roughly 2.98 million ampere hours.
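A short sketch of the ampere-hour arithmetic discussed above: converting capacity to charge and approximate energy, and applying Faraday's law to reproduce the aluminium figure quoted in the text. The nominal cell voltages used for the energy estimate are typical assumed values, not figures from the text.

```python
# Ampere-hour arithmetic: charge, approximate energy, and Faraday's law.

COULOMBS_PER_AH = 3600.0        # 1 ampere hour = 3600 coulombs
FARADAY = 96485.0               # coulombs per mole of electrons

def energy_wh(capacity_ah, nominal_volts):
    """Approximate stored energy, assuming a constant nominal voltage
    (real discharge curves vary, as the text notes)."""
    return capacity_ah * nominal_volts

def plated_mass_grams(capacity_ah, molar_mass, electrons_per_ion):
    """Faraday's law: mass of metal deposited by a given charge."""
    moles_of_electrons = capacity_ah * COULOMBS_PER_AH / FARADAY
    return moles_of_electrons / electrons_per_ion * molar_mass

print(energy_wh(2.5, 1.5))              # AA cell: roughly 3.75 Wh
print(energy_wh(50, 12))                # 50 Ah car battery at 12 V: ~600 Wh
print(plated_mass_grams(1, 26.98, 3))   # ~0.336 g of aluminium per ampere hour
```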
17.
Lithium-ion battery
–
A lithium-ion battery or Li-ion battery is a type of rechargeable battery in which lithium ions move from the negative electrode to the positive electrode during discharge and back when charging. Li-ion batteries use an intercalated lithium compound as one electrode material. The electrolyte, which allows for ionic movement, and the two electrodes are the constituent components of a lithium-ion battery cell. Lithium-ion batteries are common in home electronics and are one of the most popular types of rechargeable battery for portable electronics, with a high energy density, negligible memory effect and low self-discharge. Beyond consumer electronics, LIBs are also growing in popularity for military and battery electric vehicle applications; for example, lithium-ion batteries are becoming a common replacement for the lead–acid batteries that have been used historically for golf carts and utility vehicles. Chemistry, performance, cost and safety characteristics vary across LIB types. Handheld electronics mostly use LIBs based on lithium cobalt oxide, which offers high energy density but presents safety risks, especially when damaged. Lithium iron phosphate, lithium ion manganese oxide and lithium nickel manganese cobalt oxide (NMC) offer lower energy density; such batteries are widely used for electric tools, medical equipment and other roles, and NMC in particular is a contender for automotive applications. Lithium nickel cobalt aluminum oxide and lithium titanate are specialty designs aimed at particular niche roles, while the newer lithium–sulfur batteries promise the highest performance-to-weight ratio. Lithium-ion batteries can pose unique safety hazards since they contain a flammable electrolyte; as one expert notes, "If a battery cell is charged too quickly, it can cause a short circuit, leading to explosions and fires." Because of these risks, testing standards are more stringent than those for acid-electrolyte batteries, and there have been battery-related recalls by some companies, including the 2016 Samsung Galaxy Note 7 recall for battery fires. This entry covers only the main topics and general principles of lithium-ion batteries; nearly all facets are affected by currently very active development and research. A cell is a basic electrochemical unit that contains the basic components, such as electrodes, separator, and electrolyte; in the case of lithium-ion cells, this is the single cylindrical, prismatic or pouch unit. In this regard, the simplest battery is a single cell, with perhaps a small electronic circuit for protection. In many cases, distinguishing between cell and battery is not important; however, the distinction should be made when dealing with specific applications, for example battery electric vehicles, where "battery" may indicate a high-voltage system of 400 V and not a single cell. The term "module" is used as an intermediate topology, with the understanding that a battery pack is made of modules. In electrochemistry, the anode is the electrode where oxidation takes place in the battery, i.e. where electrons are set free; however, oxidation happens on opposite electrodes during charge versus discharge. Historically, lithium cells started as single-use discharge cells, for which the negative electrode is the anode. This is true on discharge, but in a rechargeable system the negative electrode chemically becomes the cathode while charging.
18.
Pixel density
–
Pixels per inch (PPI) is a measure of pixel density. Horizontal and vertical density are usually the same, as most devices have square pixels, but they differ on devices that have non-square pixels. PPI can also describe the resolution, in pixels, of an image file. The unit is not square centimeters: a 100×100 pixel image printed in a 1 cm square has a resolution of 100 pixels per centimeter. Used this way, the measurement is meaningful when printing an image. It has become commonplace to refer to PPI as DPI, even though PPI refers to input resolution. Industry-standard, good-quality photographs usually require 300 pixels per inch, at 100% size, when printed onto coated paper stock using a 150 lpi printing screen; this delivers a quality factor of 2, which is optimum. The lowest acceptable quality factor is considered to be 1.5, which equates to printing a 225 ppi image using a 150 lpi screen onto coated paper. Screen frequency is determined by the type of paper the image is printed on: an absorbent paper surface, uncoated recycled paper for instance, lets ink droplets spread, and so requires a more open printing screen. Input resolution can therefore be reduced to minimize file size without loss in quality, as long as the quality factor of 2 is maintained. This is easily determined by doubling the line frequency; for example, printing on an uncoated paper stock often limits the printing screen frequency to no more than 120 lpi, so a quality factor of 2 is achieved with images of 240 ppi. The PPI of a computer display is related to the size of the display in inches and the total number of pixels. This measurement is often referred to as dots per inch, though that term more accurately refers to the resolution of a computer printer. The figure is determined by dividing the width of the display area in pixels by the width of the display area in inches. It is possible for a display to have different horizontal and vertical PPI measurements, and the dot pitch of a computer display determines the absolute limit of possible pixel density. In January 2008, Kopin Corporation announced a 0.44-inch SVGA LCD with a density of 2272 PPI; in 2011 the company followed this up with a 3760 DPI, 0.21-inch diagonal VGA colour display. The manufacturer says it designed the LCD to be optically magnified, as in high-resolution eyewear devices. Holography applications demand even greater density, as higher pixel density produces a larger image size; spatial light modulators can reduce pixel pitch to 2.5 μm. Some observations indicate that the unaided human eye generally cannot differentiate detail beyond 300 PPI; however, this figure depends both on the distance between viewer and image and on the viewer's visual acuity. The human eye also responds in a different way to a bright, evenly lit display. High pixel density display technologies would make supersampled antialiasing obsolete, enable true WYSIWYG graphics and potentially enable a practical "paperless office" era. For perspective, such a device at a 15-inch screen size would have to display more than four Full HD screens' worth of pixels.
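The calculations described above can be sketched as follows: the pixel density of a display computed from its resolution and diagonal size, and the print quality factor as pixels per inch divided by the printing screen frequency. The example display size is illustrative.

```python
import math

# Pixel density and print quality-factor arithmetic from the text above.

def ppi(width_px, height_px, diagonal_inches):
    """Pixels per inch along the display diagonal."""
    diagonal_px = math.hypot(width_px, height_px)
    return diagonal_px / diagonal_inches

def quality_factor(image_ppi, screen_lpi):
    """Image ppi divided by printing screen lpi; 2.0 is cited as optimum,
    1.5 as the lowest acceptable."""
    return image_ppi / screen_lpi

print(round(ppi(1920, 1080, 5.0)))    # ~441 PPI for a 5-inch Full HD panel
print(quality_factor(300, 150))       # 2.0, optimum on coated paper
print(quality_factor(225, 150))       # 1.5, lowest acceptable per the text
print(quality_factor(240, 120))       # 2.0 again for uncoated stock at 120 lpi
```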
19.
AMOLED
–
AMOLED (active-matrix organic light-emitting diode) is a display technology used in smartwatches, mobile devices, laptops, and televisions. As of 2008, AMOLED technology was used in mobile phones, media players and digital cameras. TFT backplane technology is crucial in the fabrication of AMOLED displays. Red and green OLED films have longer lifespans compared to blue OLED films; this variation results in colour shifts as a particular pixel fades faster than the other pixels. AMOLED displays are prone to screen burn-in, which leaves a permanent imprint of overused colours from frequently displayed images. Manufacturers have developed in-cell touch panels, integrating the production of capacitive sensor arrays into the AMOLED module fabrication process. In-cell sensor AMOLED fabricators include AU Optronics and Samsung; Samsung has marketed its version of this technology as Super AMOLED. Using custom modeling and analytic approaches, Samsung has developed short- and long-range film-thickness control. AMOLED displays provide higher refresh rates than passive-matrix OLEDs, often reducing the response time to less than a millisecond, and they consume significantly less power. This advantage makes active-matrix OLEDs well suited for portable electronics, where power consumption is critical to battery life. The amount of power the display consumes varies significantly depending on the colour and brightness shown. Because black pixels turn completely off, AMOLED also has contrast ratios that are higher than LCD. AMOLED displays may be difficult to view in direct sunlight compared with LCDs because of their lower maximum brightness; Samsung's Super AMOLED technology addresses this issue by reducing the size of the gaps between layers of the screen. Flagship smartphones sold as of December 2011 used either Super AMOLED or IPS panel premium LCD displays. Super AMOLED displays, such as those on the Galaxy Nexus and Samsung Galaxy S III, have often been compared to the IPS panel premium LCDs found in the iPhone 4S and HTC One X. For example, according to ABI Research the AMOLED display found in the Motorola Moto X draws just 92 mA during bright conditions and 68 mA while dim. On the other hand, compared with IPS LCDs, the production yield rate of AMOLED is low. Super AMOLED is a term for an AMOLED display with an integrated digitizer. According to Samsung, Super AMOLED reflects one-fifth as much sunlight as the first-generation AMOLED; the display technology itself is not changed. Super AMOLED is part of the PenTile matrix family, sometimes abbreviated as SAMOLED. Super AMOLED Advanced features PenTile, which sharpens subpixels in between pixels to make a higher-resolution display, but by doing this some picture quality is lost; this display type is used on the Motorola Droid RAZR and HTC One S. This variant of AMOLED is brighter and therefore more efficient than Super AMOLED displays and produces a sharper image. In comparison to AMOLED and Super AMOLED displays, they are more energy efficient.
20.
Camera phone
–
A camera phone is a mobile phone which is able to capture photographs; most camera phones also record video. Most camera phones are simpler than separate digital cameras: their usual fixed-focus lenses and smaller sensors limit their performance in poor lighting, and, lacking a physical shutter, some have a long shutter lag. Photoflash is typically provided by an LED source, which illuminates less intensely over a longer exposure time than a bright, near-instantaneous xenon flash. Optical zoom and tripod screws are rare, and none has a hot shoe for attaching an external flash; some also lack a USB connection or a removable memory card. Most have Bluetooth and Wi-Fi and can make geotagged photographs. Some of the more expensive camera phones have only a few of these technical disadvantages, and with bigger image sensors their capabilities approach those of low-end point-and-shoot cameras. In the smartphone era, the rising sales of camera phones caused point-and-shoot camera sales to peak around 2010. Most model lines improve their cameras every year or two. Most smartphones only have a menu choice to start a camera application program and an on-screen button to activate the shutter; some also have a separate camera button, for quickness and convenience. The principal advantages of camera phones are cost and compactness; indeed, for a user who carries a mobile phone anyway, the addition is negligible. Smartphones that are camera phones may run mobile applications to add capabilities such as geotagging and image stitching. However, the touchscreen, being a general-purpose control, lacks the agility of a separate camera's dedicated buttons. Nearly all camera phones use CMOS image sensors, due to their reduced power consumption compared to CCD-type sensors, which are also used in some models. Some camera phones even use the more expensive back-side illuminated CMOS sensors, which use less energy than conventional CMOS, although they cost more than CMOS and CCD. The latest generation of camera phones also applies corrections for distortion and vignetting. Most camera phones have a digital zoom feature. An external camera can be added, coupled wirelessly to the phone by Wi-Fi; such cameras are compatible with most smartphones. Images are usually saved in the JPEG file format, except on some high-end camera phones that also offer RAW capture, a facility for which Android 5.0 Lollipop provides support. Windows Phones can be configured to operate as a camera even if the phone is asleep. An external flash can be employed to improve performance. Phones usually store pictures and video in a directory called /DCIM in the internal memory.
21.
High-definition video
–
High-definition video is video of higher resolution and quality than standard-definition. While there is no standardized meaning for high-definition, generally any video image with more than 480 horizontal scan lines (North America) or 576 horizontal lines (Europe) is considered high-definition; 480 scan lines is generally the minimum, even though the majority of systems greatly exceed that. Images of standard resolution captured at rates faster than normal by a high-speed camera may be considered high-definition in some contexts. Some television series shot on high-definition video are made to look as if they have been shot on film. The first electronic scanning format, 405 lines, was the first "high definition" television system. From 1939, Europe and the US tried 605 and 441 lines until, in 1941, the FCC mandated 525 for the US. In wartime France, René Barthélemy tested higher resolutions, up to 1,042 lines; in late 1949, official French transmissions finally began with 819 lines. In 1984, however, this standard was abandoned for 625-line color on the TF1 network. Modern HD specifications date to the early 1980s, when Japanese engineers developed the HighVision 1,125-line interlaced TV standard that ran at 60 frames per second; the Sony HDVS system was presented at a meeting of television engineers in Algiers in April 1981. HighVision video is still usable for HDTV video interchange, but there is almost no modern equipment available to perform this function. Attempts at implementing HighVision as a 6 MHz broadcast channel were mostly unsuccessful, and all attempts at using this format for terrestrial TV transmission were abandoned by the mid-1990s. Europe developed HD-MAC, a member of the MAC family of hybrid analogue/digital video standards; however, it never took off as a terrestrial video transmission format, and was never designated for video interchange except by the European Broadcasting Union. In essence, the end of the 1980s was a death knell for most analog high definition technologies that had developed up to that time. In the end, the digital DVB and ATSC standards prevailed: the FCC officially adopted the ATSC transmission standard in 1996, with the first broadcasts on October 28, 1998, and in the early 2000s it looked as if DVB would be the video standard far into the future. High-definition video is defined by three parameters: the number of lines in the vertical display resolution, the scanning system, and the number of frames or fields per second. High-definition television resolution is 1,080 or 720 lines; in contrast, regular digital television is 480 lines or 576 lines. However, since HD is broadcast digitally, its introduction sometimes coincides with the introduction of DTV; additionally, current DVD quality is not high-definition, although the high-definition disc systems Blu-ray Disc and HD DVD are. The scanning system is either progressive scanning (p) or interlaced scanning (i): progressive scanning redraws the entire image frame when refreshing each image, for example 720p or 1080p, while interlaced scanning yields greater image resolution if the subject is not moving. The third parameter is the number of frames or fields per second.
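A tiny helper restating the definition above: video with more active lines than the 480/576 standard-definition formats counts as high definition, and the scan type is written as a trailing "p" or "i". The 576-line threshold is used here for simplicity; in North America anything above 480 lines already qualifies.

```python
# Classify a video format as SD or HD using the line counts quoted above.

def describe(lines, progressive):
    scan = "p" if progressive else "i"
    tier = "high definition" if lines > 576 else "standard definition"
    return f"{lines}{scan}: {tier}"

for lines, progressive in [(480, False), (576, True), (720, True), (1080, False)]:
    print(describe(lines, progressive))
```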
22.
Wi-Fi
–
Wi-Fi or WiFi is a technology for wireless local area networking with devices based on the IEEE 802.11 standards. Wi-Fi is a trademark of the Wi-Fi Alliance, which restricts the use of the term Wi-Fi Certified to products that successfully complete interoperability certification testing. Devices that can use Wi-Fi technology include personal computers, video-game consoles, smartphones, digital cameras, tablet computers and digital audio players. Wi-Fi compatible devices can connect to the Internet via a WLAN and a wireless access point. Such an access point has a range of about 20 meters indoors; hotspot coverage can be as small as a single room with walls that block radio waves, or as large as many square kilometres achieved by using multiple overlapping access points. Wi-Fi most commonly uses the 2.4 gigahertz UHF and 5 gigahertz SHF ISM radio bands. Having no physical connections, it is more vulnerable to attack than wired connections such as Ethernet. In 1971, ALOHAnet connected the Hawaiian Islands with a UHF wireless packet network; ALOHAnet and the ALOHA protocol were early forerunners to Ethernet, and later the IEEE 802.11 protocols, respectively. A 1985 ruling by the U.S. Federal Communications Commission released the ISM band for unlicensed use; these frequency bands are the same ones used by equipment such as microwave ovens and are subject to interference. In 1991, NCR Corporation with AT&T Corporation invented the precursor to 802.11; the first wireless products were sold under the name WaveLAN. They have been credited with inventing Wi-Fi. In 1992 and 1996, CSIRO obtained patents for a method later used in Wi-Fi to unsmear the signal. The first version of the 802.11 protocol was released in 1997; this was updated in 1999 with 802.11b to permit 11 Mbit/s link speeds, and this proved to be popular. In 1999, the Wi-Fi Alliance was formed as a trade association to hold the Wi-Fi trademark under which most products are sold. Wi-Fi uses a number of patents held by many different organizations. In April 2009, 14 technology companies agreed to pay CSIRO $1 billion for infringements of CSIRO patents; this led to Australia labeling Wi-Fi as an Australian invention, though this has been the subject of some controversy. In 2016, the wireless local area network Test Bed was chosen as Australia's contribution to the exhibition A History of the World in 100 Objects held in the National Museum of Australia. The name Wi-Fi, commercially used at least as early as August 1999, was coined by the brand-consulting firm Interbrand; the Wi-Fi Alliance had hired Interbrand to create a name that was a little catchier than "IEEE 802.11b Direct Sequence". Phil Belanger, a member of the Wi-Fi Alliance who presided over the selection of the name Wi-Fi, has stated that Interbrand invented Wi-Fi as a pun upon the word hi-fi. Interbrand also created the Wi-Fi logo; the yin-yang Wi-Fi logo indicates the certification of a product for interoperability.
23.
IEEE 802.11
–
They are created and maintained by the Institute of Electrical and Electronics Engineers LAN/MAN Standards Committee. The base version of the standard was released in 1997 and has had subsequent amendments; the standard and amendments provide the basis for wireless network products using the Wi-Fi brand. As a result, in the marketplace, each revision tends to become its own standard. The 802.11 family consists of a series of half-duplex over-the-air modulation techniques that use the same basic protocol. 802.11-1997 was the first wireless networking standard in the family, but 802.11b was the first widely accepted one, followed by 802.11a, 802.11g, 802.11n, and 802.11ac. Other standards in the family are service amendments that are used to extend the current scope of the existing standard, which may also include corrections to a previous specification. 802.11b and 802.11g use the 2.4 GHz ISM band, operating in the United States under Part 15 of the U.S. Federal Communications Commission Rules and Regulations. Because of this choice of band, 802.11b and g equipment may occasionally suffer interference from microwave ovens and cordless telephones. Better or worse performance with higher or lower frequencies may be realized, depending on the environment. 802.11n can use either the 2.4 GHz or the 5 GHz band; 802.11ac uses only the 5 GHz band. The segment of the radio frequency spectrum used by 802.11 varies between countries. In the US, 802.11a and 802.11g devices may be operated without a license, as allowed in Part 15 of the FCC Rules and Regulations. Frequencies used by channels one through six of 802.11b and 802.11g fall within the 2.4 GHz amateur radio band. Licensed amateur radio operators may operate 802.11b/g devices under Part 97 of the FCC Rules and Regulations, allowing increased power output but not commercial content or encryption. 802.11 technology has its origins in a 1985 ruling by the U.S. Federal Communications Commission that released the ISM band for unlicensed use. In 1991, NCR Corporation/AT&T invented a precursor to 802.11 in Nieuwegein, the Netherlands. The inventors initially intended to use the technology for cashier systems; the first wireless products were brought to the market under the name WaveLAN with raw data rates of 1 Mbit/s and 2 Mbit/s. Vic Hayes, who held the chair of IEEE 802.11 for 10 years, has been called the "father of Wi-Fi". In 1999, the Wi-Fi Alliance was formed as a trade association to hold the Wi-Fi trademark under which most products are sold. The original version of the standard, IEEE 802.11, was released in 1997 and clarified in 1999; it specified two net bit rates of 1 or 2 megabits per second, plus forward error correction code, over infrared, frequency-hopping spread spectrum and direct-sequence spread spectrum physical layers. The latter two radio technologies used microwave transmission over the Industrial Scientific Medical frequency band at 2.4 GHz. Some earlier WLAN technologies used lower frequencies, such as the U.S. 900 MHz ISM band.
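The 2.4 GHz channel plan referred to above follows a simple rule: channels 1 to 13 are spaced 5 MHz apart starting at 2412 MHz, with the Japan-only channel 14 offset at 2484 MHz. A minimal sketch of the arithmetic (the function name is illustrative only):

```python
def channel_center_mhz(channel):
    """Centre frequency of a 2.4 GHz 802.11 channel, in MHz."""
    if channel == 14:          # Japan-only channel, offset from the rest
        return 2484
    if 1 <= channel <= 13:
        return 2412 + 5 * (channel - 1)
    raise ValueError("2.4 GHz channels run from 1 to 14")

print([channel_center_mhz(c) for c in (1, 6, 11, 14)])
# [2412, 2437, 2462, 2484]
```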
24.
IEEE 802.11b-1999
–
IEEE 802.11b-1999, or 802.11b, is an amendment to the IEEE 802.11 wireless networking specification that extends throughput up to 11 Mbit/s using the same 2.4 GHz band. A related amendment was incorporated into the IEEE 802.11-2007 standard. 802.11 is a set of IEEE standards that govern wireless networking transmission methods. They are commonly used today in their 802.11a, 802.11b, 802.11g, 802.11n and 802.11ac versions to provide wireless connectivity in the home and office. 802.11b has a raw data rate of 11 Mbit/s. Due to the CSMA/CA protocol overhead, in practice the maximum 802.11b throughput that an application can achieve is about 5.9 Mbit/s using TCP and 7.1 Mbit/s using UDP. 802.11b products appeared on the market in mid-1999, since 802.11b is a direct extension of the DSSS modulation technique defined in the original standard. The Apple iBook was the first mainstream computer sold with optional 802.11b networking. Technically, the 802.11b standard uses Complementary Code Keying as its modulation technique. The dramatic increase in throughput of 802.11b, along with simultaneous substantial price reductions, led to the acceptance of 802.11b as the definitive wireless LAN technology. 802.11b devices suffer interference from other products operating in the 2.4 GHz band; devices operating in the 2.4 GHz range include microwave ovens, Bluetooth devices, baby monitors and cordless telephones. Interference issues and user density problems within the 2.4 GHz band have become a major concern. 802.11b is used in a point-to-multipoint configuration, wherein an access point communicates via an omnidirectional antenna with mobile clients within the range of the access point. Typical range depends on the radio environment, output power and receiver sensitivity. Allowable bandwidth is shared across clients in discrete channels. A directional antenna focuses output power into a smaller field, which increases point-to-point range; designers of such installations who wish to remain within the law must, however, be careful about legal limitations on effective radiated power. Some 802.11b cards operate at 11 Mbit/s but scale back to 5.5, then 2, then 1 Mbit/s if signal quality becomes an issue. Note: Channel 14 is only allowed in Japan; channels 12 and 13 are allowed in most parts of the world. More information can be found in the List of WLAN channels. See also: IEEE 802.11, IEEE 802.11g-2003, Wi-Fi, List of WLAN channels.
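The gap between the 11 Mbit/s signalling rate and the roughly 5.9 Mbit/s of usable TCP throughput is easiest to see as transfer times. The sketch below is illustrative only; the 100 MB file size is an assumption.

```python
def transfer_seconds(size_mb, throughput_mbit_s):
    """Time to move a file of size_mb megabytes at the given throughput."""
    return size_mb * 8 / throughput_mbit_s

size = 100  # hypothetical 100 MB file
for label, rate in [("raw 11 Mbit/s", 11.0),
                    ("TCP ~5.9 Mbit/s", 5.9),
                    ("UDP ~7.1 Mbit/s", 7.1)]:
    print(f"{label:16s}: {transfer_seconds(size, rate):6.1f} s")
# CSMA/CA overhead roughly halves the effective TCP rate compared with
# the nominal signalling rate.
```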
25.
IEEE 802.11g-2003
–
IEEE 802.11g-2003, or 802.11g, is an amendment to the IEEE 802.11 specification that extended throughput to up to 54 Mbit/s using the same 2.4 GHz band as 802.11b. This specification, under the name Wi-Fi, has been implemented all over the world. The 802.11g protocol is now Clause 19 of the published IEEE 802.11-2007 standard. 802.11 is a set of IEEE standards that govern wireless networking transmission methods. They are commonly used today in their 802.11a, 802.11b, 802.11g, 802.11n and 802.11ac versions to provide wireless connectivity in the home and office. 802.11g is the third modulation standard for wireless LANs. It works in the 2.4 GHz band but operates at a maximum raw data rate of 54 Mbit/s. Using the CSMA/CA transmission scheme, 31.4 Mbit/s is the maximum net throughput possible for packets of 1500 bytes in size and a 54 Mbit/s wireless rate. In practice, access points may not have an ideal implementation and may not reach even that throughput. 1500 bytes is the usual limit for packets on the Internet and therefore a relevant size to benchmark against; smaller packets give even lower theoretical throughput, down to 3 Mbit/s using a 54 Mbit/s rate and 64-byte packets. 802.11g hardware is fully backwards compatible with 802.11b hardware; details of making b and g work well together occupied much of the technical process. In an 802.11g network, however, the presence of a legacy 802.11b participant will significantly reduce the speed of the overall 802.11g network. Some 802.11g routers employ a back-compatible mode for 802.11b clients called 54g LRS. Even though 802.11g operates in the same frequency band as 802.11b, it can achieve higher data rates because of its heritage from 802.11a. Of the 52 OFDM subcarriers, 48 are for data and 4 are pilot subcarriers, with a subcarrier separation of 0.3125 MHz. Each of these subcarriers can be modulated with BPSK, QPSK, 16-QAM or 64-QAM. The total bandwidth is 20 MHz with an occupied bandwidth of 16.6 MHz. Symbol duration is 4 microseconds, which includes a guard interval of 0.8 microseconds. The actual generation and decoding of orthogonal components is done in baseband using DSP, which is then upconverted to 2.4 GHz at the transmitter. Each of the subcarriers can be represented as a complex number. The time-domain signal is generated by taking an inverse fast Fourier transform; correspondingly, the receiver downconverts, samples at 20 MHz and performs an FFT to retrieve the original coefficients. The advantages of using OFDM include reduced multipath effects in reception. The then-proposed 802.11g standard was rapidly adopted by consumers starting in January 2003, well before ratification, due to the desire for higher speeds and reductions in manufacturing costs.
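The 54 Mbit/s headline rate follows directly from the OFDM parameters given above: 48 data subcarriers, 64-QAM carrying 6 coded bits per subcarrier, a 4 microsecond symbol, and a rate-3/4 convolutional code (the coding rate is not stated above but is part of the standard's top-speed mode). A sketch of the arithmetic:

```python
def ofdm_rate_mbps(data_subcarriers, bits_per_subcarrier, code_rate,
                   symbol_duration_us):
    """Net OFDM data rate in Mbit/s from the per-symbol parameters."""
    data_bits_per_symbol = data_subcarriers * bits_per_subcarrier * code_rate
    return data_bits_per_symbol / symbol_duration_us  # bits/us == Mbit/s

# 802.11g top rate: 48 data subcarriers, 64-QAM (6 bits), rate-3/4 code
print(ofdm_rate_mbps(48, 6, 3 / 4, 4.0))   # 54.0
# Lowest OFDM rate: BPSK (1 bit per subcarrier) with a rate-1/2 code
print(ofdm_rate_mbps(48, 1, 1 / 2, 4.0))   # 6.0
```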
26.
Hotspot (Wi-Fi)
–
A hotspot is a physical location where people may obtain Internet access, typically using Wi-Fi technology, via a wireless local area network using a router connected to an Internet service provider. Public hotspots may be found in a number of businesses for use by customers in developed urban areas throughout the world. Many hotels offer Wi-Fi access to guests, either in guest rooms or in the lobby. Hotspots differ from wireless access points, which are the hardware devices used to provide a wireless network service. Private hotspots allow Internet access to a device via another device which may itself have access via, say, a mobile data connection. Public access wireless local area networks were first proposed by Henrik Sjödin at the NetWorld+Interop conference in the Moscone Center in San Francisco in August 1993; Sjödin did not use the term hotspot but referred to publicly accessible wireless LANs. The first commercial venture to attempt to create a public local area access network was a firm founded in Richardson, Texas. The founders of the venture were Mark Goode and Greg Jackson, and the firm was one of the first to sign such public access locations as Starbucks, American Airlines, and Hilton Hotels. The company was sold to Deutsche Telekom in 2001, which converted the name of the firm into T-Mobile Hotspot. It was then that the term hotspot entered the popular vernacular as a reference to a location where a publicly accessible wireless LAN is available. ABI Research reported there were a total of 4.9 million global Wi-Fi hotspots in 2012 and projected that the number would surpass 6.3 million by the end of 2013. The latest Wireless Broadband Alliance industry report outlines a growth scenario for the Wi-Fi market; collectively, WBA operator members serve more than 1 billion subscribers. The public can use a laptop or other suitable portable device to access the wireless connection provided. Of the estimated 150 million laptops, 14 million PDAs, and other emerging Wi-Fi devices sold per year for the last few years, most include the Wi-Fi feature. For venues that have broadband Internet access, offering wireless access is as simple as configuring one access point in conjunction with a router; a single wireless router combining these functions may suffice. More than 10,900 hotspots are on trains, planes and in airports. The region with the largest number of public hotspots is Europe, followed by North America and Asia. Security is a concern in connection with hotspots. There are three possible attack vectors. First, there is the wireless connection between the client and the access point, which needs to be encrypted so that the connection cannot be eavesdropped on or attacked by a man-in-the-middle attack. Second, there is the hotspot itself.
27.
Digital Living Network Alliance
–
DLNA works with cable, satellite, and telecom service providers to provide link protection on each end of the data transfer. The extra layer of digital rights management security allows broadcast operators to enable consumers to share their content on multimedia devices without the risk of piracy. As of June 2015, the organization claimed a membership of more than 200 companies. The group published its first set of guidelines in June 2004. In March 2014, DLNA publicly released the VidiPath Guidelines, originally called DLNA CVP-2 Guidelines. As of October 2015, over 25,000 different device models had obtained DLNA Certified status, and it was estimated that by 2017 over 6 billion DLNA-certified devices, from digital cameras to game consoles and TVs, would be installed in users' homes. On January 5, 2017, DLNA announced on its web site that the organization had fulfilled its mission and that its certification program would be conducted by SpireSpark International of Portland, Oregon. Sony established the DLNA in June 2003 as the Digital Home Working Group, changing its name 12 months later with the publication of the Home Networked Device Interoperability Guidelines v1.0. The DLNA Certified device classes are separated as follows. Digital Media Server (DMS): store content and make it available to networked digital media players; examples include PCs and network-attached storage devices. Digital Media Player (DMP): find content on digital media servers and provide playback; examples include TVs, stereos and home theaters, wireless monitors and game consoles. Digital Media Renderer (DMR): play content as instructed by a digital media controller; examples include TVs, audio/video receivers, video displays and remote speakers for music. It is possible for a device to function both as a DMR and a DMP. Digital Media Controller (DMC): find content on digital media servers; content doesn't stream from or through the DMC; examples include tablet computers, Wi-Fi enabled digital cameras and smartphones. Generally, digital media players and digital media controllers with print capability can print to a Digital Media Printer (DMPr); examples of such source devices include mobile phones and portable music players. Mobile Digital Media Player (M-DMP): find and play content on a digital media server or mobile digital media server; examples include mobile phones and mobile media tablets designed for viewing multimedia content. Mobile Digital Media Uploader (M-DMU): send content to a digital media server or mobile digital media server; examples include digital cameras and mobile phones. Mobile Digital Media Downloader (M-DMD): find and store content from a digital media server or mobile digital media server; examples include portable music players and mobile phones. Mobile Digital Media Controller (M-DMC): find content on a digital media server or mobile digital media server and send it to digital media renderers; examples include personal digital assistants and mobile phones. Mobile Network Connectivity Function (M-NCF): provide a bridge between mobile handheld device network connectivity and home network connectivity. Media Interoperability Unit (MIU): provide content transformation between required media formats for the home network and mobile handheld devices. The specification uses DTCP-IP as link protection for copyright-protected commercial content passed from one device to another.
28.
Global Positioning System
–
The Global Positioning System (GPS) is a space-based radionavigation system owned by the United States government and operated by the United States Air Force. GPS operates independently of any telephonic or internet reception. The system provides critical positioning capabilities to military, civil, and commercial users around the world. The United States government created the system and maintains it; however, the US government can selectively deny access to the system, as happened to the Indian military in 1999 during the Kargil War. The U.S. Department of Defense developed the system, and it became fully operational in 1995. Roger L. Easton of the Naval Research Laboratory, Ivan A. Getting of The Aerospace Corporation, and Bradford Parkinson of the Applied Physics Laboratory are credited with inventing it. Announcements from Vice President Al Gore and the White House in 1998 initiated efforts to improve the system, and in 2000 the U.S. Congress authorized the modernization effort, GPS III. In addition to GPS, other systems are in use or under development, mainly because of the potential for a denial of access by the US government. The Russian Global Navigation Satellite System (GLONASS) was developed contemporaneously with GPS; GLONASS reception can be added to GPS devices, making more satellites available and enabling positions to be fixed more quickly and accurately, to within two meters. There are also the European Union's Galileo positioning system and China's BeiDou Navigation Satellite System. Special and general relativity predict that the clocks on the GPS satellites would be seen by Earth-based observers to run 38 microseconds faster per day than clocks on the Earth. Uncorrected, the GPS calculated positions would quickly drift into error, accumulating to about 10 kilometers per day; the relativistic effect of the GPS clocks running faster than clocks on Earth was therefore corrected for in the design of GPS. The Soviet Union launched the first man-made satellite, Sputnik 1, in 1957. Two American physicists, William Guier and George Weiffenbach, at Johns Hopkins's Applied Physics Laboratory, decided to monitor Sputnik's radio transmissions. Within hours they realized that, because of the Doppler effect, they could pinpoint where the satellite was along its orbit. The Director of the APL gave them access to their UNIVAC to do the heavy calculations required. The next spring, Frank McClure, the deputy director of the APL, asked Guier and Weiffenbach to investigate the inverse problem: pinpointing the user's location given the satellite's. This led them and APL to develop the TRANSIT system; in 1959, ARPA also played a role in TRANSIT. The first satellite navigation system, TRANSIT, used by the United States Navy, was first successfully tested in 1960. It used a constellation of five satellites and could provide a navigational fix approximately once per hour. In 1967, the U.S. Navy developed the Timation satellite, which proved the feasibility of placing accurate clocks in space, a technology required by GPS. In the 1970s, the ground-based OMEGA navigation system, based on phase comparison of signal transmission from pairs of stations, became the first worldwide radio navigation system. Limitations of these systems drove the need for a more universal navigation solution with greater accuracy. During the Cold War arms race, the nuclear threat to the existence of the United States was the one need that did justify this cost in the view of the United States Congress. This deterrent effect is why GPS was funded; it is also the reason for the ultra-secrecy at that time.
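The quoted error growth can be checked with one line of arithmetic: an uncorrected clock offset of about 38 microseconds per day, multiplied by the speed of light, corresponds to roughly 11 km of ranging error per day, in line with the figure of about 10 km/day given above.

```python
C = 299_792_458            # speed of light, m/s
drift_us_per_day = 38      # net relativistic clock drift quoted above

range_error_km_per_day = drift_us_per_day * 1e-6 * C / 1000
print(f"~{range_error_km_per_day:.1f} km of pseudorange error per day")
# ~11.4 km of pseudorange error per day
```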
29.
GLONASS
–
GLONASS, or Global Navigation Satellite System, is a space-based satellite navigation system operating in the radionavigation-satellite service and used by the Russian Aerospace Defence Forces. It provides an alternative to GPS and is the second navigational system in operation with global coverage and comparable precision. Smartphones generally tend to use the same chipsets, and the versions used since 2015 receive GLONASS signals and positioning information along with GPS. Since 2012, GLONASS has been the second most used positioning system in mobile phones after GPS. The system has the advantage that smartphone users receive more accurate positioning, identifying location to within 2 meters. Development of GLONASS began in the Soviet Union in 1976. Beginning on 12 October 1982, numerous rocket launches added satellites to the system until the constellation was completed in 1995. After a decline in capacity during the late 1990s, restoration of the system was made a priority in 2001, under Vladimir Putin's presidency. GLONASS is the most expensive program of the Russian Federal Space Agency, consuming a third of its budget in 2010. By 2010, GLONASS had achieved 100% coverage of Russia's territory, and in October 2011 the full orbital constellation was restored, enabling full global coverage. The GLONASS satellite designs have undergone several upgrades, with the latest version being GLONASS-K. GLONASS is a global satellite navigation system, providing real-time position and velocity determination for military and civilian users. The satellites are located in middle circular orbit at 19,100 kilometres altitude with a 64.8 degree inclination. The GLONASS orbit makes it especially suited for usage in high latitudes, where getting a GPS signal can be problematic. The constellation operates in three orbital planes, with eight evenly spaced satellites in each. A fully operational constellation with global coverage consists of 24 satellites; to get a position fix the receiver must be in the range of at least four satellites. GLONASS satellites transmit two types of signal: an open standard-precision signal, L1OF/L2OF, and an obfuscated high-precision signal, L1SF/L2SF. The signals use similar DSSS encoding and binary phase-shift keying modulation as GPS signals, but each satellite transmits on its own frequency, using frequency-division multiple access (FDMA) spanning either side of 1602.0 MHz, known as the L1 band. The center frequency is 1602 MHz + n × 0.5625 MHz, where n is the satellite's frequency channel number. Signals are transmitted in a 38° cone, using right-hand circular polarization, at an EIRP between 25 and 27 dBW. The L2 band signals use the same FDMA as the L1 band signals, but transmit straddling 1246 MHz, with a center frequency of 1246 MHz + n × 0.4375 MHz. The pseudo-random code is generated with a 9-stage shift register operating with a period of 1 ms. The navigational message is modulated at 50 bits per second. The superframe of the open signal is 7,500 bits long and consists of 5 frames of 30 seconds, taking 150 seconds to transmit the continuous message. Each frame is 1,500 bits long and consists of 15 strings of 100 bits, with 85 bits for data and check-sum bits and 15 bits for the time mark. Strings 1-4 provide immediate data for the transmitting satellite and are repeated every frame; the data include ephemeris, clock and frequency offsets. Strings 5-15 provide non-immediate data (the almanac) for each satellite in the constellation, with frames I-IV each describing five satellites. The almanac uses modified Keplerian parameters and is updated daily. The details of the high-precision signal have not been disclosed.
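The FDMA plan described above can be written down directly. The sketch below reproduces the stated L1 and L2 centre-frequency formulas for a given frequency channel number n; the function name and the -7 to 6 channel range (the range used since the constellation was modernized) are assumptions for illustration.

```python
def glonass_centre_mhz(n, band="L1"):
    """Centre frequency (MHz) for GLONASS frequency channel number n."""
    if not -7 <= n <= 6:
        raise ValueError("frequency channel number n assumed to be in -7..6")
    if band == "L1":
        return 1602.0 + n * 0.5625   # L1 formula from the text
    if band == "L2":
        return 1246.0 + n * 0.4375   # L2 formula from the text
    raise ValueError("band must be 'L1' or 'L2'")

print(glonass_centre_mhz(0))         # 1602.0
print(glonass_centre_mhz(-7))        # 1598.0625
print(glonass_centre_mhz(6, "L2"))   # 1248.625
```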
30.
Near-field communication
–
NFC devices are used in contactless payment systems, similar to those used in credit cards and electronic ticket smartcards, and allow mobile payment to replace or supplement these systems. NFC is used for sharing contacts, photos, videos or files. NFC-enabled devices can act as electronic identity documents and keycards. NFC offers a low-speed connection with simple setup that can be used to bootstrap more capable wireless connections. Similar ideas in advertising and industrial applications were not generally successful commercially, outpaced by technologies such as barcodes, but NFC protocols established a generally supported standard. When one of the connected devices has Internet connectivity, the other can exchange data with online services. NFC-enabled portable devices can be provided with application software, for example to read electronic tags or make payments when connected to an NFC-compliant apparatus. NFC reader/writer mode enables NFC-enabled devices to read information stored on inexpensive NFC tags embedded in labels or smart posters; NFC peer-to-peer mode enables two NFC-enabled devices to communicate with each other to exchange information in an ad hoc fashion. NFC tags are passive data stores which can be read, and under some circumstances written to, by an NFC device. They typically contain data and are read-only in normal use, but may be rewritable. Applications include secure personal data storage. NFC tags can be custom-encoded by their manufacturers or use the industry specifications. The standards were provided by the NFC Forum; the forum was responsible for promoting the technology, set the standards and certifies device compliance. Secure communications are available by applying encryption algorithms, as is done for credit cards. NFC standards cover communications protocols and data exchange formats and are based on existing radio-frequency identification standards including ISO/IEC 14443 and FeliCa. The standards include ISO/IEC 18092 and those defined by the NFC Forum. In addition to the NFC Forum, the GSMA group defined a platform for the deployment of GSMA NFC standards within mobile handsets. GSMA's efforts include the Trusted Services Manager, the Single Wire Protocol, and testing/certification. A patent licensing program for NFC is under deployment by France Brevets, a patent fund created in 2011; this program was previously under development by Via Licensing Corporation, an independent subsidiary of Dolby Laboratories. A platform-independent free and open source NFC library, libnfc, is available under the GNU Lesser General Public License. Present and anticipated applications include contactless transactions, data exchange and simplified setup of more complex communications such as Wi-Fi. Like other proximity-card technologies, NFC is used for identification, authentication and tracking. 1983: the first patent to be associated with the abbreviation RFID was granted to Charles Walton. 1997: an early form was patented and first used in Star Wars character toys for Hasbro; the patent was originally held by Andrew White and Marc Borrett at Innovision Research and Technology. The device allowed data communication between two units in close proximity. 2002: Sony and Philips agreed to establish a technology specification and created a technical outline on March 25, 2002.
31.
Bluetooth
–
Bluetooth is a wireless technology standard for exchanging data over short distances from fixed and mobile devices, and for building personal area networks. Invented by telecom vendor Ericsson in 1994, it was conceived as a wireless alternative to RS-232 data cables. It can connect up to seven devices, overcoming problems that older technologies had when attempting to connect to each other. Bluetooth is managed by the Bluetooth Special Interest Group (SIG), which has more than 30,000 member companies in the areas of telecommunication, computing and networking. The IEEE standardized Bluetooth as IEEE 802.15.1, but no longer maintains the standard. The Bluetooth SIG oversees development of the specification and manages the qualification program; a manufacturer must meet Bluetooth SIG standards to market a product as a Bluetooth device. A network of patents applies to the technology, which are licensed to individual qualifying devices. The development of the short-link radio technology, later named Bluetooth, was initiated in 1989 by Nils Rydbeck, CTO at Ericsson Mobile in Lund, Sweden, and by Johan Ullman. The purpose was to develop wireless headsets, according to two inventions by Johan Ullman, SE 8902098-6, issued 1989-06-12, and SE 9202239, issued 1992-07-24. Nils Rydbeck tasked Tord Wingren with specifying and Jaap Haartsen and Sven Mattisson with developing the technology; both were working for Ericsson in Lund. The specification is based on frequency-hopping spread spectrum technology. The name was proposed in 1997 by Jim Kardach, who had developed a system that would allow mobile phones to communicate with computers; at the time of this proposal he was reading Frans G. Bengtsson's historical novel The Long Ships, about Vikings and King Harald Bluetooth. The implication is that Bluetooth does the same with communications protocols, uniting them into one universal standard. The Bluetooth logo is a bind rune merging the Younger Futhark runes Hagall (ᚼ) and Bjarkan (ᛒ), Harald's initials. Bluetooth operates at frequencies between 2402 and 2480 MHz, or 2400 and 2483.5 MHz including guard bands 2 MHz wide at the bottom end and 3.5 MHz wide at the top. This is in the globally unlicensed Industrial, Scientific and Medical (ISM) 2.4 GHz short-range radio frequency band. Bluetooth uses a radio technology called frequency-hopping spread spectrum: it divides transmitted data into packets and transmits each packet on one of 79 designated Bluetooth channels, each with a bandwidth of 1 MHz. It usually performs 800 hops per second, with Adaptive Frequency-Hopping enabled. Bluetooth Low Energy uses 2 MHz spacing, which accommodates 40 channels. Originally, Gaussian frequency-shift keying (GFSK) modulation was the only modulation scheme available. Since the introduction of Bluetooth 2.0+EDR, π/4-DQPSK and 8DPSK modulation may also be used between compatible devices. Devices functioning with GFSK are said to be operating in basic rate (BR) mode, where an instantaneous data rate of 1 Mbit/s is possible. The term Enhanced Data Rate (EDR) is used to describe the π/4-DPSK and 8DPSK schemes. The combination of these modes in Bluetooth radio technology is classified as a BR/EDR radio. Bluetooth is a protocol with a master-slave structure.
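The channel arithmetic above is simple enough to write out: classic Bluetooth (BR/EDR) uses 79 channels of 1 MHz starting at 2402 MHz, while Bluetooth Low Energy uses 40 channels at 2 MHz spacing over the same band (RF channel numbering, not the logical advertising/data channel indices). A minimal sketch with illustrative function names:

```python
def bredr_channel_mhz(k):
    """Centre frequency of classic Bluetooth (BR/EDR) channel k (0..78)."""
    if not 0 <= k <= 78:
        raise ValueError("BR/EDR channels run from 0 to 78")
    return 2402 + k          # 1 MHz spacing

def ble_channel_mhz(k):
    """Centre frequency of Bluetooth Low Energy RF channel k (0..39)."""
    if not 0 <= k <= 39:
        raise ValueError("LE RF channels run from 0 to 39")
    return 2402 + 2 * k      # 2 MHz spacing

print(bredr_channel_mhz(0), bredr_channel_mhz(78))  # 2402 2480
print(ble_channel_mhz(0), ble_channel_mhz(39))      # 2402 2480
```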
32.
Bluetooth Low Energy
–
Compared to Classic Bluetooth, Bluetooth Smart is intended to provide considerably reduced power consumption and cost while maintaining a similar communication range. Bluetooth Smart was originally introduced under the name Wibree by Nokia in 2006; it was merged into the main Bluetooth standard in 2010 with the adoption of the Bluetooth Core Specification Version 4.0. Mobile operating systems including iOS, Android, Windows Phone and BlackBerry, as well as macOS, Linux, Windows 8 and Windows 10, natively support Bluetooth Smart. The Bluetooth SIG predicts that by 2018 more than 90 percent of Bluetooth-enabled smartphones will support Bluetooth Smart. The Bluetooth SIG officially unveiled Bluetooth 5 on June 16, 2016 during an event in London. One change on the marketing side is that the point number was dropped, so the release is called simply Bluetooth 5. Bluetooth Smart is not backward-compatible with the previous Bluetooth protocol; the Bluetooth 4.0 specification permits devices to implement either or both of the LE and Classic systems. Bluetooth Smart uses the same 2.4 GHz radio frequencies as Classic Bluetooth; LE does, however, use a simpler modulation system. In 2011, the Bluetooth Special Interest Group announced the Bluetooth Smart logo so as to clarify compatibility between the new low energy devices and other Bluetooth devices. Bluetooth Smart Ready indicates a dual-mode device compatible with both Classic and low energy peripherals; Bluetooth Smart indicates a low-energy-only device, which requires either a Smart Ready or another Smart device in order to function. In the new branding information, the Bluetooth SIG has made one fundamental change: it is phasing out the Bluetooth Smart and Bluetooth Smart Ready logos and word marks and has reverted to using the Bluetooth logo, which now uses a new blue color. The Bluetooth SIG identifies a number of markets for low energy technology, particularly in the home, health and sport sectors. Nokia began developing a wireless technology adapted from the Bluetooth standard which would provide lower power usage; the results were published in 2004 using the name Bluetooth Low End Extension. Integration of Bluetooth Smart with version 4.0 of the Core Specification was completed in early 2010. The first smartphone to implement the 4.0 specification was the iPhone 4S, released in October 2011, and a number of other manufacturers released Bluetooth Smart Ready devices in 2012. Borrowing from the original Bluetooth specification, the Bluetooth SIG defines several profiles (specifications for how a device works in a particular application) for low energy devices; manufacturers are expected to implement the appropriate specifications for their device in order to ensure compatibility. A device may contain implementations of multiple profiles. Bluetooth 4.0 provides low power consumption with higher bit rates. In 2014, Cambridge Silicon Radio launched CSR Mesh. The CSR Mesh protocol uses Bluetooth Smart to communicate with other Bluetooth Smart devices in the network, and each device can pass the information forward to other Bluetooth Smart devices, creating a "mesh" effect that allows, for example, switching off an entire building of lights from a single smartphone.
33.
USB
–
USB is currently developed by the USB Implementers Forum. USB was designed to standardize the connection of peripherals to personal computers. It has become commonplace on other devices, such as smartphones and PDAs. USB has effectively replaced a variety of earlier interfaces, such as serial ports and parallel ports, as well as separate power chargers for portable devices. There are five modes of USB data transfer, in order of increasing bandwidth: Low Speed, Full Speed, High Speed, SuperSpeed and SuperSpeed+. USB devices have some choice of implemented modes, and the USB version is not a reliable statement of the implemented modes. Modes are identified by their names and icons, and the specification suggests that plugs and receptacles be colour-coded. Unlike other data buses, USB connections are directed, with both upstream and downstream ports emanating from a single host. This applies to power as well, with only downstream-facing ports providing power. Thus, USB cables have different ends, A and B; therefore, in general, each different format requires four different connectors: a plug and a receptacle for each of the A and B ends. USB cables have the plugs, and the corresponding receptacles are on the computers or electronic devices. In common practice, the A end is usually the standard format, and the B side varies over standard, mini, and micro. The mini and micro formats also provide for USB On-The-Go with a hermaphroditic AB receptacle. The micro format is the most durable from the point of view of designed insertion lifetime; the standard and mini connectors have a design lifetime of 1,500 insertion-removal cycles. Likewise, the components of the retention mechanism, the parts that provide the required gripping force, were also moved into the plugs on the cable side. A group of seven companies began the development of USB in 1994: Compaq, DEC, IBM, Intel, Microsoft, NEC and Nortel. A team including Ajay Bhatt worked on the standard at Intel, and the first integrated circuits supporting USB were produced by Intel in 1995. The original USB 1.0 specification was introduced in January 1996, and Microsoft Windows 95 OSR 2.1 provided OEM support for USB devices. The first widely used version of USB was 1.1; its 12 Mbit/s data rate was intended for higher-speed devices such as disk drives, and the lower 1.5 Mbit/s rate for low-data-rate devices such as joysticks. Apple Inc.'s iMac was the first mainstream product with USB, and following Apple's design decision to remove all legacy ports from the iMac, many PC manufacturers began building legacy-free PCs, which led to the broader PC market using USB as a standard. The USB 2.0 specification was released in April 2000 and was ratified by the USB Implementers Forum at the end of 2001. The USB 3.0 specification was published on 12 November 2008; its main goals were to increase the data transfer rate, decrease power consumption and increase power output. USB 3.0 includes a new, higher-speed bus called SuperSpeed, operating in parallel with the USB 2.0 bus; for this reason, the new version is also called SuperSpeed.
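The practical difference between the transfer modes is easiest to see as transfer times. The sketch below uses the nominal raw signalling rates of the first four modes and a hypothetical 700 MB file; real throughput is lower because of protocol overhead and line encoding.

```python
# Nominal raw signalling rates for the USB transfer modes, in Mbit/s.
usb_modes = {
    "Low Speed":   1.5,
    "Full Speed":  12,
    "High Speed":  480,
    "SuperSpeed":  5000,
}

file_mb = 700  # hypothetical 700 MB file
for mode, mbit_s in usb_modes.items():
    seconds = file_mb * 8 / mbit_s
    print(f"{mode:11s}: {seconds:8.1f} s")
# Protocol overhead and encoding reduce real-world rates well below
# these theoretical minimum transfer times.
```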
34.
USB On-The-Go
–
Use of USB OTG allows those devices to switch back and forth between the roles of host and peripheral. For instance, a phone may read from removable media as the host device, but present itself as a storage device when connected to a host computer. Standard USB uses a master/slave architecture: a host acts as the master device for the entire bus, and a USB device acts as a slave; the device controlling the link is called the master or host. If implementing standard USB, devices must assume one role or the other, with computers generally set up as hosts. In the absence of USB OTG, cell phones often implemented slave functionality to allow easy transfer of data to and from computers. Such phones, as slaves, could not readily be connected to printers, as printers also implemented the slave role; USB OTG directly addresses this issue. When a device is plugged into the USB bus, the master device, or host, sets up communications with the device and handles service provisioning. That allows the devices to be greatly simplified compared to the host; for example, the host controls all data transfers over the bus, with the devices capable only of signalling that they require attention. To transfer data between two devices, for example from a phone to a printer, the host first reads the data from one device, then writes it to the other. While the master-slave arrangement works for some devices, many devices can act either as master or as slave depending on what else shares the bus. USB OTG recognizes that a device can perform both master and slave roles, and so subtly changes the terminology: with OTG, a device can be either a host, when acting as a link master, or a peripheral, when acting as a link slave. The choice between host and peripheral roles is handled entirely by which end of the cable the device is connected to. The device connected to the A end of the cable at start-up, known as the A-device, acts as the default host, while the B end acts as the default peripheral, known as the B-device. After initial startup, setup for the bus operates as it does with the normal USB standard, with the A-device setting up the B-device. However, when the same A-device is plugged into another USB system or a dedicated host becomes available, it can become a slave. Role swapping does not work through a hub, as one device will act as a host. USB OTG is a part of a supplement to the Universal Serial Bus 2.0 specification originally agreed upon in late 2001. The latest version of the supplement also defines behavior for an Embedded Host, which has targeted abilities and the same USB Standard-A port used by PCs. SuperSpeed OTG devices, Embedded Hosts and peripherals are supported through the USB OTG and Embedded Host Supplement to the USB 3.0 specification. The USB OTG and Embedded Host Supplement to the USB 2.0 specification introduced the Attach Detection Protocol (ADP), which lets a device determine attachment status in the absence of bus power; it does so by measuring the capacitance on the USB port to determine whether there is another device attached. When a large change in capacitance is detected, indicating device attachment, the A-device provides power and looks for a connection; at the same time, a B-device will generate SRP and wait for the USB bus to become powered. The Session Request Protocol (SRP) allows both communicating devices to control when the link's power session is active; in standard USB, only the host is capable of doing so.
35.
Phone connector (audio)
–
A phone connector, also known as phone jack, audio jack, headphone jack or jack plug, is a common family of electrical connector typically used for analog signals, primarily audio. It is cylindrical in shape, typically with two, three, four and, recently, five contacts. Three-contact versions are known as TRS connectors, where T stands for tip, R stands for ring and S stands for sleeve. Similarly, two-, four- and five-contact versions are called TS, TRRS and TRRRS connectors respectively. The phone connector was invented for use in telephone switchboards in the 19th century and is still widely used. In its original configuration, the diameter of the sleeve conductor is 1⁄4 inch. The mini connector has a diameter of 3.5 mm. Specific models are termed stereo plug, mini-stereo, mini jack, headphone jack and microphone jack, or are referred to by size, i.e. 3.5 mm or 6.35 mm. In the UK, the terms jack plug and jack socket are commonly used for the male and female phone connectors respectively; in the US, a stationary electrical connector is called a jack. Phone plugs and jacks are not to be confused with the similar terms phono plug and phono jack, which refer to RCA connectors common in consumer hi-fi; the 3.5 mm connector is, however, sometimes referred to as mini phono, counter to the connector manufacturers' nomenclature. Modern phone connectors are available in three standard sizes. The original 1⁄4 in version dates from 1878, when it was used for manual telephone exchanges, making it the oldest electrical connector standard still in use. The 3.5 mm (miniature) and 2.5 mm (sub-miniature) sizes were originally designed as two-conductor connectors for earpieces on transistor radios; in use since the 1950s, this remains a standard still used today. The 3.5 mm connector, which is the most commonly used in applications today, was popularized by the Sony EFM-117J radio, which was released in 1964. It became very popular with its application on the Walkman in 1979. The 3.5 mm and 2.5 mm sizes are sometimes referred to as 1⁄8 in and 3⁄32 in respectively in the United States, though those dimensions are only approximations. All three sizes are now available in two-conductor and three-conductor versions. Four- and five-conductor versions of the 3.5 mm plug are used for certain applications. A four-conductor version is often used in compact camcorders and portable media players, and sometimes also in laptop computers and smartphones, providing stereo sound plus a video signal. Proprietary interfaces using both four- and five-conductor versions exist, where the extra conductors are used to supply power for accessories. The four-conductor 3.5 mm plug is used as a connector on handheld amateur radio transceivers from Yaesu. It is also used in some amplifiers, like the LH Labs Geek Out V2+, for a balanced output. The most common arrangement remains to have the male plug on the cable and the female socket mounted in a piece of equipment, which was the original intention of the design.
36.
System on a chip
–
A system on a chip or system on chip (SoC) is an integrated circuit that integrates all components of a computer or other electronic system. It may contain digital, analog, mixed-signal, and often radio-frequency functions, all on a single substrate. SoCs are very common in the mobile computing market because of their low power consumption. A typical application is in the area of embedded systems. In contrast with a microcontroller, an SoC integrates a microcontroller with advanced peripherals such as a graphics processing unit, a Wi-Fi module, or a coprocessor. As long as we remember that an SoC does not necessarily contain built-in memory, in general we can distinguish three types of SoC: SoCs built around a microcontroller, SoCs built around a microprocessor, and application-specific SoCs. A separate category may be the programmable SoC, in which part of the elements is not permanently defined and can be programmed in a manner analogous to an FPGA or CPLD. When it is not feasible to construct an SoC for a particular application, an alternative is a system in package (SiP) comprising a number of chips in a single package. In large volumes, an SoC is believed to be more cost-effective than an SiP, since it increases the yield of the fabrication and because its packaging is simpler. Another option, as seen for example in cell phones, is package-on-package stacking during board assembly. The SoC includes processors and numerous digital peripherals, and comes in a ball grid array package with lower and upper connections. The lower balls connect to the board and various peripherals, with the upper balls in a ring holding the memory buses used to access NAND flash; memory packages could come from multiple vendors. DMA controllers route data directly between external interfaces and memory, bypassing the processor core and thereby increasing the data throughput of the SoC. An SoC consists of both the hardware, described above, and the software controlling the microcontroller, microprocessor or DSP cores, peripherals and interfaces. The design flow for an SoC aims to develop this hardware and software in parallel. Most SoCs are developed from pre-qualified hardware blocks for the elements described above. Of particular importance are the protocol stacks that drive industry-standard interfaces like USB. The hardware blocks are put together using CAD tools; the software modules are integrated using a software development environment. Once the overall architecture of the SoC has been defined, any new hardware elements are written in an abstract language termed RTL, which defines the circuit behaviour. These elements are connected together in the same RTL language to create the full SoC design. Chips are verified for logical correctness before being sent to the foundry. This process is called functional verification, and it accounts for a significant portion of the project time. With the growing complexity of chips, hardware verification languages like SystemVerilog, SystemC, e, and OpenVera are being used. Bugs found in the verification stage are reported to the designer.
37.
Random-access memory
–
Random-access memory (RAM) is a form of computer data storage which stores frequently used program instructions and data to increase the general speed of a system. A random-access memory device allows data items to be read or written in almost the same amount of time irrespective of the physical location of the data inside the memory. RAM contains multiplexing and demultiplexing circuitry to connect the data lines to the addressed storage for reading or writing the entry. Usually more than one bit of storage is accessed by the same address. In today's technology, random-access memory takes the form of integrated circuits. RAM is normally associated with volatile types of memory, where stored information is lost if power is removed. Other types of non-volatile memory exist that allow random access for read operations; these include most types of ROM and a type of flash memory called NOR-Flash. Integrated-circuit RAM chips came onto the market in the early 1970s, with the first commercially available DRAM chip, the Intel 1103, introduced in October 1970. Early computers used relays, mechanical counters or delay lines for main memory functions. Ultrasonic delay lines could only reproduce data in the order it was written. Drum memory could be expanded at relatively low cost, but efficient retrieval of memory items required knowledge of the physical layout of the drum to optimize speed. Latches built out of vacuum tube triodes, and later out of discrete transistors, were used for smaller and faster memories such as registers; such registers were relatively large and too costly to use for large amounts of data. The first practical form of random-access memory was the Williams tube, starting in 1947. It stored data as electrically charged spots on the face of a cathode ray tube; since the electron beam of the CRT could read and write the spots on the tube in any order, memory was random access. The capacity of the Williams tube was a few hundred to around a thousand bits, but it was smaller and faster than the alternatives of the time. In fact, rather than the Williams tube memory being designed for the SSEM, the SSEM served as a testbed to demonstrate the reliability of the memory. Magnetic-core memory was invented in 1947 and developed up until the mid-1970s. It became a widespread form of random-access memory, relying on an array of magnetized rings. By changing the sense of each ring's magnetization, data could be stored, with one bit stored per ring. Since every ring had a combination of address wires to select and read or write it, access to any memory location in any sequence was possible. Magnetic core memory was the standard form of memory system until displaced by solid-state memory in integrated circuits. In early dynamic RAM, data was stored in the capacitance of each transistor and had to be periodically refreshed every few milliseconds before the charge could leak away.
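The relationship between address lines and addressable storage mentioned above is the usual power-of-two rule: each additional address line doubles the number of selectable locations. This is not taken from the article, just the standard arithmetic, sketched below with an illustrative function name.

```python
import math

def address_bits(num_locations):
    """Minimum number of address lines needed to select num_locations entries."""
    return math.ceil(math.log2(num_locations))

# Each extra address line doubles the addressable storage.
for locations in (1024, 65_536, 2**30):
    print(f"{locations:>12} locations -> {address_bits(locations)} address bits")
# 1024 -> 10, 65536 -> 16, 2**30 -> 30
```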
38.
Pixel
–
The address of a pixel corresponds to its physical coordinates. LCD pixels are manufactured in a two-dimensional grid, and are often represented using dots or squares. Each pixel is a sample of an original image; more samples typically provide more accurate representations of the original. The intensity of each pixel is variable. In color imaging systems, a color is typically represented by three or four component intensities, such as red, green, and blue, or cyan, magenta, yellow, and black. The word pixel is based on a contraction of pix and el. The word pixel was first published in 1965 by Frederic C. Billingsley of JPL, to describe the elements of video images from space probes to the Moon. Billingsley had learned the word from Keith E. McFarland at the Link Division of General Precision in Palo Alto; McFarland said simply it was in use at the time. The word is a combination of pix, for picture, and el, for element. The word pix appeared in Variety magazine headlines in 1932, as an abbreviation for the word pictures, in reference to movies; by 1938, pix was being used in reference to still pictures by photojournalists. The concept of a picture element dates to the earliest days of television. Some authors explain pixel as picture cell, as early as 1972. In graphics and in image and video processing, pel is often used instead of pixel; for example, IBM used it in their Technical Reference for the original PC. A pixel is generally thought of as the smallest single component of a digital image; however, the definition is highly context-sensitive. For example, there can be printed pixels in a page, or pixels carried by electronic signals, or represented by digital values, or pixels on a display device, or pixels in a digital camera. This list is not exhaustive and, depending on context, synonyms include pel, sample, byte, bit and dot. Pixels can be used as a unit of measure, such as 2400 pixels per inch, 640 pixels per line, or spaced 10 pixels apart. For example, a high-quality photographic image may be printed with 600 ppi on a 1200 dpi inkjet printer; even higher dpi numbers, such as the 4800 dpi quoted by printer manufacturers since 2002, do not mean much in terms of achievable resolution. The more pixels used to represent an image, the more closely the result can resemble the original. The number of pixels in an image is sometimes called the resolution, though resolution has a more specific definition; pixel counts can also be expressed as a single number of megapixels. The pixels, or color samples, that form an image may or may not be in one-to-one correspondence with screen pixels. In computing, an image composed of pixels is known as a bitmapped image or a raster image.
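The units mentioned above (pixels per inch, megapixels) combine in a straightforward way. The sketch below computes the nominal megapixel count of an image and the largest print it can make at a chosen pixel density; the 2048 x 1536 image and 300 ppi target are illustrative assumptions only.

```python
def megapixels(width_px, height_px):
    """Nominal pixel count in megapixels."""
    return width_px * height_px / 1_000_000

def max_print_size_in(width_px, height_px, ppi):
    """Largest print (width, height in inches) at the given pixels per inch."""
    return width_px / ppi, height_px / ppi

w, h = 2048, 1536                       # a roughly three-megapixel image
print(f"{megapixels(w, h):.1f} MP")     # 3.1 MP
print(max_print_size_in(w, h, 300))     # roughly 6.8 x 5.1 inches at 300 ppi
```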
39.
Qualcomm
–
Qualcomm is an American multinational semiconductor and telecommunications equipment company that designs and markets wireless telecommunications products and services. It derives most of its revenue from chipmaking and the bulk of its profit from its patent licensing businesses. The company headquarters are located in San Diego, California, United States, and it has 224 worldwide locations. The parent company is Qualcomm Incorporated, which includes the Qualcomm Technology Licensing Division. Jacobs and Viterbi had previously founded Linkabit. In 1990, Qualcomm began the design of the first CDMA-based cellular base station; this work began as a study contract from AirTouch, which was facing a shortage of cellular capacity in Los Angeles. Two years later, Qualcomm began to manufacture CDMA cell phones and base stations. The initial base stations were not reliable, and the technology was licensed wholly to Nortel in return for their work in improving the base station switching. The first CDMA technology was standardized as IS-95; Qualcomm has since helped to establish the CDMA2000, WCDMA and LTE cellular standards. The following year, Qualcomm acquired Eudora, an email client for the PC that could be used with the OmniTRACS system. The acquisition associated a widely used email client with a company that was little known at the time. In 1997, Qualcomm paid $18 million for the naming rights to Jack Murphy Stadium in San Diego, renaming it Qualcomm Stadium; the naming rights belong to Qualcomm until 2017. In 1999, Qualcomm sold its base station business to Ericsson; the company was now focused on developing and licensing wireless technologies and selling the ASICs that implement them. Steve Mollenkopf was promoted to president and chief operating officer of the company; Mollenkopf's appointment as CEO was announced on December 13, 2013 and took effect on March 4, 2014. He succeeded Paul E. Jacobs, who remains executive chairman. CFO Bill Keitel retired and was replaced by Applied Materials CFO George Davis on March 11, 2013. Vista Equity Partners took over the Omnitracs business from Qualcomm Incorporated in November 2013. In October 2014, Qualcomm wrapped up a deal for chip maker CSR plc for a fee of $2.5 billion, beating its biggest rival Microchip Technology. However, Qualcomm was surprised by the release of the 64-bit Apple A7 in September 2013; furthermore, Qualcomm was facing anti-trust investigations in China, the European Union, and the United States. The combination of pressures caused a significant fall in Qualcomm's profits. In July 2015, the company cut 4,700 jobs, about 15 percent of its 31,300-strong workforce, due to a decline in sales. Executive management knew this was coming, so they came up with a plan to retain employees; however, instead of paying reasonable salaries, executive management used this plan as a justification to give themselves a large payout first and then lay off employees later. In December 2015, Qualcomm Inc. announced that it had rejected calls to split itself in two, deciding to keep its chipmaking and patent-licensing businesses together.