Debugging is the process of finding and resolving defects or problems within a computer program that prevent correct operation of computer software or a system. Debugging tactics can involve interactive debugging, control-flow analysis, unit testing, integration testing, log-file analysis, monitoring at the application or system level, memory dumps, and profiling. The terms "bug" and "debugging" are popularly attributed to Admiral Grace Hopper in the 1940s. While she was working on a Mark II computer at Harvard University, her associates discovered a moth stuck in a relay, impeding operation, whereupon she remarked that they were "debugging" the system. However, the term "bug", in the sense of "technical error", dates back at least to 1878 and Thomas Edison, and "debugging" seems to have been used in aeronautics before entering the world of computers. Indeed, in an interview Grace Hopper remarked that she was not coining the term; the moth fit the existing terminology, so it was saved. J. Robert Oppenheimer used the term in a letter to Dr. Ernest Lawrence at UC Berkeley, dated October 27, 1944, regarding the recruitment of additional technical staff.
The Oxford English Dictionary entry for "debug" quotes the term "debugging" used in reference to airplane engine testing in a 1945 article in the Journal of the Royal Aeronautical Society; an article in "Airforce" also refers to debugging, this time of aircraft cameras. Hopper's bug was found on September 9, 1947, but the term was not adopted by computer programmers until the early 1950s. The seminal article by Gill in 1951 is the earliest in-depth discussion of programming errors, but it does not use the term "bug" or "debugging". In the ACM's digital library, the term "debugging" is first used in three papers from the 1952 ACM National Meetings; two of the three use the term in quotation marks. By 1963 "debugging" was a common enough term to be mentioned in passing without explanation on page 1 of the CTSS manual. Kidwell's article "Stalking the Elusive Computer Bug" discusses the etymology of "bug" and "debug" in greater detail. As software and electronic systems have become more complex, the common debugging techniques have expanded with more methods to detect anomalies, assess impact, and schedule software patches or full updates to a system.
The words "anomaly" and "discrepancy" can be used as more neutral terms, avoiding the words "error", "defect", or "bug" where there might be an implication that all so-called errors, defects or bugs must be fixed. Instead, an impact assessment can be made to determine whether changes to remove an anomaly would be cost-effective for the system, or whether a scheduled new release might render the change unnecessary. Not all issues in a system are mission-critical, and it is important to avoid the situation where a change might be more upsetting to users, long-term, than living with the known problem. Basing decisions on the acceptability of some anomalies can avoid a culture of a "zero-defects" mandate, where people might be tempted to deny the existence of problems so that the result would appear as zero defects. Considering collateral issues such as the cost-versus-benefit impact assessment, broader debugging techniques expand to determine the frequency of anomalies to help assess their impact on the overall system.
Debugging ranges in complexity from fixing simple errors to performing lengthy and tiresome tasks of data collection and scheduling updates. The debugging skill of the programmer can be a major factor in the ability to debug a problem, but the difficulty of software debugging varies with the complexity of the system and depends, to some extent, on the programming language used and the available tools, such as debuggers. Debuggers are software tools which enable the programmer to monitor the execution of a program, stop it, restart it, set breakpoints, and change values in memory; the term debugger can also refer to the person doing the debugging. High-level programming languages, such as Java, make debugging easier, because they have features such as exception handling and type checking that make real sources of erratic behaviour easier to spot. In programming languages such as C or assembly, bugs may cause silent problems such as memory corruption, and it can be difficult to see where the initial problem happened.
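The monitoring that debuggers perform can be illustrated with the execution-tracing hook that Python's own debugger, pdb, is built on. This is only a minimal sketch, not a real debugger; the function name `watched_sum` and the recorded line list are illustrative:

```python
import sys

executed_lines = []

def tracer(frame, event, arg):
    # The interpreter calls this on every function call and executed line;
    # a real debugger would pause here when a breakpoint matches.
    if event == "line" and frame.f_code.co_name == "watched_sum":
        executed_lines.append(frame.f_lineno)
    return tracer  # returning the tracer enables per-line events

def watched_sum(values):
    total = 0
    for v in values:
        total += v
    return total

sys.settrace(tracer)             # start monitoring execution
result = watched_sum([1, 2, 3])
sys.settrace(None)               # stop monitoring
# executed_lines now records each line number run inside watched_sum
```

A breakpoint is then just a condition inside the trace function ("pause when line N of file F is reached"), with stepping and value inspection built on the paused frame.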
In those cases, memory debugger tools may be needed. In certain situations, general-purpose software tools that are language-specific in nature can be useful; these take the form of static code analysis tools. These tools look for a specific set of known problems, some common and some rare, within the source code, concentrating more on the semantics than on the syntax that compilers and interpreters check; some tools claim to be able to detect over 300 different problems. Both commercial and free tools exist for various languages; these tools can be useful when checking large source trees, where it is impractical to do code walkthroughs. A typical example of a problem detected would be a variable dereference that occurs before the variable is assigned a value; as another example, some such tools perform strong type checking when the language does not require it. Thus, they are better at locating errors in code that is syntactically correct, though these tools have a reputation for false positives. The old Unix lint program is an early example.
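A toy version of such a static check can be written in a few lines. The sketch below, using Python's ast module, flags a variable that is read but never assigned, the same class of problem lint-style tools detect; the snippet under analysis and the name `suspicious` are invented for illustration:

```python
import ast

# Hypothetical snippet with a name ('y') used but never assigned.
source = """
def f(flag):
    if flag:
        x = 1
    return x + y
"""

tree = ast.parse(source)
func = tree.body[0]
params = {a.arg for a in func.args.args}
# All names assigned anywhere in the function body.
assigned = {t.id for n in ast.walk(func) if isinstance(n, ast.Assign)
            for t in n.targets if isinstance(t, ast.Name)}
# Names read without any assignment or parameter binding.
suspicious = sorted({n.id for n in ast.walk(func)
                     if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Load)
                     and n.id not in assigned and n.id not in params})
```

This flags 'y'; a real tool would additionally track control flow and report that 'x' may be unassigned when `flag` is false, which is where false positives tend to creep in.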
For debugging electronic hardware (e.g
The Z80 CPU is an 8-bit microprocessor. It was introduced by Zilog in 1976 as the startup company's first product. The Z80 was conceived by Federico Faggin in late 1974 and developed by him and his then-11 employees at Zilog from early 1975 until March 1976, when the first working samples were delivered. With the revenue from the Z80, the company built its own chip factories and grew to over a thousand employees over the following two years. The Zilog Z80 was a software-compatible extension and enhancement of the Intel 8080 and, like it, was aimed at embedded systems. According to the designers, the primary targets for the Z80 CPU were products like intelligent terminals, high-end printers and advanced cash registers, as well as telecom equipment, industrial robots and other kinds of automation equipment. The Z80 was introduced on the market in July 1976 and came to be used in general desktop computers running CP/M and other operating systems, as well as in the home computers of the 1980s.
It was common in military applications and musical equipment, such as synthesizers, and in the computerized coin-operated video games of the late 1970s and early 1980s, the arcade machines or video game arcade cabinets. The Z80 was one of the most used CPUs in the home computer market from the late 1970s to the mid-1980s. Zilog licensed the Z80 to the US-based Synertek and Mostek, which had helped them with initial production, as well as to a European second-source manufacturer, SGS; the design was also copied by several Japanese, East European and Soviet manufacturers. This won the Z80 acceptance in the world market, since large companies like NEC, Toshiba and Hitachi started to manufacture the device. In recent decades Zilog has refocused on the ever-growing market for embedded systems, and the most recent Z80-compatible microcontroller family, the pipelined 24-bit eZ80 with a linear 16 MB address range, has been introduced alongside the simpler Z180 and Z80 products. The Z80 came about when physicist Federico Faggin left Intel at the end of 1974 to found Zilog with Ralph Ungermann.
At Fairchild Semiconductor, and later at Intel, Faggin had been working on fundamental transistor and semiconductor manufacturing technology. He developed the basic design methodology used for memories and microprocessors at Intel and led the work on the Intel 4004, the 8080 and several other ICs. Masatoshi Shima, the principal logic and transistor-level designer of the 4004 and the 8080 under Faggin's supervision, joined the Zilog team. By March 1976, Zilog had developed the Z80 as well as an accompanying assembler-based development system for its customers, and by July 1976, this was formally launched onto the market. Early Z80s were manufactured by Synertek and Mostek before Zilog had its own manufacturing factory ready in late 1976; these companies were chosen because they could do the ion implantation needed to create the depletion-mode MOSFETs that the Z80 design used as load transistors in order to cope with a single 5-volt power supply. Faggin designed the instruction set to be binary compatible with the Intel 8080, so that most 8080 code, notably the CP/M operating system and Intel's PL/M compiler for the 8080, would run unmodified on the new Z80 CPU.
Masatoshi Shima designed most of the microarchitecture as well as the gate and transistor levels of the Z80 CPU, assisted by a small number of engineers and layout people. CEO Federico Faggin was heavily involved in the chip layout work, together with two dedicated layout people; according to Faggin himself, he worked 80 hours a week in order to meet the tight schedule given by the financial investors. The Z80 offered many improvements over the 8080: an enhanced instruction set including single-bit addressing; shifts/rotates on memory and registers other than the accumulator; rotate instructions for BCD number strings in memory; program looping; program-counter-relative jumps; and block copy, block input/output and byte search instructions. The Z80 also had better support for signed 8- and 16-bit arithmetic, and new IX and IY index registers with instructions for direct base+offset addressing. It had a better interrupt system: a more automatic and general vectorized interrupt system, mode 2, intended for Zilog's line of counter/timers, DMA and communications controllers, as well as a fixed-vector interrupt system, mode 1, for simple systems with minimal hardware.
There was also a non-maskable interrupt, which could be used to respond to power-down situations or other high-priority events, and two separate register files, which could be switched to speed up response to interrupts such as fast asynchronous event handlers or a multitasking dispatcher. Although they were not intended as extra registers for general code, they were used that way in some applications. The Z80 also required less hardware for power supply, clock generation and interface to memory and I/O: a single 5-volt power supply, a single-phase 5 V clock, a built-in DRAM refresh mechanism, and non-multiplexed buses. A special reset function which clears only the program counter so that a single Z80 CPU could be used in a
Quantum computing is the use of quantum-mechanical phenomena, such as superposition and entanglement, to perform computation. A quantum computer is a device used to perform such computation, which can be implemented theoretically or physically. The field of quantum computing is a sub-field of quantum information science, which also includes quantum cryptography and quantum communication. Quantum computing began in the early 1980s, when Richard Feynman and Yuri Manin expressed the idea that a quantum computer had the potential to simulate things that a classical computer could not. In 1994, Peter Shor published a quantum algorithm for factoring large integers efficiently, with the potential to break widely used public-key encryption. There are two main approaches to physically implementing a quantum computer: analog and digital. Analog approaches are further divided into quantum simulation, quantum annealing, and adiabatic quantum computation. Digital quantum computers use quantum logic gates to do computation. Both approaches use qubits.
Qubits are fundamental to quantum computing and are somewhat analogous to bits in a classical computer. Qubits can be in a 1 or 0 quantum state, or they can be in a superposition of the 1 and 0 states. However, when qubits are measured they always give a 0 or a 1, with probabilities determined by the quantum state they were in. Today's physical quantum computers are noisy, and quantum error correction is a burgeoning field of research. Quantum supremacy, demonstrating a computation infeasible for classical computers, is an anticipated near-term milestone. While there is much hope and research in the field of quantum computing, as of March 2019 there had been no commercially useful algorithms published for today's noisy quantum computers. A classical computer has a memory made up of bits, where each bit is represented by either a one or a zero. A quantum computer, on the other hand, maintains a sequence of qubits, which can represent a one, a zero, or any quantum superposition of those two qubit states. In general, a quantum computer with n qubits can be in a superposition of up to 2^n different states.
A quantum computer operates on its qubits using quantum logic gates and measurement. An algorithm is composed of a fixed sequence of quantum logic gates, and a problem is encoded by setting the initial values of the qubits, similar to how a classical computer works. The calculation ends with a measurement, collapsing the system of qubits into one of the 2^n eigenstates, where each qubit is zero or one, decomposing into a classical state. The outcome can therefore be at most n classical bits of information. If the algorithm did not end with a measurement, the result is an unobserved quantum state. Quantum algorithms are often probabilistic, in that they provide the correct solution only with a certain known probability. Note that the term non-deterministic computing must not be used in that case to mean probabilistic, because the term non-deterministic has a different meaning in computer science. An example of an implementation of qubits of a quantum computer could start with the use of particles with two spin states: "down" and "up".
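The point about a known success probability can be illustrated classically. The sketch below assumes a stand-in procedure `noisy_answer` that is correct with probability 2/3 (it is not a quantum algorithm); repeating it and taking a majority vote drives the overall error rate toward zero, which is why a probabilistic guarantee is good enough in practice:

```python
import random

random.seed(42)  # fixed seed so the illustration is deterministic

# Stand-in for a probabilistic algorithm that returns the correct
# answer (here, 1) with probability 2/3, as a quantum algorithm
# might guarantee.
def noisy_answer():
    return 1 if random.random() < 2 / 3 else 0

# One run is unreliable; a majority vote over many runs is not.
runs = [noisy_answer() for _ in range(301)]
majority = 1 if sum(runs) > len(runs) // 2 else 0
```

With 301 repetitions of a 2/3-correct procedure, the chance that the majority vote is wrong is vanishingly small (a standard Chernoff-bound argument).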
A quantum computer with a given number of qubits is fundamentally different from a classical computer composed of the same number of classical bits. For example, representing the state of an n-qubit system on a classical computer requires the storage of 2^n complex coefficients, while to characterize the state of a classical n-bit system it is sufficient to provide the values of the n bits, that is, only n numbers. Although this fact may seem to indicate that qubits can hold exponentially more information than their classical counterparts, care must be taken not to overlook the fact that the qubits are only in a probabilistic superposition of all of their states; this means that when the final state of the qubits is measured, they will only be found in one of the possible configurations they were in before the measurement. It is incorrect to think of a system of qubits as being in one particular state before the measurement; the qubits are in a superposition of states before any measurement is made, which directly affects the possible outcomes of the computation.
To better understand this point, consider a classical computer that operates on a three-bit register. If the exact state of the register at a given time is not known, it can be described as a probability distribution over the 2^3 = 8 different three-bit strings 000, 001, 010, 011, 100, 101, 110, 111. If there is no uncertainty over its state, it is in one of these states with probability 1. However, if it is a probabilistic computer, there is a possibility of it being in any one of a number of different states. The state of a three-qubit quantum computer is described by an eight-dimensional vector (
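The exponential bookkeeping described above can be sketched directly: simulating even a small qubit register classically means storing all 2^n complex amplitudes and sampling a measurement outcome from them. The variable names below are illustrative:

```python
import random

n = 3
# The state of an n-qubit register is 2**n complex amplitudes,
# one per basis state |000>, |001>, ..., |111>.
amplitudes = [complex(1 / (2 ** n) ** 0.5, 0)] * (2 ** n)  # equal superposition

# Born rule: the probability of each measurement outcome is |amplitude|**2.
probabilities = [abs(a) ** 2 for a in amplitudes]

# Measurement collapses the register to a single classical 3-bit string,
# sampled according to those probabilities.
outcome = random.choices(range(2 ** n), weights=probabilities)[0]
bits = format(outcome, "03b")
```

Here 8 complex numbers describe 3 qubits; for 50 qubits the same representation needs 2^50 amplitudes, which is the core of the classical-simulation difficulty.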
National Semiconductor was an American semiconductor manufacturer which specialized in analog devices and subsystems, with headquarters in Santa Clara, California, United States. The company produced power management integrated circuits, display drivers, operational amplifiers, communication interface products and data conversion solutions. National's key markets included wireless handsets and displays, as well as a variety of broad electronics markets, including medical, automotive, and test and measurement applications. On September 23, 2011, the company formally became part of Texas Instruments as the "Silicon Valley" division. National Semiconductor was founded in Danbury, Connecticut, by Dr. Bernard J. Rothlein on May 27, 1959, when he and seven colleagues, Edward N. Clarke, Joseph J. Gruber, Milton Schneider, Robert L. Hopkins, Robert L. Koch, Richard R. Rau and Arthur V. Siefert, left their employment at the semiconductor division of Sperry Rand Corporation. The founding of the new company was followed by Sperry Rand filing a lawsuit against National Semiconductor for patent infringement.
By 1965, as the lawsuit was reaching the courts, its preliminaries had depressed the stock value of National. The depressed stock values allowed Peter J. Sprague to invest in the company with his family's funds. Sprague relied on further financial backing from a pair of West Coast investment firms and a New York underwriter to take control as the chairman of National Semiconductor; at that time he was 27 years old. Jeffrey S. Young characterized the era as the beginning of venture capitalism. That same year National Semiconductor acquired Molectro, which had been founded in 1962 in Santa Clara, California, by J. Nall and D. Spittlehouse, who had been employed at Fairchild Camera and Instrument Corporation. The acquisition brought to National Semiconductor two experts in linear semiconductor technologies, Robert Widlar and Dave Talbert, who were formerly employed at Fairchild, and provided National with the technology to launch itself into the fabrication and manufacture of monolithic integrated circuits.
In 1967, Sprague hired five top executives away from Fairchild, among whom were Charles E. Sporck and Pierre Lamond. At the time of Sporck's hiring, Robert Noyce was de facto head of semiconductor operations at Fairchild and Sporck was his operations manager. Sporck was appointed CEO of National. To sweeten the deal for Sporck, who was hired at half his former salary at Fairchild, he was allotted a substantial share of National's stock. In essence, Sporck took four of his personnel from Fairchild with him, as well as three others from TI, Perkin-Elmer and Hewlett-Packard, to form a new eight-man team at National Semiconductor. Incidentally, Sporck had been Widlar's superior at Fairchild before Widlar left Fairchild to join Molectro after a compensation dispute with Sporck. In 1968, National shifted its headquarters from Connecticut to Santa Clara, California. However, like many companies, National retained its registration as a Delaware corporation, for legal and financial expediency.
Over the years National Semiconductor acquired several companies, such as Fairchild Semiconductor and Cyrix. However, over time National Semiconductor spun off these acquisitions: Fairchild Semiconductor became a separate company again in 1997, and the Cyrix microprocessor division was sold to VIA Technologies of Taiwan in 1999. From 1997 to 2002, National enjoyed a large amount of publicity and awards with the development of the Cyrix Media Center, Cyrix WebPad, WebPad Metro and National Origami PDA concept devices created by National's Conceptual Products Group. Based on the success of the WebPad, National formed the Information Appliance Division in 1998; this division was sold to AMD in 2003. Other businesses dealing in such products as digital wireless chipsets, image sensors and PC I/O chipsets were closed down or sold off as National reinvented itself as a high-performance analog semiconductor company. Peter Sprague, Pierre Lamond and Charlie Sporck worked hand-in-hand, with the support of the board of directors, to transform the company into a multinational and world-class semiconductor concern.
After becoming CEO, Sporck started a historic price war among semiconductor companies, which trimmed the number of competitors in the field; among the casualties to exit the semiconductor business were General Electric and Westinghouse. Cost control, overhead reduction and a focus on profits implemented by Sporck were the key elements in National surviving the price war and subsequently, in 1981, becoming the first semiconductor company to reach the US$1 billion annual sales mark. However, the foundation that made National successful was its expertise in analog electronics, TTL and MOSFET integrated circuit technologies. As they had while employed at Fairchild, Sporck and Lamond directed National Semiconductor towards the growing industrial and commercial markets and began to rely less on military and aerospace contracts. Those decisions, coupled with inflationary growth in the use of computers, provided the market for the expansion of National. Meanwhile, sources of funds associated with Sprague, coupled with creative structuring of cash-flow buffering due to Sporck and Lamond, provided the financing required for that expansion.
Lamond and Sporck had managed to attract and extract substantial funds to finance the expansion. Among Sporck's cost control efforts was his offshore outsourcing of labour. National Semiconductor was among the pioneers in the semicon
Motorola, Inc. was an American multinational telecommunications company founded on September 25, 1928, and based in Schaumburg, Illinois. After having lost $4.3 billion from 2007 to 2009, the company was divided into two independent public companies, Motorola Mobility and Motorola Solutions, on January 4, 2011. Motorola Solutions is considered to be the direct successor to Motorola, as the reorganization was structured with Motorola Mobility being spun off. Motorola Mobility was sold to Google in 2012 and acquired by Lenovo in 2014. Motorola designed and sold wireless network equipment such as cellular transmission base stations and signal amplifiers. Motorola's home and broadcast network products included set-top boxes, digital video recorders, and network equipment used to enable video broadcasting, computer telephony and high-definition television. Its products for business and government customers consisted of wireless voice and broadband systems, including public safety communications systems like Astro and Dimetra. These businesses are now part of Motorola Solutions.
Google sold Motorola Home to the Arris Group in December 2012 for US$2.35 billion. Motorola's wireless telephone handset division was a pioneer in cellular telephones. Known as the Personal Communication Sector prior to 2004, it pioneered the "mobile phone" with the DynaTAC, the "flip phone" with the MicroTAC, and the "clam phone" with the StarTAC in the mid-1990s. It staged a resurgence by the mid-2000s with the Razr, but lost market share in the second half of that decade. It later focused on smartphones using Google's open-source Android mobile operating system; the first phone to use the newest version of Google's open-source OS, Android 2.0, was released on November 2, 2009, as the Motorola Droid. The handset division was later spun off into the independent Motorola Mobility. On May 22, 2012, Google CEO Larry Page announced that Google had closed its deal to acquire Motorola Mobility. On January 29, 2014, Page announced that, pending closure of the deal, Motorola Mobility would be acquired by Chinese technology company Lenovo for US$2.91 billion.
On October 30, 2014, Lenovo finalized its purchase of Motorola Mobility from Google. Motorola started in Chicago, Illinois, as Galvin Manufacturing Corporation in 1928, when brothers Paul V. and Joseph E. Galvin purchased the bankrupt Stewart Battery Company's battery-eliminator plans and manufacturing equipment at auction for $750. Galvin Manufacturing Corporation set up shop in a small section of a rented building; the company had $565 in working capital and five employees, and the first week's payroll was $63. The company's first products were battery eliminators, devices that enabled battery-powered radios to operate on household electricity. Due to advances in radio technology, battery eliminators soon became obsolete. Paul Galvin learned that some radio technicians were installing sets in cars, and challenged his engineers to design an inexpensive car radio that could be installed in most vehicles. His team was successful, and Galvin was able to demonstrate a working model of the radio at the June 1930 Radio Manufacturers Association convention in Atlantic City, New Jersey.
He brought home enough orders to keep the company in business. Paul Galvin wanted a brand name for Galvin Manufacturing Corporation's new car radio, and created the name "Motorola" by linking "motor" with "ola", a popular ending for many companies at the time, e.g. Moviola and Crayola. The company sold its first Motorola-branded radio on June 23, 1930, to Herbert C. Wall of Fort Wayne, for $30; Wall went on to become one of the first Motorola distributors in the country. The Motorola brand name became so well known that Galvin Manufacturing Corporation later changed its name to Motorola, Inc. Galvin Manufacturing Corporation began selling Motorola car-radio receivers to police departments and municipalities in November 1930; the company's first public safety customers, using one-way radio communication, included the Village of River Forest, the Village of Bellwood Police Department, the City of Evanston Police, the Illinois State Highway Police and the Cook County Police. The company later built its research and development program with Dan Noble, a pioneer in FM radio and semiconductor technologies, who joined the company as director of research.
The company produced the hand-held AM SCR-536 radio during World War II, which was vital to Allied communication. Motorola ranked 94th among United States corporations in the value of World War II military production contracts. Motorola went public in 1943 and became Motorola, Inc. in 1947. At that time Motorola's main business was selling televisions and radios. In October 1946, Motorola communications equipment carried the first calls on Illinois Bell telephone company's new car radiotelephone service in Chicago. The company began making televisions in 1947, starting with the model VT-71 with a 7-inch cathode ray tube. In 1952, Motorola opened its first international subsidiary, in Toronto, Canada, to produce radios and televisions. In 1953, the company established the Motorola Foundation to support leading universities in the United States. In 1955, some years after Motorola started its research and development laboratory in Phoenix, Arizona, to research new solid-state technology, Motorola introduced the world's first commercial high-power germanium-based transistor.
Very Large Scale Integration
Very-large-scale integration is the process of creating an integrated circuit by combining millions of transistors or devices into a single chip. VLSI began in the 1970s, when complex semiconductor and communication technologies were being developed; the microprocessor is a VLSI device. Before the introduction of VLSI technology, most ICs had a limited set of functions they could perform. An electronic circuit might consist of ROM, RAM and other glue logic; VLSI lets IC designers add all of these into one chip. The history of the transistor dates to the 1920s, when several inventors attempted devices that were intended to control current in solid-state diodes and convert them into triodes. Success came after World War II, when the use of silicon and germanium crystals as radar detectors led to improvements in fabrication and theory, and scientists who had worked on radar returned to solid-state device development. With the invention of the transistor at Bell Labs in 1947, the field of electronics shifted from vacuum tubes to solid-state devices.
With the small transistor at their hands, electrical engineers of the 1950s saw the possibilities of constructing far more advanced circuits. However, as the complexity of circuits grew, problems arose. One problem was the size of the circuit: a complex circuit like a computer depended on speed, and if the components were large, the wires interconnecting them had to be long; the electric signals took time to travel through the circuit, thus slowing the computer. The invention of the integrated circuit by Jack Kilby and Robert Noyce solved this problem by making all the components and the chip out of the same block of semiconductor material; the circuits could be made smaller, and the manufacturing process could be automated. This led to the idea of integrating all components on a single-crystal silicon wafer, which led to small-scale integration in the early 1960s, medium-scale integration in the late 1960s, and large-scale integration as well as VLSI in the 1970s and 1980s, with tens of thousands of transistors on a single chip. The first semiconductor chips held two transistors each.
Subsequent advances added more transistors and, as a consequence, more individual functions or systems were integrated over time. The first integrated circuits held only a few devices, perhaps as many as ten diodes, transistors and capacitors, making it possible to fabricate one or more logic gates on a single device. Now known retrospectively as small-scale integration, improvements in technique led to devices with hundreds of logic gates, known as medium-scale integration. Further improvements led to large-scale integration, i.e. systems with at least a thousand logic gates. Current technology has moved far past this mark, and today's microprocessors have many millions of gates and billions of individual transistors. At one time, there was an effort to name and calibrate various levels of large-scale integration above VLSI; terms like ultra-large-scale integration were used, but the huge number of gates and transistors available on common devices has rendered such fine distinctions moot, and terms suggesting greater-than-VLSI levels of integration are no longer in widespread use.
In 2008, billion-transistor processors became commercially available. This became more commonplace as semiconductor fabrication advanced from the then-current generation of 65 nm processes. Current designs, unlike the earliest devices, use extensive design automation and automated logic synthesis to lay out the transistors, enabling higher levels of complexity in the resulting logic functionality. Certain high-performance logic blocks, like the SRAM cell, are still designed by hand to ensure the highest efficiency. Structured VLSI design is a modular methodology originated by Carver Mead and Lynn Conway for saving microchip area by minimizing the interconnect-fabric area; this is obtained by a repetitive arrangement of rectangular macro blocks which can be interconnected using wiring by abutment. An example is partitioning the layout of an adder into a row of equal bit-slice cells. In complex designs this structuring may be achieved by hierarchical nesting. Structured VLSI design had been popular in the early 1980s, but lost its popularity with the advent of placement and routing tools, which waste a great deal of area on routing; this was tolerated because of the progress of Moore's Law.
When introducing the hardware description language KARL in the mid-1970s, Reiner Hartenstein coined the term "structured VLSI design", echoing Edsger Dijkstra's structured programming approach of procedure nesting to avoid chaotic, spaghetti-structured programs. As microprocessors become more complex due to technology scaling, microprocessor designers have encountered several challenges which force them to think beyond the design plane and look ahead to post-silicon: Process variation – as photolithography techniques get closer to the fundamental laws of optics, achieving high accuracy in doping concentrations and etched wires is becoming more difficult and prone to errors due to variation. Designers now must simulate across multiple fabrication process corners before a chip is certified ready for production, or use system-level techniques for dealing with the effects of variation. Stricter design rules – due to lithography and etch issues with scaling, design rules for layout have become increasingly stringent.
Designers must keep in mind an increasing list of rules when laying out custom circuits. The overhead for custom design is now reaching a tipping point, with many design houses opting to switch to electronic design automation tools to automate their design process. Timing/design clo
Conexant Systems, Inc. was an American fabless semiconductor and software company. It provided products for voice and audio processing and modems. The company began as a division of Rockwell International before being spun off as a public company; Conexant itself later spun off several business units, creating independent public companies which included Skyworks Solutions and Mindspeed Technologies. The company was acquired by Synaptics, Inc. in July 2017. In 1996, Rockwell International Corporation incorporated its semiconductor division as Rockwell Semiconductor Systems, Inc. On January 4, 1999, Rockwell spun off the division as Conexant Systems, Inc., a public company listed on the NASDAQ under the symbol CNXT. At that time, Conexant became the world's largest independent, publicly traded semiconductor company focused on communications electronics. Dwight W. Decker was the first chairman of its board of directors; the company was based in Newport Beach, California. In the early 2000s, Conexant spun off several standalone technology businesses to create public companies. In March 2002, Conexant entered into a joint venture agreement with The Carlyle Group to share ownership of its wafer fabrication plant, called Jazz Semiconductor.
In June 2002, Conexant spun off its wireless communications division, which merged following the spinoff with Massachusetts-based chip manufacturer Alpha Industries Inc. to form the publicly held Skyworks Solutions Inc. In June 2003, Conexant spun off its Internet infrastructure business to create the publicly held company Mindspeed Technologies Inc.; Mindspeed was later acquired by Lowell, MA-based M/A-COM Technology Solutions. In 2004, Conexant merged with Red Bank, New Jersey semiconductor company GlobespanVirata, Inc., with Conexant as the surviving corporation; GlobespanVirata was subsequently renamed Conexant, Inc. In April 2008, Conexant announced the sale of its broadband media processing business, which provided products for satellite, cable and IPTV applications, to Dutch semiconductor manufacturer NXP Semiconductors NV. In September 2008, Jazz Semiconductor was sold to Israel-based Tower Semiconductor Ltd and became known as TowerJazz. In August 2009, Conexant sold its broadband access product line to Fremont, CA semiconductor company Ikanos Communications.
In February 2011, an agreement was announced for San Francisco investment firm Golden Gate Capital to acquire all of the outstanding shares of Conexant at a price of $2.40 per share and take the company private. In February 2013, citing the burden of servicing debt related to multiple corporate acquisitions in the late 1990s, as well as the loss of revenue from the bankruptcy of key customer Eastman Kodak, Conexant filed for Chapter 11 protection in the U.S. Bankruptcy Court for the District of Delaware. As part of the bankruptcy agreement, the company agreed on a restructuring plan with its owners and its sole secured lender, QP SFM Capital Holdings Ltd. The reorganized company emerged from bankruptcy in July 2013. As part of the operational restructuring, the company moved its headquarters from Newport Beach to nearby Irvine and focused on a narrower product portfolio, consisting of far-field voice input processing devices, video surveillance, and printer systems-on-a-chip. Since 2013, Conexant's silicon and software solutions for voice processing have been instrumental in the CE industry's proliferation of voice-enabled devices.
The company's AudioSmart brand of voice input processors and embedded far-field processing software has been adopted by CE device manufacturers in numerous products, ranging from artificially intelligent digital assistant devices and smart speakers to voice-enabled televisions and personal robots. In February 2016, it was announced that Korean electronics company LG Electronics would integrate Conexant's CX2092x far-field voice input processor system-on-chip into two of its smart home products: a set-top box and an IoT hub for controlling home electronic devices. In March 2016, Conexant announced that its AudioSmart software was being integrated into Qualcomm's Hexagon digital signal processor family, a major component of the Qualcomm Snapdragon processors contained in over 1 billion smart devices. In December 2016, Conexant and Amazon co-announced the AudioSmart 2-Mic Development Kit for Amazon AVS, a commercial-grade reference solution that streamlines the design and implementation of audio front-end systems.
Based on the Conexant AudioSmart CX20921 Voice Input Processor, the dual-microphone board was designed to reduce time-to-market for new third-party voice-enabled Alexa devices. On 11 May 2017, security researchers reported that Conexant audio drivers, preinstalled on many laptops sold by HP, included keylogging functionality: the driver recorded every keystroke typed by the user and stored it in an unencrypted file on the user's computer. On July 26, 2017, Synaptics completed its acquisition of Conexant Systems, LLC. Conexant has two main product families: the AudioSmart brand of audio processors and the ImagingSmart brand of image processors and modems. AudioSmart is a line of analog-to-digital converters, codecs, USB digital signal processor codecs, voice/speech processors, and software that improves how audio signals are processed for electronic audio equipment. AD converters – Conexant's analog-to-digital converters are used for far-field voice/speech capture applications.
They convert analog signals to digital in order to enhance the signal before transmitting it to third-party speech recognition products. The technology is used in voice-enabled consumer products. A low-power version with a standby mode and a fast wake-up mode is used for battery-powered devices. Codecs – Conexant's codecs encode and decode digital signals