A capacitance meter is a piece of electronic test equipment used to measure the capacitance of discrete capacitors. Depending on the sophistication of the meter, it may display the capacitance only, or it may also measure a number of other parameters such as leakage, equivalent series resistance and inductance. For most purposes and in most cases the capacitor must be disconnected from the circuit. Some checks can be made without a specialised instrument, particularly on aluminium electrolytic capacitors, which tend to have high capacitance and to be prone to leakage. A multimeter in a resistance range can detect a short-circuited capacitor or one with high leakage. A crude idea of the capacitance can be obtained with an analog multimeter in a high resistance range by observing the kick of the needle when the capacitor is first connected; the amplitude of the kick is an indication of capacitance. Interpreting the result requires some experience, or comparison with a known-good capacitor, and depends upon the particular meter and range used. Many digital multimeters (DVMs) have a capacitance-measuring function.
These operate by charging and discharging the capacitor under test with a known current and measuring the rate of rise of the resulting voltage; the slower the rate of rise, the larger the capacitance. DVMs can usually measure capacitance from nanofarads to a few hundred microfarads, though wider ranges are not unusual. It is also possible to measure capacitance by passing a known high-frequency alternating current through the device under test and measuring the resulting voltage across it. When troubleshooting circuit problems, some faults are intermittent or only show up with the working voltage applied, and are not revealed by measurements with equipment, however sophisticated, that uses low test voltages; such faults may instead be revealed by observing their effect on circuit operation. In difficult cases, routine replacement of capacitors is often easier than arranging measurements of all relevant parameters under working conditions. Some more specialised instruments measure capacitance over a wide range using the techniques described above and can also measure other parameters.
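As a sketch of the charging method: with a known constant current I, the capacitance follows from C = I·Δt/ΔV, where ΔV is the measured voltage rise over time Δt. The values below are hypothetical, chosen only for illustration:

```python
# Capacitance from constant-current charging: C = I * dt / dV.
# All values here are hypothetical, for illustration only.
I = 1e-6     # charging current: 1 uA
dt = 0.047   # time taken for the voltage to rise by dV: 47 ms
dV = 1.0     # measured voltage rise: 1 V

C = I * dt / dV
print(f"C = {C * 1e9:.1f} nF")   # -> C = 47.0 nF
```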
Low stray and parasitic capacitance can also be measured. Leakage current is measured by applying a direct voltage and measuring the current in the normal way. More sophisticated instruments use other techniques, such as inserting the capacitor under test into a bridge circuit: by varying the values of the other legs in the bridge, the value of the unknown capacitor is determined. This indirect method of measuring capacitance ensures greater precision, and the bridge can also measure series resistance and inductance. Through the use of Kelvin connections and other careful design techniques, these instruments can measure capacitors over a range from picofarads to farads. Combined LCR meters that can measure inductance, capacitance and resistance are available. Bridge circuits do not themselves measure leakage current, but a DC bias voltage can be applied and the leakage measured directly. Modern bridge instruments include a digital display and, where relevant, some sort of go/no-go testing to allow simple automated use in a production environment.
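As an illustration of the bridge principle, consider a De Sauty-style capacitance bridge (one common arrangement; instrument details vary, and swapping the arms inverts the ratio), where at balance the unknown follows from the ratio-arm resistances. The values below are hypothetical:

```python
# De Sauty-style bridge at balance (one arm arrangement; swapping the
# arms inverts the ratio): Cx = Cs * R1 / R2.  Hypothetical values:
Cs = 100e-9    # known standard capacitor: 100 nF
R1 = 22_000    # ratio-arm resistances at the balance point (ohms)
R2 = 10_000

Cx = Cs * R1 / R2
print(f"Cx = {Cx * 1e9:.0f} nF")   # -> Cx = 220 nF
```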
As with many modern instruments, bridges can be interfaced to computers and other equipment to export readings and allow external control.
Peak programme meter
A peak programme meter (PPM) is an instrument used in professional audio that indicates the level of an audio signal. Different kinds of PPM fall into broad categories. A true peak programme meter shows the peak level of the waveform. A quasi peak programme meter only shows the true level of a peak if it exceeds a certain duration, typically a few milliseconds; on peaks of shorter duration it indicates less than the true peak level, and the extent of the shortfall is determined by the 'integration time'. A sample peak programme meter is a PPM for digital audio that responds to peak sample values, which may be lower than the true peak of the reconstructed waveform; it may have either a 'true' or a 'quasi' integration characteristic. An over-sampling peak programme meter is a sample PPM in which the signal has first been over-sampled by a factor of four, to alleviate this problem with a basic sample PPM, as the sketch below illustrates. In professional usage, where consistent level measurements are needed across an industry, audio level meters comply with a detailed formal standard; this ensures that readings from different meters agree. The principal standard for PPMs is IEC 60268-10.
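To show why a basic sample PPM can under-read, here is a minimal sketch (NumPy; the test signal is contrived so that every sample falls between the waveform's true peaks) of 4x over-sampling recovering an inter-sample peak:

```python
import numpy as np

fs, n = 48_000, 480
t = np.arange(n) / fs
# A 12 kHz tone phased so every sample lands between the waveform peaks:
x = 0.99 * np.sin(2 * np.pi * 12_000 * t + np.pi / 4)

# FFT-based 4x over-sampling (band-limited interpolation):
up = np.fft.irfft(np.fft.rfft(x), 4 * n) * 4

print(f"peak sample value:    {np.abs(x).max():.3f}")   # ~0.700, under-reads
print(f"4x over-sampled peak: {np.abs(up).max():.3f}")  # ~0.990, the true peak
```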
IEC 60268-10 describes two different quasi-PPM designs that have their roots in meters developed in the 1930s for the AM radio broadcasting networks of Germany and the United Kingdom. The term peak programme meter usually refers to these IEC-specified types and similar designs. Though designed for monitoring analogue audio signals, these PPMs are now also used with digital audio. PPMs do not provide effective loudness monitoring. Newer types of meter do, and there is now a push within the broadcasting industry to move away from the traditional level meters described in this article to two new types: loudness meters based on EBU Tech. 3341 and over-sampling true peak PPMs. The former would be used to standardise broadcast loudness to −23 LUFS and the latter to prevent digital clipping. In common with many other types of audio level meter, PPMs originally used electro-mechanical displays; these took the form of moving-coil panel meters or mirror galvanometers with demanding 'ballistics': the key requirement being that the indicated level should rise as quickly as possible with negligible overshoot.
These displays require active driver electronics. Nowadays PPMs are more often implemented as 'bargraph' incremental displays using solid-state illuminated segments in a vertical or horizontal array. For these, IEC 60268-10 requires a minimum of 100 segments and a resolution better than 0.5 dB at the higher levels. Many operators nevertheless prefer the moving-coil type of display, in which a needle moves in an arc, because an angular movement is easier for the human eye to monitor than the linear movement of a bargraph. PPMs can also be implemented in software, either in a general-purpose computer or in a dedicated device that inserts a PPM image into a picture signal for display on a picture monitor. A variety of terms such as 'line-up level' and 'operating level' exist, and their meaning may vary from place to place. In an attempt to bring clarity to level definitions in the context of programme transmission from one country to another, where different technical practices may apply, ITU-R Rec. BS.645 defined three reference levels: Measurement Level, Alignment Level and Permitted Maximum Level.
This document shows the reading corresponding to these levels for several types of meter. Alignment Level is the level of a steady sine-wave alignment tone. Permitted Maximum Level refers to the maximum meter indication that operators should aim for on programme material such as speech and music, not tone. PPMs traditionally use white-on-black displays to minimise eyestrain during extended periods of use. PPMs are calibrated in one of several ways: in decibels relative to Alignment Level, in decibels relative to Permitted Maximum Level, in decibels relative to 0 dBu, in decibels relative to 0 dBFS, or in simple numerical marks that can be correlated with any of the above. Whichever scheme is used, there is a scale mark corresponding to Alignment Level. Most PPMs have a logarithmic scale, i.e. one linear in decibels, to provide useful indications over a wide dynamic range. Quasi-PPMs have a short integration time, so they only fully register peaks longer than a few milliseconds in duration. In the original context of AM radio broadcasting in the 1930s, overloads due to shorter peaks were considered unimportant on the grounds that the human ear could not detect distortion due to momentary clipping.
Ignoring momentary clipping made it possible to increase average modulation levels. In modern digital audio practice, where quality standards are much higher than those of AM radio in the 1930s, clipping of even short peaks is regarded as something to avoid. On typical, real-world audio signals, a quasi-PPM under-reads the true peak by 6 to 8 dB. Quasi-PPMs are still used in the digital age because of their usefulness in achieving programme balance; overloads are avoided by allowing around 9 dB of headroom when controlling digital levels with a quasi-PPM. The extent to which quasi-PPMs show less than the true amplitude of momentary peaks is determined by the 'integration time'. This is defined by IEC 60268-10 as "...the duration of a burst of sinusoidal voltage of 5000 Hz at reference level that results in an indication 2 dB below reference indication." The standard contains tables showing the difference between indicated and true peaks for tone bursts of other durations; the longer the integration time, the greater the difference between the true and indicated peaks.
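A minimal software sketch of this quasi-peak behaviour follows. It uses a simple one-pole attack/release envelope follower; the time constants are illustrative and the exact IEC 60268-10 dynamics are specified differently (via the tone-burst response tables mentioned above), so this is a qualitative model, not the standard:

```python
import numpy as np

def quasi_peak(signal, fs, attack_s=0.010, release_s=2.8):
    """Crude quasi-peak envelope follower (illustrative only)."""
    a_att = np.exp(-1.0 / (fs * attack_s))    # fast attack coefficient
    a_rel = np.exp(-1.0 / (fs * release_s))   # slow release coefficient
    env, out = 0.0, np.empty(len(signal))
    for i, x in enumerate(np.abs(signal)):
        a = a_att if x > env else a_rel
        env = a * env + (1.0 - a) * x
        out[i] = env
    return out

fs = 48_000
t = np.arange(fs // 2) / fs                          # half a second of signal
burst = np.sin(2 * np.pi * 5000 * t) * (t < 0.005)   # 5 ms burst of 5 kHz tone
reading = quasi_peak(burst, fs).max()
print(f"{20 * np.log10(reading):.1f} dB")  # below 0 dB: under-reads the peak
```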
A spectrum analyzer measures the magnitude of an input signal versus frequency within the full frequency range of the instrument. The primary use is to measure the power of the spectrum of known and unknown signals. The input signal that most common spectrum analyzers measure is electrical, although other signals can be examined via an appropriate transducer; optical spectrum analyzers also exist, which use direct optical techniques such as a monochromator to make measurements. By analyzing the spectra of electrical signals, dominant frequency, distortion, harmonics and other spectral components of a signal can be observed that are not easily detectable in time-domain waveforms; these parameters are useful in the characterization of electronic devices such as wireless transmitters. The display of a spectrum analyzer has frequency on the horizontal axis and amplitude on the vertical axis. To the casual observer, a spectrum analyzer looks like an oscilloscope and, in fact, some lab instruments can function either as an oscilloscope or a spectrum analyzer. The first spectrum analyzers, in the 1960s, were swept-tuned instruments.
Following the discovery of the fast Fourier transform in 1965, the first FFT-based analyzers were introduced in 1967. Today, there are three basic types of analyzer: the swept-tuned spectrum analyzer, the vector signal analyzer and the real-time spectrum analyzer. Spectrum analyzer types are distinguished by the methods used to obtain the spectrum of a signal: there are swept-tuned and fast Fourier transform (FFT) based spectrum analyzers. A swept-tuned analyzer uses a superheterodyne receiver to down-convert a portion of the input signal spectrum to the center frequency of a narrow band-pass filter, whose instantaneous output power is recorded or displayed as a function of time. By sweeping the receiver's center frequency through a range of frequencies, the output becomes a function of frequency; but while the sweep is centered on any particular frequency, the analyzer may miss short-duration events at other frequencies. An FFT analyzer, by contrast, computes a time-sequence of periodograms; FFT refers to the particular mathematical algorithm used in the process.
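A sketch of one such periodogram using NumPy (the sample rate and test tone are illustrative; a real analyzer repeats this on successive blocks of samples to build the time sequence):

```python
import numpy as np

fs = 1_000_000                  # sample rate after down-conversion (illustrative)
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 150e3 * t) + 0.01 * np.random.randn(t.size)

# One periodogram: window, FFT, magnitude-squared, power scaling.
win = np.hanning(x.size)
X = np.fft.rfft(x * win)
psd = np.abs(X) ** 2 / (fs * np.sum(win ** 2))
freqs = np.fft.rfftfreq(x.size, d=1 / fs)
print(f"{freqs[np.argmax(psd)]:.0f} Hz")   # ~150000 Hz: the dominant component
```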
In an FFT analyzer, the FFT is used in conjunction with a receiver and an analog-to-digital converter. As above, the receiver reduces the center frequency of a portion of the input signal spectrum, but that portion is not swept; the purpose of the receiver is simply to reduce the required sampling rate. With a sufficiently low sample rate, FFT analyzers can process all of the samples and are therefore able to avoid missing short-duration events.

Spectrum analyzers tend to fall into four form factors: benchtop, portable, handheld and networked. The benchtop form factor is useful for applications where the spectrum analyzer can be plugged into AC power, typically in a lab environment or a production/manufacturing area. Benchtop spectrum analyzers have historically offered better performance and specifications than the portable or handheld form factors. They often have multiple fans to dissipate the heat produced by the processor and, due to their architecture, can weigh more than 30 pounds. Some benchtop spectrum analyzers offer optional battery packs, allowing them to be used away from AC power.
An analyzer of this kind is referred to as a "portable" spectrum analyzer. The portable form factor is useful for applications where the spectrum analyzer needs to be taken outside to make measurements, or carried while in use. Attributes that contribute to a useful portable spectrum analyzer include optional battery-powered operation, a display that can be read in bright sunlight, darkness or dusty conditions, and light weight. The handheld form factor is useful for applications where the spectrum analyzer needs to be very light and small; handheld analyzers offer limited capability relative to larger systems. Attributes that contribute to a useful handheld spectrum analyzer include very low power consumption, battery-powered operation in the field, small size and light weight. The networked form factor does not include a display; these devices are designed to enable a new class of geographically distributed spectrum monitoring and analysis applications.
The key attribute is the ability to connect the analyzer to a network and monitor such devices across that network. While many spectrum analyzers have an Ethernet port for control, they typically lack efficient data-transfer mechanisms and are too bulky or expensive to be deployed in such a distributed manner. Key applications for such devices include RF intrusion-detection systems for secure facilities where wireless signaling is prohibited; cellular operators also use such analyzers to remotely monitor interference in licensed spectral bands. The distributed nature of such devices enables geolocation of transmitters, spectrum monitoring for dynamic spectrum access and many other applications. Key attributes of such devices include network-efficient data transfer, low power consumption, the ability to synchronize data captures across a network of analyzers, and low cost to enable mass deployment.

As discussed above, a swept-tuned spectrum analyzer down-converts a portion of the input signal spectrum to the center frequency of a band-pass filter by sweeping a voltage-controlled oscillator through a range of frequencies, enabling the consideration of the full frequency range of the instrument.
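A toy numerical model of that swept-tuned process (all parameters illustrative; a real instrument does this in analogue hardware with a swept LO and a resolution-bandwidth filter rather than in software):

```python
import numpy as np

fs = 1_000_000
t = np.arange(20_000) / fs
# Input: a strong 120 kHz tone plus a weaker 300 kHz tone.
x = np.sin(2 * np.pi * 120e3 * t) + 0.2 * np.sin(2 * np.pi * 300e3 * t)

def power_at(f_lo, bw=5e3):
    # Down-convert with the LO, then crudely low-pass (boxcar ~1/bw wide):
    bb = x * np.exp(-2j * np.pi * f_lo * t)
    w = int(fs / bw)
    filt = np.convolve(bb, np.ones(w) / w, mode="valid")
    return np.mean(np.abs(filt) ** 2)

lo_steps = np.arange(0.0, 500e3, 10e3)               # the "sweep"
trace = [power_at(f) for f in lo_steps]
print(f"{lo_steps[int(np.argmax(trace))]:.0f} Hz")   # -> 120000 Hz
```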
A time-domain reflectometer (TDR) is an electronic instrument that uses time-domain reflectometry to characterize and locate faults in metallic cables. It can also be used to locate discontinuities in a connector, printed circuit board, or any other electrical path; the equivalent device for optical fiber is an optical time-domain reflectometer. A TDR measures reflections along a conductor. To measure them, the TDR transmits an incident signal onto the conductor and listens for its reflections. If the conductor is of uniform impedance and is properly terminated, there will be no reflections and the remaining incident signal will be absorbed at the far end by the termination. If, instead, there are impedance variations, some of the incident signal will be reflected back to the source. A TDR is thus similar in principle to radar. The reflections have the same shape as the incident signal, but their sign and magnitude depend on the change in impedance: if there is a step increase in impedance, the reflection has the same sign as the incident signal.
The magnitude of the reflection depends not only on the size of the impedance change but also on the loss in the conductor. The reflections are measured at the output/input of the TDR and displayed or plotted as a function of time. Alternatively, the display can be read as a function of cable length, because the speed of signal propagation is roughly constant for a given transmission medium. Because of its sensitivity to impedance variations, a TDR may be used to verify cable impedance characteristics, locate connectors and their associated losses, and estimate cable lengths. TDRs use different incident signals. Some TDRs transmit a pulse along the conductor; narrow pulses can offer good resolution, but they have high-frequency components that are attenuated in long cables. The shape of the pulse is often a half-cycle sinusoid, and for longer cables wider pulse widths are used. Other TDRs use fast rise-time steps: instead of looking for the reflection of a complete pulse, the instrument is concerned with the rising edge, which can be very fast.
A 1970s-technology TDR used steps with a rise time of 25 ps. Still other TDRs detect reflections using correlation techniques; see spread-spectrum time-domain reflectometry. One set of example traces was produced by a time-domain reflectometer made from common lab equipment connected to 100 feet of coaxial cable with a characteristic impedance of 50 ohms; the propagation velocity of this cable is about 66% of the speed of light in a vacuum. Another set of traces was produced by a commercial TDR using a step waveform with a 25 ps rise time, a sampling head with a 35 ps rise time, and an 18-inch SMA cable; the far end of the SMA cable was connected to various adapters, and it takes about 3 ns for the pulse to travel down the cable and reach the sampling head. A second reflection can be seen in some traces. Consider the case where the far end of the cable is shorted: when the rising edge of the pulse is launched down the cable, the voltage at the launching point "steps up" to a given value and the pulse begins propagating down the cable towards the short.
When the pulse hits the short, no energy is absorbed at the far end; instead, an opposing pulse reflects back from the short towards the launching end. It is only when this opposing reflection reaches the launch point that the voltage there abruptly drops back to zero, signalling that there is a short at the end of the cable. That is, the TDR has no indication of the short until its emitted pulse has travelled down the cable and the echo has returned, both at the cable's propagation speed; it is only after this round-trip delay that the short can be perceived by the TDR. Assuming one knows the signal propagation speed in the particular cable under test, the distance to the short can be measured in this way, as in the sketch below. A similar effect occurs if the far end of the cable is an open circuit: in this case, the reflection from the far end is polarized identically with the original pulse and adds to it rather than cancelling it out, so after a round-trip delay the voltage at the TDR abruptly jumps to twice the originally applied voltage.
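A worked sketch of that round-trip calculation (the delay value is hypothetical, chosen to match the 66%-of-c coaxial example above):

```python
# Distance to a discontinuity from the TDR round-trip delay:
# distance = velocity_factor * c * delay / 2.
c = 299_792_458.0      # speed of light in vacuum, m/s
vf = 0.66              # cable velocity factor (66% of c)
delay = 309e-9         # measured pulse-to-echo delay, s (hypothetical)

distance = vf * c * delay / 2
print(f"{distance:.1f} m")   # -> ~30.6 m to the short (about 100 ft)
```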
Note that a theoretically perfect termination at the far end of the cable would absorb the applied pulse without causing any reflection, in which case it would be impossible to determine the actual length of the cable. Luckily, perfect terminations are rare, and some small reflection is nearly always present. The magnitude of the reflection is referred to as the reflection coefficient, ρ. The coefficient ranges from +1 to −1; a value of zero means the line is matched and no energy is reflected. The reflection coefficient is calculated as follows:

ρ = (Zt − Zo) / (Zt + Zo)

where Zo is defined as the characteristic impedance of the transmission medium and Zt is the impedance of the termination at the far end of the transmission line. Any discontinuity can be viewed as a termination impedance and substituted for Zt.
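A small sketch evaluating ρ for the cases discussed above (a 50-ohm line is assumed, as in the coaxial example):

```python
def reflection_coefficient(Zt, Zo=50.0):
    # rho = (Zt - Zo) / (Zt + Zo)
    return (Zt - Zo) / (Zt + Zo)

print(reflection_coefficient(0.0))    # short circuit: -1.0 (cancelling echo)
print(reflection_coefficient(1e12))   # near-open circuit: ~+1.0 (doubling echo)
print(reflection_coefficient(50.0))   # matched termination: 0.0 (no echo)
print(reflection_coefficient(75.0))   # 75-ohm load on a 50-ohm line: 0.2
```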
Serial Attached SCSI
In computing, Serial Attached SCSI (SAS) is a point-to-point serial protocol that moves data to and from computer storage devices such as hard drives and tape drives. SAS replaces the older Parallel SCSI bus technology and, like its predecessor, uses the standard SCSI command set. SAS offers optional compatibility with Serial ATA (SATA), versions 2 and later; this allows the connection of SATA drives to most SAS controllers. The reverse, connecting SAS drives to SATA backplanes, is not possible. The T10 technical committee of the International Committee for Information Technology Standards develops and maintains the SAS protocol. A typical Serial Attached SCSI system consists of the following basic components. An initiator: a device that originates device-service and task-management requests for processing by a target device and receives responses to those requests. Initiators may be provided as an on-board component of the motherboard or as an add-on host bus adapter. A target: a device containing logical units and target ports that receives device-service and task-management requests for processing and sends responses to those requests back to initiator devices.
A target device could be a disk-array system. A service delivery subsystem: the part of an I/O system that transmits information between an initiator and a target; cables connecting an initiator and target, with or without expanders and backplanes, constitute a service delivery subsystem. Expanders: devices that form part of a service delivery subsystem and facilitate communication between SAS devices; expanders facilitate the connection of multiple SAS end devices to a single initiator port. The SAS standard has gone through several generations: SAS-1 at 3.0 Gbit/s, introduced in 2004; SAS-2 at 6.0 Gbit/s, available since February 2009; SAS-3 at 12.0 Gbit/s, available since March 2013; and SAS-4 at 22.5 Gbit/s, called "24G", with the standard completed in 2017. A SAS domain is the SAS version of a SCSI domain: it consists of a set of SAS devices that communicate with one another by means of a service delivery subsystem. Each SAS port in a SAS domain has a SCSI port identifier that identifies the port uniquely within the SAS domain: the World Wide Name (WWN). It is assigned by the device manufacturer, like an Ethernet device's MAC address, and is similarly unique worldwide.
SAS devices use these port identifiers to address one another. In addition, every SAS device has a SCSI device name, which identifies the SAS device uniquely in the world; one rarely sees these device names because the port identifiers tend to identify the device sufficiently. For comparison, in parallel SCSI the SCSI ID serves as the port identifier and device name; in Fibre Channel, the port identifier is a WWPN and the device name is a WWNN. In SAS, both SCSI port identifiers and SCSI device names take the form of a SAS address, a 64-bit value in the NAA IEEE Registered format (a decoding sketch follows below). People sometimes refer to a SCSI port identifier as the SAS address of a device, out of confusion, and sometimes call a SAS address a World Wide Name or WWN, because it is essentially the same thing as a WWN in Fibre Channel. For a SAS expander device, the SCSI port identifier and SCSI device name are the same SAS address. The SAS "bus" operates point-to-point: each SAS device is connected by a dedicated link to the initiator, so if one initiator is connected to one target, there is no opportunity for contention.
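As an illustration of the NAA IEEE Registered layout mentioned above (type 5: a 4-bit NAA field, a 24-bit IEEE OUI, and a 36-bit vendor-specific identifier; the example address below is made up):

```python
# Decode a 64-bit SAS address in NAA IEEE Registered (type 5) format:
# bits [63:60] NAA = 5, [59:36] IEEE OUI, [35:0] vendor-specific ID.
addr = 0x5000C5001A2B3C4D          # hypothetical SAS address

naa = addr >> 60
oui = (addr >> 36) & 0xFF_FFFF
vendor_id = addr & 0xF_FFFF_FFFF
print(f"NAA={naa:X}  OUI={oui:06X}  vendor-specific={vendor_id:09X}")
# -> NAA=5  OUI=000C50  vendor-specific=01A2B3C4D
```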
SAS has no termination issues and does not require terminator packs like parallel SCSI, and it eliminates clock skew. SAS allows up to 65,535 devices through the use of expanders, while parallel SCSI has a limit of 8 or 16 devices on a single channel. SAS allows a higher transfer speed than most parallel SCSI standards and achieves these speeds on each initiator-target connection, hence giving higher throughput, whereas parallel SCSI shares the speed across the entire multidrop bus. SAS devices feature dual ports, allowing for redundant backplanes or multipath I/O. SAS controllers may connect to SATA devices, either directly using the native SATA protocol or through SAS expanders using the Serial ATA Tunneling Protocol (STP). Both SAS and parallel SCSI use the SCSI command set. There is little physical difference between SAS and SATA. The SAS protocol provides for multiple initiators in a SAS domain, while SATA has no analogous provision. Most SAS drives provide tagged command queuing, while most newer SATA drives provide native command queuing.
SATA uses a command set based on the parallel ATA command set, extended beyond that set to include features like native command queuing, hot-plugging and TRIM. SAS uses the SCSI command set, which includes a wider range of features such as error recovery and block reclamation. Basic ATA has commands only for direct-access storage; however, SCSI commands may be tunneled through ATAPI for devices such as CD/DVD drives. SAS hardware allows multipath I/O to devices. Per the specification, SATA 2.0 makes use of port multipliers to achieve port expansion, and some port-multiplier manufacturers have implemented multipath I/O using port-multiplier hardware. SATA is marketed as a general-purpose successor to parallel ATA and has become common in the consumer market, whereas the more expensive SAS targets critical server applications. SAS error recovery and error reporting use SCSI commands, which have more functionality than the ATA SMART commands used by SATA drives.
A frequency counter is an electronic instrument, or component of one, used for measuring frequency. Frequency counters measure the number of cycles of oscillation, or pulses per second, in a periodic electronic signal; such an instrument is sometimes referred to as a cymometer, particularly one of Chinese manufacture. Most frequency counters work by using a counter that accumulates the number of events occurring within a specific period of time. After a preset period known as the gate time, the value in the counter is transferred to a display and the counter is reset to zero. If the event being measured repeats itself with sufficient stability and its frequency is considerably lower than that of the clock oscillator being used, the resolution of the measurement can be greatly improved by instead measuring the time required for a whole number of cycles, rather than counting the number of whole cycles observed for a preset duration; this is often called reciprocal counting, sketched below. The internal oscillator that provides the time signals is called the timebase, and it must be calibrated accurately.
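A numerical sketch of the two approaches (all values hypothetical): direct counting accumulates input cycles over a fixed gate, while reciprocal counting times a whole number of input cycles against a fast timebase clock, giving far better resolution at low input frequencies.

```python
# Direct counting: accumulate input cycles over a fixed gate time.
gate_s = 1.0                    # gate time
counts = 1_000_003              # cycles seen during the gate (hypothetical)
print(counts / gate_s)          # 1000003.0 Hz, resolution +/- 1 count

# Reciprocal counting: time N whole input cycles with the timebase clock.
timebase_hz = 10_000_000        # 10 MHz reference oscillator
n_cycles = 100                  # whole input cycles measured
ticks = 999_997                 # timebase ticks elapsed (hypothetical)
period_s = ticks / timebase_hz / n_cycles
print(1.0 / period_s)           # ~1000.003 Hz from only a 0.1 s measurement
```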
If the event to be counted is already in electronic form, simple interfacing to the instrument is all that is required. More complex signals may need some conditioning to make them suitable for counting, and most general-purpose frequency counters include some form of amplifier and shaping circuitry at the input. DSP technology, sensitivity control and hysteresis are other techniques used to improve performance. Periodic events that are not inherently electronic in nature need to be converted using some form of transducer; for example, a mechanical event could be arranged to interrupt a light beam, and the counter made to count the resulting pulses. Frequency counters designed for radio frequencies are common and operate on the same principles as lower-frequency counters, although they have more range before they overflow. For very high frequencies, many designs use a high-speed prescaler to bring the signal frequency down to a point where normal digital circuitry can operate; the displays on such instruments take this division into account, so they still show the true input frequency.
Microwave frequency counters can measure frequencies up to about 56 GHz. Above these frequencies, the signal to be measured is combined in a mixer with the signal from a local oscillator, producing a signal at the difference frequency, which is low enough to be measured directly. The accuracy of a frequency counter depends strongly on the stability of its timebase. A timebase is delicate, like the hands of a watch, and can be disturbed by movement, interference, or drift due to age, meaning it might not "tick" correctly; this can make a frequency reading, when referenced to that timebase, seem higher or lower than the actual value. Timebases for instrumentation purposes are therefore generated by highly accurate circuits, typically a quartz crystal oscillator inside a sealed temperature-controlled chamber, known as an oven-controlled crystal oscillator or crystal oven. For higher-accuracy measurements, an external frequency reference tied to a high-stability oscillator, such as a GPS-disciplined rubidium oscillator, may be used. Where the frequency does not need to be known to such a high degree of accuracy, simpler oscillators can be used.
It is also possible to measure frequency using the same techniques in software in an embedded system; a central processing unit, for example, can be arranged to measure its own frequency of operation, provided it has some reference timebase to compare with. Accuracy is limited by the available resolution of the measurement: the resolution of a single count is proportional to the timebase oscillator frequency and the gate time. Improved resolution can be obtained by techniques such as oversampling and averaging. Additionally, accuracy can be degraded by jitter on the signal being measured; this error, too, can be reduced by oversampling and averaging. I/O interfaces allow the user to send information to the frequency counter and receive information from it; commonly used interfaces include RS-232, USB, GPIB and Ethernet. Besides sending measurement results, a counter can notify the user when user-defined measurement limits are exceeded. Common to many counters are the SCPI commands used to control them.
A newer development is built-in LAN-based control via Ethernet, complete with a graphical user interface; this allows one computer to control one or several instruments and eliminates the need to write SCPI commands.
Conventional PCI, often shortened to PCI, is a local computer bus for attaching hardware devices in a computer, and is part of the PCI Local Bus standard. The PCI bus supports the functions found on a processor bus, but in a standardized format independent of any particular processor's native bus. Devices connected to the PCI bus appear to a bus master to be connected directly to its own bus and are assigned addresses in the processor's address space. It is a parallel bus, synchronous to a single bus clock. Attached devices can take the form either of an integrated circuit fitted onto the motherboard itself or of an expansion card that fits into a slot. The PCI Local Bus was first implemented in IBM PC compatibles, where it displaced the combination of several slow ISA slots and one fast VESA Local Bus slot as the bus configuration, and it has subsequently been adopted for other computer types. Typical PCI cards used in PCs include network cards, sound cards, extra ports such as USB or serial, TV tuner cards and disk controllers.
PCI video cards replaced ISA and VESA cards until growing bandwidth requirements outgrew the capabilities of PCI; the preferred interface for video cards then became AGP, itself a superset of conventional PCI, before giving way to PCI Express. The first version of conventional PCI found in consumer desktop computers was a 32-bit bus using a 33 MHz bus clock and 5 V signalling, although the PCI 1.0 standard provided for a 64-bit variant as well; these cards have one locating notch. Version 2.0 of the PCI standard introduced 3.3 V slots, physically distinguished by a flipped physical connector to prevent accidental insertion of 5 V cards; universal cards, which can operate on either voltage, have two notches. Version 2.1 of the PCI standard introduced optional 66 MHz operation. A server-oriented variant of conventional PCI, called PCI-X, operated at frequencies up to 133 MHz for PCI-X 1.0 and up to 533 MHz for PCI-X 2.0. An internal connector for laptop cards, called Mini PCI, was introduced in version 2.2 of the PCI specification.
The PCI bus was also adopted for an external laptop connector standard, the CardBus. The first PCI specification was developed by Intel, but subsequent development of the standard became the responsibility of the PCI Special Interest Group (PCI-SIG). Conventional PCI and PCI-X are sometimes called Parallel PCI to distinguish them technologically from their more recent successor, PCI Express, which adopted a serial, lane-based architecture. Conventional PCI's heyday in the desktop computer market was 1995–2005; PCI and PCI-X have since become obsolete for most purposes, and many kinds of devices formerly available on PCI expansion cards are now integrated onto motherboards or available in USB and PCI Express versions. Work on PCI began at Intel's Architecture Development Lab c. 1990, where a team of Intel engineers defined the architecture and developed a proof-of-concept chipset and platform, partnering with teams in the company's desktop PC systems and core logic product organizations. PCI was put to use in servers, replacing MCA and EISA as the server expansion bus of choice.
In mainstream PCs, PCI was slower to replace the VESA Local Bus and did not gain significant market penetration until late 1994, in second-generation Pentium PCs. By 1996, VLB was all but extinct, and manufacturers had adopted PCI even for 486 computers. EISA continued to be used alongside PCI through 2000. Apple Computer adopted PCI for professional Power Macintosh computers in mid-1995 and for the consumer Performa product line in mid-1996. The 64-bit version of plain PCI remained rare in practice, although it was used, for example, by all G3 and G4 Power Macintosh computers. Later revisions of PCI added new features and performance improvements, including a 66 MHz 3.3 V standard, 133 MHz PCI-X, and the adaptation of PCI signaling to other form factors. Both PCI-X 1.0b and PCI-X 2.0 are backward compatible with some PCI standards. These revisions were used mainly on server hardware, while consumer PC hardware remained nearly all 32-bit, 33 MHz and 5 volt. The PCI-SIG introduced the serial PCI Express in c. 2004, and at the same time renamed PCI as Conventional PCI.
Since then, motherboard manufacturers have included progressively fewer Conventional PCI slots in favor of the new standard, and as of late 2013 many new motherboards do not provide conventional PCI slots at all. PCI provides separate memory and I/O port address spaces for the x86 processor family, 64 and 32 bits in size, respectively. Addresses in these address spaces are assigned by software. A third address space, called the PCI Configuration Space, uses a fixed addressing scheme and allows software to determine the amount of memory and I/O address space needed by each device; each device can request up to six areas of memory space or I/O port space via its configuration-space registers. In a typical system, the firmware queries all PCI buses at startup time to find out what devices are present and what system resources each needs; it then allocates the resources and tells each device what its allocation is, as sketched below. The PCI configuration space also contains a small amount of device-type information, which helps an operating system choose device drivers for the device, or at least to have a dialogue with a user about the system configuration.
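A sketch of how that sizing query works for one Base Address Register (the device's config-space behaviour is simulated here; real firmware would go through the platform's configuration read/write mechanism): software writes all-ones to the BAR, reads it back, and the bits that stayed zero reveal how much address space the device decodes.

```python
# Sketch of PCI BAR size probing (device behaviour simulated).
BAR_SIZE = 0x1000            # pretend the device decodes 4 KiB of memory space

def bar_read_after_writing_ones():
    # A memory BAR hard-wires its low address bits to zero; after writing
    # 0xFFFFFFFF, reading back returns the size mask plus low flag bits.
    return 0xFFFF_FFFF & ~(BAR_SIZE - 1)

val = bar_read_after_writing_ones()
is_io = val & 0x1                          # bit 0: 0 = memory BAR, 1 = I/O BAR
size = (~(val & ~0xF) + 1) & 0xFFFF_FFFF   # mask flag bits, two's complement
print(hex(size), "I/O" if is_io else "memory")   # -> 0x1000 memory
```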
Devices may have an on-board ROM containing executable code for x86 or PA-RISC processors, an Open Firmware driver, or an EFI driver.