PCI Express, officially abbreviated as PCIe (and sometimes confused with PCI Extended, which is officially abbreviated as PCI-X), is a computer expansion card and system bus interface format. It was designed as a much faster interface to replace the PCI, PCI-X, and AGP interfaces for computer expansion cards and graphics cards. The PCI Express (PCIe) physical connection (slot) is completely different from those of the older standard PCI slots or those for PCI Extended (PCI-X).
As with all computing standards, PCIe is a technology that receives further development and improvement. The current standard version in general use at the time of writing is PCIe 1.1; however, PCI-SIG announced the availability of the PCI Express Base 2.0 specification on 15 January 2007. PCIe 2.0 doubles the per-lane signalling rate from 2.5 Gbit/s to 5 Gbit/s, doubling the available bandwidth. PCIe 2.0 remains compatible with PCIe 1.1 both at the physical slot level and in software, so older cards will still work in machines fitted with the new version. Further information on PCIe 2.0 is given below.
PCIe is a flexible multi-lane serial interface format. That is, it uses multiple connections, each of which individually transmits its own serial stream of data, running side by side. This type of interfacing is sometimes referred to as channel bonding. PCIe 1.1 transfers data at 250 MB/s in each direction per lane. With a maximum of 32 lanes, PCIe allows for a total combined transfer rate of 8 GB/s in each direction. To put these figures into perspective, a single lane has nearly twice the data rate of normal PCI, a four-lane slot has a data rate comparable to the fastest version of PCI-X 1.0, and an eight-lane slot has a data rate comparable to the fastest version of AGP.
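These figures follow directly from the per-lane rate quoted above; a minimal calculation of per-direction link bandwidth by lane count:

```python
# Per-direction bandwidth of a PCIe 1.1 link as a function of lane count.
# Figures from the text: 250 MB/s per lane per direction, up to 32 lanes.
LANE_RATE_MB_S = 250

for lanes in (1, 4, 8, 16, 32):
    print(f"x{lanes}: {lanes * LANE_RATE_MB_S} MB/s per direction")
# x32 -> 8000 MB/s (8 GB/s) per direction, matching the figure above.
```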
Unlike preceding PC expansion interface standards, PCIe is both full-duplex and point-to-point. This means that while standard PCI-X (133 MHz, 64-bit) and PCIe x4 have the same data transfer rate, PCIe x4 will give better performance if multiple device pairs are communicating simultaneously or if communication within a single device pair is bidirectional.
While in development, PCI Express (PCIe) was referred to as Arapahoe or 3GIO, for 3rd Generation I/O.
The PCIe physical layer consists of a network of serial interconnects. A hub on the mainboard acts as a crossbar switch allowing point-to-point device interconnections to be rerouted on the fly. This dynamic point-to-point connection behavior leads to parallelism, since more than one pair of devices may communicate with each other at the same time. (In contrast, older PC interfaces had all devices permanently wired to the same bus; therefore, only one device could talk at a time.) This is similar to the difference between conversing over a telephone, where you are directly connected to one person at a time, and conversing in a meeting, where you must wait your turn to speak. The format also allows channel grouping, where multiple lanes are bonded to a single device pair in order to provide higher bandwidth.
The bonded serial format was chosen over a traditional parallel format due to the phenomenon of timing skew. Timing skew is a direct result of the limitations imposed by the speed of light: when an electrical signal travels down a wire, it does so at a finite speed. Because different traces in an interface have different lengths, parallel signals transmitted simultaneously from a source arrive at their destinations at different times. When the interconnection clock rate rises to the point where the length of a single bit on the wire becomes shorter than this difference in path length, the bits of a single word no longer arrive at their destination simultaneously, making parallel recovery of the word impossible. Thus, the finite signal speed, combined with the difference in length between the longest and shortest trace in a parallel interconnect, leads to a naturally imposed maximum bandwidth. Serial channel bonding avoids this issue by not requiring the bits to arrive simultaneously. PCIe is just one example of a general trend away from parallel buses to serial interconnects. For other examples, see HyperTransport, Serial ATA, USB, SAS or FireWire. The multichannel serial design also increases flexibility by allowing slow devices to be allocated fewer lanes than fast devices.
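As a rough illustration of the skew argument, consider the sketch below; the trace-length mismatch and propagation speed are assumed example values, not figures from any PCIe specification:

```python
# Rough illustration of timing skew limiting a parallel bus.
# Assumed example values: 5 cm mismatch between the longest and shortest
# trace, with signals propagating at ~15 cm/ns (roughly half the speed of light).
mismatch_cm = 5.0
propagation_cm_per_ns = 15.0

skew_ns = mismatch_cm / propagation_cm_per_ns   # ~0.33 ns between traces
# For all bits of one word to be sampled together, the bit period must stay
# comfortably larger than the skew, which bounds the usable clock rate:
max_clock_ghz = 1.0 / skew_ns
print(f"skew ~{skew_ns:.2f} ns -> clock limited to well under {max_clock_ghz:.1f} GHz")
```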
PCIe is supported primarily by Intel, which started working on the standard as the Arapahoe project after pulling out of the InfiniBand system. PCIe is intended to be used as a local interconnect only. It was designed to be software compatible with the preexisting PCI standard, making the conversion of PCI cards and systems to PCI Express as simple as replacing the physical layer without requiring a change to the supporting software. The increased bandwidth on PCI Express has led to unification, as it is fast enough to replace almost all existing internal buses, including AGP and PCI. Intel envisions a single PCI Express controller talking to all external devices in the future, as opposed to the northbridge/southbridge solution used in current machines.
Hardware protocol summary
The PCIe link is built around dedicated pairs of unidirectional, serial (1-bit), point-to-point connections known as "lanes". This is in sharp contrast to the PCI connection, which is a bus-based system where all the devices share the same bidirectional, 32-bit (or 64-bit), parallel bus.
PCI Express is a layered protocol, consisting of a Transaction Layer, a Data Link Layer, and a Physical Layer. The Physical Layer is further divided into a logical sublayer and an electrical sublayer. The logical sublayer is frequently further divided into a Physical Coding Sublayer (PCS) and a Media Access Control (MAC) sublayer (terms borrowed from the IEEE 802 model of networking protocol).
Physical Layer
At the electrical level, each lane utilizes two unidirectional low voltage differential signaling (LVDS) pairs at 2.5 Gbit/s. Transmit and receive are separate differential pairs, for a total of 4 data wires per lane.
A connection between any two PCIe devices is known as a "link", and is built up from a collection of one or more lanes. All devices must minimally support a single-lane (x1) link. Devices may optionally support wider links composed of 2, 4, 8, 12, 16, or 32 lanes. This allows for very good compatibility in two ways: a PCIe card will physically fit (and work correctly) in any slot that is at least as large as it is (e.g. an x1 card will work in any sized slot), and a slot of a large physical size (e.g. x16) can be wired electrically with fewer lanes (e.g. x1 or x8) as long as it provides the power and ground connections required by the larger physical slot size. In both cases, PCIe will negotiate the highest mutually supported number of lanes. It is not possible to place a physically larger PCIe card (e.g. an x16 card) into a smaller slot, even though the two would be signal-compatible if it were possible.
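The negotiation mentioned above amounts to training the link to the widest width both ends support. A greatly simplified sketch follows; the real link-training sequence defined by the specification is considerably more involved:

```python
# Simplified sketch of link-width negotiation: both ends advertise the
# widths they support, and the link settles on the widest common width.
def negotiate_width(card_widths, slot_widths):
    common = set(card_widths) & set(slot_widths)
    return max(common) if common else None

# An x16 card plugged into an x16-sized slot that is wired with only x8 lanes:
print(negotiate_width({1, 2, 4, 8, 16}, {1, 8}))   # -> 8
```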
PCIe sends all control messages, including interrupts, over the same links used for data. The serial protocol can never be blocked, so latency is still comparable to PCI, which has dedicated interrupt lines.
Data transmitted on multiple-lane links is interleaved, meaning that each successive byte is sent down successive lanes. The PCIe specification refers to this interleaving as "data striping." While requiring significant hardware complexity to synchronize (or deskew) the incoming striped data, striping can significantly increase the throughput of the link. Due to padding requirements, striping may not necessarily reduce the latency of small data packets on a link.
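A minimal sketch of the byte interleaving ("data striping") described above, ignoring framing, padding, and deskew:

```python
# Byte striping across lanes: successive bytes are sent down successive lanes.
# Framing, padding, and deskew are ignored; only the interleave is shown.
def stripe(data: bytes, num_lanes: int):
    lanes = [bytearray() for _ in range(num_lanes)]
    for i, byte in enumerate(data):
        lanes[i % num_lanes].append(byte)
    return lanes

lanes = stripe(b"ABCDEFGH", 4)
print([bytes(lane) for lane in lanes])   # [b'AE', b'BF', b'CG', b'DH']
```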
As with all high data rate serial transmission protocols, clocking information must be embedded in the signal. At the physical level, PCI Express utilizes the very common 8B/10B encoding scheme to ensure that strings of consecutive ones or consecutive zeros are limited in length. This is necessary to prevent the receiver from losing track of where the bit edges are. In this coding scheme every 8 (uncoded) payload bits of data are replaced with 10 (encoded) bits of transmit data, consuming an extra 20% of the overall electrical bandwidth.
Some other protocols (such as SONET) use a different form of encoding known as "scrambling" to embed clock information into data streams. The PCI Express specification also defines a scrambling algorithm, but its form of scrambling is not to be confused with the scrambling included in SONET. Rather than embedding clock information, the scrambling in PCI Express is designed to prevent repeating data patterns in the transmitted data stream from causing RF emission peaks.
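The general idea of an additive (XOR) scrambler is easy to sketch. The polynomial, seed, and bit ordering below are illustrative only and are not claimed to match the exact algorithm defined in the PCI Express specification:

```python
# Generic additive scrambler sketch: data bits are XORed with the output of a
# free-running LFSR, breaking up repetitive patterns in the transmitted stream.
# Polynomial, seed, and bit ordering are illustrative, not the PCIe algorithm.
def lfsr_bits(seed=0xFFFF, taps=(16, 5, 4, 3)):
    state = seed
    while True:
        feedback = 0
        for t in taps:
            feedback ^= (state >> (t - 1)) & 1
        state = ((state << 1) | feedback) & 0xFFFF
        yield feedback

def scramble(data: bytes) -> bytes:
    bits = lfsr_bits()
    out = bytearray()
    for byte in data:
        mask = 0
        for i in range(8):
            mask |= next(bits) << i
        out.append(byte ^ mask)
    return bytes(out)

# Repeated zero bytes become a varying bit pattern on the wire:
print(scramble(b"\x00" * 4).hex())
```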
First-generation PCIe is constrained to a single signalling rate of 2.5 Gbit/s. The PCI Special Interest Group (the industry organization that maintains and develops the various PCI standards) plans future versions adding signalling rates of 5 and 10 Gbit/s.
Data Link Layer
The Data Link Layer implements sequencing of Transaction Layer Packets (TLPs) that are generated by the Transaction Layer, data protection via a 32-bit cyclic redundancy check code (CRC, known in this context as LCRC), and an acknowledgement protocol (ACK and NAK signaling). TLPs that pass an LCRC check and a sequence number check result in an acknowledgement, or ACK, while those that fail these checks result in a negative acknowledgement, or NAK. TLPs that result in a NAK, or timeouts that occur while waiting for an ACK, result in the TLPs being replayed from a special buffer in the transmit data path of the Data Link Layer. This guarantees delivery of TLPs in spite of electrical noise, barring any malfunction of the device or transmission medium.
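A simplified sketch of the replay mechanism described above; sequence numbering and ACK/NAK handling are reduced to their essentials here, and the LCRC itself is omitted:

```python
# Simplified Data Link Layer replay buffer: transmitted TLPs are held until
# acknowledged; a NAK (or an ACK timeout) replays everything still pending.
# LCRC generation and checking are omitted from this sketch.
from collections import OrderedDict

class ReplayBuffer:
    def __init__(self):
        self.pending = OrderedDict()   # sequence number -> TLP, in transmit order
        self.next_seq = 0

    def transmit(self, tlp: bytes) -> int:
        seq = self.next_seq
        self.pending[seq] = tlp        # keep a copy until it is acknowledged
        self.next_seq += 1
        return seq

    def handle_ack(self, seq: int):
        # An ACK covers every outstanding TLP up to and including `seq`.
        for s in list(self.pending):
            if s <= seq:
                del self.pending[s]

    def handle_nak(self):
        # Replay every unacknowledged TLP, oldest first.
        return list(self.pending.values())
```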
ACK and NAK signals are communicated via a low-level packet known as a data link layer packet, or DLLP. DLLPs are also used to communicate flow control information between the transaction layers of two connected devices, as well as some power management functions.
Transaction Layer
PCI Express implements split transactions (transactions with request and response separated by time), allowing the link to carry other traffic while the target device gathers data for the response.
PCI Express utilizes credit-based flow control. In this scheme, a device advertises an initial amount of credit for each of the receive buffers in its Transaction Layer. The device at the opposite end of the link, when sending transactions to this device, counts the number of credits each TLP consumes against its account. The sending device may only transmit a TLP when doing so does not result in its consumed credit count exceeding its credit limit. When the receiving device finishes processing the TLP from its buffer, it signals a return of credits to the sending device, which then increases the credit limit by the restored amount. The credit counters are modular counters, and the comparison of consumed credits to the credit limit requires modular arithmetic. The advantage of this scheme (compared to other methods such as wait states or handshake-based transfer protocols) is that the latency of credit return does not affect performance, provided that the credit limit is not encountered. This assumption is generally met if each device is designed with adequate buffer sizes.
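A minimal sketch of the credit check described above. Real PCIe tracks separate header and data credits per traffic type and uses counter widths defined by the specification; the single 8-bit counter below is purely illustrative:

```python
# Credit-based flow control with modular counters (illustrative sketch).
MOD = 1 << 8   # assumed counter modulus for this sketch

class CreditTransmitter:
    def __init__(self, initial_limit):
        self.credit_limit = initial_limit % MOD   # advertised by the receiver
        self.credits_consumed = 0                 # credits spent so far

    def can_send(self, credits_needed):
        # Modular comparison: consumed + needed must not pass the credit limit.
        gap = (self.credit_limit - (self.credits_consumed + credits_needed)) % MOD
        return gap < MOD // 2

    def send(self, credits_needed):
        assert self.can_send(credits_needed)
        self.credits_consumed = (self.credits_consumed + credits_needed) % MOD

    def credits_returned(self, amount):
        # The receiver freed buffer space; the limit moves forward accordingly.
        self.credit_limit = (self.credit_limit + amount) % MOD
```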
First-generation PCIe is often quoted to support a data rate of 250 MB/s in each direction, per lane. This figure is calculated from the physical signalling rate (2.5 Gbaud) divided by the encoding overhead (10 bits per byte). This means a 16-lane (x16) PCIe card would then be theoretically capable of 250 MB/s × 16 = 4 GB/s in each direction. While this is correct in terms of data bytes, more meaningful calculations are based on the usable data payload rate, which depends on the profile of the traffic, which is in turn a function of the high-level (software) application and intermediate protocol levels.
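The 250 MB/s figure falls straight out of the numbers quoted above:

```python
# PCIe 1.x per-lane rate: 2.5 Gbit/s on the wire, and 8b/10b encoding means
# 10 wire bits per data byte, giving 250 million data bytes per second.
signalling_bit_s = 2.5e9
encoded_bits_per_byte = 10

per_lane_MB_s = signalling_bit_s / encoded_bits_per_byte / 1e6
print(per_lane_MB_s)                 # 250.0 MB/s per lane, per direction
print(16 * per_lane_MB_s / 1000)     # x16 link: 4.0 GB/s per direction
```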
Like other high data rate serial interconnect systems, PCIe has protocol and processing overhead due to the additional transfer robustness (CRC and acknowledgements). Long continuous unidirectional transfers (such as those typical in high-performance storage controllers) can approach >95% of PCIe's raw (lane) data rate. These transfers also benefit the most from an increased number of lanes (x2, x4, etc.). But in more typical applications (such as a USB or Ethernet controller), the traffic profile is characterized by short data packets with frequent enforced acknowledgements. This type of traffic reduces the efficiency of the link, due to overhead from packet parsing and forced interrupts (either in the device's host interface or the PC's CPU). This loss of efficiency is not particular to PCIe.
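A rough link-efficiency estimate shows why transfer size matters. Each TLP carries per-packet overhead (framing, sequence number, header, LCRC); the flat 20-byte figure used below is an assumed round number for illustration, as the exact overhead depends on the header format and optional fields:

```python
# Rough payload efficiency versus TLP payload size, assuming a flat
# 20 bytes of per-packet overhead (an illustrative figure, not a spec value).
OVERHEAD_BYTES = 20

for payload in (16, 64, 256, 1024):
    efficiency = payload / (payload + OVERHEAD_BYTES)
    print(f"{payload:5d}-byte payload: {efficiency:.0%} of the raw lane rate")
# Large transfers approach the raw rate; small packets lose much more to overhead.
```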
Form factors
- Low height card
- Mini Card: a replacement for the Mini PCI form factor (with x1 PCIe, USB 2.0 and SMBus buses on the connector)
- ExpressCard: similar to the PCMCIA form factor (with x1 PCIe and USB 2.0; hot-pluggable)
- XMC: similar to the CMC/PMC form factor (with x4 PCIe or Serial RapidIO)
- AdvancedTCA: a complement to CompactPCI for larger applications; supports serial-based backplane topologies
- AMC: a complement to the AdvancedTCA specification; supports processor and I/O modules on ATCA boards (x1, x2, x4 or x8 PCIe)
- PCI Express External Cabling
- Mobile PCI Express Module (MXM): a laptop graphics module specification created by NVIDIA
- Advanced eXpress I/O Module (AXIOM): a graphics module design endorsed by ATI Technologies
Essentially the differences are based on the tradeoffs between flexibility and extensibility on the one hand and latency and overhead on the other. An example of such a tradeoff is adding complex header information to a transmitted packet to allow for complex routing (PCI Express is not capable of this). The additional overhead reduces the effective bandwidth of the interface and complicates bus discovery and initialization software. Also, making the system hot-pluggable requires that software track network topology changes. Examples of buses suited for this purpose are InfiniBand and StarFabric.
Another example is making the packets shorter to decrease latency (as is required if a bus is to be operated as a memory interface). Smaller packets mean that the packet headers consume a higher percentage of the packet, thus decreasing the effective bandwidth. Examples of bus protocols designed for this purpose are RapidIO and HyperTransport.
PCI Express falls somewhere in the middle, targeted by design as a system interconnect (local bus) rather than a device interconnect or routed network protocol. Additionally, its design goal of software transparency constrains the protocol and raises its latency somewhat.
As of 2006, PCI Express appears to be well on its way to becoming the new backplane standard in personal computers. There are several explanations for this, but the principal reason is that it was designed to be completely transparent to software developers: an operating system designed for PCI can boot in a PCI Express system without any code modification. Other secondary reasons include its enhanced performance and strong brand recognition.
Almost all of the high-end graphics cards being released today (2007) from ATI and NVIDIA use PCI Express. NVIDIA uses the high-bandwidth data transfer of PCIe for its newly developed Scalable Link Interface (SLI) technology, which allows two graphics cards of the same chipset and model number to run at the same time for increased performance. ATI has also developed a dual-GPU system based on PCIe called CrossFire.
ExpressCard has been introduced on several mid- to high-range laptops such as the Dell Precision and the MacBook Pro. The problem is that many laptops have only one PCMCIA slot, and it is difficult to give that up for a new ExpressCard slot. Desktops do not have this problem, as they have multiple slots and can more easily support PCI Express and legacy PCI slots concurrently.
PCI Express 2.0
PCI-SIG announced the availability of the PCI Express Base 2.0 specification on 15 January 2007. PCIe 2.0 is still compatible with PCIe 1.1, so older cards will still be able to work in machines with this new version.
The PCI-SIG also said that PCIe 2.0 features improvements to the point-to-point data transfer protocol and its software architecture.
Intel is expected to release its first chipsets supporting PCIe 2.0 in the second quarter of 2007 with its 'Bearlake' family. AMD will start supporting PCIe 2.0 with its RD700 chipset series. NVIDIA has revealed that the MCP72 will be its first PCIe 2.0-equipped chipset.
See also
- Industry Standard Architecture (ISA)
- Extended Industry Standard Architecture (EISA)
- Micro Channel architecture (MCA)
- VESA Local Bus (VLB)
- Peripheral Component Interconnect (PCI)
- Accelerated Graphics Port (AGP)
- List of device bandwidths (a useful listing of device bandwidths that includes PCI Express)
- Geneseo (a future generation of PCI Express)