Video Cards Have So Many Stats!

If you research video cards, because you’re buying one or something, you’re gonna see a TON of stats. And let’s be honest, you won’t understand all of them. This blog post will fix that problem! Maybe. Hopefully.

This is pretty much an info dump of all stats mentioned in NewEgg, AnandTech, and TomsHardware listings. Stats are split up by general category: whether they affect how the video card EXISTS AS A HUNK OF METAL, MOVES DATA AROUND, or DOES CALCULATIONS.


MANUFACTURING PROCESS: Measured in nanometers. This measures how small the transistors in the video card are (transistors, tiny switches made of semiconductor material, are the building blocks of, like, all electronic devices). The smaller the transistors, the less heat/electricity they consume, and the more you can pack on a card.

TRANSISTOR COUNT: Since a smaller manufacturing process lets you pack transistors more densely, transistor count is roughly inversely related to manufacturing process: smaller process, more transistors. Again, more transistors = more better.

THERMAL DESIGN POWER (TDP): Measured in watts. Measures how much power the video card expects to consume. Most overclocking software lets you increase wattage beyond TDP, but you’ll need to upgrade the stock fans to dissipate the extra heat, and you probably won’t get as good performance as just buying a card with a greater TDP. TDP should be close to load power, or how much power the card consumes when running Crysis or something. Most video cards have TDPs in the 200W range — which, for the record, is beastly, 3x+ the power of a good x64 CPU.


PHYSICAL INTERFACE: The physical part that hooks into the motherboard and lets data move between your video card and your motherboard. Whatever your video card’s interface is, make sure your motherboard has a slot of that interface type. New video cards are usually PCIe 2.0 x16 (which can transfer 8 gigabytes/second) or PCIe 3.0 x16 (almost 16 GB/s!). Transfer speeds of 16 GB/s may sound like overkill for a 4GB video game, but in addition to textures, etc., the computer is sending a LOT of data about game state to the GPU 30+ times a second, so it’s needed.
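Those headline numbers fall out of the per-lane signaling rate and the line-encoding overhead each PCIe generation uses (5 GT/s with 8b/10b encoding for 2.0, 8 GT/s with 128b/130b for 3.0). A quick sanity check in Python:

```python
def pcie_bandwidth_gbps(transfer_rate_gt, encoding_efficiency, lanes=16):
    """Usable bandwidth of a PCIe link, in gigabytes per second."""
    # each lane moves transfer_rate_gt billion bits/sec, the line
    # encoding eats some of that, and 8 bits = 1 byte
    return transfer_rate_gt * encoding_efficiency * lanes / 8

# PCIe 2.0 x16: 5 GT/s per lane, 8b/10b encoding (80% efficient)
print(pcie_bandwidth_gbps(5, 8 / 10))     # 8.0
# PCIe 3.0 x16: 8 GT/s per lane, 128b/130b encoding (~98.5% efficient)
print(pcie_bandwidth_gbps(8, 128 / 130))  # ~15.75
```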

RAMDAC: Stands for “Random Access Memory Digital-to-Analog Converter”. It takes a rendered frame and pushes pixels to your monitor to display. The DAC isn’t used if your monitor is hooked up over a digital interface (like HDMI), and the information held in the RAM (a color palette) isn’t used in modern full-color displays. So everything about the name ‘RAMDAC’ is outdated. Most RAMDACs run at 400MHz, meaning they can output 400 million RGB pixels per second, enough to drive a 2560×1600 monitor at 97fps. Probably good enough for you.
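That 97fps figure is just pixel throughput divided by pixels per frame. Spelled out (ignoring blanking intervals, which eat a bit of real-world headroom):

```python
def max_refresh_rate(ramdac_hz, width, height):
    """Frames per second a RAMDAC can feed a given resolution."""
    # one RAMDAC cycle outputs one RGB pixel
    return ramdac_hz / (width * height)

print(max_refresh_rate(400e6, 2560, 1600))  # ~97.7 fps
```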

MEMORY SIZE: How much data the video card can store in memory. Although video cards can communicate with the main computer and therefore save/load data in the computer’s RAM / hard drive, memory that resides inside the video card can be accessed with less latency and higher bandwidth. Bandwidth is one of the biggest bottlenecks (and therefore one of the most important measures) for graphics cards.

MEMORY TYPE: Probably GDDR 2/3/4/5. ‘DDR’ stands for “double data rate”, because DDR memory performs transfers twice per clock cycle. The ‘G’ stands for ‘Graphics’ — since memory access patterns differ between GPUs (who want lots of data / can wait for it) and CPUs (who want little data / can’t wait for it), GDDR memory and computer DDR memory went down separate upgrade paths. Higher numbers represent new architectures that allow more memory transfers per clock cycle.

MEMORY INTERFACE: Measured in bits. Represents how much data is carried per individual data transfer.

MEMORY CLOCK: Measured in MHz/GHz. Represents how many memory-transfer cycles occur per second (although more than one memory-transfer can occur per cycle). Sometimes you’ll see “effective memory clock” listed, which means “real clock speed * number of memory transfers per clock cycle afforded by our memory type”.

MEMORY BANDWIDTH: How many bytes of data can be transferred between the memory on the graphics card and the GPU itself, per second. Computed as real clock speed * transfers per clock (2 for GDDR2/3/4, 4 for GDDR5) * interface width / 8 bits-per-byte. Or, equivalently, effective clock speed * interface width in bytes. This is one of the most important numbers for comparing graphics cards.
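Here’s that formula as a sketch, run on a made-up card (1500 MHz GDDR5 on a 256-bit bus; the card is hypothetical, the multipliers are per the memory types above):

```python
# transfers per real clock cycle, by memory type
TRANSFERS_PER_CLOCK = {"GDDR2": 2, "GDDR3": 2, "GDDR4": 2, "GDDR5": 4}

def memory_bandwidth_gbs(real_clock_mhz, interface_bits, mem_type):
    """Peak memory bandwidth in gigabytes per second."""
    effective_clock_hz = real_clock_mhz * 1e6 * TRANSFERS_PER_CLOCK[mem_type]
    return effective_clock_hz * (interface_bits / 8) / 1e9

# hypothetical card: 1500 MHz GDDR5 on a 256-bit bus
print(memory_bandwidth_gbs(1500, 256, "GDDR5"))  # 192.0
```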


CORE CLOCK: Measured in MHz/GHz. Represents how many computation cycles occur per second. If you see references to the shader clock: on older Nvidia cards it ran at a multiple of the core clock, and on newer cards the two are unified, so either way it’s tied to the core clock.

BOOST CLOCK: Measured in MHz/GHz. If your GPU detects that it’s running at full capacity but still under its power limit (which happens when you’re not using all parts of the card — i.e. GPGPU computing that doesn’t render anything, or poorly optimized games), it’ll overclock itself until it consumes the extra power headroom. It may overclock itself to a frequency below or above the boost clock frequency, based on how much headroom there is, so boost clock is a nebulous measurement. Although only Nvidia uses the term ‘Boost Clock’, AMD offers similar functionality, called ‘PowerTune’.

SHADER CORE COUNT: Sometimes called “CUDA cores” for Nvidia or “Stream Processors” for AMD/ATI. It’s the number of cores, similar to core count in CPUs. Note that, compared to CPUs, GPUs generally have 100x the cores at 0.3x the clock speed (this is still, obviously, a win). That architecture makes GPUs ideal for running the same operation thousands of times on thousands of different sets of data, which is exactly what video games need (for instance, figuring out where every vertex in a 3d model is located on screen).
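To make “same operation, thousands of data sets” concrete, here’s a toy sketch in Python: one tiny function applied independently to every vertex. A real GPU would run these invocations in parallel across its shader cores; this just shows the shape of the work:

```python
def to_screen_space(vertex, width=1920, height=1080):
    """Map a vertex from normalized device coords [-1, 1] to pixel coords."""
    x, y = vertex
    return ((x + 1) / 2 * width, (1 - y) / 2 * height)

# the same tiny function runs once per vertex -- thousands of
# independent invocations, which is what GPU cores parallelize
vertices = [(-1.0, 1.0), (0.0, 0.0), (1.0, -1.0)]
print([to_screen_space(v) for v in vertices])
```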

TEXTURE UNITS: Also called texture mapping units or TMUs. Video games have 3D models and need to apply textures to them. However, there are many different issues that arise when you try texturing a model at an angle, or far away, or super-close up. When your GPU asks for a given pixel in a given texture, the texture unit handles that request and solves all these problems before passing it back to the GPU. More texture units means you can look up more textures per second!

TEXTURE FILL RATE: Measured as number of texture units * core clock speed. Represents the number of texture pixels (“texels”) the card can look up every second. If you’re playing a game where every object on screen is textured (which is most games), this should be higher than the resolution of your screen * desired framerate, because one on-screen pixel can be determined by many textures (e.g. diffuse + specular + normal).
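Here’s that budget check as a sketch, with made-up card numbers (64 TMUs at 900 MHz) and a guessed 4 texture lookups per pixel:

```python
def texture_fill_rate_gtexels(tmus, core_clock_mhz):
    """Peak texture lookups per second, in gigatexels."""
    return tmus * core_clock_mhz * 1e6 / 1e9

def texel_budget_gtexels(width, height, fps, lookups_per_pixel):
    """Texture lookups per second a game actually needs."""
    return width * height * fps * lookups_per_pixel / 1e9

# hypothetical card: 64 TMUs at 900 MHz
rate = texture_fill_rate_gtexels(64, 900)       # 57.6 Gtexels/s
# 1080p at 60fps, ~4 lookups per pixel (diffuse+specular+normal+shadow)
need = texel_budget_gtexels(1920, 1080, 60, 4)  # ~0.5 Gtexels/s
print(rate, need)
```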

ROPs: Stands for “Raster Operations Pipelines” (also expanded as “Render Output Units”, depending who you ask). These units receive the final color value for a given pixel and write it to the output image, to be passed to the RAMDAC to be rendered on your monitor.

PIXEL FILL RATE: Measured as number of ROPs * core clock speed. Represents the number of pixels that can be written to the output image to display on your monitor, per second. This also has to be higher than screen resolution * desired framerate, because one on-screen pixel can be determined by the output color of many 3d models (e.g. looking at a mountain through a semi-transparent pane of glass requires two ROP writes per pixel, one for the mountain, one for the pane of glass in front of it). If you see “fill rate”, it usually refers to this instead of texture fill rate.
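Same arithmetic as texture fill rate, again with hypothetical numbers (32 ROPs at 900 MHz, 4x overdraw from transparency and overlapping geometry):

```python
def pixel_fill_rate_gpixels(rops, core_clock_mhz):
    """Peak pixels written per second, in gigapixels."""
    return rops * core_clock_mhz * 1e6 / 1e9

# hypothetical card: 32 ROPs at 900 MHz
rate = pixel_fill_rate_gpixels(32, 900)  # 28.8 Gpixels/s
# 1080p at 60fps with 4x overdraw (glass panes, particles, etc.)
need = 1920 * 1080 * 60 * 4 / 1e9        # ~0.5 Gpixels/s
print(rate, need)
```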

FLOPS: Measured in megaflops/gigaflops. Stands for “floating point operations per second”, and represents how many times a video card can multiply/divide/add/subtract two floating point numbers, per second (most video card calculations are done with floats instead of integers).
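Quoted FLOPS specs are usually derived from the other stats: shader cores * core clock * operations per core per cycle (typically 2, since a fused multiply-add counts as two floating point operations). A sketch with hypothetical numbers:

```python
def peak_gflops(shader_cores, core_clock_mhz, ops_per_cycle=2):
    """Theoretical peak throughput in gigaflops.
    ops_per_cycle is typically 2: a fused multiply-add
    counts as two floating point operations."""
    return shader_cores * core_clock_mhz * 1e6 * ops_per_cycle / 1e9

# hypothetical card: 1536 shader cores at 1000 MHz
print(peak_gflops(1536, 1000))  # 3072.0
```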

Whew! So that’s a pretty intense crash course in video card specs. Hope that helps!
