
Motherboards: Where Are Their Ladyparts??

Continuing in a series of explanations of computer hardware, let’s look at motherboards! How do you tell motherboards apart? If you can hook all your hardware up to two motherboards, which one is better?

BASIC STUFF: THE MOTHERBOARD AS CONNECTIVE TISSUE

The most important thing about the motherboard is that it connects all the individual parts of your computer. The motherboard contains the wires that let data flow between CPU, GPU, RAM, HDD, your keyboard and mouse, etc. — it is the spinal cord of your computer.

When you’re buying a motherboard, the most important spec is the “socket type” of CPU that it supports. “Socket type” refers to the physical connection between the pins on the CPU and the sockets on the motherboard. A processor built for one socket type will physically not fit in a motherboard built to accept another socket type. Intel and AMD processors use different socket types. Furthermore, both companies create new socket types every few years, requiring motherboard upgrades to use the (presumably better) CPUs built to the new socket type. So, buying an Intel LGA 1155 socket-type motherboard locks you out of all AMD chips and all Intel chips older or newer than that socket type. Chances are, if you have to buy a new motherboard, it’s because new CPUs can’t work in your old one.

The number and type of expansion slots are the next most important thing. You may want to use many sticks of RAM or 2+ graphics cards, and some motherboards can’t support that. Furthermore, RAM, USB ports, hard drives, and GPUs are also built to standards that evolve over time. Although these standards don’t change as frequently as CPU socket types (and Intel- and AMD-socket motherboards support the same standards for all these things), you’ve still got to make sure that they’re supported.

Most desktop motherboards will support 240-pin DDR3 RAM, but different motherboards support different speeds of RAM (this is the 1066, 1333, 1600, etc.). Higher is better. Make sure to look up the number of memory slots too — fewer slots isn’t a dealbreaker, but more slots is better. For instance, you can save money by buying 16GB of RAM as 4x 4GB sticks instead of 2x 8GB sticks.

You’ll also see PCI (“Peripheral Component Interconnect”) expansion slots listed on motherboard specs. These slots are where you insert specialized hardware as needed — audio cards, network cards, video cards, and even some SSDs use PCI slots. There are multiple standards of PCI, and again, bigger numbers (and the word “Express”) are better. Stuff that can afford to be slow, such as network cards, needs only plain PCI. Nice sound cards and PCI-based SSDs will want PCI Express, which allows faster data transfer. Video cards are the only things that really need PCI Express 2.0+, since they transfer absurd amounts of data.

There are other considerations like form factor (most desktops are ATX), built-in audio/network/video cards (use expansion-slot cards if you can, but these get the job done), and HDD connectors (almost everything runs on SATA 6Gb/s nowadays). These are all boring and don’t change very much, so we’re glossing over them.

GETTING FANCIER: THE MOTHERBOARD AS BRAIN

You can go out and buy a usable motherboard just based on the information above, and you’ll be fine. But stopping now is for losers! Motherboards are more than just the wires connecting components.

First, the motherboard contains the BIOS. BIOS stands for “Basic Input/Output System” (pretend like it means “Built-in Operating System” — Neal Stephenson’s idea — because that’s a better name). It’s a super-low-level system where you can see hardware stats and change them. This is the settings screen that you get when you hit F11 or Delete while booting, where you do things like RAID together hard drives, control voltage/timings, and specify to boot off HDD or CD. Honestly, 95% of motherboard manufacturers’ BIOS utilities feature 95% of the things you care about, so it’s not a factor in your motherboard purchase.

Less visibly, but more importantly, the motherboard contains the Northbridge and Southbridge chips. These chips manage communication between each part of your computer, and they are vital. The Northbridge manages access to high-importance / high-data-transfer-rate parts of the computer, like RAM and video cards in PCIe slots (also, the Northbridge is being phased out of existence, as more system-on-chips wrap the Northbridge into the CPU). Southbridges manage access to lower-importance / lower-data-transfer-rate peripherals in PCI, USB, or SATA slots (i.e. audio cards, keyboards/mice, slow hard drives).

Since a ton of data transfers happen that don’t involve the CPU at all (RAM <-> video card; USB stick <-> printer), and the CPU is usually held back by memory transfer speed anyway, the Northbridge and Southbridge play an important role: not slowing the CPU down by making it a middleman in these transfers. A good Northbridge and Southbridge are the caretakers that make your entire machine flow smoothly.

What’s the difference between a good and a bad Southbridge? The Intel Z77 chipset can drive 8 PCIe 2.0 lanes at 5Gb/s each and 6 SATA ports, two of them at 6Gb/s. However, Intel’s H61 can only handle 6 PCIe 2.0 lanes, and only 4 SATA ports at 3Gb/s — even if you installed a USB 3.0 card in an H61 motherboard, it wouldn’t run at full speed. (In general, ‘H’ means ‘budget’ and ‘Z’ means ‘performant’ for Intel.)

These chips are the things that make one motherboard more expensive than another with the same slots — and although a budget Northbridge/Southbridge will serve you fine if you aren’t pushing the boundaries of your CPU, certain PC builds will see stunning increases by swapping out the motherboard alone.

On The History of Programming

We’re starting this post with a Zen Koan. Here it is, from The Gateless Gate, a 13th century Chinese compilation.

A monk asked the master to teach him.
The master asked, “Have you eaten your rice?”
“Yes, I have,” replied the monk.
“Then go wash your bowl,” said the master.
With this, the monk was enlightened.

Cool! We’ll get back to this.

If you are a programmer, and you care about programming, then you should study the history of programming. In fact, for most professionals, I’d argue that studying the history of programming is a better hour-per-hour investment than studying programming itself.

See, the history of programming is different from most other histories. While gravity existed before Newton, and DNA existed before Watson and Crick, programming is literally nothing but the sum of people’s historical contributions to programming. It is a field built from nothingness by human thought alone. It is completely artificial.

This leads to a useful fact: every addition to the world of programming that gained enough traction to be in use today was created to solve a problem. FORTRAN was created to make assembly coding faster on IBM mainframes. BASIC built on FORTRAN and allowed the same code to be run on different computers. C wasn’t actually based off BASIC, but it kept BASIC’s portability and instead focused on easier, more readable programming. C++ came after C and brought object-oriented programming to it, allowing code re-use across tasks. Java expanded on C++, removing the need to recompile for different computer architectures.

Your favorite programming language is influenced by the contributions and thought patterns of every language before it.

By understanding the problems that birthed each new language, you can appreciate the solutions they offer. Java’s virtual machine was (and is) a Big Deal, the entire reason that Java exists. Learning Java without learning (or at least understanding) C++ robs you of that internalization. And don’t just learn about the languages that inspired your preferred language — learn about the offshoots and derivatives of languages you’re interested in, too. Each language highlights the deficiencies and tradeoffs of its parents and children, and learning Java can be just as useful to a C++ programmer as learning C++ can be to a Java programmer.

Even if you refuse to leave your favorite language, knowledge of its ancestors and children can make you recognize a language’s strengths and weaknesses, and tailor your program to match. And that is a valuable skill.

See, trying to master a programming language without understanding the context it arose from is like trying to understand a Zen Koan without understanding the context in which it’s meant to be read. You can’t.

Video Cards Have So Many Stats!

If you research video cards, because you’re buying one or something, you’re gonna see a TON of stats. And let’s be honest, you won’t understand all of them. This blog post will fix that problem! Maybe. Hopefully.

This is pretty much an info dump of all the stats mentioned in NewEgg, AnandTech, and TomsHardware listings. Stats are split up by general category: whether they make the video card EXIST AS A HUNK OF METAL, MOVE DATA AROUND, or DO CALCULATIONS.

THESE MAKE THE VIDEO CARD EXIST AS A HUNK OF METAL

MANUFACTURING PROCESS: Measured in nanometers. This measures how small the semiconductors in the video card are (semiconductors are the building blocks of, like, all electronic devices). The smaller the semiconductors, the less heat/electricity they consume, and the more you can pack on a card.

TRANSISTOR COUNT: Transistors are made of semiconductors, so the smaller the manufacturing process, the more transistors fit on a card, and transistor count goes up as the process number goes down. Again, more transistors = more better.

THERMAL DESIGN POWER (TDP): Measured in watts. Measures how much power the video card expects to consume. Most overclocking software lets you increase wattage beyond TDP, but you’ll need to upgrade the stock fans to dissipate the extra heat, and you probably won’t get as good performance as just buying a card with a greater TDP. TDP should be close to load power, or how much power the card consumes when running Crysis or something. Most video cards have TDPs in the 200W range — which, for the record, is beastly, 3x+ the power of a good x64 CPU.

THESE MAKE THE VIDEO CARD MOVE DATA AROUND

PHYSICAL INTERFACE: The physical part that hooks into the motherboard and lets data move between your video card and your motherboard. Whatever your video card’s interface is, make sure your motherboard has a slot of that interface type. New video cards are usually PCIe 2.0 x16 (which can transfer 8 gigabytes / second) or PCIe 3.0 x16 (nearly 16 GB/s!). Transfer speeds like that may sound like overkill for a 4GB video game, but in addition to textures, etc., the computer is sending a LOT of data about game state to the GPU 30 times a second, so it’s needed.
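
If you want to sanity-check those bandwidth numbers, here’s a small Python sketch using the published per-lane rates and encoding overheads (the x16 lane count is just the usual width for graphics-card slots):

```python
# Rough PCIe bandwidth per direction: per-lane rate * encoding efficiency * lanes.
def pcie_bandwidth_gb_per_s(gigatransfers_per_s, encoding_efficiency, lanes=16):
    bits_per_second = gigatransfers_per_s * 1e9 * encoding_efficiency * lanes
    return bits_per_second / 8 / 1e9  # bits -> bytes, then to GB/s

print(pcie_bandwidth_gb_per_s(5.0, 8 / 10))     # PCIe 2.0 x16: ~8.0 GB/s
print(pcie_bandwidth_gb_per_s(8.0, 128 / 130))  # PCIe 3.0 x16: ~15.8 GB/s
```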

RAMDAC: Stands for “Random Access Memory Digital-to-Analog Converter”. It takes a rendered frame and pushes pixels to your monitor to display. The DAC isn’t used if you’re using digital interfaces for your monitor (like HDMI), and the information held in the RAM isn’t used in modern full-color displays. So everything about the name ‘RAMDAC’ is outdated. Most RAMDACs run at 400MHz, which means they can output 400 million RGB pixel sets per second, enough to drive a 2560×1600 monitor at about 97fps. Probably good enough for you.
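
That 97fps figure is just the pixel rate divided by the pixels in a frame. A quick sketch, ignoring the blanking intervals that eat a little of the real-world number:

```python
# Max refresh rate a 400MHz RAMDAC can drive at 2560x1600, ignoring blanking.
ramdac_pixels_per_second = 400e6
width, height = 2560, 1600
max_fps = ramdac_pixels_per_second / (width * height)
print(round(max_fps, 1))  # ~97.7 frames per second
```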

MEMORY SIZE: How much data the video card can store in memory. Although video cards can communicate with the main computer and therefore save/load data in the computer’s RAM / hard drive, memory that resides inside the video card can be accessed with less latency and higher bandwidth. Bandwidth is one of the biggest bottlenecks (and therefore one of the most important measures) for graphics cards.

MEMORY TYPE: Probably GDDR 2/3/4/5. ‘DDR’ stands for “double data rate”, because DDR memory performs transfers twice per clock cycle. The ‘G’ stands for ‘Graphics’ — since memory access patterns differ between GPUs (who want lots of data / can wait for it) and CPUs (who want little data / can’t wait for it), GDDR memory and computer DDR memory went down separate upgrade paths. Higher numbers represent new architectures that allow more memory transfers per clock cycle.

MEMORY INTERFACE: Measured in bits. Represents how much data is carried per individual data transfer (a 256-bit interface moves 256 bits, or 32 bytes, per transfer).

MEMORY CLOCK: Measured in MHz/GHz. Represents how many memory-transfer cycles occur per second (although more than one memory-transfer can occur per cycle). Sometimes you’ll see “effective memory clock” listed, which means “real clock speed * number of memory transfers per clock cycle afforded by our memory type”.

MEMORY BANDWIDTH: How many bytes of data can be transferred per second between the memory on the graphics card and the GPU itself. Computed as effective memory clock * memory interface width (in bits), divided by 8 to convert bits to bytes (the effective clock already folds in the memory type’s transfers-per-cycle multiplier). This is one of the most important numbers for comparing graphics cards.
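
Here’s that formula as a quick Python sketch. The example numbers (GDDR5 at a 1500MHz real clock on a 256-bit bus) are hypothetical, but typical of higher-end cards from this era:

```python
# Memory bandwidth = effective clock * bus width, converted from bits to bytes.
def memory_bandwidth_gb_per_s(real_clock_mhz, transfers_per_cycle, bus_width_bits):
    effective_clock = real_clock_mhz * 1e6 * transfers_per_cycle
    return effective_clock * bus_width_bits / 8 / 1e9  # bits/s -> GB/s

# Hypothetical card: GDDR5 (4 transfers per cycle), 1500MHz real clock, 256-bit bus.
print(memory_bandwidth_gb_per_s(1500, 4, 256))  # ~192 GB/s
```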

THESE MAKE THE VIDEO CARD DO CALCULATIONS

CORE CLOCK: Measured in MHz/GHz. Represents how many computation cycles occur per second. If you see references to the shader clock — it’s tied to the core clock.

BOOST CLOCK: Measured in MHz/GHz. If your GPU detects that it’s running at full capacity but not using much power (which happens when you’re not using all parts of the card — i.e. GPGPU computing that doesn’t render anything, or poorly optimized games), it’ll overclock itself until it consumes the extra power. It may overclock itself to a frequency below or above boost clock frequency, based on how little power it’s using, so boost clock is a nebulous measurement. Although only Nvidia uses the term ‘Boost Clock’, AMD offers the same controls, called ‘PowerTune’.

SHADER CORE COUNT: Sometimes called “CUDA cores” for Nvidia or “Stream Processors” for ATI. It’s the number of cores, similar to processor count in CPUs. Note that, compared to CPUs, GPUs generally have 100x the cores at 0.3x the clock speed (this is still, obviously, a win). That architecture makes GPUs ideal for running the same operation thousands of times on thousands of different sets of data, which is exactly what video games need (for instance, figuring out where every vertex in a 3d model is located on screen).
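
To make “the same operation on thousands of different sets of data” concrete, here’s a toy sketch: a serial Python loop standing in for work that a GPU would spread across its shader cores, one vertex per core (the projection math is deliberately simplified):

```python
# Toy vertex processing: project every 3D vertex in a model onto the screen.
# A GPU runs this same tiny function on thousands of vertices in parallel.
def project(vertex, focal_length=1.0):
    x, y, z = vertex
    return (focal_length * x / z, focal_length * y / z)  # naive perspective divide

vertices = [(i * 0.01, i * 0.02, 5.0 + i * 0.001) for i in range(10000)]  # fake model
screen_positions = [project(v) for v in vertices]  # on a GPU: all at once
```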

TEXTURE UNITS: Also called texture mapping units or TMUs. Video games have 3D models and need to apply textures to them. However, there are many different issues that arise when you try texturing a model at an angle, or far away, or super-close up. When your GPU asks for a given pixel in a given texture, the texture unit handles that request and solves all these problems before passing it back to the GPU. More texture units means you can look up more textures per second!

TEXTURE FILL RATE: Measured as number of texture units * core clock speed. Represents the number of texture pixels the card can look up every second. If you’re playing a game where every object on screen is textured (which is most games), this should be higher than the resolution of your screen * desired framerate, because one on-screen pixel can be determined by many textures (e.g. diffuse + specular + normal).
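
As a sketch of that comparison (the card’s texture unit count and clock are made up for illustration, and “texture layers” is the average number of texture lookups behind each on-screen pixel):

```python
# Does a hypothetical card's texture fill rate cover a 1920x1080 game at 60fps?
texture_units, core_clock_hz = 80, 1.0e9            # made-up card: 80 TMUs at 1GHz
texture_fill_rate = texture_units * core_clock_hz   # texture lookups per second

lookups_needed = 1920 * 1080 * 60 * 3               # resolution * fps * ~3 texture layers
print(texture_fill_rate >= lookups_needed)          # True: 80e9 vs ~0.37e9
```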

ROPs: Stands for “Raster Operations Pipelines” (also called render output units). These units receive the final color value for a given pixel and write it to the output image, to be passed to the RAMDAC to be rendered on your monitor.

PIXEL FILL RATE: Measured as number of ROPs * core clock speed. Represents the number of pixels that can be written to the output image to display on your monitor, per second. This also has to be higher than screen resolution * desired framerate, because one on-screen pixel can be determined by the output color of many 3d models (i.e. looking at a mountain through a semi-transparent pane of glass requires 2 ROPs per pixel, one for the mountain, one for the pane of glass in front of it). If you see “fill rate”, it usually refers to this instead of texture fill rate.
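
The same kind of back-of-the-envelope check works for pixel fill rate. Here the overdraw factor (how many surfaces end up layered behind an average screen pixel) is an illustrative guess, and the card numbers are again made up:

```python
# Pixel fill rate check for a hypothetical card with 32 ROPs at 1GHz.
rops, core_clock_hz = 32, 1.0e9
pixel_fill_rate = rops * core_clock_hz       # pixels written to the output per second

overdraw = 2.5                               # avg surfaces drawn per on-screen pixel
writes_needed = 1920 * 1080 * 60 * overdraw  # 1080p at 60fps, with overdraw
print(pixel_fill_rate >= writes_needed)      # True, with plenty of headroom
```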

FLOPs: Measured in gigaflops/teraflops. Stands for “floating point operations per second”, and represents how many times per second a video card can multiply/divide/add/subtract two floating point numbers (most video card calculations are done with floats instead of integers).
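
Vendors usually quote the theoretical peak, which you can roughly reconstruct as shader cores * clock * 2 (the 2 because a fused multiply-add counts as two operations). Here’s that estimate as a sketch with made-up card numbers:

```python
# Rough peak-FLOPS estimate: each shader core retires one fused multiply-add
# (which counts as 2 floating point operations) per clock cycle.
def peak_gflops(shader_cores, core_clock_ghz, ops_per_core_per_cycle=2):
    return shader_cores * core_clock_ghz * ops_per_core_per_cycle

print(peak_gflops(1536, 1.0))  # hypothetical 1536-core card at 1GHz: ~3072 GFLOPS
```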

Whew! So that’s a pretty intense crash course in video card specs. Hope that helps!