
Wednesday, September 24, 2014

Cleanin' Out My Closet: MacBook Pro SSD Upgrade and Clean Up

Despite my dislike for all things Apple, I do actually own a MacBook Pro 6,2 (Mid 2010) that I've been using as a work machine for the last 4 years. This system has helped me understand a great deal about multibooting operating systems, GPT vs. MBR and (U)EFI booting, so it's not all bad! For a while now, the system has been showing its age, with the mechanical hard disk being the primary bottleneck. It's also been in desperate need of both a physical clean out of all the dust that's gradually built up over time and a re-installation of the three operating systems I have on the machine. I finally made the decision to actually carry out this much-needed maintenance after the recent release of Crucial's MX100 SSD:

It's not the fastest SATA SSD you can buy currently, but it certainly has the lowest price per GB (at the time of writing): I ended up buying the 512GB model from Amazon, costing just under £150, to replace the HDD in my Mac. To facilitate the transfer of my data back onto my laptop, I picked up a cheap USB 3.0 HDD caddy as well. As I was going to have to open the system up, I decided to not only clean out the dust from the system, but to completely strip the laptop down and replace the thermal paste under the heat sink. I already have a couple of tubes of Arctic Silver 5 thermal paste, so I simply added some ArctiClean and some compressed air to my order. Later, after commencing the disassembly of the system, I was unable to find the Torx screwdrivers necessary to complete the process, so I ended up buying a replacement set:

I have replaced many storage devices in Apple laptops over the past few years, so I was comfortable with the SSD upgrade; however, the cleanout and thermal paste replacement seemed like a far more complex task. Thankfully, iFixit have some excellent guides on the maintenance of Apple products and I found a specific guide for removing the heat sink assembly; it's worth noting that iFixit have flagged this process as "difficult", confirming my initial fears! Still, I was determined to take on the challenge, as I'll demonstrate with pictures I took throughout the process:
Removing the bottom of the laptop's chassis; this step is necessary for pretty much all the hardware maintenance you'll need to perform on this particular model.

Eww.

Double eww.

WTF!?! After removing the heat sink, I was greeted with this sight: this seems to be far too much thermal paste.

The CPU and GPU ready for fresh thermal paste. I stopped short of scraping the remnants from around the chip edges as I was afraid I'd break something - I'm used to chips with integrated heat spreaders!

After putting the system back together, I made a small prayer and offering to the Machine Spirit within and powered it on: success! It booted without issue and I was pleased to find that, during the arduous process of installing/updating the three operating systems, it not only performed a lot better (thanks to the SSD), but was also a lot quieter. However, having worked with the machine for a few days, I've noticed that the fans still spin up fairly regularly while I'm working in Linux, as there seems to be rather high CPU utilisation, even after switching to the proprietary Nvidia driver. Something for me to investigate further; at least I know it's able to actually dissipate the heat now that I've cleaned out the heat sink!
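
To dig into that further, a simple logging script can help correlate the fan spin-up with CPU load and temperatures. The sketch below is a rough starting point, assuming Python and the psutil package are available on the Linux install; the sensor names it reports will vary from machine to machine.

```python
# A rough monitoring sketch, assuming the psutil package is installed.
# It prints per-core CPU utilisation and any temperature sensors psutil
# exposes, which should help correlate fan spin-up with load under Linux.
import psutil


def log_cpu_and_temps(samples=12, interval=5):
    for _ in range(samples):
        # Per-core utilisation, averaged over `interval` seconds (blocking call).
        per_core = psutil.cpu_percent(interval=interval, percpu=True)
        print("CPU %:", per_core)

        # sensors_temperatures() is Linux-only and may return an empty dict
        # if no supported sensors are found.
        for chip, readings in psutil.sensors_temperatures().items():
            for reading in readings:
                print(f"  {chip}/{reading.label or 'temp'}: {reading.current}°C")


if __name__ == "__main__":
    log_cpu_and_temps()
```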

Friday, August 9, 2013

Cooler and the Gang: Introducing Rig Number 2

I have a secondary system in our house, which is a bit of a Frankenstein's monster; it's made up of spare parts, hand-me-downs from upgrades and basically anything I can get my hands on! It used to run on an old Pentium D chip I had from a few years back, and recently I managed to acquire an Intel Core 2 Duo E6400, which I thought would make an excellent upgrade to the system. Unfortunately, after I switched processors, I found the stock Intel cooler that shipped with the Pentium was clearly not able to dissipate the heat generated by the Core 2 chip; temps were around 80°C while idling! I ended up abandoning the system for a while until yesterday, when I installed the Arctic Freezer 7 Pro I had left over from installing an H80i in my primary rig. I also used the Arctic Silver 5 paste left over from that upgrade; I needed to try everything to help keep the CPU running cool, especially as I wanted to try my hand at overclocking the chip at some point.

The process took me a couple of hours in total, and I took plenty of pictures to document it all. Afterwards, I was extremely happy to see idle temps around 30°C and just over 55°C at full load. Please be kind; the poor system is still residing in an old beige box I used to have for my primary system years ago, before I built my current gaming system. A new case is probably the only thing I'm considering actually buying specifically for this rig; I've been eyeing up the Cooler Master HAF XB LAN Box.

Here's the system with the stock cooler installed:

A close up of the stock cooler:

The CPU, once I had removed the stock cooler and cleaned off the old thermal paste:

The Arctic Freezer 7 Pro cooler mount, which is only used for Intel (LGA775, to be precise) installations:

A small blob of Arctic Silver 5 thermal paste on the CPU, before I commenced tinting:

Using an old BCS membership card, I smoothed out that blob across the CPU (tinting):

I did the same to the underside of the cooler:

For the E6400, Arctic Silver recommend the vertical line method for applying the paste; however, I think I may have got mixed up here and accidentally gone with a horizontal line instead:

This is the new cooler in place:

A final shot of the case for comparison:

Monday, June 10, 2013

Unlocked and Overclocked - Overclocking my AMD Phenom II 550 - Part 2

Previously, I posted about my experiences overclocking my Phenom II CPU that sits at the heart of my gaming rig. In order to see what difference the tweaks made to system performance, I had to benchmark the machine before and after and I'll attempt to draw some conclusions from the results.

Performance Testing

Given the amount of time I had spent producing a stable system, I wasn't keen on spending an age producing performance figures, but I still wanted to be able to highlight any gains in performance. With this in mind, I decided to use the following benchmarks:

  • Synthetic Tests
    • Cinebench - both the single and multi-threaded CPU rendering tests.
    • POV-Ray - again, both single and multi-threaded CPU rendering tests.
    • Unigine Heaven Benchmark - Full screen, 1920x1080, 8x Anti-Aliasing, 16x Anisotropy, with textures and shading set to high/maximum, occlusion, refraction and volumetric enabled, tessellation set to normal, with a trilinear filter. Vsync was disabled to see how fast the system could push out frames, despite the screen-tearing that would occur.
  • Real-World Tests
    • ARMA2:CO - 1920x1080, with the quality preference set to "high" and vsync enabled. I profiled the E08:Benchmark scenario (provided by ARMA2:OA) by running FRAPS for the duration.
    • Battlefield 3 - 1920x1080, with graphics quality set to "ultra" and vsync enabled. I played through the car park segment of the Operation Swordbreaker mission, recording performance for 60 seconds using FRAPS.
    • Crysis - 1920x1080, details set to "very high", anti-aliasing set to x16 and vsync enabled. I ran the 64-bit version of the "Assualt_Harbour" benchmark provided by the Crysis Benchmarking Tool and recorded frame times with FRAPS.
    • TES V: Skyrim - 1920x1080, with graphics options set to "ultra" and vsync on. I found an outdoor location that was near a giant's encampment with a dragon circling overhead and started benchmarking before attacking the giants, recording frame times with FRAPS for 60 seconds.

The first two synthetic tests (Cinebench and POV-Ray) are primarily there to see how much additional raw processing power is unlocked by tweaking the CPU. I expected to see fairly linear performance increases here, as these tests are primarily CPU-bound. From Unigine Heaven through all the real-world tests, I expected to see varying performance gains; each engine will rely on CPU performance to a different degree, with multiple cores making more of a difference in some and clock speed providing more of a boost in others.

The Results

Analysis

As expected, there were some variations in results, which I'll try to provide some analysis for below:

  • The synthetic CPU benchmarks (Cinebench and POV-Ray) produced pretty unsurprising results; performance appears to increase linearly with additional CPU horsepower. For example, when comparing the dual-core and tri-core results you can see there is around a 50% improvement with the additional core.
  • Unigine Heaven was a little disappointing: the only tangible improvement was a slight increase in minimum frame rate. However, I suspect this is because it's a GPU-focused test, without any other factors to impact performance, such as AI-related calculations, etc.
  • The most impressive improvement was produced by the ARMA2:CO benchmark. In fact, the game is very CPU-dependent, given it's primarily a military simulation title as opposed to your usual run-and-gun FPS affair. The unlocked and overclocked CPU produced much less deviation in the frame rate, bringing the minimum frame rate up by almost 10 FPS over the stock configuration. Surprisingly, the overclocked dual-core configuration seemed to reduce performance, which leads me to believe there's some additional optimisation required at the higher clock speed (e.g. increasing CPU-NB bandwidth).
  • Looking at Battlefield 3, there's a similar story: the 3.4GHz tri-core brought up the minimum frame rate, and you can clearly see on the frame-time graph that more consistent performance was produced throughout the benchmarking run.
  • Crysis seemed to gain similar improvements with both the 3.6GHz dual-core and 3.4GHz tri-core configurations. I suspect that the older title might not benefit from the additional core and that the improvements seen with the 3.4GHz tri-core are tied to the marginal increase in clock speed. This theory is backed up by the frame-time graph, as all three CPU configurations seem to suffer from the same dips in performance at the same points during the benchmarking run.
  • The performance improvements in Skyrim were in line with ARMA2 and Battlefield 3, with the tri-core producing the most significant benefit.
Conclusion

In line with the current understanding of game engines, newer titles seem to favour additional cores over CPU frequency. Fortunately for me, I have a Phenom II where I can easily unlock the third core, which instantly gives me better frame rates in the games I regularly play. I would have liked to try to push the clock speed of both the dual and tri-core configurations a bit higher, but I was seriously limited by the thermals of the system. Improving the cooling of the CPU could give me a bit more headroom to increase voltages, both on the CPU core and the CPU-NB. Overall, I'm pleased with the additional performance I've unlocked in my PC and I'm very happy with the amount I've learnt in the process.

Tuesday, December 18, 2012

Soldering On - The Future of the Desktop CPU

Back in November, a slide was leaked to the press that detailed Intel's CPU road-map for the next couple of years. What made the leak so controversial was the suggested move away from the Land Grid Array (LGA) sockets currently used to seat processors. The slide seemed to indicate that in 2014 a Ball Grid Array (BGA) mechanism would be used to directly mount Broadwell CPUs to the motherboard using solder.

Shortly after the leak, there was a flurry of articles posted online about the future of the (desktop) PC as we know it, which culminated in Intel making a statement to Maximum PC, essentially denying the move away from LGA packaging. In his article, Maximum PC's Deputy Editor Gordon Mah Ung made some good points about why it would be difficult for Intel, and the desktop PC industry as a whole, to switch to BGA packaging. However, demand for the traditional desktop form factor has decreased significantly in the consumer marketplace, and I felt that this recent insight into a possible future was worthy of a post.

Standards vs. Style

For literally decades now, the desktop PC form factor (specifically, the AT and ATX standards) has allowed people to design, build, upgrade and maintain their own system(s) using a slew of components produced by dozens of manufacturers. With the move towards smaller and more portable devices, we have seen the general consumer eschew this bulky device in favour of laptops and, more recently, tablets and smartphones. Those consumers who do opt for a desktop system are often drawn towards all-in-one devices like the Apple iMac or HP Omni machines.

In many of these new, slimmer devices, the CPU is often attached directly to the motherboard or difficult to replace. In the case of the iMac, it appears you can upgrade the CPU, but the new processor can't deviate too much from the original CPU's TDP. This makes sense; Apple don't want consumers meddling with their systems' interiors, so why add support for other processors? For the most part, processors are not considered "upgradable" in slimmer/portable devices; people will just buy a whole new device when their existing one becomes too slow.

But it hasn't stopped at the CPU: in the quest to produce the most portable, slim and desirable devices, manufacturers have been directly attaching other components to systems' motherboards, such as RAM and storage (solid-state NAND chips). This has produced some extremely thin machines, but at the expense of user-serviceability (in fact, it's produced some of the most environmentally unfriendly machines ever). Purchasing one of these machines is akin to buying a phone or tablet; you need to ensure the system's specs will be sufficient for the entire life cycle of the machine. I think this is a bit unfair, considering most people upgrade their phones yearly, whereas I would expect a laptop purchase to last an absolute minimum of three years (I actually have a seven year old laptop at the heart of my CCTV system).

The End is Nigh?

Given the trend of smartphone/tablet SoCs slowly absorbing more functionality and portable computers becoming harder to maintain and upgrade, I felt comfortable knowing that my trusty (yes, big and bulky) desktop would continue to provide me with an upgradable, customisable and truly personal computer. That was, until the slide was leaked to the press and all the speculation started!

It was while listening to the Anandtech podcast (and later reading the Maximum PC article) that things were put into perspective. On the show, the possibility was raised that Broadwell may only be intended for mobile devices and that desktop users would have to wait until the following major architectural change (a "tock" in Intel's release cadence). It was even pointed out that Intel has set a precedent for this already with the move from Nehalem to Sandy Bridge; no six-core, high-end part was released to replace the older equivalent, which is why so many people delayed upgrading their Gulftown-based systems (i7-970, 980X, 990X) until Sandy Bridge-E was announced. If you wanted an eight-core part using the newer architecture, it would be necessary to splash out on a Xeon system; a much more expensive proposition.

For the enthusiast market, it has been argued that processor upgrades are a rarity. Given Intel's socket lifecycle, upgrading one's PC will usually involve a new motherboard purchase anyway, to accommodate a new CPU architecture. While this is true, I know many people who bought/built a system and had to settle for a lower-spec CPU, but later on were able to afford a higher-end part because of the naturally falling price of product lines as they age. Beyond that, there are two other issues that I, and many others in the system-builder community, have concerns over:

  • Repairing/replacing damaged components - the obvious issue. If I have an expensive CPU attached to a motherboard, it's not easy to replace either one in the event of hardware failure. A motherboard has a lot of additional components; should I need to replace it, why should I be lumbered with the cost of replacing a (perfectly good) CPU?

  • High-end motherboards paired with high-end CPUs/confusing product line-ups - it's not clear how this integrated CPU approach will work from a buyer's perspective. Will motherboard manufacturers attempt to produce variants of each board, each with a different CPU, or will they simply put the high-end CPUs on the high-end boards? The former would probably be costly for the manufacturer and confusing for the consumer, while the latter would reduce consumer choice; currently, it's possible to pair a high-end motherboard with a lower-end CPU, taking advantage of the board's features (additional SATA and USB ports, RAID, Wifi, etc.) to create a useful workstation or home server.

Brave New World

As more and more motherboard functionality is absorbed by the CPU, it will be interesting to see how long the above two issues cause problems for system builders. It's pretty clear that the ATX form factor isn't a priority any more. Without AMD providing competition in the enthusiast/performance market, Intel can focus on the battle with ARM-based systems.

If the ATX form factor is slowly becoming extinct, will we see an alternative for the hardware enthusiast? Personally, I hope so, and there seems to have been a rise in the popularity of Micro-ATX and Mini-ITX systems, which is promising. These tend to require more forward planning and are more complex to build in, sometimes requiring case modifications, especially if the build uses higher-end components or liquid-cooling solutions.

Something I found particularly interesting in the Anandtech podcast was the mention of discussions with industry players (before this recent leak) that had revealed a possible future: boards with CPU, RAM, etc. directly attached, producing modular systems. This would fall in line with Intel's Next Unit of Computing (NUC) initiative, so perhaps the future for home desktops is consumer blade systems; upgrading or replacing a computing module in your home server would be as simple as swapping out a NUC. I could see specialised NUCs being developed for gaming or GPU compute, high I/O requirements, etc., which would allow hardware enthusiasts to customise their builds to their needs.

Maybe the outlook isn't as bleak as I first imagined... Still, I might brush up on my soldering skills and invest in re-flowing equipment, just to be on the safe side!

Monday, October 29, 2012

"Core Blimey!": Phenom II Core Unlocking

The manufacturing of microprocessors, such as the CPUs found in today's modern computers, is a complex and time-consuming process, often resulting in a low yield of fully functional components. A percentage of the chips produced may only partially work, while others won't work at all; only through rigorous testing can the perfect parts be identified. Rather than simply discard the chips with imperfections, the manufacturer will identify the working components on each die, and the stable clock speeds they can operate at, in order to "bin" the parts as different SKUs. Even after a manufacturing process has been perfected and the percentage yield of fully functional parts is higher, a company may decide to bin chips with no defects into a lesser SKU simply to meet demand for cheaper products; disabling perfectly functional CPU cores, for example.
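
To illustrate the idea, here is a purely hypothetical sketch of binning logic; the SKU names, core counts and clock thresholds are invented for illustration and don't correspond to any real product line.

```python
# A toy illustration of binning: classify a tested die by how many cores work
# and the clock speed it runs at stably. All names and thresholds are invented.
def bin_die(working_cores, stable_clock_ghz):
    if working_cores >= 4 and stable_clock_ghz >= 3.2:
        return "quad-core, high-clock SKU"
    if working_cores >= 4:
        return "quad-core, lower-clock SKU"
    if working_cores >= 3:
        return "tri-core SKU (one core disabled or defective)"
    if working_cores >= 2:
        return "dual-core SKU (two cores disabled or defective)"
    return "scrap/salvage"


# A die with four working cores might still end up sold as a cheaper part
# if demand for the lower SKU is high enough.
print(bin_die(working_cores=4, stable_clock_ghz=3.1))
```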

In 2009, a Korean over-clocker using a Biostar motherboard discovered it was possible to re-enable factory-disabled cores in AMD's Phenom series of CPUs. Once the news broke, some motherboard manufacturers started to add core "unlocking" features to their high-end products to try to facilitate the process. When building my current PC, the motherboard I settled on was an Asus M4A77TD Pro, which I discovered had this feature present in the BIOS the first time I booted the system.

Ever since then, I've been toying with the idea of trying to unlock any additional cores present on my dual-core Phenom II CPU. However, I was finally convinced by a YouTube video I watched recently. In the video, modern game titles were benchmarked to see the effect multiple CPU cores have on performance, and the Frostbite 2 engine was mentioned as specifically taking advantage of additional cores in a system, so I decided to undertake my own experiment:

Would it be possible for me to unlock additional cores present on my CPU and benefit from improved performance in games I regularly play?

To measure the change in performance of the system, I used a mixture of synthetic and real-world benchmarks. The synthetic benchmarks (save for Unigine Heaven) are geared towards testing the raw compute power the CPU has to offer, while the real-world tests will hopefully show the impact additional cores have on gaming.

Synthetic Tests

Cinebench 11.529
Both the single and multi-core CPU tests. The result is a score calculated by the program, with a higher score indicating better performance.
POVRay 3.7 Beta
Both single and multi-core tests again. Previous versions of the program only support single-threaded rendering, hence the need for the beta release. This test simply reports the amount of time (in seconds) that the benchmark render took to complete; a lower time is obviously preferable in this case.
wPrime 2.09
Calculates the square roots of a set of numbers; another benchmark that measures performance by timing how long the system takes to complete the calculation.
Unigine Heaven Benchmark 3.0
Using DirectX 11 at a resolution of 1920x1080, with 8xAA, 16x anisotropic filtering, hardware tessellation set to "normal", shaders and textures set to "high" and vsync off. The benchmark simply takes several predetermined paths through a 3D environment, recording the minimum, maximum and average frame-rate and awarding a score based on the performance.

Each of these tests will be run five times, with the median of the scores being used. In order to get the most consistent results possible, these benchmarks would be run as "Administrator" after preparing the system using Maximum PC's "How To Properly Benchmark Your PC" pre-flight checklist:
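
As a quick illustration of that aggregation step, the sketch below takes the median of five runs; the scores shown are placeholder values, not actual results.

```python
# A minimal sketch of how each benchmark is aggregated: run it five times and
# take the median, so a single outlier run doesn't skew the reported figure.
from statistics import median

# Placeholder scores for five runs of a single benchmark (not real results).
runs = [5.02, 4.98, 5.01, 4.87, 5.00]
print("Reported score:", median(runs))
```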

  1. Turn off screen savers
  2. Turn off power saving modes
  3. Disconnect from the network/Internet
  4. Disable antivirus and any other security-related tools
  5. Turn off Windows update
  6. Defrag HDD if needed
  7. Disable System Restore
  8. Reboot the machine
  9. Wait for the machine to fully boot and log on
  10. Force Windows to process tasks scheduled to run when the system is idle. This is a neat trick I learned from the Maximum PC article, which I think is worth repeating here (a scripted version of this step follows the checklist):
    1. Run "Command Prompt" as Administrator
    2. Type: Rundll32.exe advapi32.dll,ProcessIdleTasks
    3. Wait for disk activity to die down
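
If you'd rather script that last step than type it each time, something like the following Python sketch should do it when run from an elevated console; it simply invokes the same Rundll32 call shown above.

```python
# A small sketch automating step 10: ask Windows to process its pending idle
# tasks, then remind the user to let disk activity settle before benchmarking.
# Run this from a console started with "Run as Administrator".
import subprocess

subprocess.run(
    ["Rundll32.exe", "advapi32.dll,ProcessIdleTasks"],
    check=True,
)
print("Idle task processing requested; wait for disk activity to die down.")
```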

Real-World Tests

For my real-world tests, I would be conducting a short play-through of three games I regularly play. Each was performed five times, but without running the system through any special pre-flight checklist (i.e. a "real world" scenario). I configured FRAPS to benchmark over 60 seconds and to record the minimum, maximum and average frame-rate. I also decided to measure the time taken to render each frame during the benchmark, after reading a very intriguing article on The Tech Report site, entitled "Inside the second: A new look at game benchmarking", which investigated the difference between measuring the average FPS and the time taken to render each individual frame. Without going into too much detail, a system that produces a good average frame-rate can still struggle to render the occasional frame, which can result in a jarring experience while gaming. The three games I chose are detailed below, along with a rough sketch (after the list) of how the frame-time logs can be summarised.

Battlefield 3
The resolution set to 1920x1080, with the graphics options set to "ultra" and vsync on. I ran through a portion of the Operation Swordbreaker stage; in the parking lot, just after being attacked with the RPG.
The Elder Scrolls V: Skyrim
The resolution set to 1920x1080, with graphics options set to "ultra" and vsync on. I found an outdoor location that was near a giant's encampment with a dragon circling overhead and started benchmarking before attacking the giants.
Civilization V
Running in DirectX 11, with the resolution set to 1920x1080, 2x anti-aliasing, 16x anisotropic filtering and vsync on. I loaded a save from a late-stage game I had been playing and benchmarked the graphics performance while taking a turn. As this game really taxes other components in the system as well as the GPU, I decided to measure the time taken to load a saved game and how long the AI moves take to complete. This was probably the least accurate test, as I had to manually time these actions.
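
Here's the sketch mentioned above for summarising the FRAPS frame-time logs. It assumes the "frametimes" CSV contains a cumulative "Time (ms)" column with one row per frame; the exact column name and the example file name are assumptions, so check them against your own FRAPS output.

```python
# A rough sketch for summarising a FRAPS frame-time log: derive per-frame
# render times from the cumulative timestamps, then report average FPS,
# the slowest/fastest frames and how many frames exceeded a threshold.
import csv


def summarise_frametimes(path, slow_threshold_ms=50.0):
    with open(path, newline="") as f:
        reader = csv.DictReader(f, skipinitialspace=True)
        cumulative = [float(row["Time (ms)"]) for row in reader]

    # Per-frame times are the differences between consecutive timestamps.
    frame_times = [b - a for a, b in zip(cumulative, cumulative[1:])]
    total_seconds = (cumulative[-1] - cumulative[0]) / 1000.0

    print("Frames rendered:", len(frame_times))
    print("Average FPS:", len(frame_times) / total_seconds)
    print("Slowest frame (ms):", max(frame_times))
    print("Fastest frame (ms):", min(frame_times))
    print(f"Frames over {slow_threshold_ms} ms:",
          sum(1 for t in frame_times if t > slow_threshold_ms))


# Hypothetical file name from a Battlefield 3 run:
# summarise_frametimes("bf3 frametimes.csv")
```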

Stability Testing

One final round of tests I would need to carry out were aimed at testing any cores I was able to unlock (and the entire system) for stability.

  • IntelCPUBurn - 100 iterations with the stress test option set to "maximum".
  • Prime95 - the small FFT test run on all available cores overnight and through the next working day, plus the blend test for around 2 hours.

The Unlocking Process

After I had run the above synthetic and real-world benchmarks, I began the unlocking process:

  1. As the core unlocking feature is controlled via the BIOS, the first step was to reboot the machine.
  2. During POST, I pressed "4" on the keyboard (as prompted by the BIOS splash screen).
  3. This resulted in the machine instantly powering off (which I found rather disconcerting!), remaining in that state for a second or two, before powering back on again.
  4. Worryingly, the system did not seem to proceed to display the usual POST messages and instead the screen remained dark. I left this for several minutes before deciding to hit the reset switch on the machine.
  5. Fortunately, this allowed the machine to boot as normal, but this time, the splash screen displayed a message stating "3 cores are activated!" (see the image at the beginning of this blog post).
  6. Booting into Windows and starting Task Manager confirmed an additional core was now available to the operating system. Additionally, CPU-Z identified my processor as a Phenom II X4 B50 (codename "Deneb"), instead of a Phenom II X2 550 (codename "Callisto"), but with only 3 cores:

Before I started my stability tests, I did reboot and enter the BIOS to see the available settings relating to core unlocking, and I even tried running the core unlocker again to see if a 4th core could be activated. However, the result was always the same; just 3 cores could be unlocked. I did consider forcing the BIOS to unlock the 4th core, but I suspected that it remained locked because the BIOS was unable to confirm its stability.

The Results

After the extensive and lengthy stress testing completed without any errors, BSODs or shutdowns, I considered the additional core stable. That's a positive result in itself; I had successfully unlocked a third core on my CPU! In addition, I noticed the highest recorded temperature of the CPU was 52°C. This is much lower than the maximum operating temperature AMD state for the component, so I have some potential headroom if I decide to overclock the chip.

Continuing with the performance testing, I started seeing some interesting results. First, let's take a look at the CPU-bound synthetic tests:

As expected, when conducting multi-threaded tests after enabling the third core, there were up to 50% performance gains recorded (or exactly 50% in the case of Cinebench); great news if I run any CPU intensive tasks, like encoding video!
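
As a quick sanity check on that figure: for a perfectly multi-threaded renderer, the theoretical ceiling when moving from two to three identical cores at the same clock speed is simply the ratio of core counts, as the sketch below shows.

```python
# Theoretical best-case uplift for a fully multi-threaded workload when the
# core count changes and the clock speed stays the same.
def theoretical_gain(cores_before, cores_after):
    return (cores_after / cores_before - 1.0) * 100.0


print(f"{theoretical_gain(2, 3):.0f}% maximum expected uplift")  # -> 50%
```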

When comparing the Unigine results, a different story emerges:

A bit disappointing, especially the tri-core posting a lower minimum FPS value than the dual core! Considering that fact, I'm not too sure how the tri-core was awarded a higher score, as the maximum and average FPS values were only marginally higher. Whatever the reason, it doesn't look like the additional core has improved graphical performance at all. I suspect that the Unigine benchmark doesn't benefit from additional cores; it's not a full game that could potentially use additional threads for AI subroutines.

Moving onto the real-world tests, Battlefield 3 produced unimpressive, but interesting results. First, let's take a look at the minimum, maximum and average FPS achieved during each play-through:

Strangely, the tri-core system recorded ever-so-slightly lower maximum and average FPS values, but the minimum FPS value was raised by a similarly small amount. This is confirmed when looking at the time taken to render each frame:

The tri-core system actually renders fewer frames over the 60 second benchmark, but they are rendered at a more consistent rate. In fact, while the dual-core configuration resulted in 4 frames taking over 50ms to render, the 3 slowest frames in the tri-core setup only took just over 40ms. So despite the lower average FPS, the tri-core system should produce a smoother gameplay experience.

Skyrim's results are even more positive, with the min/max/avg comparison showing a clear improvement:

The tri-core system posted significantly higher maximum, minimum and average FPS values. In fact, for all 5 repetitions of this test, 61 FPS was the maximum recorded, which suggests the machine was hitting the vsync limit and could potentially be rendering faster without it enabled.

Looking at the frame times, the tri-core configuration produced frames more quickly (and therefore more of them in total) for the majority of the sixty-second benchmark. However, even with the additional core there was still a single frame that took over 60ms to produce.

Benchmarking Civilization 5 really made me understand just how much the game pushes my system:

The minimum frame-rate recorded for both the dual and tri-core configurations is extremely low; this is most likely while panning around the map with the mouse. Happily though, there are noticeable performance increases, which the frame-time comparison also shows:

The next two benchmarks are by far the most inaccurately measured; I had to simply use a stopwatch to record the time between clicking the mouse and the operation completing. I estimate this introduced a margin of error of one or two seconds. Despite this, the tri-core setup appears to have improved saved-game load times:

The time taken to process AI moves seems to improve ever so slightly, but I'm not sure the difference is significant enough, given the timing inaccuracies mentioned previously:

Conclusions

Given the results of the real-world tests, I wish I had taken the time to benchmark more games to see which titles have benefited from my tinkering. What I have noticed since the core unlock, however, is that if I have Windows Task Manager open while playing certain games (specifically, Crysis and Diablo III), there is significant load displayed on all three CPU cores, leading me to believe that they are taking advantage of the additional processing core.

Overall, I am happy with the results; clearly the third core provides some assistance when playing modern titles. What I'm particularly pleased about is that I'm able to notice the improved performance in Skyrim; the experience does seem smoother.

Given how easy it was to coax a little bit of extra performance out of my system, I'm now considering trying to overclock the CPU to boost it further. As I mentioned previously, the operating temperature of the CPU was well below the maximum, so I should be able to increase the voltage if necessary.