From what I've seen of the online community, there's hardly any evidence of FX-8150s selling at all. They haven't been quite as much of an economic flop as a technological one, but they still haven't made much of an impact. Llano, on the other hand, has been selling pretty well, and as a low-end all-in-one system for HTPCs and the like, it's one of the best options out there. Ivy Bridge has turned out to be a failure: there is literally no reason to buy one if you already have any i5/i7, let alone a Sandy Bridge CPU. The performance per MHz is roughly the same (around a 5% gain), and the base clock speeds are the same too, so out of the box they're no faster. They use less power, but due to a badly designed heatspreader they run hotter than their Sandy Bridge counterparts, and are therefore less easily overclocked, so all the advantages of the lower power consumption of 22nm silicon are lost. In effect, Ivy Bridge is almost Intel's Bulldozer.
I hate moving and I know what you mean about slowing down, but with two moves so close, wow. I build my own shed(s) too; in fact the last one is the size of a nice luxury bedroom. Throw a heater out there and you could live in it; it even has skylights. LOL At any rate, no hurry mate, for sure.... The launch of the 6600 Quad was my favorite of the Intels, and it really performed for the times; that was/is a great processor. I think the pricing and the times we are in are why the newer Intel generations are not flying off the shelves; if times were better, they would be selling better. I was waiting for Sam to chime in, and of course no surprise in his reply either. I'm with you; many people I've talked to have bought or are thinking of buying the FX CPUs. In fact, one of my better friends who was talked into buying the Dirtybridge 2600 is changing course back to AMD after two new Intels. The Best of course, Stevo
I'd say the majority (but not all) of people who use quad graphics use two dual-GPU cards, because it's much easier, primarily from a cooling standpoint - using three or more cards usually involves zero spacing between cards, which is notorious for causing overheating issues. Being limited to two PCIe slots, even running at 8x, is a total non-issue: it's been shown that even running at 4x per GPU typically costs less than 10% performance in all but one or two outlier scenarios. I ran my HD4870X2 quad Crossfire setup on 16/16 (X48-DS4) and 8/8 (P55A-UD4) and couldn't tell the difference, even at 2560x1600 - the faster CPU of the i5 versus the Core 2 made more of a noticeable difference than the slot speed, as the system was faster or the same in every case, depending on whether the bottleneck was GPU or CPU.

Knowing that the Phenom II range and 45nm Core 2s were pretty much equivalent (the nomenclature of the Phenom IIs was deliberate - 940/Q9400, 955/Q9550 and so on), knowing how much of a benefit I saw from upgrading from my Q9550 to the i5 750, and seeing - both from benchmark numbers, which you can take or leave, and from first-hand experience - how much faster the Sandy Bridge CPUs are, it surprises me that anyone would prefer AMDs for games. I don't honestly think anyone does, though.

The biggest proponents of FailDozer (not like me to really use such terms, but since we're starting that trend with 'Dirty Bridge', might as well continue it) are typically those who run encoding applications, as that's where the additional cores of the Phenom II X6 and Bulldozer play more of a role. For those who only really use the sort of software that properly utilises multi-core systems, it's pretty much a level playing field: the X6 1100T, FX-8150 and i7 2600K are all within spitting distance of each other. The difference is how many cores they use to achieve the same result.

The FX-8150 can only ever compete in these environments (which is fine for many) and can't really be overclocked without power consumption getting out of hand. The 1100T is probably the best balance - or should I say was - between having many cores and having sufficient muscle: it was cheap and not too demanding on power. Overclocking results weren't spectacular, but there were tangible benefits. Alas, with the discontinuation of the X6, it's now the inferior Bulldozer or 'Dirty Bridge'. The i7 2600K lacks the extra cores to keep the desktop totally smooth while encoding video at 100% CPU, but it's still capable enough to do anything you need to do, including watch HD video. It also has an enormous lead in anything that doesn't use every core, and can be overclocked considerably, with a comparatively minimal power and heat footprint.

The long and short of it is that Bulldozer is the inferior product, but it's cheaper, and thus is genuinely the better buy for those who can make best use of its advantages. For people whose primary objective is not video encoding, the i5 and i7 are streets ahead, and not a great deal more expensive.
Well, yes and no. There are a few particular games that will show a CPU bottleneck long before a graphics bottleneck - Sins of a Solar Empire (fantastic) and Supreme Commander: Forged Alliance (fantastic-er, lol) to name a couple. You are generally right on the money though. I'm due for a video upgrade soon, but prices are such that it's become a bit difficult for me to replace the power of these cards even for the same price I paid (about $200 each). So prices are going to need to drop before I can go ahead with an upgrade. Yep, it's expensive; I'll be limiting myself to dual graphics and 4 cores for the foreseeable future, unless Bulldozer gets fixed or I can find a Phenom II X6. Otherwise I'm still going to be using quads. Graphics might be another story, but I'm reasonably sure it's going to be 2 cards, or 1. I could never afford to keep a quad graphics setup up to date; dual graphics I can just about swing. Yep, Sam hit it right on the head. This was most recently tested with an HD5870, which is no slouch of a card - almost zero difference until you hit 1x. I prefer AMD as my brand of choice, but for gaming? Intel is unarguably better. Again though, price is a large factor, and regardless of anything else, Phenom II was straight-up cheaper. Speed-wise Phenom II is still quite respectable, and modern games have driven graphics requirements more than CPU requirements, so do the math. If it didn't push my video cards adequately, I wouldn't be using it ;P
Definitely, running multiple video cards in SLI or Crossfire is a waste of money, as even if all the slots are capable of 16x, the GPUs are still master/slave and only a quarter or so of the work is handed off from the master card. It is much better to get an x2 video card than two singles. I don't buy that Bulldozer is inferior, and the extra cores are a major plus; again, Intel is much better at clock rates while AMD takes more of a brute-force, many-core approach. CPUs need to be compared as whole CPUs, by how much they can handle, not on a core-to-core basis. This is why I don't buy your argument. I agree, though, that if I want to push my CPU to the max clock rates, Intel is hands-down better. While that may be true, it is poor programming by the game maker, which is nothing new, or the game is optimized for a given CPU instruction set. In the old days people would program around flaws in a given CPU so that their program or game would perform better. But that doesn't really matter to the gamer; if it is their favorite game, you have to go with what works best for your use, for sure. These days, though, I would think that would be extremely rare.
That's debatable. There are some marginal gains to be had from a single dual-chip card, but in practice there isn't much difference except heat. Likewise, 90%+ scaling on quite a few games would argue with that as well. All the way down to 4x, the cards will perform the same, so running on a single slot only really saves you space and maybe some heat due to more open space around the card. Also, the dual-chip cards generally use two very high-end GPUs, i.e. out of my price range. I can Crossfire two individual mid-range cards and be much more cost-effective. Well, I AM a gamer, and regardless of how good Bulldozer performs as a CPU, games still need single-core performance as much as multi-core performance. So in some games Bulldozer runs just great, but in others it chokes pretty badly. I need both good single- and multi-core performance in a gaming CPU. I'm not saying Bulldozer is inferior, my good sir - simply that it isn't as ideal for gaming as Intel CPUs currently are.
I missed this point. While it's true that graphics are more often the bottleneck than the CPU, this is far from always the case. SCFA is one good example, but there are several others out there. Even Counter-Strike: Source, despite its age, with the sort of mods that are very popular (thinking mainly gungame here) - my old Q9550 could not keep up in a 24-player map. The difference between it and my i5 is enormous. Given that CSS is single-threaded, and Bulldozer is even slower per thread than my Q9550 was, that's one CPU that can't really play that mod well at all, whereas my £150 i5 from February 2010 can with ease. And before you dismiss it for being a mod, gungame is if anything more popular than CSS itself, which is still very popular despite its age. Strategy games in general are another key example.

I'm far from pushing my CPUs to the max. When I overclocked my i5 I literally chose a value that I thought realistic (4.11 GHz from a base of 2.66), set it, upped the voltage a bit, and left it there. That was literally it; I haven't touched the settings since the day after I bought the CPU, well over two years ago. More than a 50% performance boost for free, on air cooling? Why not. We're all technology enthusiasts here, so advocating the CPU that can't do this, and is slower to begin with, is a choice I will never truly understand.

Mr-Movies: I don't wish to be blunt here, but it's clear you don't really understand how multi-GPU technology works. X2 cards work the exact same way as using Crossfire with a bridge connector, and the master/slave arrangement hasn't been the case since dongles were done away with back in 2007. GPUs all interleave frames now, and while the technology is far from perfect, it's fairly typical to see a 90%+ gain when using two cards, or two GPUs.
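For what it's worth, the maths on that overclock (a quick sketch, assuming performance scales roughly linearly with clock on the same chip, which is a fair approximation for a CPU-bound game like CSS):

```python
# Gain from running the i5 750 at 4.11 GHz instead of the stock 2.66 GHz.
stock_ghz = 2.66
oc_ghz = 4.11

gain = oc_ghz / stock_ghz - 1
print(f"{gain:.1%}")  # ~54.5%, i.e. "more than a 50% boost"
```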
Yep, master/slave hasn't been around since the X1900 series. Thousands of units on-screen at once, each with its own path-finding, AI, attack routines and build routines - not to mention large battles with huge amounts of particle effects and explosions. The CPU is still very important for performance in Real-Time Strategy. First-person shooters are generally another story; graphics cards are much more important for shooter performance. Gaming is very multi-faceted, and no single setup will work best for every game. I've tried to build for all scenarios.
That would be a good example of something that relies on single-core performance - very common with many games. Bad programming or not, Intel runs better. Is it a sad truth? YES, but it IS true. I have been using AMD since I was a wee lad of 15 building my first PC, so please, Movies, don't take me as anti-AMD in any way. You couldn't be further from the truth.
Let's not focus too much on one arena though. A lot of dyed-in-the-wool AMD fans concede that Bulldozer is not for gaming, but maintain its superiority in other areas, and that's fair enough, but what has to be spelt out is that the single-core, 2-core and 4-core limitations that hit games so hard also hit other areas equally hard. Outside of when I encode video, nothing I use ever manages to take advantage of all 4 cores on my CPU, let alone if I had 8.
A sad truth. I am very partial to AMD in general but I can't ignore facts. Multi-core performance still gets largely ignored by software developers, so Bulldozer gets screwed, even if it is actually a very good CPU. That chart is a good example of why I haven't gone to Bulldozer. It simply isn't any faster in my chosen software. Does that mean it's straight-up slower? NO. It's just slower in games. I will maintain that Bulldozer is a fantastic CPU, but maybe a bit ahead of its time as far as programmers are concerned.
Perhaps, but in my mind it's no better than the X6 1100T, because each core is roughly 75% of the performance of an X6 core; therefore the raw performance, even in a best-case 8-core scenario, is about the same as an X6. If it were faster in an 8-core environment than anything else out there, then sure, it'd be a great CPU, just set back by poor programmers. Unfortunately, that's not the case; the raw performance isn't sufficient to take on the i7 2700K. It just wins on costing less than the 2700K. Even if that were the case, you unfortunately can't sympathise with a company that's built a CPU for a market that doesn't really exist. Server CPUs exist for a reason, and they cost a lot. The FX-8150 is basically a server chip being peddled to standard end users.
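To put a number on that (a quick sketch; the 75%-of-an-X6-core figure is the rough estimate above, not a benchmark):

```python
# Core-equivalence check for the claim above: if each Bulldozer core does
# ~75% of the work of a Phenom II X6 core at the same clock, eight of them
# add up to roughly six X6-class cores - no net throughput gain over the
# 1100T even in a perfect 8-core workload.
bd_cores, bd_per_core = 8, 0.75   # FX-8150; per-core figure is an estimate
x6_cores, x6_per_core = 6, 1.00   # Phenom II X6 1100T as the baseline

print(bd_cores * bd_per_core)  # 6.0 "X6-equivalent" cores
print(x6_cores * x6_per_core)  # 6.0 - a wash
```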
I've been back and forth on the issue a lot. But this is also exactly why I expect a revival à la Phenom II to bring Bulldozer up to snuff. Phenom I to Phenom II was a fantastic upgrade.
Ever hopeful, because if nothing else, it's the only thing that will spur Intel on to drop prices and innovate further. Their lead has been so large for so long, it seems like they didn't really bother with Ivy Bridge.
A fair majority of the newer programs I use do use, or are capable of using, multiple cores, and even if they are not, I can still multitask older programs better with more cores, and Windows 7 as the OS will handle some of that. There is nothing wrong with server chips; that certainly is not a negative. Besides, Intel and AMD both base their consumer chips on server CPUs from time to time. No big deal, petty excuse... In fact, I've had friends that use server boards as their gaming platform and were very happy with the results. I've said it before: I just don't see the beef in your precious Intels, and I've given them a fair go too. But I also don't use some of the games that prefer the Intels either, and if that were my main focus then I might see the worth.
It's not even an Intel/AMD bias. It's not a "precious" Intel thing or anything like that. Sam, like myself, got his start gaming on AMD hardware as well. We're not trying to be biased, just realistic. You can't pick one and say the other is evil; you just rob yourself of performance. Raw GHz is not what's important; it's the speed per core, and Bulldozer just doesn't have it. It doesn't even fully match up to the product it was meant to replace. Yes, it's newer technology, and a very innovative one at that, but it's having teething problems.

Some server platforms would be fine for gaming. Older Socket 939 Opterons were wonderful, as they were essentially Clawhammer/San Diego chips with a different core name and better overclocking. It's the same case with previous and current generations of Intel CPUs: the server and desktop lines largely share the same basic hardware. The old Opteron and Xeon chips were literally higher-binned Athlon 64s and Core 2 Duos. But beyond the server chips that share a socket with desktop chips, it's very hard to effectively build a gaming PC using server hardware.

In the case of Bulldozer, it's not a higher-binned desktop chip; it really is a server CPU meant for cluster environments such as render farms or supercomputers. It isn't entirely suited to a desktop environment with so many variable loads. These chips are probably wonderful when you have many of them together, because a purely multi-threaded environment such as a high-load server or computing cluster is more suited to the design.

We're not down-talking AMD's product at all. It simply isn't the right product for desktops as it currently stands. The OS will allow excellent multi-tasking, but that is multi-tasking, not single-application performance. As stated many times previously, Bulldozer has many strengths, but they are not in pure performance. I wish the case were different, but it just isn't. Particularly if you do video encoding or Folding or some other heavily multi-threaded task, they are very cost-effective, but even then they aren't the fastest. Most people don't do video encoding, Folding, Photoshop, or anything else heavily multi-threaded. Regardless of relative value, they are slower at most things. It's not an opinion; it's a fact.
It's truly not hard to build a gaming machine on a server platform. Variable loads on a server should be a piece of cake also; that is what servers are made for, and you're not running a server OS, so that shouldn't hinder you. I still don't buy that most games don't run fine on an AMD platform. I do still game a little, and have games that are very intensive; they all run fine as well. You are down-talking, but that doesn't matter if it is true, and maybe for the few games you like it is true. From personal experience, though, I would disagree. My disdain for Intel isn't that they make a bad processor - they don't - it is the cost, as they are not a smart buy, and I totally don't see the bloated performance increases, which are much smaller than typically reported. Again, that is not an opinion; it is backed by personal testing with multiple real-environment apps. If your argument back is that most people don't even do heavy tasking on their PCs, then that would all the more support the point that they don't need the extreme performance of the Intels - being sarcastic, of course. Like I've said time and again, over and over, if they spin your boat, knock yourself out; however, I would argue the point, and a few games that might perform better isn't enough to change my thoughts on the matter. I won't be wasting my hard-earned cash on an Intel anytime soon.
It has happened before - Pentium 4 and Pentium D being prime examples. This is actually the most accurate criticism of the AMD fans out there: often people don't need the performance of the Intels, and cheaper AMDs are a better buy. Of course, cheaper Intels are also a better buy, so it's still a toss-up, but the i5/i7s aren't necessary for everyone, far from it. I would far sooner see people with low-end AMDs and SSDs than i5s and mechanical OS drives, for instance.

Actually it's neither fact nor opinion; it's a placebo. This is essentially the single reason why people are still buying Bulldozers: barring the few odd occasions when there are genuine advantages (i.e. people who don't overclock, and use video encoding apps that have a bias towards AMD and use all 8 cores), it is simply because people who use AMDs don't realise what they're using is slower, and when they test Intels, they don't physically notice the difference, because they are predisposed against the idea of it being there. The human mind does work in odd ways and can often conceal hard evidence displayed before your very eyes - if you start out anticipating something to be true, it can be very easy to ignore the fact when it turns out not to be the case.

I've had my scepticism about several technologies in the past - dual-core CPUs for games, dual graphics, SSDs, and a fair few other things. When you first get them, you don't often realise the benefits even though they're right in front of you. Fortunately, long and arduous though the process was, I've learnt to always take a step back and examine things at face value. I've now been doing this long enough to spot dubious results in benchmarks, and trust me, they are there. Benchmarks are far from hard fact in many cases, but when you examine a large number of them from all sorts of angles, and simply think about situations you know from the past, you get a good picture of what's going on.

Ultimately it is irrefutable fact that the original Core i5/i7 far surpassed the performance per core at stock of the Phenom II; that was a given. Bulldozer is slower per core than Phenom II in every case; we know this. Disregarding potentially biased sites, you can always use intra-brand comparisons to work out how much better a CPU is than its predecessor, and from looking through countless sites some very solid figures emerge. Bulldozer is about 75-80% as powerful, per core, per MHz, as Phenom II. The first Core i5/i7 (Lynnfield/Bloomfield) is 35% more powerful than Phenom II, per core, per MHz. The second generation (Sandy Bridge) is about 15% beyond that, and the third generation (Ivy Bridge) is 5% beyond that. That gives us (and I've thrown in some legacy results as well):

NETBURST ARCHITECTURE (Intel 90nm, P4 3.4GHz etc.): 46% performance, e.g. P4 3400: 3400 x 1 x 46% = 1,564
CORE ARCHITECTURE (Intel 65nm, Q6600 etc.): 87% performance, e.g. Q6600: 2400 x 4 x 87% = 8,352
PENRYN ARCHITECTURE (Intel 45nm, Q9550 etc.): 100% performance (the baseline), e.g. Q9550: 2833 x 4 x 100% = 11,333
BLOOMFIELD ARCHITECTURE (Intel 45nm, i7 920 etc.): 120% performance, e.g. i7 920: 2666 x 4 x 120% = 12,800
SANDY BRIDGE ARCHITECTURE (Intel 32nm, i7 2600K etc.): 138% performance, e.g. i7 2600K: 3400 x 4 x 138% = 18,768
IVY BRIDGE ARCHITECTURE (Intel 22nm, i7 3770K etc.): 145% performance, e.g. i7 3770K: 3500 x 4 x 145% = 20,300
AMD MANCHESTER/TOLEDO (X2 4200+ etc.): 85% performance, e.g. X2 4200+: 2200 x 2 x 85% = 3,740
AMD DENEB (X4 955BE etc.): 88% performance, e.g. X4 955BE: 3200 x 4 x 88% = 11,264
AMD ZAMBEZI (FX-8150 etc.): 66% performance, e.g. FX-8150: 3600 x 8 x 66% = 19,008
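If anyone wants to replay or tweak that arithmetic, here's a quick Python sketch of the same back-of-the-envelope formula. To be clear, the per-core multipliers are my rough estimates from above, not measured benchmarks, so the totals are ballpark figures only:

```python
# Score = clock (MHz) x cores x per-core efficiency relative to Penryn
# (Q9550 = 100%). Multipliers are the rough estimates from the post above.
cpus = [
    # (name,                    MHz,  cores, per-core efficiency)
    ("P4 3400 (Netburst)",      3400, 1, 0.46),
    ("Q6600 (Core)",            2400, 4, 0.87),
    ("Q9550 (Penryn)",          2833, 4, 1.00),
    ("i7 920 (Bloomfield)",     2666, 4, 1.20),
    ("i7 2600K (Sandy Bridge)", 3400, 4, 1.38),
    ("i7 3770K (Ivy Bridge)",   3500, 4, 1.45),
    ("X2 4200+ (Manchester)",   2200, 2, 0.85),
    ("X4 955BE (Deneb)",        3200, 4, 0.88),
    ("FX-8150 (Zambezi)",       3600, 8, 0.66),
]

for name, mhz, cores, eff in cpus:
    print(f"{name:26} {mhz * cores * eff:>9,.0f}")
```

Swap in any chip's clock speed and core count to put it on the same rough scale.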