AMD responded to GN’s inquiries about the alleged expiry of MDF and rebates pertaining to Vega’s launch with a form statement. We attempted to engage in further conversation, but received replies of limited usefulness once the discussion fell into the inevitable “I’m not allowed to discuss this” territory.
Regardless, if you’ve seen the story, AMD’s official statement on Vega price increases is as follows:
As exciting as it is to see “+242% power offset” in overclocking tools, it’s equally deflating to see that offset only partly work. It does work, though, and so we’ve at least modestly increased our overclocking headroom beyond the stock +50% offset. The liquid cooler helps, considering we attached a 360mm radiator, two Corsair 120mm maglev fans, a Noctua NF-F12 fan, and a fourth fan for VRM cooling. Individual heatsinks were also added to the hotter VRM components, leaving two sets unsinked, but heavily cooled with direct airflow.
This mod is our coolest-running hybrid mod yet, thanks in large part to the 360mm radiator. There’s reason for that, too – we’re now able to push a peak of about 370-380W through the card, up from our previous limitation of ~308W. We were gunning for 400W, but it’s just not happening right now. We’re still working on BIOS mods and powerplay table mods.
Following the initial rumors stemming from an Overclockers.co.uk post about an imminent Vega price change, multiple AIB partners reached out to GamersNexus – and vice versa – to discuss the veracity of the post. The post by Gibbo of Overclockers suggested that AMD’s launch rebates and MDF for Vega would be expiring, which would drive pricing upward as retailers scramble to make a profit on the new GPU. Launch pricing of Vega 64 was supposed to be $500, but quickly shot to $600 USD once initial inventory sold out. This is also why the packs exist – they enable AMD to “lower” the pricing of Vega by making return on the other components.
In speaking with sources at different companies that work with AMD, GamersNexus learned that “Gibbo is right” regarding the AMD rebate expiry and subsequent price jump. AMD purportedly provided the top retailers and etailers with a $499 price on Vega 64, coupling sale of the card with a rebate that reduced retailer spend – leverage to force the lower price, and an encouragement for retailers to try selling at $499. That $100 rebate from AMD is already expiring, hence the price jump by retailers who need to make a return. With the rebates gone, AMD’s leverage goes with them, and retailers/etailers revert to their own price structures, as margins are exceptionally low on this product.
Where video cards have had to deal with mining-driven cost, memory and SSD products have had to deal with DRAM and NAND supply and cost. It looks like video cards may soon join the party, as – according to DigiTimes and sources familiar with SK Hynix & Samsung supply – memory quotes to manufacturers increased 30.8% in August. That’s a jump from $6.50 in July to $8.50 in August.
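The reported percentage checks out against the quoted figures – a trivial sanity check:

```python
# Sanity check on the reported quote increase (figures from the DigiTimes
# report cited above; units are USD per quoted part).
july_quote = 6.50
august_quote = 8.50

increase_pct = (august_quote - july_quote) / july_quote * 100
print(f"{increase_pct:.1f}%")  # → 30.8%
```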
It sounds as if this stems from a supply-side deficit, based on initial reporting, and that’d indicate that products with a higher count of memory modules should see a bigger price hike. From what we’ve read, mobile devices (like gaming notebooks) may be more immediately impacted, with discrete cards facing indeterminate impact at this time.
Tearing open the RX Vega 56 card revealed more of what we expected: a Vega: Frontier Edition card, which is the same as Vega 64, which is the same as Vega 56. It seems as if AMD took the same PCB & VRM run and increased volume to cover all of these cards, thereby ensuring MOQ (minimum order quantity) is met and theoretically lowering cost across all devices combined. That said, the price also increases in unnecessary ways for the likes of Vega 56, which has one of the most overkill VRMs a card of its ilk possibly could have -- especially given the native current and power constraints enforced by the BIOS. Still, we're working on power table mods to bypass these constraints, despite the alleged Secure Boot compliance by AMD.
We posted a tear-down of the card earlier today, though it is much the same as the Vega: Frontier Edition -- and by "much the same," we mean "exactly the same." Though, to be fair, V56 does lack the TR6 & TR5 screws of FE.
Here's the tear-down:
“Indecision” isn’t something we’ve ever titled a review, or felt in general about hardware. The thing is, though, that Vega is launching in the midst of a market that behaves completely unpredictably. We review products as a value proposition, looking at performance to dollars and coming to some sort of unwavering conclusion. Turns out, that’s sort of hard to do when the price is “who knows” and availability is uncertain. Mining does all this, of course; AMD is launching a card in the middle of boosted demand, and so prices won’t stick for long. The question is whether the inevitable price hike will match or exceed the price of competing cards. NVIDIA's GTX 1070 should be selling below $400 (a few months ago, it did), the GTX 1080 should be ~$500, and the RX Vega 56 should be $400.
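The problem can be shown in miniature. The sketch below uses placeholder FPS numbers (not measured results) and a hypothetical mining-inflated price set alongside the “should be” prices named above – the point is only that a performance-per-dollar ranking flips when prices drift:

```python
# Placeholder FPS figures for illustration only -- not measured results.
cards_fps = {"GTX 1070": 100, "GTX 1080": 120, "RX Vega 56": 105}

def rank_by_value(prices):
    # Higher FPS per dollar = better value; sort best-first.
    return sorted(cards_fps, key=lambda c: cards_fps[c] / prices[c], reverse=True)

# "Should be" prices vs. a hypothetical mining-inflated market.
intended = rank_by_value({"GTX 1070": 400, "GTX 1080": 500, "RX Vega 56": 400})
inflated = rank_by_value({"GTX 1070": 470, "GTX 1080": 550, "RX Vega 56": 500})
print(intended)  # → ['RX Vega 56', 'GTX 1070', 'GTX 1080']
print(inflated)  # → ['GTX 1080', 'GTX 1070', 'RX Vega 56']
```

With one set of prices the review concludes one way; with another, the opposite – hence the indecision.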
Conclusiveness would be easier with at least one unchanging value.
Storing multiple terabytes of video content monthly is, obviously, a drive-intensive business -- particularly when using RAID for local editing scratch disks, a NAS for internal server access, and remote web backup. Rather than buy more drives and build a data library that is both impossible to manage and impossible to search, we decided to use our disks smarter and begin compressing b-roll as it falls into disuse. Deletion is the final step, at some point, but the compressed footprint is small enough to be a non-concern for now. We're able to compress our b-roll by anywhere from 50% to 86%, depending on the kind of content contained therein, and do so with nearly zero perceptible impact to content quality. All that's required is a processor with a lot of threads, as that's what we wrote our compression script to use, and some extra power each month.
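The script itself hasn't been published, but the approach is straightforward to sketch. The version below is a minimal stand-in, assuming ffmpeg is installed, source clips are .mp4, and libx265 is the target codec (the directory name, CRF value, and worker count are all our own placeholders, not GN's settings):

```python
# Sketch of a many-threaded b-roll compression pass. Assumes ffmpeg with
# libx265 support is on PATH; "broll" is a placeholder directory name.
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def savings_pct(original_bytes: int, compressed_bytes: int) -> float:
    """Percent reduction in size, e.g. 100 MB -> 14 MB is 86.0."""
    return (1 - compressed_bytes / original_bytes) * 100

def reencode(src: Path) -> None:
    dst = src.with_suffix(".x265.mp4")
    # ffmpeg is itself multithreaded, so a small pool of jobs is enough
    # to keep a high-thread-count CPU busy without thrashing the disk.
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(src), "-c:v", "libx265", "-crf", "24",
         "-preset", "medium", "-c:a", "copy", str(dst)],
        check=True,
    )
    saved = savings_pct(src.stat().st_size, dst.stat().st_size)
    print(f"{src.name}: {saved:.0f}% saved")

if __name__ == "__main__":
    broll_dir = Path("broll")
    if broll_dir.is_dir():
        with ThreadPoolExecutor(max_workers=4) as pool:
            list(pool.map(reencode, sorted(broll_dir.glob("*.mp4"))))
```

At the 86% end of the range quoted above, a 100GB month of b-roll shrinks to roughly 14GB.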
Threadripper saw use recently in a temporary compression rig for us, as we wanted to try the CPU out in a real-world use case for our day-to-day operations. The effort can be seen below:
Computers have come a long way since their inception. Some of the first computers, built for the military, used electromechanical systems to calculate torpedo trajectories. Since then, computers have become almost incomprehensibly more powerful and accessible, to the point that virtual reality headsets aren’t even science fiction anymore.
In gaming PCs, these power increases have often been used to push higher FPS, faster game mechanics, and more immersive graphics settings. But the computational power in modern PCs can be put to a variety of other uses. Many – design, communication, servers, and so on – are well known; one lesser-known use is contributing to distributed computing programs such as BOINC and Folding@home.
BOINC (Berkeley Open Infrastructure for Network Computing) and Folding@home (also referred to as FAH or F@H) are research programs that utilize distributed computing to provide researchers with large amounts of computational power without the need for supercomputers. BOINC allows users to support a variety of projects (including searching for extraterrestrial life, running molecular simulations, predicting the climate, and more). In contrast, Folding@home is run by Stanford and is a single program that simulates protein folding.
First we’ll discuss what distributed computing is (and its relation to traditional supercomputers), then we’ll cover some notable projects we’re fond of.
Visiting AMD during the Threadripper announcement event gave us access to a live LN2-overclocking demonstration, where one of the early Threadripper CPUs hit 5.2GHz on LN2 and scored north of 4000 points in Cinebench. Overclocking was performed on two systems, one using an internal engineering sample motherboard and the other using an early ASRock board. LN2 pots will be made available by Der8auer and KINGPIN, though the LN2 pots used by AMD were custom-made for the task, given that the socket is completely new.
The launch of Threadripper marks a move closer to AMD’s starting point for the Zen architecture. Contrary to popular belief, AMD did not start its plans with desktop Ryzen and then glue modules together until Epyc was created; instead, the company started with an MCM CPU more similar to Epyc, then worked its way down to Ryzen desktop CPUs. Threadripper is the fruition of this MCM design on the HEDT side, and benefits from months of maturation for both the platform and AMD’s support teams. Ryzen was rushed in the weeks leading to its launch, which showed in both communication clarity and platform support in the early days. As things smoothed over and AMD resolved many of its communication and platform issues, Threadripper became the beneficiary of these improvements.
“Everything we learned with AM4 went into Threadripper,” one of AMD’s representatives told us, and that became clear as we continued to work on the platform. During the test process for Threadripper, work felt considerably more streamlined and remarkably free of the validation issues that had once plagued Ryzen. The fact that we were able to instantly boot with 3200MHz (and 3600MHz) memory gave hope that Threadripper would, in fact, be the beneficiary of Ryzen’s learning pains.
Threadripper will ship in three immediate SKUs:
Respectively, these units are targeted at price points of $1000, $800, and $550, making them direct competitors to Intel’s new Skylake-X family of CPUs. The i9-7900X is the flagship – for now, anyway – that’s most directly challenged by AMD’s Threadripper HEDT CPUs. Today's review looks at the AMD Threadripper 1950X and 1920X CPUs in livestreaming benchmarks, Blender, Premiere, power consumption, temperatures, gaming, and more.