It's been three years since we first visited the Cloud Imperium Games studios in Santa Monica, though we've conducted a dozen interviews with CIG CEO Chris Roberts in the time since. Now, taking a victory lap through Southern California's hardware manufacturers, we stopped over at CIG's offices for a second in-person visit.
A lot has changed. The studio, for one, is now in a new location that's farther from the Santa Monica beach, but in a larger space. The team has grown significantly in size and organizational complexity, along with the challenges it faces, and Roberts has adjusted his interview approach just enough to ease off on providing release dates.
Our latest visit had us focusing on the new planetary procedural generation tech, version 2.0, the predecessor of which we originally detailed two full years ago. Roberts talked us through the start-to-finish plans for CitizenCon's presentations, additional Alpha 3.0 launch details, Star Marine, procedural generation, character tech, and engine refactoring in a forty-minute interview. We've split the interview into two parts, the second of which will go live on Monday (September 26). Our time spent in the office was doubled to accommodate a second interview with Technical Director Sean Tracy, responsible for answering our deeper hardware and software engineering questions. That content will go live next week, after the first two parts of the interview with Chris Roberts.
MSI and system integrator CyberPower are selling the new GT83VR Titan SLI notebook, which ships with K-SKU Intel CPUs and dual GTX 1070 or GTX 1080 GPUs. The move away from M-suffixed cards means these GPUs are effectively identical to their desktop counterparts, with the exception of the notebook GTX 1070's core increase and clock reduction.
That difference, just to quickly clear it away, amounts to 2048 CUDA cores on the notebook 1070 (vs. 1920 on the desktop) and a boost clock of 1645MHz on the notebook (1683MHz on the desktop). Despite talk about the 1060, 1070, and 1080 notebook models, we haven't yet gotten into the SLI models for this generation.
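For a rough sense of how the core increase offsets the clock reduction, here's a back-of-envelope FP32 throughput comparison using the boost clocks above. This is a paper-spec sketch only (it assumes sustained boost clocks and one FMA, i.e. two FP32 ops, per core per clock); real-world performance depends on thermals and Boost behavior.

```python
# Back-of-envelope FP32 throughput for the desktop vs. notebook GTX 1070,
# using the boost clocks cited above. Each CUDA core can retire up to
# 2 FP32 ops per clock (one fused multiply-add).
def tflops(cores, clock_mhz):
    return cores * clock_mhz * 1e6 * 2 / 1e12

desktop = tflops(1920, 1683)   # desktop GTX 1070
notebook = tflops(2048, 1645)  # notebook GTX 1070

print(f"desktop:  {desktop:.2f} TFLOPS")   # desktop:  6.46 TFLOPS
print(f"notebook: {notebook:.2f} TFLOPS")  # notebook: 6.74 TFLOPS
```

On paper, the extra cores slightly more than make up for the lower clock, which is why nVidia can call the mobile part equivalent to its desktop counterpart.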
Liquid coolers make far more sense on GPUs than on the average CPU. We've shown in testing that actual performance can improve as a result of a better cooling solution on a GPU, particularly when replacing a weak blower-fan or reference cooler configuration. On nVidia cards, GPU Boost 3.0 dictates clock-rate based upon a few parameters, one of which (temperature) is directly remedied by more efficient GPU cooling solutions. On the AMD side of things, our RX 480 Hybrid mod garnered some additional overclocking headroom (~50MHz), but primarily reduced noise output.
Clock-rate also stabilizes with better cooling solutions (and that includes well-designed air cooling), which helps sustain more consistent frametimes and tighter frame latency. We call these 1% and 0.1% lows, though that presentation of the data is still looking at frametimes at the 99th and 99.9th percentiles.
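To illustrate what a "1% low" actually measures, here's a minimal sketch: sort the captured frametimes, average the slowest 1% of frames (the 99th-percentile tail), and convert back to FPS. Our actual capture and analysis pipeline differs; this is just the core idea.

```python
# Minimal sketch of deriving "1% low" FPS from a list of frametimes (ms):
# average the slowest 1% of frames, then convert milliseconds to FPS.
def one_percent_low_fps(frametimes_ms, tail=0.01):
    worst = sorted(frametimes_ms, reverse=True)
    n = max(1, int(len(worst) * tail))   # slowest 1% of frames
    avg_ms = sum(worst[:n]) / n
    return 1000.0 / avg_ms               # ms per frame -> frames per second

# Hypothetical capture: mostly ~60FPS frames with two hitches.
frames = [16.7] * 98 + [33.3, 50.0]
print(round(one_percent_low_fps(frames), 1))  # -> 20.0
```

Note how two brief hitches drag the 1% low to 20FPS even though the average framerate is near 60FPS; that's exactly why lows expose stutter that averages hide.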
The EVGA GTX 1080 Hybrid has thus far had the most interesting cooling solution we've torn down on an AIO-cooled GPU this generation, but Gigabyte's Xtreme Water Force card threatens to take that title. In this review, we'll benchmark the Gigabyte GTX 1080 Xtreme Water Force card vs. the EVGA 1080 FTW Hybrid and MSI/Corsair 1080 Sea Hawk. Testing is focused primarily on thermals and noise, with FPS and overclocking thrown into the mix.
A quick thanks to viewer and reader Sean for loaning us this card, since Gigabyte doesn't respond to our sample requests.
Samsung's 900-series SSDs marked the arrival of V-NAND (NAND flash explained here), a NAND type that stacks cells vertically to expand capacity and theoretically drive down the cost-per-gigabyte metric. Today, the company officially announced its 960-series SSDs, the Samsung 960 Pro and Samsung 960 EVO. Both drives modernize the architecture by pairing NVMe support with the new V-NAND.
We think NVMe is one of the next big standards, as the SATA interface has hit its ceiling on data transfer rates. Intel and Samsung have both made moves to build the NVMe market and achieve higher throughput than is possible on a SATA bus.
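The SATA ceiling is easy to show with interface math. A rough sketch, assuming SATA III's 6Gb/s signaling with 8b/10b encoding (10 bits on the wire per data byte) and an NVMe drive on PCIe 3.0 x4 at 8GT/s per lane with 128b/130b encoding; real drives land somewhat below these figures due to protocol overhead.

```python
# Rough interface-bandwidth comparison motivating the move to NVMe.
# SATA III: 6Gb/s line rate, 8b/10b encoding -> 10 line bits per data byte.
sata_mb_s = 6e9 / 10 / 1e6

# PCIe 3.0 x4: 8GT/s per lane, 128b/130b encoding, 4 lanes, 8 bits per byte.
pcie_x4_mb_s = 8e9 * (128 / 130) * 4 / 8 / 1e6

print(round(sata_mb_s))     # ~600 MB/s ceiling
print(round(pcie_x4_mb_s))  # ~3938 MB/s ceiling
```

That roughly 6.5x gap in raw interface bandwidth is why the fastest NVMe drives post sequential numbers no SATA SSD can approach.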
As we board planes for our impending trip to Southern California (office tours upcoming), we've just finalized the Gigabyte GTX 1080 Xtreme Water Force tear-down coverage. The Gigabyte GTX 1080 Xtreme Water Force makes use of a similar cooling philosophy as the EVGA GTX 1080 FTW Hybrid, which we recently tore down and reviewed vs. the Corsair Hydro GFX.
Gigabyte's using a closed-loop liquid cooler to deal with the heat generation on the GP104-400 GPU, but isn't taking the "hybrid" approach that its competitors have taken. There's no VRM/VRAM blower fan for this unit; instead, the power and memory components are cooled by additional copper and aluminum heatsinks, bridged by a heatpipe. The copper plate (mounted atop the VRAM) transfers its heat to the coldplate of what we believe to be a Cooler Master CLC, which then sinks everything for dissipation by the 120mm radiator.
Rosewill's Cullinan PC case is the company's most modern endeavor since the R5, and gets out in front of the case industry's current trends. It's a mid-tower with a PSU shroud and a full tempered glass side window, checking almost all the boxes created by NZXT's H440, In Win's more expensive cases, and Corsair's 760T. The only 2016 trend missing from the Cullinan is a set of RGB LED fans, though it does ship with blue LED fans.
We first saw the Rosewill Cullinan mid-tower at Computex 2016, but the case was impacted by shipping delays (and other internal delays) that pushed back its launch until now-ish. In theory, the ~$150 Cullinan should be available just before October, with units shipping to customers by the first week of October. That long lead-in to production has allowed competitors to enter the growing market of cases with tempered glass side panels, including Corsair with its brand new 460X, In Win with its 303, and Anidees with its identical AI Crystal ($150).
The Anidees AI Crystal and Rosewill Cullinan enclosures both boast 5mm thick tempered glass side windows and a 4mm thick tempered glass front panel. The enclosures target the front edge of an industry trend toward tempered glass on affordable cases (read: ~$100 to ~$200), replacing the cheaper acrylic found in almost all windowed panels. Rosewill and Anidees both use Chinese OEM designer Jonsbo, which we believe to be a customer of case factory God Speed Casing. If that name's familiar, it's because God Speed Casing is the manufacturer used (and effectively grown) by NZXT; we've even toured their factories in China.
Dell's XPS 13 Ultrabook equivalent has moved to the new Intel Kaby Lake architecture. For the past few generations, Intel's small gains in IPC and processing performance have been largely overshadowed by the focus on power efficiency increases. NVIDIA and AMD are also on-board with this focus, and all three silicon manufacturers are pushing to use clock-gating, non-planar process, and algorithmic advancements to lower watt draw.
Reductions in TDP and moves by Intel to improve power efficiency (including idle improvements & S0iX) lengthen battery life, and Dell has complemented those gains by increasing battery capacity to 60Wh. Together, the two should grant the specified 22 hours of battery life on the XPS 13 notebooks, though we're not sure of the methodology used to make that measurement.
Dell's XPS 13 units ship with Intel i3, i5, and i7 Kaby Lake CPUs (Gen 7). Bottom-up, the laptops will host Intel i3-7100U, i5-7200U, or i7-7500U CPUs and will start at $800 with Ubuntu (unclear on Windows price). Display, CPU, and memory choices dictate price scaling, with the displays alone ranging from 1080p (minimally) to 3200x1800. This latter resolution is also used by Razer in the new Blade, which we hope to look at within the next month or two.
Dell also noted the following specifications in its press release:
This episode of Ask GN (#28) addresses the concept of HBM in non-GPU applications, primarily concerning its imminent deployment on CPUs. We also explore GPU Boost 3.0 and its variance within testing when working on the new GTX 1080 cards. The question of Boost's functionality arose in response to our EVGA GTX 1080 FTW Hybrid vs. MSI Sea Hawk 1080 coverage, asking why one 1080 was clock-dropping differently from the other. We talk about that in this episode.
Discussion begins with proof that the Cullinan finally exists and has been sent to us – because it was impossible to find after Computex – and carries into Intel Knights Landing coverage for MCDRAM, or "CPU HBM." Testing methods are slotted in between, with an explanation of why some hardware choices are made when building a test environment.
This week's news announcements include AMD AM4 Zen chipset naming (rumors, technically), NZXT's new RGB LED 'Aer' fans, and a pair of cases from Rosewill and Cooler Master.
AMD's initial AM4 chipset announcement was made at PAX, where the B350, A320, and X/B/A300 chipsets were announced for mainstream and low-end Gen 7 APUs. The high-end Zen chipset for Summit Ridge was concealed during this announcement, but is now known to be the X370 platform.
X370 will ship alongside the Summit Ridge CPUs and will add to the lanes available for high-speed IO devices, mostly SATA and new generation USB. Most of the IO with the Zen architecture will be piped through the CPU itself, with what remains of the chipset acting more as an IO controller than a full chipset.
The GTX 1060 3GB ($200) card's existence is curious. The card was initially rumored to exist prior to the 1060 6GB's official announcement, and was quickly debunked as mythological. Exactly one month later, nVidia did announce a 3GB GTX 1060 variant – but with one fewer SM, reducing the core count by 10%. That drops the GTX 1060 from 1280 CUDA cores to 1152 CUDA cores (128 cores per SM), alongside 8 fewer TMUs. Of course, there's also the memory reduction from 6GB to 3GB.
The rest of the specs, however, remain the same. The clock-rate has the same 1708MHz boost target, the memory speed remains 8Gbps effective, and the GPU itself is still a declared GP106-400 chip (rev A1, for our sample). That makes this most of the way toward the GTX 1060 as initially announced, aside from the disabled SM and halved VRAM. Still, nVidia's marketing language declared just a 5% performance loss from the 6GB card (despite a 10% reduction in cores), and so we decided to put those claims to the test.
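The spec deltas above are quick to sanity-check: GP106's SMs carry 128 CUDA cores and 8 TMUs each, so fusing off one SM accounts for every change except the VRAM cut.

```python
# Sanity-checking the GTX 1060 3GB spec deltas: one SM disabled,
# at 128 CUDA cores and 8 TMUs per SM on GP106.
CORES_PER_SM = 128
TMUS_PER_SM = 8

sms_6gb = 1280 // CORES_PER_SM       # 10 SMs on the 6GB card
sms_3gb = sms_6gb - 1                # one SM fused off on the 3GB card

cores_3gb = sms_3gb * CORES_PER_SM   # 1152 CUDA cores
tmus_lost = TMUS_PER_SM              # 8 fewer TMUs
core_cut = 1 - cores_3gb / 1280      # fractional core reduction

print(cores_3gb, tmus_lost, round(core_cut * 100))  # -> 1152 8 10
```

The 10% core reduction against a claimed 5% performance loss is the gap the clock-for-clock benchmarks below are designed to measure.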
In this benchmark, we'll be reviewing the EVGA GTX 1060 3GB vs. GTX 1060 6GB performance in a clock-for-clock test, with 100% of the focus on FPS. The goal here is not to look at the potential for marginally changed thermals (which hinges more on AIB cooler than anything) or potentially decreased power, but to instead look strictly at the impact on FPS from the GTX 1060 3GB card's changes. In this regard, we're very much answering the “is a 1060 6GB worth it?” question, just in a less SEF fashion. The GTX 1060s will be clocked the same, within normal GPU Boost 3.0 variance, and will only be differentiated in the SM & VRAM count.
For those curious, we previously took this magnifying glass to the RX 480 8GB & 4GB cards, where we pitted the two against one another in a versus. In that scenario, AMD also reduced the memory clock of the 4GB models, but the rest remained the same.