Micron confirms GDDR5X now in mass production

This announcement might seem a bit anticlimactic now that Nvidia has already stated that the GeForce GTX 1080 will use GDDR5X RAM, but Micron has announced that GDDR5X is now in mass production, months ahead of its originally expected schedule.

One interesting thing about the GTX 1080 is that its RAM sits at the lower end of the speed range Micron said it could ship. GDDR5X has previously been touted as a solution capable of providing up to 12Gbps of bandwidth per pin in quad-data-rate (QDR) mode. Nvidia, in contrast, has stated that the GTX 1080 uses just 10Gbps memory — towards the low end of the scale. One potential advantage of starting at the lower end, however, is that the company has plenty of headroom to scale in the future.

Absolute bandwidth on the GTX 1080 is 320GB/s, which nearly matches the GTX 980 Ti while retaining the GTX 980’s 256-bit memory bus. Keeping the memory bus narrow is one way to control die size and lower production costs; wide memory buses eat up die space and consume a significant amount of power.
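For those keeping score, the 320GB/s figure falls straight out of the per-pin data rate and the bus width. Here's the arithmetic as a quick sketch:

```python
# Memory bandwidth = per-pin data rate x bus width / 8 bits per byte.
per_pin_gbps = 10      # GDDR5X data rate per pin on the GTX 1080
bus_width_bits = 256   # GTX 1080's memory bus width

print(f"{per_pin_gbps * bus_width_bits / 8:.0f} GB/s")  # -> 320 GB/s

# The same 256-bit bus at GDDR5X's touted 12Gbps ceiling:
print(f"{12 * bus_width_bits / 8:.0f} GB/s")            # -> 384 GB/s
```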

A brand-new standard, or the next GDDR4?

We suspect that GDDR5X’s trajectory will resemble GDDR4’s. GDDR4 was introduced by AMD in 2006 as a way to improve memory bandwidth and GPU performance. Nvidia never switched to GDDR4, preferring instead to retain GDDR3 until GDDR5 became available. AMD used GDDR4 in various products from 2006 to 2009, at which point it moved to GDDR5.

AMD hasn’t said whether or not it will use GDDR5X in upcoming products, but the fact that Nvidia is only using GDDR5X for the GTX 1080 suggests that the memory is still rather expensive; the GTX 1070 uses GDDR5. AMD’s Polaris is expected to target the mainstream PC market, which means it’ll likely use GDDR5 as well. Meanwhile, both AMD and Nvidia are planning late-year launches for next-generation products, both of which are confirmed to use HBM2.


The implication here is that GDDR5X’s market position long-term will depend on whether or not AMD and Nvidia see a need for the memory to offset higher costs in HBM2. If HBM2 costs come down relatively quickly, there may not be much room left for GDDR5X in the market, given that GDDR5 is produced by multiple companies and is relatively cheap (Micron, thus far, is the only company building GDDR5X). If, on the other hand, interposer costs stay high, we may still see all three standards in-market for some period of time.

What’s highly unlikely is that we’ll see any single GPU supporting both HBM2 and GDDR5X. Sources we’ve spoken to have indicated that supporting both memory standards simultaneously in the same silicon would be extremely difficult for minimal benefits. Supporting GDDR5 and 5X in the same silicon is much simpler than an HBM2 + GDDR5X configuration.

Backblaze releases billion-hour hard drive reliability report

Backblaze has released its reliability report for Q1 2016 covering cumulative failure rates both by specific model numbers and by manufacturer. The company noted that as of this quarter, its 61,590 drives have cumulatively spun for over one billion hours (that’s 42 million days or 114,155 years, for those of you playing along at home).
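For those playing along at home, the conversion checks out (a quick sketch):

```python
hours = 1_000_000_000                            # cumulative spindle hours
print(f"{hours / 24 / 1e6:.1f} million days")    # ~41.7 million days
print(f"{hours / (24 * 365):,.0f} years")        # 114,155 years
```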

Backblaze’s reports are a rare public peek into hard drive longevity and failure rates. One of the most common questions from readers is which hard drives are the most reliable; it’s also one of the most difficult to answer. Companies do not release failure data, and the handful of studies on the topic typically cloak vendor names and model numbers. As always, I recommend taking this data with a grain of salt: Backblaze uses consumer drives in a demanding enterprise environment, and while the company has refined its storage pod design to minimize drive vibration, the average Backblaze hard drive does far more work in a day than a consumer HDD sitting in an external chassis.

For those of you wondering if drive vibration actually matters, here’s a video of someone stopping a drive array by yelling at it.

Here’s Backblaze’s hard drive failure stats through Q1 2016:

[Table: Backblaze hard drive failure rates by model, Q1 2016]

The discrepancy between the 61,590 drives Backblaze deploys and the 61,523 drives listed in this chart exists because the company doesn’t show data unless it has at least 45 drives of a given model. That seems an acceptable threshold given the relatively small gap. Backblaze also notes that the 8.63% failure rate on the Toshiba 3TB is misleadingly high — the company has just 45 of those drives, and one of them happened to fail.
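Backblaze reports failure rates as annualized figures rather than raw percentages, which is why a single failure in a small pool can look so dramatic. A minimal sketch of the calculation, assuming Backblaze's published formula of failures divided by accumulated drive-years (the drive-day count below is a hypothetical chosen to land near the Toshiba figure):

```python
def annualized_failure_rate(failures: int, drive_days: float) -> float:
    """Failures per accumulated drive-year, expressed as a percentage."""
    return 100 * failures / (drive_days / 365)

# Hypothetical: 45 drives online for roughly 94 days each, one failure.
print(f"{annualized_failure_rate(1, 45 * 94):.2f}%")  # ~8.63%
```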

Here’s the same data broken down by manufacturer. This chart combines all drive data, regardless of size, for the past three years.

[Chart: Backblaze annualized failure rates by manufacturer over the past three years]

HGST is the clear leader here, with an annual failure rate of just 1% for three years running. Seagate comes out the worst, though we suspect much of that rating was warped by the company’s crash-happy 3TB drive. Backblaze pulled those 3TB drives from service just over a year ago, and Seagate’s failure rate fell precipitously as a result. Western Digital now holds the dubious honor of the highest failure rate, though the company’s numbers have also improved in the past year.

Asked why it sources the vast majority of its drives from HGST or Seagate, Backblaze reported that it has little choice:

These days we need to purchase drives in reasonably large quantities, 5,000 to 10,000 at a time. We do this to keep the unit cost down and so we can reliably forecast our drive cost into the future. For Toshiba we have not been able to find their drives in sufficient quantities at a reasonable price. For WDC, we sometimes get offered a good price for the quantities we need, but before the deal gets done something goes sideways and the deal doesn’t happen. This has happened to us multiple times, as recently as last month. We would be happy to buy more drives from Toshiba and WDC, if we could, until then we’ll continue to buy our drives from Seagate and HGST.

The company notes that 4TB drives continue to be the sweet spot for building out its storage pods, but that it might move to 6, 8, or 10TB drives as the price on the hardware comes down. Overall it’s an interesting look at a topic we rarely get to explore.

New medical tech coming to the rescue for the vision-impaired

Ever since the invention of the magnifying glass nearly 25 centuries ago, we’ve been using technology to help us see better. For most of us, the fix is fairly simple, such as a pair of glasses or contact lenses. But for many with more seriously impaired vision — estimated at around 285 million people worldwide — technology has been short on answers until fairly recently. Doctors and scientists are making up for lost time, though, with a slew of emerging technologies to help everyone from the mildly colorblind to the completely blind. They’re also part of a wide swath of new medical advances we’ll be covering all this week in our new Medical Tech series.

Superman glasses for the vision-impaired

These space-age-style smart glasses from VA-ST can help the vision-impaired see what is around them.

We’re all familiar with the accessibility options available on our computers, including larger cursors, high-contrast fonts, and magnified screens. But those do nothing to help the vision-impaired navigate the rest of their day. Instead, a number of different “smart glasses” have been invented that help make the rest of the world more accessible.

These glasses work by taking the image from one or more cameras — often including a depth sensor — and processing it to pass along an enhanced version of the scene to a pair of displays in front of the eyes. Deciding on the best way to enhance the image — autofocus, zoom, object outlining, etc. — is an active area of research, as is the best way for the wearer to control those enhancements. Right now, the devices tend to require an external box that does the image processing and has knobs for controlling settings. Emerging technologies, including eye tracking, will provide improved ways to control them, and better object recognition algorithms will also improve their utility. One day these glasses may know enough to highlight house keys, a wallet, or other commonly-needed but sometimes hard-to-locate possessions.
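To give a flavor of the kind of processing involved (not any vendor's actual pipeline, just a minimal sketch of the edge-outlining idea using OpenCV), here's how a simple edge-overlay enhancement of a live camera feed might look:

```python
# Toy edge-overlay "enhancement" of a live camera feed, loosely in the
# spirit of the smart glasses described above. Requires opencv-python;
# the default webcam stands in for a headset camera. Press 'q' to quit.
import cv2

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)   # find strong edges in the scene
    frame[edges > 0] = (255, 255, 255)  # paint the edges bright white
    cv2.imshow("enhanced", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```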

One of the more clever solutions comes out of Oxford, via Google Global Impact Challenge winner VA-ST. I had a chance to try out VA-ST’s prototype Smart Specs last year, and can see how they could be very helpful for those who otherwise can’t make out the details of a scene. It’s hard, though, to get a real feel for their effectiveness unless you actually suffer from a particular vision impairment. Some work is being done to simulate these conditions and allow those with normal vision to evaluate solutions, but until then, willing participants with uncommon vision disorders remain a scarce resource for scientists attempting to run trials of their devices.

The VA-ST system highlights edges and objects to make important parts of a scene more visible.

Most solutions available today suffer not only from technical issues like how they are controlled, but also from the fact that they cut off eye contact and are socially awkward — which has hampered their adoption. Less-obtrusive devices using waveguides, like the ones developed by Israeli startup Lumus, will be needed to overcome this issue. Startup GiveVision is already demoing a version of its vision-assisting wearable using Lumus waveguides to help make it more effective and less obtrusive. Similar advanced augmented reality display technology is also being used in Microsoft’s HoloLens and Magic Leap’s much-rumored device. While it is mostly mainstream AR devices like those that are driving the technology to market, there is no doubt the medical device sector will be quick to take advantage of it.

Other efforts to enhance awareness of the visual world render salient aspects of the scene — such as the distance to the closest object — in another sense entirely. EyeMusic, for example, converts those properties into audible tones, while the OrCam system recognizes text and reads it to the wearer out loud. These systems have the advantage that they don’t require placing anything over the wearer’s eyes, so they don’t interfere with eye contact.
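As a rough illustration of the sonification idea (a toy sketch, not EyeMusic's actual encoding), one could map the distance to the nearest object onto the pitch of a generated tone:

```python
# Toy sonification: nearer objects produce higher-pitched tones.
# Not EyeMusic's actual scheme, just the general idea. Requires numpy.
import numpy as np

SAMPLE_RATE = 44100

def distance_to_tone(distance_m: float, duration_s: float = 0.25) -> np.ndarray:
    """Map a 0.5m-5m distance onto a 1200Hz-200Hz sine tone."""
    d = np.clip(distance_m, 0.5, 5.0)
    freq = 1200 - (d - 0.5) / 4.5 * 1000      # closer = higher pitch
    t = np.linspace(0, duration_s, int(SAMPLE_RATE * duration_s), endpoint=False)
    return 0.5 * np.sin(2 * np.pi * freq * t)

samples = distance_to_tone(1.2)  # hand off to any audio output library
```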

Retinal implants provide sight for many of the blind

In many blind people — particularly those suffering from Retinitis Pigmentosa and age-related Macular Degeneration — the retinal receptors may be missing, but the neurons that carry information from them to the brain are intact. In that case, it is sometimes possible to install a sensor — an artificial retina — that relays signals from a camera directly to the vision neurons. Since the pixels on the sensor (electrodes) don’t line up exactly with where the rods and cones would normally be, the restored vision isn’t directly comparable with what is seen with a natural retina, but the brain is able to learn to make sense of the input and partial vision is restored.

Palanker’s team is also looking at using special glasses to provide wireless data to the retinal implant.

Retinal implants have been in use for over a decade, but until recently have provided only a very minimal level of vision — equivalent to about 20/1250 — and have needed to be wired to an external camera for input. Now, though, industry leader Retina Implant has introduced a wireless version with 1,500 electrodes on its 3mm-square surface. Amazingly, previously-blind patients suffering from Retinitis Pigmentosa have been able to recognize faces and even read the text on signs. Another wireless approach, based on research by Stanford professor Daniel Palanker’s lab, involves projecting the processed camera data into the eye as near-IR light — and onto the retinal implants — from a special pair of glasses. The implants then convert that to the correct electrical impulses to transmit to the brain’s neurons. The technology is being commercialized by vision tech company Pixium Vision as its PRIMA Bionic Vision Restoration System, and is currently in clinical trials.

Even color-blind people can benefit from clever glasses

While severe vision disorders affect a large number of people, even more suffer from the much more common problem of color blindness. There are many types of color blindness — some caused by missing the cones needed to discriminate one or more of the primary colors. But many who have what is commonly called “red-green colorblindness” simply have cones with sensitivities that are too close together to distinguish between red and green. Startup EnChroma stumbled across the idea of filtering out some of the overlap after noticing that surgeons were often taking their OR glasses with them to the beach to use as sunglasses. From there, the company worked to tune the effect to assist with color deficiency — the result being less overall light let through its glasses, but a better ability to discriminate between red and green. If you’re curious whether the company’s glasses can help you, it offers an online test of your vision.

Accessibility tech breakthroughs for the blind

Startup blitab is adding braille to a standard tablet.

There are plenty of limits on what medical technology can currently accomplish for those who are blind or vision-impaired. Fortunately, accessibility technology has also continued to advance. Most of us are familiar with magnified cursors, zoomed-in text, and speech input and output, but there are other, more sophisticated tools available — too many to list here. For example, startup blitab is creating a tablet for the world’s estimated 150 million braille users that features a tactile braille interface as well as speech input and output. On the lighter side, Pixar is developing an application that will provide a narrative description of the screen while viewers watch.

However good your vision, you’re likely to benefit from medical technology at some point, since the incidence of vision-related conditions increases dramatically with age. Everyone eventually suffers from at least relatively minor conditions like Presbyopia (the inability of the eye to accommodate between near and far focus), and over 25% of those who make it to age 80 suffer from major vision impairment. Even for those of us with only minor vision issues, the advent of smartphone apps that help measure our vision and diagnose possible problems will help lower costs. With the rapid advances in microelectronics, surgical technology, and augmented reality, there are likely to be some amazing treatments for these conditions in the future.

IBM researchers announce major breakthrough in phase change memory

For years, scientists and researchers have looked for the so-called Holy Grail of memory technology — a non-volatile memory standard that’s faster than NAND flash while offering superior longevity, higher densities, and ideally, better power characteristics. One of the more promising technologies that’s been in development is phase-change memory, or PCM. IBM researchers announced a major breakthrough in PCM this week, declaring that they’ve found a way to store up to three bits of data per “cell” of memory. That’s a significant achievement, given that previous work in the field was limited to a single bit of data per memory cell.

Phase change memory exploits the properties of a class of alloys known as chalcogenides. Applying heat to the alloy changes it from an amorphous mass into a crystal lattice with significantly different electrical properties, as shown below:

[Diagram: chalcogenide in its amorphous and crystalline states]

Scientists have long known that chalcogenides can exist in states between fully crystalline and amorphous, but building a solution that could exploit these in-between states to store more data has been extremely difficult. While phase-change memory works on very different principles than NAND flash, some of the problems with scaling NAND density are conceptually similar to those faced by PCM. Storing multiple bits of data in NAND flash is difficult because the gap between the voltage levels that encode each value shrinks as you store more bits per cell. This is also why TLC NAND flash, which stores three bits of data per cell, is slower and less durable than MLC (two-bit) or SLC (single-bit) NAND.
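The squeeze is easy to quantify: each extra bit per cell doubles the number of distinct levels that must fit into the same read window, roughly halving the margin between adjacent levels. A back-of-the-envelope sketch, assuming a purely illustrative 1-volt window:

```python
WINDOW_V = 1.0  # illustrative read window; real windows vary by device

for bits in (1, 2, 3):                 # SLC, MLC, TLC
    levels = 2 ** bits                 # distinct states the cell must hold
    margin_mv = 1000 * WINDOW_V / (levels - 1)
    print(f"{bits} bit(s): {levels} levels, ~{margin_mv:.0f} mV apart")
# 1 bit(s): 2 levels, ~1000 mV apart
# 2 bit(s): 4 levels, ~333 mV apart
# 3 bit(s): 8 levels, ~143 mV apart
```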

IBM researchers have now demonstrated storing three bits of data per cell in a 64k-cell array, at elevated temperatures and after one million endurance cycles.

“Phase change memory is the first instantiation of a universal memory with properties of both DRAM and flash, thus answering one of the grand challenges of our industry,” said Dr. Haris Pozidis, an author of the paper and the manager of non-volatile memory research at IBM Research – Zurich. “Reaching 3 bits per cell is a significant milestone because at this density the cost of PCM will be significantly less than DRAM and closer to flash.”

Here’s how the PR blast describes the breakthrough:

To achieve multi-bit storage IBM scientists have developed two innovative enabling technologies: a set of drift-immune cell-state metrics and drift-tolerant coding and detection schemes.

More specifically, the new cell-state metrics measure a physical property of the PCM cell that remains stable over time, and are thus insensitive to drift, which affects the stability of the cell’s electrical conductivity with time. To provide additional robustness of the stored data in a cell over ambient temperature fluctuations a novel coding and detection scheme is employed. This scheme adaptively modifies the level thresholds that are used to detect the cell’s stored data so that they follow variations due to temperature change. As a result, the cell state can be read reliably over long time periods after the memory is programmed, thus offering non-volatility.

“Combined these advancements address the key challenges of multi-bit PCM, including drift, variability, temperature sensitivity and endurance cycling,” said Dr. Evangelos Eleftheriou, IBM Fellow.
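IBM hasn't published code, but the adaptive-threshold idea can be illustrated with a toy model: estimate how far a set of reference cells has drifted since programming, then shift the read thresholds by the same amount before classifying each cell. This is our own simplification, not IBM's actual scheme:

```python
# Toy drift compensation, not IBM's actual scheme: reference cells
# programmed to known levels reveal the current drift, and the detector
# shifts its thresholds to follow it. Requires numpy.
import numpy as np

NOMINAL = np.array([0.0, 1.0, 2.0, 3.0])   # ideal 2-bit cell readouts

def read_cells(raw, ref_raw, ref_programmed):
    drift = np.mean(ref_raw - ref_programmed)  # estimate the common drift
    corrected = raw - drift                    # undo it before detection
    # classify each cell to the nearest nominal level
    return np.argmin(np.abs(corrected[:, None] - NOMINAL[None, :]), axis=1)

# Every cell's readout has drifted upward by ~0.4 since programming:
ref = NOMINAL + 0.4
data = np.array([0.42, 1.38, 2.41, 3.39])
print(read_cells(data, ref, NOMINAL))  # -> [0 1 2 3], despite the drift
```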

There’s still a great deal of work to do before phase-change memory can be considered a candidate to replace NAND flash or DRAM in certain situations. The performance and power impact of these new cell structures has not been characterized, and the switching time hasn’t been revealed.

[Graphic: IBM’s overview of universal PCM]

The graphic above is from an IBM video explaining how PCM memory works and some general information on this latest breakthrough. Note that PCM, like NAND flash, takes a performance hit when it shifts to a multi-bit architecture. While single-bit PCM is nearly as fast as DRAM (according to IBM), multi-bit PCM is significantly slower. Data retention (how long data remains in the cell) was also worse than NAND flash, which has lower endurance (how many read/write cycles the cells can withstand) but higher data retention.

Phase-change memory is theoretically capable of replacing DRAM in at least some instances, but if these density gains come at the cost of programming speed, the net gain may be minimal. Phase-change memory also requires large amounts of power to program and generates a great deal of heat as a result.

This video from IBM walks through the history of phase-change memory, explains the basics of its function, and covers the most recent breakthrough. We think IBM’s discovery could help pave the way for a long-term replacement to NAND flash, but we’re still years away from that. Intel’s Optane 3D XPoint memory may make its own play for the server and data center space, and Micron, which used to manufacture PCM for the mass market, doesn’t build it anymore.

Microsoft’s Windows 10 Anniversary Update doubles up on Start Menu advertising

One of the changes Microsoft introduced when it launched Windows 10 was the ability to show suggested applications, aka advertisements, within the Start Menu and on the lock screen. The “suggested application” function can be disabled relatively easily, but Microsoft is making changes in Windows 10 to increase application visibility and hopefully entice more users to head for the Windows Store.

Once the Anniversary Update drops, the number of promoted apps in the Start Menu will double, from five to 10. To accommodate this change, the number of static Microsoft applications will decrease, from 17 to 12.


Many of these promoted applications (aka Programmable Tiles) aren’t actually installed on the system by default. Instead, they take the user to the Windows Store where the app can be installed.


Neowin isn’t sure if this will apply to existing Windows 10 PCs, or if this change will only go live on new installations. Either way, it’s a smart move for Microsoft.

Shifting paradigms

One of the most significant barriers to Windows Store adoption is the entrenched behavior of Windows users. For decades, Windows users have been accustomed to downloading software from various sites on the Internet. If you need a media player, you use VLC or MPC-HC. If you need messaging software, you can download various apps from individual vendors or grab an all-in-one product like Trillian or Pidgin. Your first browser might come from Microsoft, but if you want something else you’ll head for Firefox or Google Chrome.

Microsoft wants users to see the Windows Store as a one-stop shop for applications, but it’s difficult to shift how people use a system they’ve spent decades with. We don’t blame the company for using promoted apps to showcase what the Windows Store can offer. The problem is, the majority of the programs we’ve seen on the Windows Store don’t compare well against the applications you can download on the Internet. We’ve chronicled the problems with various UWP games already, and applications you download from the Windows Store are often tablet-centric and explicitly designed around certain limitations Microsoft enforces.

The real problem for the Windows Store isn’t getting people to look at it — it’s building up an application library of stuff people actually want to use. This has been a problem for Microsoft since it launched Windows 8, and while the store’s layout and UI have improved significantly, breakout application successes are few and far between. The app model simply hasn’t caught on for desktop software, possibly because most people expect PC software to be more complicated and more capable than its app equivalent. On a smartphone or tablet, apps can be good stand-ins for browsing or using websites. On desktops, the existing paradigm is different. Unless Microsoft can offer users some stellar software, it may not see the uptake it’s looking for, no matter how many PC users upgrade to Windows 10.

Microsoft releases unofficial service pack for Windows 7

One of the disadvantages of using an older Microsoft operating system is the need to install several hundred megabytes of patches after the initial OS is loaded. In the past, Microsoft ameliorated this problem by releasing several service packs over the life of an OS, but Windows 7 only ever got one service pack, in 2011. As a result, the last five years of updates and patches have to be installed manually.

Now, that’s changing. Microsoft isn’t calling this new “convenience rollup” Windows 7 SP2, but that’s functionally what it provides. The update will also support slipstream installations, meaning you can roll the software updates into a unified installer and bring a system fully up-to-date at base install.
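For the curious, slipstreaming is typically done with Microsoft's stock DISM image-servicing tool (the rollup's documentation lists a servicing-stack update as a prerequisite). Here's a hedged sketch of the process, wrapped in Python for illustration; the mount directory, image path, and package filenames are assumptions you'd substitute with your own:

```python
# Sketch: slipstreaming the convenience rollup into a Windows 7 install
# image with Microsoft's DISM tool. Paths and package filenames are
# illustrative assumptions; run from an elevated prompt on Windows with
# the installation media copied to a writable folder.
import subprocess

MOUNT = r"C:\wim-mount"
WIM = r"C:\win7\sources\install.wim"
SERVICING_STACK = r"C:\updates\servicing-stack.msu"  # prerequisite update
ROLLUP = r"C:\updates\convenience-rollup.msu"        # the rollup itself

def dism(*args: str) -> None:
    subprocess.run(["dism", *args], check=True)

dism("/Mount-Wim", f"/WimFile:{WIM}", "/Index:1", f"/MountDir:{MOUNT}")
dism(f"/Image:{MOUNT}", "/Add-Package", f"/PackagePath:{SERVICING_STACK}")
dism(f"/Image:{MOUNT}", "/Add-Package", f"/PackagePath:{ROLLUP}")
dism("/Unmount-Wim", f"/MountDir:{MOUNT}", "/Commit")  # write changes back
```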

No such update has been announced for Windows 8.1 yet, but Microsoft has also stated that it will begin releasing monthly comprehensive updates for non-security patches. Windows 7 SP1, Windows 8.1, Windows Server 2008 R2 SP1, Windows Server 2012, and Windows Server 2012 R2 will all begin receiving single monthly rollups (security updates will continue to be released on their own schedule).

Update availability and contents

One significant change going forward is that updates will no longer be available via the Microsoft Download Center. Instead, they’ll be distributed through the Microsoft Update Catalog. If you’re wondering what that is, it’s a Windows XP-era relic that currently depends on Internet Explorer and uses ActiveX. Chrome, Firefox, and other third-party browsers can’t access it (Microsoft says it’s working to modernize this).

[Screenshot: the Microsoft Update Catalog]

Yeah. “A little… dated-looking” is the kind way to put it.

One question we’re certain will come up is whether or not the Windows 7 rollup includes the various updates and packages designed to push Windows 10. The answer, so far as we can tell, is no. There are a number of KB articles associated with the Windows 10 rollout and the telemetry updates to Windows 7, including:

  • KB2952664
  • KB2977759
  • KB3022345
  • KB3050267
  • KB3035583
  • KB3068708*
  • KB3075249*
  • KB3080149*
  • KB3146449

We’ve gone through the KB files included in the Windows 7 convenience rollup and can confirm that the majority of these updates are not included in the software. There are three exceptions: KB3068708, KB3075249, and KB3080149. All three of these updates add additional telemetry tracking to Windows 7 to bring its reports into line with Windows 10, but they don’t add GWX.exe or any of the “Get Windows 10” ads that people have complained about since Microsoft’s latest OS went live.

While I realize that some readers won’t be thrilled with any backported changes from Windows 10 into Windows 7, the truth is, telemetry tracking in Windows 7 can still be disabled; you aren’t forced to participate in the Customer Experience Improvement Program (CEIP). If you’re still doing Windows 7 installations on new hardware, installing the rollup and turning off telemetry tracking is a lot less hassle than manually performing multiple patch / reboot cycles — and it takes a lot less time.
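As one example, CEIP participation is controlled by a single registry value. The sketch below writes the standard CEIP opt-out key; treat the exact path as an assumption and prefer the Control Panel or Group Policy UI where possible:

```python
# Sketch: opting out of the Customer Experience Improvement Program by
# setting CEIPEnable to 0. Run as Administrator; the Control Panel's
# CEIP settings dialog accomplishes the same thing through the UI.
import winreg

key = winreg.CreateKeyEx(
    winreg.HKEY_LOCAL_MACHINE,
    r"SOFTWARE\Microsoft\SQMClient\Windows",
    0,
    winreg.KEY_SET_VALUE,
)
winreg.SetValueEx(key, "CEIPEnable", 0, winreg.REG_DWORD, 0)
winreg.CloseKey(key)
```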

Adobe Spark will unleash your inner graphic designer

I’ve always been jealous of graphic designers. Their ability to transform a few photos and some text into a visually compelling communication is both an art and a skill. There are plenty of applications around that provide the technical tools needed for great graphic design — starting with Adobe’s own Illustrator — but they are not only painful to learn, they also don’t help if you don’t have the talent to use them correctly. With the explosion of social media, effective visual communication is more important than ever. Today Adobe unveiled Spark — one of the best attempts yet to allow people of all skill levels, and on all platforms, to create sophisticated graphic designs with only a few minutes of work.

Spark is in part a rebranding of Adobe’s current Slate and Voice apps. Their capability to create page-based and animated video experiences has been extended and complemented by the ability to create social media posts. The market for Spark is (at least initially) small businesses without access to professional graphics talent, bloggers, students, and non-profits that lack the budget for design work (which I’m sure won’t thrill the pro community that relies on Adobe’s tools to make a living, but that’s the nature of things these days).

Spark handles Posts, Pages, and narrated Videos

Spark lets you create new Posts, Pages, or Videos from themed templates, or pick a category and let Adobe guide your creation.

Spark allows the creation of three different types of content: Posts — for sharing on social media, Pages — for sharing as web experiences, and Videos — essentially animated slideshows. Of the three, the Post module seems to be the most full-featured so far. You can not only create content in a variety of sizes — including those optimized for popular sharing services like Facebook and Instagram — but also add additional text boxes and images. Once you’ve added them, you have full control over how and where they appear, but you can also let your creation be guided by the theme you’ve chosen, which will provide a generic but well-composed starting point. One nice feature is the ability to specify the focal point of a photo, so that if your post needs to be resized, it will keep the focal point in view.

The Pages module is a little more restrictive, with a limited number of layout options (at least in the current version). The result is a fairly typical vertically oriented page, with limited support for further enhancing each element. The pages look great, and are in keeping with the visual look and feel of currently-trendy website designs, but they are all fairly simple in structure. Ironically, the Video module doesn’t actually work with video. Instead, it is a user-friendly way to create a narrated slideshow. If you prefer, you can use music (or some other pre-recorded soundtrack) instead of narrating.

Spark themes are super-powered templates

Templates for designs are not new, but typically they are static. If you need to change output format or layout, you’re on your own. Spark features very flexible “themes” that support a wide variety of output sizes and shapes for your creation. Of course, if all Spark offered was flexible templates, content created with it would all start to look pretty similar. Fortunately, you can tweak each theme to your heart’s content. In addition to obvious customizations like adding more text items or changing the background, you can click through a variety of color palettes, and change fonts. One especially cool feature of Spark themes is that as you manipulate the outline of a text box, it dynamically changes the layout of the text and which words are emphasized. So you can quickly drag a corner of a text box around until you get the effect you want. Adobe calls this feature, appropriately enough, Magic Text.

Easy way to get Creative Commons photos

It’s not that hard to search the web for free-to-use photos, but Spark makes it even easier. You can simply click on find photo, type in your search term, and it will retrieve images you can use under the Creative Commons license. Given that Adobe Stock is a growing portion of Adobe’s business, I’d also expect to see it add a connection to its stock portfolio fairly soon.

Spark echoes Canva, but Adobe’s clout will help it catch up

Startup Canva has been offering many of the features of Spark (and plenty that Spark doesn’t have) for several years now. I’ve used Canva for Facebook posts and ads, and been pleased with the quality of its output. What I didn’t like was that, unless you go out of your way to avoid it, the actual graphic is hosted on Canva’s site and linked, rather than feeling like it was truly “mine.” Adobe Spark is a little bit better on this front, but both companies are clearly hoping to leverage their free offerings into a business — which means they want to host and control traffic and audiences over time.


Hosting versus download

The most straightforward way to use Spark is to allow Adobe to host your creations, and simply share links to them as needed. Personally, I’m skeptical of that approach: over the years these services come and go, so your content may disappear — and in any case, if you have a web presence, you’re better off generating traffic to it than having people link out to Adobe. Fortunately, for Posts and Videos, Adobe offers a simple way to download the output, so you can post it directly to your own blog or website, or to social media. Pages are an integrated experience (much like Microsoft’s Sway) and need to be hosted by Adobe. Canva creations can be downloaded in a similar fashion.

Spark lets you customize Posts with additional captions and effects, including your own background photos.

Once your creation is on your computer (typically as a JPEG), you can upload and share it the way you would any other content. Spark supports downloading its Post format (as a JPEG) and Videos (as a video file), but not Pages. Those need to be hosted on Adobe’s site — similar to the way Microsoft’s Sway tool works.

iOS and web are the favored platforms

As it has with most of its mobile products, Adobe has released Spark in the App Store for iOS, with only a hand wave about Android. Fortunately, a very robust web interface is also available, so Android users with access to Chrome or another desktop browser (or who are willing to brave a complex web UI on their smartphone) can still take advantage of Spark. For pre-release evaluation, we only had access to the web UI, so I can’t report directly on the iOS app yet.

Is Adobe Spark for you?

The good news is that Spark is free and only takes a couple of minutes to try. You can log in with your Facebook, Google, or Adobe ID (even without a Creative Cloud subscription) and experiment. I recommend starting with a Post or two. Now that the press embargo has lifted and we can actually post our Posts, I’ll be doing some of the same. Simply head to spark.adobe.com to get started.

ARM announces new Artemis CPU core, first 10nm test chip, built at TSMC

ARM and TSMC have had a joint agreement in place for several years to collaborate on R&D work and early validation of new process nodes, and the two have now hit a major milestone in that process: yesterday, ARM announced that it has successfully validated a new 10nm FinFET design at TSMC.

The unnamed multi-core test chip features a quad-core CPU from ARM, codenamed Artemis, a single-core GPU as a proof of concept, and the chip’s interconnect along with various other supporting blocks.


This isn’t an SoC that ARM will ever bring to market. Instead, its purpose is to function as a validation tool and early reference design that helps both TSMC and ARM understand the specifics of the 10nm FinFET process as it moves towards commercial viability. One of the features that pure-play foundries like TSMC offer their customers is tools and libraries specifically designed to match the capabilities of each process node. Since each new node has its own design rules and best practices, TSMC has to tune its offerings accordingly — and working with ARM to create a reasonably complex test chip is a win/win for both companies. ARM gets early insight into how best to tune upcoming Cortex processors; TSMC gets a standard architecture and SoC design that closely corresponds to the actual chips it’ll be building for its customers as the new process node moves into production.

[Slide: ARM/TSMC projected gains for 10nm versus 16nm]

The slide above shows the gains TSMC expects to realize from moving to 10nm as opposed to its current 16nm process. To the best of our knowledge, TSMC’s 10nm is a hybrid process, but it’s not clear exactly what that hybrid looks like. Our current understanding is that the upcoming 10nm node would combine a 10nm FEOL (Front end-of-line) with a 14nm BEOL (Back-end-of-line, which governs die size). EETimes, however, reported in March that TSMC’s 10nm shrink would retain a 20nm minimum feature size, while its 7nm would deliver a 14nm minimum feature size (10/20 and 7/14, respectively). Either way, Intel is the only company that has announced a “true” 14nm or 10nm die shrink. (The degree to which this process advantage materially helps Intel these days is open to debate).

Two things to note: First, the top line of the slide is potentially confusing. The 0.7x reduction of power would be easier to read if ARM had labeled it “ISO Performance at 0.7x power.” Second, the performance gains predicted here purely as a result of the node transition are downright anemic. I don’t want to read too much into these graphs because it’s very early days for 10nm, but there’s been a lot of talk around 16/14nm as a long-lived node, and results like this are part of why — only a handful of companies will want to pay the extra costs for the additional masks required as part of the die shrink. TSMC has already said that it believes 10nm will be a relatively short-lived node, and that it thinks it’ll have more significant customer engagement for 7nm.

None of this means that ARM can’t deliver compelling improvements at 10nm — but the limited amount of lithography improvements mean a heavier lift for the CPU research teams and design staff, who need to find additional tricks they can use to squeeze more performance out of silicon without driving up power consumption.

As for when 10nm might ship, past timelines suggest it’ll be a while yet. TSMC has said it expects early 10nm tapeouts to drive sizeable demand starting in Q2 2017. While that’s a quick turnaround for a company whose 16nm node only entered volume production in August 2015, the speed could be explained if the 10nm node continues to leverage TSMC’s existing 20nm technology. Bear in mind that there’s a significant delay between when TSMC typically ships hardware and when consumer products launch, particularly in mobile devices, where multiple companies perform complex verification procedures on multiple parts of the chip.

Either way, this tapeout is a significant step forward for both ARM and TSMC, and 10nm will deliver improvements over the 16nm tech available today.

New Windows 10 update will change hardware requirements for the first time since 2009

Ever since Windows Vista launched in 2007, the minimum hardware requirements for Windows have remained mostly unchanged (Windows 7 slightly increased the required storage footprint, from 15GB of HDD space to 16GB). Now Windows 10’s Anniversary Update, which drops in roughly two months, will make three significant changes for the first time in seven years.

RAM requirements, which have been 1GB for 32-bit installations of the OS and 2GB for 64-bit installations, will now be 2GB across both platforms. The increased requirement won’t really impact anyone but system builders, and the overwhelming majority of systems currently ship with more than 2GB of memory anyway. As with other versions of Windows, Windows 10 will technically run with less than the minimum amount of memory; it’ll just page out to your physical storage at an insane rate.


The second change being introduced in the Windows 10 Anniversary Update is a requirement that new devices implement support for TPM (Trusted Platform Module) 2.0, either as part of the device’s firmware or via a separate physical chip. A TPM is used to secure the device and provide a secured storage area for cryptographic keys. Note that this requirement does not apply to device upgrades, which is why so many different types of computers from the past seven years have no problem upgrading to Windows 10. If you’re buying new hardware, however, it needs to support TPM 2.0. This shouldn’t be an issue, since all modern processors from both AMD and Intel have support baked in, via either the Intel Management Engine or AMD’s Beema / Carrizo / Mullins families. Microsoft’s documentation on the topic notes:

For end consumers, TPM is behind the scenes but still very relevant for Hello, Passport and in the future, many other key features in Windows 10. It offers the best Passport experience, helps encrypt passwords, secures streaming high quality 4K content and builds on our overall Windows 10 experience story for security as a critical pillar. Using Windows on a system with a TPM enables a deeper and broader level of security coverage.

The capability underpins certain biometric authentication procedures in Windows 10 and helps ensure that media playback on W10 devices is secured and can’t be pirated as easily. OEMs are not required to provide a UEFI option for disabling the TPM.

The final change is an expansion of the screen sizes allowed for each operating system variant. Previously, Windows 10 Mobile was specced for devices with up to a 7.9-inch screen, while anything above that used full Windows 10. Now, Windows 10 Mobile can ship on anything with a display smaller than nine inches, while full Windows 10 can ship on devices as small as seven inches.
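The new rules deliberately leave a two-inch overlap where either variant is allowed; a quick sketch of the bands as described:

```python
def eligible_variants(diagonal_inches: float) -> list:
    """Which Windows 10 variants may ship at a given screen size,
    per the Anniversary Update rules described above."""
    variants = []
    if diagonal_inches < 9.0:
        variants.append("Windows 10 Mobile")
    if diagonal_inches >= 7.0:
        variants.append("Windows 10 (full)")
    return variants

print(eligible_variants(5.5))   # ['Windows 10 Mobile']
print(eligible_variants(8.0))   # both: the 7-9 inch band allows either
print(eligible_variants(12.0))  # ['Windows 10 (full)']
```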

IBM Watson amps up Moogfest 2016 with AI-infused programming

IBM Watson came to Moogfest 2016, but there were no Jeopardy! questions this time around. If you’ve been following, you already know that IBM Watson, an artificially intelligent system capable of answering questions in natural language, has been up to much more than that recently. At Moogfest, IBM Watson team spokesperson Ally Schneider was on hand to outline all of the latest developments.

Everyone remembers Watson from its Jeopardy! performance on television in 2011. But work on the project was started much earlier — not just in 2006, when three researchers at IBM first got the idea to build a system for the game show, but really decades before that, as IBM began doing work on natural language processing and cognitive computing in the 1970s.


The Jeopardy! Watson system in 2011 had three main abilities, as Schneider explained. First, it could understand unstructured text. “[Normally] we don’t have to think about it, but we inherently understand what sentences are, and how verbs, nouns, etc. come together to produce text,” Schneider said. Watson could read through human-generated content and parse it in a way that other systems haven’t been able to do before. Next, Watson could come up with its own hypotheses, and then return the one with the highest confidence. Finally, there’s a machine learning component — one that’s not hard-coded or programmed, but that really learns as it goes. “When you were back in school, not too long ago for some, how did your teachers test you to see if you understood what you were reading?” Schneider asked. “They would give you feedback on your answers. [For example], yes, full credit… maybe you got partial credit… or no, incorrect, here’s what you should have done instead.” Watson is able to “reason” in the same manner.

Today, after continuous improvements, Watson consists of more than 30 open APIs across four categories: language, speech, vision, and data insights. “Watson [today] has the ability to read through and understand unstructured data like a human and pull out the relevant answers and insights, and now images,” Schneider said. She then illustrated some recent examples of Watson’s power. The first, and arguably the most significant, was a joint effort with Memorial Sloan Kettering Cancer Center. The goal was to train Watson to think like a doctor, in order to assist oncologists working with breast and colon cancers. IBM’s team fed Watson a steady diet of medical journals, clinical trial results, encyclopedias, and textbooks to teach it the language of medicine.

From there, Watson could look at a patient’s individual information, compare it against what the system knows about medicine, and then come back with recommended treatment options. Schneider said it’s still up to the doctor to decide how to use that information; it’s not a question of man versus machine, but rather of how machines can enhance what humans already do. In this case, the goal was to empower doctors so that they don’t have to read an impossible 160 hours’ worth of material each week — an actual estimate of how much new research is published on a weekly basis!


Next up was an application for the music industry. Quantone delivers in-depth data on music consumption. It not only leverages structured metadata the way Pandora, Spotify, and other music services do — the genre of music, the number of beats in songs, and so on — but, using IBM Watson technologies, it can also process unstructured data such as album reviews and artist-curated content via natural language classification. Using Quantone, as Schneider put it, an end user can say, “I’m looking for a playlist reminiscent of Michael Jackson from a certain time period,” and get an answer that also pulls in and considers unstructured data.

Content creators can also benefit from AI-infused programming. Sampack offers algorithmically and artistically generated samples that are royalty-free. It’s essentially an automated license-free music sample generator. It takes in descriptions of tones (such as “dark” or “mellow”) and then translates them into an audio sample using Watson’s Tone Analyzer capability. Sampack can understand descriptions and emotions and translate them into music effects, sounds, and filters.

IBM also published a cookbook recently, which, as Schneider pointed out, isn’t something you would have expected to hear before it happened. The book is called Cognitive Cooking with Chef Watson: Recipes for Innovation from IBM & the Institute of Culinary Education. Watson analyzed the molecular composition of foods and figured out what goes well together; given inputs such as specific ingredients and exclusions (such as gluten or other allergy triggers), it then creates 100 new recipes for that query. It doesn’t search through an existing recipe database for these, either; the recipes are genuinely new. The first recipe is usually pretty normal; by the time it gets to recipe 100, it’s “a little out there,” as Schneider put it.

In the art world, World of Watson was a recent exhibit (pictured below) by Stephen Holding in Brooklyn, created in collaboration with IBM Watson using a variant of a color API. Watson mined Watson-specific brand imagery and came up with a suggested color palette for Holding to use. The goal was to evoke innovation, passion, and creativity with an original piece of art.

[Image: Stephen Holding’s World of Watson exhibit]

Finally, IBM Watson partnered with fashion label Marchesa, and model Karolina Kurkova, for the recent Metropolitan Museum of Art gala. Watson was tasked with coming up with a new dress design that was “inherently Marchesa and true to the brand,” and it was involved every step of the way. Using another color API variant, Watson mined hundreds of images from Marchesa, including model photos, to get a feel for the brand’s color palette, Schneider said. Then Inno360 (an IBM Watson ecosystem partner) used several APIs and considered 40,000 options for fabric. With inputs from Marchesa that were consistent with the brand, while also evaluating fabrics that would work with embedded LEDs, Watson came up with 35 distinct choices. The third step involved embedding the LED technology into the dress and driving it with the Tone Analyzer, with specific colors lit up through the flowers.

 

Today, anyone can get started working with IBM Watson by heading to IBM BlueMix and signing up for a Watson Developer Cloud account. Back in February 2015, IBM boosted Watson Developer Cloud with speech-to-text, image analysis, visual recognition, and the ability to analyze tradeoffs between different drug candidates. In July last year, Watson gained a new Tone Analyzer that can scan a piece of text and critique the tone of your writing. We’ve also interviewed IBM’s Jerome Pesenti on many of the latest Watson developments.
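As a taste of what getting started looks like, here's a hedged sketch of calling the Tone Analyzer service over plain REST with the credentials Bluemix issues. The endpoint, version date, and response shape are assumptions based on the 2016-era service, so check IBM's current documentation:

```python
# Hedged sketch: querying the Tone Analyzer REST API roughly as the
# 2016-era Watson Developer Cloud exposed it. The URL, version date,
# and credential scheme are assumptions; check IBM's current docs.
# Requires the requests library.
import requests

URL = "https://gateway.watsonplatform.net/tone-analyzer/api/v3/tone"

resp = requests.post(
    URL,
    params={"version": "2016-05-19"},
    json={"text": "I am thrilled with how this project turned out!"},
    auth=("service-username", "service-password"),  # Bluemix credentials
)
resp.raise_for_status()
print(resp.json())  # nested tone categories with per-tone scores
```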