Nvidia’s Ansel, VR Funhouse apps will enhance screenshots, showcase company’s VR technology

Friday night’s big GTX 1080 unveil was the talk of the tech community, but it wasn’t the only project Nvidia revealed this past weekend. The company also demonstrated a pair of software projects that highlight both its efforts in VR and its ability to beautify game screenshots.

Nvidia’s Ansel (named after Ansel Adams, the famous American environmentalist and photographer) is a new tool designed to allow users to create screenshots and even 360-degree “bubble” images. The ability to take screenshots in games is nothing new, of course, but Ansel allows you to step “outside” your character and manipulate the camera position before settling on a shot.


One of the frustrating things about trying to create “perfect” screenshots in gaming is that how easy it is to do so largely depends on whether the camera is a flexible, powerful, and intuitive tool or something kludged together by three chimpanzees and a rat after six years of perpetual crunch time. Ansel aims to reduce this type of problem by giving gamers powerful tools to pose and create screenshots — provided that developers support the feature, at least.

Ansel allows you to freeze time inside a game and adjust the camera position to anything you like — even in games that don’t allow a completely free camera already. It then scales up the resolution of the final screenshot to as high as 32x native resolution (4.5 gigapixels). These truly enormous image files — because seriously, that’s going to be one hell of a file size — can then be downsampled for an incredibly high-resolution focus on one specific area.
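To put those figures in perspective, here’s a quick back-of-the-envelope sketch of what that kind of upscale works out to. Nvidia hasn’t spelled out whether “32x” applies per axis or to the total pixel count, so this sketch assumes a per-axis multiplier on a 1080p frame and a raw 8-bit RGBA buffer — treat the numbers as illustrative, not official (a higher native resolution would be needed to hit the 4.5-gigapixel figure under this reading):

```python
def upscale(width, height, factor):
    """Scale each axis by `factor`; report pixel count and raw RGBA buffer size."""
    w, h = width * factor, height * factor
    pixels = w * h
    raw_bytes = pixels * 4  # 4 bytes/pixel for 8-bit RGBA; OpenEXR output would be larger
    return w, h, pixels, raw_bytes

w, h, px, size = upscale(1920, 1080, 32)
print(f"{w}x{h} = {px / 1e9:.2f} gigapixels, ~{size / 2**30:.1f} GiB uncompressed")
```

Even under this conservative reading, a single capture lands north of two gigapixels and several gigabytes before any compression — hence the need to downsample for viewing.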


Other features include the ability to apply specific filters (Instagram for games, we suppose), capture and export in OpenEXR, and the option to capture 360-degree “bubbles” for viewing in VR. Nvidia announced at the same event that it has released an Nvidia VR Viewer app for Google Cardboard (sadly, only Android is supported as of this writing). You’ll be able to adjust the yaw, pitch, and roll of the camera, change the brightness or color, and create 360-degree shots (a gallery of these is available on Nvidia’s website). It’ll be supported on all Nvidia GPUs from the 600 family forwards, which means Kepler and Maxwell users will still have access to this tool.


The only downside is that support will be baked in on a game-by-game level, not implemented across the board at this point. Whether Nvidia will be able to convince game devs to standardize on a set of capabilities that enable Ansel is unclear. But since support will ship in some games that have already been out for quite some time, it’s clearly something that can be patched in rather than required from Day 1.

The other major Nvidia announcement on the software front was its new VR Funhouse. This is a clever way for Nvidia to highlight the advances of both its VRWorks SDK and its overall technology — the various mini-games in VR Funhouse showcase technologies like Nvidia Hairworks, particle effects, Nvidia Flow (used for simulating fire and water) and PhysX.

Nvidia Flex (particle-based physics simulation) and the company’s physically simulated audio engine (Nvidia VRWorks Audio) are also used in Funhouse, which is best understood as a tech demo to showcase cutting-edge capabilities in a series of mini-games. It should also serve as a fun introduction to VR technology for early adopters and users who want to show visitors an easy, simple series of mini-games with low stakes and friendly controls.

We didn’t have the opportunity to demo much of Nvidia’s VR work this weekend, but the Nvidia audio demo we attended was quite good — the ability to simulate position based on where we were in the virtual space was impressive. Whether or not this capability will find much uptake in the real world, however, is less clear — multiple companies over the years have tried to convince game devs to implement impressive audio capabilities (most recently AMD, with its TrueAudio DSP) and the vast majority of developers seemingly can’t be bothered.

Nvidia will also use VR Funhouse to support its VRWorks SLI capabilities. While most VR games and apps to date are single-GPU affairs, both AMD and Nvidia are working hard to change that. Nvidia will support VR SLI with VR Funhouse, dedicating one GPU to rendering each eye. Unlike Nvidia Ansel, VR Funhouse appears to be a Pascal-only title.

Apple denies plan to kill music downloads as evidence mounts that Apple Music can delete existing libraries without permission

Yesterday, reports claimed that Apple was drawing up plans to leave digital music sales. The company was said to be in the “when, not if” phase, and was debating leaving the market on either a two-year or a three-to-four-year timeline. Today, the Cupertino company has stated that these reports are “not true,” and that it has no imminent plans to stop selling downloadable music.

The reason the rumor that Apple might leave downloads likely spread as far as it did is simple: iTunes sales have been falling for years, and they aren’t likely to recover. Streaming services have been siphoning revenue from downloads as consumers move away from iTunes and towards Spotify, Pandora, and of course, Apple Music.


This slide shows how revenue splits have shifted over time, with revenue earned by streaming services up $957 million while digital downloads have declined by $452 million. The gap between the two figures implies either that a significant number of customers who didn’t previously purchase music have signed up for streaming services, or that the average revenue per streaming customer is significantly higher than what people previously spent buying music.
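The arithmetic behind that inference is simple enough to sketch. If every streaming dollar merely replaced a download dollar, the two figures would cancel out; the surplus is what implies new customers or higher per-customer spend:

```python
streaming_gain = 957   # $M year-over-year increase in streaming revenue
download_loss = 452    # $M year-over-year decline in download revenue

# Net growth in recorded-music revenue across the two categories
net_growth = streaming_gain - download_loss
print(f"Net revenue growth: ${net_growth}M")
```

That $505 million surplus is money that didn’t simply migrate from one bucket to the other.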

When the iTunes Store first launched, many analysts and pundits fretted about the loss of control inherent to digital media as opposed to physical CDs. Streaming music degrades that ownership further — if you download music in a file format like MP3, you can typically do what you please with that file thereafter. Services like Apple Music are meant to blur the line between downloadable songs and streaming, and Apple touts the ability to match songs you have in your library to music on its own streaming service. In theory, this uploads copies of your songs into a personal cloud that Apple can then stream to you on demand, without changing anything about your hard drive or the data stored therein.

Unfortunately, Apple Music doesn’t always get its facts straight. Roughly a week ago, music enthusiast and Apple Music subscriber James Pinkstone published a blog post detailing how roughly 60GB of music had been wiped off his hard drive by Apple Music. An Apple customer service representative told him this was working as intended, while others claimed that this couldn’t have happened — Pinkstone must have made a mistake that wiped his system.

Now a different person, Robert Etropolsky, has come forward with a similar story — 60GB of music wiped off his system (Pinkstone lost 122GB). The YouTube video above addresses claims that Etropolsky or Pinkstone somehow did something wrong to delete their own massive music collections; Etropolsky shows that his music was previously stored in a Time Machine backup, reiterates that the music files that were once on his system have vanished since he subscribed to Apple Music, and notes that there’s no way to replace them. When he downloads files from Apple Music, they’re downloaded in an encrypted Apple format and will be deleted if he ever stops being an Apple Music subscriber.


In his particular case, the problem is exacerbated because much of his collection was based on rare recordings, demo tapes, and other hard-to-find versions of songs that were “matched,” uploaded, and then deleted from his hard drive. He also demonstrates that Apple’s music matching service isn’t as foolproof as the company thinks it is — in one case it deleted a song off his hard drive while offering a completely different piece of music as an uploaded alternative.

It’s still not clear how this happened or what’s responsible for the issue, but problems like this aren’t just going to go away. Streaming services can be enormously convenient, but high-profile stories like this are one reason to keep your digital music collection far away from a service like Apple Music. For some, the convenience simply isn’t worth the risk.

New Windows 10 build kills controversial password-sharing Wi-Fi Sense

When Microsoft announced Windows 10, it added a feature called Wi-Fi Sense that had previously debuted on the Windows Phone operating system. Wi-Fi Sense was a password-sharing option that allowed you to share Wi-Fi passwords with your friends and contacts in Skype, Outlook, and Facebook. Here’s how Microsoft described the feature last year:

“When you share Wi-Fi network access with Facebook friends, Outlook.com contacts, or Skype contacts, they’ll be connected to the password-protected Wi-Fi networks that you choose to share and get Internet access when they’re in range of the networks (if they use Wi-Fi Sense). Likewise, you’ll be connected to Wi-Fi networks that they share for Internet access too. Remember, you don’t get to see Wi-Fi network passwords, and you both get Internet access only. They won’t have access to other computers, devices, or files stored on your home network, and you won’t have access to these things on their network.”


There were security concerns related to Windows 10’s management of passwords and whether or not said passwords could be intercepted on the fly. To our knowledge, no security breaches or problems were associated with Wi-Fi Sense. According to Microsoft, few people actually used the feature and some were actively turning it off. “The cost of updating the code to keep this feature working combined with low usage and low demand made this not worth further investment,” said Gabe Aul, Microsoft’s Windows Insider czar.

These changes are incorporated into the latest build of Windows, Windows 10 Insider Preview 14342. Other changes in this build include:

  • Microsoft Edge extensions are now downloaded from the Windows Store (Adblock and Adblock Plus are now available for download);
  • Swipe gestures are now supported in Microsoft Edge;
  • Bash on Ubuntu on Windows now supports symlinks (symbolic links);
  • Certain websites can now be directed to open in apps instead, ensuring that one of the mobile Internet’s worst features will be available in Windows 10.

Microsoft has also fixed playback errors with DRM-protected content from Groove Music, Microsoft Movies & TV, Netflix, Amazon Instant Video, and Hulu. The company fixed audio crashes for users who play audio to a receiver using S/PDIF or HDMI while using Dolby Digital Live or DTS Connect, and fixed some bugs that prevented common keyboard commands like Ctrl-C, Ctrl-V, or Alt-Space from working in Windows 10 apps. Full details on the changes and improvements to the build can be found here.

One final note:  Earlier this year, we theorized that Microsoft might extend the free upgrade period longer than the July 29 cutoff, especially if it was serious about hitting its 1 billion user target. The company has since indicated that it has no plans to continue offering Windows 10 for free after July 29. If you want to upgrade to Windows 10 or are still on the fence about whether or not to accept Microsoft’s offer, you only have a little over two months to make the decision.

Nvidia’s excellent first quarter buoyed by gaming, automotive wins, and data centers

Nvidia announced first quarter results for its fiscal year 2017 yesterday, and the firm’s results were excellent — particularly in a market where companies like AMD and Intel have been taking a hammering. First-quarter revenue was up 13% to $1.3 billion, with strong gains in gaming, data centers, and the automotive market.

The slide below breaks down Nvidia’s revenue in two different ways. Reportable segment revenue reflects Nvidia’s chosen method of grouping its businesses (Tegra, GPU, Other). Revenue by market platform provides additional insight into how each individual area of the company is performing. One does not map cleanly to the other, but it’s worth considering both data sets.


These two charts suggest that the bulk of Nvidia’s growth is linked to its strong performance in gaming, data centers, and automotive sales. The drop-off in the OEM and IP market was most likely caused by declines in Nvidia’s original Tegra mobile business and offset by a significant uptick in demand for Nvidia’s automotive designs. Nvidia logged a 63% increase in data center revenue, driven by its efforts to position itself at the center of both the driverless car initiative and deep learning networks. Both of these efforts have been front and center during a number of recent company demos and presentations.

Gaming also saw strong gains year-on-year, and Nvidia implied this was due to increased sales volume in all areas rather than increased ASPs. The 8% quarterly decline is in line with seasonal projections, which means Nvidia has probably taken market share from AMD over the past 12 months. The company’s recent GTX 1080 and 1070 announcements have set the stage for an aggressive move to take over the high end of the market. AMD countered the GTX 980 Ti with the Fury family in 2015, but Polaris isn’t a high-end uber-GPU, and Nvidia has obviously planned to sweep both the high-end market in general and the VR space specifically.

AMD hasn’t formally announced Polaris positioning or performance yet, but the rumor mill suggests it’s an extremely potent competitor in much less expensive markets that constitute the actual bulk of the GPU space. For all the ink lavished on high-end cards, very few people actually buy a $600 GPU. Most of the market is in the $150-$250 space, and if AMD launches a strong midrange part, it could seize leadership in that area. We don’t know yet how all these variables will play out.

Nvidia’s long-term success

It’s interesting to look at where Nvidia is now as opposed to what conventional wisdom predicted roughly eight years ago. Back then, AMD and Intel both had plans to combine GPUs with CPUs to create products that would likely kill the low-end GPU markets. By and large this happened, which is why both AMD and Nvidia focus on the $100+ space these days. The cards sold below that price point tend to be older hardware from previous low-end generations.

Nvidia poured enormous resources into Tegra to win early share in mobile — Tegra 2 was one of the most popular smartphone and tablet processors in the early dual-core days — before pivoting the entire segment towards automotive designs. Deep learning and HPC, where GeForce cards are widely deployed, are other markets Nvidia has largely dominated. Until quite recently, AMD didn’t seriously compete for these spaces, and the company has a long way to go to ramp up its resources to match Team Green.

The flip side to this is that Nvidia’s own Project Denver CPU core hasn’t amounted to much in the market to date, and Nvidia’s efforts to create a comprehensive SoC platform with Icera’s modem technology also failed. Like Microsoft and Intel, Nvidia has had difficulty breaking out of its core GPU market — but one could argue that it’s also spent less money chasing alternatives that haven’t panned out. Microsoft and Intel have both pivoted their business strategies and created new products, but both also threw huge amounts of money at mobile for a number of years.

Overall, the company is well positioned for FY 2017 (calendar 2016). We’ll see if and how that changes when Polaris launches this summer. And just to be clear: Knowledgeable sources ET has spoken to have confirmed that Polaris is on-track for a mid-year launch. Rumors that AMD has pulled Vega in for an October launch are just that — rumors.

Micron confirms GDDR5X now in mass production

This announcement might seem a bit anticlimactic now that Nvidia has already stated that the GeForce GTX 1080 will use GDDR5X RAM, but Micron has announced that GDDR5X is now in mass production, months ahead of its originally expected schedule.

One interesting thing about the GTX 1080 is that the RAM it uses sits at the lower end of the range Micron says it can ship. GDDR5X has previously been touted as a solution capable of providing up to 12 Gbps per pin in quad-data-rate (QDR) mode. Nvidia, in contrast, has stated that the GTX 1080 uses just 10 Gbps memory — towards the low end of the scale. One potential advantage of starting at the lower end, however, is that the company has plenty of headroom to scale in the future.

Absolute bandwidth on the GTX 1080 is 320GB/s, which nearly matches the GTX 980 Ti while retaining the GTX 980’s 256-bit memory bus. Keeping the memory bus narrow is one way to control die size and lower production costs; wide memory buses eat up die space and consume a significant amount of power.
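Those bandwidth figures fall out of a simple formula — per-pin data rate times bus width, divided by eight to convert bits to bytes. A minimal sketch:

```python
def peak_bandwidth_gb_s(data_rate_gbps, bus_width_bits):
    """Peak memory bandwidth in GB/s: per-pin data rate (Gb/s) * bus width (bits) / 8."""
    return data_rate_gbps * bus_width_bits / 8

gtx_1080 = peak_bandwidth_gb_s(10, 256)   # GDDR5X at 10 Gbps on a 256-bit bus
gtx_980ti = peak_bandwidth_gb_s(7, 384)   # GDDR5 at 7 Gbps on a 384-bit bus
print(gtx_1080, gtx_980ti)
```

The faster per-pin rate is exactly what lets the GTX 1080’s 256-bit bus land within spitting distance of the 980 Ti’s 384-bit, 336GB/s configuration.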

Brand-new standard, or another GDDR4?

We suspect that GDDR5X will be similar to GDDR4. GDDR4 was introduced by AMD in 2006 as a way to improve memory bandwidth and GPU performance. Nvidia never switched to GDDR4, preferring instead to retain GDDR3 until GDDR5 became available. AMD used GDDR4 in various products from 2006 to 2009, at which point it moved to GDDR5.

AMD hasn’t said whether or not it will use GDDR5X in upcoming products, but the fact that Nvidia is only using GDDR5X for the GTX 1080 suggests that the memory is still rather expensive; the GTX 1070 uses GDDR5. AMD’s Polaris is expected to target the mainstream PC market, which means it’ll likely use GDDR5 as well. Meanwhile, both AMD and Nvidia are planning late-year launches for next-generation products, both of which are confirmed to use HBM2.


The implication here is that GDDR5X’s market position long-term will depend on whether or not AMD and Nvidia see a need for the memory to offset higher costs in HBM2. If HBM2 costs come down relatively quickly, there may not be much room left for GDDR5X in the market, given that GDDR5 is produced by multiple companies and is relatively cheap (Micron, thus far, is the only company building GDDR5X). If, on the other hand, interposer costs stay high, we may still see all three standards in-market for some period of time.

What’s highly unlikely is that we’ll see any single GPU supporting both HBM2 and GDDR5X. Sources we’ve spoken to have indicated that supporting both memory standards simultaneously in the same silicon would be extremely difficult for minimal benefits. Supporting GDDR5 and 5X in the same silicon is much simpler than an HBM2 + GDDR5X configuration.

Linksys bucks trend, will support open source firmware on WRT routers

We’ve previously covered how some router companies are planning to kill their support for open-source firmware updates after June 2. But one company, Linksys, has explicitly stepped forward to guarantee that some of its devices will remain open-source compatible. The June 2 date is from the FCC, which has mandated that router manufacturers prevent third-party firmware loading, in order to ensure that devices cannot be configured to operate in bands that interfere with Doppler weather radar stations.

According to the FCC’s regulations and statements, open source firmware isn’t banned — it just has to be prevented from adjusting frequencies into ranges that conflict with other hardware. The problem is, this is considerably more difficult than just banning open source firmware altogether, which is why some companies have gone the lockdown route. Linksys won’t be retaining firmware compatibility on all its products, but the existing WRT line will remain compatible. Starting on June 2, new routers will store their RF data in a different location from the rest of the data on the router.

The router that started it all

“They’re named WRT… it’s almost our responsibility to the open source community,” Linksys router product manager Vince La Duca told Ars. WRT is a naming convention that dates back more than a decade to the original WRT54G. That router was the first product supported by third-party firmware after Linksys was forced to release the source code for the device under the terms of the General Public License (GPL). This writeup from 2005 examines why third-party firmware became popular for the WRT54G if you feel like taking a walk down memory lane.

That said, we’re definitely seeing open-source firmware support being used as a marketing strategy. Linksys will lock down all devices that aren’t specifically marketed as supporting open-source firmware. If sales of WRT devices spike as a result, other companies will almost certainly invest in creating support of their own. While this would fill the niche for open-source-compatible devices, it’ll come at the cost of part of what made these devices popular in the first place. Until now, projects like DD-WRT or OpenWRT were ways of getting the performance and features of a much more expensive router baked into much cheaper products.

It’s not clear what other manufacturers will do. Making WRT continue to work under the FCC’s guidelines required a three-way collaboration between Marvell, Linksys, and OpenWRT authors, as Ars Technica details. Most companies apparently weren’t prepared to make this kind of transition. It’s not clear when they’ll respond or how enthusiastic they’ll be about making changes to existing products.

Backblaze releases billion-hour hard drive reliability report

Backblaze has released its reliability report for Q1 2016 covering cumulative failure rates both by specific model numbers and by manufacturer. The company noted that as of this quarter, its 61,590 drives have cumulatively spun for over one billion hours (that’s 42 million days or 114,155 years, for those of you playing along at home).

Backblaze’s reports on drive lifespan and failure rates offer a rare peek into real-world hard drive longevity. One of the most common questions from readers is which hard drives are the most reliable. It’s also one of the most difficult to answer. Companies do not release failure data, and the handful of studies on the topic typically cloak vendor names and model numbers. As always, I recommend taking this data with a grain of salt: Backblaze uses consumer drives in a demanding enterprise environment, and while the company has refined its storage pod design to minimize drive vibration, the average Backblaze hard drive does far more work in a day than a consumer HDD sitting in an external chassis.

For those of you wondering if drive vibration actually matters, here’s a video of someone stopping a drive array by yelling at it.

Here’s Backblaze’s hard drive failure stats through Q1 2016:


The discrepancy between the 61,590 drives Backblaze deploys and the 61,523 drives listed in this chart is that the company doesn’t show data unless it has at least 45 drives. That seems an acceptable threshold given the relatively small gap. Backblaze also notes that the 8.63% failure rate on the Toshiba 3TB is misleadingly high — the company has just 45 of those drives, and one of them happened to fail.
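The reason one dead drive produces such an outsized percentage is that the annualized failure rate is just failures divided by accumulated drive-time, scaled to a year. A sketch of the math — the drive-day total below is a hypothetical round number (45 drives over roughly one quarter), not Backblaze’s exact figure:

```python
def annualized_failure_rate(failures, drive_days):
    """AFR (%): failures per accumulated drive-year of service."""
    drive_years = drive_days / 365
    return 100 * failures / drive_years

# One failure across 45 drives, each running for roughly a 92-day quarter:
afr = annualized_failure_rate(1, 45 * 92)
print(f"{afr:.2f}%")
```

With a fleet that small, a single failure swings the rate by nearly nine percentage points — which is why the Toshiba 3TB number deserves the asterisk Backblaze gives it.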

Here’s the same data broken down by manufacturer. This chart combines all drive data, regardless of size, for the past three years.

Drive stats

HGST is the clear leader here, with an annual failure rate of just 1% for three years running. Seagate comes out the worst, though we suspect much of that rating was warped by the company’s crash-happy 3TB drive. Backblaze pulled the 3TB drives from service just over a year ago, and Seagate’s drive failure rate fell precipitously as a result. Western Digital now holds that dubious honor, though the company’s ratings have also improved in the past year.

Asked why it sources the vast majority of its drives from HGST or Seagate, Backblaze reported that it has little choice:

These days we need to purchase drives in reasonably large quantities, 5,000 to 10,000 at a time. We do this to keep the unit cost down and so we can reliably forecast our drive cost into the future. For Toshiba we have not been able to find their drives in sufficient quantities at a reasonable price. For WDC, we sometimes get offered a good price for the quantities we need, but before the deal gets done something goes sideways and the deal doesn’t happen. This has happened to us multiple times, as recently as last month. We would be happy to buy more drives from Toshiba and WDC, if we could, until then we’ll continue to buy our drives from Seagate and HGST.

The company notes that 4TB drives continue to be the sweet spot for building out its storage pods, but that it might move to 6, 8, or 10TB drives as the price on the hardware comes down. Overall it’s an interesting look at a topic we rarely get to explore.

New medical tech coming to the rescue for the vision-impaired

Ever since the invention of the magnifying glass nearly 25 centuries ago, we’ve been using technology to help us see better. For most of us, the fix is fairly simple, such as a pair of glasses or contact lenses. But for many with more seriously impaired vision — estimated at around 285 million people worldwide — technology has been short on answers until fairly recently. Doctors and scientists are making up for lost time though, with a slew of emerging technologies to help everyone from the mildly colorblind to the completely blind. They’re also part of a wide swath of new medical advances we’ll be covering all this week in our new Medical Tech series.

Superman glasses for the vision-impaired

These space-age-style smart glasses from VA-ST can help the vision-impaired see what is around them.

We’re all familiar with the accessibility options available on our computers, including larger cursors, high-contrast fonts, and magnified screens. But those do nothing to help the vision-impaired navigate the rest of their day. Instead, a number of different “smart glasses” have been invented that help make the rest of the world more accessible.

These glasses work by taking the image from one or more cameras — often including a depth sensor — and processing it to pass along an enhanced version of the scene to a pair of displays in front of the eyes. Deciding on the best way to enhance the image — autofocus, zoom, object outlining, etc. — is an active area of research, as is the best way for the wearer to control those enhancements. Right now the glasses tend to require an external box that does the image processing and has knobs for controlling settings. Emerging technologies, including eye tracking, will provide improved ways to control these devices, and better object-recognition algorithms will also improve their utility. One day these glasses may know enough to highlight house keys, a wallet, or other commonly needed but sometimes hard-to-locate possessions.

One of the more clever solutions comes out of Oxford, via Google Global Impact Challenge winner VA-ST. I had a chance to try out VA-ST’s prototype Smart Specs last year, and can see how they could be very helpful for those who otherwise can’t make out details of a scene. It’s hard, though, to get a real feel for their effectiveness unless you are actually suffering from a particular vision impairment. Some work is being done to simulate these conditions and allow those with normal vision to evaluate solutions. But until then, willing participants with uncommon vision disorders remain a scarce resource for scientists attempting to run trials of their devices.

The VA-ST system highlights edges and objects to make important parts of a scene more visible.

Most solutions available today suffer not only from technical issues like how they are controlled; they also cut off eye contact and are socially awkward, which has hampered their adoption. Less-obtrusive devices using waveguides, like the ones developed by Israeli startup Lumus, will be needed to overcome this issue. Startup GiveVision is already demoing a version of its vision-assisting wearable using Lumus waveguides to help make it more effective and less obtrusive. Similar advanced augmented reality display technology is also being used in Microsoft’s HoloLens and Magic Leap’s much-rumored device. While it is mostly mainstream AR devices like those that are driving the technology to market, there is no doubt the medical device sector will be quick to take advantage of it.

Other efforts to enhance awareness of the visual world, including EyeMusic, render salient aspects of the scene — such as distance to the closest object — as audible tones. The OrCam system, for example, recognizes text and reads it aloud to the wearer. These systems have the advantage that they don’t require placing anything over the wearer’s eyes, so they don’t interfere with eye contact.

Retinal implants provide sight for many of the blind

In many blind people — particularly those suffering from Retinitis Pigmentosa and age-related Macular Degeneration — the retinal receptors may be missing, but the neurons that carry information from them to the brain are intact. In that case, it is sometimes possible to install a sensor — an artificial retina — that relays signals from a camera directly to the vision neurons. Since the pixels on the sensor (electrodes) don’t line up exactly with where the rods and cones would normally be, the restored vision isn’t directly comparable with what is seen with a natural retina, but the brain is able to learn to make sense of the input and partial vision is restored.

Palanker's team is also looking at using special glasses to provide wireless data to the retinal implant.

Retinal implants have been in use for over a decade, but until recently have only provided a very minimal level of vision — equivalent to about 20/1250 — and have needed to be wired to an external camera for input. Now, though, industry leader Retina Implant has introduced a wireless version with 1,500 electrodes on its 3mm-square surface. Amazingly, previously blind patients suffering from Retinitis Pigmentosa have been able to recognize faces and even read the text on signs. Another wireless approach, based on research by Stanford professor Daniel Palanker’s lab, involves projecting the processed camera data into the eye as near-IR light — and onto the retinal implants — from a special pair of glasses. The implants then convert that to the correct electrical impulses to transmit to the brain’s neurons. The technology is being commercialized by vision tech company Pixium Vision as its PRIMA Bionic Vision Restoration System, and is currently in clinical trials.
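A quick back-of-the-envelope calculation shows why 1,500 electrodes on a 3mm-square chip is notable. Assuming the electrodes sit on a uniform square grid (an idealization — real layouts vary), the center-to-center pitch works out to under 80 microns:

```python
import math

electrodes = 1500
chip_side_mm = 3.0
area_mm2 = chip_side_mm ** 2

density = electrodes / area_mm2                      # electrodes per square mm
pitch_um = math.sqrt(area_mm2 / electrodes) * 1000   # idealized center-to-center spacing

print(f"{density:.0f} electrodes/mm^2, ~{pitch_um:.0f} um pitch")
```

That spacing is still far coarser than the retina’s photoreceptor mosaic, but it’s fine enough to explain how patients can begin to resolve faces and large text.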

Even color-blind people can benefit from clever glasses

While severe vision disorders affect a large number of people, even more suffer from the much more common problem of color blindness. There are many types of color blindness — some caused by missing the cones needed to discriminate one or more of the primary colors. But many who have what is commonly called “red-green colorblindness” simply have cones with sensitivities that are too close together to distinguish between red and green. Startup EnChroma stumbled across the idea of filtering out some of the overlap after noticing that surgeons were often taking their OR glasses to the beach to use as sunglasses. From there, the company worked to tune the effect to assist with color deficiency — the result being less overall light let through its glasses, but a better ability to discriminate between red and green. If you’re curious whether the company’s glasses can help you, it offers an online test of your vision.

Accessibility tech breakthroughs for the blind

Startup blitab is adding braille to a standard tablet.

There are plenty of limits on what medical technology can currently accomplish for those who are blind or vision-impaired. Fortunately, accessibility technology has also continued to advance. Most of us are familiar with magnified cursors, zoomed-in text, and speech input and output, but there are other, more sophisticated tools available — too many to list here. For example, startup blitab is creating a tablet for the world’s estimated 150 million braille users that features a tactile braille interface as well as speech input and output. On the lighter side, Pixar is developing an application that will provide a narrative description of the screen while viewers watch.

However good your vision, you’re likely to benefit from medical technology for improving it at some point, since the incidence of vision-related conditions increases dramatically with age. Everyone eventually suffers from at least relatively minor conditions like presbyopia (the inability of the eye to accommodate near and far focusing), and over 25% of those who make it to age 80 suffer from major vision impairment. Even for those of us with only minor vision issues, the advent of smartphone apps that help measure our vision and diagnose possible problems will help lower costs. With the rapid advances in microelectronics, surgical technology, and augmented reality, though, there are likely to be some amazing treatments for these conditions in the future.

IBM researchers announce major breakthrough in phase change memory

For years, scientists and researchers have looked for the so-called Holy Grail of memory technology — a non-volatile memory standard that’s faster than NAND flash while offering superior longevity, higher densities, and ideally, better power characteristics. One of the more promising technologies that’s been in development is phase-change memory, or PCM. IBM researchers announced a major breakthrough in PCM this week, declaring that they’ve found a way to store up to three bits of data per “cell” of memory. That’s a significant achievement, given that previous work in the field was limited to a single bit of data per memory cell.

Phase change memory exploits the properties of chalcogenide alloys. Applying heat changes the material from an amorphous mass into a crystal lattice with significantly different electrical properties.


Scientists have long known that chalcogenide could exist in states between fully crystalline and fully amorphous, but building a solution that could exploit these in-between states to store more data has been extremely difficult. While phase-change memory works on very different principles than NAND flash, some of the problems with scaling NAND density are conceptually similar to those faced by PCM. Storing multiple bits of data in NAND flash is difficult because the gap between the voltage levels that distinguish each stored value shrinks as the number of bits per cell grows. This is also why TLC NAND flash, which stores three bits of data per cell, is slower and less durable than MLC (2-bit) or SLC (single-bit) NAND.
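The scaling problem described above is easy to quantify. The short sketch below is our own illustration of the arithmetic — the fixed sensing window is an assumption, not a spec from any real device — showing how the margin between adjacent storage levels collapses as bits per cell increase:

```python
# Why multi-bit cells are harder to read reliably: n bits per cell
# requires 2**n distinguishable levels, so the margin between adjacent
# levels inside a fixed sensing window shrinks rapidly.

def level_margin(bits_per_cell, window=1.0):
    """Spacing between adjacent levels, in arbitrary sensing-window units."""
    levels = 2 ** bits_per_cell
    return window / (levels - 1)

for bits, name in [(1, "SLC"), (2, "MLC"), (3, "TLC")]:
    print(f"{name}: {2**bits} levels, margin = {level_margin(bits):.3f}")
# SLC: 2 levels, margin = 1.000
# MLC: 4 levels, margin = 0.333
# TLC: 8 levels, margin = 0.143
```

Going from one to three bits per cell cuts the read margin by a factor of seven, which is why multi-bit cells of any technology need slower, more careful sensing and stronger error correction.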

In their demonstration, IBM’s researchers stored three bits of data per cell in a 64K-cell array, at elevated temperatures, across one million endurance cycles.

“Phase change memory is the first instantiation of a universal memory with properties of both DRAM and flash, thus answering one of the grand challenges of our industry,” said Dr. Haris Pozidis, an author of the paper and the manager of non-volatile memory research at IBM Research – Zurich. “Reaching 3 bits per cell is a significant milestone because at this density the cost of PCM will be significantly less than DRAM and closer to flash.”

Here’s how the PR blast describes the breakthrough:

To achieve multi-bit storage IBM scientists have developed two innovative enabling technologies: a set of drift-immune cell-state metrics and drift-tolerant coding and detection schemes.

More specifically, the new cell-state metrics measure a physical property of the PCM cell that remains stable over time, and are thus insensitive to drift, which affects the stability of the cell’s electrical conductivity with time. To provide additional robustness of the stored data in a cell over ambient temperature fluctuations a novel coding and detection scheme is employed. This scheme adaptively modifies the level thresholds that are used to detect the cell’s stored data so that they follow variations due to temperature change. As a result, the cell state can be read reliably over long time periods after the memory is programmed, thus offering non-volatility.

“Combined these advancements address the key challenges of multi-bit PCM, including drift, variability, temperature sensitivity and endurance cycling,” said Dr. Evangelos Eleftheriou, IBM Fellow.
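In rough terms, the detection side of the scheme amounts to re-deriving the decision thresholds at read time rather than fixing them at programming time. The sketch below is our own simplified model of that idea, not IBM’s published algorithm — the uniform-drift model, the reference-cell approach, and all names are our assumptions:

```python
# Simplified model of drift-tolerant detection (an illustration, not
# IBM's algorithm): thresholds are re-derived at read time from
# reference cells whose programmed levels are known, so a drift that
# shifts all readings by a similar amount cancels out.

def adaptive_thresholds(reference_readings):
    """Place each decision threshold midway between adjacent known
    levels, as measured *now* from reference cells (one per level)."""
    levels = sorted(reference_readings)
    return [(a + b) / 2 for a, b in zip(levels, levels[1:])]

def classify(reading, thresholds):
    """Return the stored symbol (0..n-1) implied by a cell reading."""
    return sum(reading > t for t in thresholds)

# Four nominal levels, all drifted uniformly by +0.3 since programming:
drift = 0.3
refs = [lvl + drift for lvl in (0.0, 1.0, 2.0, 3.0)]
th = adaptive_thresholds(refs)      # ~[0.8, 1.8, 2.8]
print(classify(2.0 + drift, th))    # → 2 (correct despite the drift)
```

Because the thresholds track the reference cells, a fixed-threshold reader that would misclassify drifted cells still decodes correctly here — which is the essence of a drift-tolerant detection scheme.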

There’s still a great deal of work to do before phase-change memory can be considered a candidate to replace NAND flash or DRAM in certain situations. The performance and power impact of these new cell structures hasn’t been characterized, and the switching time hasn’t been revealed.


IBM has also released a video explaining how PCM memory works, along with some general information on this latest breakthrough. Note that PCM, like NAND flash, takes a performance hit when it shifts to a multi-bit architecture. While single-bit PCM is nearly as fast as DRAM (according to IBM), multi-bit PCM is significantly slower. Data retention (how long data remains in the cell) was also worse than that of NAND flash, which has lower endurance (how many read/write cycles the cells can withstand) but higher data retention.

Phase-change memory is theoretically capable of replacing DRAM in at least some instances, but if these density gains come at the cost of programming speed, the net gain may be minimal. Phase-change memory also requires large amounts of power to program and generates a great deal of heat as a result.

The video walks through the history of phase-change memory, explains the basics of its function, and covers the most recent breakthrough. We think IBM’s discovery here could help pave the way for a long-term replacement for NAND flash, but we’re still years away from that. Intel’s Optane 3D XPoint memory may make its own play for the server and data center space, and Micron, which used to manufacture PCM for the mass market, doesn’t build it anymore.

Microsoft’s Windows 10 Anniversary Update doubles up on Start Menu advertising

One of the changes Microsoft introduced when it launched Windows 10 was the ability to show suggested applications, aka advertisements, within the Start Menu and on the lock screen. The “suggested application” function can be disabled relatively easily, but Microsoft is making changes in Windows 10 to increase application visibility and hopefully entice more users to head for the Windows Store.

Once the Anniversary Update drops, the number of promoted apps in the Start Menu will double, from five to 10. To accommodate this change, the number of static Microsoft applications will decrease, from 17 to 12.


Many of these promoted applications (aka Programmable Tiles) aren’t actually installed on the system by default. Instead, they take the user to the Windows Store where the app can be installed.


Neowin isn’t sure if this will apply to existing Windows 10 PCs, or if this change will only go live on new installations. Either way, it’s a smart move for Microsoft.

Shifting paradigms

One of the most significant barriers to Windows Store adoption is the entrenched behavior of Windows’ users. For decades, Windows users have been used to downloading software from various sites on the Internet. If you need a media player, you use VLC or MPC-HC. If you need messaging software, you can download various apps from individual vendors or grab an all-in-one product like Trillian or Pidgin. Your first browser might come from Microsoft, but if you want something else you’ll head for Firefox or Google Chrome.

Microsoft wants users to see the Windows Store as a one-stop shop for applications, but it’s difficult to shift how people use a system they’ve spent decades with. We don’t blame the company for using promoted apps to show users what the Windows Store can offer. The problem is that the majority of the programs we’ve seen on the Windows Store don’t compare well against the applications you can download from the Internet. We’ve chronicled the problems with various UWP games already, but applications you download from the Windows Store are often tablet-centric and explicitly designed around certain limitations Microsoft enforces.

The real problem for the Windows Store isn’t getting people to look at it — it’s building up a library of software people actually want to use. This has been a problem for Microsoft since it launched Windows 8, and while the store’s layout and UI have improved significantly, breakout application successes are few and far between. The app model simply hasn’t caught on for desktop software, possibly because most people expect PC software to be more complicated and offer a greater range of capability than its app equivalent. On a smartphone or tablet, apps can be good stand-ins for browsing or using websites. On desktops, the existing paradigm is different. Unless Microsoft can offer users some stellar software, it may not see the uptake it’s looking for, no matter how many PC users upgrade to Windows 10.