Fallout 4’s free high-resolution texture pack will make even the brawniest gaming PCs sweat

Fallout 4 might look a smidge better (and more colorful!) than its last-gen predecessors, but it could always look better, right? As “a love letter to our amazing PC fans,” Bethesda announced on Monday that a high-res texture pack is coming to The Commonwealth sometime next week, for free.

The catch? You’re going to need one hell of a PC to run it. The official recommendations from Bethesda include a six-core i7-5820K and 8GB of dedicated VRAM on your graphics card (a GeForce GTX 1080 is specifically listed). Oh, and on top of that you’ll need a whopping 58GB of storage space for all these hot textures. That’s in addition to 30+ GB for the game itself.

There’s only a single picture so far, embedded below. Click on it for the full effect:

Fallout 4 - Texture Pack

Now sure, you could possibly find better textures through extensive piecemeal modding. But if you’re still playing Fallout 4 or haven’t started yet, then this is a presumably stable first-party solution à la Skyrim HD. Pretty cool.

The irony, though, is that many of the popular texture mods on NexusMods take things in the other direction: They further compress Fallout 4’s textures, many of which are already huge for seemingly no reason.

For instance, the most downloaded texture mod on the site is the “Fallout 4 – Texture Optimization Project,” which aims to reduce stuttering by “replacing the high-resolution textures with properly compressed and resized textures.” Yes, the mod description calls Fallout 4’s existing textures high-resolution, because many of them already are.

Anyway, the free texture pack will be at least a slight boost for the few, the proud, who manage to run it. Not as noticeable as a lighting overhaul perhaps, but hey, never before have trash and rusty cars been so crisp and lovingly rendered.

Whether it’s worth playing Fallout 4 at all? Well, that’s a different question.

Hands-on: HP’s Lap Dock helps your Windows Phone feel more like a real PC

HP’s Lap Dock represents a future where a smartphone is powerful enough to replace your computer. Arguably, the future may already be here: HP’s Elite x3 stands as the best Windows phone on the market, integrating a Continuum dock with cloud-based legacy Win32 apps for a PC-like work environment.

The Lap Dock is the other piece of the productivity puzzle, a “dumb” laptop powered by the phone. Many users already plug in a second monitor to their notebook and expand their virtual desktop view across both screens. HP’s Lap Dock operates under the same principle, but this time, the Elite x3 Windows phone is the computer—the Lap Dock lacks its own CPU.

If this sounds familiar, it’s because a startup, Nexdock, launched an Indiegogo campaign in March to build another one of these “dumb” laptops. While the Nexdock itself fell short in several ways, the $199 price point was spot-on, and the concept was definitely intriguing. (The company says it has moved on to designing docks for Intel’s Compute Card, too.)

At $500, HP’s Lap Dock is far more expensive. But the care with which HP engineered it certainly justifies a second look, especially for executives whose IT department is footing the bill. In addition to using it for ordinary UWP applications, you can tap into HP’s nifty Win32-in-the-cloud environment, Workspace, if your company supports it. (See our Elite x3 review for more.)

HP’s Lap Dock carries itself with the understated manner of the elite.

It’s a credit to ultrabook designers that, when closed, you’d be hard-pressed to tell that the $500 Lap Dock is not a thin-and-light laptop. It measures 11.37 x 7.91 x 0.54 inches, has a 12.5-inch, 1920×1080 LED-lit (non-touch) display, and weighs 2.3 pounds—you can thank the integrated 46.5Whr battery for that heft.

Clad in reinforced black polycarbonate with a shiny aluminum hinge, the Lap Dock’s exterior conveys the sophistication of a premium, executive-class device. A mini-HDMI connector can link to an external display, and there’s even an LED battery gauge, which can visually convey how much juice is left in the tank.

If you open the Lap Dock without connecting a phone, there’s a small boot sequence of a few seconds where the Dock shows you how to connect either wired or wirelessly. That screen disappears when you connect a phone.

HP Elite x3 Lap Dock

HP’s build quality generally carries over into using the Lap Dock as, well, a laptop, though some usability issues may raise your eyebrows. The backlit keyboard feels solid, though the keys could be a bit stiffer for my taste. But there are slight annoyances, such as the lack of a function-key lock, and the omission of on-screen brightness controls, that remind you that the phone’s in control.

The only indicator of screen brightness is…the screen’s brightness, which scales up to a decent 268 nits by my measurement. You can control the volume either with the phone or the Lap Dock’s controls, and an on-screen slider visually indicates the volume level.

The Lap Dock’s Bang & Olufsen speakers are sufficiently loud. A first review unit I received had flaky audio playback, but a second Lap Dock with updated firmware had no such problems.

HP Device Hub

Like HP’s other apps, the Device Hub is a nice central point for managing your devices. Tapping the Lap Dock card checks to see if the Dock’s firmware is up to date.

HP has bought into the modern USB-C connector wholesale, with one charging port, one input port (for the phone) and a third I/O port, all using USB-C. If you have an older USB-A peripheral, you’ll have to track down an adapter dongle.

The Lap Dock’s charging behavior is a bit odd, though. If you tap the “battery” button on the right edge of the Dock, the four-LED battery indicator lights to visually show you how much charge is left, in 25-percent increments. A small, almost indiscernible LED next to the Enter key also flashes red to indicate that the Dock is totally out of power. That makes sense.

I now expect my devices to visually indicate that they’re fully charged, though, and HP’s Lap Dock doesn’t do that—not really, anyway. While tapping the battery button will always light the correct battery gauge indicators, the red keyboard LED briefly flips to green when the charge climbs over ten percent—not when it’s fully charged. Then it shuts off, unless the Lap Dock is in use. (If it’s less than ten percent, the light is amber, and continually lit.)

Maybe this is an example of HP’s over-engineering, but I found the whole thing unintuitive enough that I had to consult the manual to find out what was going on. Why not simply light one of the battery gauge’s LEDs green when the device is fully charged, and red when it’s empty?

HP Elite x3 Lap Dock

This is a nice touch: the four LED lights (two of which are lit, here) visually indicate how much charge remains, in 25-percent increments.

A slightly more serious issue is simply what to do with the phone. You can connect a Windows phone to the Lap Dock either via the USB-C cord or wirelessly, though a wired connection is a far superior experience. But what do you do with the phone when the Lap Dock is in your lap? You’d best hope that there’s a flat surface nearby, or that you can slip the phone in your pocket—and that you don’t trigger something accidentally. The Lap Dock also lacks a camera, so you’ll need to awkwardly prop up or simply hold the phone for Skype calls.

HP Elite x3 Lap Dock

HP’s own wrappers advise you that your trackpad touch targets will be small. I’m not sure that they’re even that large.

For me, however, the worst experience I had with the Lap Dock was using the trackpad—so, every few seconds, basically. I noticed a bit of lag when swiping right from the home screen, for example, to access the apps menu. The trackpad’s buttons were also a problem. Integrated into the bottom of the trackpad, they registered only when I clicked the very lower edges—on both of the Lap Docks I was sent for review. Note that I said “registered”: I could click midway down, but the only time the Lap Dock would actually process a click was at or very near the bottom of the trackpad. Talk about an exercise in frustration.

The ability to connect the Lap Dock to the phone either wired or wirelessly also affects the battery life. HP’s battery test is similar to ours: looping a 4K video until the battery runs down. HP rates the Lap Dock’s battery life at 7 hours, 10 minutes while connected via the cable, and about 6 hours when video is streamed wirelessly. Our measured battery life was somewhat less—6 hours exactly—primarily because HP tested at 150 nits of screen brightness, and we standardize our testing at what we consider to be optimal brightness—between 250 and 260 nits.

Naturally, the Lap Dock will charge a connected phone via the USB-C cable—but the phone will also charge while the Lap Dock itself is running off its internal battery. Incidentally, when the Lap Dock’s battery expires, you’ll probably be left with a healthy charge on the phone.

There’s quite a bit more that’s right about the Lap Dock than wrong. Aside from the truly annoying trackpad, most of my criticisms are merely nitpicks.

A bigger issue, of course, is the viability of the Windows Mobile platform that the Lap Dock is predicated upon. Microsoft continues to support it, though the responsibility for driving it forward and developing hardware for it has fallen once again to the hardware makers. The message I take away from all of this, however, is twofold: One, HP believes some mobile platform will eventually offer the power and capabilities to drive a “desktop” environment, even if it’s not Microsoft; and two, users have become comfortable with the laptop form factor. HP’s done its homework, and if “dumb” laptops take off, HP will be ready to ride the wave.

LG’s 5K monitor doesn’t work near Wi-Fi routers

If you’re eyeing LG’s 27-inch UltraFine 5K display for Mac—the monitor that Apple recommends for Mac users and sells in its online store—you’ll want to be aware of an irritating flaw in its design. The Thunderbolt 3-capable display apparently suffers from signal interference, rendering the monitor unusable when it’s within 6.5 feet or so of a wireless router.

You can find several reviews on Apple’s site complaining about this problem, but 9to5Mac’s Zac Hall recently gave us a front-row seat to the issue. Hall brought the 5K display home for his personal setup and immediately discovered the problem.

This appears to be a known issue, according to Hall, with LG support recommending that the monitor be kept away from wireless access points. The UltraFine monitor’s manual also warns users, in its warnings section, to install the monitor where “no electromagnetic interference” occurs; however, the manual doesn’t say what might cause this interference.

Apple began selling LG’s UltraFine 4K and 5K monitors in its online store in December. The 5K monitor is recommended as the next-generation replacement for Apple’s own Thunderbolt Display, which was discontinued last summer.

The monitor features 5120-by-2880 resolution with a P3 wide color gamut, Thunderbolt 3 for input, three USB-C (USB 3.1 gen 1) ports, a built-in camera, and stereo speakers. The monitor is currently available at the special price of $974 through the end of March.

LG’s display is certainly stunning, but it could be problematic for people who cram their computer, printer, and router into one corner of the house. If that’s you, and you do plan on buying LG’s 5K display, then you should probably see if your router can be moved elsewhere in the house. Even if the router issue doesn’t affect you, it’s worth reading through the reviews of the monitor on Apple’s site. At this writing there were only three complaints referring to routers, while others complained of problems with disconnections and crashing on wake.

We’ve reached out to LG to see if the router proximity issue can be fixed with a firmware update, but it sounds like a crucial piece of the monitor isn’t protected well enough against electromagnetic radiation—which wouldn’t be easily fixed by a firmware tweak. Hopefully we’ll hear back soon.

This story, “LG’s 5K monitor doesn’t work near Wi-Fi routers” was originally published by Macworld.

An open source toolbox for pure mathematics

The field of pure mathematics has always depended on computers to make tables, prove theorems and explore new theories. Today, computer aided experiments and the use of databases relying on computer calculations are part of the pure mathematician’s standard toolbox. In fact, these tools have become so important that some areas of mathematics are now completely dependent on them.

More recently, computers have been increasingly used to support collaborative work with the emergence of a wide array of open source tools geared towards supporting research in pure mathematics. These programmes include such computational tools as GAP, PARI/GP, LinBox, MPIR, Sage and Singular, along with online databases like LMFDB – all of which are further enhanced by the Jupyter platform for interactive and exploratory computing within the sciences.

An ecosystem of collaboration

Despite the many benefits of such open source programmes, their development has been restricted due to limited funding and an inability to link individual programmes. That’s why the EU-funded OPENDREAMKIT project is working to support the ecosystem of open-source mathematical software systems. Specifically, the project aims to promote the technological development of open source programmes for use in mathematics by, for example, improving User Interfaces (UI) and lowering the barriers between various research communities. It is also seeking to streamline access, distribution and portability on a wide range of platforms – including high performance computers and cloud services.

The core component of the project is the creation of Virtual Research Environments (VRE), or online services, that enable groups of researchers located anywhere in the world to work collaboratively on a per-project basis. To do this, OPENDREAMKIT is taking popular mathematical software such as MathHub and SageMath and adapting it for use in an interactive, collaborative open source environment. The end result will be a flexible toolkit that enables researchers to set up customisable VREs capable of supporting the entire research life-cycle.

Unifying the building blocks

Over 50 people spread across 15 European sites are busy working on the OPENDREAMKIT toolkit, which will consist of community-developed open software, databases and services. The team started its work by defining an innovative, component-based VRE architecture by adapting existing software, databases and UI components for the mathematics sector. The project also involves the input of leading mathematicians, computational researchers and software developers, thus ensuring it supports actual research needs.

In the end, the toolkit will improve and unify such existing building blocks as LinBox, MPIR, SageMath, GAP, PARI/GP, LMFDB and Singular, and will extend the Jupyter Notebook by giving it a flexible UI. The ultimate goal is to make it as easy as possible for research teams of any size to quickly set up a customised, collaborative VRE tailored to their specific needs, research and workflow. Project organisers are confident that, as a result of the OPENDREAMKIT toolkit, these VREs will play a substantial role in improving the productivity of researchers in pure mathematics and its applications by promoting collaboration on data, knowledge and software.

Collaborating to create a comprehensive maths atlas

In addition to the core objective of building an open source toolkit, the OPENDREAMKIT project is also collaborating with other similar projects. For example, it recently worked with international mathematicians from MIT and other institutions to create an online resource that provides detailed maps of previously uncharted mathematical terrain. The resulting ‘L-functions and Modular Forms Database’ (LMFDB) is a detailed atlas of mathematical objects that highlights deep relationships and serves as a guide to current research happening in physics, computer science and mathematics. The effort was part of a large collaboration of researchers from around the world.

Using cellphone data to study the spread of cholera

Vibrio cholerae

While cholera has hardly changed over the past centuries, the tools used to study it have not ceased to evolve. Using mobile phone records of 150,000 users, an EPFL-led study has shown to what extent human mobility patterns contributed to the spread of a cholera epidemic in Senegal in 2005. The researchers’ findings, published in the Proceedings of the National Academy of Sciences, highlight the critical role a mass gathering of millions of pilgrims played in the spread of the disease, and how measures to improve sanitation at transmission hotspots could slow the progression of future outbreaks.

“There is a lot of hype around using big data from mobile phones to study epidemiology,” says senior author Enrico Bertuzzo, from the Ecohydrology Laboratory at the Ecole polytechnique fédérale de Lausanne. This is largely due to the fact that mobile phone data can be used to reconstruct, with unprecedented detail, mobility fluxes of an entire population. “But I dare say that this is the first time that such data are exploited to their full potential in an epidemiological model.”

Cholera is an infectious disease that occurs primarily in developing countries with poor sanitation infrastructure. It spreads primarily via water that has been contaminated with the bacterium Vibrio cholerae, present in the feces of infected people. Human mobility and waterways both contribute to spreading the disease among human communities, whereas heavy precipitation events increase the chances that the bacteria will contaminate drinking water sources. Researchers at EPFL have developed a mathematical simulation model that accounts for these factors, which they tested on past outbreaks such as the one in Haiti in 2010.
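The EPFL model is far richer, but its core mechanic—infection spreading locally while human mobility seeds it in distant communities—can be sketched with a toy two-community SIR-style simulation. This is an illustration only; the structure and every parameter value here are invented, not taken from the paper:

```python
# Toy two-community SIR sketch: infection spreads locally, while a small
# mobility coupling lets it jump between communities. Invented parameters.
def simulate(days=100, beta=0.3, gamma=0.1, mobility=0.01):
    # Community 0 starts with a small outbreak; community 1 is disease-free.
    S = [0.99, 1.0]   # susceptible fraction per community
    I = [0.01, 0.0]   # infected fraction
    R = [0.0, 0.0]    # recovered fraction
    for _ in range(days):
        # Exposure mixes local infections with travelers from the other community.
        force = [beta * ((1 - mobility) * I[i] + mobility * I[1 - i]) for i in (0, 1)]
        new_inf = [force[i] * S[i] for i in (0, 1)]
        new_rec = [gamma * I[i] for i in (0, 1)]
        for i in (0, 1):
            S[i] -= new_inf[i]
            I[i] += new_inf[i] - new_rec[i]
            R[i] += new_rec[i]
    return S, I, R

# With no mobility the outbreak never leaves community 0;
# any nonzero coupling eventually seeds community 1.
_, _, R_coupled = simulate(mobility=0.01)
_, _, R_isolated = simulate(mobility=0.0)
assert R_coupled[1] > 0.0
assert R_isolated[1] == 0.0
```

Even in this crude form, the mobility term is what lets an outbreak appear in a community that would otherwise never see a case—which is why reconstructing mobility fluxes accurately matters so much.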

“A lot of local conditions play into whether a minor cholera outbreak will evolve into a major epidemic,” says Flavio Finger, the study’s lead author. “One goal of our research was to develop ways to estimate how the disease spread across populations, both in space and in time,” he says. “Knowing how many cases you are likely to have and where they are likely to be are two important pieces of information that can help dispatch healthcare workers to the right places.”

But until now, human mobility patterns had to be reconstructed from patient case data—a tedious process that, according to Finger, has some major flaws. “This project really began when we were given a chance to work with mobile phone data,” says Finger. The data, provided by Sonatel and the Orange Group, gave the researchers access to the approximate locations of 150,000 customers throughout 2013 as part of their Data for Development Challenge. “Having access to more accurate data on population movement simplified our work and eliminated much of the remaining uncertainty.”

Using the mobile phone data, Finger and his co-authors tested their model by re-running the cholera outbreak that hit Senegal in 2005 on a computer. Its spread had previously been linked to an annual religious pilgrimage to the city of Touba that brings together millions of pilgrims. “Our simulation did a great job at reproducing the peak of reported cases of cholera in the region around Touba, where the epidemic broke out during the pilgrimage. Without the mobile phone data it would have been impossible to capture this phenomenon, which needs a high density of people to be triggered.” The simulation was also spot-on in mapping the spread of the disease across the country as pilgrims traveled home, and it even replicated certain local events, such as a spike in cholera cases in the country’s capital, Dakar, following intense rainfall there.

“We have also used our simulation to test different intervention strategies,” says Finger. “You can use antibiotics or vaccines, or invest in improving sanitary standards. All of these approaches have different impacts and cost different amounts of money and resources. Our simulation gives us a tool to evaluate and compare their efficacy,” he says. They found that improving access to sanitation and providing clean drinking water could have considerably reduced the number of new cases of cholera during the pilgrimage. With fewer pilgrims disseminating the disease in the country, this would have led to a lower number of new cases later on.

LTU computer scientist to present groundbreaking research

Dr. Ben Choi, associate professor of computer science at Louisiana Tech University, will present his research on a groundbreaking new technology that has the potential to revolutionize the computing industry during a keynote speech next month at the International Conference on Measurement Instrumentation and Electronics.

Choi will present a foundational architecture for designing and building computers that use multiple logic values rather than the binary values used by current computers. Many-valued logic computers should provide faster computation by increasing the processing speed of microprocessors and the speed of data transfer between processors and memory, as well as increasing memory capacity.

This technology has the potential to redefine the computing industry, which is constantly trying to increase the speed of computation and, in recent years, has run short of options.

By providing a new hardware approach, the technology will push the speed limit of computing using a progressive approach which will move from two values to four values, then to eight values, then to 16 values, and so on. Future computers could be built using this many-valued approach.
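The density argument behind that progression is straightforward information theory (generic math, not anything from Choi's patent): a word of n digits, each of which can take k distinct values, can encode n·log2(k) bits, so each doubling of values per digit adds a full bit of capacity per digit.

```python
import math

# Generic capacity calculation: n digits with k possible values each
# can encode n * log2(k) bits of information.
def capacity_bits(n_digits, k_values):
    return n_digits * math.log2(k_values)

assert capacity_bits(16, 2) == 16.0    # binary: 16 digits -> 16 bits
assert capacity_bits(16, 4) == 32.0    # four-valued: same digits, twice the bits
assert capacity_bits(16, 16) == 64.0   # sixteen-valued: four times the bits
```

The engineering challenge, of course, is building circuits that can reliably distinguish four, eight, or sixteen voltage levels rather than two—which is presumably what the patented architecture addresses.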

“Advances in the foundational design of the computer are needed in business and research applications as well as at the foundation of cyber security efforts across the nation,” said Dr. Galen Turner, director of computer science, cyber engineering, electrical engineering and electrical engineering technology at Louisiana Tech. “Dr. Choi’s invitation to present at the upcoming conference has increased interest in this foundational architecture.”

Louisiana Tech and Choi have filed a U.S. patent application for this groundbreaking technology titled “Method and Apparatus for Designing Many-Valued Logic Computer.”

“If this is successful, computers in the future will be based on our technology,” said Choi. In addition to the keynote speech, Choi’s research will be released in a publication in the related journal.

Choi earned his Ph.D., M.S., and B.S. degrees from The Ohio State University, specializing in computer science, computer engineering and electrical engineering. His research focus areas include Humanoid Robots, Artificial Intelligence, Machine Learning, Intelligent Agents, Semantic Web, Data Mining, Fuzzy Systems, and Parallel Computing.

Prior to coming to Louisiana Tech, Choi served as a visiting research scholar at DePaul University, University of Western Australia and Hong Kong University of Science and Technology. He has also worked in the computer industry as a System Performance Engineer at Lucent Technologies (Bell Labs) and as a private computer consultant.

Nvidia’s Ansel, VR Funhouse apps will enhance screenshots, showcase company’s VR technology

Friday night’s big GTX 1080 unveil was the talk of the tech community, but it’s not the only project Nvidia revealed this past weekend. The company also showed off a pair of software projects in the works, one highlighting its efforts in VR and the other its ability to beautify game screenshots.

Nvidia’s Ansel (named after Ansel Adams, the famous American environmentalist and photographer) is a new tool designed to allow users to create screenshots and even 360-degree “bubble” images. The ability to take screenshots in games is nothing new, of course, but Ansel allows you to step “outside” your character and manipulate the camera position before settling on a shot.


One of the frustrating things about trying to create “perfect” screenshots in gaming is that how easy it is to do so largely depends on whether the camera is a flexible, powerful, and intuitive tool or something kludged together by three chimpanzees and a rat after six years of perpetual crunch time. Ansel aims to reduce this type of problem by giving gamers powerful tools to pose and create screenshots — provided that developers support the feature, at least.

Ansel allows you to freeze time inside a game and adjust the camera position to anything you like — even in games that don’t allow a completely free camera already. It then scales up the resolution of the final screenshot to as high as 32x native resolution (4.5 gigapixels). These truly enormous image files — because seriously, that’s going to be one hell of a file size — can then be downsampled for an incredibly high-resolution focus on one specific area.
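To get a feel for the numbers, here's a quick back-of-envelope calculation. It assumes "32x" refers to each linear dimension (an assumption on our part; Nvidia's 4.5-gigapixel ceiling presumably starts from a base resolution above 1080p):

```python
# Pixel count when a screenshot is captured at a multiple of native
# resolution, assuming the multiplier applies to each linear dimension.
def scaled_pixels(width, height, linear_scale):
    return (width * linear_scale) * (height * linear_scale)

# A 1920x1080 frame scaled 32x per axis is 61440x34560 -- about 2.1 gigapixels.
assert scaled_pixels(1920, 1080, 32) == 2_123_366_400
```

Either way, that's thousands of times more pixels than the monitor displaying the shot, which is exactly why downsampling to a crop is the intended workflow.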


Other features include the ability to apply specific filters (Instagram for games, we suppose), capture and export in OpenEXR, and the option to capture 360-degree “bubbles” for viewing in VR. Nvidia announced at the same event that it has released an Nvidia VR Viewer for the Google Cardboard app (sadly only Android is supported as of this writing). You’ll be able to adjust the yaw, pitch, and roll of the camera, change the brightness or color, and create 360-degree shots (a gallery of these is available on Nvidia’s website). It’ll be supported on all Nvidia GPUs from the 600 family forwards, which means Kepler and Maxwell users will still have access to this tool.


The only downside is that support will be baked in on a game-by-game level, not implemented across the board at this point. Whether Nvidia will be able to convince game devs to standardize on a set of capabilities that enable Ansel in the future or not is unclear. But since support will ship in some games that have already been out for quite some time, it’s clearly something that can be patched in rather than required from Day 1.

The other major Nvidia announcement on the software front was its new VR Funhouse. This is a clever way for Nvidia to highlight the advances of both its VRWorks SDK and its overall technology — the various mini-games in VR Funhouse showcase technologies like Nvidia Hairworks, particle effects, Nvidia Flow (used for simulating fire and water) and PhysX.

Nvidia Flex (particle-based physics simulation) and the company’s physically simulated audio engine (Nvidia VRWorks Audio) are also used in Funhouse, which is best understood as a tech demo to showcase cutting-edge capabilities in a series of mini-games. It should also serve as a fun introduction to VR technology for early adopters and users who want to show visitors an easy, simple series of mini-games with low stakes and friendly controls.

We didn’t have the opportunity to demo much of Nvidia’s VR work this weekend, but the Nvidia audio demo we attended was quite good — the ability to simulate position based on where we were in the virtual space was impressive. Whether or not this capability will find much uptake in the real world, however, is less clear — multiple companies have tried over the years to convince game devs to implement impressive audio capabilities (most recently AMD, with its TrueAudio DSP), and the vast majority of developers simply can’t be bothered.

Nvidia will also use VR Funhouse to support its VRWorks SLI capabilities. While most VR games and apps to date are single-GPU affairs, both AMD and Nvidia are working hard to change that. Nvidia will support VR SLI with VR Funhouse, dedicating one GPU to rendering each eye. Unlike Nvidia Ansel, VR Funhouse appears to be a Pascal-only title.

Apple denies plan to kill music downloads, as evidence mounts Apple Music can delete existing libraries without permission

Yesterday, reports claimed that Apple was drawing up plans to leave digital music sales. The company was said to be in the “when, not if” phase, and was debating leaving the market on either a two-year or a three-to-four-year timeline. Today, the Cupertino company has stated that these reports are “not true,” and that it has no imminent plans to leave downloadable music.

The reason the rumor that Apple might leave downloads likely spread as far as it did is simple: iTunes sales have been falling for years, and they aren’t likely to recover. Streaming services have been siphoning revenue from downloads as consumers move away from iTunes and towards Spotify, Pandora, and of course, Apple Music.


This slide shows how revenue splits have shifted over time, with revenue earned by streaming services up $957 million while digital downloads have declined by $452 million. The gap between the two figures implies either that a significant number of customers who didn’t previously purchase music have signed up for streaming services, or that the average revenue per streaming customer is significantly higher than what people tended to spend buying music.
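The arithmetic behind that inference is simple enough to spell out (figures in millions, taken from the slide):

```python
# Streaming gained more revenue than downloads lost, so recorded digital
# revenue grew overall -- implying new spending, not pure substitution.
streaming_gain = 957
download_decline = 452
net_change = streaming_gain - download_decline
assert net_change == 505  # net industry gain, in millions of dollars
```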

When the iTunes Store first launched, many analysts and pundits fretted about the loss of control inherent to digital media as opposed to physical CDs. Streaming music degrades that ownership further — if you download music in a file format like MP3, you can typically do what you please with that file thereafter. Services like Apple Music are meant to blur the line between downloadable songs and streaming, and Apple touts the ability to match songs you have in your library to music on its own streaming service. In theory, this uploads copies of your songs into a personal cloud that Apple can then stream to you on demand, without changing anything about your hard drive or the data stored therein.

Unfortunately, Apple Music doesn’t always get its facts straight. Roughly a week ago, music enthusiast and Apple Music subscriber James Pinkstone published a blog post detailing how roughly 60GB of music had been wiped off his hard drive by Apple Music. An Apple customer service representative told him this was working as intended, while others claimed that this couldn’t have happened — Pinkstone must have made a mistake that wiped his system.

Now a different person, Robert Etropolsky, has come forward with a similar story — 60GB of music wiped off his system (Pinkstone lost 122GB). The YouTube video above addresses claims that Etropolsky or Pinkstone somehow did something wrong to delete their own massive music collections; Etropolsky shows that his music was previously stored in a Time Machine backup, reiterates that the music files that were once on his system have vanished since he subscribed to Apple Music, and notes that there’s no way to replace them. When he downloads files from Apple Music, they’re downloaded in an encrypted Apple format and will be deleted if he ever stops being an Apple Music subscriber.


In his particular case, the problem is exacerbated because much of his collection was based on rare recordings, demo tapes, and other rare versions of songs that were “matched,” uploaded, and then deleted from his hard drive. He also demonstrates that Apple’s music matching service isn’t as foolproof as the company thinks it is — in one case it deleted a song off his hard drive while offering a completely different piece of music as an uploaded alternative.

It’s still not clear how this happened or what’s responsible for the issue, but problems like this aren’t just going to go away. Streaming services can be enormously convenient, but high-profile stories like this are one reason to keep your digital music collection far away from a service like Apple Music. For some, the convenience simply isn’t worth the risk.

New Windows 10 build kills controversial password-sharing Wi-Fi Sense

When Microsoft announced Windows 10, it added a feature called Wi-Fi Sense that had previously debuted on the Windows Phone operating system. Wi-Fi Sense was a password-sharing option that allowed you to share Wi-Fi passwords with your friends and contacts in Skype, Outlook, and Facebook. Here’s how Microsoft described the feature last year:

“When you share Wi-Fi network access with Facebook friends, Outlook.com contacts, or Skype contacts, they’ll be connected to the password-protected Wi-Fi networks that you choose to share and get Internet access when they’re in range of the networks (if they use Wi-Fi Sense). Likewise, you’ll be connected to Wi-Fi networks that they share for Internet access too. Remember, you don’t get to see Wi-Fi network passwords, and you both get Internet access only. They won’t have access to other computers, devices, or files stored on your home network, and you won’t have access to these things on their network.”


There were security concerns about how Windows 10 managed these passwords and whether they could be intercepted in transit, though to our knowledge no breaches or problems were ever associated with Wi-Fi Sense. According to Microsoft, few people actually used the feature and some were actively turning it off. “The cost of updating the code to keep this feature working combined with low usage and low demand made this not worth further investment,” said Gabe Aul, Microsoft’s Windows Insider czar.

The Wi-Fi Sense removal is incorporated into the latest build of Windows, Windows 10 Insider Preview 14342. Other changes in this build include:

  • Microsoft Edge extensions are now downloaded from the Windows Store (Adblock and Adblock Plus are now available for download);
  • Swipe gestures are now supported in Microsoft Edge;
  • Bash on Ubuntu on Windows now supports symlinks (symbolic links);
  • Certain websites can now be directed to open in apps instead, ensuring that one of the mobile Internet’s worst features will be available in Windows 10.
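The symlink item is worth a quick illustration: symbolic links are a basic Unix primitive that Bash on Ubuntu on Windows previously couldn't create. A minimal example (file names invented for the demo) of what now works:

```shell
# Create a file, point a symbolic link at it, then resolve the link's
# target — the operation newly supported in Bash on Ubuntu on Windows.
mkdir -p /tmp/symlink-demo && cd /tmp/symlink-demo
touch original.txt
ln -sf original.txt link.txt  # -s makes a symlink; -f replaces any old one
readlink link.txt             # prints: original.txt
```

Many Linux build tools and package managers assume symlinks work, so this one line item removes a whole class of failures in the Bash environment.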

Microsoft has also fixed playback errors with DRM-protected content from Groove Music, Microsoft Movies & TV, Netflix, Amazon Instant Video, and Hulu. The company fixed audio crashes for users who play audio to a receiver using S/PDIF or HDMI while using Dolby Digital Live or DTS Connect, and fixed some bugs that prevented common keyboard commands like Ctrl-C, Ctrl-V, or Alt-Space from working in Windows 10 apps. Full details on the changes and improvements to the build can be found here.

One final note: Earlier this year, we theorized that Microsoft might extend the free upgrade period beyond the July 29 cutoff, especially if it was serious about hitting its 1 billion user target. The company has since indicated that it has no plans to continue offering Windows 10 for free after July 29. If you want to upgrade to Windows 10, or are still on the fence about Microsoft’s offer, you have a little over two months to decide.

Nvidia’s excellent first quarter buoyed by gaming, automotive wins, and data centers

Nvidia announced first quarter results for its fiscal year 2017 yesterday, and the firm’s results were excellent — particularly in a market where companies like AMD and Intel have been taking a hammering. First-quarter revenue was up 13% to $1.3 billion, with strong gains in gaming, data centers, and the automotive market.

The slide below breaks down Nvidia’s revenue in two different ways. Reportable segment revenue reflects Nvidia’s chosen method of grouping its businesses (Tegra, GPU, Other). Revenue by market platform provides additional color into how each individual area of the company is performing. One does not map cleanly to the other, but it’s worth considering both data sets.


These two charts suggest that the bulk of Nvidia’s growth is linked to its strong performance in gaming, data centers, and automotive sales. The drop-off in the OEM and IP segment was most likely caused by declines in Nvidia’s original Tegra mobile business, partially offset by a significant uptick in demand for Nvidia’s automotive designs. Nvidia logged a 63% increase in data center revenue, driven by its efforts to position itself at the center of both the driverless car initiative and deep learning networks. Both of these efforts have been front and center in a number of recent company demos and presentations.

Gaming also saw strong gains year-on-year, and Nvidia implied this was due to increased sales volume across the board rather than higher ASPs. The 8% sequential decline is in line with seasonal expectations, and given the year-on-year growth, Nvidia has probably taken market share from AMD over the past 12 months. The company’s recent GTX 1080 and 1070 announcements have set the stage for an aggressive move to take over the high end of the market. AMD countered the GTX 980 Ti with the Fury family in 2015, but Polaris isn’t a high-end uber-GPU, and Nvidia has clearly planned to sweep both the high-end market in general and the VR space in particular.

AMD hasn’t formally announced Polaris positioning or performance yet, but the rumor mill suggests it’s an extremely potent competitor in much less expensive markets that constitute the actual bulk of the GPU space. For all the ink lavished on high-end cards, very few people actually buy a $600 GPU. Most of the market is in the $150-$250 space, and if AMD launches a strong midrange part, it could seize leadership in that area. We don’t know yet how all these variables will play out.

Nvidia’s long-term success

It’s interesting to look at where Nvidia is now as opposed to what conventional wisdom predicted roughly eight years ago. Back then, AMD and Intel both had plans to combine GPUs with CPUs to create products that were expected to kill the low-end GPU market. By and large, this happened, which is why both AMD and Nvidia focus on the $100-and-up space these days. The cards sold below that price point tend to be older hardware from previous low-end generations.

Nvidia poured enormous resources into Tegra to win early share in mobile (Tegra 2 was one of the most popular smartphone and tablet processors in the early dual-core days) before pivoting the entire segment towards automotive designs. Using GeForce hardware for deep learning and HPC work opened another market Nvidia has largely dominated. Until quite recently, AMD didn’t seriously compete in these spaces, and the company has a long way to go to ramp up its resources to match Team Green.

The flip side is that Nvidia’s own Project Denver CPU core hasn’t amounted to much in the market to date, and Nvidia’s effort to build a comprehensive SoC platform around Icera’s modem technology also failed. Like Microsoft and Intel, Nvidia has had difficulty breaking out of its core GPU market, but one could argue that it has also spent less money chasing alternatives that didn’t pan out. Microsoft and Intel have both pivoted their business strategies and created new products, but both also threw huge amounts of money at mobile for a number of years.

Overall, the company is well positioned for FY 2017 (calendar 2016). We’ll see if and how that changes when Polaris launches this summer. And just to be clear: Knowledgeable sources ET has spoken to have confirmed that Polaris is on track for a mid-year launch. Rumors that AMD has pulled Vega in for an October launch are just that: rumors.