LTU computer scientist to present groundbreaking research

Dr. Ben Choi, associate professor of computer science at Louisiana Tech University, will present his research on a new technology with the potential to revolutionize the computing industry in a keynote speech next month at the International Conference on Measurement Instrumentation and Electronics.

Choi will present a foundational architecture for designing and building computers that use multiple values rather than the binary values used by current computers. Many-valued logic computers should compute faster by increasing the processing speed of microprocessors, speeding data transfer between the processors and the memory, and increasing the capacity of the memory.

This technology has the potential to redefine the computing industry, which is constantly trying to increase the speed of computation and, in recent years, has run short of options.

By providing a new hardware approach, the technology will push the speed limit of computing through a progressive path that moves from two values to four values, then to eight values, then to 16 values, and so on. Future computers could be built using this many-valued approach.
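
The density argument behind that progression can be shown with a short, purely illustrative sketch (this code reflects only the arithmetic of many-valued digits, not Choi's actual architecture): each many-valued digit carries more information than a binary bit, so the same number of signal lines or memory cells can represent larger values.

```python
def digits_needed(value, base):
    """How many base-`base` digits it takes to represent `value`."""
    digits = 0
    while value:
        value //= base
        digits += 1
    return max(digits, 1)

# The same quantity needs fewer digits as the logic moves from
# two values to four, eight, and sixteen, the progression the
# article describes.
for base in (2, 4, 8, 16):
    print(f"base {base:2d}: {digits_needed(1_000_000, base)} digits")
```

A four-valued digit carries exactly two bits and an eight-valued digit three, so each doubling of the values per digit adds one bit of capacity to every cell or wire.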

“Advances in the foundational design of the computer are needed in business and research applications as well as at the foundation of cyber security efforts across the nation,” said Dr. Galen Turner, director of computer science, cyber engineering, electrical engineering and electrical engineering technology at Louisiana Tech. “Dr. Choi’s invitation to present at the upcoming conference has increased interest in this foundational architecture.”

Louisiana Tech and Choi have filed a U.S. patent application for this groundbreaking technology titled “Method and Apparatus for Designing Many-Valued Logic Computer.”

“If this is successful, computers in the future will be based on our technology,” said Choi. In addition to the keynote speech, Choi’s research will be published in a related journal.

Choi earned his Ph.D., M.S., and B.S. degrees from The Ohio State University, specializing in computer science, computer engineering, and electrical engineering. His research focus areas include humanoid robots, artificial intelligence, machine learning, intelligent agents, the Semantic Web, data mining, fuzzy systems, and parallel computing.

Prior to coming to Louisiana Tech, Choi served as a visiting research scholar at DePaul University, University of Western Australia and Hong Kong University of Science and Technology. He has also worked in the computer industry as a System Performance Engineer at Lucent Technologies (Bell Labs) and as a private computer consultant.

New medical tech coming to the rescue for the vision-impaired

Ever since the invention of the magnifying glass nearly 25 centuries ago, we’ve been using technology to help us see better. For most of us, the fix is fairly simple, such as a pair of glasses or contact lenses. But for many with more seriously impaired vision — estimated at around 285 million people worldwide — technology has been short on answers until fairly recently. Doctors and scientists are making up for lost time, though, with a slew of emerging technologies to help everyone from the mildly colorblind to the completely unsighted. They’re also part of a wide swath of new medical advances we’ll be covering all this week in our new Medical Tech series.

Superman glasses for the vision-impaired

These space-age-style smart glasses from Vast can help the vision-impaired see what is around them.

We’re all familiar with the accessibility options available on our computers, including larger cursors, high-contrast fonts, and magnified screens. But those do nothing to help the vision-impaired navigate the rest of their day. Instead, a number of different “smart glasses” have been invented that help make the rest of the world more accessible.

These glasses work by taking the image from one or more cameras — often including a depth sensor — and processing it to present an enhanced version of the scene on a pair of displays in front of the eyes. Deciding on the best way to enhance the image — autofocus, zoom, object outlining, etc. — is an active area of research, as is the best way for the wearer to control the enhancements. Right now these devices tend to require an external box that does the image processing and has knobs for controlling settings. Emerging technologies such as eye tracking will provide better ways to control them, and improved object recognition algorithms will add to their utility. One day these glasses may know enough to highlight house keys, a wallet, or other commonly needed but sometimes hard-to-locate possessions.
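
As a rough illustration of the object-outlining idea (a toy sketch, not any vendor's actual algorithm), an enhancement pass can mark pixels where brightness changes sharply between neighbors:

```python
def highlight_edges(image, threshold=50):
    """Return a same-sized grid with 255 wherever brightness jumps
    sharply to the right or below: a crude edge outline of the kind
    smart glasses can overlay on the scene."""
    rows, cols = len(image), len(image[0])
    edges = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            right = abs(image[r][c] - image[r][c + 1]) if c + 1 < cols else 0
            below = abs(image[r][c] - image[r + 1][c]) if r + 1 < rows else 0
            if max(right, below) > threshold:
                edges[r][c] = 255
    return edges

# A dark region next to a bright one produces an outline along the boundary.
frame = [
    [10, 10, 200, 200],
    [10, 10, 200, 200],
    [10, 10, 200, 200],
]
outline = highlight_edges(frame)
```

Real devices do this at video rates with far more robust filtering, but the principle of boosting high-contrast boundaries so a low-vision wearer can pick out objects is the same.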

One of the more clever solutions comes out of Oxford, via Google Global Impact Challenge winner VA-ST. I had a chance to try out VA-ST’s prototype Smart Specs last year, and can see how they could be very helpful for those who otherwise can’t make out the details of a scene. It’s hard, though, to get a real feel for their effectiveness unless you actually suffer from a particular vision impairment. Some work is being done to simulate these conditions so that people with normal vision can evaluate solutions, but until then, willing participants with uncommon vision disorders remain a scarce resource for scientists attempting to run trials of their devices.

The VA-ST system highlights edges and objects to make important parts of a scene more visible.

Most solutions available today suffer not only from technical issues, such as how they are controlled, but also from the fact that they cut off eye contact and are socially awkward — which has hampered their adoption. Less-obtrusive devices using wave guides, like the ones developed by Israeli startup Lumus, will be needed to overcome this issue. Startup GiveVision is already demoing a version of its vision-assisting wearable that uses Lumus wave guides to make it more effective and less obtrusive. Similar advanced augmented reality display technology is also being used in Microsoft’s HoloLens and Magic Leap’s much-rumored device. While it is mostly mainstream AR devices like those that are driving the technology to market, there is no doubt the medical device sector will be quick to take advantage of it.

Other efforts to enhance awareness of the visual world take a different route. EyeMusic renders salient aspects of the scene — such as the distance to the closest object — as audible tones, while the OrCam system recognizes text and reads it aloud to the wearer. These systems have the advantage that they don’t require placing anything over the wearer’s eyes, so they don’t interfere with eye contact.
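
A sonification scheme of this kind can be sketched in a few lines. The mapping below is hypothetical (EyeMusic's actual encoding is considerably more sophisticated), but it shows the core idea of turning a measured distance into a pitch:

```python
def distance_to_pitch(distance_m, near=0.5, far=5.0,
                      high_hz=880.0, low_hz=220.0):
    """Map object distance to a tone frequency: nearer objects sound
    higher. Distances outside [near, far] are clamped to the range.
    All range and frequency values are illustrative assumptions."""
    d = min(max(distance_m, near), far)
    fraction = (d - near) / (far - near)   # 0.0 at near, 1.0 at far
    return high_hz - fraction * (high_hz - low_hz)
```

Feeding the output to a tone generator gives the wearer a continuous audible cue for the closest obstacle without occupying their eyes.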

Retinal implants provide sight for many of the blind

In many blind people — particularly those suffering from Retinitis Pigmentosa and age-related Macular Degeneration — the retinal receptors may be missing, but the neurons that carry information from them to the brain are intact. In that case, it is sometimes possible to install a sensor — an artificial retina — that relays signals from a camera directly to the vision neurons. Since the pixels on the sensor (electrodes) don’t line up exactly with where the rods and cones would normally be, the restored vision isn’t directly comparable with what is seen with a natural retina, but the brain is able to learn to make sense of the input and partial vision is restored.

Palanker’s team is also looking at using special glasses to provide wireless data to the retinal implant.

Retinal implants have been in use for over a decade, but until recently they provided only a very minimal level of vision — equivalent to about 20/1250 — and needed to be wired to an external camera for input. Now, though, industry leader Retina Implant has introduced a wireless version with 1,500 electrodes on its 3mm-square surface. Amazingly, previously blind patients suffering from Retinitis Pigmentosa have been able to recognize faces and even read the text on signs. Another wireless approach, based on research by Stanford professor Daniel Palanker’s lab, involves projecting the processed camera data from a special pair of glasses into the eye — and onto the retinal implant — as near-IR light. The implant then converts that light into the correct electrical impulses to transmit to the brain’s neurons. The technology is being commercialized by vision tech company Pixium Vision as its PRIMA Bionic Vision Restoration System, and is currently in clinical trials.

Even color-blind people can benefit from clever glasses

While severe vision disorders affect a large number of people, even more suffer from the much more common problem of color blindness. There are many types of color blindness — some caused by missing the correct cones to discriminate one or more of the primary colors. But many who have what is commonly called “red-green colorblindness” simply have cones with sensitivities that are too close together to help distinguish between red and green. Startup Enchroma stumbled across the idea of filtering out some of the overlap, after noticing that surgeons were often taking their OR glasses with them to the beach to use as sunglasses. From there, the company worked to tune the effect to assist with color deficiency — the result being less overall light let through its glasses, but a better ability to discriminate between red and green. If you’re curious whether the company’s glasses can help you, it offers an online test of your vision.
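
The principle can be sketched as a notch filter applied to the light spectrum reaching the eye. The band edges below are illustrative placeholders, not Enchroma's actual filter curve:

```python
def apply_notch(spectrum, notch_low=570, notch_high=600):
    """Attenuate wavelengths (in nm) inside the notch band, roughly
    where red and green cone sensitivities overlap most. The band
    edges here are assumptions for illustration only."""
    return {wavelength: (0.0 if notch_low <= wavelength <= notch_high else power)
            for wavelength, power in spectrum.items()}

# Light in the overlap band is removed; purer greens and reds pass through,
# making the two easier to tell apart at the cost of total brightness.
scene = {530: 1.0, 585: 0.8, 650: 1.0}
filtered = apply_notch(scene)
```

The trade-off the article mentions falls straight out of this model: whatever energy sat inside the notch is lost, so the glasses pass less total light in exchange for better red/green discrimination.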

Accessibility tech breakthroughs for the blind

Startup blitab is adding braille to a standard tablet.

There are plenty of limits on what medical technology can currently accomplish for those who are blind or vision-impaired. Fortunately, accessibility technology has also continued to advance. Most of us are familiar with magnified cursors, zoomed-in text, and speech input and output, but there are other, more sophisticated tools available. There are too many to list here, but for example, startup blitab is creating a tablet for the world’s estimated 150 million braille users that features a tactile braille interface as well as speech input and output. On the lighter side, Pixar is developing an application that will provide a narrative description of the screen while viewers watch.

However good your vision, you’re likely to benefit from medical technology for improving it at some point, since the incidence of vision-related conditions increases dramatically with age. Everyone eventually suffers from at least relatively minor conditions like presbyopia (the eye’s declining ability to shift focus between near and far), and over 25% of those who make it to age 80 suffer from major vision impairment. Even for those of us with only minor vision issues, the advent of smartphone apps that help measure our vision and diagnose possible problems will help lower costs. With the rapid advances in microelectronics, surgical technology, and augmented reality, there are likely to be some amazing treatments for these conditions in the future.

IBM researchers announce major breakthrough in phase change memory

For years, scientists and researchers have looked for the so-called Holy Grail of memory technology — a non-volatile memory standard that’s faster than NAND flash while offering superior longevity, higher densities, and ideally, better power characteristics. One of the more promising technologies that’s been in development is phase-change memory, or PCM. IBM researchers announced a major breakthrough in PCM this week, declaring that they’ve found a way to store up to three bits of data per “cell” of memory. That’s a significant achievement, given that previous work in the field was limited to a single bit of data per memory cell.

Phase change memory exploits the properties of a class of materials known as chalcogenides. Applying heat to the alloy changes it from an amorphous mass into a crystal lattice with significantly different electrical properties.

Scientists have long known that chalcogenide can exist in states between fully amorphous and fully crystalline, but building a device that exploits these in-between states to store more data has proven extremely difficult. While phase-change memory works on very different principles than NAND flash, some of the problems with scaling NAND density are conceptually similar to those faced by PCM. Storing multiple bits of data in NAND flash is difficult because the gap between the voltage levels used to distinguish each value shrinks as more bits are stored. This is also why TLC NAND flash, which stores three bits of data per cell, is slower and less durable than MLC (2-bit) or SLC (single-bit) NAND.
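
The shrinking-gap problem is simple arithmetic. Assuming a fixed sensing window (3 V here, an arbitrary figure chosen only for illustration), each extra bit per cell doubles the number of levels that must fit inside it:

```python
def level_gap(bits_per_cell, voltage_window=3.0):
    """Spacing between adjacent levels when 2**bits_per_cell levels
    must fit evenly inside a fixed voltage window. The window size
    is an illustrative assumption, not a real device parameter."""
    levels = 2 ** bits_per_cell
    return voltage_window / (levels - 1)

for bits in (1, 2, 3):
    print(f"{bits} bit(s)/cell: {2 ** bits} levels, "
          f"gap {level_gap(bits):.2f} V")
```

Going from SLC to TLC cuts the margin between adjacent levels from the full window to a seventh of it, which is why noise, drift, and wear matter so much more as density rises, in PCM just as in NAND.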

IBM researchers have discovered how to store three bits of data per cell in a 64K array, at elevated temperatures and after one million endurance cycles.

“Phase change memory is the first instantiation of a universal memory with properties of both DRAM and flash, thus answering one of the grand challenges of our industry,” said Dr. Haris Pozidis, an author of the paper and the manager of non-volatile memory research at IBM Research – Zurich. “Reaching 3 bits per cell is a significant milestone because at this density the cost of PCM will be significantly less than DRAM and closer to flash.”

Here’s how the PR blast describes the breakthrough:

To achieve multi-bit storage IBM scientists have developed two innovative enabling technologies: a set of drift-immune cell-state metrics and drift-tolerant coding and detection schemes.

More specifically, the new cell-state metrics measure a physical property of the PCM cell that remains stable over time, and are thus insensitive to drift, which affects the stability of the cell’s electrical conductivity with time. To provide additional robustness of the stored data in a cell over ambient temperature fluctuations a novel coding and detection scheme is employed. This scheme adaptively modifies the level thresholds that are used to detect the cell’s stored data so that they follow variations due to temperature change. As a result, the cell state can be read reliably over long time periods after the memory is programmed, thus offering non-volatility.
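
A toy version of such an adaptive scheme (the real IBM detection method is far more involved; every name and number below is invented for illustration) might re-derive its read thresholds from reference cells whose programmed values are known:

```python
def read_cell(measurement, thresholds):
    """Return the stored symbol: the index of the first threshold
    the measurement falls below (thresholds sorted low to high)."""
    for symbol, threshold in enumerate(thresholds):
        if measurement < threshold:
            return symbol
    return len(thresholds)

def adapt_thresholds(thresholds, reference_readings, reference_nominal):
    """Shift all thresholds by the average drift observed on
    reference cells with known contents."""
    n = len(reference_readings)
    drift = sum(r - nom for r, nom in
                zip(reference_readings, reference_nominal)) / n
    return [t + drift for t in thresholds]

base = [1.0, 2.0, 3.0]  # separates four levels (2 bits per cell)
adapted = adapt_thresholds(base, [0.7, 3.7], [0.5, 3.5])

# A level-0 cell whose reading drifted up to 1.1 is misread with the
# stale thresholds but read correctly once they track the drift.
stale = read_cell(1.1, base)       # returns 1 (wrong)
tracked = read_cell(1.1, adapted)  # returns 0 (right)
```

The same idea extends to temperature compensation: as long as the thresholds move together with whatever is shifting the cell readings, the stored symbols remain recoverable long after programming.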

“Combined these advancements address the key challenges of multi-bit PCM, including drift, variability, temperature sensitivity and endurance cycling,” said Dr. Evangelos Eleftheriou, IBM Fellow.

There’s still a great deal of work to do before phase-change memory can be considered a candidate to replace NAND flash or DRAM in certain situations. The performance and power impact of these new structures has not been characterized, and the switching time hasn’t been revealed.

An IBM video explaining how PCM memory works also covers some general information on this latest breakthrough. Note that PCM, like NAND flash, takes a performance hit when it shifts to a multi-bit architecture. While single-bit PCM is nearly as fast as DRAM (according to IBM), multi-bit PCM is significantly slower. Data retention (how long data remains in the cell) is also worse than that of NAND flash, which has lower endurance (how many read/write cycles the cells can withstand) but higher data retention.

Phase-change memory is theoretically capable of replacing DRAM in at least some instances, but if these density gains come at the cost of programming speed, the net gain may be minimal. Phase-change memory also requires large amounts of power to program and generates a great deal of heat as a result.

This video from IBM walks through the history of phase-change memory, explains the basics of its function, and covers the most recent breakthrough. We think IBM’s discovery here could help pave the way for a long-term replacement to NAND flash, but we’re still years away from that. Intel’s Optane 3D XPoint memory may make its own play for the server and data center space, and Micron, which used to manufacture PCM for the mass market, doesn’t build it anymore.