Ask anyone who has spent more than a few minutes inside a VR headset, and they’ll mention the screen door effect: the visible mesh between pixels that appears when you view a screen at very close range. It takes a lot of pixels to truly eliminate it, though the exact threshold varies somewhat depending on the screen technology you deploy. What Samsung and Stanford have developed is different from anything we’ve got on the market today.
Right now, a high-end smartphone might offer 400-500 PPI (pixels per inch), while a monitor or TV typically lands between 100 and 200. Laptops tend to be higher than desktop monitors because screen resolution has grown even at small panel sizes. PPI has been adopted as a very loose metric for how “clear” on-screen text will be, even though it’s a poor way to use it: differences in underlying panel technology and pixel layouts can translate into different perceived levels of quality between two panels of the same resolution and size.
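For reference, PPI is just the diagonal pixel count divided by the diagonal size in inches. A quick sketch (the example panel dimensions below are typical flagship and monitor specs, not figures from the research):

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: diagonal resolution divided by diagonal size."""
    return math.sqrt(width_px**2 + height_px**2) / diagonal_in

# A 6.2-inch 1440x3200 phone panel -> roughly 566 PPI
print(round(ppi(1440, 3200, 6.2)))

# A 27-inch 4K (3840x2160) desktop monitor -> roughly 163 PPI
print(round(ppi(3840, 2160, 27)))
```

Run the numbers for almost any shipping display and you'll land in the ranges above, which is what makes a 10,000 PPI claim so striking.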
The fact that we’re talking about a jump from 500 PPI to 10,000 PPI is also noteworthy. PPI may not be the whole enchilada of image quality, but improving even one aspect of image quality by 20x tends to yield results. Also, please note: While the researchers are claiming 10,000 PPI, it’s not at all clear humans would ever benefit from that kind of resolution. This is actually good because it implies we could benefit from the technology even if it could only hit 2,000 PPI or 5,000 PPI. In this context, our lack of eagle eyes is an advantage. Eagles have terrible problems in VR.
So How Does This New OLED Work?
OLEDs on the market today are made one of two ways. Mobile devices tend to use dedicated red, green, and blue OLEDs, while TVs use white OLEDs with color filters over them. Each approach is tailored to a specific set of constraints. This new method is completely different from any OLED we’ve built before.
This new display technology sandwiches an OLED film between two reflective surfaces: one made of a silver film, and one that IEEE Spectrum describes as a “metasurface” of microscopic pillars packed closely together. A square cluster of these pillars (80nm high, 100nm wide) can serve as a pixel. Even more interestingly, the design can specify which subpixels should be lit. Nano-pillars in a target subpixel manipulate the white light falling on them so that the subpixel reflects a specific color of light (red, green, or blue). The most densely packed clusters of nano-pillars produce red light, moderately dense clusters produce green light, and the least-dense clusters produce blue light.
According to the research team, emitted light bounces back and forth between the device’s reflective layers until it escapes through the silver film covering the panel’s surface. This design offers a 2x improvement in luminescence efficiency along with better color purity.
“If you think of a musical instrument, you often see an acoustic cavity that sounds come out of that helps make a nice and beautiful pure tone,” says study senior author Mark Brongersma, an optical engineer at Stanford University. “The same happens here with light — the different colors of light can resonate in these pixels.”
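The resonance Brongersma describes is the same principle as a Fabry-Pérot optical cavity: light whose half-wavelength fits a whole number of times into the cavity's optical path is reinforced, and other wavelengths are suppressed. The paper's actual cavity dimensions and materials aren't given in this article, so the cavity length and refractive index below are illustrative assumptions only:

```python
def resonant_wavelengths_nm(cavity_nm: float, n_index: float = 1.8,
                            max_order: int = 6) -> list[float]:
    """Fabry-Perot resonance condition: m half-wavelengths fit in the
    optical path n*L, so lambda_m = 2 * n * L / m for integer order m."""
    return [2 * n_index * cavity_nm / m for m in range(1, max_order + 1)]

# Assumed 430nm cavity with refractive index 1.8 (NOT values from the paper):
for lam in resonant_wavelengths_nm(430):
    if 380 <= lam <= 750:  # keep only wavelengths in the visible range
        print(f"resonant mode at {lam:.0f} nm")
```

The point of the sketch is only that tweaking the cavity geometry shifts which colors resonate, which is how varying the nano-pillar density could select red, green, or blue per subpixel.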
In theory, these phenomenal pixel densities could be used to build AR and VR screens that wouldn’t be subject to the screen door effect. 10,000 pixels per inch represents a 20x leap over our current maximum. Whether there’s any kind of near-term roadmap to bring this display to market is a very different question. OLED itself took over a decade to ramp into manufacturing, and technologies like micro-LED have tremendous promise but limited near-term commercial prospects. If Samsung and Stanford can bring this tech to market, it probably won’t be for another 5-10 years.