VR knowledge popularization III – Seeing the face of VR: VR screen technology and parameter analysis

Earlier we covered VR spatial positioning and motion capture — the soul of VR, if you like. In this chapter we turn to the most important physical part of a VR device: its face, the image. After all, the first impression a beautiful woman makes comes from her face, right?


1.Resolution

Note that a conventional screen is shared by both eyes, whereas in VR each eye gets its own display, so VR headset resolutions are usually quoted per eye (monocular). Steve Jobs introduced the concept of a “retina” display when he launched the iPhone 4: at a viewing distance of 10–12 inches, a pixel density of about 300 ppi (pixels per inch) reaches the limit of what the eye can resolve. The resolving limit of the human eye is roughly 60 pixels per degree; a single eye’s horizontal field of view is about 150 degrees and its vertical field of view about 120 degrees. So for a screen that completely covers the field of view, reaching the “retina” effect would require a resolution of 9000 (150 × 60) × 7200 (120 × 60) per eye, or 18,000 (9000 × 2) × 7200 for both eyes.

So the conclusion is that a binocular resolution of roughly 12,450 × 6,840 or higher — somewhere between 8K and 16K, up to the theoretical 18,000 × 7,200 above — is needed to achieve the “retina” effect.
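The arithmetic above can be written out as a short sketch, using the figures from the text (60 pixels per degree of acuity, a 150° × 120° field of view per eye):

```python
# Rough "retina" resolution estimate for a VR headset, using the
# figures from the text: 60 pixels per degree (PPD) of visual acuity,
# ~150 deg horizontal and ~120 deg vertical field of view per eye.
PPD = 60          # human-eye resolving limit, pixels per degree
H_FOV_DEG = 150   # horizontal FOV of a single eye
V_FOV_DEG = 120   # vertical FOV of a single eye

h_pixels = PPD * H_FOV_DEG      # horizontal pixels per eye
v_pixels = PPD * V_FOV_DEG      # vertical pixels per eye

print(f"per eye  : {h_pixels} x {v_pixels}")          # 9000 x 7200
print(f"binocular: {2 * h_pixels} x {v_pixels}")      # 18000 x 7200
```

The per-eye result (9000 × 7200) matches the text; doubling the horizontal figure for two eyes gives the 18,000 × 7,200 upper bound.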

Of course, these are only theoretical values. In practice, a monocular 1200 × 1080 (with supersampling enabled) is usable, and at a monocular 1600 × 1440 text becomes readable. At roughly 8K-class resolution, the pixel grid and screen-door effect are basically no longer visible.

Note that going beyond 1600 × 1440 per eye requires more transmission bandwidth, which usually demands a DisplayPort (DP) connection or better; HDMI is often not enough. So if you want to play PC VR on a laptop, check whether it has a DP output.


2.Refresh rate

Refresh rate is now an important parameter of any display device. Simply put, it is the number of times the screen is redrawn per second. Think of it this way: when watching a movie, what we actually see is a series of still pictures, like a slide show. Why does the picture appear to move? Because of the persistence of vision: the impression of the previous frame has not yet faded from the brain when the next, only slightly different frame arrives. An action is spread across many frames, so we perceive motion; each replacement of the picture is one refresh. If an action is shown in 20 frames, it looks a bit like a cartoon; raise it to 30 frames and it looks more natural. That, in essence, is the refresh rate at work.

You can also think of the refresh rate as the number of times per second the screen is scanned. In general, the higher the refresh rate, the more stable the image and the easier it is on the eyes.

We know that 24 fps is already enough for continuous motion, and that 60 fps feels smooth to most people; but for VR these rates are nowhere near enough to provide real immersion. In theory the human eye can perceive up to 1000 fps (per Wikipedia, though I could not find the cited source), and 150–240 fps already looks realistic enough to an untrained observer. So even current tier-1 PC VR devices — the Vive and Oculus at 90 Hz, the Valve Index at 120 Hz (with an experimental 144 Hz mode) — are still on the low side.
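One way to see why high refresh rates are so demanding is to look at the time budget they leave the GPU for each frame — a quick sketch:

```python
# Time available to render one frame at common VR refresh rates.
# Higher refresh rates leave less time per frame, which is why
# high-refresh VR pushes GPU requirements so hard.
for hz in (60, 90, 120, 144):
    frame_time_ms = 1000.0 / hz
    print(f"{hz:>3} Hz -> {frame_time_ms:.2f} ms per frame")
```

At 90 Hz the renderer has only about 11.1 ms per frame; at 144 Hz, under 7 ms.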

The rendering cost of 3D games is closely tied to resolution, which means that to deliver 3D game content matching retina-grade VR, computer performance would have to increase by tens of times. By Moore’s Law, that process would take nearly 10 years.

So what can be done? Don’t worry — clever engineers noticed that the human eye only sees sharply in a small area (about 2°) around the point of focus. Exploiting this, we can cut the number of pixels that need to be rendered — and with them the compute and transmission requirements — by lowering the resolution everywhere else.

The figure above shows the resolution curve of the human left eye. As it shows, only the area near the fovea centralis (the central pit of the retina) has high resolution; around it, resolution drops sharply to less than one tenth of the central value. To exploit this, the VR headset needs a built-in eye tracker: it follows the fovea to determine which point the user is looking at, renders at full resolution around that point, and at low resolution everywhere else. Eye-tracking technology is now quite mature, but related products still ship as headset add-ons, which are too bulky for users who wear glasses, need a USB cable, and are fiddly to align. I believe the next generation of VR headsets will integrate eye tracking, at which point these problems will disappear. According to nVidia, this technique can raise rendering performance by 2 to 3 times.
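A back-of-the-envelope sketch shows where the 2–3× figure comes from. The foveal radius and peripheral resolution scale below are illustrative assumptions of mine, not figures from any shipping product:

```python
import math

# Rough estimate of the pixel saving from foveated rendering.
# Assumptions (illustrative, not from any real headset):
#  - a full-resolution circle of 10 deg radius around the gaze point
#    (a generous margin around the ~2 deg fovea, to tolerate tracking lag)
#  - the periphery rendered at 1/4 of full pixel density
H_FOV, V_FOV = 150, 120      # degrees, as in the text
FOVEA_RADIUS = 10            # deg, full-resolution region radius (assumed)
PERIPH_SCALE = 0.25          # peripheral pixels per full-res pixel (assumed)

total_area = H_FOV * V_FOV                   # field of view, square degrees
fovea_area = math.pi * FOVEA_RADIUS ** 2     # full-res region
periph_area = total_area - fovea_area        # low-res region

effective = fovea_area + periph_area * PERIPH_SCALE
print(f"pixels rendered vs. naive full-res: {effective / total_area:.0%}")
```

Under these assumptions only about a quarter of the pixels need full-rate rendering, i.e. a speedup of roughly 3–4× — consistent with the 2–3× nVidia quotes once real-world overheads are accounted for.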

Looked at another way: since current VR headsets use a (Fresnel) lens to map the rectangular (nearly square) display panel onto the human eye’s field of view, some pixels on the panel are simply wasted. For one thing, only the inscribed, near-circular region of the panel is visible — the pixels in the four corners are never seen, so there is no need to render them at all. For another, because of the lens’s optics, the visible pixel density at the edges is lower than at the center, so full-resolution rendering is unnecessary there. nVidia’s Multi-Res Shading exploits exactly this to improve rendering performance by 33%–50%.

Schematic of nVidia Multi-Res Shading: some edge regions have lower visible pixel density and are therefore rendered at lower precision.

While these techniques are effective at saving pixels to be rendered, note that actually manufacturing display panels with non-uniform pixel density is essentially impractical, due to production difficulty and cost. So these techniques can only ease the demand on computer performance, not on the panel itself.


3.Field of view (FOV)

FOV has two definitions:

1. With the lens of an optical instrument as the vertex, the angle formed by the two edges of the largest range over which the target object can be observed through the lens is called the field of view. This is shown in the figure below.

The field of view determines the instrument’s range of vision: the larger the field of view, the wider the view and the smaller the optical magnification. In plain terms, objects outside this angle will not be captured by the lens.

2. In a display system (such as a TV), the field of view is the angle between the edges of the display and the lines connecting them to the viewing point (the eye).

For example, in the figure below, angle COD is the horizontal field of view and angle AOC is the vertical field of view.

For VR, the applicable definition is the first one (even though a VR headset has both a display and a lens).

In other words, when a manufacturer announces the field of view of its product, it means the field of view of one of the lenses of the VR glasses. The FOV is, in effect, just a parameter of the lens.

Let’s take an example. In the figure below, the eye looks through the VR lens at a figure. If the figure is very tall, you cannot see all of it; the highest and lowest rays we can see are those refracted through the lens into the eye, and the angle between these two refracted rays is the FOV (the angle between the two red lines in front of the eye).
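Under a very idealized simple-magnifier model (display sitting at the lens’s focal plane), the FOV can be estimated from just two lens parameters. Real headsets use distortion-corrected Fresnel optics, so this is only a sketch, and the dimensions below are made-up illustration values:

```python
import math

# Idealized FOV estimate from a simple magnifier model: with the
# display at the lens's focal plane, the half-angle of the view is
# atan(half_display_width / focal_length). The numbers are assumed,
# purely for illustration.
display_width_mm = 90.0   # visible panel width behind one lens (assumed)
focal_length_mm = 40.0    # lens focal length (assumed)

half_angle = math.atan((display_width_mm / 2) / focal_length_mm)
fov_deg = 2 * math.degrees(half_angle)
print(f"approx. horizontal FOV: {fov_deg:.0f} degrees")
```

With these made-up numbers the model lands in the ~90–100° range typical of consumer headsets: a shorter focal length or wider panel widens the FOV, which is exactly the trade-off headset designers juggle.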

As you know, the structure of VR glasses is: lens + screen.

If the screen is embedded directly in the headset and no PC connection is needed (in which case the headset necessarily carries a high-performance chip and a software system to run games and other VR content), it is called a standalone (all-in-one) VR headset — for example, the Oculus Quest.

If the screen is embedded in the headset but a PC connection is required, with the VR content (such as games) running on the PC, it is called a tethered VR headset — for example, the Valve Index, HTC VIVE, Pimax, and so on.

If it is only a frame of glasses with no screen of its own, using various models of phone as the display, it is called a VR viewer — for example, the Samsung Gear VR or the Storm Magic Mirror.

Simply put, a good VR headset design puts the edge of the field of view close to the edge of the screen; that gives the best immersion. The role of FOV in VR shows up mainly in immersion: generally, the larger the field of view, the less likely you are to feel dizzy and the stronger the sense of immersion.

But everything has its limits, and too much is no good either. When watching a movie on a VR device, if the field of view is too large you may see only half of the picture and have to hunt around for the other half.


4.Screen Type

Current VR screens fall into two kinds: PenTile AMOLED and RGB LCD.

VR needs a fast enough pixel response time. Early LCDs were too slow and produced smearing (ghosting); only AMOLED met the requirement, so early VR headsets all used Samsung AMOLED screens.

AMOLED’s advantages are good color, high contrast, and true blacks. The disadvantages: AMOLED effectively means Samsung alone, and its pixels use Samsung’s PenTile arrangement (normally one pixel consists of 3 subpixels, one per primary color; in PenTile, 2 pixels share 5 subpixels — each pixel has only 2 of its own and shares one with its neighbor — giving adjacent pixels a “stuck-together” feel). This makes the screen-door effect (the non-emitting black gaps between OLED pixels) and the visible pixel grid (the subpixels sit in a diamond arrangement) more obvious, and lowers effective sharpness. Generally speaking, the physical sharpness of a PenTile AMOLED is about 20% lower than an LCD of the same resolution.

Samsung PenTile OLED screen

By contrast, in 2019 LCD made a breakthrough and its response time caught up, so large numbers of LCD-based VR headsets began to appear. LCD’s disadvantages are washed-out color, with contrast and black purity inferior to AMOLED; its advantage is sharpness — text, especially at the edges of characters, is relatively clear (because the subpixels are in a square RGB arrangement).

Standard RGB-stripe LCD screen

In general, if you don’t mind less saturated colors, an RGB-stripe LCD beats a PenTile AMOLED.

Of course the ideal would be an RGB-stripe OLED, but no such product with a high enough resolution exists yet.


5.Latency

Anyone who plays online games knows what latency is: if you move your character and it takes 2 seconds before it starts to move, the latency is 2 seconds. Latency is a very, very important parameter in VR. Think about it: if your body moves but the picture lags half a beat behind, you will feel like vomiting — a very bad experience. A latency within 20 ms (milliseconds) is generally considered good.
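The 20 ms target is a budget spread across every stage between head movement and light reaching the eye ("motion-to-photon" latency). The stage timings below are illustrative assumptions, not measurements of any real headset:

```python
# A motion-to-photon latency budget: the delay a user feels is the sum
# of every stage between head movement and photons leaving the panel.
# All stage timings here are illustrative assumptions.
budget_ms = {
    "sensor sampling":   1.0,
    "tracking / fusion": 2.0,
    "game logic":        2.0,
    "GPU rendering":    11.0,   # roughly one frame at 90 Hz
    "display scan-out":  3.0,
}
total = sum(budget_ms.values())
verdict = "OK" if total <= 20 else "too high"
print(f"motion-to-photon: {total:.0f} ms ({verdict} vs the ~20 ms target)")
```

Note how the rendering stage alone eats more than half the budget, which is why techniques like foveated rendering (section 2) matter so much for latency as well as throughput.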


6.Frame rate

Frame rate and screen refresh rate are related: the refresh rate sets the upper bound on the frame rate that can be displayed, while the actual smoothness you see depends on the frame rate. If the refresh rate is 90 Hz, a frame rate of even 200 is useless, because the screen can only show 90 frames per second. Ideally the frame rate should always be greater than or equal to the refresh rate — for a 90 Hz screen, a steady 90 fps would be optimal — but this is generally hard to achieve. By the current industry standard, holding a stable 60 fps already counts as quite good.
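The relationship between the two rates can be captured in a minimal model — a sketch, ignoring real-world effects like judder and frame-time variance:

```python
# The panel's refresh rate caps what the user actually sees: rendering
# faster than the refresh rate wastes frames; rendering slower means
# some refreshes repeat an old frame. A minimal model:
def displayed_fps(render_fps: float, refresh_hz: float) -> float:
    """Distinct frames per second the panel can actually show."""
    return min(render_fps, refresh_hz)

print(displayed_fps(200, 90))   # extra rendered frames are wasted
print(displayed_fps(60, 90))    # below refresh: repeated frames
```

So on a 90 Hz panel, 200 rendered fps still shows only 90, while 60 rendered fps shows 60 — with some refreshes displaying a stale frame.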


Those are the main parameters of a typical VR headset. As a reference, their order of importance is: resolution > refresh rate > FOV > screen type > everything else.
