Clay Bavor, head of Google VR/AR: Where are we with VR, and where are we going?

Lei Feng's note: After last year's peak of enthusiasm, VR is now in a period of adjustment. As a leader in mobile VR, how does Google view this period? Ahead of Google I/O, Clay Bavor, Google's VR/AR lead, shared his and Google's views on Medium. Let's take a look together. This article was compiled by Lei Fengnet.

The past year has been an important one for VR and AR: consumer VR hardware, from smartphone-based headsets to desktop-class systems, reached the market, and developers began to work seriously on AR. Our team at Google has been hard at work launching some important products. Six months ago, we launched Daydream, our platform for high-quality mobile VR. Soon after, the first phone with Tango built in went on sale, putting smartphone-based AR into consumers' hands for the first time.

In a short time the industry has changed a great deal and we have made a lot of progress, but many problems still need to be solved. Millions of consumers have begun to experience what these new technologies can do, yet we are still at an early stage.

Now is a good time to step back, take a broad look at VR and AR, and share our views: where we are, where we are headed, and why these technologies matter to Google and to the world.

Google VR/AR Vice President Clay Bavor. Image source: Fortune

What are VR and AR, exactly?

What do the terms "VR" and "AR" actually mean? I often put it this way: VR can take you anywhere, and AR can bring anything to you. VR transports you to another place. AR lets you stay where you are and brings digital objects and information to you, making them appear as if they were part of your surroundings. Both give you a kind of superpower. The experience is typically delivered by wearing a head-mounted device or by looking through the viewfinder of a smartphone.

Many people ask me which technology will "win." The problem with that question is that it treats the two as mutually exclusive, which is the wrong way to think about them. VR and AR are better understood as two points on a spectrum: the difference is how much computer-generated imagery is blended into your natural environment. VR replaces the real world entirely with computer-generated imagery, for example transporting you to a virtual exhibition at the Louvre. AR, in contrast, adds computer-generated imagery to your surroundings. For example, as you walk through the real Louvre, AR could display virtual footprints on the floor ahead of you, guiding you to Leonardo da Vinci's Mona Lisa.

At some point, though, the lines on that spectrum will blur: we will have AR headsets that can augment your entire field of view, VR headsets that can realistically show the real environment around you, and devices that do both. Once technology reaches that point, the distinction between VR and AR will matter even less than it does today.

In the meantime, if VR and AR are two points on a spectrum, what should we call the spectrum itself? There are several candidates: immersive computing, presence computing, physical computing, perceptual computing, mixed reality, immersive reality. The technology is new and the terminology is still settling, but for now we call it immersive computing.

The evolution of human-computer interaction

A statistical punch card machine from the 1940s

Why does it matter that immersive computing can make things feel real, and why is it worth investing in the technology to make that possible?

To look ahead, it helps to look back at the history of computing and how we interact with computers. The pattern of the past few decades is clear: every time we make computers work more the way we do, every time we remove a layer of abstraction between people and computers, computers become more widely used, easier to use, and more valuable. In turn, we become more capable and more productive.

In the beginning, people could only "talk" to computers by literally rewiring them. Punch cards were an improvement, making computers easier to program. Later came the command line, and typing replaced the punch card.

The real breakthrough was the graphical user interface (GUI). For the first time, computing became visual, and suddenly far more people could access and use computers. People began using them for everything from writing school reports to designing jet engines.

The smartphone put a computer in the palm of our hand and took accessibility and computing power further still. Touchscreens let us interact with our computers directly with our fingers, and smartphone cameras let them see the world. More recently, conversational interfaces such as the Google Assistant let you interact with computers more naturally and seamlessly, much as you would with another person.

Yet layers of abstraction remain. When you video chat with a friend, you see a small, flat image on a screen rather than the person as you would in real life. When you want to find a restaurant, you see an incredibly detailed map, but you still have to work out where the blue dot is relative to you and how the map relates to your surroundings; your phone can't simply show you the way there.

With immersive computing, we won't have to stare at screens or constantly check our phones; instead, the real and the virtual will surround us. We will be able to move things directly with our hands, or simply look at them and act. Immersive computing will remove still more of the abstraction between us and our computers, weaving computing seamlessly into our environment. It is the natural next step in the evolution of computing interfaces.

Why Google?

1998 google.com home page

Google's mission is to organize the world's information and make it universally accessible and useful. We started with web pages -- text and images -- and then moved on to books, maps, and videos. As the information available has grown richer, the tools for finding and accessing it have changed as well.

Immersive computing takes this a step further. If you want to learn about Machu Picchu, instead of reading about it or watching a video, you will explore the city in VR. We will have VR cameras that capture moments of our lives so that, years later, we can step back into those scenes. Instead of using a 2D street map to find a restaurant, your AR device will know exactly where it is in space and show you the way there. A surgeon will study a 3D scan of a patient to better understand their condition. As immersive computing is woven into our environment, information will become richer, more relevant, and more helpful to us.

But this is not just about the information itself; it is about how people access it. That is why we have invested in building broad computing platforms over the years. With Chrome, we want to make the web faster, safer, and more capable. With Android, our goal is to bring mobile computing to more people. With Cardboard and Daydream, we hope to make immersive computing broadly accessible through devices that are varied, useful, and fun.

Alongside artificial intelligence and machine learning, we see VR and AR as part of the next phase of Google's mission to organize the world's information. That is how we think about the broader context of immersive computing, where it sits in the evolution of computing, and why Google is investing in it.

Where are we now?

1984 Motorola DynaTAC 8000X phone, photo credit: Motorola

We are often asked when VR and AR will be "ready" and what the killer app will be. The question assumes there will be a single moment when VR and AR suddenly break through, become truly useful, and everyone wants them.

First, it is important to understand where we are in the development of immersive computing and to make the right comparisons. Consider the development of the mobile phone. The iPhone launched a decade ago, and smartphones are now everywhere, so some people assume immersive computing will follow the same curve on the same timescale. That is mistaken, and it is worth looking back at the history of the mobile phone to see why.

The first commercial handset, the DynaTAC 8000X, was released in 1984. That is 33 years ago, not ten. Since we are only now at the stage of first-generation consumer immersive computing devices, the right comparison is with the cell phones of the 1980s, not with any phone released in the past decade. That is not to say I believe VR and AR will take 30 years to mature and achieve similar reach and impact; I am more optimistic than that. But comparing them to the smartphone of ten years ago is the wrong benchmark.

There is another, related lesson from the mobile phone. In its early days, a technology serves only a small fraction of people. GPS was considered to be of limited use, mainly for emergency responders and perhaps hikers in remote wilderness; who else would need it? The first camera phones produced images that would be considered very poor by today's standards. Today, GPS and phone cameras are essentially ubiquitous.

Whatever the exact timescale, VR and AR will follow a trajectory similar to the mobile phone's. Capabilities will improve. Devices will become cheaper and easier to use. There will be breakthroughs in user interfaces and applications. As value rises and cost falls, immersive computing will make sense for more and more people. It is not a question of if, but of when.

Where will we go?

Our team's mantra is "immersive computing for everyone." We started with Cardboard; the next step was Daydream, which brought high-quality mobile VR to handsets across the Android ecosystem. Tango brings a new form of AR to smartphones, with uses in gaming, entertainment, and education. But what comes next? What needs to happen to reach the goal?

The short answer is this: it is not about one thing getting better, but about everything getting better.

Here is the longer answer.

First, friction. We must remove more of the obstacles to using these devices. Headsets need to become easier to use, more comfortable, and more portable. The standalone Daydream VR headsets we just announced at Google I/O are a step in the right direction: everything needed for VR is built in, and you simply put one on to enter virtual reality. Even so, we have barely begun to optimize the user experience.

Next, the underlying technology. To make VR more portable and AR more convincing and useful, everything behind these experiences must improve: displays, optics, tracking, input, GPUs, sensors, and more. As an example, to achieve "retina" resolution in VR, that is, 20/20 visual acuity across the full field of view, we would need roughly 30 times more pixels than today's displays provide. To make more capable AR possible, smartphones will need more advanced sensing. Our devices will need to understand motion, space, and location with great precision: not meters, but centimeters or even millimeters.
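To make that "roughly 30 times" figure concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes 20/20 acuity corresponds to about 60 pixels per degree (one pixel per arcminute), an illustrative per-eye field of view of about 110 by 110 degrees, and a current per-eye panel of about 1080 by 1200 pixels; these numbers are my assumptions for illustration, not figures from the article.

```python
# Back-of-the-envelope estimate of the pixel count needed for "retina" VR.
# All constants below are illustrative assumptions, not figures from the article.

PIXELS_PER_DEGREE = 60        # ~1 pixel per arcminute, roughly 20/20 acuity
FOV_H_DEG = 110               # assumed horizontal field of view per eye
FOV_V_DEG = 110               # assumed vertical field of view per eye
CURRENT_PANEL = (1080, 1200)  # assumed per-eye resolution of a current headset

# Pixels needed to cover the assumed field of view at retina density
required_pixels = (FOV_H_DEG * PIXELS_PER_DEGREE) * (FOV_V_DEG * PIXELS_PER_DEGREE)
current_pixels = CURRENT_PANEL[0] * CURRENT_PANEL[1]

print(f"Required per eye : {required_pixels / 1e6:.1f} MP")
print(f"Current per eye  : {current_pixels / 1e6:.1f} MP")
print(f"Ratio            : ~{required_pixels / current_pixels:.0f}x")
```

Under these assumptions the ratio works out to roughly 30 to 35 times, consistent with the "roughly 30 times" figure above; different field-of-view or panel assumptions would shift the exact number.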

These may sound like huge technological leaps, and they are. But we are already making progress. WorldSense, which provides positional tracking for VR, and VPS, a "visual positioning service" that works like GPS but for precise indoor locations, are two important drivers. And we should remember that even today's most advanced VR and AR devices are mostly built from components designed for smartphones. It is a bit like building an airplane out of bicycle and car parts. You can do it, and it is roughly where the Wright brothers started, but it cannot be the end point. Over the next few years we will see more components built specifically for VR and AR, which will lead to far more capable devices.

The other part is richer and deeper experiences. As the underlying technology improves, new applications and experiences become possible. To illustrate what I mean, take an application like Tilt Brush. It could not exist until we had controllers that could be tracked positionally in space; painting with a gamepad would make no sense. Hardware and software have to evolve together, and we will see much more of this co-evolution. Accurate hand tracking will enable a whole new set of interactions, and new kinds of social applications will depend on eye contact and realistic facial expressions.

And we need more content to experience, play with, and use: experiences and applications that appeal to everyone's interests. To help content grow, we are supporting creators through YouTube Spaces and Jump cameras, bringing more ideas and perspectives to the medium, and we work closely with dozens of artists through our Tilt Brush Artist in Residence program to help them explore VR. But of course, we need developers, creators, storytellers, and filmmakers to explore and build all of this with us.

A cave full of possibilities

Here is a metaphor I use to think about the current stage of immersive computing: we are exploring a cave of possibilities. It is a vast cave with many branches and potential paths, and most of it is still in darkness. There are a few lit spots, but it is hard to see very far. Yet through research, prototyping, building products, and, most important, seeing how people use these technologies and benefit from them, we have lit up more of the cave and can see more clearly where all of this is heading. We are making progress.

From where we stand today, it is hard to see exactly how all of this will unfold, but it is clear that it will. I am very optimistic that immersive computing will soon make our lives better. We can already see glimpses of that day: helping children explore the world from their classrooms, letting journalists bring audiences to front lines around the globe, and enabling artists to create works that were previously unimaginable. One day we will wonder how we ever got our work done without computers that perceive our environment, present information where we need it, and see, feel, and act in the real world.

It will not happen overnight, but it will happen. And it will change every aspect of how we work, play, live, and learn.
