Interview with Huang Shi: Games Are the Gateway to the Metaverse

Techplur

In recent years, video games have undergone radical changes. From early computer games to the first Atari and Nintendo consoles, and on to today’s VR and AR titles, video games have become more lifelike than ever. The days of pixelated screens and limited sound are gone. Technology is constantly advancing, and so are video games. With the rise of emerging concepts such as the metaverse, where will video games go in the future?

In this article, we interviewed Professor Huang Shi, who will share his thoughts and insights on games, VR, and the metaverse.


Q: What kind of role do game designers play in the entire process? What professional knowledge and skills are needed?

A: For game development, design is undoubtedly one of the most important and fundamental aspects. Like every man-made thing, a game must be designed, developed, and tested before it is released. Design provides the initial idea, foundation, and framework, and it determines the game’s overall positioning, look, and style. No matter how sophisticated the underlying technology is, without an inspired and imaginative design a game is only an empty shell without a soul, and it will struggle to resonate with players.

Therefore, game designers need to master a wide range of professional skills. Story designers need knowledge of literature, good copywriting, and storytelling skills. Numerical designers need rigorous logical thinking and mathematical modeling ability to balance the game as a whole. System designers need to understand gameplay, game mechanics, and human-computer interaction, and are responsible for designing the economic system, combat system, chat system, level system, and so on.

All designers need rich imagination, creativity, and the ability to start from scattered clues and construct a complete set of rules. However, this does not mean indulging in pie-in-the-sky ideas. A game is software, subject to strict technical restrictions and platform constraints, so designers must apply rigorous logic before innovating in order to avoid underlying conflicts and logical problems. In other words, game designers have to understand design, aesthetics, literature, psychology, mathematics, computer engineering, and more. I agree with James Portnow: if you want to be a good game designer, you need to know everything.


Q: You have taken part in the design of games such as Warrior Epic and Disney’s Toy Story 3 for the iPhone. Did you encounter any difficulties? If so, how did you overcome them?

A: Warrior Epic is an American-style MMORPG developed between 2007 and 2009. The game is set in a medieval “sword and sorcery” world, with classes including warriors, musketeers, and wizards. The development team brought together many top artists, and the graphics were stunning.

The greatest difficulty and challenge of the project came at the technical level: the Unreal Engine was not accessible to us, and there was no off-the-shelf alternative like Unity. Therefore, every front-end and back-end module of the game had to be built from the ground up. Our technical team innovatively adopted P2P server technology and independently developed a DirectX-based rendering module that interfaced well with tools such as 3ds Max.

At the design level, our core idea was “sharing a good time with friends”: keep the rules of the game as straightforward as possible, and highlight players and their friends working together to explore the world, avoid traps, and find treasure.

Toy Story 3 for the iPhone was developed around 2010 as a Disney movie-game tie-in. The most challenging part of this project came from the team and from communication. Game hardware was also in transition: the iPhone was replacing Nokia and rapidly taking market share. However, our team was still mainly Symbian-oriented, so the programming department had to move quickly from J2ME to Unity 3D, while the art department moved from 2D pixel animation to 3D modeling and animation. This drastic technology transition meant great uncertainty for a team of more than 200 people.

In addition, during development the screenplay of the film was still confidential, and the team in Beijing could only get sporadic scene descriptions and clips from Disney. Cross-border communication across the time difference posed many problems. In the end, thanks to the efforts of all the staff, the team completed the recruitment and technical transformation, and the product launched successfully on the App Store after optimization.


Q: As a gaming veteran, could you please share your views on the game industry?

A: The game industry looks glamorous to outsiders, but it is a field with extremely high risk and low success rates. In the early days, the barrier to game development was mainly technical, and teams lacking development experience often ran into difficulties. Later, as game engines matured and became widely available, the technical barrier gave way to the distribution barrier, and marketing and design became the core of competition.

Now, licensing has become a scarce resource that determines the success or failure of products. Statistically, only a handful of the many game projects ever find a path to commercial success. Therefore, I think small teams should broaden their thinking, look at the wider world, and export their ideas and culture globally. I also hope that someday we will see more works developed in China that achieve both higher artistic levels and commercial value.


Q: Do you use VR technology in the games you design? What new technologies can make the virtual world more realistic and enhance users’ immersion?

A: I am currently teaching at the Communication University of China, and in recent years I have developed several interactive VR works with my students. Strictly speaking, these works should be called VR interactive films or VR animations rather than typical games.

In 2017, we created a VR film that immersed the audience in the past by reconstructing historical scenes of Ruijin and its culture. There was a tricky technical problem in this project: the film needed to recreate historical figures, which demanded high-quality character performances.

After trying techniques such as 3D animation and green-screen keying, we found the traditional production process insufficient. We therefore developed our own way to present the characters in the VR environment: in the early stage we shot stereoscopic footage, using a special camera rig to capture parallax images of the actors, and later fused and restored the left and right image channels in the VR device. This preserves the details of the actors’ performances while keeping a sense of three-dimensionality and immersion, and it achieved satisfactory artistic results.
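In spirit, the playback side of such a stereo-capture pipeline simply routes each recorded channel to the matching eye so the viewer reconstructs depth from the parallax. Below is a minimal sketch of that idea; the types and texture handles are hypothetical placeholders, not taken from our actual tooling.

```cpp
#include <cstdio>

enum class Eye { Left, Right };

// Hypothetical handles for the two pre-recorded parallax image streams.
struct StereoFootage {
    unsigned leftTexture;   // frames shot from the left-eye viewpoint
    unsigned rightTexture;  // frames shot from the right-eye viewpoint
};

// When the headset renders each eye, use the footage captured from the
// matching viewpoint; the brain fuses the two channels into a 3D image.
unsigned textureForEye(const StereoFootage& footage, Eye eye) {
    return (eye == Eye::Left) ? footage.leftTexture : footage.rightTexture;
}

int main() {
    StereoFootage actor{101u, 102u};
    std::printf("left eye uses texture %u, right eye uses texture %u\n",
                textureForEye(actor, Eye::Left),
                textureForEye(actor, Eye::Right));
}
```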

Nowadays, most VR works follow PBR (physically based rendering) standards, using ZBrush, Substance Painter, and other software for modeling and material creation, and this pipeline reproduces real-world imagery much better. In the near future, NeRF (Neural Radiance Field) rendering may also find applications in VR, as its visual results are even more realistic.
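For a sense of how NeRF produces an image: its core rendering step composites the densities and colours sampled along each camera ray, front to back. Below is a minimal sketch of that compositing step, illustrative only and not tied to any particular NeRF implementation.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct RGB { float r, g, b; };

// Composite colour along one ray from per-sample densities (sigma),
// colours, and the spacing between samples (delta), following NeRF's
// volume rendering formulation.
RGB compositeRay(const std::vector<float>& sigma,
                 const std::vector<RGB>& colour,
                 const std::vector<float>& delta) {
    RGB out{0.f, 0.f, 0.f};
    float transmittance = 1.f;  // fraction of light not yet absorbed
    for (size_t i = 0; i < sigma.size(); ++i) {
        float alpha = 1.f - std::exp(-sigma[i] * delta[i]);  // opacity of this sample
        float weight = transmittance * alpha;
        out.r += weight * colour[i].r;
        out.g += weight * colour[i].g;
        out.b += weight * colour[i].b;
        transmittance *= (1.f - alpha);
    }
    return out;
}

int main() {
    // Two samples along a ray: a faint red patch in front of a dense blue one.
    RGB c = compositeRay({0.5f, 5.0f},
                         {{1.f, 0.f, 0.f}, {0.f, 0.f, 1.f}},
                         {0.1f, 0.1f});
    std::printf("ray colour = (%.2f, %.2f, %.2f)\n", c.r, c.g, c.b);
}
```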


Q: For virtual environments, real-time dynamic visuals are the primary condition for producing a realistic feeling; without them there is no “reality.” Can you briefly introduce VR image generation technology? How well does the rendering work? What are the current limitations?

A: VR and game graphics are essentially the same: both rely on so-called real-time rendering, a computer graphics technique that draws 3D data into images within a response time short enough that the user perceives no visual or interaction delay.

The user usually feels no delay in this process because user input and the rendered display are almost synchronized. The technology handles 3D objects, line-of-sight occlusion, changing viewpoints, and lighting changes. Standard techniques include scan-line rendering, rasterization, ray casting, radiosity, ray tracing, and so on.
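To make the idea of “real-time” concrete, here is a minimal frame-loop sketch; the three step functions are hypothetical placeholders, not taken from any particular engine. Input is sampled, the world is updated, and the frame is drawn, all within a fixed per-frame time budget, which is why the user’s commands and the displayed image feel synchronized.

```cpp
#include <chrono>
#include <thread>

// Hypothetical placeholders for an engine's input, simulation, and draw steps.
void pollInput()            { /* read the latest controller or head-pose state */ }
void updateScene(double dt) { (void)dt; /* advance animation, physics, game logic */ }
void renderFrame()          { /* rasterise the 3D scene and present it */ }

int main() {
    using clock = std::chrono::steady_clock;
    const std::chrono::duration<double> frameBudget(1.0 / 60.0);  // ~16.7 ms per frame at 60 Hz

    auto previous = clock::now();
    for (int frame = 0; frame < 600; ++frame) {   // run ~10 seconds, then exit
        auto start = clock::now();
        double dt = std::chrono::duration<double>(start - previous).count();
        previous = start;

        pollInput();       // sample input for this frame
        updateScene(dt);   // update the world using the elapsed time
        renderFrame();     // draw and display the frame

        // If the frame finished early, wait out the rest of the budget so that
        // input sampling and display stay locked to the refresh rate.
        auto elapsed = clock::now() - start;
        if (elapsed < frameBudget)
            std::this_thread::sleep_for(frameBudget - elapsed);
    }
}
```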

For VR, the major difference from games is how the camera is controlled. The VR camera is split into left and right channels, driven in real time by the position and orientation of the user’s headset. In contrast, the camera in games is mostly controlled by peripherals such as a keyboard, mouse, or joystick.
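A minimal sketch of that difference follows; the math type and IPD value are illustrative assumptions, not taken from any specific headset SDK. The tracked head pose yields one position, and the left and right cameras are derived from it by offsetting each eye by half the interpupillary distance along the head’s local right axis.

```cpp
#include <cstdio>

// A toy 3D vector; real engines would use their own math types.
struct Vec3 { float x, y, z; };

struct EyePositions { Vec3 left; Vec3 right; };

// Derive per-eye camera positions from a single tracked head position.
// 'rightDir' is the head's local right axis from the headset's orientation;
// 'ipd' is the interpupillary distance in metres (~0.063 m on average).
EyePositions eyePositions(const Vec3& head, const Vec3& rightDir, float ipd = 0.063f) {
    const float h = ipd * 0.5f;
    return {
        { head.x - rightDir.x * h, head.y - rightDir.y * h, head.z - rightDir.z * h },
        { head.x + rightDir.x * h, head.y + rightDir.y * h, head.z + rightDir.z * h },
    };
}

int main() {
    // Example: head 1.6 m above the floor, looking down -Z, so local right is +X.
    EyePositions eyes = eyePositions({0.f, 1.6f, 0.f}, {1.f, 0.f, 0.f});
    std::printf("left eye x = %.3f, right eye x = %.3f\n", eyes.left.x, eyes.right.x);
    // Each eye's view matrix is then built from its own position plus the shared
    // head orientation, and the scene is rendered once per channel.
}
```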

At present, VR graphics technology has not yet reached a satisfactory level. The first problem is latency: VR requires end-to-end delay to stay within 20 ms, and anything longer causes players obvious vertigo and discomfort. The second is the limited field of view.

Current headsets generally offer a field of view of only about 110 degrees, while the natural field of view of the human eyes reaches about 230 degrees. In addition, vestibular motion sickness, low resolution, the screen-door effect, bulky headsets, and other problems all need to be solved one by one.
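To make the 20 ms budget concrete, here is a small back-of-the-envelope check; the refresh rates are typical values assumed for illustration, not figures from the interview. At 90 Hz, drawing a single frame already consumes about 11 ms, leaving little headroom for tracking, transport, and display scan-out.

```cpp
#include <cstdio>

int main() {
    const double latencyBudgetMs = 20.0;             // commonly quoted comfort threshold
    const double refreshRatesHz[] = {72.0, 90.0, 120.0};

    for (double hz : refreshRatesHz) {
        double frameMs = 1000.0 / hz;                  // time to draw one frame
        double headroomMs = latencyBudgetMs - frameMs; // left for tracking, transport, scan-out
        std::printf("%6.0f Hz: frame %.1f ms, remaining budget %.1f ms\n",
                    hz, frameMs, headroomMs);
    }
}
```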


Q: What kind of metaverse do you have in mind? What do you want to do in the coming metaverse world?

A: The metaverse I have in mind may not be quite what many people imagine; it is not simply a matter of wearing VR glasses. I think current VR devices are only a primitive stage, a stopgap in the absence of better alternatives. As Elon Musk has said, current VR devices are too bulky for users to keep wearing for long.

A true metaverse should present the world more naturally: accessible, ubiquitous, instantly connected, and covering everything. This means we need more advanced ways to transmit visual and motion signals, and we cannot even rule out brain-computer interfaces or human body modification. Such seemingly unnatural, revolutionary technology will discourage most people, and only a few brave pioneers will enter the metaverse first and become the “new types” who experience a hyper-dimensional space.

The related health and technical risks are still unknown, and these brave pioneers may also fall victim to the new technology. After several years of trial, error, and improvement, when the technology is fully mature and safe, ordinary users will gradually put their worries to rest and accept this new social form.

In the future metaverse, I hope human beings can improve the current way of acquiring knowledge, learning and retrieving massive amounts of information through a brain-computer interface. I believe humanity will then usher in a new century of infinite possibilities.


Q: What do you think are the technical difficulties in realizing the metaverse?

A: I think the metaverse will develop in three stages, and we are now in the first, the “parallel universe” stage. At this stage, the main content of the metaverse is entertainment and experience; it interacts little with the real world, and the two sides run in parallel.

The second stage is the “superimposed universe,” when the metaverse will become a tool of productivity. This stage can be divided into the seven layers suggested by Jon Radoff, CEO and co-founder of Beamable: infrastructure, human interface, decentralization, spatial computing, creator economy, discovery, and experience. Each layer faces significant technical challenges. For example, transmission and computing in the infrastructure layer must overcome challenges ranging from 5G to 6G, cloud computing, and process nodes from 7 nm down to 1.4 nm, to MEMS, GPU technology, and materials science.

The human interface layer will develop mobile computing, gesture recognition, voice recognition, and other technologies; the decentralization layer needs edge computing, artificial intelligence, blockchain, and other decentralized technologies; spatial computing needs real-time rendering, game engines, VR/AR/MR, and spatial visualization; and the creator economy needs crowdsourced tools, asset markets, workflows, and commerce systems. Meanwhile, we need to upgrade social, e-commerce, and Internet advertising applications for the discovery layer, and to further develop game, social, exhibition, and shopping content for the experience layer.

Evolving from the first stage to the second involves substantial technical difficulties at almost every layer. As for how to expand to the third stage, the true metaverse, we do not yet have a definite technical route, so there is no way to speak of its technical difficulties. Perhaps this roadmap will gradually be revealed in a few years, when biotechnology and information technology are far more developed.


Q: What should we do to address the future problems of data processing and network latency?

A: The metaverse involves a huge amount of data, such as holograms, virtual spaces, and natural interactions, all of which require massive information flows, and the demand for network bandwidth might reach the level of 10 Gbps. Meanwhile, VR socializing requires further reductions in network latency, while blockchain further raises the requirements for network security.
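One way to see where figures like 10 Gbps come from is a naive uncompressed-video calculation; the resolution, frame rate, and compression ratio below are illustrative assumptions, not numbers from the interview.

```cpp
#include <cstdio>

int main() {
    // Illustrative parameters: per-eye 4K video at 90 frames per second.
    const double width = 3840, height = 2160;
    const double eyes = 2, fps = 90, bitsPerPixel = 24;

    double rawBps = width * height * eyes * fps * bitsPerPixel;
    std::printf("uncompressed stereo video: %.1f Gbps\n", rawBps / 1e9);

    // Even with strong (~100:1) video compression the stream stays in the
    // hundreds of Mbps, and richer holographic or volumetric data pushes
    // the requirement toward the multi-Gbps range.
    std::printf("after ~100:1 compression: %.0f Mbps\n", rawBps / 100 / 1e6);
}
```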

Faced with these problems, companies represented by Huawei are tackling the technical challenges in this field. Experts predict that network port rates will soon exceed 3 Tbps, and that new technologies such as rate-neutral Ethernet, optical-waveguide-based transmission, and improved on-chip light emission will further increase network bandwidth. Hopefully, network latency can be reduced to the microsecond level through cross-node pooling of computing resources.

In addition, with the introduction of millimeter-wave spectrum and new modulation and coding schemes, Wi-Fi 8 may reach a 10-gigabit rate per port and a peak rate of 100 Gbps. I know little about this part, but I look forward to more guidance from experts in related fields.


Q: What are your expectations for future games?

A: I think future games will show two trends: diversified development and media integration. Diversification means the global game industry will enter a state of full bloom, and the richer supply of products will further subdivide player communities. By then, whether players prefer hardcore or casual games, ACG or realistic styles, they will quickly find content that matches their interests.

Games will develop different sub-genres, and in certain branches gameplay may no longer be the focus, giving way to artistic or functional experiences. In the metaverse, new forms of social play, UGC, and self-generated content will also change the paradigm of gaming.

On the other hand, media integration means that games will merge with film, VR, artificial intelligence, and other elements to form entirely new genres. With the addition of AI technology, digital-human categories such as AI-NPCs and AI-DMs will develop further, and even worlds created entirely by AI may emerge.

I believe that the development of science and technology is a historical inevitability, but human nature remains unchanged. No matter how far game technology develops, games should remain human-oriented and dedicated to promoting social development, artistic sublimation, and human liberation.


Journalist’s Notes

Nowadays, the visual quality of video games keeps improving, gameplay and stories are becoming more diverse, and controls are more user-friendly. With the continuous development of VR and other technologies, people seem to glimpse the future of games in them.

In the future, as technology progresses, games may come to resemble the Oasis depicted in “Ready Player One,” where everyone can become a superhero and distant dreams are within reach. As Dr. Huang said, the true metaverse should present the world more naturally: accessible, ubiquitous, and connected to everything.


About the Guest

Huang Shi is an Associate Professor at the Communication University of China and holds a doctorate from the Academy of Fine Arts of Tsinghua University. He is Deputy Secretary General of the Science Fiction Film Committee (Science Fiction Writers Association), Deputy Director of the Popular Science Game Committee, a Visiting Professor at the Parsons School of Design in New York, a responsible expert of the Beijing Science and Technology Commission, and a committee member of the HCII International Conference on Human-Computer Interaction.

Dr. Huang has long worked on interactive art and on popular-science and science-fiction creation. He was the game designer of Toy Story 3 for iPhone and the concept design director of the film “The Three-Body Problem.”

He directed the science fiction short film Deep Space, which won the Best Short Film Award at the 2017 Water Drop Award. Dr. Huang has published five monographs and holds one national invention patent. His work “Drift Bottles” was selected for the ZeroOne & ISEA 2006 International Electronic Art Exhibition in the U.S. and the Basel New Media Art Exhibition in Switzerland. His new media work “Empty Window” won the first Wu Guanzhong Science & Art Innovation Award and the Best Paper award at the 2018 HCII International Conference on Human-Computer Interaction.

Editor: Pang Guiyu | Source: 51CTO