The State of Virtual Reality in Architectural Practice – 2017 Part One – Where we are

At Russell and Yelland we have been working with Virtual Reality for a year now. We would like to share some of our experiences and lessons from this journey, which we see as the early days of VR.

The term Virtual Reality is an oxymoron: it combines the unreal, ‘virtual’, with the real, ‘reality’. Virtual in this context is the computer simulation of what our senses perceive to be the world around us: that is, Reality. From its beginning, VR hardware has looked like a bulky, cumbersome set of goggles. While the technology is evolving, its appearance has not changed much.

VR is not a new thing, but there has recently been renewed interest, and big-name technology companies are jumping on board.

Two key factors have recently arrived to make Virtual Reality accessible for architectural firms such as Russell and Yelland. We already had the first piece of the puzzle, the virtual 3D model, because it is part of our regular architectural design workflow.

The recent innovations we have been able to plug into our architectural practice are:

  1. VR hardware, now accessible, user friendly and at a level of quality able to provide an immersive experience.
  2. Real-time rendering software, aka a ‘gaming engine’ that runs in our regular design environment.

A life-like experience in which one can move around the environment requires a powerful computer to perform the real-time graphics processing that updates the view with every movement the user makes. The hardware providing the best experience right now is the head-tracking virtual reality headset; the Oculus Rift and HTC Vive are the two products that created the recent VR sensation, and both launched as consumer products in 2016. Each plugs into a desktop computer and requires additional sensors placed around the user to translate real-world movement into the virtual environment. The VR hardware acts as both the screen and the input device (keyboard/mouse): movements of the user’s head are translated into input commands for the virtual environment.

The virtual environment must respond instantaneously to the actions of the viewer to create a convincing reality. Without the recent development of graphics technology and ‘gaming engines’, we would have to wait hours for CAD software to render a single realistic image. While not a super-realistic representation, what the gaming engines lack in detail they make up for with speed: real-time speed. Using our regular design software, we are now able to plug in a gaming engine and see our digital models in a life-like view while moving through them, working on a design or trying different options.

The graphic output required to present an immersive VR experience is beyond anything regular video demands, and it all comes down to the frame rate. A video can typically look seamless at 24 frames per second, but when you introduce head tracking (the viewpoint of the scene moving in response to the user’s movements) the frame-rate demand skyrockets. A typical video feed at 24 frames per second strobes as you turn your head, leaving gaps of darkness in the path of your glance. This is referred to as ‘latency’; not only does latency disrupt the immersion of the scene, it is also a cause of motion sickness. The output recommended for a realistic experience with a motion-tracking VR headset is 90 frames per second (FPS), and that is per eye, as a unique image is sent to each eye. Compared to the movie industry standard of 24, a quality VR experience asks the PC to output a total of 180 FPS, 7.5 times as many images as a movie presents its viewers. Even The Hobbit, which created fanfare with its “High Frame Rate” of 48 FPS, is not even a third of this frequency. With that perspective, it is not surprising you need a very powerful PC to produce this graphic output.
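For readers who like to see the arithmetic laid out, the comparison above can be checked in a few lines of code. The figures are the illustrative numbers from this article, not the specification of any particular headset:

```python
# Frame-rate arithmetic from the paragraph above.
# All figures are illustrative, taken from the text, not hardware specs.

movie_fps = 24        # cinema standard frame rate
vr_fps_per_eye = 90   # recommended rate for a head-tracking VR headset
eyes = 2              # a unique image is rendered for each eye
hobbit_hfr = 48       # The Hobbit's "High Frame Rate"

total_vr_fps = vr_fps_per_eye * eyes        # total images per second
ratio_to_movie = total_vr_fps / movie_fps   # multiple of the cinema rate
frame_budget_ms = 1000 / vr_fps_per_eye     # time available to draw one frame

print(total_vr_fps)               # 180
print(ratio_to_movie)             # 7.5
print(hobbit_hfr / total_vr_fps)  # ~0.27, i.e. less than a third
print(round(frame_budget_ms, 1))  # 11.1 ms per frame
```

The frame budget is the punchline: at 90 FPS the PC has roughly 11 milliseconds to redraw the entire scene for each eye, every single frame.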

Such a setup is not very portable. Although we did take the whole kit to Brompton Primary School for a media event (as pictured), it is not something you would want to assemble for every meeting. At a minimum it required: five power outlets, the desktop PC, the PC monitor, two independent sensors mounted on tripods and plugged in, and the headset link box. We also set up a projector so the students not wearing the headset could see what was happening in the virtual environment.

Despite all of the setup, bulk and cables between everything, including from the user’s headset to the PC, we have still found the experience and comprehension of a design to be unequalled by any other presentation method. There is also a spectrum of ways we can share these experiences, with varying dependence on technology.

Stay tuned for part two to hear what else we have been up to!