Funded by UTS and led by Matthew Austin, we performed research with a group of first-year architecture students investigating the value of VR as a tool for decoding architectural drawings. The study was conducted during the first trimester of 2017 at UTS DAB.
Our aim was to assist with one of the greatest hurdles for first-year architecture students: scale, and most critically how scale is understood moving from drawing to reality and vice versa. The study ran across a semester and sat alongside a design studio; however, it was not directly tied to that studio's curriculum and instead performed a supplementary role.
It was critical that these tests were performed with a VR headset providing 6DoF (six degrees of freedom), enabling 'room scale' interaction. The distinction matters because of the looseness of the term VR: a 360 image, even one that is not stereoscopic, can be claimed as VR viewing, yet it gives very little indication of scale. An HTC Vive, however, provides not only a stereoscopic 3D view; one can stand, crouch, lean around and look under objects in VR space, and it even provides hands to help with 'sizing up'. As a by-product, all participants view the space from their own height, which allows for broader conversations about differing scales of people, perhaps to be addressed in a different study.
We hypothesised that VR is capable of leveraging a human's intuition for seeing faults in the ordinary. Simply, that those untrained in architectural drawing convention can identify what is atypical through visual inspection. Our test cases were born through distorting household architectural features and objects, to be interrogated simultaneously in drawing and VR. We had students work in pairs, swapping every 10 minutes, one in VR and one guiding the other via a series of plan 'maps' and cross sections.
Each session consisted of a puzzle to be solved in two hours, each solved by identifying the typical against the atypical. The first round focused on AS 1428.1, otherwise known as the 'real/fake doors' puzzle. To create the puzzle space we constructed a hallway approximately 40 metres long with doors both typical and atypical.
The major variables were door height, width, and handle height, with each door given a unique combination of these variables via the parametric design tool Grasshopper. The final hallway was hosted in Unreal Engine 4 and produced using the built-in VR template for the sake of expediency. At this stage we provided no additional interactions beyond the built-in ability to move at room scale and to teleport.
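As a rough illustration of the setup, the sketch below (written in Python rather than the actual Grasshopper definition) generates one unique combination of height, width and handle height per door by stepping each variable away from a nominal case. The nominal dimensions and deviation steps are assumptions for illustration, not values taken from the study or from AS 1428.1.

```python
# A minimal sketch, assuming doors are varied by stepping each dimension
# away from a nominal case. All figures below are illustrative assumptions.
import itertools
import random

NOMINAL = {"height": 2040, "width": 920, "handle_height": 1000}  # mm, assumed

def door_variants(steps=(-0.15, -0.05, 0.0, 0.05, 0.15)):
    """Yield one parameter set per unique combination of deviations."""
    for dh, dw, dg in itertools.product(steps, repeat=3):
        yield {
            "height": NOMINAL["height"] * (1 + dh),
            "width": NOMINAL["width"] * (1 + dw),
            "handle_height": NOMINAL["handle_height"] * (1 + dg),
        }

doors = list(door_variants())
random.shuffle(doors)  # pseudo-random ordering of doors along the hallway
```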
The first task was orientation. Students were placed at a pseudo-random position along the hall, left only with their partner to help them navigate via the plan provided. Their ability to communicate information from virtual space to written notation is critical not only for the mapping task, but for associating architecture with drawing as a whole.
Colour was used to assist by providing landmarks; though we omitted colour from the printed drawings, it was soon annotated by the participants. Other features, arising from the extremes of the door variables, were also used as bearings. These features were used together to reorient when partners swapped between digital and physical space.
The second task was interrogation. The least successful strategy, and the first used by each pair, had one partner moving from door to door in sequence while the two decided case by case whether each was compliant. Issues arose because each door was compared to the last rather than to the full set, leading to compounding errors. They would question a door nearly identical to one seen several doors earlier because they were not using the measured orthographic drawings to their advantage; rather, they used the drawing only as a record of their choices, not as a tool.
The most successful strategy was also the most intuitive: immediately eliminate all elements that were 'clearly' wrong. The issue is that what is considered obvious is by no means objective, or even describable. Nonetheless, this strategy worked well, cutting in half the set of doors left to scrutinise.
A secondary strategy emerged that relied on both the written and the virtual: classify and cross-reference. The technique was to first classify in virtual reality, then use the measured drawing to match similar elements, accelerating the process. This allowed subjective reasoning to perform at its best as an incidental tool, and the measured drawing to serve as an objective comparison.
A major issue with the first puzzle was the lack of strictly defined compliance. Though students were proficient at identifying irregularities that disqualified the majority of doors, a lot of time was spent doubting their choices on doors that sat on the edge of correct. In later tests we provided a minor buffer (5%) between compliant and non-compliant, which appeared adequate to let intuition and experience act as the major classifier. In reality, many doors in older buildings used as reference may not pass the BCA.
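As a sketch of how such a 5% buffer could be applied when generating test doors, the snippet below nudges any dimension that would land within 5% of its compliance limit out of the ambiguous band, so each door reads as clearly compliant or clearly not. The minimum width limit shown is a placeholder for illustration, not a figure quoted from AS 1428.1 or the BCA.

```python
# Sketch of a 5% buffer around a compliance limit: widths that would fall
# inside the ambiguous band are pushed out of it. The limit is a placeholder.
BUFFER = 0.05
MIN_WIDTH = 850  # mm, assumed placeholder limit

def apply_buffer(width_mm, minimum=MIN_WIDTH, buffer=BUFFER):
    """Push a width out of the ambiguous band around the minimum."""
    low, high = minimum * (1 - buffer), minimum * (1 + buffer)
    if low < width_mm < minimum:    # just under the limit: make it clearly fail
        return low
    if minimum <= width_mm < high:  # just over the limit: make it clearly pass
        return high
    return width_mm
```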
The later puzzles followed suit, both in content and in dominant strategies. 'Real/Fake Stairs' marked the move to continuous space, as students had tended to rely heavily on the ends of the hallway in the first test as anchors. In this task, they were placed in a 5x5 grid of platforms connected by stairs, with the grid repeated in each direction so that no edge could serve as a guide.
As before, we provided black and white orthographic plans, and a few spatial hints. The platforms are arranged at three heights, allowing higher vantage points for orientation, and the horizon always glows towards the north-east. Unfortunately, students did not realise this until we mentioned it, after they had consistently become disoriented. Despite being supplied with both plans and sections, it was only through the provision of a 'north' that they could easily navigate the space.
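A minimal sketch of how such an edgeless field might be laid out, assuming the repetition works by tiling a single 5x5 layout: any world position maps back to one of the 25 cells via modulo, and each cell holds one of the three platform heights. The cell spacing and heights here are assumed values, not measurements from the study.

```python
# Sketch of an edgeless 5x5 field by tiling; spacing and heights are assumed.
GRID = 5
CELL = 6.0                 # metres between platform centres (assumed)
HEIGHTS = [0.0, 1.5, 3.0]  # the three platform heights in metres (assumed)

def cell_for_position(x, y):
    """Map any world position back to its repeating grid cell (0..4, 0..4)."""
    col = int(x // CELL) % GRID
    row = int(y // CELL) % GRID
    return col, row

def platform_height(col, row, layout):
    """Look up the height of the platform in a given cell of the 5x5 layout."""
    return HEIGHTS[layout[row][col]]

# Example: an arbitrary fixed layout of height indices (0, 1 or 2).
layout = [[(r * 2 + c) % 3 for c in range(GRID)] for r in range(GRID)]
col, row = cell_for_position(-13.0, 40.0)
print(col, row, platform_height(col, row, layout))
```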
Some students attempted to measure stairs 'by hand', using the virtual hands provided via the motion controllers as reference. Yet simply looking to see whether the stairs appeared 'off' worked equally well, if not better, with the orthographic drawings used when needed as tools for objective comparison after classification in VR.
Faced with infinite monotony, would a clever poster keep you sane?
Looking to diagnose tables, chairs and computers, we asked our students to question workstations: spaces with which they were very familiar. We provided a series of posters as landmarks, offering a means of orientation for those who could not see over the cubicles. As one's virtual height is indexed to one's height in the real world, shorter participants had to rely on more localised navigation, or jump frequently.
During the trio of puzzles we would not provide answers when asked; however, we did return a series of provocations. The most frequent prompt was 'does it look right to you?', as students would get stuck poring over the drawings to no avail when what was incorrect was clear in VR space. We found that once students were accustomed to what was 'correct' in VR, they could quickly identify in the drawings what was 'clearly' wrong, and only needed to check the less obvious cases.
Overall, we found that at first students could not adequately infer spatial qualities or proportions from scale drawings alone. But through a steady stream of comparative puzzles, the underlying goal of forcing comprehension across mediums was achieved as a by-product.
Unfortunately, due to the prioritisation of studio work over voluntary research, our final puzzle was left unchallenged: a maze solved by stringing together rooms viewed in isolation to locate a virtual wedge of cheese. Instead, we reviewed the students' architectural proposals in VR, which provided its own insight.
Most notably, one student had designed a vista over the shoreline from the first floor of their surf lifesaving club. The only issue was that the student had been designing the vista for a person much taller than themselves. On seeing their design in VR, they reconfigured the building so as not to accidentally exclude themselves.