
VIRTUAL REALITY


THE LA CASITA
VIRTUAL TOUR

Click the link below for the full experience!

Loading might take a minute. Please wait until it finishes!

STEP 1
BUILDING THE MODEL 
WITH PHOTOGRAMMETRY AND BLENDER


TAKING 3000 PHOTOS

The model of the museum was constructed with photogrammetry in Reality Capture: the process of extracting 3D information from photographs and stitching them together into a virtual 3D model. The photos have to be taken in a specific manner so that the software can recognize overlapping views of an object, structure, or space. It took far more photos than expected to create a detailed model of the 800-square-foot section. Using a Sony A6400 mirrorless camera with a ring flash, I took photos at five different height levels (head, chest, waist, knee, and floor).
The following example showcases a set of photos that worked for the model.


ALIGNING PHOTOS IN REALITY CAPTURE

After importing all my photos into Reality Capture, the next step was to align them according to how much they overlap, eventually producing the point cloud you see on the left. Every white dot in the model is a camera position from which I took a photo. Finally, I textured the model and exported it as an OBJ file.


TEXTURING AND SCULPTING IN BLENDER

The original model consisted of 100k triangles, which was a bit too large and detailed to render smoothly, so I simplified it down to 90k and trimmed away redundant geometry that was slowing down rendering in Blender. The model looked good at this point, but there were still holes in the walls, ceilings, and floors that needed patching. With Blender's sculpting tools, I was able to fill in the majority of the holes and smooth the surfaces that were badly reconstructed.

STEP 2
CODING WEBVR 
WITH A-FRAME
(HTML AND JAVASCRIPT)


CODING ASSETS AND INTERACTION
IN A-FRAME

With digital artifacts and artworks acquired from the museum in advance, I then moved on to matching them to their places in the model. Among the inventory, some pictures and artifacts spoke for themselves, but most struggled to form a narrative without further explanation. Therefore, to create an explorable and educational experience, I made the artifacts interactive: clicking an artifact on the wall pops up a text box or a video that briefly introduces the exhibit. This way, viewers can explore the space in any order while staying informed. Lastly, I chose A-Frame as the platform to finalize the virtual tour, using simple JavaScript to code the scene and HTML to embed the tour in a working webpage.
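As a minimal sketch of one such interactive artifact, the markup below toggles a hidden caption on click. The IDs, position, and caption text are placeholders, and it assumes the community aframe-event-set-component is loaded alongside A-Frame:

```html
<!-- Hypothetical artifact; IDs, positions, and the caption are placeholders. -->
<a-image id="artifact1" src="#artifact1-texture" class="clickable"
         position="-3.2 1.6 -10.5"
         event-set__show="_event: click; _target: #artifact1-caption; visible: true">
</a-image>

<!-- The caption starts hidden and pops out when the artifact is clicked. -->
<a-text id="artifact1-caption" visible="false"
        value="Sample caption introducing this exhibit."
        position="-3.2 2.2 -10.4" align="center" color="#FFFFFF">
</a-text>
```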


ADDING LIGHTS, ENVIRONMENT, AND CURSOR

Another crucial element of an immersive experience is the simulation of a real environment. I added multiple light sources to the scene to simulate the indoor lighting of the museum, as well as a dome-shaped sky outside the model to simulate daylight. These additions help visitors feel a sense of place and time, as if they were actually at the museum.
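A minimal sketch of that lighting setup might look like the following; the colors, positions, and intensities here are illustrative, not the exact values used in the tour:

```html
<a-scene>
  <!-- Dome-shaped sky standing in for daylight outside the model -->
  <a-sky color="#87CEEB"></a-sky>

  <!-- Soft ambient fill so no corner of the museum is fully dark -->
  <a-light type="ambient" color="#BBBBBB" intensity="0.6"></a-light>

  <!-- Point lights simulating the museum's indoor fixtures -->
  <a-light type="point" position="0 4 -8" intensity="0.8"></a-light>
  <a-light type="point" position="-5 4 -12" intensity="0.8"></a-light>
</a-scene>
```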

Check out the code for a button in the model!

<a-box static-body position="0 0 0" width="100" depth="100" height="1" color="#0000ff" visible="true"></a-box>
<a-box position="-4.436 4.620 -11.456" rotation="0 0 0" depth="0.1" width="0.1" height="0.1" color="#AA0000"
       event-set__enter="_event: mouseenter; material.color: #FF0000"
       event-set__leave="_event: mouseleave; material.color: #AA0000"
       event-set__tarcol="_event: click; _target: #volunteer1; visible: true"
       class="clickable"></a-box>

TROUBLESHOOTING


SURFACES WITH REFLECTIONS ARE HARD TO CAPTURE

One obstacle I ran into during photogrammetry is that windows, glass, and anything else that reflects light are hard for Reality Capture to reconstruct. The section of the museum I worked on had five large glass panels at the entrance that I had a really hard time capturing. To solve this, I covered the panels with giant sheets of paper so that the software would recognize them as surfaces rather than reflections.


COLLISION OR CURSOR?

A compromise had to be made when I realized that the originally intended character collision conflicted with the cursor feature. Originally, I coded collision into the models and added physics so that visitors would stay within boundaries while exploring the museum. Although I hoped the barriers and walls would make visitors feel their presence in the virtual space, the cursor feature allowed much more interaction between the scene and the visitors. The immersive experience of interacting with artworks and videos outweighed the impact of object collision.
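The cursor side of that trade-off can be sketched with a camera rig like the one below, where the raycaster is restricted to entities tagged `clickable`; the position and control components are illustrative defaults, not the tour's exact setup:

```html
<!-- Camera rig with a gaze cursor that only intersects .clickable entities -->
<a-entity camera look-controls wasd-controls position="0 1.6 0">
  <a-cursor raycaster="objects: .clickable"></a-cursor>
</a-entity>
```

Scoping the raycaster this way keeps the cursor from firing on walls and floors, so only the artifacts respond to clicks.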

REFLECTION

I’m an amateur at coding and modeling, so half of my time on the project was spent learning the languages and figuring out the functions of Blender and Reality Capture. I even contacted professionals to ask about some of the difficulties I met along the way. It was a tough process, but I learned the basics of JavaScript and HTML and had a taste of game development. With more coding practice in the future, I hope to add more interesting features to enhance the immersive experience.

The La Casita Virtual Tour project deepened my understanding of interactivity and immersive experience. I would love to continue experimenting with emerging media platforms such as VR to study their potential in game design projects.
