Conclusion/Proposals

Product

This site documents the development cycle of the Volkspark project.

Sections

Choosing the engine

Which engine is the best solution for creating an immersive experience?

Recreating Monuments

How can photogrammetry be used to recreate existing monuments?

Using SpeedTree

How can foliage be created with SpeedTree?

Choosing the engine

The development begins with choosing the right engine for the job, as an engine provides the foundation for building real-time experiences. To ensure a smooth development phase, a decision had to be made between the Unreal Engine and Unity, the two most approachable, publicly available game engines for larger development cycles.

 

Unity:

Unity was created in 2005 by Nicholas Francis, Joachim Ante, and David Helgason, three developers based in Denmark who wanted to make game development available to a broader audience. While it was initially developed only for Macintosh, a Windows version was released in 2009 (Haas, 2014).

Over the years, Unity established itself as a game engine that is often used for small-scale projects, owing to its user-friendly UI and big community (Dickson, 2017). The engine mainly supports C# as its scripting language.

Unreal Engine:

The Unreal Engine was developed in 1994 by Tim Sweeney, with a public release in 1998 (Sweeney, 2009). It was developed for the video game “Unreal”, which was released alongside the engine. Similar to Unity, the Unreal Engine has seen multiple major releases, with the newest version, “Unreal Engine 5”, released in April 2022 (Rein, 2012; Epic Games, 2022).

The software uses so-called “Blueprints” as a visual scripting tool. This allows developers with little to no programming experience to create complex functionality through a node-based system that is built on top of the engine's C++ codebase (Epic Games, n.d.).
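
To illustrate the relationship between Blueprints and C++: the minimal sketch below shows how a C++ function can be exposed as a Blueprint node. The class and function names (AMyActor, ComputeDamage) are hypothetical examples, not code from this project.

```cpp
// Minimal sketch (hypothetical names): exposing a C++ function to Blueprints.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "MyActor.generated.h"

UCLASS()
class AMyActor : public AActor
{
    GENERATED_BODY()

public:
    // UFUNCTION(BlueprintCallable) makes this function appear as a node
    // in the Blueprint graph, usable without writing any C++.
    UFUNCTION(BlueprintCallable, Category = "Volkspark")
    float ComputeDamage(float BaseDamage, float Multiplier)
    {
        return BaseDamage * Multiplier;
    }
};
```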

 
Comparing the Engines

While the two engines had some major differences during their beginnings, the line between them has started to blur. Both offer a great variety of options for creating a game; the major differences boil down to the following:

 
EngineDifferences.PNG

The deciding factors for the choice of engine were, in this case, the visual scripting possibilities and, first and foremost, the graphical fidelity. To achieve an immersive experience based on an existing, real-life property, it is important to capture a realistic-looking art style. Since Unity does not support visual scripting out of the box and has more difficulty displaying realistic graphics, the decision was made to use the Unreal Engine.

 

Unreal Engine 5

Halfway through the development of the experience, Epic Games released Unreal Engine 5 to the public. While an early access version had been released in 2021 (Epic Games, 2021), the development of the Volkspark environment started on version 4, since version 5 was still unstable and full of bugs. Features like Nanite and Lumen are advantageous to the development process, which is why the decision was made to convert the project.

UnrealEngine5.PNG

Recreating Monuments with Photogrammetry

 

Creative Process

Photogrammetry is a technique that, as previously mentioned, was created as a tool to map out different environments. Over roughly 150 years, photogrammetry evolved into a distinctive method for scanning real objects and creating 3D models of them. To recreate the monuments of the Volkspark as accurately as possible, the decision was made to use photogrammetry, since it captures the most detail. The creative process of recreating an object through photogrammetry is complex and is laid out step by step below:

 
PhotogrammetrySteps.PNG

Creating the Photographs

To capture the best possible photos for a 3D model, multiple factors have to be taken into consideration. It is important to know what kind of camera can be used and which properties of a photo are decisive:

  • Resolution

The resolution is the width and height of a single photograph in pixels, i.e. the number of individual points that make up the full picture.

  • Pixel Quality/Effective Resolution

Not only is a large number of pixels important for picture quality, but also a high quality of the pixels themselves. Factors like grain, contrast, and other aberrations can greatly improve or degrade the characteristics of an image.

  • Raw Data

Every picture taken by a camera is compressed, depending on the file format the photo is saved in. The general idea is to apply as little compression as possible so that the quality stays as sharp as possible. Ideally, the file is saved as a raw file, which involves no compression whatsoever but takes a great toll on the available storage space. A rough sense of the numbers involved is given in the sketch after this list.

(Alexandrov, 2021)
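
As a rough illustration of how resolution and compression translate into data, the sketch below computes the pixel count and an approximate uncompressed size for the resolution used later in this project (4000 x 2672). The 8-bit, 3-channel assumption is illustrative; real raw formats differ in bit depth and layout.

```cpp
// Rough sketch: pixel count and approximate uncompressed size of one photo.
// Assumes 3 color channels at 8 bits each; real raw formats vary.
#include <cstdio>

int main()
{
    const long width  = 4000;            // resolution used in this project
    const long height = 2672;
    const long pixels = width * height;  // ~10.7 million pixels
    const double megabytes = pixels * 3.0 / (1024.0 * 1024.0);
    std::printf("%ld pixels (~%.1f MP), ~%.0f MB uncompressed\n",
                pixels, pixels / 1e6, megabytes);
    return 0;
}
```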

 

The capturing process of the photos follows almost the same procedure every time. The best weather for taking photographs is an evenly overcast sky, which avoids hard shadows on the object. This makes it easier for the software used later to reconstruct the model evenly, and it avoids potential problems down the road when a different light source is used to light the model. A ColorChecker chart can help with getting the correct colors of the object later during color correction.

colorchecker-classic_01.png

The method of taking the photos is to circle the object in small steps and take as many images as possible, which allows the 3D software to reconstruct more detail. While circling, it is important to also vary the height of the camera so that more angles are available later; a sketch of such a capture pattern follows below the figure.

Structure-from-Motion-SfM-photogrammetric-principle-Source-Theia-sfmorg-2016.png
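
The capture pattern itself can be sketched as follows: camera positions are spaced in small angular steps on rings around the object, at several heights. The radius, step count, and heights below are illustrative assumptions; three rings of 30 shots happen to match the roughly 90 pictures per statue taken in this project.

```cpp
// Minimal sketch: evenly spaced camera positions on rings around an object.
// All numbers are illustrative, not the project's actual capture plan.
#include <cmath>
#include <cstdio>

int main()
{
    const double kPi = 3.14159265358979323846;
    const double radius = 2.0;                 // distance to the object (m)
    const int shotsPerRing = 30;               // small angular steps
    const double heights[] = {0.5, 1.2, 1.9};  // camera heights (m)

    for (double h : heights)                   // 3 rings x 30 shots = 90 photos
        for (int i = 0; i < shotsPerRing; ++i)
        {
            const double a = 2.0 * kPi * i / shotsPerRing;
            std::printf("x=%6.2f  y=%6.2f  z=%4.2f\n",
                        radius * std::cos(a), radius * std::sin(a), h);
        }
    return 0;
}
```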

For this project, the first tests were done with a smartphone that had the following specifications:

  • Samsung Galaxy S6

32 GB

Camera (16 Megapixels, f/1.9, 28mm, 1/2.6 inch sensor, 1.12 µm, Autofocus, Optical Image Stabilizer)

(Ho, 2015)

After testing the camera on a PlayStation 5 controller, it was clear that the quality of the smartphone was not sufficient to create a complete model. The camera was therefore swapped for a better model made by Panasonic:

  • Panasonic Lumix DMC-FZ300

  • 12 Megapixels, f/2.8, 4.5 – 108mm, 1/2.3 inch sensor

Each picture was taken with these settings:

  • 4000 x 2672 pixels

  • 180 dpi

  • sRGB

  • f/5.6

  • 1/60 sec

  • ISO – 100

  • 4 mm

  • .jpg

  • ca. 90 pictures per statue

P1020366.JPG

Photogrammetry Software

After all the photos were taken, it was time to decide which photogrammetry software to use. The choice came down to Meshroom and 3DF Zephyr, as both are (at least partially) free to use.

Meshroom

Meshroom is an open-source project, initiated by a joint research group of the École des Ponts ParisTech and the Centre Scientifique, together with Mikros Image (Griwodz et al., 2021). The software has been in use since 2014 and is now one of the most popular tools for photogrammetry. Meshroom uses a node-based system, which allows changes to the workflow at any time without having to restart the whole process from scratch.

3DF Zephyr

3DF Zephyr was released in 2014 and created completely in-house by 3DFLOW. Multiple packages were released over time, though two of them were merged with the 5.0 release (3DFLOW, 2020). The free version of the software is limited to 50 pictures per 3D model. The software is also widely available through additional platforms like Steam.

Comparison

Similar to the engine comparison, the two photogrammetry programs are very comparable; both offer the functions necessary for creating models. The tipping point for this project was the 50-picture limitation of 3DF Zephyr. A comparison of the created meshes gave a clear indication of the better result: both models were created with the same settings, the only difference being the number of photos.

Comparison.png

Meshroom In-Depth

After all the photos had been shot, Meshroom could go to work. As previously mentioned, the first photogrammetry tests were done with a PlayStation controller, and the software used to generate the mesh was Meshroom.

MeshroomComplete.PNG

Only through the efforts of the better camera could improvements be seen. The process of creating a textured 3D model is relatively straightforward and can be explained step by step.

The first part of getting a new mesh is to load all previously taken pictures into Meshroom. This can be done via drag-and-drop or by opening them through the “Import Images” option. The photos can then be viewed in the image viewer on the left.

ImageViewer.PNG

Meshroom operates on a node-based structure, which makes changing values very easy. Without any user input, the software automatically creates a string of nodes that can directly be used to create a 3D model. To understand Meshroom further, a closer look at the individual nodes has to be taken. Without any changes, the node string looks like this:

Nodes2.PNG

  • CameraInit

CameraInit loads the metadata and sensor data of the images to prepare them for the FeatureExtraction step.

 

  • FeatureExtraction

This node extracts distinctive groups of pixels (features). This is done so that the software can deal with viewpoint changes between images.

 

  • ImageMatching

The node looks for images that Meshroom can match together, so that it can find specific areas of the scene.

 

  • FeatureMatching

Meshroom now matches photos using the previously extracted features and sorts out pictures that do not match the initially chosen ones.

 

  • StructureFromMotion

The algorithm creates 3D points out of the images, forming a point cloud.

 

  • PrepareDenseScene

The node converts the input images into .exr files in preparation for the depth map computation.

 

  • DepthMap

DepthMap retrieves a depth value for every pixel, as captured from all camera views.

 

  • DepthMapFilter

To prevent the depth maps from conflicting with each other, this node isolates occluded areas so that depth consistency can be enforced.

 

  • Meshing

The 3D model takes shape, built from the depth maps and the point cloud.

 

  • MeshFiltering

This node filters out unwanted elements.

 

  • Texturing

The UV mapping and the texture map get created.

(Meshroom Contributors, 2020)

Nodes.PNG

While every node has options that can be adjusted, for the purpose of recreating the monuments only “Meshing” and “Texturing” need changes. The “Meshing” node has the option of creating a bounding box, which condenses the reconstruction and only creates a model of the content inside the cube. This was necessary due to a lack of hardware power; a conceptual sketch of the idea follows below the figure.

BoundingBox.PNG
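
Conceptually, the bounding box works like the sketch below: only the points that fall inside an axis-aligned box are kept for reconstruction. This is an illustration of the idea, not Meshroom's actual implementation.

```cpp
// Conceptual sketch: keep only the points inside an axis-aligned bounding box.
// Illustration only; not Meshroom's actual implementation.
#include <cstdio>
#include <vector>

struct Point { double x, y, z; };

std::vector<Point> clipToBox(const std::vector<Point>& cloud,
                             const Point& lo, const Point& hi)
{
    std::vector<Point> inside;
    for (const Point& p : cloud)
        if (p.x >= lo.x && p.x <= hi.x &&
            p.y >= lo.y && p.y <= hi.y &&
            p.z >= lo.z && p.z <= hi.z)
            inside.push_back(p);
    return inside;
}

int main()
{
    const std::vector<Point> cloud = {{0, 0, 0}, {5, 5, 5}, {1, 2, 1}};
    const auto kept = clipToBox(cloud, {-2, -2, -2}, {2, 2, 2});
    std::printf("%zu of %zu points kept\n", kept.size(), cloud.size());
    return 0;
}
```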

The “Texturing” node lets the user set the desired texture resolution and creates the color map for the model. In theory, it can also change the UV unwrapping method, but this feature turned out to be currently broken and could not be used, which was verified on multiple computers.

For the UV unwrapping method itself, Meshroom offers three different options: Basic, LSCM, and ABF.

  • Basic: Meshroom's own unwrapping method, usually used for meshes larger than 600k faces. It can create multiple maps for a single model.

  • LSCM: The LSCM (Least Squares Conformal Maps) method was established in 2002 for an improved UV workflow (Lévy et al., 2002) and can be used in Meshroom for meshes under 600k faces (Meshroom Contributors, 2020).

  • ABF: With the ABF (Angle Based Flattening) method, meshes with fewer than 300k polygons can be unwrapped. The process closely resembles the LSCM method, and the two are interchangeable depending on the desired outcome. The method uses a particular parametrization of 3D surfaces that helps with mapping their content onto a 2D surface.

The UV maps that get created are rough and need to be replaced later down the line if adjustments to the mesh have to be made.

TextureMap.png

There is one issue with the generated mesh: its polycount reaches over 1 million, which is not feasible for any kind of real-time experience. That is why one more node was added to the string. “MeshDecimate” does exactly what the name suggests: it creates a low-poly version of the existing mesh, with the polycount set to a desired number. The process works well, and almost no detail is lost; the sketch below gives a sense of the reduction involved.
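
As a quick sanity check on those numbers, the sketch below computes the keep ratio when decimating from the roughly 1 million source polygons to a real-time budget; the 50,000 target is an illustrative assumption, not a fixed rule.

```cpp
// Quick arithmetic: keep ratio when decimating the mesh for real-time use.
// The 50,000-polygon target budget is an illustrative assumption.
#include <cstdio>

int main()
{
    const double sourcePolys = 1000000.0;  // polycount of the generated mesh
    const double targetPolys = 50000.0;    // assumed real-time budget
    std::printf("Keep ratio: %.1f%% of the original polygons\n",
                100.0 * targetPolys / sourcePolys);
    return 0;
}
```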

Meshroom's export function is somewhat unique in that every time a node completes, its result is written directly to the hard drive. This allows Meshroom to resume from the same position after a potential crash, and it also means that the complete process from start to finish does not have to be done in one go.

Fixing the Details

There are often smaller issues with models after they have been created in any kind of photogrammetry software. These range from geometry that sticks out of the mesh unnaturally to floating faces that can be deleted because they are not used. Such issues often occur when the software lacks visual information (e.g. because the object or building is too high). They can usually be avoided with drones that reach high-up places, but since not everyone has access to a drone, these kinds of corrections have to be done manually most of the time. Even with drones, imperfections are common and have to be cleaned up.

For this project, the software chosen for the cleanup was Autodesk Maya, a tool commonly used in 3D modeling that supports the creation of custom UV maps. With Maya, small adjustments to single vertices, faces, or edges can be made. Since Maya supports both modeling and sculpting tools, alterations are easy to make and not time-consuming.

MeshComparison.png

After all changes to the mesh surface have been completed, new UV maps have to be created. This is necessary for the person later working on the texture painting: a texture artist has to paint over the imperfections that were not properly captured by the camera and will have a much easier job with proper UV maps.

With the UV maps finished, the mesh has to be put back into Meshroom so that the software can project the color information onto the new UV layout. This is again done by the “Texturing” node. Normally, “Texturing” gets its input from either the “Meshing” or the “MeshDecimate” node; since the model has been reworked in Maya in the meantime, “Texturing” needs the new file exported from Maya instead. For this to work, Meshroom can be given the folder path of the mesh data instead of the input from the other nodes. It is essential that the mesh is not moved or scaled in Maya under any circumstances, as the projection would otherwise no longer line up.

TexturingNode.png

After the process is finished, the mesh can be properly painted on. Substance Painter provides a stamp tool that makes it easy to paint over imperfections and fix the texture map. This is the final step, after which the mesh can be imported into Unreal Engine 5.

Foliage with SpeedTree

 

Why SpeedTree?

The Volkspark in Enschede is home to many different types of trees. While it would be too time-consuming to recreate every tree from scratch, certain species that reside in the park are important for the overall feeling that the virtual environment should evoke. Foliage generation can be achieved through different methods: manual creation in software like Maya or Blender would be possible, but very time-consuming in comparison. Alternative programs for 3D foliage creation exist, but pale in comparison with SpeedTree due to the options it offers. This was the deciding factor for choosing SpeedTree for this project.

Creating the Base

With SpeedTree, the possibilities for foliage are nearly endless. The tools the software provides can be adjusted in all kinds of ways to create every type of vegetation that exists. Similar to Meshroom and the Unreal Engine, SpeedTree uses a node-based system in which every part of the tree is separated and can be adjusted accordingly.

SpeedTreeNodes.png

The process starts with the trunk. Using the ‘Gen’ tab in the ‘Properties’ window, the number of trunks can be changed; this is going to be important for the branches, but not for the initial trunk. Further changes to the height, width, and other shape-related properties are made in the ‘Spine’ tab. With the trunk set up, the next step is to create the branches. The geometry for the branches can be added directly in the node. It is important to create an evenly spread-out collection of twigs so that the tree looks realistic; additional forces like gravity can be applied in the same tab. To give the tree an accurate look, materials have to be applied to the trunk. This is done in the materials section: SpeedTree creates a new material that can load texture maps from outside the software, since detailed texture generation from scratch is not supported by SpeedTree. After the desired texture maps have been loaded into the material, it can be assigned directly to the trunk.

Properties.png
Twigs

Since the branches have to be filled with leaves, needles, or other vegetation, further elements have to be created. For this example, the choice was made to create a pine tree, so smaller twigs with needles had to be generated. To keep the twigs' details manageable, a second SpeedTree file was created, in which the twigs were built with the same approach as the initial trunk. That way, the twig could branch out and the needles could be adjusted with the desired values. With the values and materials set, the twig can be exported as a texture map. This saves performance, and the twigs can still be adjusted after the maps have been imported back into the other SpeedTree file.

Twig.png

The twig texture maps are reimported through the material editor, where a new material has to be created for them. The editor supports a function that lets the user edit the new mesh, meaning the size of the imported twigs can be adjusted and LODs can be created.

TwigMesh.png

The final step, after the material has been applied to the mesh, is to adjust the values in the ‘Properties’ settings so that the twigs look realistic. Another performance-saving measure is to regulate the number of twigs; in combination with the other settings, it is possible to create the illusion of a dense-looking tree.

This completes the process of creating a tree in SpeedTree.