Photogrammetry with Meshroom

Photogrammetry is a method of transforming physical objects into three-dimensional digital models that can be edited in 3D software. Digitization of this kind has traditionally relied on specialized devices called 3D scanners, which come in two main types: optical and laser.
Optical scanners typically use one or more digital cameras together with special lighting that evenly illuminates the object during scanning, and a 3D model is built from the captured images. Laser scanners work differently: they emit multiple laser beams and measure the time it takes each beam to bounce back from the object. Using this data, along with information from position sensors, the scanner calculates the distance to each point on the object. The result is a “point cloud” that forms the basis of the 3D model.
Point cloud

To build the wireframe of a future object, the system needs to know the coordinates of each vertex in three-dimensional space. This set of vertices is called a point cloud. The more vertices there are, the more detailed the object will be. Creating a point cloud is the first and one of the most crucial steps in recreating a 3D model from photographs.
It’s important to note that each vertex in the point cloud is initially unconnected to the others. This allows for easy filtering: keeping the necessary points and removing the rest before starting to reconstruct the object’s mesh.
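Because the points are independent, filtering really is just a matter of discarding coordinates. A minimal sketch in Python (using NumPy, with purely illustrative bounding-box values):

```python
import numpy as np

# A point cloud is just an N x 3 array of XYZ coordinates.
points = np.random.rand(100_000, 3) * 10.0  # stand-in for a real scan

# Keep only the points inside a bounding box around the object of interest;
# everything else (background, stray noise) is discarded.
lo = np.array([2.0, 2.0, 0.0])
hi = np.array([8.0, 8.0, 6.0])
mask = np.all((points >= lo) & (points <= hi), axis=1)
filtered = points[mask]
print(filtered.shape)  # about 21% of the points survive with these bounds
```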
Mesh objects

A mesh object is a type of 3D model built from triangular geometric primitives; such models are often referred to as meshes or polymeshes. Once the object’s points have been formed, the application can compose triangular primitives from them on its own. By connecting these primitives, it’s possible to create a 3D model of almost any shape. At this stage, the model lacks color and remains unpainted.
The subsequent texturing stage addresses this issue.
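Real reconstruction tools use far more sophisticated surface-reconstruction algorithms, but the idea of turning unconnected points into triangles can be illustrated with a convex hull (a simplified stand-in, using SciPy):

```python
import numpy as np
from scipy.spatial import ConvexHull

# Unconnected vertices: a random blob stands in for a scanned point cloud.
points = np.random.rand(500, 3)

# ConvexHull produces triangular facets: each row of `simplices` holds the
# indices of the three vertices forming one triangle of the outer surface.
hull = ConvexHull(points)
print(len(hull.simplices), "triangles")
print(hull.simplices[:3])  # e.g. [[ 12 431  87] ...]
```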
Texturing

In the final stage, the application stretches the image texture extracted from the photos onto the prepared mesh object. The quality and resolution of the photos play a key role here: if they are low, the final result will not look its best. But if a sufficient number of good-quality shots were taken, the output is a fully ready-to-use 3D model of a real object. Below are some useful tips on preparing the source photos.
Camera settings
To avoid disappointment with your first attempts at creating a 3D model from photographs, consider these simple basic rules. Each rule will help prevent issues that typically arise during the mesh object creation stage.
First, don’t rely on your digital camera’s automatic settings. Modern cameras try to balance four key parameters on their own:
- ISO,
- white balance,
- shutter speed,
- aperture.
In automatic mode, even slight changes in external conditions can cause these settings to vary between frames. These variations can lead to noticeable inconsistencies during the texturing stage.
To maintain consistent parameters across frames, use Manual mode (M). The aperture is the crucial setting here. Aim to stop it down almost all the way: the smaller the aperture, the greater the depth of field. However, avoid extreme values, where diffraction starts to soften the image. If your lens stops down to f/22, you’ll get good results using values between f/11 and f/20.

Left f/11, right f/22
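To put rough numbers on the depth-of-field effect, here is a minimal sketch of the standard hyperfocal-distance formula (the 50 mm focal length and the 0.03 mm circle of confusion are illustrative assumptions):

```python
def hyperfocal_m(focal_mm: float, f_number: float, coc_mm: float = 0.03) -> float:
    """Hyperfocal distance in metres: H = f^2 / (N * c) + f."""
    return (focal_mm ** 2 / (f_number * coc_mm) + focal_mm) / 1000

# The smaller the aperture, the closer the hyperfocal point,
# i.e. the deeper the zone of acceptable sharpness:
print(hyperfocal_m(50, 4))   # ~20.9 m
print(hyperfocal_m(50, 11))  # ~7.6 m
```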
Closing the aperture, however, creates another problem: insufficient light. This can be addressed in two ways: by increasing the ISO sensitivity or by lengthening the shutter speed. Both methods affect the final result, albeit differently. Raising the ISO to 6400 introduces digital noise into the image, so it’s best to use the lowest possible values. For near-ideal results, set the ISO to 100. This, however, means the issue of insufficient lighting persists:

Left ISO 100, right ISO 6400
The most effective way to increase the light reaching the camera sensor in low-light conditions is to lengthen the shutter speed. The longer the shutter remains open, the more photons hit the sensor, resulting in better image quality. This approach presents its own challenge, though: without a tripod, a shutter speed of 1/50 second or longer can blur the image. Using a tripod eliminates this problem.
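To see how aperture, ISO, and shutter speed trade off against each other, here is a minimal sketch based on the standard exposure relation EV = log2(N² / t) − log2(ISO / 100); all sample values are illustrative:

```python
def equivalent_shutter(t_ref, f_ref, f_new, iso_ref=100, iso_new=100):
    """Shutter speed that keeps exposure constant after changing aperture/ISO."""
    return t_ref * (f_new / f_ref) ** 2 * (iso_ref / iso_new)

# Stopping down from f/4 to f/11 at ISO 100 needs about 8x the exposure time:
print(equivalent_shutter(1 / 100, 4, 11))  # ~0.076 s, i.e. roughly 1/13 s
```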
White balance is the final crucial parameter. It’s important to disable the automatic setting and choose either a preset profile (such as “Sunny day”) or a custom value in Kelvin. For instance, 5200K is a common setting. Lower values shift the hue towards yellow, while higher values lean towards blue. To avoid time-consuming color corrections in post-processing, use the same white balance profile for all photos in a series.

WB profiles. Left “Sunny day”, right “Auto”
In summary, to capture high-quality photos for photogrammetry:
- Use a tripod when there is insufficient light.
- Close the aperture nearly to its minimum.
- Set the ISO to its minimum value.
- Choose a shutter speed that gives you the desired result (or use your camera’s built-in exposure meter).
- Use the same white balance preset.
Taking photos
Let’s discuss how many photos to take and from which angles. The type of object and its background significantly influence the final result. Objects without shiny, transparent, or reflective surfaces are ideal for photogrammetry; in practice, elements like windows and glass often require correction in a 3D editor later. The general shooting technique, however, remains the same.
For small objects placed on a surface, imagine a sphere around the object. Take photos as if your camera were circling the object three times: once from below, once at mid-height, and once from above.

It’s crucial that the object occupies at least half, and preferably three-quarters, of each frame. Instead of using zoom, try to get physically closer to the object: when building the point cloud, the software needs as many pixels of the object as possible.
When shooting, remember that the software matches overlapping frames against each other to reconstruct correct geometry. Make it a rule to take at least three frames from each angle: once you’ve centered the object in the frame, mentally divide it vertically into three equal parts and take three pictures, each focusing on one-third of the object. This provides the overlap the application needs to accurately calculate each point’s location in 3D space. After photographing the object from all possible sides and angles, you can start preparing the software.
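As a rough, purely illustrative estimate of how quickly the shot count grows with this technique:

```python
rings = 3                # below, mid-height, above
step_deg = 15            # angular step between positions on a ring
frames_per_position = 3  # left / center / right thirds for overlap

positions_per_ring = 360 // step_deg          # 24 positions
total = rings * positions_per_ring * frames_per_position
print(total)  # 216 photos for a full pass around the object
```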
Install Meshroom
Meshroom is a free, cross-platform application that sequentially performs all processing stages, utilizing CPU and GPU resources. While it can run on a standard home computer, each stage may be time-consuming. For large-scale projects involving 3D reconstruction of numerous objects, such as creating an impressive 3D scene, renting a dedicated GPU server might be a practical solution.
Let’s consider a LeaderGPU server with the following configuration: 2 x NVIDIA® RTX™ 3090, 2 x Intel® Xeon® Silver 4210 (3.20 GHz), 128 GB RAM. We’ll use Windows Server 2022 as the operating system. Getting Meshroom running takes only a few steps:
Visit the project’s official website to download Meshroom. Unpack the resulting archive to find a ready-to-use application that doesn’t require additional installation. Launch Meshroom.exe to begin.
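Meshroom also ships with a command-line tool, meshroom_batch, which runs the default pipeline without the GUI and can be convenient on a rented server. A minimal sketch of invoking it from Python (all paths are hypothetical, and flag names may differ between Meshroom versions):

```python
import subprocess

# Run the default photogrammetry pipeline headlessly.
subprocess.run([
    r"D:\Meshroom\meshroom_batch.exe",          # hypothetical install location
    "--input", r"D:\datasets\statue_photos",    # hypothetical photo folder
    "--output", r"D:\datasets\statue_model",    # hypothetical output folder
], check=True)
```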
Upload images
The main window of the application is divided into two parts: upper and lower. The upper section contains the Image Gallery, Image Viewer, and 3D Viewer; the lower section houses the Graph Editor and Task Manager. To start, drag and drop your captured photos into the designated area. Both compressed (for example, JPG) and RAW file formats are supported; RAW files are recommended because they retain significantly more data for each frame.

Note that a standard pipeline is already in place by default, displayed schematically in the Graph Editor. This is one of the most important controls: it lets you configure every aspect of image processing at each stage. You can run any stage manually by right-clicking it and selecting Compute from the drop-down menu.
For your first run, though, you can simply click the green Start button and let the application do everything for you. It will prompt you to save the project so that you don’t accidentally lose the results of the computation. Click Save, specify a name and directory, and save the project:

Next, the application transfers all processing stages from the Graph Editor to the Task Manager, which handles their execution in a specific order. To check the status of each stage, select the corresponding block in the Graph Editor and click the Log button in the lower right corner of the screen. You can also see in real time which stage is currently being processed:

On the right side, you can see the point cloud you’ve built. The final result, generated using the standard pipeline, is available in the directory:
[Your_Project_Path]\MeshroomCache\Texturing\[Random_Symbols]\texturedMesh.obj
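Because the [Random_Symbols] part of the path is a cache hash that changes between runs, a short script can locate the result for you (the project path below is hypothetical):

```python
from pathlib import Path

project = Path(r"D:\projects\my_scan")  # hypothetical project directory

# Each node's output lives under a hash-named cache subdirectory.
for mesh in project.glob("MeshroomCache/Texturing/*/texturedMesh.obj"):
    print(mesh)
```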
Of course, if you set the output path in the final node of the pipeline beforehand, the object will end up at the path you specified. You can then import it into any 3D editor to fix surfaces, add light sources, and apply other effects before rendering.
Integration
While the initial result may look impressive, it often requires refinement in a 3D editor. Meshroom simplifies this process by letting you import not just the model but also the point cloud and camera positions into third-party editors such as Houdini or Blender. In the following sections we’ll explore how to do this.
Houdini
In fact, Meshroom is a user-friendly interface to the AliceVision engine, which handles all the computation; the interface provides the corresponding pipeline and task manager. If you use Houdini, you can create your own pipeline directly within the application and use it alongside other tools, eliminating the need to launch Meshroom separately.
To get started, it’s best to download and install a dedicated launcher that will manage Houdini updates and plugins. Next, add the SideFX Labs plugin, which offers numerous additional tools, including specific nodes for AliceVision. To do this, click the + button, then select Shelves:

Scroll down the list and select SideFX Labs, then click the Update Toolset button:

To install the plugin, click the Start Launcher button, navigate to the Labs/Packages section in the left-hand menu, and select Install packages. This opens a window where you can choose packages to install:

Choose the Production Build for your version of Houdini and click Install. Afterward, restart the application to ensure the new effect icons appear at the top:

It’s worth noting that you won’t find any mention of AliceVision or Meshroom here. This is because the corresponding nodes only function within the geometry context. To verify this, click the + icon, then select New Pane Tab Type, and choose Network View:

Press the Tab key and add a Geometry node:

Double-click to open the created node and type av on your keyboard. The system will instantly display a list of available nodes whose names begin with Labs AV. These nodes let you control the AliceVision engine and integrate it into your own pipelines:

To create a proper pipeline, refer to the official documentation for the plugin. Additionally, consider adding the AliceVision directory to the environment variables in the houdini.env file. For a standard installation via the launcher, this file is typically located in C:\Users\Administrator\Documents\houdini20.5\
Open the houdini.env file with any text editor and add the following line:
ALICEVISION_PATH = [path to alicevision directory in Meshroom folder]
For example, if you installed Meshroom in the root directory of the D: drive, your path might look like this:
ALICEVISION_PATH = D:\Meshroom\aliceVision
Save the file, then restart the Houdini application.
Blender
For Blender users, we recommend the Meshroom2Blender plugin. While it functions differently from the Houdini plugin, it lets you bring the point clouds and camera positions calculated by Meshroom into Blender. To access the plugin code, open the link in your browser:
https://raw.githubusercontent.com/tibicen/meshroom2blender/master/view3d_point_cloud_visualizer.py
Save the code as view3d_point_cloud_visualizer.py in a convenient directory. Next, open Blender and navigate to Edit - Preferences. From there, select the Add-ons tab:

Click the down arrow and select Install from Disk:

In the newly opened window, navigate to the directory where you saved the plugin. Select the plugin file and click the Install from Disk button:

The plugin is now installed; it’s recommended to restart the application. After restarting, you’ll see the Point Cloud Visualizer item in the 3D view. The plugin requires you to specify the path to a file with the .ply extension:

By default, Meshroom doesn’t generate this type of file. To create it, open the pipeline and add a ConvertSfMFormat node. Use the SfMData output of the StructureFromMotion node as its input, and specify the Images Folder of the Texturing node for its output.

The final step is to specify the format. Click on SfM File Format in the ConvertSfMFormat node and select ply from the drop-down list:

Right click on the created node and select Compute:

Once the process is complete, you’ll find the required file in the directory:
[Your_Project_Path]\MeshroomCache\ConvertSfMFormat\[Random_Symbols]\sfm.ply
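Before importing it, you can quickly sanity-check the exported cloud outside Blender, for example with the open3d package (assumptions: it is installed via pip install open3d, and "cache_id" stands in for the hash-named subdirectory):

```python
import open3d as o3d

# Load and inspect the point cloud exported by ConvertSfMFormat.
pcd = o3d.io.read_point_cloud(
    r"D:\projects\my_scan\MeshroomCache\ConvertSfMFormat\cache_id\sfm.ply"
)
print(pcd)                                # e.g. "PointCloud with 123456 points."
o3d.visualization.draw_geometries([pcd])  # opens an interactive viewer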
You can load it into Blender in two ways: through the aforementioned plugin or via the standard import process File - Import - Stanford PLY (.ply):

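The standard import can also be scripted from Blender’s Python console; note that the operator name depends on the Blender version (a sketch, with a hypothetical path):

```python
import bpy

# Hypothetical path; "cache_id" stands in for the hash-named subdirectory.
ply_path = r"D:\projects\my_scan\MeshroomCache\ConvertSfMFormat\cache_id\sfm.ply"

# Blender 3.6+ / 4.x:
bpy.ops.wm.ply_import(filepath=ply_path)

# On older releases the operator was:
# bpy.ops.import_mesh.ply(filepath=ply_path)
```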
For more information on using this plugin, see the project repository or a specialized web resource.
Conclusion
Photogrammetry is a vast field of knowledge, and here we’ve covered only some basic techniques for converting 2D images into a 3D model. These techniques are used in many industries, from architecture to computer game development.
Having gained your first experience of shooting a dataset and turning it into a 3D model, you’ll be able to improve your skills and bring physical objects into virtual 3D space. And LeaderGPU can help with the computing power, reducing calculation time and freeing up your workstation for other, often higher-priority tasks.