3D model of the building area is poor with boundary constraints #1769
Comments
When can we solve this problem? Oblique photography modeling is an important way to model urban architecture, and in this era of rapidly developing smart cities this feature has a huge impact on everyone.
I can understand the artifacts at the edge of the model due to boundary constraints, but the artifacts in the middle of the model are strange; a boundary constraint shouldn't affect mesh quality in the middle. @pierotofy For the artifacts at the edge, it could be helpful to use btype 1 in the Poisson reconstruction (I find it sometimes better at interpreting missing areas: it forms a plane instead of a pit), or we may need a filter that removes downward-facing mesh faces based on point density, etc. I recently started running experiments with btype 1; hopefully I can draw some conclusions.
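For anyone who wants to try this, here is a minimal sketch of the idea in Python, assuming a standalone PoissonRecon binary (mkazhdan/PoissonRecon) is on the PATH and that the dense cloud has per-point normals; ODM drives this step internally, so the file names and depth value below are placeholders, not ODM's actual invocation:

import subprocess

# Screened Poisson reconstruction with the free boundary type (--bType 1)
# instead of the default; as noted above, this tends to close missing
# areas with a plane rather than pulling the surface down into a pit.
subprocess.run([
    "PoissonRecon",
    "--in", "point_cloud.ply",   # dense point cloud with normals (placeholder)
    "--out", "mesh.ply",         # output triangle mesh (placeholder)
    "--depth", "11",             # octree depth (placeholder value)
    "--bType", "1",              # boundary type: 1 = free
], check=True)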
I am currently wondering if there is a logic issue to be addressed. Should we first perform mesh reconstruction, texture mapping, and georeferencing, and only then crop the model to the boundary, instead of first filtering the point cloud by the boundary and then performing mesh reconstruction, texture mapping, and georeferencing?
How did you install ODM? (Docker, installer, natively, ...)?
Installed via Docker:
docker run -dp 8888:3000 --gpus all --name nodeodmgpu01 opendronemap/nodeodm:gpu
In Python (using the pyodm Node API):
from pyodm import Node
import json

n = Node(ip, port)  # host and port of the NodeODM container started above
# Note: this ring is not closed (the first coordinate is not repeated at the
# end), which the GeoJSON spec (RFC 7946) requires for polygon rings.
boundary = {"type":"FeatureCollection","crs":{"type":"name","properties":{"name":"EPSG:4326"}},"features":[{"type":"Feature","id":0,"geometry":{"type":"Polygon","coordinates":[[[104.15728688732897,36.54148301442387],[104.15818386146158,36.54147661480294],[104.15818095611374,36.5411700324329],[104.15725376030238,36.54113720094068]]]}}]}
boundarystr = json.dumps(boundary, ensure_ascii=False)
task = n.create_task(images_name, {"3d-tiles": True, "boundary": boundarystr, "no-gpu": False})
logs:
log.json
task_output.txt
images:
(attached screenshots: Figure 1111 and Figure 2222, referenced below)
What is the problem?
When processing an oblique-photography model of a built-up area with a boundary constraint, the output model is very poor: the mesh is rough and messy, it has many small holes, and the texture mapping at the boundaries is wrong (as shown in Figure 1111). Without the boundary constraint (i.e. when the "boundary": boundarystr parameter is not set), the central region of the model still looks good (as shown in Figure 2222).
What should be the expected behavior? If this is a feature request, please describe in detail the changes you think should be made to the code, citing files and lines where changes should be made, if possible.
I have checked the point cloud file and the data is quite good. However, the OBJ model of the cropped area has holes and rough textures, which I believe may be a bug. The model produced by processing the whole region has high point-cloud density and good textures; the model produced with the boundary restriction has low point-cloud density, sparse points, and poor textures.

Analyzing the cause, it appears the configured boundary is used to restrict the reconstruction area itself, when in fact the reconstruction area should not be restricted. When dealing with oblique-photography models with boundary constraints, the logic should be to first reconstruct the entire area using all photos, and then output only the part of the model that falls within the boundary. In reality only part of the photos are used: judging from the results, the texture at the boundary of the cropped model comes from the surrounding images rather than from the original overall texture positions, producing disordered textures, and the result is rough because no post-processing such as smoothing or hole filling is applied. So the processing logic needs optimizing, and the cropped result needs post-processing.
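To make the proposed order concrete, here is a minimal sketch of cropping after a full reconstruction, assuming trimesh and shapely are installed and that the mesh and the polygon share the same horizontal CRS (in practice the EPSG:4326 boundary above would first have to be reprojected into the model's local frame); all file names are placeholders:

import json
import numpy as np
import trimesh
from shapely.geometry import Point, Polygon

# Boundary ring from the GeoJSON used above (placeholder file name)
with open("boundary.geojson") as f:
    ring = json.load(f)["features"][0]["geometry"]["coordinates"][0]
poly = Polygon(ring)

# Textured model produced WITHOUT the boundary option (placeholder file name)
mesh = trimesh.load("odm_textured_model_geo.obj", force="mesh")

# Keep a face only if its centroid lies inside the boundary polygon
keep = np.array([poly.contains(Point(x, y)) for x, y, _ in mesh.triangles_center])
mesh.update_faces(keep)
mesh.remove_unreferenced_vertices()
mesh.export("cropped_model.obj")

Cropping after the full reconstruction like this keeps the interior mesh density and textures intact; only the cut faces along the polygon edge would still need cleanup.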
How can we reproduce this? What steps did you do to trigger the problem? If this is an issue with processing a dataset, YOU MUST include a copy of your dataset AND task output log, uploaded on Google Drive or Dropbox (otherwise we cannot reproduce this).