I am currently stuck on my project of reconstructing a 3D object from multiple images of that object.
So far I have computed the feature matches between each image pair using AKAZE features.
Now I need to derive the camera parameters and the 3D point coordinates.
However, I am a bit confused: as far as I can tell, I need the camera parameters to determine the 3D points, and vice versa.
My question is: how can I get the 3D points and the camera parameters in one step?
I have also looked into the bundle adjustment approach described at http://scipy-cookbook.readthedocs.io/items/bundle_adjustment.html, but there you already need an initial guess for the camera parameters and the 3D coordinates.
Can somebody point me to pseudocode, or outline a pipeline for me?
Thanks in advance.