
I am currently stuck on my project of reconstructing an object from different images of the same object.

So far I have computed the feature matches between the images using AKAZE features.
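For reference, my matching step looks roughly like this (the image paths and the ratio-test threshold are placeholders for my actual setup):

```python
# Minimal sketch of my AKAZE matching step (paths and thresholds are
# placeholders; the real pipeline loops over all image pairs).
import cv2

img1 = cv2.imread("view_00.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_01.png", cv2.IMREAD_GRAYSCALE)

akaze = cv2.AKAZE_create()
kp1, des1 = akaze.detectAndCompute(img1, None)
kp2, des2 = akaze.detectAndCompute(img2, None)

# AKAZE descriptors are binary by default, so match with Hamming distance.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
knn = matcher.knnMatch(des1, des2, k=2)

# Lowe-style ratio test to discard ambiguous matches.
good = [m for m, n in knn if m.distance < 0.75 * n.distance]
```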

Now I need to derive the camera parameters and the 3D point coordinates.

However, I am a little confused: as far as I can tell, I need the camera parameters to determine the 3D points, and the 3D points to determine the camera parameters.

My question is: how can I get the 3D points and the camera parameters in one step?

I have also looked into the bundle adjustment approach from http://scipy-cookbook.readthedocs.io/items/bundle_adjustment.html, but there you need an initial guess for the camera parameters and the 3D coordinates.
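From what I have read, the usual way to break this circular dependency is a two-view bootstrap: estimate the essential matrix from the matches using a rough guess for K, recover the relative pose, triangulate the inliers, and only then refine cameras and points together with bundle adjustment. Is a sketch like the following (OpenCV calls; K is purely a guess) the right direction?

```python
# Sketch of the two-view bootstrap as I understand it. K is only a guess;
# pts1, pts2 are Nx2 float32 arrays of matched pixel coordinates.
import numpy as np
import cv2

def bootstrap_two_view(pts1, pts2, K):
    # Essential matrix from the matches, given the (guessed) intrinsics.
    E, mask = cv2.findEssentialMat(pts1, pts2, K,
                                   method=cv2.RANSAC, prob=0.999, threshold=1.0)
    # Relative pose of the second camera (t is only defined up to scale).
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Keep only the inliers that survived RANSAC and the cheirality check.
    inl = mask.ravel() > 0
    p1, p2 = pts1[inl], pts2[inl]

    # Projection matrices: first camera at the origin, second at (R, t).
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])

    # Triangulate to get the initial 3D points for bundle adjustment.
    pts4d = cv2.triangulatePoints(P1, P2, p1.T, p2.T)
    pts3d = (pts4d[:3] / pts4d[3]).T
    return R, t, pts3d
```

As I understand it, further views would then be registered one by one with cv2.solvePnPRansac against the already triangulated points, before running the bundle adjustment from the cookbook link over everything.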

Can somebody point me to pseudocode, or suggest a pipeline?

Thanks in advance

Max Krappmann
  • Are you working on stereo vision? I can help you if you need to reconstruct a 3D point cloud from a stereo pair (meaning two images). – Employee Jun 13 '18 at 10:36
  • Take a look at Structure from Motion. [This tutorial](https://ch.mathworks.com/help/vision/ug/structure-from-motion.html) gives an idea of how it works in Matlab. You'll need to translate it to OpenCV instructions. – Sunreef Jun 13 '18 at 10:48
  • @Sunreef I looked through the tutorial and ran into the same problem there: the second step already needs a camera matrix for [relativeOrientation,relativeLocation] = relativeCameraPose(M,cameraParams,inlierPoints1,inlierPoints2) – Max Krappmann Jun 13 '18 at 10:57
  • @Employee Hello, actually I have 50 images of the object, so I would scale up the stereo pair approach using matrix notation or a bundle adjustment. – Max Krappmann Jun 13 '18 at 11:01
  • You're not going to be able to avoid this. The cameraParams are internal parameters such as focal length, distortion coefficients. You need to get those somehow. They can sometimes be found in the image metadata. – Sunreef Jun 13 '18 at 11:02
  • @Sunreef Also for an adjustable lens? Or do I need them as an initial guess, build my own camera matrix K = [[fx,0,cx],[0,fy,cy],[0,0,1]], and then calculate the correct parameters using bundle adjustment? – Max Krappmann Jun 13 '18 at 11:09
  • You need them to transform from image space to camera space. When a camera takes a picture, there's a perspective projection that depends on the focal length. You need to know this parameter. Look in your image metadata. – Sunreef Jun 13 '18 at 11:12
  • @Sunreef The metadata is not provided by Basler cameras. I am using a dart camera, 1600x1200 pixels, uc, with the lens Evetar N118B0818WM12 F1.8 f8 1/1.8; the focal length should be 8 mm. (A rough initial K built from these specs is sketched below the comments.) – Max Krappmann Jun 13 '18 at 11:28
  • You can get to know your camera's internal parameters with the OpenCV camera calibration routine - https://docs.opencv.org/2.4/doc/tutorials/calib3d/camera_calibration/camera_calibration.html - this will create a file with all the parameters you need, and then you can use them to move on with your project. – Employee Jun 13 '18 at 11:37
  • @Employee Thanks for the remark, but I can't calibrate the camera with a checkerboard since it is a built-in camera. I thought there was a way of obtaining the projection model using multiple views and then including the camera parameters in the optimization process, as in http://www2.maths.lth.se/vision/publdb/reports/pdf/larsson-master-13.pdf, page 47. – Max Krappmann Jun 13 '18 at 11:52
  • What do you mean by a built-in camera? – Employee Jun 13 '18 at 12:05
  • @Employee It is mounted in a machine, so you can only access it remotely. – Max Krappmann Jun 13 '18 at 12:09
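
(Sketch referenced in my comment above: a back-of-the-envelope initial K from the lens and sensor specs. The 4.5 µm pixel pitch is my assumption for this 1/1.8" sensor and must be checked against the datasheet; bundle adjustment would then refine all of these values.)

```python
# Rough initial intrinsics for the dart camera from the spec sheet.
# ASSUMPTION: 4.5 um pixel pitch (check the sensor datasheet!).
import numpy as np

f_mm = 8.0            # lens focal length in mm
pixel_mm = 0.0045     # assumed pixel pitch in mm (4.5 um)
w, h = 1600, 1200     # image resolution in pixels

fx = fy = f_mm / pixel_mm      # ~1778 px, square pixels assumed
cx, cy = w / 2.0, h / 2.0      # principal point guessed at image center

K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])
```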

2 Answers


As you seem to be new at this, I strongly recommend you first play with an interactive tool to gain an intuition for the issues involved in solving for camera motion and structure. Try Blender: it's free, and you can find plenty of video tutorials on YouTube on how to use it for matchmoving: examples 1, 2

Francesco Callari
  • I created some pipelines using VisualSFM, Regard3D, MeshRecon and OpenMVS. I tried to use my own features in OpenSfM and the format did not work. I created files in the SIFT format and also calculated the matches to get the pipeline from http://ccwu.me/vsfm/doc.html to work, which is why I wanted to implement my own point cloud generation and then use surface reconstruction to create a surface model from my point cloud. If you have some ideas where the error might be, I would appreciate your help. https://stackoverflow.com/questions/50753567/how-to-use-own-features-computed-in-opencv-in-visualsfm-pipeline – Max Krappmann Jun 15 '18 at 06:18

Take a look at VisualSFM (http://ccwu.me/vsfm/). It is an interactive tool for exactly this kind of task and will give you an idea of which algorithms to use.

The computer vision book by Richard Szeliski (http://szeliski.org/Book/) will give you the theoretical background.

  • As I have written above, I know which steps are needed in general; the problem comes from the concrete implementation, where I want to replace SIFT features with AKAZE, since those features turned out to deliver better results for my problem. But I wasn't able to use my own features in the SfM pipeline. https://stackoverflow.com/questions/50753567/how-to-use-own-features-computed-in-opencv-in-visualsfm-pipeline – Max Krappmann Jun 15 '18 at 06:20