
I have an application where I have to detect the presence of some items in a scene. The items can be rotated and slightly scaled (bigger or smaller). I've tried using keypoint detectors, but they're not fast and accurate enough. So I've decided to first detect edges in the template and in the search area, using Canny (or a faster edge-detection algorithm), and then match the edges to find the position, orientation, and size of the match.

All this needs to be done in less than a second.

I've tried using matchTemplate() and matchShapes(), but the former is not scale- and rotation-invariant, and the latter doesn't work well with my actual images. Rotating the template image in order to match is also too time-consuming.

So far I have been able to detect the edges of the template but I don't know how to match them with the scene.

I've already gone through the following, but wasn't able to get them to work (they either use an old version of OpenCV, or simply don't work with images other than those in the demo):

https://www.codeproject.com/Articles/99457/Edge-Based-Template-Matching

Angle and Scale Invariant template matching using OpenCV

https://answers.opencv.org/question/69738/object-detection-kinect-depth-images/

Can someone please suggest an approach for this, or a code snippet if possible?

This is my sample input image (the parts to detect are marked in red):

Sample input image

These are some software packages that already do this, and show how I want it to work:

[Screenshots of existing software performing this kind of matching]

mrid
  • The third case, with the pink outlines, seems to be the easiest of them all, since the object can be segmented from the background with simple color segmentation using `cv2.inRange()`, followed by `cv2.findContours`; you can then leverage properties of the contours such as the area/perimeter ratio, moments, etc., which will help you detect geometrically similar shapes (a rough sketch of this appears after this comment thread). – ZdaR Jan 10 '20 at 10:47
  • For the first and second cases I would recommend training a custom Haar cascade, just like the Haar cascade we have for faces. OpenCV also provides an API for training a custom Haar feature detector, and Haar features are *scale*-independent. I am not sure about rotation independence, but I think you can train for that by providing training images in various orientations. – ZdaR Jan 10 '20 at 10:49
  • The last one could be solved with my approach from https://stackoverflow.com/questions/59428540/how-to-determine-rotation-of-a-shape/59431743#59431743. If you can segment the shapes well, you could use minAreaRect in many cases (I think the tool image in your example might be solved that way, or with keypoint matching). Do some research on rotation-invariant keypoint descriptors to get an idea of how to achieve rotation invariance; maybe you can use one of them. There is probably no ready-made solution for your special case, though. – Micka Jan 10 '20 at 11:03
  • @Micka The problem is that the item to be detected will be set by the user (it can be a rectangle, a circle, or any random shape). How will I find out which case it is and then search accordingly? This is the main reason I was thinking of going edge-based. I'm even okay with contour-based detection (if that's fast), but it's very difficult to detect the outermost contours in most images like the 1st and the 2nd one. – mrid Jan 10 '20 at 11:42
  • Chamfer matching is a nice edge-based approach, but you will have to make it scale- and rotation-invariant yourself (e.g. prepare different templates for rotation and use a pyramid approach for scale). A general-purpose method would be keypoint matching (SIFT/SURF/ORB), but it typically needs somewhat textured objects. If you can identify some reproducible edges like corners, circles, etc., homography guessing by RANSAC followed by inlier/outlier testing on all edges could be nice (see the sketch below these comments). – Micka Jan 10 '20 at 12:09
  • @Micka `If you can identify some reproducible edges like corners, circles, etc., homography guessing by RANSAC followed by inlier/outlier testing on all edges could be nice.` Can you please elaborate a little? Maybe some pseudocode which I can try to build on? – mrid Jan 14 '20 at 08:36
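
Below is a minimal sketch of one way the homography suggestion above could be implemented. It is only a starting point under several assumptions: the correspondences come from ORB keypoint matching rather than from hand-picked corners or circles (a simplification of what the comment describes), the object is textured enough for ORB, and the file names, Canny thresholds, keypoint count, and 3 px edge tolerance are all made-up placeholders.

```python
import cv2
import numpy as np

# Assumed file names; replace with your own template and scene images.
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

# 1) Guess a homography from keypoint correspondences with RANSAC.
orb = cv2.ORB_create(2000)
kp_t, des_t = orb.detectAndCompute(template, None)
kp_s, des_s = orb.detectAndCompute(scene, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_t, des_s), key=lambda m: m.distance)[:200]
if len(matches) < 4:
    raise RuntimeError("not enough keypoint matches to estimate a homography")

src = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_s[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# 2) Inlier/outlier test on *all* edges: project the template edge pixels into
#    the scene and check how many land near a scene edge (distance transform).
t_edges = cv2.Canny(template, 50, 150)        # placeholder thresholds
s_edges = cv2.Canny(scene, 50, 150)
dist_to_edge = cv2.distanceTransform(255 - s_edges, cv2.DIST_L2, 3)

ys, xs = np.nonzero(t_edges)
pts = np.float32(np.column_stack([xs, ys])).reshape(-1, 1, 2)
proj = cv2.perspectiveTransform(pts, H).reshape(-1, 2)

h, w = scene.shape
inside = (proj[:, 0] >= 0) & (proj[:, 0] < w) & (proj[:, 1] >= 0) & (proj[:, 1] < h)
d = dist_to_edge[proj[inside, 1].astype(int), proj[inside, 0].astype(int)]
inlier_ratio = (d < 3.0).mean()               # "3 px" tolerance is arbitrary
print("edge inlier ratio:", inlier_ratio)     # high ratio => plausible match
```

A high inlier ratio means the guessed pose explains the edges well; position, rotation, and scale can then be read off the estimated transform (a similarity or affine transform estimated the same way is more constrained and often more robust for flat parts than a full homography).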
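
And for the color-segmentation route suggested earlier in the thread (segment the colored outline with `cv2.inRange`, find the contours, then filter by simple shape properties), a rough sketch could look like the following. The HSV range and all thresholds are invented values that would need tuning to the real images, and it assumes the template can be segmented the same way as the scene.

```python
import cv2
import numpy as np

# Assumed file names; the template shows the user-defined part, the scene is the search image.
template = cv2.imread("template.png")
scene = cv2.imread("scene.png")

def outline_contours(img, lo, hi):
    """Segment the colored outline and return its contours."""
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lo, hi)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours

# Made-up HSV range for a pink/magenta outline; tune it to the real images.
lo, hi = np.array([140, 60, 60]), np.array([175, 255, 255])

ref = max(outline_contours(template, lo, hi), key=cv2.contourArea)
ref_compactness = cv2.contourArea(ref) / cv2.arcLength(ref, True) ** 2

for c in outline_contours(scene, lo, hi):
    per = cv2.arcLength(c, True)
    if per == 0 or cv2.contourArea(c) < 100:                       # drop tiny specks
        continue
    compactness = cv2.contourArea(c) / per ** 2                    # scale-free area/perimeter ratio
    hu_dist = cv2.matchShapes(ref, c, cv2.CONTOURS_MATCH_I1, 0.0)  # Hu-moment comparison
    if abs(compactness - ref_compactness) < 0.01 and hu_dist < 0.2:  # arbitrary thresholds
        print("geometrically similar contour near", cv2.boundingRect(c)[:2])
```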

2 Answers


This topic is what I have actually been dealing with for a year on a project, so I will try to explain my approach and how I do it. I assume that you have already done the preprocessing steps (filters, brightness, exposure, calibration, etc.), and make sure you have cleaned the noise from the image.

Note: In my approach, I collect data from the contours of a reference image, which shows my desired object. Then I compare those data with the contours found on the big image.

  1. Use Canny edge detection and find the contours on the reference image. You need to make sure it doesn't miss parts of the contours; if it does, the preprocessing step probably has a problem. The other important point is that you need to choose an appropriate mode of findContours, because the different modes have different properties, so find the right one for your case. At the end, keep only the contours that are acceptable for you. (A rough sketch of steps 1–4 follows this list.)

  2. After getting the contours from the reference, you can find the length of every contour from the output array of findContours(). You can compare these values with those on your big image and eliminate the contours that are too different.

  3. minAreaRect fits a precise, enclosing rotated rectangle to each contour. In my case this function is very good to use. I get two parameters from it:

    a) Calculate the short and long edges of the fitted rectangle and compare the values with those of the other contours on the big image.

    b) Calculate the percentage of blackness or whiteness (if your image is grayscale, get the percentage of pixels close to white or black) and compare them at the end.

  4. matchShapes can be applied at the end to the remaining contours, or you can apply it to all contours (I suggest the first approach). Each contour is just an array, so you can hold the reference contours in an array and compare them with the others at the end. Doing the first three steps and then applying matchShapes works very well on my side.

  5. I think matchTemplate is not good to use directly. I draw every contour onto a separate zero Mat (a blank black image) as the template image and then compare it with the others. Using the reference template image directly doesn't give good results.

  6. OpenCV has some good algorithms for finding circles, convexity, etc. If your situation relates to them, you can use those as an additional step.

  7. At the end, you just collect all the data and values and can build a table of them in your mind. The rest is a kind of statistical analysis.
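
A minimal sketch of steps 1–4 might look like the following, assuming (as in the question) that the scale change is small enough for contour lengths to stay comparable. The file names, Canny thresholds, and all elimination tolerances are placeholders that would have to be tuned from your own reference data.

```python
import cv2

# Assumed file names; "reference.png" shows only the desired object.
ref_img = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
big_img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

def edge_contours(img):
    edges = cv2.Canny(img, 50, 150)   # step 1: placeholder thresholds
    # The retrieval mode matters: RETR_EXTERNAL / RETR_LIST / RETR_TREE behave differently.
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 signature
    return contours

# Reference data: here we simply assume the largest contour on the reference is the part itself.
ref_c = max(edge_contours(ref_img), key=cv2.contourArea)
ref_len = cv2.arcLength(ref_c, True)                       # step 2 value
ref_short, ref_long = sorted(cv2.minAreaRect(ref_c)[1])    # step 3a values

candidates = []
for c in edge_contours(big_img):
    # Step 2: drop contours whose length is far from the reference length.
    if ref_len == 0 or abs(cv2.arcLength(c, True) - ref_len) / ref_len > 0.3:
        continue
    # Step 3a: compare the short and long edges of the fitted rotated rectangle.
    short, long_ = sorted(cv2.minAreaRect(c)[1])
    if abs(short - ref_short) > 0.3 * ref_short or abs(long_ - ref_long) > 0.3 * ref_long:
        continue
    # Step 4: matchShapes (Hu moments) on whatever survived the elimination.
    score = cv2.matchShapes(ref_c, c, cv2.CONTOURS_MATCH_I1, 0.0)
    if score < 0.2:                                        # arbitrary cut-off
        candidates.append((score, cv2.minAreaRect(c)))

for score, ((cx, cy), (w, h), angle) in sorted(candidates, key=lambda t: t[0]):
    print(f"match score={score:.3f} center=({cx:.0f}, {cy:.0f}) angle={angle:.1f}")
```

Note that the minAreaRect of a surviving contour already carries the match's position, size, and rotation angle, which is the output the question asks for.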

Note: I think the most important part is the preprocessing. So make sure that you have a clean, almost noiseless image and reference.

Note: Training can be a good solution for your case if you only want to know whether the objects exist or not. But if you are trying to build something for an industrial application, it is totally the wrong way. I tried the YOLO and Haar-cascade training algorithms several times and trained some objects with them. The experience I gained is that they can find objects almost correctly, but the center coordinates, rotation results, etc. will not be fully accurate even if your calibration is correct. On top of that, training time and collecting data are painful.

Yunus Temurlenk
  • I'm developing an industrial application. What would you suggest? Will YOLO or a Haar cascade work for these small objects? – mrid Jan 11 '20 at 10:09
  • I don't suggest using training for an industrial app. If you think it can solve your problem, I suggest a Haar cascade first, because its training time is very low compared to YOLO (5–6 hours for 2000 positive and 1000 negative images with 10 stages on an average processor). You can use the createsamples tool for labeling and to generate positive samples quickly. The more and better data you collect, the better results you will get. – Yunus Temurlenk Jan 11 '20 at 10:51

You have rather bad image quality and very bad lighting conditions, so you have only two ways:

  1. Use filters -> binary threshold -> findContours -> matchShapes. But this is a very unstable algorithm for your object type and image quality; you will get a lot of wrong contours, and it is hard to filter them.

  2. Haar cascades -> cut out the bounding box -> check the shape inside.

All "special points/edge matching " algorithms will not work in such bad conditions.

Alex.Lit.