Removing matched features and points from OpenCV and Emgu image detection in C#

This is a follow-up to the article about using Emgu and OpenCV for the Kaggle Competition: NOAA Sea Lion Population Count

In that article, we showed how to use Emgu in C# to find sea lion image matches in scenes. After finding a match, we need to remove the features and key points included in that match from our target scene so that we don't double-match on the same features and points.

Recall from the previous article that we call FindMatch, which outputs a match collection along with a homography object when a match is identified.

using (VectorOfVectorOfDMatch matches = new VectorOfVectorOfDMatch())
{
    Mat mask;
    FindMatch(modelFeature.seaLionPoints, modelFeature.seaLionDescriptors, observedKeyPoints, observedDescriptors, out matchTime, matches,
        out mask, out homography);

When the homography object is not null, we’ll want to loop through the matches and identify a new collection of the observed key points to be removed:

List<MKeyPoint> observedPointsToRemove = new List<MKeyPoint>();
for (int i = 0; i < matches.Size; i++)
{
    // skip match groups the mask has already filtered out
    if (mask.GetData(i)[0] == 0) continue;
    var arrayOfMatches = matches[i].ToArray();
    foreach (var match in arrayOfMatches)
    {
        // QueryIdx indexes the observed scene's key points
        var matchingObservedKeyPoint = observedKeyPoints[match.QueryIdx];
        if (!observedPointsToRemove.Contains(matchingObservedKeyPoint))
        {
            observedPointsToRemove.Add(matchingObservedKeyPoint);
        }
    }
}

Then, we will use this new set of observed points to omit rows from the descriptor Mat object. We can't simply remove rows from the existing Mat object, so we need to create a new Mat object and copy over only the rows we want to keep.

Matrix<float> matrix = new Matrix<float>(observedDescriptors.Size.Height, observedDescriptors.Size.Width, observedDescriptors.NumberOfChannels);
observedDescriptors.CopyTo(matrix);
Matrix<float> matrixNew = new Matrix<float>(observedDescriptors.Size.Height - observedPointsToRemove.Count, observedDescriptors.Size.Width, observedDescriptors.NumberOfChannels);
MKeyPoint[] updatedObservedKeyPoints = new MKeyPoint[observedKeyPoints.Size - observedPointsToRemove.Count];
int updatedObservedKeyPointsIdx = 0;
for (int idx = 0; idx < observedKeyPoints.Size; idx++)
{
    bool preservePt = true;
    foreach (var pointToRemove in observedPointsToRemove)
    {
        if (observedKeyPoints[idx].Point.Equals(pointToRemove.Point))
        {
            preservePt = false;
            break;
        }
    }
    if (preservePt)
    {
        // copy this key point's descriptor row into the new matrix
        for (int colIdx = 0; colIdx < observedDescriptors.Size.Width; colIdx++)
        {
            matrixNew[updatedObservedKeyPointsIdx, colIdx] = matrix[idx, colIdx];
        }
        updatedObservedKeyPoints[updatedObservedKeyPointsIdx++] = observedKeyPoints[idx];
    }
}
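Note that the inner foreach makes this filter O(n·m) over key points and points to remove. Since MKeyPoint.Point is a System.Drawing.PointF (a value type with value equality), the points to remove can go into a HashSet for constant-time lookups instead. Below is a minimal sketch of that row-filtering idea using plain float[,] arrays as stand-ins for the descriptor matrices; DescriptorFilter and KeepRows are hypothetical names for illustration, not part of Emgu:

```csharp
using System.Collections.Generic;
using System.Drawing;

public static class DescriptorFilter
{
    // Copy only the descriptor rows whose key point is NOT in pointsToRemove.
    // PointF stands in for MKeyPoint.Point; float[,] stands in for Matrix<float>.
    public static float[,] KeepRows(float[,] descriptors, PointF[] points, HashSet<PointF> pointsToRemove)
    {
        int cols = descriptors.GetLength(1);

        // first pass: count surviving rows so we can size the new matrix
        int kept = 0;
        for (int i = 0; i < points.Length; i++)
            if (!pointsToRemove.Contains(points[i])) kept++;

        // second pass: copy surviving rows into the new matrix
        float[,] result = new float[kept, cols];
        int outRow = 0;
        for (int i = 0; i < points.Length; i++)
        {
            if (pointsToRemove.Contains(points[i])) continue; // O(1) lookup
            for (int c = 0; c < cols; c++)
                result[outRow, c] = descriptors[i, c];
            outRow++;
        }
        return result;
    }
}
```

The same two-pass shape maps directly onto the Matrix<float> copy above; only the lookup container changes.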

Lastly, we'll use our newly created updatedObservedKeyPoints and matrixNew Mat object as replacements for our observed key points and feature descriptors. With the collections updated, the next FindMatch call won't match on the sea lion in the current scene that we have already identified.

observedKeyPoints = new VectorOfKeyPoint(updatedObservedKeyPoints);
observedDescriptors = matrixNew.Mat;

Using Emgu and OpenCV in C# .Net for Kaggle competition: NOAA Sea Lion Population Count

This article will cover detecting Sea Lion model images in the target images using Emgu and OpenCV in C# for the Kaggle competition: NOAA Fisheries Steller Sea Lion Population Count

Emgu is essentially a .Net wrapper around the OpenCV C++ libraries. Use the Emgu installer from SourceForge found here: Emgu Installer

Open the Emgu.CV.Example.sln file with Visual Studio 2015 and build the projects. There are several native (non-.Net) libraries in top-level folders of the Emgu install, such as:

C:\Emgu\emgucv-windesktop 3.2.0.2682\bin
C:\Emgu\emgucv-windesktop 3.2.0.2682\bin\x64
C:\Emgu\emgucv-windesktop 3.2.0.2682\bin\x86

The example projects in Emgu.CV.Example.sln are set up to build and link against those paths, but you'll want to copy over those dependencies when you start your own projects.

We will specifically be using the FeatureMatching project. In summary, that project does the following:

  1. Load a Model image and a Scene image.
  2. Detect key points and features for both the Model and Scene images.
  3. Run a k-nearest-neighbor similarity search between the Model and Scene features.
  4. Compare the results against a threshold to determine a match.
  5. After a match is identified, generate a homography transform to locate the Model object in the Scene image.
  6. Draw a box around the Model object in the Scene image.

The enhancements we need to make to that test app for the Kaggle competition: NOAA Fisheries Steller Sea Lion Population Count are these:

  1. Cache the features and points of the target image, because we’ll be comparing them against many models per prediction set.
  2. Parse the training data into many small single-sea-lion images to use as model images.
  3. Cache the features and points of the model images.
  4. For each target scene image we want to train and test on, compare our scene features and points with all of our model image features and points.

A good first step is to alter the FindMatch function to take arguments that include the features and key points for the model and observed scene, rather than the full image objects. This way we can use a very similar version of FindMatch without needing to extract the features and key points from our images on each match attempt. Feature and key point extraction is costly, and we certainly don't want to do it more than once per image if we don't have to.

    public static class DrawMatches
    {
        public static void FindMatch(VectorOfKeyPoint modelKeyPoints, Mat modelDescriptors, VectorOfKeyPoint observedKeyPoints, Mat observedDescriptors, 
            out long matchTime, VectorOfVectorOfDMatch matches, out Mat mask, out Mat homography)
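For reference, the body of this modified FindMatch can stay essentially the same as the Emgu FeatureMatching sample's; only the inputs change. Here's a sketch under that assumption, using the sample's FLANN-based matcher and its default thresholds (the values are the sample's defaults, not tuned for sea lions):

```csharp
{
    int k = 2;
    double uniquenessThreshold = 0.80;
    homography = null;
    Stopwatch watch = Stopwatch.StartNew();

    using (var ip = new Emgu.CV.Flann.LinearIndexParams())
    using (var sp = new Emgu.CV.Flann.SearchParams())
    using (DescriptorMatcher matcher = new FlannBasedMatcher(ip, sp))
    {
        matcher.Add(modelDescriptors);
        // k-nearest-neighbor match of observed descriptors against the model
        matcher.KnnMatch(observedDescriptors, matches, k, null);

        mask = new Mat(matches.Size, 1, DepthType.Cv8U, 1);
        mask.SetTo(new MCvScalar(255));
        Features2DToolbox.VoteForUniqueness(matches, uniquenessThreshold, mask);

        int nonZeroCount = CvInvoke.CountNonZero(mask);
        if (nonZeroCount >= 4)
        {
            nonZeroCount = Features2DToolbox.VoteForSizeAndOrientation(modelKeyPoints, observedKeyPoints,
                matches, mask, 1.5, 20);
            if (nonZeroCount >= 4)
                homography = Features2DToolbox.GetHomographyMatrixFromMatchedFeatures(modelKeyPoints,
                    observedKeyPoints, matches, mask, 2);
        }
    }

    watch.Stop();
    matchTime = watch.ElapsedMilliseconds;
}
```

The point is that the matching and voting logic is untouched; we've only moved key point and descriptor extraction out of the function so it happens once per image instead of once per match attempt.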

To help with the cached points and features, I made a SeaLionFeatures class:

    public class SeaLionFeatures
    {
        public VectorOfKeyPoint seaLionPoints;
        public Mat seaLionDescriptors;
        public SeaLionType modelType;
    }

I have a function that extracts the features and key points from the many single-sea-lion model images and returns a collection of SeaLionFeatures objects that we use as our model cache:

public static List<SeaLionFeatures> GetFeaturesAndDescriptors(List<SeaLionModel> seaLionModels)
{
    List<SeaLionFeatures> features = new List<SeaLionFeatures>();
    KAZE featureDetector = new KAZE();
    foreach (var seaLionModel in seaLionModels)
    {
        using (UMat uModelImage = seaLionModel.modelMat.GetUMat(AccessType.Read))
        {
            VectorOfKeyPoint modelKeyPoints = new VectorOfKeyPoint();
            //extract features from the object image
            Mat modelDescriptors = new Mat();
            featureDetector.DetectAndCompute(uModelImage, null, modelKeyPoints, modelDescriptors, false);
            SeaLionFeatures seaLionFeatures = new SeaLionFeatures();
            seaLionFeatures.modelType = seaLionModel.modelType;
            seaLionFeatures.seaLionPoints = modelKeyPoints;
            seaLionFeatures.seaLionDescriptors = modelDescriptors;
            features.Add(seaLionFeatures);
        }
    }
    return features;
}

Now, we have a collection of SeaLionFeatures with key points and features for all of our target sea lion models. Next, we want to iterate through our scene images and look for our sea lion models in them.

This code shows how we can use our observed image to extract features and key points and then iterate through our models. We call FindMatch on each of our sea lion feature objects in order to find if that model exists in our target scene.

using (UMat uObservedImage = observedImage.GetUMat(AccessType.Read))
{
    KAZE featureDetector = new KAZE();
    VectorOfKeyPoint observedKeyPoints = new VectorOfKeyPoint();
    Mat observedDescriptors = new Mat();
    // extract features from the observed image
    featureDetector.DetectAndCompute(uObservedImage, null, observedKeyPoints, observedDescriptors, false);

    foreach (var modelFeature in modelFeatures)
    {
        Mat homography;
        using (VectorOfVectorOfDMatch matches = new VectorOfVectorOfDMatch())
        {
            Mat mask;
            FindMatch(modelFeature.seaLionPoints, modelFeature.seaLionDescriptors, observedKeyPoints, observedDescriptors, out matchTime, matches,
                out mask, out homography);

If FindMatch has a hit, the homography output from FindMatch will not be null. Be sure to give a +1 to the metrics object you’ll be using to record predicted hits:

if (homography != null)
{
    switch(modelFeature.modelType)
    {
        case SeaLionType.AdultFemale:
            seaLionCounts.AdultFemale++;
            break;
        case SeaLionType.AdultMale:
            seaLionCounts.AdultMale++;
            break;
        case SeaLionType.Juvenile:
            seaLionCounts.Juvenile++;
            break;
        case SeaLionType.Pup:
            seaLionCounts.Pup++;
            break;
        case SeaLionType.SubAdultMale:
            seaLionCounts.SubAdultMale++;
            break;
    }
}

After you have a sea lion identification match, be sure to remove the matched key points and features from your scene image so that other models don't also match on that same sea lion. This requires iterating through the Emgu matches collection and removing items from the observed scene image's features and key points collections. I'll be sharing how I accomplish that in my next article.