
Best method for multiple particle tracking with noise and possible overlap?

asked 2014-03-17 by CVNovice, updated 2014-03-25

Hello, I am working on a school project where I want to track the number, direction, and velocity of particles moving across a flow chamber. I have a series of timestamped images, taken under fluorescent light, showing bright particles flowing across the field of view.

The particles I want to track are the bright round dots (highlighted in green); I need to exclude the motion blur from other particles that were not in focus.

Image Showing: Sample (green) vs Noise (red)

Here is a series of sample images from the data set: Sample Data

I have started working with both optical flow examples from the docs, but they pick up all of the noise as tracks, which I need to avoid. What method would you recommend for this application?

EDIT: Following the suggestion below, I've added a top-hat filter before running the sequence through a Lucas-Kanade motion tracker. I modified the sample slightly to return all of the tracks it picks up; I then go through them to remove duplicates and calculate a velocity for each tracked particle (a sketch of that post-processing step follows the code below).

This method still picks up a lot of noise in the data; perhaps I haven't used optimal parameters for the LK tracker?

import numpy as np
import cv2

# video.create_capture and draw_str come from the helper modules shipped
# with the OpenCV Python samples (video.py and common.py)
import video
from common import draw_str

lk_params = dict( winSize  = (10, 10),
                  maxLevel = 5,
                  criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

feature_params = dict( maxCorners = 3000,
                       qualityLevel = 0.5,
                       minDistance = 3,
                       blockSize = 3 )

class App:
    def __init__(self, video_src):
        self.track_len = 50
        self.detect_interval = 1
        self.tracks = []
        self.allTracks = []
        self.cam = video.create_capture(video_src)
        self.frame_idx = 0

    def run(self):
        maxFrame = 10000
        while True:
            ret, frame = self.cam.read()
            if frame is None:
                break
            if self.frame_idx > maxFrame:
                break
            if frame.shape[2] == 3:
                frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                # top-hat to suppress the blurry out-of-focus background
                kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (4, 4))
                tophat1 = cv2.morphologyEx(frame, cv2.MORPH_TOPHAT, kernel)
                ret, frame_gray = cv2.threshold(tophat1, 127, 255, cv2.THRESH_BINARY)
            else:
                break
            vis = cv2.cvtColor(frame_gray, cv2.COLOR_GRAY2BGR)
            if len(self.tracks) > 0:
                img0, img1 = self.prev_gray, frame_gray
                p0 = np.float32([tr[-1] for tr in self.tracks]).reshape(-1, 1, 2)
                # forward-backward LK check: track forward, track back, and keep
                # only points that return close to where they started
                p1, st, err = cv2.calcOpticalFlowPyrLK(img0, img1, p0, None, **lk_params)
                p0r, st, err = cv2.calcOpticalFlowPyrLK(img1, img0, p1, None, **lk_params)
                d = abs(p0 - p0r).reshape(-1, 2).max(-1)
                good = d < 1
                new_tracks = []
                for tr, (x, y), good_flag in zip(self.tracks, p1.reshape(-1, 2), good):
                    if not good_flag:
                        continue
                    tr.append((x, y))
                    if len(tr) > self.track_len:
                        del tr[0]
                    new_tracks.append(tr)
                    cv2.circle(vis, (int(x), int(y)), 2, (0, 255, 0), -1)
                self.tracks = new_tracks
                cv2.polylines(vis, [np.int32(tr) for tr in self.tracks], False, (0, 255, 0))
                draw_str(vis, (20, 20), 'track count: %d' % len(self.tracks))

            if self.frame_idx % self.detect_interval == 0:
                # mask out the neighbourhood of existing track heads before
                # detecting new features, so the same particle is not re-added
                mask = np.zeros_like(frame_gray)
                mask[:] = 255
                for x, y in [np.int32(tr[-1]) for tr in self.tracks]:
                    cv2.circle(mask, (x, y), 5, 0, -1)
                p = cv2.goodFeaturesToTrack(frame_gray, mask = mask, **feature_params)
                if p is not None:
                    for x, y in np.float32(p).reshape(-1, 2):
                        self ...
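The duplicate removal and velocity calculation mentioned in the edit aren't shown in the truncated code above. Here is a minimal sketch of what that post-processing could look like; the helper names and the frame_interval_s / um_per_pixel parameters are assumptions for illustration, not the poster's actual code:

def remove_duplicate_tracks(tracks, min_separation=5.0):
    # Keep only tracks whose starting points are at least min_separation pixels apart.
    kept = []
    for tr in tracks:
        start = np.float32(tr[0])
        if all(np.linalg.norm(start - np.float32(k[0])) >= min_separation for k in kept):
            kept.append(tr)
    return kept

def track_velocity(track, frame_interval_s, um_per_pixel=1.0):
    # Average speed over one track, assuming a constant time step between frames.
    pts = np.float32(track)
    if len(pts) < 2:
        return 0.0
    step_lengths = np.linalg.norm(np.diff(pts, axis=0), axis=1)  # pixels per frame
    return step_lengths.mean() * um_per_pixel / frame_interval_s

Run over self.allTracks after the main loop, this gives one average speed per surviving track; a mean direction can be taken from track[-1] minus track[0] in the same way.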

Comments

Any luck yet? I'm curious how you'll solve this.

Goosebumps (2014-03-24)

I took your suggestion of using a top hat filter and implemented it before feeding the sequence to the LK tracker. I posted the code I'm now using in the edit above.

CVNovice (2014-03-25)

If you pick up a lot of noise, maybe you could have a look at blob tracking augmented with Kalman filtering.

Goosebumps (2014-03-26)
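For reference, a minimal sketch of the Kalman part of that suggestion in Python (constant-velocity model; the measured blob centroid is assumed to come from a separate detection step such as findContours plus moments, and the numeric values are placeholders):

import cv2
import numpy as np

# One Kalman filter per particle: state is [x, y, vx, vy], measurement is [x, y].
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
kf.errorCovPost = np.eye(4, dtype=np.float32)

# Each frame: predict where the particle should be, match the nearest detected
# blob centroid to that prediction, then correct the filter with the measurement.
predicted = kf.predict()                            # predicted [x, y, vx, vy]
measured = np.array([[123.0], [45.0]], np.float32)  # e.g. matched blob centroid
corrected = kf.correct(measured)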

2 answers


answered 2014-03-18 by Goosebumps, updated 2014-03-18

You may want to start with a top-hat operation. That will get rid of the blurry, non-black background.

The second step would be binarization. I used a simple threshold, but there is still some work to be done here; perhaps watershed could help you.

Then you may want to use findContours and discard blobs that are, e.g., more than 1.5 times as high as they are wide.

static void Main(string[] args)
{
    Image<Bgr, byte> bgr = new Image<Bgr, byte>(@"k:\tomgoo\im.png");
    Image<Gray, byte> gray = bgr.Convert<Gray, byte>();

    // Perform top-hat
    int morphOpSize = 5;
    StructuringElementEx element =
        new StructuringElementEx(
            morphOpSize,
            morphOpSize,
            morphOpSize / 2,
            morphOpSize / 2,
            Emgu.CV.CvEnum.CV_ELEMENT_SHAPE.CV_SHAPE_ELLIPSE);

    Image<Gray, byte> filtered;
    filtered = gray.MorphologyEx(
        element,
        Emgu.CV.CvEnum.CV_MORPH_OP.CV_MOP_TOPHAT,
        1);

    // binarize image
    double thresh = CvInvoke.cvThreshold(
                    filtered,
                    filtered,
                    100,
                    255,
                    THRESH.CV_THRESH_BINARY);

    // find and filter out blobs
    Contour<Point> contours = filtered.FindContours(
        CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
        RETR_TYPE.CV_RETR_CCOMP);

    Contour<Point> cont = contours;
    while (cont != null)
    {
        // Remember the next contour for later and isolate the current one
        Contour<Point> next = cont.HNext;
        cont.HNext = null;

        MCvBox2D minAreaRect = cont.GetMinAreaRect();
        if (minAreaRect.size.Height < 1.5 * minAreaRect.size.Width)
        {
            // Keep this blob: draw its contour (in a fixed blue colour)
            CvInvoke.cvDrawContours(
                bgr,
                cont,
                new MCvScalar(255, 0, 0),
                new MCvScalar(255, 0, 0),
                2,
                2,
                LINE_TYPE.EIGHT_CONNECTED,
                new Point(0, 0));
        }

        // Now go to the next contour
        cont = next;
    }
    bgr.Save(@"c:\out.png");
}

(Sorry for my Emgu dialect.) You will get this result: [result images]
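Since the question uses Python, here is a rough cv2 translation of the same pipeline (kernel size and threshold copied from the Emgu code above; file names are placeholders, and the number of values returned by findContours differs between OpenCV versions):

import cv2
import numpy as np

bgr = cv2.imread('im.png')
gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)

# Top-hat to suppress the blurry background
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)

# Binarize
ret, binary = cv2.threshold(tophat, 100, 255, cv2.THRESH_BINARY)

# Find blobs and keep only those that are not too elongated
contours, hierarchy = cv2.findContours(binary.copy(), cv2.RETR_CCOMP,
                                       cv2.CHAIN_APPROX_SIMPLE)  # 3 return values on OpenCV 3.x
for cnt in contours:
    (cx, cy), (w, h), angle = cv2.minAreaRect(cnt)
    if h < 1.5 * w:
        cv2.drawContours(bgr, [cnt], -1, (255, 0, 0), 2)

cv2.imwrite('out.png', bgr)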


Comments

Thank you for your suggestions! The top-hat filter seems to be just what I needed to remove most of the background debris/noise. I'm still working on the code to keep track of all the particles, though; I posted my code so far in the edit.

Thanks again!

CVNovice (2014-03-25)

answered 2017-08-09 by Ziri

How about finding local maxima after applying a Gaussian filter?
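A rough sketch of how that could be done with cv2 (kernel sizes and the intensity threshold are guesses, not values from this answer):

import cv2
import numpy as np

gray = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(gray, (9, 9), 2)

# Grey-scale dilation replaces each pixel with the maximum of its neighbourhood;
# a pixel that equals that maximum (and is bright enough) is a local maximum.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
local_max = cv2.dilate(blurred, kernel)
peaks = (blurred == local_max) & (blurred > 50)

ys, xs = np.nonzero(peaks)
for x, y in zip(xs, ys):
    cv2.circle(gray, (int(x), int(y)), 3, 255, 1)
cv2.imwrite('peaks.png', gray)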


Comments

without code this answer seems useless

sturkmen (2017-08-12)
