
CVNovice's profile - activity

2017-08-09 09:28:40 -0600 received badge  Notable Question (source)
2016-01-26 06:04:35 -0600 received badge  Popular Question (source)
2015-04-17 11:23:07 -0600 commented question How would I sort contours from cvFindContours using RETR_TREE into a tree/pyramid based on hierarchy?

Thanks for the tip. I didn't realize that. I just thought this forum would be more knowledgeable about OpenCV specifically, while Stack Overflow would be a good source of broader knowledge and has a larger user base. I'm not quite sure why it would be bad to post on two different forums? It seems like just a better way to have more people see the question and a potential answer.

2015-04-17 11:23:06 -0600 asked a question KMeans for Color Quantization: Do I use just color values or spatial+color values as sample data?

I've seen different examples where each row of the sample data is either (R,G,B) or (X,Y,R,G,B).

What is the difference between using just color values and using color plus spatial values? How does this affect the centroids generated from a color image?

My goal is to identify prominent colors of the image, for example, detecting eye color following face detection.
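Not an authoritative answer, but the difference shows up in how the sample matrix is built (a sketch below with a random stand-in image; either matrix can be passed to cv2.kmeans as np.float32). Color-only samples cluster purely by color, so one centroid can cover similar pixels scattered anywhere in the image; adding (X, Y) makes distance depend on position too, so clusters tend to be spatially compact regions. For identifying prominent colors (e.g. eye color), color-only samples are the usual choice.

```python
import numpy as np

# Small random stand-in image (h x w x 3); a real image would come from cv2.imread.
h, w = 4, 4
img = np.random.RandomState(0).randint(0, 256, (h, w, 3)).astype(np.float32)

# Layout 1 -- (R, G, B) only: one row per pixel, shape (h*w, 3).
# Centroids are pure "average colors" regardless of where pixels sit.
color_samples = img.reshape(-1, 3)

# Layout 2 -- (X, Y, R, G, B): prepend each pixel's coordinates, shape (h*w, 5).
# Centroids now trade off color similarity against spatial closeness.
ys, xs = np.mgrid[0:h, 0:w]
spatial_samples = np.column_stack([xs.ravel(), ys.ravel(), color_samples])
```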

2014-10-23 02:11:31 -0600 received badge  Necromancer (source)
2014-10-23 02:11:31 -0600 received badge  Self-Learner (source)
2014-10-22 21:26:41 -0600 answered a question How would I sort contours from cvFindContours using RETR_TREE into a tree/pyramid based on hierarchy?
import numpy as np

# Each hierarchy row is [next, previous, first_child, parent],
# the layout returned by cv2.findContours with cv2.RETR_TREE.
H = np.array(
    [[ 7, -1,  1, -1],
     [-1, -1,  2,  0],
     [-1, -1,  3,  1],
     [-1, -1,  4,  2],
     [-1, -1,  5,  3],
     [ 6, -1, -1,  4],
     [-1,  5, -1,  4],
     [ 8,  0, -1, -1],
     [-1,  7, -1, -1]])

def T(i):
    # collect rows whose parent is i, keep sibling order, then recurse
    children = [(h, j) for j, h in enumerate(H) if h[3] == i]
    children.sort(key=lambda c: c[0][1])
    return {c[1]: T(c[1]) for c in children}

print(T(-1))

Thanks to Falko at Stack Overflow for this answer.

2014-09-15 14:18:25 -0600 asked a question How would I sort contours from cvFindContours using RETR_TREE into a tree/pyramid based on hierarchy?

For example, if I were to input the hierarchy returned by cv2.RETR_TREE here

hierarchy = 
    array([[[ 7, -1,  1, -1],
            [-1, -1,  2,  0],
            [-1, -1,  3,  1],
            [-1, -1,  4,  2],
            [-1, -1,  5,  3],
            [ 6, -1, -1,  4],
            [-1,  5, -1,  4],
            [ 8,  0, -1, -1],
            [-1,  7, -1, -1]]])

I would like to get this output:

{0:{1:{2:{3:{4:{5:{},6:{}}}}}},
7:{},
8:{}}

I am looking to do this to build a tree model in Qt so I can easily see which contours contain which others. If you have a better idea of how to turn the hierarchy data into a Qt tree model that would also be appreciated.

Thanks in advance!

2014-07-02 17:29:05 -0600 commented answer Is it possible to get function and method documentation programmatically?

Not quite. I'm making a GUI that takes a regular cv2 function and lets you change its parameters and see the results of different values in real time.

I have written wrappers for most of the common methods, but I thought it would be really cool (if it's possible) to have a script generate the wrappers automatically for every cv2 method based on the documentation.

For instance, the threshold function gets passed its source image, then has a slider for changing the threshold value and max threshold value, and a combo box to select which type of threshold to use (binary, binary_inv).

cvtColor just has the combo box to select BGR2GRAY, BGR2HSV, etc.

Does that make sense? Give me a few minutes and I'll post a screenshot.

2014-07-02 15:32:44 -0600 commented answer Is it possible to get function and method documentation programmatically?

OK, a slightly bigger issue: I need to get this information into a class method call, but it seems help() automatically prints to stdout. I need to figure out how to redirect the output of help() so it can be captured and parsed in a script. Any ideas?
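One way around this, as a sketch: pydoc.render_doc returns the same text help() prints, and on Python 3.4+ contextlib.redirect_stdout can capture help() directly (on 2.7, which I'm using, you would reassign sys.stdout by hand instead).

```python
import io
import pydoc
from contextlib import redirect_stdout

# Option 1: render_doc returns help()'s text instead of printing it.
# plaintext renderer avoids the overstrike "bold" control characters.
text = pydoc.render_doc(len, renderer=pydoc.plaintext)

# Option 2 (Python 3.4+): capture help()'s stdout.
buf = io.StringIO()
with redirect_stdout(buf):
    help(len)
captured = buf.getvalue()
```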

2014-07-02 15:11:03 -0600 commented answer Is it possible to get function and method documentation programmatically?

Thank you! Now what about getting the type of each argument in the method call?

Note: I'm using Python 2.7, Ubuntu 14.04, and OpenCV 2.4

2014-07-02 14:45:01 -0600 asked a question Is it possible to get function and method documentation programmatically?

Is it possible to have python return a list or dictionary of method arguments and their type?

For instance, take the cv2.threshold function. If I go to the docs webpage I can see all of the info I want, but is it possible to get this information in code? I would like to query the function cv2.threshold and have something tell me it takes the arguments (src, thresh, maxval, type[, dst]). It would be ideal if it also told me what type or class the values are:

e.g. the docs say: Python: cv2.threshold(src, thresh, maxval, type[, dst]) → retval, dst

so I want to be able to do something like:

foo.getMethods(cv2.threshold) -> {src: mat, thresh: int, maxval: int, type: built-in [cv2.THRESH_BINARY, cv2.THRESH_BINARY_INV]}

All of this info is in the documentation on the web page; I just want to access it in code to dynamically set up a codebase I am working on for a GUI.

Can I access this info without having to setup a web-scraper to pull it directly from the website?
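One possible shortcut, sketched below: OpenCV's compiled functions don't cooperate with inspect.getargspec, but their __doc__ usually embeds the signature on the first line (the same "threshold(src, thresh, maxval, type[, dst]) -> retval, dst" string shown above). Parsing that line recovers the argument names, though not their types. The sample_doc string here is a stand-in for the real cv2.threshold.__doc__, and the exact doc format is an assumption based on the 2.4 bindings.

```python
import re

# Stand-in for cv2.threshold.__doc__'s first line (assumed format).
sample_doc = "threshold(src, thresh, maxval, type[, dst]) -> retval, dst"

def arg_names(doc_first_line):
    # grab everything between the outermost parentheses,
    # then strip the optional-argument brackets and whitespace
    inside = re.search(r'\((.*)\)', doc_first_line).group(1)
    return [a.strip(' []') for a in inside.split(',') if a.strip(' []')]

print(arg_names(sample_doc))  # -> ['src', 'thresh', 'maxval', 'type', 'dst']
```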

Edit: I've uploaded a video to show exactly what I want to do.

Up to now for each function I have to manually enter its parameters which look like this (for cv2.threshold or Binary Threshold in the video):

    def guiParams(self):
        thresh = {'type': 'int',
                  'name': 'threshold value',
                  'min': 0,
                  'max': 255,
                  'default': 127}
        maxval = {'type': 'int',
                  'name': 'max threshold value',
                  'min': 0,
                  'max': 255,
                  'default': 255}
        threshtype = {'type': 'builtin',
                      'name': 'thresholding type',
                      'options': {'THRESH_BINARY': cv2.THRESH_BINARY,
                                  'THRESH_BINARY_INV': cv2.THRESH_BINARY_INV,
                                  'THRESH_TRUNC': cv2.THRESH_TRUNC,
                                  'THRESH_TOZERO': cv2.THRESH_TOZERO,
                                  'THRESH_TOZERO_INV': cv2.THRESH_TOZERO_INV},
                      'default': 'THRESH_BINARY'}
        return [thresh, maxval, threshtype]

This question is trying to figure out how I can get the information in these dictionaries (primarily the argument names and types; default/typical values would be a bonus) programmatically, so I can ensure the GUI will work with every image-processing method in the OpenCV library.

2014-03-26 02:05:16 -0600 received badge  Student (source)
2014-03-25 19:52:49 -0600 commented answer Best method for multiple particle tracking with noise and possible overlap?

Thank you for your suggestions! The top hat filter seems to be just what I needed to remove most of the background debris/noise. I'm still working on the code to keep track of all the particles, though; I posted my code so far in the edit.

Thanks again!

2014-03-25 19:36:07 -0600 commented question Best method for multiple particle tracking with noise and possible overlap?

I took your suggestion of using a top hat filter and implemented it before feeding the sequence to the LK tracker. I posted the code I'm now using in the edit above.
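For reference, the white top-hat used here is the image minus its morphological opening (erosion followed by dilation), which keeps bright features smaller than the structuring element; in OpenCV it's cv2.morphologyEx(img, cv2.MORPH_TOPHAT, kernel). A NumPy-only sketch of the operation, just to show the idea:

```python
import numpy as np

def _morph(a, k, op):
    # slide a k x k window over 'a' and take min (erosion) or max (dilation)
    p = k // 2
    padded = np.pad(a, p, mode='edge')
    windows = [padded[i:i + a.shape[0], j:j + a.shape[1]]
               for i in range(k) for j in range(k)]
    return op(windows, axis=0)

def tophat(a, k=3):
    # white top-hat = image - opening(image); opening = dilate(erode(image))
    opened = _morph(_morph(a, k, np.min), k, np.max)
    return a - opened

# a lone bright pixel survives the top-hat; a flat background maps to zero
img = np.zeros((7, 7))
img[3, 3] = 10.0
```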

2014-03-18 13:20:02 -0600 received badge  Supporter (source)
2014-03-17 19:23:25 -0600 received badge  Editor (source)
2014-03-17 19:22:27 -0600 asked a question Best method for multiple particle tracking with noise and possible overlap?

Hello, I am working on a school project where I want to track the number, direction, and velocity of particles moving across a flow chamber. I have a series of timestamped images which were taken under fluorescent light showing bright particles flowing over a view field.

The particles I'm interested in tracking are the bright round dots (highlighted in green), while excluding motion blur from other particles that were not in focus.

Image Showing: Sample (green) vs Noise (red)

Here is a series of sample images from the data set: Sample Data

I have started working with both optical flow examples from the docs but they pick up all of the noise as tracks which I need to avoid. What method would you recommend for this application?

EDIT: Using the suggestion below, I've added a top-hat filter before running the sequence through a Lucas-Kanade motion tracker. I just modified it slightly to return all of the tracks it picks up, and then I go through them to remove duplicates and calculate velocities for each tracked particle.

This method still seems to pick up a lot of noise in the data, perhaps I haven't used optimal parameters for the LK filter?

lk_params = dict( winSize  = (10, 10),
                  maxLevel = 5,
                  criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

feature_params = dict( maxCorners = 3000,
                       qualityLevel = 0.5,
                       minDistance = 3,
                       blockSize = 3 )

class App:
    def __init__(self, video_src):
        self.track_len = 50
        self.detect_interval = 1
        self.tracks = []
        self.allTracks = []
        self.cam = video.create_capture(video_src)
        self.frame_idx = 0

    def run(self):
        maxFrame = 10000
        while True:
            ret, frame = self.cam.read()
            if frame is None:
                break
            if self.frame_idx > maxFrame:
                break
            if frame.shape[2] == 3:
                frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (4, 4))
                tophat1 = cv2.morphologyEx(frame, cv2.MORPH_TOPHAT, kernel)
                ret, frame_gray = cv2.threshold(tophat1, 127, 255, cv2.THRESH_BINARY)
            else:
                break
            vis = cv2.cvtColor(frame_gray, cv2.COLOR_GRAY2BGR)
            if len(self.tracks) > 0:
                img0, img1 = self.prev_gray, frame_gray
                p0 = np.float32([tr[-1] for tr in self.tracks]).reshape(-1, 1, 2)
                p1, st, err = cv2.calcOpticalFlowPyrLK(img0, img1, p0, None, **lk_params)
                p0r, st, err = cv2.calcOpticalFlowPyrLK(img1, img0, p1, None, **lk_params)
                # forward-backward check: keep only points that track consistently
                d = abs(p0 - p0r).reshape(-1, 2).max(-1)
                good = d < 1
                new_tracks = []
                for tr, (x, y), good_flag in zip(self.tracks, p1.reshape(-1, 2), good):
                    if not good_flag:
                        continue
                    tr.append((x, y))
                    if len(tr) > self.track_len:
                        del tr[0]
                    new_tracks.append(tr)
                    cv2.circle(vis, (x, y), 2, (0, 255, 0), -1)
                self.tracks = new_tracks
                cv2.polylines(vis, [np.int32(tr) for tr in self.tracks], False, (0, 255, 0))
                draw_str(vis, (20, 20), 'track count: %d' % len(self.tracks))

            if self.frame_idx % self.detect_interval == 0:
                mask = np.zeros_like(frame_gray)
                mask[:] = 255
                for x, y in [np.int32(tr[-1]) for tr in self.tracks]:
                    cv2.circle(mask, (x, y), 5, 0, -1)
                p = cv2.goodFeaturesToTrack(frame_gray, mask=mask, **feature_params)
                if p is not None:
                    for x, y in np.float32(p).reshape(-1, 2):
                        self ...
(more)
2014-03-17 19:21:12 -0600 asked a question Best method for Multiple Object tracking with noise and overlap?

Hello, I am working on a school project where I want to track the number, direction, and velocity of particles moving across a flow chamber. I have a series of timestamped images which were taken under fluorescent light showing bright particles flowing over a view field.

The particles I'm interested in tracking are the bright round dots (shown in green), while excluding motion blur from other particles that were not in focus.

Image Showing: Sample (green) vs Noise (red)

Here is a series of sample images from the data set: Sample Data

I have started working with both optical flow examples from the docs but they pick up all of the noise as tracks which I need to avoid. What method would you recommend for this application?

2014-02-05 14:35:02 -0600 commented question [Python] Would this GUI concept be a good open source contribution?

Thanks for the response. I've got a working prototype I did with PyQt; I'll post some screenshots soon.

2014-02-04 23:50:33 -0600 received badge  Organizer (source)
2014-02-04 17:21:18 -0600 asked a question [Python] Would this GUI concept be a good open source contribution?

Background: I am working on a project where I am trying to automatically identify an object and perform color calibration and segmentation on a subset of an image. The image can be taken from any camera and is uploaded to a server. I am trying to normalize and compensate for issues such as shadows, non-uniform illumination, and the zoom, pan, and tilt of the camera. The problem I have is that my code works well on images that match the test images I used, but subtle variations can throw it off.

GUI Proposal: I am considering making a tool that would let each step in an image-processing algorithm be tracked and changed on the fly.

For instance, If these are the steps:

  1. BGR > Gray

  2. Gray >Gaussian Blur

  3. Gaussian Blur > Histogram Equalization

  4. Histogram Equalization > Adaptive Threshold

  5. Adaptive Threshold > Morph Close

  6. Morph Close > Morph Open

I would have a single window pop up with prev and next buttons. Each step would be loaded into an array along with the parameters important to it. The main window will allow you to scroll through the stack of steps for each process. A prev and next button, along with a slider to quickly cycle through all the steps, would be present in every window.

For example, all step one would have is the basic GUI. Step two, however, would have a slider to adjust the kernel size of the blur, in addition to the basic elements. Step four would have multiple sliders to adjust all of the parameters involved in an adaptive threshold.

For instance, when you go to step four you can change the threshold value, and the image will automatically update in the window to reflect the new values. If you hit next, it will use your updated image rather than one from the original hard-coded value.

I hope my description made sense, but my thought here is that I waste a lot of time with trial and error trying to get parameters right in some of these functions (primarily thresholds, where lighting and shadows vary).

Main Question: Would this be a useful tool for anyone (make a valid contribution to the open source project?) or am I just doing CV wrong? Is there some function I can use to calculate optimal parameters for each image without having to manually play with the values?
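The stepped pipeline described above could be sketched as a list of (function, parameters) entries that re-runs from the source whenever a slider changes. Everything below is illustrative (run_pipeline, the 'fn'/'params' keys, and the placeholder steps are made-up names, not an existing API); a real tool would put cv2 calls in the steps:

```python
import numpy as np

def run_pipeline(img, steps, upto=None):
    # apply each step's function with its current parameters, in order;
    # 'upto' lets a GUI preview the result after any intermediate step
    out = img
    for step in (steps if upto is None else steps[:upto]):
        out = step['fn'](out, **step['params'])
    return out

steps = [
    # stand-ins for BGR->gray and blur; parameters live next to the function,
    # so a slider only has to rewrite step['params'] and re-run the pipeline
    {'fn': lambda im: im.mean(axis=2), 'params': {}},
    {'fn': lambda im, ksize=3: im, 'params': {'ksize': 5}},
]
gray = run_pipeline(np.ones((4, 4, 3)), steps)
```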