
Getting 3D points from a point cloud to find object positions

asked 2019-02-02 22:19:11 -0600 by joe_jetson

updated 2019-02-03 01:00:06 -0600

Hardware: Kinect v1. Software: Ubuntu 16.04, OpenCV (cv2), Python

Hello. Currently in my project I have used OpenCV to do color-based object detection. My next step is to find the (x, y, z) coordinates, in mm, of the detected objects and pass them to another program that drives a robotic arm. An interface would be used to request the coordinates of the objects: say the user wants to pick up all green objects, then the program should return the coordinates of only the green objects, and the robotic arm will use them to pick up the colored objects. The problem I am running into is how to get these coordinates from the Kinect v1. Could I use get_depth() somehow to get the xyz coordinates? I have seen that the point cloud is an option, but I do not know how to use it to get coordinates and could not find any examples in Python. Could I also take a snapshot of the images to get the coordinates of the objects? Any advice would be appreciated. Attached is a picture of my results and my code for your reference. Thank you.

Edit: I found this repository: https://github.com/amiller/libfreenec... In calibkinect.py there is code that helps with getting the xyz coordinates from the point cloud. After playing with this code for a while I noticed that the image has to be refreshed, otherwise the xyz matrix stays the same. I am just having trouble understanding how to integrate this with my code.
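For anyone following along, here is a minimal sketch of how that could plug into the detection loop below. It assumes the depth2xyzuv() helper from the repository above (check the exact signature in your copy of calibkinect.py, it may differ), and that mask is one of the binary masks produced by cv2.inRange:

import freenect
import numpy as np
import calibkinect  #calibkinect.py from the repository linked above

def mask_to_xyz(mask):
    #grab a fresh depth frame on every call; reusing a stale frame is
    #why the xyz matrix appeared to never change
    depth, _ = freenect.sync_get_depth()
    #pixel coordinates of the detected object
    v, u = np.nonzero(mask)
    #back-project those pixels to camera-space 3D points (meters)
    xyz, uv = calibkinect.depth2xyzuv(depth[v, u], u, v)
    return xyz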

[image: screenshot of the color detection results]

import cv2
import numpy as np
import freenect

#function to get RGB image from kinect
def get_video():
    array,_ = freenect.sync_get_video()
    array = cv2.cvtColor(array,cv2.COLOR_RGB2BGR)
    return array

#function to get depth image from kinect
def get_depth():
    array,_ = freenect.sync_get_depth()
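    #note: truncating the 11-bit depth to uint8 is fine for display, but
    #it destroys the real range values; keep the raw array for measurements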
    array = array.astype(np.uint8)
    return array

if __name__ == "__main__":
    while True:
        #get a frame from RGB kinect camera
        frame = get_video()     
        #_, frame = cap.read()

        #get a frame from depth sensor
        #depth = get_depth()

        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)


        #Create kernel to use in filter
        kernel = np.ones((5, 5), np.uint8)

        #yellow range
        lower_yellow = np.array([15, 60, 30])
        upper_yellow = np.array([30, 255, 255])

        #Create filter for yellow
        yellow_mask = cv2.inRange(hsv, lower_yellow, upper_yellow) # Threshold the HSV image to get only yellow colors
        opening_y = cv2.morphologyEx(yellow_mask, cv2.MORPH_OPEN, kernel, iterations = 1) #Use morphology open to rid of false pos and false neg (noise)
        result_y = cv2.bitwise_and(frame, frame, mask = opening_y) #bitwise and the opening filter with the original frame 

        #green range
        lower_green = np.array([45, 45, 10])
        upper_green = np.array([100, 255, 255])

        #Create filter for green
        green_mask = cv2.inRange(hsv, lower_green, upper_green)
        opening_g = cv2.morphologyEx(green_mask, cv2.MORPH_OPEN, kernel, iterations = 1)
        result_g = cv2.bitwise_and(frame, frame, mask = opening_g)

        #blue range
        lower_blue = np.array([110, 70, 30])
        upper_blue = np.array([130, 255, 255])

        #Create filter for blue
        blue_mask = cv2.inRange(hsv, lower_blue, upper_blue)
        opening_b = cv2.morphologyEx(blue_mask, cv2.MORPH_OPEN, kernel, iterations = 1)
        result_b = cv2.bitwise_and(frame, frame, mask = opening_b)

        #show the original and filtered frames
        cv2.imshow('frame', frame)
        cv2.imshow('yellow', result_y)
        cv2.imshow('green', result_g)
        cv2.imshow('blue', result_b)

        #exit on Esc
        if cv2.waitKey(5) & 0xFF == 27:
            break

    cv2.destroyAllWindows()

1 answer


answered 2019-02-03 14:07:19 -0600 by kbarni

For the Kinect v1 you can use libfreenect or OpenNI2 (more complete, but I am not sure it has Python bindings).

Then set the registration mode to IMAGE_REGISTRATION_DEPTH_TO_COLOR in OpenNI2 or FREENECT_DEPTH_REGISTERED in libfreenect, so that the color and depth data are aligned pixel-to-pixel.
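With the libfreenect Python wrapper, a minimal capture sketch could look like the following (this assumes your build exposes the DEPTH_REGISTERED format; if not, DEPTH_MM plus a manual registration step is the fallback):

import cv2
import freenect

#registered depth: every depth pixel lines up with the same pixel in the
#RGB image and is expressed in millimeters
depth, _ = freenect.sync_get_depth(format=freenect.DEPTH_REGISTERED)
rgb, _ = freenect.sync_get_video()
bgr = cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR)  #OpenCV expects BGR order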

Create a stream for the color and for the depth images and capture the two streams continuously. Refer to the example code to see how to do this.

For each frame, compute a mask of the desired colored objects just as you did. If you have several objects of the same colour, separate them with connected component analysis (cv2.connectedComponentsWithStats).
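A sketch of that step, reusing the opening_g mask from the question's code (the area threshold is an arbitrary value for dropping noise blobs):

num, labels, stats, centroids = cv2.connectedComponentsWithStats(opening_g)
for i in range(1, num):            #label 0 is the background
    x, y, w, h, area = stats[i]    #bounding box of object i
    if area < 200:                 #arbitrary noise threshold
        continue
    cx, cy = centroids[i]          #center of gravity of object i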

Then compute the mean depth over the masked area (i.e. the area covered by a colored box) using the mean function: dmean = cv2.mean(image, mask)[0]. Get the center of gravity of the object as the middle of its bounding box (cx = x + w/2, cy = y + h/2).
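Continuing inside the loop above, and assuming depth is the registered depth map in millimeters captured earlier:

    obj_mask = np.uint8(labels == i) * 255       #mask of this object only
    dmean = cv2.mean(depth, mask=obj_mask)[0]    #cv2.mean returns a 4-tuple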

Now you have all the elements to get the (x, y, z) coordinates of the object: use the freenect_camera_to_world() function (libfreenect) or convertDepthToWorld() (OpenNI2).
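If those functions aren't reachable from your Python bindings, the pinhole back-projection is easy to do by hand. The intrinsics below are the widely circulated approximate values for the Kinect v1 depth camera (Nicolas Burrus' calibration); calibrate your own device for accurate results:

fx, fy = 594.21, 591.04        #approximate focal lengths, in pixels
ppx, ppy = 339.5, 242.7        #approximate principal point
X = (cx - ppx) * dmean / fx    #cx, cy: object center from the loop above
Y = (cy - ppy) * dmean / fy
Z = dmean                      #all three coordinates in millimeters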


Comments

When you said "Refer to the example code to see how to do this", what example code are you referring to?

joe_jetson ( 2019-02-03 16:35:26 -0600 )

As this is the most basic thing to do with an RGB-D camera, probably most tutorials or examples contain a similar capturing loop.

If your code works correctly (I didn't test it), it should already do most of this; you only commented out the depth capture and didn't do the depth-color registration.

As "official" example, there is SimpleViewer for OpenNI2.

kbarni ( 2019-02-04 06:36:08 -0600 )
