
Sinjon's profile - activity

2020-06-03 11:33:30 -0600 received badge  Popular Question (source)
2017-05-05 14:29:28 -0600 asked a question How can I use time as my file name and have the save function in a class

Hello,

I'm trying to use the current time as my file name but can't get it working. I'd also like the save logic in a class, but I couldn't get that working either.

This is my current code:

fourcc = cv2.VideoWriter_fourcc(*args["codec"])
writer = None
(h, w) = (None, None)
zeros = None

if writer is None:
    (h, w) = frame.shape[:2]
    writer = cv2.VideoWriter(args["output"], fourcc, args["fps"],
        (w, h), True)
    zeros = np.zeros((h, w), dtype="uint8")
    output = np.zeros((h, w, 3), dtype="uint8")
    output[0:h, 0:w] = frame
    writer.write(output)


def show_time():
    rightNow = datetime.datetime.now()
    # no ":" in the name -- it is not a legal filename character everywhere
    currentTime = ("tracker_%04d%02d%02d_%02d%02d%02d.avi" %
        (rightNow.year, rightNow.month, rightNow.day,
         rightNow.hour, rightNow.minute, rightNow.second))
    return currentTime

args["output"] = show_time()

I also tried using the datetime object directly,

now = datetime.datetime.now()
writer = cv2.VideoWriter(now".avi", fourcc, args["fps"],

but now".avi" is a syntax error: the datetime has to be formatted into a string first.
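A minimal sketch of one way to build the timestamped name (`timestamped_filename` is a hypothetical helper name, not from the original code). `cv2.VideoWriter` needs a plain string, and ":" is avoided because it is not a legal filename character on all systems:

```python
import datetime

def timestamped_filename(prefix="tracker"):
    # Produces e.g. "tracker_20170505_142928.avi" -- no ":" characters,
    # which are not allowed in filenames on some systems.
    now = datetime.datetime.now()
    return now.strftime(prefix + "_%Y%m%d_%H%M%S.avi")

# Then, before creating the writer:
#   args["output"] = timestamped_filename()
```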

Then, for saving via a class: the code below worked when it was writing the file inline, but once I moved it into a function the variables I put in the brackets didn't agree with (h, w), and I'm also not sure how to call the function.

def save(frame):
    (h, w) = frame.shape[:2]
    writer = cv2.VideoWriter(args["output"], fourcc, args["fps"],
        (w, h), True)
    output = np.zeros((h, w, 3), dtype="uint8")
    output[0:h, 0:w] = frame
    return writer, output

if writer is None:
    writer, output = save(frame)
    writer.write(output)
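One way to wrap the save logic in a class is to create the writer lazily on the first frame, since the dimensions aren't known until then. This is a sketch under assumptions: `VideoSaver` and `writer_factory` are hypothetical names, and the OpenCV wiring shown in the comments is untested:

```python
class VideoSaver:
    """Create the video writer lazily on the first frame, then write frames."""

    def __init__(self, path, writer_factory):
        # writer_factory(path, (w, h)) must return an object with a
        # .write(frame) method, e.g. a cv2.VideoWriter.
        self.path = path
        self.writer_factory = writer_factory
        self.writer = None

    def save(self, frame):
        if self.writer is None:
            # Frame dimensions are only known once the first frame arrives.
            (h, w) = frame.shape[:2]
            self.writer = self.writer_factory(self.path, (w, h))
        self.writer.write(frame)

# With OpenCV, the wiring would look roughly like this (untested sketch):
#   factory = lambda p, size: cv2.VideoWriter(p, fourcc, args["fps"], size, True)
#   saver = VideoSaver(show_time(), factory)
#   ...then inside the frame loop:  saver.save(frame)
```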

Thanks in advance!

2017-05-03 10:03:53 -0600 commented answer fps - how to divide count by time function to determine fps

Sweet, thanks! I changed it for Python and got the following results, which don't seem correct to me...

tickmark = cv2.getTickCount()

11877917107929

And for

tickmark = cv2.getTickFrequency()

1000000000.0
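Those numbers are actually consistent with OpenCV's tick API: `getTickFrequency()` is ticks per second (here 1e9, i.e. nanosecond ticks), and `getTickCount()` is ticks since some fixed reference point, so 11877917107929 ticks is roughly 11878 seconds. A sketch of the usual FPS pattern (`measure_fps` is a hypothetical helper; the OpenCV calls appear only in comments):

```python
def measure_fps(n_frames, elapsed_seconds):
    # FPS is simply frames processed divided by wall-clock seconds.
    return n_frames / elapsed_seconds

# With the tick API (sketch, assuming cv2 is imported):
#   start = cv2.getTickCount()
#   ... process `counter` frames ...
#   elapsed = (cv2.getTickCount() - start) / cv2.getTickFrequency()
#   fps = counter / elapsed
```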

2017-05-03 08:03:50 -0600 asked a question fps - how to divide count by time function to determine fps

Hello,

I have a counter working that counts every frame. What I want to do is divide this by elapsed time to determine the FPS of my program, but I'm not sure how to perform arithmetic with the timing functions in Python.

I've tried initializing time as

fps_time = time.time 
fps_time = float(time.time)
fps_time = np.float(time.time)
fps_time = time()

Then for calculating the fps,

FPS = (counter / fps_time)
FPS = float(counter / fps_time)
FPS = float(counter (fps_time))

But the errors I'm getting are "object is not callable" or "unsupported operand type(s) for /: 'int' and 'builtin_function_or_method'".
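Both errors come from using the function object `time.time` instead of calling it with parentheses. A minimal sketch of the working pattern (the loop body is just a stand-in for real per-frame work):

```python
import time

start = time.time()        # note the parentheses: call the function
counter = 0
for _ in range(5):
    counter += 1           # stands in for processing one frame
    time.sleep(0.01)
elapsed = time.time() - start
fps = counter / elapsed
```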

2017-05-02 12:35:54 -0600 commented question OpenCV speed tracker logic / formula

I was able to get arcLength working after some trial and error and could calculate the diameter! Now the problem is that the diameter is constantly changing as the video detects the contours. I want to put the first 5 values into an np.array and calculate the mean to use as my object size, but I'm having difficulty setting the array up to contain only 5 values.

av_diameter = np.array(diameter) 
av_diameter = np.mean(av_diameter)

I've tried passing (5), a data type, and also using .shape, but I'm having no luck. Also, once the array is filled, will those values be kept, or will they be replaced by new values entering the array? @LBerger
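On the keep-or-replace question: a plain Python list capped at five entries keeps the first five readings, whereas `collections.deque(maxlen=5)` keeps only the most recent five, silently dropping older values as new ones arrive. A sketch of the first-five approach (`record_diameter` and `first_diameters` are hypothetical names):

```python
first_diameters = []

def record_diameter(d, n=5):
    # Collect only the first n readings; once full, later readings are
    # ignored and the mean of the stored values is returned.
    if len(first_diameters) < n:
        first_diameters.append(d)
    if len(first_diameters) == n:
        return sum(first_diameters) / n
    return None
```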

2017-05-02 12:25:54 -0600 commented answer camera calibration - tracking object distance

The ball in this project will only be travelling up and down, so that'll work perfectly. I was able to get arcLength working after some trial and error and could calculate the diameter! Now the problem is that the diameter is constantly changing as the video detects the contours. I want to put the first 5 values into an np.array and calculate the mean to use as my object size.

But I'm having difficulty setting the array up to only contain 5 values.

av_diameter = np.array(diameter) 
av_diameter = np.mean(av_diameter)

I've tried passing (5), a data type, and also using .shape, but I'm having no luck. Also, once the array is filled, will those values be kept, or will they be replaced by new values entering the array? @Tetragramm

2017-05-01 09:44:33 -0600 commented question OpenCV speed tracker logic / formula

Okay, how can I calculate the size of a contour? @LBerger

2017-04-30 16:54:30 -0600 asked a question OpenCV speed tracker logic / formula

Hello,

My aim is to create a tracker that follows a coloured ball and outputs its speed. I'm new to programming and can't get my head around the logic or the functions needed to calculate the speed.

So far I've got my code to:

  • convert to hsv
  • detect the contour
  • draw a circle around the ball & calculate the centre point
  • track its previous points

I want to implement the speed detection from http://www.pyimagesearch.com/2015/09/21/opencv-track-object-movement/ . In that code the previous coordinates are used to determine the direction of travel; I want to use the data from those previous points to calculate the speed.

This is a snippet of the code that stores the coordinates and determines the direction:

# loop over the set of tracked points
for i in np.arange(1, len(pts)):
    # if either of the tracked points are None, ignore them
    if pts[i - 1] is None or pts[i] is None:
        continue

    # check to see if enough points have been accumulated in
    # the buffer
    if counter >= 10 and i == 1 and pts[-10] is not None:
        # compute the difference between the x and y
        # coordinates and re-initialize the direction
        # text variables
        dX = pts[-10][0] - pts[i][0]
        dY = pts[-10][1] - pts[i][1]
        (dirX, dirY) = ("", "")

        # ensure there is significant movement in the
        # x-direction
        if np.abs(dX) > 20:
            dirX = "East" if np.sign(dX) == 1 else "West"

I've calibrated my camera and know the px/mm of my Pi camera. So say my ball is 10 cm across; in the video capture its size is 100 px, so 10 px is 1 cm at its current distance from the camera.

What I can't figure out is how to calculate the size of the contour found. I tried:

   area = cv2.contourArea(cnt) 
   diameter = np.sqrt(4*area/np.pi)

but from that got an "is not a numpy array, neither a scalar" error. I tried creating an array for cnt and also putting area in where cnt is declared, but no luck.
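That error usually means the whole contour *list* from `cv2.findContours` was passed where a single contour was expected. The area-to-diameter step itself is just the equivalent-circle formula; a sketch (`contour_diameter` is a hypothetical helper, and the OpenCV lines appear only as comments):

```python
import math

def contour_diameter(area):
    # area = pi * (d / 2) ** 2  =>  d = sqrt(4 * area / pi)
    return math.sqrt(4.0 * area / math.pi)

# With OpenCV (sketch): pick ONE contour first, e.g. the largest:
#   c = max(cnts, key=cv2.contourArea)
#   diameter = contour_diameter(cv2.contourArea(c))
```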

And then the next step: making use of the coordinates to calculate speed. The ball will be going up and down, but it won't move in a perfectly straight line. If I know the frames per second and use that as my time (say it's 20 fps and I count 40 frames, that's 2 seconds), I can then apply the speed formula. But how would I determine how many pixels the tracker has travelled in those 40 frames?
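Putting the pieces together, speed is the pixel displacement between two tracked points, converted to real units with the calibrated scale, divided by the elapsed time (frames / fps). A sketch with hypothetical names (`speed_cm_per_s`, `px_per_cm`):

```python
import math

def speed_cm_per_s(p0, p1, n_frames, fps, px_per_cm):
    # Straight-line pixel distance between the first and last points...
    distance_px = math.hypot(p1[0] - p0[0], p1[1] - p0[1])
    # ...converted to centimetres using the calibrated scale...
    distance_cm = distance_px / px_per_cm
    # ...divided by the elapsed time implied by the frame count.
    elapsed_s = n_frames / fps
    return distance_cm / elapsed_s
```

With the 10 px = 1 cm, 20 fps figures above, a ball moving 50 px over 40 frames travels 5 cm in 2 s, i.e. 2.5 cm/s.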

Thanks in advance, it's really appreciated!

2017-04-30 10:40:31 -0600 commented answer camera calibration - tracking object distance

Brilliant, thank you!

My end goal is to track the speed of the ball. If I determine the size of the ball in pixels, can I use that to calculate speed? Say the ball is 10 cm in real life and 100 px in the video stream, meaning that 10 px is the equivalent of 1 cm. If I can find the distance moved per frame, can I use this info to calculate speed?

I can detect the ball, but I'm not sure how to determine the ball's pixel size. @Tetragramm

2017-04-29 05:13:37 -0600 asked a question camera calibration - tracking object distance

Hello,

I'm trying to calibrate my camera so I can eventually figure out how far an object is from the camera.

I've been following this question, which details the equation for finding the distance of an object: http://stackoverflow.com/questions/14038002/opencv-how-to-calculate-distance-between-camera-and-object-using-image

There are a couple of things I'm unsure about. This is my matrix after running calibrate.py with 30 or so photos; I ran it another time with a different set of photos and got pretty much the same results:

RMS: 0.230393020863
camera matrix:
 [[ 294.17185696    0.          153.23247818]
 [   0.          295.43662344  119.46194893]
 [   0.            0.            1.        ]]
distortion coefficients:  [ 0.14143871 -0.76981318 -0.01467287 
-0.00334742  0.88460406]

In the other thread the matrix was built for an iPhone 5S camera; the f_x and f_y results were 2.8036 but were written down as:

f_x = 2803
f_y = 2805
c_x = 1637
c_y = 1271

Why was it multiplied by 1000? Should mine then be as follows?

F_X = 294171
F_Y = 295436
C_X = 153232
C_Y = 119461

Further down, where pixels are calculated for a lower-resolution image, the object comes out at 41 pixels. I've got code working to track a blue ball, shown below. What do I need to do to calculate the size of the ball?

if len(cnts) > 0:
    c = max(cnts, key=cv2.contourArea)
    ((x, y), radius) = cv2.minEnclosingCircle(c)
    M = cv2.moments(c)
    center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))

    if radius > 10:
        cv2.circle(frame, (int(x), int(y)), int(radius),
            (0, 255, 255), 2)
        cv2.circle(frame, center, 5, (0, 0, 255), -1)
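Given this snippet, the ball's apparent size in pixels is simply twice the `minEnclosingCircle` radius, and with the focal length from the camera matrix the pinhole model gives the distance. A sketch with hypothetical helper names (focal length in px, real diameter in mm):

```python
def ball_diameter_px(radius):
    # cv2.minEnclosingCircle already returns the radius in pixels.
    return 2.0 * radius

def distance_mm(focal_px, real_diameter_mm, diameter_px):
    # Pinhole model: Z = f * D / d, where f is the focal length in px,
    # D the real-world size, and d the apparent size in the image.
    return focal_px * real_diameter_mm / diameter_px
```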

Thanks for your help!!

2017-04-26 13:27:56 -0600 asked a question How to stop deque / tracker when it reaches the highest point

I'm new to coding and am trying to get an OpenCV project working, but I can't get my head around a couple of things.

My aim is to create a barbell tracker. I want the code to track the barbell, or its colour, until it reaches the highest point of its journey; using the distance from start to finish, I want to calculate velocity.

So far I can get my code to:

  • initiate recording
  • detect the coloured ball, draw a circle around it and find the centroid
  • track its previous points by adding them to the deque
  • write video

Firstly, I would like the deque / tracker to stop when the bar reaches the highest point of its journey. What I can't get my head around is the criterion and logic for it to stop adding coordinates to the queue being used to track the colour.

Any help would be greatly appreciated
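One possible stopping criterion: in image coordinates y grows downward, so the highest point of the bar's path is the minimum y seen so far; stop appending once the current y is clearly below that minimum again. A sketch (`reached_apex` is a hypothetical name; `tolerance` absorbs a few pixels of jitter):

```python
def reached_apex(y_history, tolerance=2):
    # y_history holds the tracked centre's y-coordinate per frame.
    if len(y_history) < 2:
        return False
    # The bar has passed its peak once y starts increasing again
    # by more than `tolerance` pixels beyond the minimum seen so far.
    return y_history[-1] > min(y_history) + tolerance
```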

2017-04-05 05:05:33 -0600 received badge  Enthusiast
2017-04-02 05:03:08 -0600 commented question NoneType attribute error, no object shape & omxplayer unable to play video

Any idea, berak?

2017-03-31 13:43:13 -0600 commented question NoneType attribute error, no object shape & omxplayer unable to play video

I'm trying to output the build information as it's shown in the terminal, but I'm not sure how to stop it from grouping?

2017-03-31 13:02:25 -0600 received badge  Editor (source)
2017-03-31 11:56:21 -0600 asked a question NoneType attribute error, no object shape & omxplayer unable to play video

Hello,

I'm trying to get the following working:

# import the necessary packages
from __future__ import print_function
from imutils.video import VideoStream
import numpy as np
import argparse
import imutils
import time
import cv2

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-o", "--output", required=True,
    help="path to output video file")
ap.add_argument("-p", "--picamera", type=int, default=-1,
    help="whether or not the Raspberry Pi camera should be used")
ap.add_argument("-f", "--fps", type=int, default=20,
    help="FPS of output video")
ap.add_argument("-c", "--codec", type=str, default="MJPG",
    help="codec of output video")
args = vars(ap.parse_args())

# initialize the video stream and allow the camera
# sensor to warmup
print("[INFO] warming up camera...")
vs = VideoStream(usePiCamera=args["picamera"] > 0).start()
time.sleep(2.0)

# initialize the FourCC, video writer, dimensions of the frame, and
# zeros array
fourcc = cv2.VideoWriter_fourcc(*args["codec"])
writer = None
(h, w) = (None, None)
zeros = None
# loop over frames from the video stream
while True:
    # grab the frame from the video stream and resize it to have a
    # maximum width of 300 pixels
    frame = vs.read()
    frame = imutils.resize(frame, width=300)

    # check if the writer is None
    if writer is None:
        # store the image dimensions, initialize the video writer,
        # and construct the zeros array
        (h, w) = frame.shape[:2]
        writer = cv2.VideoWriter(args["output"], fourcc, args["fps"],
            (w * 2, h * 2), True)
        zeros = np.zeros((h, w), dtype="uint8")

    # break the image into its RGB components, then construct the
    # RGB representation of each frame individually
    (B, G, R) = cv2.split(frame)
    R = cv2.merge([zeros, zeros, R])
    G = cv2.merge([zeros, G, zeros])
    B = cv2.merge([B, zeros, zeros])

    # construct the final output frame, storing the original frame
    # at the top-left, the red channel in the top-right, the green
    # channel in the bottom-right, and the blue channel in the
    # bottom-left
    output = np.zeros((h * 2, w * 2, 3), dtype="uint8")
    output[0:h, 0:w] = frame
    output[0:h, w:w * 2] = R
    output[h:h * 2, w:w * 2] = G
    output[h:h * 2, 0:w] = B

    # write the output frame to file
    writer.write(output)

    # show the frames
    cv2.imshow("Frame", frame)
    cv2.imshow("Output", output)
    key = cv2.waitKey(1) & 0xFF

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break

# do a bit of cleanup
print("[INFO] cleaning up...")
cv2.destroyAllWindows()
vs.stop()
writer.release()

Build Information:

UI: 
    QT:                          NO
    GTK+ 2.x:                    YES (ver 2.24.25)
    GThread :                    YES (ver 2.42.1)
    GtkGlExt:                    NO
    OpenGL support:              NO
    VTK support:                 NO

  Media I/O: 
    ZLib:                        /usr/lib/arm-linux-gnueabihf/libz.so (ver 1.2.8)
    JPEG:                        libjpeg (ver 90)
    WEBP:                        build (ver 0.3.1)
    PNG:                         /usr/lib/arm-linux-gnueabihf/libpng.so (ver 1.2.50)
    TIFF:                        build (ver 42 - 4.0.2)
    JPEG 2000:                   build (ver 1.900.1)
    OpenEXR:                     build (ver 1.7.1)
    GDAL:                        NO
    GDCM:                        NO

  Video I/O:
    DC1394 1.x:                  NO
    DC1394 2.x ...