2021-03-02 03:11:38 -0600 | received badge | ● Famous Question (source) |
2018-12-10 01:12:57 -0600 | received badge | ● Notable Question (source) |
2017-12-17 12:41:33 -0600 | received badge | ● Popular Question (source) |
2017-02-10 06:23:22 -0600 | received badge | ● Notable Question (source) |
2016-04-16 00:19:54 -0600 | received badge | ● Popular Question (source) |
2016-02-25 07:56:38 -0600 | received badge | ● Student (source) |
2015-05-08 09:41:30 -0600 | asked a question | Android4OpenCV: setting resolution at startup I'm using Android4OpenCV to do some live image processing, and I'd like to use the smallest resolution the camera can offer; the default resolution is the largest the camera can offer. I'm looking at the 3rd example, which allows the user to change resolutions via a menu. I'd like to modify that example to change the resolution at startup instead of requiring the user to go through the menu. To do that, I simply add two lines to the otherwise empty method: [code omitted] The thing is, this works perfectly fine on my Galaxy Nexus running Android 4.2.2. The app starts up, and the resolution is set correctly. However, when I run the exact same app on a Nexus 7 tablet running Android 5.1, the app hangs on this call: [code omitted] Specifically, the application goes into this method: [code omitted] (Note: this code is internal to the OpenCV4Android library; it is not my code.) Looking at the logs, I can see that the thread gets stuck on this call: [code omitted] (Note: this code is also internal to the OpenCV4Android library, not mine.) I've tried futzing with that code, changing the notify() to notifyAll(), and maintaining a List of CameraWorker threads and joining each one. But no matter what, the app still hangs at the same call. My questions are:
I've also posted ... (more) |
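The attempted fix described above (switching notify() to notifyAll() and joining a List of worker threads) can be sketched in plain JDK Java. This is a hypothetical model, not the OpenCV4Android source: `WorkerTeardown`, `liveWorkers()`, and the field names are invented for illustration. The key points it demonstrates are that notifyAll() wakes every waiter where notify() would wake only one, and that a bounded join() keeps a stuck worker from hanging the caller forever.

```java
import java.util.ArrayList;
import java.util.List;

public class WorkerTeardown {
    private final Object lock = new Object();
    private volatile boolean running = true;
    private final List<Thread> workers = new ArrayList<>();

    /** Start n workers that block on the shared lock until shut down. */
    public void start(int n) {
        for (int i = 0; i < n; i++) {
            Thread t = new Thread(() -> {
                synchronized (lock) {
                    while (running) {
                        try {
                            lock.wait(); // woken by notifyAll() in stop()
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                            return;
                        }
                    }
                }
            });
            t.start();
            workers.add(t);
        }
    }

    /** Wake every waiter (notify() would wake only one) and join each thread. */
    public void stop() throws InterruptedException {
        synchronized (lock) {
            running = false;
            lock.notifyAll();
        }
        for (Thread t : workers) {
            t.join(1000); // bounded join: a stuck worker cannot hang the caller forever
        }
    }

    /** Count workers still alive, so teardown can be verified. */
    public int liveWorkers() {
        int alive = 0;
        for (Thread t : workers) {
            if (t.isAlive()) alive++;
        }
        return alive;
    }

    public static void main(String[] args) throws InterruptedException {
        WorkerTeardown wt = new WorkerTeardown();
        wt.start(3);
        Thread.sleep(50);          // let the workers reach wait()
        wt.stop();
        System.out.println(wt.liveWorkers()); // expect 0: all workers joined
    }
}
```

Note the `while (running)` re-check around wait(): a worker that starts after stop() has already run never waits at all, which avoids one classic lost-wakeup hang.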
2015-03-09 12:51:08 -0600 | answered a question | Recording Live OpenCV Processing on Android @HaDang pointed me to these links: http://www.walking-productions.com/no... https://code.google.com/p/javacv/sour... That example uses a Java wrapper of FFMPEG to do the video recording. This project is a pretty useful starting point for anybody wanting to do the same: https://github.com/vanevery/JavaCV_0.... I took the above project and hammered it into my example. It's very messy, but it works: (more) |
2015-02-25 07:41:06 -0600 | received badge | ● Enthusiast |
2015-02-23 13:20:47 -0600 | answered a question | Drawing lines from Canny? Eventually I used the setTo() method to set the color, and then the copyTo() method to copy that color back onto the original Mat: [code omitted] |
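The setTo()/copyTo() pattern described in that answer can be modeled in plain Java, with int arrays standing in for OpenCV Mats (no OpenCV dependency; `setTo` and `copyTo` here are simplified stand-ins for the real `Mat.setTo(Scalar, Mat)` and `Mat.copyTo(Mat, Mat)` mask overloads). The idea: paint a color wherever the binary edge mask is non-zero, then copy only those masked pixels back onto the original frame.

```java
public class MaskCopy {
    /** Model of Mat.setTo(value, mask): write value where mask != 0. */
    static void setTo(int[] img, int value, int[] mask) {
        for (int i = 0; i < img.length; i++) {
            if (mask[i] != 0) img[i] = value;
        }
    }

    /** Model of Mat.copyTo(dst, mask): copy src pixels to dst where mask != 0. */
    static void copyTo(int[] src, int[] dst, int[] mask) {
        for (int i = 0; i < src.length; i++) {
            if (mask[i] != 0) dst[i] = src[i];
        }
    }

    public static void main(String[] args) {
        int[] original = {10, 20, 30, 40};  // camera frame (grayscale stand-in)
        int[] edges    = { 0,  1,  0,  1};  // binary Canny output used as a mask
        int[] colored  = new int[original.length];

        setTo(colored, 255, edges);       // paint edge pixels white
        copyTo(colored, original, edges); // overlay them onto the frame

        System.out.println(java.util.Arrays.toString(original)); // [10, 255, 30, 255]
    }
}
```

The non-edge pixels (10 and 30) survive untouched, which is exactly the "show the white on the image being processed" behavior the question asks for.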
2015-02-23 13:16:21 -0600 | asked a question | Recording Live OpenCV Processing on Android I originally asked this question on StackOverflow here, but I haven't received any replies or answers. My goal is to do a couple of things:
I have both of them working, but the way I had to implement number 2 is ridiculous: [code omitted]
That works, but it comes with a ton of drawbacks: the framerate drops unbearably low during a recording, the stitching step takes about half a second per frame, and it runs out of memory for videos more than a couple of seconds long, and that's after I lower my camera's resolution to make sure the images are as small as possible. Even then, the video framerate is way out of whack with reality, and the video looks insanely sped up. This seems ridiculous for a lot of reasons, so my question is: is there a better way to do this? Here's a little example if anybody wants to run it. This requires the OpenCV Android project available here, and the JCodec Android project available here. Manifest.xml: <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.videotest" android:versionCode="1" android:versionName="1.0"> </manifest> MainActivity: (more) |
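The out-of-memory failure after "a couple seconds" is easy to quantify with back-of-the-envelope arithmetic. A sketch, assuming 640x480 RGBA frames at 30 fps (illustrative numbers; the post doesn't state the actual resolution or framerate):

```java
public class FrameMemory {
    /** Bytes needed to cache uncompressed frames for a clip. */
    static long bytesForClip(int width, int height, int bytesPerPixel,
                             int fps, int seconds) {
        return (long) width * height * bytesPerPixel * fps * seconds;
    }

    public static void main(String[] args) {
        // One 640x480 RGBA frame is ~1.2 MB, so a second of video is ~35 MB.
        long perSecond = bytesForClip(640, 480, 4, 30, 1);
        System.out.println(perSecond + " bytes per second");   // 36864000

        // Three seconds is already ~105 MB of raw Mats, more than many
        // Android per-app heap limits, which explains the quick OOM.
        long threeSeconds = bytesForClip(640, 480, 4, 30, 3);
        System.out.println(threeSeconds + " bytes for 3 s");   // 110592000
    }
}
```

This is why caching raw Mats and stitching afterwards can't scale: the fix the accepted answer points to (encoding frames on the fly via an FFMPEG wrapper) keeps only compressed output in memory.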
2015-02-15 10:20:53 -0600 | commented question | Drawing lines from Canny? @wuling I chose Canny because it was the first result for googling "OpenCV edge detection". |
2015-02-14 23:54:45 -0600 | asked a question | Drawing lines from Canny? I posted this on StackOverflow here, but after only getting a few views, I've decided to crosspost here. I'm a complete novice when it comes to OpenCV, so this is probably a dumb question. I'm just trying to get something basic up and running: I want to draw the edges detected by the Canny algorithm directly on the image coming in. I currently have this: [code omitted] I'm displaying the edge data from Canny directly, but now I want to get rid of the black and just show the white, on the image being processed. I've tried googling things like "using binary image as alpha mask", but after a day of reading tutorials and trying everything I can find, I'm still not sure I know what's going on. OpenCV seems very powerful, so this is probably a pretty easy thing to do, and I'm hoping somebody can point me in the right direction. Here's the code I'm using, most of which has been copied from the examples: [code omitted] If anybody can point me to a basic example that does what I want, I'd be very grateful! |