
Yasser Akram's profile - activity

2017-06-12 14:28:08 -0600 commented question Sending Mat over the web

Yes. I'll be sending it with an HttpClient "POST".

2017-06-12 05:56:35 -0600 asked a question Sending Mat over the web

I need to send a Mat over the web (to an ASP.NET C# backend). Any ideas or ready examples?
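One approach that should work (a sketch, not a tested recipe: libcurl stands in here for whatever HTTP client the sender uses, and the URL is a placeholder) is to compress the Mat with cv::imencode and POST the raw bytes; the backend then reads the request body and decodes it. The same bytes could equally be sent from a C# HttpClient.

#include <opencv2/opencv.hpp>
#include <curl/curl.h>
#include <vector>

//POST a Mat as a JPEG request body; returns true on success.
bool postMat(const cv::Mat& img, const char* url)
{
    std::vector<uchar> buf;
    cv::imencode(".jpg", img, buf);              //serialize Mat -> JPEG bytes

    CURL* curl = curl_easy_init();
    if (!curl) return false;
    curl_slist* headers = curl_slist_append(nullptr, "Content-Type: image/jpeg");
    curl_easy_setopt(curl, CURLOPT_URL, url);    //e.g. "http://myserver/api/upload" (placeholder)
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, (const char*)buf.data());
    curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, (long)buf.size());
    CURLcode res = curl_easy_perform(curl);

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    return res == CURLE_OK;
}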

2017-06-12 05:30:08 -0600 commented question kNN implementation and required input/output

Not exactly sure if it can be solved with regression. Let me know your thoughts.

2017-04-04 15:36:11 -0600 asked a question kNN implementation and required input/output

I found ready-made code but have some issues mapping what I've implemented in [weka] onto it, especially how to supply my data to the algorithm. I have four numeric columns: x, y, speed, factor. [x] and [y] are the coordinates of the object, [speed] is the speed of the object, and [factor] is the initial speed divided by the current speed of the object. I need to build a kNN model so I can supply it with [x, y, speed] and get a predicted [factor].

I have sample data like the following (an excerpt from 50k rows):

x, y, speed, factor
414, 369, 250.004761, 1.1
418, 360, 225.004285, 1.222222221
423, 352, 225.004285, 1.222222221
427, 344, 200.003809, 1.374999998
431, 336, 200.003809, 1.374999998
435, 329, 200.003809, 1.374999998
438, 322, 175.003333, 1.571428568
441, 315, 175.003333, 1.571428568
444, 309, 150.002856, 1.83333334
448, 303, 175.003333, 1.571428568
451, 297, 150.002856, 1.83333334
454, 291, 150.002856, 1.83333334
457, 285, 150.002856, 1.83333334
460, 280, 125.00238, 2.200000008
463, 275, 125.00238, 2.200000008
466, 270, 125.00238, 2.200000008
469, 265, 125.00238, 2.200000008
472, 260, 125.00238, 2.200000008
474, 256, 100.001904, 2.75000001
477, 251, 125.00238, 2.200000008

Below is the code I found. I can't figure out how to map my table onto it, or how to supply the input (450, 300, 100.002834) to get the predicted/calculated [factor].

//Includes needed for this snippet (OpenCV 3.x)
#include <opencv2/ml.hpp>
#include <iostream>

using namespace cv;
using namespace cv::ml;
using namespace std;

//Be sure to change number_of_... to fit your data!
//The second constructor argument is the column count: one column per
//feature for the feature Mats. Rows start at 0 and are appended
//(push_back) as samples are loaded.
Mat matTrainFeatures(0, number_of_train_elements, CV_32F);
Mat matSample(0, number_of_sample_elements, CV_32F);

//Labels/responses go in a single column: one value per sample row.
Mat matTrainLabels(0, 1, CV_32F);
Mat matSampleLabels(0, 1, CV_32F);

Mat matResults(0, 0, CV_32F);   //filled by findNearest below

//etcetera code for loading data into Mat variables suppressed

Ptr<TrainData> trainingData;
Ptr<KNearest> kclassifier = KNearest::create();

//ROW_SAMPLE: each row of matTrainFeatures is one sample
trainingData = TrainData::create(matTrainFeatures,
    SampleTypes::ROW_SAMPLE, matTrainLabels);

kclassifier->setIsClassifier(true);
kclassifier->setAlgorithmType(KNearest::Types::BRUTE_FORCE);
kclassifier->setDefaultK(1);

kclassifier->train(trainingData);
kclassifier->findNearest(matSample, kclassifier->getDefaultK(), matResults);

//Just checking the settings
cout << "Training data: " << endl
    << "getNSamples\t" << trainingData->getNSamples() << endl
    << "getSamples\n" << trainingData->getSamples() << endl
    << endl;

cout << "Classifier :" << endl
    << "kclassifier->getDefaultK(): " << kclassifier->getDefaultK() << endl
    << "kclassifier->getIsClassifier(): " << kclassifier->getIsClassifier() << endl
    << "kclassifier->getAlgorithmType(): " << kclassifier->getAlgorithmType() << endl
    << endl;

//confirming sample order
cout << "matSample: " << endl
    << matSample << endl
    << endl;

//displaying the results
cout << "matResults: " << endl
    << matResults << endl
    << endl;

//etcetera ending for main function

I'd appreciate any explanation of the declared [Mat]s in the code, as they seem confusing.
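For concreteness, here is a minimal sketch of how I understand the table maps onto those Mats (my own reading, not the original snippet's loading code): each feature row is [x, y, speed] (3 columns), each label row is the single [factor] value, and since [factor] is continuous, the classifier flag is turned off so findNearest averages the responses of the k nearest neighbours instead of voting.

#include <opencv2/ml.hpp>
#include <iostream>

using namespace cv;
using namespace cv::ml;

int main()
{
    //A few rows from the table above; real code would load all 50k rows.
    float rows[][4] = {
        {414, 369, 250.004761f, 1.1f},
        {418, 360, 225.004285f, 1.222222221f},
        {423, 352, 225.004285f, 1.222222221f},
        {427, 344, 200.003809f, 1.374999998f},
        {431, 336, 200.003809f, 1.374999998f},
    };
    Mat matTrainFeatures(0, 3, CV_32F);   //columns: x, y, speed
    Mat matTrainLabels(0, 1, CV_32F);     //column : factor
    for (auto& r : rows) {
        Mat feat = (Mat_<float>(1, 3) << r[0], r[1], r[2]);
        Mat lab  = (Mat_<float>(1, 1) << r[3]);
        matTrainFeatures.push_back(feat);
        matTrainLabels.push_back(lab);
    }

    Ptr<KNearest> knn = KNearest::create();
    knn->setIsClassifier(false);          //factor is continuous -> regression
    knn->setAlgorithmType(KNearest::Types::BRUTE_FORCE);
    knn->setDefaultK(3);
    knn->train(TrainData::create(matTrainFeatures, ROW_SAMPLE, matTrainLabels));

    //Query row [x, y, speed]; matResults receives the predicted factor.
    Mat matSample = (Mat_<float>(1, 3) << 450.f, 300.f, 100.002834f);
    Mat matResults;
    knn->findNearest(matSample, knn->getDefaultK(), matResults);
    std::cout << "predicted factor: " << matResults.at<float>(0, 0) << std::endl;
    return 0;
}

One caveat: kNN uses raw Euclidean distance, so features on very different scales will dominate each other; normalizing x, y and speed before training is usually safer.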

2017-04-02 09:03:34 -0600 commented question Ways of doing realtime processing

Is there any example for that?

2017-04-02 06:34:10 -0600 asked a question Ways of doing realtime processing

What is the best way to do real-time processing of frames? I'm reading from a video and running both a cascade detector and a tracker, so the speed is becoming poor. I need playback (or camera capture) to keep going while my algorithm is processing.

If you've ever worked on something like this, please advise.
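One pattern that usually helps (a generic producer/consumer sketch, not a drop-in solution; the detector/tracker call is left as a placeholder) is to keep capture and display on one thread and run the heavy processing on another, always working on the most recent frame:

#include <opencv2/opencv.hpp>
#include <atomic>
#include <mutex>
#include <thread>

int main()
{
    cv::VideoCapture cap("video.mp4");   //or a camera index, e.g. cap(0)
    if (!cap.isOpened()) return 1;

    std::mutex mtx;
    cv::Mat latest;                      //most recent frame, shared
    std::atomic<bool> running{true};

    std::thread worker([&] {
        cv::Mat frame;
        while (running) {
            {
                std::lock_guard<std::mutex> lock(mtx);
                if (latest.empty()) continue;
                latest.copyTo(frame);
            }
            //...run the cascade detector / tracker on `frame` here...
        }
    });

    cv::Mat frame;
    while (cap.read(frame)) {
        {
            std::lock_guard<std::mutex> lock(mtx);
            frame.copyTo(latest);
        }
        cv::imshow("video", frame);      //display never waits for the detector
        if (cv::waitKey(1) == 27) break; //Esc quits
    }
    running = false;
    worker.join();
    return 0;
}

The trade-off is that the detector only sees a subset of frames (whatever is newest when it finishes the previous one), which is usually acceptable when a tracker fills the gaps between detections.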

2017-02-28 07:49:51 -0600 received badge  Student (source)
2017-02-28 04:34:42 -0600 commented question building from source along with opencv_contrib

Maybe you both are right. I have to research that with Mr. Google :)

2017-02-27 07:41:29 -0600 asked a question building from source along with opencv_contrib

I'm working on research with OpenCV [cpp] and I had to use opencv_contrib. As you know, in this case you must build the library from source; I found a good reference on Youtube.com, but I was wondering:

  • Why do we have to build from source? Why not just add the code and build the application normally when we launch our program in VisualStudio?
  • What is the .lib file used for?
  • Why do we have to include the .dll file in our solution directory rather than just keeping the source files (.h & .cpp), which can be found with the OpenCV sources?
  • I had to use cmake and I have no idea what it is for, or why it creates a VisualStudio solution that I then have to build for Release and Debug.

I know this is not a core OpenCV question, but I'd appreciate any assistance.

2017-01-10 02:26:06 -0600 commented question Background subtraction to detect cars on a road

@abhijith, it doesn't detect far objects; some cars are detected as 2 or more blobs instead of 1 (especially buses), and sometimes 2 cars moving side by side are detected as a single car.

2017-01-08 06:23:36 -0600 commented question Background subtraction to detect cars on a road

Yes, I did. The results were worse, since the video has some pixel changes over time due to lighting and air movement (there are some trees).

2017-01-07 10:18:49 -0600 asked a question Background subtraction to detect cars on a road

I've implemented the background subtraction method to detect moving blobs, but it doesn't detect far objects, and moving cars are not always detected as a single blob (sometimes one splits into 2 or 3, sometimes it isn't detected at all). I've also tried the code here: https://github.com/MicrocontrollersAn...

Is there any sophisticated, ready-made solution that does this job well (detecting a car blob and tagging it with an ID)?
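For reference, the usual shape of that pipeline looks roughly like this (my own sketch, not the linked repo's code; the file name, shadow threshold, kernel size and area cutoff are placeholders to tune per video). Stable IDs still require a tracking step on top, e.g. matching blob centroids between frames:

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::VideoCapture cap("road.mp4");    //placeholder file name
    if (!cap.isOpened()) return 1;

    //history=500, varThreshold=16, detectShadows=true
    cv::Ptr<cv::BackgroundSubtractorMOG2> mog2 =
        cv::createBackgroundSubtractorMOG2(500, 16, true);
    cv::Mat frame, fgMask;
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));

    while (cap.read(frame)) {
        mog2->apply(frame, fgMask);
        //shadows are marked 127; keep only solid foreground (255)
        cv::threshold(fgMask, fgMask, 200, 255, cv::THRESH_BINARY);
        //close small gaps so one car is less likely to split into blobs
        cv::morphologyEx(fgMask, fgMask, cv::MORPH_CLOSE, kernel);

        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(fgMask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        for (size_t i = 0; i < contours.size(); i++) {
            if (cv::contourArea(contours[i]) < 300) continue;  //drop noise blobs
            cv::rectangle(frame, cv::boundingRect(contours[i]), cv::Scalar(0, 255, 0), 2);
        }
        cv::imshow("cars", frame);
        if (cv::waitKey(30) == 27) break;
    }
    return 0;
}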

2016-12-01 07:27:25 -0600 received badge  Self-Learner (source)
2016-12-01 04:43:53 -0600 received badge  Scholar (source)
2016-12-01 04:43:23 -0600 answered a question using opencv with c++ .net application compilation issues

Apart from the linker and include setup, I also had to:

  • Build the project as x64 instead of x86.
  • Copy the .dlls to my project directory.
  • Set Linker → System → SubSystem to WINDOWS.
  • Set Linker → Advanced → Entry Point to Main.

Don't forget, if you choose x64, to verify that the linker and include setup is in place for that platform; I had to redo mine, since x86 and x64 keep separate configurations.

2016-11-27 04:31:14 -0600 commented question using opencv with c++ .net application compilation issues

There is nothing special in the code, but I've included it. I removed the angle brackets from 2 includes because the website's formatting strips them.

2016-11-27 02:23:46 -0600 asked a question using opencv with c++ .net application compilation issues

I didn't face any issues linking the OpenCV library with a basic console application, but when I tried the same thing in a .NET framework template, I got many LNK errors:

[screenshot of the LNK errors]

I didn't do anything unusual; I only added the includes and the using-namespace directives.

I appreciate any assistance provided

PS: I managed to run it when I installed the OpenCV package via NuGet, but I don't want to use NuGet. (I tried to grab frames from a video, but it doesn't work and I get an unhandled error, while the same code runs without issue in a console application.)

This is a clean Form header. Whenever I add the OpenCV includes, it no longer compiles:

include "opencv2/objdetect/objdetect.hpp"
include "opencv2/highgui/highgui.hpp"
include "opencv2/imgproc/imgproc.hpp"
include opencv2/ml.hpp
include ctime

namespace VisionApp01
{
    using namespace cv;
    using namespace cv::ml;
    using namespace System;
    using namespace System::ComponentModel;
    using namespace System::Collections;
    using namespace System::Windows::Forms;
    using namespace System::Data;
    using namespace System::Drawing;

/// <summary>
/// Summary for MyForm
/// </summary>
public ref class MyForm : public System::Windows::Forms::Form
{
public:
    MyForm(void)
    {
        InitializeComponent();
        //
        //TODO: Add the constructor code here
        //
    }

protected:
    /// <summary>
    /// Clean up any resources being used.
    /// </summary>
    ~MyForm()
    {
        if (components)
        {
            delete components;
        }
    }
private: System::Windows::Forms::PictureBox^  beforeBox;
private: System::Windows::Forms::PictureBox^  afterBox;
protected:

protected:

private:
    /// <summary>
    /// Required designer variable.
    /// </summary>
    System::ComponentModel::Container ^components;


    /// <summary>
    /// Required method for Designer support - do not modify
    /// the contents of this method with the code editor.
    /// </summary>
    void InitializeComponent(void)
    {
        this->beforeBox = (gcnew System::Windows::Forms::PictureBox());
        this->afterBox = (gcnew System::Windows::Forms::PictureBox());
        (cli::safe_cast<System::ComponentModel::ISupportInitialize^>(this->beforeBox))->BeginInit();
        (cli::safe_cast<System::ComponentModel::ISupportInitialize^>(this->afterBox))->BeginInit();
        this->SuspendLayout();
        // 
        // beforeBox
        // 
        this->beforeBox->Location = System::Drawing::Point(12, 12);
        this->beforeBox->Name = L"beforeBox";
        this->beforeBox->Size = System::Drawing::Size(300, 300);
        this->beforeBox->TabIndex = 0;
        this->beforeBox->TabStop = false;
        // 
        // afterBox
        // 
        this->afterBox->Location = System::Drawing::Point(353, 12);
        this->afterBox->Name = L"afterBox";
        this->afterBox->Size = System::Drawing::Size(300, 300);
        this->afterBox->TabIndex = 0;
        this->afterBox->TabStop = false;
        // 
        // MyForm
        // 
        this->AutoScaleDimensions = System::Drawing::SizeF(6, 13);
        this->AutoScaleMode = System::Windows::Forms::AutoScaleMode::Font;
        this->ClientSize = System::Drawing::Size(665, 330);
        this->Controls->Add(this->afterBox);
        this->Controls->Add(this->beforeBox);
        this->Name = L"MyForm";
        this->Text = L"MyForm";
        this->Load += gcnew System::EventHandler(this, &MyForm::MyForm_Load);
        (cli::safe_cast<System::ComponentModel::ISupportInitialize^>(this->beforeBox))->EndInit();
        (cli::safe_cast<System::ComponentModel::ISupportInitialize^>(this->afterBox))->EndInit();
        this->ResumeLayout(false);

    }

private: System::Void MyForm_Load(System::Object^  sender, System::EventArgs^  e)
{

}
};

}

2016-11-18 00:46:09 -0600 received badge  Enthusiast
2016-11-17 07:59:00 -0600 commented question best way to do the traincascade via the standalone tool

Maybe you are right. Can you point me to the best method or ready-made code you've come across? Note that the camera is moving, not fixed.

2016-11-17 06:35:17 -0600 commented question best way to do the traincascade via the standalone tool

The idea is to get the shadow locations, make the detected Rect square, resize it to 64x64, then verify that squared area via an SVM trained on HOG features from another set of samples of car rear shots. I'm trying to replicate the work of some other researchers in order to make some enhancements.
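The squaring + resize step mentioned here fits in a small helper (a sketch with names of my own choosing; clamping to the image border is one possible policy):

#include <opencv2/opencv.hpp>
#include <algorithm>

//Grow a detected Rect to a square around its center, clamp it to the
//image, and resize the patch to 64x64 for the HOG+SVM verification stage.
cv::Mat squarePatch(const cv::Mat& img, const cv::Rect& r)
{
    int side = std::max(r.width, r.height);
    cv::Point center(r.x + r.width / 2, r.y + r.height / 2);
    cv::Rect sq(center.x - side / 2, center.y - side / 2, side, side);
    sq &= cv::Rect(0, 0, img.cols, img.rows);   //stay inside the image
    cv::Mat patch;
    cv::resize(img(sq), patch, cv::Size(64, 64));
    return patch;
}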

2016-11-15 13:42:04 -0600 commented question best way to do the traincascade via the standalone tool

Also, I wonder whether my positive and negative samples should be converted to grayscale and then histogram-equalized?

2016-11-15 11:58:41 -0600 commented question best way to do the traincascade via the standalone tool

So what do I have to do then? From my investigation, everybody says to collect samples of the object in its many states. As for the squared detection, I'm already doing it that way: when I get the Rect vector, I make the height equal to the width before drawing the borders on the image, so the -w and -h should be fine, I suppose.

2016-11-13 23:40:28 -0600 received badge  Editor (source)
2016-11-13 23:39:40 -0600 asked a question best way to do the traincascade via the standalone tool

Hi all

I'm trying to train a classifier to detect car shadows (rear view) and I'm facing difficulties. Later I want to use the output to verify the possible car region via HOG+SVM (I'll worry about that part later). I don't know exactly what is needed: I'm using around 692 positive samples and 4400 negatives, converted to gray and cropped exactly to the object (160x45). For training I'm using 600 positives and 3500 negatives. My target dimensions: -w 60 -h 17

Positive samples look like this (160x45): [example positive image]

Negative samples look like this (160x45): [example negative image]

And this is the horrible result, with only 1 correct catch (I made the rectangle height equal to the width so it covers the whole car):

[result image]

What exactly is wrong? Do I need more samples? Is histogram equalization required (before or after)? Is it mandatory to use the annotation tool (I'm using cropped images, so my pos.txt has entries like pos/pos2.png 1 0 0 160 45)? Should my negative samples be full images, or just pieces matching the size of the positives? I'm so confused! I'd really appreciate any help.
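On the grayscale/equalization question, the preprocessing itself is only two calls in OpenCV (shown below for one sample; the file names are placeholders). Whether it actually improves the cascade is something to test, though the detector works on grayscale internally either way:

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat sample = cv::imread("pos/pos2.png");
    if (sample.empty()) return 1;
    cv::Mat gray, equalized;
    cv::cvtColor(sample, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, equalized);           //spread intensities over 0..255
    cv::imwrite("pos_eq/pos2.png", equalized);   //hypothetical output folder
    return 0;
}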

Thanks

2016-10-28 08:08:44 -0600 asked a question HOG calculation

I'm implementing a research paper that uses HOG + SVM. Is any preprocessing needed before using OpenCV's HOG feature extraction? In the paper, they describe a mask to be applied as a first step: the 1D centered-point discrete derivative mask [-1, 0, 1] and its transpose [-1, 0, 1]^T, in one or both of the horizontal and vertical directions, to obtain the gradient orientation and gradient magnitude. We can use Eq. 1 and Eq. 2 to calculate each pixel's horizontal and vertical gradient values, respectively, and Eq. 3 and Eq. 4 to calculate each pixel's gradient magnitude and gradient orientation, respectively.

(The equations referenced are the standard gradient formulas:)

Gx(x, y) = I(x+1, y) - I(x-1, y)              (Eq. 1)
Gy(x, y) = I(x, y+1) - I(x, y-1)              (Eq. 2)
G(x, y) = sqrt(Gx(x, y)^2 + Gy(x, y)^2)       (Eq. 3)
theta(x, y) = arctan(Gy(x, y) / Gx(x, y))     (Eq. 4)

Then:

2) Orientation binning:

Each pixel within the cell casts a weighted vote for an orientation-based histogram channel based on the values found in the gradient computation. The histogram channels are evenly spread over 0 to 180 degrees or 0 to 360 degrees, depending on whether the gradient is 'unsigned' or 'signed'. In our study, we use the 'unsigned' gradient and nine channels evenly spread over 0 to 180 degrees to construct our histogram. As for the vote weight, the pixel contribution can be either the gradient magnitude itself or some function of the magnitude. We simply use the gradient magnitude as the vote weight in our study.

3) Descriptor blocks:

In order to account for changes in illumination and contrast, the gradient strengths must be locally normalized, which requires grouping the cells together into larger, spatially connected blocks. The HOG descriptor is then the vector of the components of the normalized cell histograms from all of the block regions. These blocks typically overlap, meaning that each cell contributes more than once to the final descriptor. Every four cells (2x2) comprise one block in our study. For an image of size 64x64, we assume that each cell's size is 8x8, and the size of one block is 16x16, since one block comprises four cells. Thus, there are 49 blocks in an image, since one block can be slid to seven positions horizontally and seven vertically. Meanwhile, there are nine channels in one cell and 36 features in one block. Thus, the total number of features is 1764 (36x49) for a 64x64 image, as shown in Figure 7.
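As a sanity check on those numbers, OpenCV's HOGDescriptor can be configured to exactly this layout, and its descriptor size should come out at 49 blocks x 36 features = 1764 (the image file name below is a placeholder):

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::HOGDescriptor hog(
        cv::Size(64, 64),   //winSize: the whole 64x64 image
        cv::Size(16, 16),   //blockSize: 2x2 cells
        cv::Size(8, 8),     //blockStride: one cell -> 7 positions per axis
        cv::Size(8, 8),     //cellSize
        9);                 //nbins: 'unsigned', spread over 0..180 degrees

    std::cout << "descriptor size: " << hog.getDescriptorSize() << std::endl; //1764

    cv::Mat img = cv::imread("car_64x64.png", cv::IMREAD_GRAYSCALE);
    if (img.empty()) return 1;
    std::vector<float> features;
    hog.compute(img, features);   //gradients use the centered [-1, 0, 1] mask
    std::cout << "features: " << features.size() << std::endl;
    return 0;
}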

So please, can anybody help me with this task? I'm supposed to compare this to a new method of my own (in progress).