How to translate this code to Java

asked 2018-06-28 02:39:22 -0500

joyceeeen

updated 2018-06-28 02:58:10 -0500

This code is from Azoft and reads embossed text from credit cards. I'm trying to integrate it with my school project; they gave an example using C++, and I'm trying to translate it to Java.

- (void)processingByStrokesMethod:(cv::Mat)src dst:(cv::Mat*)dst {
    cv::Mat tmp;
    cv::GaussianBlur(src, tmp, cv::Size(3,3), 2.0);                    // gaussian blur
    tmp = cv::abs(src - tmp);                                          // matrix of differences between source image and blurred image

    cv::threshold(tmp, tmp, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);

    // Using method of strokes:
    int Wout = 12;
    int Win = Wout/2;
    int startXY = Win;
    int endY = src.rows - Win;
    int endX = src.cols - Win;

    for (int j = startXY; j < endY; j++) {
        for (int i = startXY; i < endX; i++) {
            // Only edge pixels:
            if (tmp.at<unsigned char>(j,i) == 255) {
                // Calculating maxP and minP within Win-region:
                unsigned char minP = src.at<unsigned char>(j,i);
                unsigned char maxP = src.at<unsigned char>(j,i);
                int offsetInWin = Win/2;

                for (int m = -offsetInWin; m < offsetInWin; m++) {
                    for (int n = -offsetInWin; n < offsetInWin; n++) {
                        if (src.at<unsigned char>(j+m,i+n) < minP) {
                            minP = src.at<unsigned char>(j+m,i+n);
                        } else if (src.at<unsigned char>(j+m,i+n) > maxP) {
                            maxP = src.at<unsigned char>(j+m,i+n);
                        }
                    }
                }

                unsigned char meanP = lroundf((minP+maxP)/2.0);

                for (int l = -Win; l < Win; l++) {
                    for (int k = -Win; k < Win; k++) {
                        if (src.at<unsigned char>(j+l,i+k) >= meanP) {
                            dst->at<unsigned char>(j+l,i+k)++;
                        }
                    }
                }
            }
        }
    }

    // Normalization of imageOut:
    unsigned char maxValue = dst->at<unsigned char>(0,0);

    for (int j = 0; j < dst->rows; j++) {              // finding max value of imageOut
        for (int i = 0; i < dst->cols; i++) {
            if (dst->at<unsigned char>(j,i) > maxValue)
                maxValue = dst->at<unsigned char>(j,i);
        }
    }
    float knorm = 255.0 / maxValue;

    for (int j = 0; j < dst->rows; j++) {             // normalization of imageOut
        for (int i = 0; i < dst->cols; i++) {
            dst->at<unsigned char>(j,i) = lroundf(dst->at<unsigned char>(j,i)*knorm);
        }
    }
}
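For reference, here is a rough pure-Java sketch of the strokes-and-normalization part of the code above, using plain `int[][]` arrays instead of `cv::Mat` so it runs without OpenCV. It assumes the blur/threshold preprocessing has already produced a binary edge map (the `edges` argument); the class and method names are mine, not from the Azoft code:

```java
public class StrokesSketch {

    /**
     * Accumulates "stroke" votes around edge pixels, then normalizes to 0..255.
     * src:   grayscale image [rows][cols], values 0..255
     * edges: binary edge map of the same size (255 = edge pixel)
     * Returns the normalized vote image.
     */
    public static int[][] processByStrokes(int[][] src, int[][] edges) {
        int rows = src.length, cols = src[0].length;
        int[][] dst = new int[rows][cols];

        int wOut = 12;
        int wIn = wOut / 2;
        int offsetInWin = wIn / 2;

        for (int j = wIn; j < rows - wIn; j++) {
            for (int i = wIn; i < cols - wIn; i++) {
                if (edges[j][i] != 255) continue;   // only edge pixels

                // min / max intensity inside the small wIn-region
                int minP = src[j][i], maxP = src[j][i];
                for (int m = -offsetInWin; m < offsetInWin; m++) {
                    for (int n = -offsetInWin; n < offsetInWin; n++) {
                        int v = src[j + m][i + n];
                        if (v < minP) minP = v;
                        else if (v > maxP) maxP = v;
                    }
                }
                int meanP = Math.round((minP + maxP) / 2.0f);

                // vote for every pixel in the larger region at or above meanP
                for (int l = -wIn; l < wIn; l++) {
                    for (int k = -wIn; k < wIn; k++) {
                        if (src[j + l][i + k] >= meanP) dst[j + l][i + k]++;
                    }
                }
            }
        }

        // normalize votes to the 0..255 range
        // (start at 1 to avoid division by zero on an empty edge map,
        //  a small deviation from the original, which reads dst(0,0))
        int maxValue = 1;
        for (int[] row : dst)
            for (int v : row)
                if (v > maxValue) maxValue = v;
        float knorm = 255.0f / maxValue;
        for (int j = 0; j < rows; j++)
            for (int i = 0; i < cols; i++)
                dst[j][i] = Math.round(dst[j][i] * knorm);
        return dst;
    }
}
```

This is only a sketch of the logic; for real images you would still want the Gaussian blur, absolute difference, and Otsu threshold from the original, which the OpenCV Java bindings provide directly.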


can you try again with pasting the code? we can help you with the formatting.

berak ( 2018-06-28 02:48:24 -0500 )

hi you mean the whole code?

joyceeeen ( 2018-06-28 02:50:48 -0500 )

as much as necessary.

and also please explain the context. what is it trying to do? based on what kind of input data?

berak ( 2018-06-28 02:52:57 -0500 )

I edited my question, kindly check it. thanks

joyceeeen ( 2018-06-28 02:58:33 -0500 )

well one thing is sure: you should NEVER write code like that in Java, and you should NOT try to translate it line by line.

maybe start another research ("stroke width transform"), and try to find something more "high-level"?

berak ( 2018-06-28 03:28:41 -0500 )

I understand. :) thank you. Would the SWT algorithm work when the color of the embossed text on credit cards is usually the same as the background color?

joyceeeen ( 2018-06-28 03:35:20 -0500 )

see the image at the bottom left of that website (the golden card). it may not work so nicely, but there will always be intensity gradients due to the lighting

berak ( 2018-06-28 03:50:50 -0500 )

Well, you can also use CNNs/DNNs to extract text from the images; there are some pretrained models out there. And you can also evaluate those models via the dnn module in OpenCV.

holger ( 2018-06-28 04:43:22 -0500 )

Enlighten me on this - sorry, I'm new to this. I've checked this tutorial with DNN, but I want to implement TextDetectorCNN instead. Should I only replace the DNN function with CNN to accomplish that, or do I need to combine the two?

joyceeeen ( 2018-06-28 05:20:41 -0500 )

DNN stands for Deep Neural Network and CNN stands for Convolutional Neural Network. Disclaimer - I am new to this class. Let's take a look at the API:

This class finds bounding boxes of text words given an input image. This class uses the OpenCV dnn module to load a pre-trained model described in [111]

You first create an instance of it - supply the pretrained model (the layout of the model) and its weights (what the model learned).

After that you can use this instance to put in an image and get back a vector of bounding boxes (x,y coordinates where text was found).

Note that you don't get back the text itself! It seems to be an object detector (telling you where text is, but not what the text says).

So you will still need to find a way to extract the text from the bounding boxes.
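The two-stage flow described above - a detector that only returns boxes, then a separate recognizer that reads each cropped box - can be sketched in plain Java. `TextDetector` and `TextRecognizer` below are hypothetical stand-in interfaces, not the actual OpenCV `TextDetectorCNN` API; they only show the shape of the pipeline:

```java
import java.util.ArrayList;
import java.util.List;

public class TextPipeline {

    /** A bounding box: top-left corner (x,y) plus width and height. */
    public record Box(int x, int y, int w, int h) {}

    /** Stand-in for a detector like TextDetectorCNN: boxes only, no text! */
    public interface TextDetector {
        List<Box> detect(int[][] image);
    }

    /** Stand-in for an OCR step (e.g. Tesseract or a CRNN model). */
    public interface TextRecognizer {
        String recognize(int[][] crop);
    }

    /** Detect boxes, crop each one out of the image, recognize each crop. */
    public static List<String> readText(int[][] image, TextDetector det, TextRecognizer rec) {
        List<String> words = new ArrayList<>();
        for (Box b : det.detect(image)) {
            int[][] crop = new int[b.h()][b.w()];
            for (int j = 0; j < b.h(); j++)
                for (int i = 0; i < b.w(); i++)
                    crop[j][i] = image[b.y() + j][b.x() + i];
            words.add(rec.recognize(crop));
        }
        return words;
    }
}
```

Both interfaces are single-method, so a real detector or recognizer (or a test stub) can be plugged in as a lambda.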

holger ( 2018-06-28 07:38:53 -0500 )