OpenCV Q&A Forum - RSS feedhttp://answers.opencv.org/questions/OpenCV answersenCopyright <a href="http://www.opencv.org">OpenCV foundation</a>, 2012-2018.Tue, 25 Aug 2020 10:26:25 -0500CUDACuts for Image Texture Synthesishttp://answers.opencv.org/question/234209/cudacuts-for-image-texture-synthesis/I am working on a project that requires composing full 360-degree view images. To produce an output image with no visible seam, I adopted the GraphCut algorithm from the paper "[Graphcut Textures: Image and Video Synthesis Using Graph Cuts](https://www.cc.gatech.edu/~turk/my_papers/graph_cuts.pdf)", which is based on the maxflow/min-cut algorithm. I used the CPU implementation of Kolmogorov and Boykov described in the paper "[An Experimental Comparison of Min-Cut/Max-Flow Algorithms for Energy Minimization in Vision](https://www.csd.uwo.ca/~yboykov/Papers/pami04.pdf)" to find the best min-cut for stitching the images, which gives a relatively good result (their implementation of the maxflow/min-cut solver can be found [here](https://github.com/opengm/maxflow)). The result looks something like this.
I perform image quilting on the following image:
![image description](/upfiles/15983675967152857.jpeg)
I take a copy of it and overlay it on the original to make a larger image without a visible seam. I got the following result:
![image description](/upfiles/15983676162594995.png)
and the green line depicts the min-cut that Boykov's implementation produces:
![image description](/upfiles/15983676292653983.png)
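For reference, the min-cut shown above is obtained from a max-flow computation (by the max-flow/min-cut theorem the two have equal value). As a minimal illustration of what such a solver does, here is a BFS-based Edmonds-Karp sketch on a small adjacency-matrix graph; note this is *not* the Boykov-Kolmogorov algorithm, which is considerably faster on grid graphs:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp max-flow. capacity[u][v] is the edge capacity
    from u to v. Returns the max-flow value, which equals the
    weight of the minimum s-t cut."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[source] = source
        q = deque([source])
        while q and parent[sink] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[sink] == -1:
            return total  # no augmenting path left: flow is maximum
        # bottleneck residual capacity along the path
        v, bottleneck = sink, float("inf")
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        # augment; negative reverse flow models residual back-edges
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck
```

The pixels that remain reachable from the source in the final residual graph form one side of the min-cut, i.e. the seam.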
However, I wanted a faster solver for the maxflow/min-cut problem. That was when I found a CUDA implementation described in "[CudaCuts: Fast Graph Cuts on the GPU](http://cdn.iiit.ac.in/cdn/cvit.iiit.ac.in/images/ConferencePapers/2008/rtGCuts_2008.pdf)". I successfully ran the image segmentation examples from their project page and reproduced their results. However, when I applied it to my problem, it did not work properly. The only difference between the two problems is how I construct the graph for the maxflow/min-cut algorithm to solve. I constructed the graph as suggested in "[Graphcut Textures: Image and Video Synthesis Using Graph Cuts](https://www.cc.gatech.edu/~turk/my_papers/graph_cuts.pdf)", in which the terminal weights (t-links) are infinite for valid source (sink) nodes and 0 otherwise, and the neighbor weights (n-links) correspond to the cost function in the paper. That graph representation worked well with Boykov's CPU implementation, but it did not work with the CUDA implementation by Vibhav Vineet and Narayanan (their implementation can be found [here](http://vibhavvineet.info/code.html)).
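To make the graph construction above concrete, here is a small NumPy sketch of how I understand the weights for a simple left/right overlap of two grayscale patches: border pixels are hard-constrained with "infinite" t-links (a large finite constant in practice), and each n-link gets the Graphcut Textures seam cost M(s,t) = |A(s)-B(s)| + |A(t)-B(t)|. The function name and array layout are my own illustration, not the API of either implementation:

```python
import numpy as np

INF = 1e9  # large finite stand-in for an infinite t-link weight

def build_quilting_graph(A, B):
    """Weights for the overlap of patches A and B (same shape, grayscale).
    Returns (src, snk, right, down): t-link weights to source/sink per
    pixel, and n-link weights to the right and downward neighbors."""
    A = A.astype(np.float64)
    B = B.astype(np.float64)
    h, w = A.shape

    # t-links: infinite where the label is forced, 0 everywhere else
    src = np.zeros((h, w)); src[:, 0] = INF    # left column must come from A
    snk = np.zeros((h, w)); snk[:, -1] = INF   # right column must come from B

    d = np.abs(A - B)              # per-pixel disagreement |A(p) - B(p)|
    right = d[:, :-1] + d[:, 1:]   # cost of cutting between (y,x) and (y,x+1)
    down = d[:-1, :] + d[1:, :]    # cost of cutting between (y,x) and (y+1,x)
    return src, snk, right, down
```

If CudaCuts expects dataTerm/smoothTerm in a different layout or scale (e.g. integer weights, or per-direction smoothness arrays), a mapping like this would need to be adapted accordingly.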
I am confused, since the CUDA implementation worked well for the image segmentation problem, meaning the core maxflow/min-cut algorithm is correct. Therefore, I suspect I am doing something wrong in the graph construction. My graph-construction code can be found [here](https://github.com/bqm1111/cuda/tree/master/GraphCutCuda). Could anyone who has experimented with this CUDA implementation give me a hint? Am I misunderstanding the meaning of dataTerm (terminal weights/t-links) and smoothTerm (neighbor weights/n-links) in their code?Mon, 24 Aug 2020 22:05:54 -0500http://answers.opencv.org/question/234209/cudacuts-for-image-texture-synthesis/Comment by bqm1111 on the question above:
http://answers.opencv.org/question/234209/cudacuts-for-image-texture-synthesis/?comment=234242#post-id-234242Do you have any idea how I can proceed with my problem?Tue, 25 Aug 2020 10:26:25 -0500http://answers.opencv.org/question/234209/cudacuts-for-image-texture-synthesis/?comment=234242#post-id-234242Comment by berak on the question above:
http://answers.opencv.org/question/234209/cudacuts-for-image-texture-synthesis/?comment=234240#post-id-234240hehe, yea, nice update ;)Tue, 25 Aug 2020 09:42:35 -0500http://answers.opencv.org/question/234209/cudacuts-for-image-texture-synthesis/?comment=234240#post-id-234240Comment by bqm1111 on the question above:
http://answers.opencv.org/question/234209/cudacuts-for-image-texture-synthesis/?comment=234239#post-id-234239Thank you for your kind reply. Sorry, this is my first post. I have updated it. Is there anything I can elaborate on for you?Tue, 25 Aug 2020 09:36:06 -0500http://answers.opencv.org/question/234209/cudacuts-for-image-texture-synthesis/?comment=234239#post-id-234239Comment by berak on the question above:
http://answers.opencv.org/question/234209/cudacuts-for-image-texture-synthesis/?comment=234219#post-id-234219we can see neither the paper, the original implementation, your attempt, nor your results, so -- how do you think we can help you?Tue, 25 Aug 2020 01:14:03 -0500http://answers.opencv.org/question/234209/cudacuts-for-image-texture-synthesis/?comment=234219#post-id-234219