Is OpenCV suitable for a scenario of multiple merging blobs in a volume?

Hi,

I have a volumetric dataset that contains, in essence, little glowing semi-rigid jelly beans that crawl around and occasionally merge or split, in quite some density. It spans 120 frames, sampled closely enough in time to be effectively contiguous with respect to the real behaviour. For the curious, the data is a lattice light sheet microscopy acquisition of cellular mitochondria.

I can render this 3D volumetric set in almost any way imaginable, with as many cameras as required, in order to best aid the algorithms (e.g. a ring of 36 cameras, rendering with depth or floating-point world position, naive global masks [e.g. alpha-cutting the background], etc.). I can supply the exact camera parameters and their positions relative to the data. It's not so much a question of resolving where the objects are in space as of tracking them in their dense environment, using as many cameras as required to help with that.
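For concreteness, this is roughly the per-view blob extraction I have in mind once a view has been rendered with the background alpha-cut away: a minimal Python/OpenCV sketch, where the filename, the Otsu thresholding and the minimum area are placeholders I've made up, not part of my actual pipeline.

import cv2

def detect_blobs(frame_gray, min_area=20):
    # Otsu-threshold the alpha-cut render so only the bright "beans" remain.
    _, mask = cv2.threshold(frame_gray, 0, 255,
                            cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    # Label connected bright regions and collect centroid + area per region.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)
    blobs = []
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            blobs.append((tuple(centroids[i]), int(stats[i, cv2.CC_STAT_AREA])))
    return blobs

# Hypothetical rendered view: camera 0, frame 0 (filename is made up).
frame = cv2.imread("render_cam00_t000.png", cv2.IMREAD_GRAYSCALE)
if frame is not None:
    print(detect_blobs(frame))

Running something like this on each camera in the ring would give per-view 2D detections that could be associated back to 3D using the known camera parameters; the part I'm unsure about is what happens once those blobs start merging and splitting.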

I want to track many of these beans, say 30 out of a population of 100. That could be done as separate passes if required, and real-time performance is not of particular interest. I could also consider a multi-pass approach for individual tracked entities: first obtain a rough track, then re-render the volumetric data with any surrounding, occluding, non-interacting data removed to aid better tracking.
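The sort of frame-to-frame linking I imagine for the rough first pass is a simple nearest-cost assignment between blob centroids, with merges and splits showing up as unmatched blobs. A minimal sketch using SciPy's Hungarian solver follows; the distance gate is an arbitrary number and this is purely illustrative, not a complete merge/split tracker.

import numpy as np
from scipy.optimize import linear_sum_assignment

def link_frames(prev_centroids, curr_centroids, max_dist=15.0):
    # Pairwise Euclidean distances between last frame's and this frame's centroids.
    prev = np.asarray(prev_centroids, dtype=float)
    curr = np.asarray(curr_centroids, dtype=float)
    cost = np.linalg.norm(prev[:, None, :] - curr[None, :, :], axis=2)
    # Optimal one-to-one assignment, then drop pairs beyond the distance gate.
    rows, cols = linear_sum_assignment(cost)
    links = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
    matched_prev = {r for r, _ in links}
    matched_curr = {c for _, c in links}
    lost = [i for i in range(len(prev)) if i not in matched_prev]   # merge candidates
    new = [j for j in range(len(curr)) if j not in matched_curr]    # split candidates
    return links, lost, new

# Toy example: two blobs in the previous frame apparently become one.
print(link_frames([(10.0, 10.0), (50.0, 52.0)], [(12.0, 11.0)]))

Whether OpenCV (or something built around it) has a more principled way of handling the merge/split bookkeeping than this is essentially my question.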

(Attached [hopefully... I can't see whether it was successful] is an example of the data, though not rendered in a style that would be used for tracking: https://s3-ap-southeast-2.amazonaws.com/3dvalmisc/cell04_BIII_MTGreen_deskewed_T000_C0.mov)

Any and all advice is greatly appreciated. Thank you, Campbell.