
It depends on the wavelength you want to capture. Most camera sensors, especially CCDs, have some residual sensitivity outside the visible range... say 20% at 800nm, falling to 0% around 1000nm. Manufacturers install an IR-cut filter so the camera produces an image as the human eye would see it.

Beyond that, there is no single technology that works over the full IR band (700nm...20µm and beyond). You can buy Near, Short, Mid or Long Wave IR cameras if your budget is big enough :)

Going back to the VIS-NIR range (720..1000nm), common cameras have low sensitivity here, so you can capture these wavelengths by using a longer exposure time. If you want only the VIS-NIR wavelengths you have to mount an IR-pass filter over your lens. You can buy one, and/or you can experiment with a piece of undeveloped negative film or a floppy disk held in front of the lens.

Finally, the IR-pass filter cuts the visible band; therefore, with a common camera you can't split VIS-NIR, R, G, B from the same shot.

EDIT after user comment

1st: Considering that NDVI = (NIR - VIS)/(NIR + VIS), you need to compare intensities across wavelengths. For absolute and accurate measures you have to make the VIS and NIR responses homogeneous (sensor independent). You have to specify the wavelengths, then check the sensitivity (quantum efficiency) of your sensor at those wavelengths. This info is provided by the sensor manufacturer. E.g. sensor XYZ quantum efficiency: 60% at 550nm, 80% at 700nm, 50% at 800nm, 30% at 900nm, 10% at 1000nm.

It means that your sensor converts into image signal only 10% of the energy it receives at 1000nm, while energy at 550nm will look much brighter because 60% of the incident energy is converted by the sensor. This gives you an offset you have to take into account.
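A minimal sketch of that compensation, assuming the example QE values above and two already-registered grayscale captures of the same scene, one without the IR-pass filter (VIS) and one with it (NIR). The file names and QE numbers are placeholders, not measured values:

```python
import numpy as np
import cv2

# Hypothetical quantum efficiencies from the example above:
# ~60% around 550nm (VIS capture) and ~30% around 900nm (NIR capture).
QE_VIS = 0.60
QE_NIR = 0.30

# Two registered grayscale shots of the same scene (placeholder file names).
vis = cv2.imread("vis.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
nir = cv2.imread("nir.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Make the two bands comparable by dividing out the sensor sensitivity.
vis_corr = vis / QE_VIS
nir_corr = nir / QE_NIR

# NDVI = (NIR - VIS) / (NIR + VIS), with a small epsilon to avoid division by zero.
ndvi = (nir_corr - vis_corr) / (nir_corr + vis_corr + 1e-6)

# Rescale from [-1, 1] to [0, 255] for display.
cv2.imwrite("ndvi.png", ((ndvi + 1.0) * 127.5).astype(np.uint8))
```

Exposure time and aperture also have to be kept identical (or compensated for) between the two shots, otherwise the correction above is meaningless.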

2nd: As I stated, on a common camera there is no software option to split VIS-NIR, R, G, B from the same shot, because you need to apply/remove the IR-pass filter.

Some alternatives:

  • You might build a mechanical/servo filter wheel to move bandpass filters in front of your camera and take a picture for each filter.
  • The XBox Kinect has both a VIS camera and a NIR camera... it's not so expensive. If you like Raspberry Pi, a nice option would be 1 Raspi camera + 1 Raspi NoIR camera with an IR-pass filter (see the sketch after this list).
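A rough sketch of how the two Raspberry Pi captures could be combined, assuming one saved frame per camera (file names are placeholders) and that the two cameras are close enough that a simple resize is an acceptable first alignment; a real setup needs proper registration between the two views:

```python
import numpy as np
import cv2

# Placeholder file names: one frame from the standard Raspi camera (VIS)
# and one from the NoIR camera fitted with an IR-pass filter (NIR).
vis_bgr = cv2.imread("raspi_vis.jpg")
nir_bgr = cv2.imread("raspi_noir.jpg")

# Use the red channel as the visible band and a grayscale NIR frame;
# resize NIR to the VIS geometry as a crude alignment.
vis = vis_bgr[:, :, 2].astype(np.float32)
nir = cv2.cvtColor(nir_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
nir = cv2.resize(nir, (vis.shape[1], vis.shape[0]))

ndvi = (nir - vis) / (nir + vis + 1e-6)
cv2.imwrite("ndvi_two_cams.png", ((ndvi + 1.0) * 127.5).astype(np.uint8))
```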

EDIT 2: short clarification about false-color images

  • An option could be to use a high-pass red filter like this to take false-color images with an IR-converted camera. Considering that the residual sensitivity of the Bayer filter in the NIR band spreads over the B, G and R channels, if you cut the blue light you will have a (lower intensity) NIR image in the blue channel and a VIS-red image in the red channel. As an alternative you might use a blue band-pass filter to get VIS-blue in the blue channel and NIR in the red channel. This solution requires calibration and post-processing, but it seems it's used for a similar task (see the sketch below).
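A sketch of that channel split, assuming a single BGR frame taken with an IR-converted camera behind a red high-pass filter; the file name is a placeholder and the per-channel calibration mentioned above is deliberately left out:

```python
import numpy as np
import cv2

# Placeholder: one shot from an IR-converted camera with a red high-pass filter.
frame = cv2.imread("red_filtered_shot.jpg").astype(np.float32)

# With blue light cut, the blue channel carries mostly (attenuated) NIR,
# while the red channel carries VIS-red (plus some NIR leakage).
nir = frame[:, :, 0]   # blue channel -> NIR (lower intensity)
red = frame[:, :, 2]   # red channel  -> VIS-red

ndvi = (nir - red) / (nir + red + 1e-6)
cv2.imwrite("ndvi_false_color.png", ((ndvi + 1.0) * 127.5).astype(np.uint8))
```

Without calibrating the relative gain of the two channels the result is only a qualitative vegetation index, not an absolute NDVI measurement.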