It depends on the wavelength you want to capture. Most camera sensors, CCDs in particular, have some residual sensitivity outside the visible range: say, around 20% at 800 nm, falling to roughly 0% around 1000 nm. Manufacturers install an IR-cut filter so the camera produces an image as the human eye would see it.
Beyond that, there is no single technology that works over the full IR band (700 nm to 20 µm and beyond). You can buy near-, short-, mid-, or long-wave IR cameras if your budget is rich enough :)
Going back to the VIS-NIR range (720–1000 nm): common cameras have low sensitivity here, so you can capture these wavelengths by using a longer exposure time. If you want only the NIR wavelengths, you have to mount an IR-pass filter over your lens. You can buy one, or experiment with a piece of undeveloped negative film or the disk from an old floppy.
Finally, the IR-pass filter cuts the visible band; therefore, with a common camera you can't separate NIR, R, G, and B from the same shot.
EDIT after user comment
1st: Considering that NDVI = (NIR - VIS) / (NIR + VIS),
you need to compare intensities across wavelengths. For absolute, accurate measurements you have to make the VIS and NIR responses homogeneous (sensor independent). First specify your wavelengths, then check the sensitivity (quantum efficiency) of your sensor at those wavelengths. This information is provided by the sensor manufacturer.
E.g., sensor XYZ quantum efficiency: 60% at 550 nm, 80% at 700 nm, 50% at 800 nm, 30% at 900 nm, 10% at 1000 nm.
This means the sensor converts into image signal only 10% of the energy it receives at 1000 nm, while light at 550 nm will look much brighter because 60% of the incident energy is converted. This gives you a per-wavelength scale factor you have to compensate for.
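To make that concrete, here is a minimal sketch of the correction in Python. The QE table reuses the hypothetical "sensor XYZ" numbers above, and the function name is mine, not from any real library.

```python
# Hypothetical QE table for "sensor XYZ" (from the example above):
# wavelength in nm -> fraction of incident energy converted to signal.
QE = {550: 0.60, 700: 0.80, 800: 0.50, 900: 0.30, 1000: 0.10}

def true_intensity(measured, wavelength_nm):
    """Undo the sensor's quantum efficiency so intensities taken at
    different wavelengths become comparable (sensor independent)."""
    return measured / QE[wavelength_nm]

# Equal raw pixel values at 550 nm and 1000 nm do NOT mean equal
# incident energy: at 1000 nm only 10% of it was converted.
vis = true_intensity(120, 550)    # 120 / 0.60 = 200
nir = true_intensity(120, 1000)   # 120 / 0.10 = 1200

ndvi = (nir - vis) / (nir + vis)  # = 1000 / 1400, about 0.71
print(ndvi)
```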
2nd: As I stated, on a common camera there is no software option to separate NIR, R, G, and B from the same shot, because you need to physically mount or remove the IR-pass filter.
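A minimal sketch of the resulting two-shot workflow, assuming a fixed camera (e.g. on a tripod) so the two frames align: one photo without the IR-pass filter for VIS and one with it for NIR. The file names and channel choices are my assumptions; taking the red channel of the unfiltered shot as the visible band is a common DIY NDVI convention, not something the camera does for you.

```python
import numpy as np
from PIL import Image

# Shot 1: no IR-pass filter -> visible light (assumed file name).
# Shot 2: IR-pass filter mounted -> only NIR reaches the sensor.
vis_img = np.asarray(Image.open("shot_no_filter.jpg"), dtype=float)
nir_img = np.asarray(Image.open("shot_ir_pass.jpg"), dtype=float)

vis = vis_img[:, :, 0]  # red channel as the VIS band (DIY convention)
nir = nir_img[:, :, 0]  # with an IR-pass filter every channel is NIR

# NDVI per pixel; small epsilon avoids division by zero in dark areas.
ndvi = (nir - vis) / (nir + vis + 1e-6)

# Map NDVI from [-1, 1] to [0, 255] and save as a grayscale image.
Image.fromarray(((ndvi + 1.0) * 127.5).astype(np.uint8)).save("ndvi.png")
```

For quantitative work you would also apply the QE correction from the previous sketch to both bands before forming the ratio.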
Some alternatives:
EDIT 2: short clarification about false-color images