# Depth map to Normal map conversion

Hi there, I'm trying to convert a depth map to a normal map. I found this answer: link, but I must be doing something wrong. Here is my code:

```python
h, w = np.shape(grayImg)
normals = np.zeros((h, w, 3))

for x in range(1, h - 1):
    for y in range(1, w - 1):

        dzdx = (float(depthValue(grayImg[x + 1, y])) - float(depthValue(grayImg[x - 1, y]))) / 2.0
        dzdy = (float(depthValue(grayImg[x, y + 1])) - float(depthValue(grayImg[x, y - 1]))) / 2.0

        d = (-dzdx, -dzdy, 1.0)

        n = normalizeVector(d)

        normals[x, y] = n * 0.5 + 0.5

normals *= 255
normals = normals.astype('uint8')

plt.imshow(normals)
plt.show()

normals = cv2.cvtColor(normals, cv2.COLOR_BGR2RGB)

cv2.imwrite("images/N_{0}".format(fileName), normals)
```


The function depthValue() does nothing more than return the grayscale value of a pixel. (I also tried converting every pixel value to a non-linear value, as in this link, but the effect was even worse.)

Here is normalizeVector():

```python
def normalizeVector(v):
    length = np.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
    v = v / length
    return v
```
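For reference, the same normalization can be written with np.linalg.norm; this is just a sketch of that variant (the snake_case name normalize_vector is mine), which also guards against a zero-length input that the loop version would divide by:

```python
import numpy as np

def normalize_vector(v):
    """Return v scaled to unit length; a zero vector is returned unchanged."""
    v = np.asarray(v, dtype=float)
    length = np.linalg.norm(v)  # sqrt of the sum of squared components
    return v if length == 0 else v / length
```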


Here are my results:

The colors are in the wrong order and much paler than in the original output image from the link above. Could you tell me what I am doing wrong?


plt.imshow() expects RGB, cv::imshow expects BGR. Also, try NOT to convert to RGB before imwrite() (which also expects BGR).

(2017-05-22 01:19:59 -0500)


So the issue with the colors being paler comes down to where they are centered:

```python
normals[x, y] = n * 0.5 + 0.5
```

This centers the normals around neutral gray. It isn't noticeable in the example because none of the normals face away from the viewer; in your display those would appear dark and colored, whereas in the other display each channel would clip at zero and look very strange.

As bearak says, the swapped channel order comes down to OpenCV and matplotlib using different BGR/RGB conventions.
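For what it's worth, here is a vectorized sketch of the same conversion with both points in mind. The helper name depth_to_normals is mine, and it assumes the depth map is already a plain 2-D array (i.e. depthValue is the identity); np.gradient computes the same central differences as the explicit loops:

```python
import numpy as np

def depth_to_normals(depth):
    """Convert a (h, w) depth map to an (h, w, 3) uint8 normal map.

    Normals are encoded as n * 0.5 + 0.5, mapping [-1, 1] to [0, 255].
    """
    depth = depth.astype(float)
    # Central differences in the interior, matching (z[i+1] - z[i-1]) / 2;
    # axis 0 is the x (row) index in the original loop.
    dzdx, dzdy = np.gradient(depth)
    # Per-pixel normal direction (-dz/dx, -dz/dy, 1), then normalize.
    normals = np.dstack((-dzdx, -dzdy, np.ones_like(depth)))
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    # The channels here are in (x, y, z) order: plt.imshow reads them as RGB,
    # while cv2.imwrite reads them as BGR, so skip any extra cvtColor and
    # reverse the channel axis yourself if you need the other convention.
    return ((normals * 0.5 + 0.5) * 255).astype(np.uint8)
```

On a flat depth map every normal points straight at the viewer, so every pixel encodes to the neutral (127, 127, 255).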

