One normalises data when:

* one wants to convert the data to a range that is easy to understand
* an algorithm or a function expects its input in a certain range
* a wider range of values (and larger data types) has been used temporarily
* etc.
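As a concrete sketch of the first two points, here is a minimal min-max rescaling helper (the function name `rescale` and its defaults are illustrative, not from any particular library):

```python
def rescale(values, new_min=0.0, new_max=1.0):
    """Linearly map values from their own range into [new_min, new_max]."""
    old_min, old_max = min(values), max(values)
    if old_max == old_min:
        raise ValueError("all values are equal; the source range is zero")
    scale = (new_max - new_min) / (old_max - old_min)
    return [new_min + (x - old_min) * scale for x in values]

print(rescale([10, 20, 30]))  # -> [0.0, 0.5, 1.0]
```

The same mapping can target any range an algorithm expects, e.g. `rescale(data, -1.0, 1.0)`.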
Of course, the conversion is most often lossy due to rounding, so it is best performed as late as possible. And of course, giving different numbers to an algorithm produces different results, unless the algorithm outputs some relative result. The question is always: how is the data used next and later...

...But mathematically, the magnitude of a unit normal vector is 1, so a vector should be normalised to that magnitude, and by definition the magnitude of the source data should not matter.
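That last point can be sketched as follows: dividing a vector by its own magnitude yields a unit vector, regardless of how large the source vector was (the helper name `normalized` is an assumption for illustration):

```python
import math

def normalized(v):
    """Scale vector v so its magnitude becomes 1 (a unit vector)."""
    magnitude = math.sqrt(sum(x * x for x in v))
    if magnitude == 0:
        raise ValueError("cannot normalise a zero vector")
    return [x / magnitude for x in v]

u = normalized([3.0, 4.0])
print(u)  # -> [0.6, 0.8], whose magnitude is 1
print(normalized([300.0, 400.0]))  # same direction, same unit result
```

Both inputs map to the same unit vector, which is why the magnitude of the source data does not matter.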