@Eduardo and @StevenPuttemans are completely correct! I'll add some details below.
It turns out that color matching functions are a capture-device and display-device problem. Have a look at this relevant section of the sRGB wiki page:
Due to the standardization of sRGB on the Internet, on computers, and on printers, many low- to medium-end consumer digital cameras and scanners use sRGB as the default (or only available) working color space. As the sRGB gamut meets or exceeds the gamut of a low-end inkjet printer, an sRGB image is often regarded as satisfactory for home use. However, consumer-level CCDs are typically uncalibrated, meaning that even though the image is being labeled as sRGB, one can't conclude that the image is color-accurate sRGB.
OpenCV can be used to convert to and from color spaces, or to derive color space coordinates, with the right math. But OpenCV does not have a "default" set of color matching functions. If your camera captures in space X (say, CIE XYZ) and your display device or printer also displays in space X, then OpenCV can process the data in between without any regard to the color spaces. However, if the display device works in a different space Y (say, Adobe RGB), then OpenCV can be used to translate the image from space X to space Y, as sketched below.
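To make that last point concrete, here is a minimal sketch using the Python bindings (cv2). It reads a nominally sRGB image (the filename "photo.jpg" is just a placeholder), converts it to CIE XYZ with the built-in cv2.COLOR_BGR2XYZ code, and then maps XYZ to Adobe RGB with an explicit matrix multiply, since OpenCV has no built-in Adobe RGB conversion. The plain 2.2 gamma approximation and the XYZ-to-Adobe-RGB matrix values are my assumptions and should be double-checked against a colorimetry reference. Also note that cvtColor applies only the matrix, not the sRGB transfer curve, which is why the image is linearized first.

```python
import cv2
import numpy as np

img = cv2.imread("photo.jpg")                 # 8-bit BGR, nominally sRGB (placeholder file)

# 1) Undo the sRGB gamma (approximated here with a plain 2.2 power curve).
linear = (img.astype(np.float32) / 255.0) ** 2.2

# 2) Linear sRGB -> CIE XYZ using OpenCV's built-in matrix (no gamma handling inside).
xyz = cv2.cvtColor(linear, cv2.COLOR_BGR2XYZ)

# 3) CIE XYZ -> linear Adobe RGB (1998) via an explicit matrix multiply.
#    Matrix values are the commonly published D65 numbers -- verify before serious use.
XYZ_TO_ADOBE = np.array([[ 2.0413690, -0.5649464, -0.3446944],
                         [-0.9692660,  1.8760108,  0.0415560],
                         [ 0.0134474, -0.1183897,  1.0154096]], dtype=np.float32)
adobe_linear = np.clip(xyz @ XYZ_TO_ADOBE.T, 0.0, 1.0)

# 4) Apply the Adobe RGB gamma (approximately 2.2) and go back to 8-bit.
adobe = (adobe_linear ** (1.0 / 2.2) * 255.0).astype(np.uint8)
```

The point of the sketch is that the "color management" part is your responsibility: OpenCV gives you the matrix conversions (or you write your own), but it never tracks which space an image is in.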
As far as I know, the type of RGB space doesn't matter, since each channel is encoded as values in [0-255]. Isn't it only when the image is displayed that the RGB space matters?
Like @Eduardo said, for OpenCV it does not matter. The RGB space used is defined by your camera sensor and its sensitivity to each color channel. OpenCV just processes the raw data that it receives.