OK, I switch on my crystal ball and it tells me that you either call contourArea() on your whole image, or you call contourArea() on the whole vector of found contours. contourArea() expects a single contour, so use it like this:
// This should include most of the OpenCV functions
#include <opencv2/opencv.hpp> // When using OpenCV 3
//#include <opencv2/imgproc/imgproc.hpp> // When using OpenCV 2
using namespace cv;

// findContours() needs an 8-bit single-channel image, so load as grayscale
Mat img = imread("My/image/path.bmp", IMREAD_GRAYSCALE);
Mat imgThresh;
double nThreshold = 128; // pick a threshold value that fits your image
threshold(img, imgThresh, nThreshold, 255, THRESH_BINARY);
std::vector<std::vector<Point>> myContours; // one vector<Point> per found contour
findContours(imgThresh, myContours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE, Point(0,0));
for(size_t i = 0; i < myContours.size(); i++) {
    double area = contourArea(myContours[i]); // contourArea() takes one contour, not the whole vector
}
The downvote from me is for making me switch on the crystal ball. Next time, include some example code in your question.
Edit: Added #include and using namespace to code snippet