First, I strongly suggest using the Raspberry Pi 2 instead of the B+. It costs the same, has the same layout, uses less power, and is more than 5x faster. It's much better at multitasking, too.
The Raspberry Pi B+ has very limited processing power, and I definitely wouldn't use it for real-time image processing applications.
Then, you should find the right balance between robustness and processing time. An algorithm that detects red patches in the image is fast and easy to implement, but your robot might then stop at any red object.
As Pedro said, first reduce the image size. Then process only the part of the image you actually need: to detect a line on the ground, for example, use only the bottom part of the frame.
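A minimal sketch of both ideas, using only NumPy slicing on a synthetic frame (the frame shape and the 2x skip factor are just example values; with OpenCV you would use cv2.resize instead, which also filters properly):

```python
import numpy as np

# Synthetic 240x320 RGB frame standing in for a camera image
frame = np.zeros((240, 320, 3), dtype=np.uint8)

# Keep only the bottom third, where a line on the ground would appear
roi = frame[frame.shape[0] * 2 // 3:, :]

# Crude 2x downscale by keeping every other pixel in each dimension
small = roi[::2, ::2]

print(roi.shape)    # (80, 320, 3)
print(small.shape)  # (40, 160, 3)
```

Cropping before downscaling means the resize only touches the pixels you care about, so both steps together cut the workload considerably.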
There are some simple and fast techniques you can try:

- HSV color space, to detect a specific color
- thresholding, to binarize the image
- morphological operators, to clean up the binary image
- image moments, to find the area and center of a region
- the Sobel edge detector, to detect contours
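To make the thresholding and image-moments steps concrete, here is a small NumPy-only sketch on a synthetic image (the image contents and threshold value are made up; in a real pipeline the binarization would typically come from an HSV color range, e.g. cv2.inRange after cv2.cvtColor, and the cleanup from cv2.morphologyEx):

```python
import numpy as np

# Tiny synthetic frame: dark background with one bright 4x4 patch
img = np.zeros((10, 10), dtype=np.uint8)
img[3:7, 5:9] = 200

# Thresholding: binarize the image
binary = (img > 128).astype(np.uint8)

# Image moments: area (m00) and centroid (m10/m00, m01/m00)
ys, xs = np.nonzero(binary)
m00 = int(binary.sum())  # area of the detected region
cx = xs.mean()           # centroid x
cy = ys.mean()           # centroid y

print(m00, cx, cy)  # 16 6.5 4.5
```

The centroid is exactly what you would feed into a line-following or target-tracking controller, and all of these operations are cheap enough for a small board.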
Then identify where you need more robustness, and use more complex algorithms only on that part of your detection process.
The first go-to optimization in image processing is downscaling your working image. Test your algorithm at various image sizes and choose the one that gives a good balance between functionality and performance.
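To see why this matters, note that per-frame work scales with the pixel count, so halving both dimensions cuts the workload by roughly 4x (the resolutions below are just example candidates):

```python
# Pixel counts at a few candidate resolutions: each halving of
# both dimensions divides the per-frame workload by about 4
sizes = [(640, 480), (320, 240), (160, 120)]
counts = [w * h for w, h in sizes]

for (w, h), n in zip(sizes, counts):
    print(f"{w}x{h}: {n} pixels")

print(counts)  # [307200, 76800, 19200]
```

Going from 640x480 to 160x120 is a 16x reduction, which is often the difference between a few frames per second and real-time on a small board.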