It is done using cross-validation if you have a training set. If you don't, you could test your system on simulated data, or on external data that contains a pre-computed / pre-tested recommendation field. If neither is possible, see here and here. Comparing against a baseline model (random recommendations), or checking how users actually interact with the recommendations, can also help.
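To make the baseline comparison concrete, here is a minimal sketch, assuming a hypothetical setup with synthetic user "likes" and precision@k as the metric (the user/item counts, the `likes` data, and the popularity-based stand-in recommender are all illustrative, not from the original answer):

```python
import random

random.seed(0)

# Hypothetical setup: 50 users, 20 items; each user "likes" 6 random items.
n_users, n_items, k = 50, 20, 5
likes = {u: set(random.sample(range(n_items), 6)) for u in range(n_users)}

def precision_at_k(recommend, k=5):
    """Mean fraction of the top-k recommendations each user actually likes."""
    scores = []
    for u in range(n_users):
        recs = recommend(u)[:k]
        scores.append(len(set(recs) & likes[u]) / k)
    return sum(scores) / n_users

# Baseline: random recommendations.
def random_recs(u):
    return random.sample(range(n_items), k)

# Toy stand-in for your recommender: recommend the most-liked items.
popularity = sorted(range(n_items),
                    key=lambda i: -sum(i in likes[u] for u in likes))
def popular_recs(u):
    return popularity[:k]

print("random  P@5:", round(precision_at_k(random_recs), 3))
print("popular P@5:", round(precision_at_k(popular_recs), 3))
```

If your system does not clearly beat the random baseline on a metric like this, the recommendations are probably not adding value.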
A different approach is to check how well your clusters (bad vs. good recommendations) are separated. Various distance metrics are available for such comparisons. See also my article on the elbow rule: the strength of the signal indicates how well your classes are separated. Try different models and select the one with the best discriminating power.
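As an illustration of measuring cluster separation, here is a minimal sketch using the silhouette score from scikit-learn (the two synthetic Gaussian groups standing in for "good" vs. "bad" recommendations, and the choice of k-means and silhouette as the metric, are assumptions for the example, not prescribed by the answer):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)

# Synthetic 2-D features: two groups standing in for
# "good" vs. "bad" recommendations.
good = rng.normal(loc=0.0, scale=0.5, size=(100, 2))
bad = rng.normal(loc=3.0, scale=0.5, size=(100, 2))
X = np.vstack([good, bad])

# A higher silhouette score means better-separated clusters;
# scanning k is the same idea as reading an elbow plot.
for n_clusters in (2, 3, 4):
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(X)
    print(n_clusters, round(silhouette_score(X, labels), 3))
```

The same scoring loop works with any clustering model, so you can run it per candidate model and keep the one with the strongest separation.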
Wow, thank you very much, sir.