In boosting, each data sample is assigned a weight, and you train a sequence of classifiers on the weighted data. Initially, all the weights are equal, but at each step you reduce the weights of the samples that the previous classifier predicted correctly and increase the weights of the samples it misclassified. In this way, each new estimator is forced to pay more attention to the hard, previously misclassified samples, which improves the ensemble's overall performance.
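Below is a minimal sketch of this reweighting loop, roughly in the style of AdaBoost. The dataset, the number of rounds, and the choice of decision stumps as weak learners are illustrative assumptions, not part of the original explanation:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Illustrative toy dataset; any binary classification data would do.
X, y = make_classification(n_samples=200, random_state=0)
y = 2 * y - 1  # relabel to {-1, +1} so the weight update below works

n_rounds = 10
weights = np.full(len(X), 1 / len(X))  # start with equal weights
stumps, alphas = [], []

for _ in range(n_rounds):
    # Train a weak learner (a decision stump) on the weighted samples.
    stump = DecisionTreeClassifier(max_depth=1)
    stump.fit(X, y, sample_weight=weights)
    pred = stump.predict(X)

    # Weighted error of this round's learner (clipped to avoid log(0)).
    err = np.clip(weights[pred != y].sum(), 1e-10, 1 - 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)  # this learner's vote strength

    # Increase weights of misclassified samples, decrease the rest,
    # then renormalize so the weights form a distribution again.
    weights *= np.exp(-alpha * y * pred)
    weights /= weights.sum()

    stumps.append(stump)
    alphas.append(alpha)

# The final prediction is a weighted vote over all the weak learners.
ensemble = np.sign(sum(a * s.predict(X) for a, s in zip(alphas, stumps)))
print("training accuracy:", (ensemble == y).mean())
```

In practice you would rarely hand-roll this loop; scikit-learn's `AdaBoostClassifier` implements the same reweighting scheme, but the sketch makes the weight updates explicit.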