Residual analysis is the examination of the gaps between a model's predictions and the actual data after fitting. Suppose we claimed that inflation would equal the economic growth rate, and one year inflation is 5% while growth is 2%. Our inflation prediction was 2%, so the residual is 5%-2%=3%. Residual analysis then tells us that the residual is large and the model fits the data poorly.
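The calculation above can be sketched in a few lines. The growth and inflation figures below are hypothetical, extending the 5%/2% example to a few years:

```python
# Residuals for the inflation example: the model predicts inflation
# equal to the growth rate, so residual = actual inflation - growth.
growth = [0.02, 0.03, 0.01]      # hypothetical growth rates
inflation = [0.05, 0.02, 0.04]   # hypothetical actual inflation

predictions = growth  # model: predicted inflation = growth rate
residuals = [actual - pred for actual, pred in zip(inflation, predictions)]

# First year: 0.05 - 0.02 = 0.03, i.e. the 3% residual from the text.
print([round(r, 4) for r in residuals])
```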
Classical model testing examines the sum of the residuals (strictly, the sum of their squares, so that positive and negative residuals don't cancel when added). Modern tests take a somewhat different approach, probing for more subtle fitting problems than just large gaps between actual and predicted data. For example, successive residuals might all be positive or all negative, which would mean that the model underestimates for many consecutive data points - such as when the data is large - and overestimates for another set of linked data points - such as when the data is small.
Many well-known modern tests rely on feeding the residuals into a second model and checking whether that model is a good fit. If it is, then the original model has something suspicious about it. For example, with residuals that are positive when the data is large, we would expect a model such as residual = data - 10 to fit well, since large data values (above 10, say) would have positive residuals and small values would have negative residuals.
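One way to sketch this is to regress the residuals on the data with ordinary least squares: a slope clearly different from zero means the residuals depend on the data's size, exactly the suspicious pattern described above. All numbers here are hypothetical:

```python
# Feed residuals back into a second model: residual = slope * data + intercept.
# A clearly nonzero slope means residuals grow with the data, i.e. the
# original model systematically misses in a data-dependent way.
data = [2.0, 5.0, 8.0, 12.0, 15.0]
residuals = [-1.5, -0.9, 0.2, 1.1, 1.6]  # positive when data is large

n = len(data)
mean_x = sum(data) / n
mean_y = sum(residuals) / n

# Ordinary least squares slope and intercept for residual vs data.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(data, residuals))
         / sum((x - mean_x) ** 2 for x in data))
intercept = mean_y - slope * mean_x

print(round(slope, 4), round(intercept, 4))
```

Here the fitted line crosses zero near data = 8, playing the role of the "data - 10" threshold in the text: below it residuals come out negative, above it positive.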