Normalisation is a digital signal processing function available in most digital audio editing software. It scans through the programme material for the highest level, and if that level falls short of the maximum the format allows, the software boosts the overall signal so that the peak hits the highest level possible. For example, suppose you record a track of music and the highest peak registers at 6dB below the maximum available headroom. Normalisation brings the entire track up by 6dB. (Incidentally, most normalisation functions allow normalising to some percentage of the maximum available level; it needn't always be 100%.) There are a couple of problems though:
• Because normalisation boosts the entire signal, the noise floor comes up as well.
• Excessive use of amplitude-changing audio processes such as normalisation on linear, non-floating-point digital systems can cause so-called 'round-off errors' that, if allowed to accumulate, impart a 'fuzzy' quality to your sound. If you're going to normalise, it should be the very last process -- don't normalise, then add EQ, then change the overall level, and then re-normalise, for example.
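The basic process is easy to sketch in code. This is a minimal illustration, not any particular editor's implementation: the function name is mine, and it assumes floating-point samples in the -1 to +1 range, where 1.0 corresponds to the maximum available level (0dBFS).

```python
import numpy as np

def normalise_peak(samples: np.ndarray, target: float = 1.0) -> np.ndarray:
    """Scale the whole signal so its highest peak lands at `target`.

    `target` is a fraction of full scale: 1.0 normalises to 100%,
    0.5 to 50%, and so on.
    """
    peak = np.max(np.abs(samples))
    if peak == 0:
        return samples  # silence: nothing to normalise
    return samples * (target / peak)

# A signal peaking roughly 6dB below full scale (amplitude ~0.5)...
signal = 0.5 * np.sin(np.linspace(0, 2 * np.pi, 1000))
boosted = normalise_peak(signal)

# ...now peaks at full scale; the gain applied works out to about +6dB.
gain_db = 20 * np.log10(np.max(np.abs(boosted)) / np.max(np.abs(signal)))
```

Note that the whole array is multiplied by one gain factor, which is exactly why the noise floor comes up along with everything else.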
If you do need to normalise, think carefully about whether to use Peak or RMS (average level) detection. I tend to find that RMS (Root Mean Square) works best on long audio files with varying peaks and troughs, while Peak works well on one-shot samples such as drum hits.
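The difference between the two measurements is easy to demonstrate. In this sketch (function names and the target level are my own, purely for illustration), a spiky drum-hit-like signal measures high on a Peak detector but low on RMS, which is why the two methods produce very different gains on the same material:

```python
import numpy as np

def peak_level(samples: np.ndarray) -> float:
    """Highest instantaneous level in the signal."""
    return float(np.max(np.abs(samples)))

def rms_level(samples: np.ndarray) -> float:
    """Root Mean Square: square the samples, average, take the square root."""
    return float(np.sqrt(np.mean(samples ** 2)))

def normalise_rms(samples: np.ndarray, target_rms: float = 0.1) -> np.ndarray:
    """Scale the signal so its average (RMS) level hits `target_rms`."""
    return samples * (target_rms / rms_level(samples))

# Sustained material: a steady tone has peak and RMS fairly close together.
sustained = 0.5 * np.sin(np.linspace(0, 2 * np.pi, 1000))

# Spiky material: mostly silence with a few loud hits has a high peak
# but a very low RMS, so RMS-based normalisation would boost it heavily.
spiky = np.zeros(1000)
spiky[::100] = 0.9
```

Peak normalisation would leave the spiky signal almost untouched (its peak is already near full scale), whereas RMS normalisation judges it by its low average level, so the choice of detector matters a great deal on material like this.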