The short answer is that you need much more than one algorithm. Good chord recognition methods could more aptly be described as "systems", but usually they are indeed based on an initial transform to the frequency domain (most often DFT).
If you want a chord representation of the song similar to this
C G Am F7 F6 C ...
then this is actually a problem that is slightly removed from recognising the notes in a piece of audio. In fact, there are two problems (roughly speaking; a small code sketch of both follows the list):
finding which pitches are present at any time
grouping these pitches over time so as to be able to assign a chord label to a time interval.
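To make the two steps concrete, here is a minimal sketch in plain NumPy, assuming a mono signal y sampled at sr Hz and a toy vocabulary of the 24 major/minor triads (the function names, frame sizes and binary templates are my own simplifications, not taken from any particular system): a DFT-based chromagram answers the first question per frame, and template matching gives a naive per-frame chord label.

import numpy as np

def chromagram(y, sr, n_fft=4096, hop=2048):
    """Step 1: which pitch classes are present in each frame (a 12-bin chroma)."""
    n_frames = 1 + (len(y) - n_fft) // hop
    freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
    valid = (freqs >= 55.0) & (freqs <= 2000.0)        # rough musical range
    midi = 69 + 12 * np.log2(freqs[valid] / 440.0)     # frequency -> MIDI pitch number
    pc = np.round(midi).astype(int) % 12               # fold onto the 12 pitch classes
    window = np.hanning(n_fft)
    chroma = np.zeros((12, n_frames))
    for t in range(n_frames):
        mag = np.abs(np.fft.rfft(y[t * hop:t * hop + n_fft] * window))[valid]
        np.add.at(chroma[:, t], pc, mag)               # accumulate magnitude per class
    return chroma / (chroma.max() + 1e-9)

def chord_templates():
    """Binary templates for the 24 major/minor triads (a toy chord vocabulary)."""
    pcs = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']
    names, templates = [], []
    for root in range(12):
        for suffix, intervals in [('', (0, 4, 7)), ('m', (0, 3, 7))]:
            vec = np.zeros(12)
            vec[[(root + i) % 12 for i in intervals]] = 1.0
            names.append(pcs[root] + suffix)
            templates.append(vec / np.linalg.norm(vec))
    return names, np.array(templates)

def frame_chords(chroma):
    """Step 2 (naive version): pick the best-matching chord independently per frame."""
    names, T = chord_templates()
    scores = T @ chroma                     # similarity of every template to every frame
    return [names[i] for i in scores.argmax(axis=0)], scores

Real systems do not stop at the naive per-frame decision; that is where the probabilistic smoothing discussed below comes in.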
It turns out that the way you transform from the time domain (normal audio) to the frequency domain (a spectral representation) is only of limited importance. What you do afterwards matters much more, and sophisticated probabilistic models (similar to those used in speech recognition: HMMs, DBNs, ...) are often used to tackle this problem.
Try Google Scholar with queries like "chord transcription", "chord detection", or "chord labelling" for advanced research in this area.
Most of these approaches use a discrete Fourier transform (DFT) to create the initial spectrogram. The further processing also tends to differ only slightly between systems, mainly in the time-series smoothing technique used: hidden Markov models, dynamic Bayesian networks, support vector machines (SVMstruct), and conditional random fields, among others.
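As an illustration of that smoothing step, here is a bare-bones Viterbi decode over a hand-built transition matrix (the simplest HMM-style variant, with made-up probabilities; it reuses the per-frame scores from the sketch above and is not any specific published model):

import numpy as np

def viterbi_smooth(scores, self_prob=0.9):
    """Smooth per-frame chord scores over time with a simple HMM-style Viterbi decode.

    scores: (n_chords, n_frames) non-negative similarities, e.g. from frame_chords().
    self_prob: probability of staying on the same chord between frames; the higher
    it is, the fewer spurious chord changes survive.
    """
    n_chords, n_frames = scores.shape
    # Hand-built transition matrix: prefer staying put, allow any change uniformly.
    trans = np.full((n_chords, n_chords), (1.0 - self_prob) / (n_chords - 1))
    np.fill_diagonal(trans, self_prob)
    log_trans = np.log(trans)
    # Treat normalised scores as emission likelihoods (a crude but common shortcut).
    emit = scores / (scores.sum(axis=0, keepdims=True) + 1e-9)
    log_emit = np.log(emit + 1e-9)

    delta = np.zeros((n_chords, n_frames))      # best log-probability ending in each chord
    back = np.zeros((n_chords, n_frames), dtype=int)
    delta[:, 0] = log_emit[:, 0] - np.log(n_chords)   # uniform prior over chords
    for t in range(1, n_frames):
        cand = delta[:, t - 1][:, None] + log_trans   # (previous chord, current chord)
        back[:, t] = cand.argmax(axis=0)
        delta[:, t] = cand.max(axis=0) + log_emit[:, t]

    # Backtrack the best path of chord indices.
    path = np.zeros(n_frames, dtype=int)
    path[-1] = delta[:, -1].argmax()
    for t in range(n_frames - 2, -1, -1):
        path[t] = back[path[t + 1], t + 1]
    return path

Collapsing consecutive identical indices in the returned path then yields one chord label per time interval, which is exactly the second subproblem from above.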
The most advanced transcribers use automatic tuning, key information, bass note information, and information about the metric position to improve the results. My thesis (Chapter 2) gives a nice overview.
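To give a flavour of one of those refinements, here is a rough sketch of global tuning estimation (again my own simplification, not a published method): measure how far prominent spectral peaks sit from the nearest equal-tempered pitch and average the deviation, so that the chroma mapping can be shifted accordingly.

import numpy as np

def estimate_tuning_cents(y, sr, n_fft=8192, hop=4096):
    """Estimate a global tuning deviation in cents relative to A = 440 Hz.

    Collects magnitude-weighted deviations of local spectral peaks from the
    nearest equal-tempered pitch and returns their weighted average.
    """
    freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
    window = np.hanning(n_fft)
    devs, weights = [], []
    for start in range(0, len(y) - n_fft, hop):
        mag = np.abs(np.fft.rfft(y[start:start + n_fft] * window))
        # Crude peak picking: bins that beat both neighbours and a magnitude floor.
        peaks = np.where((mag[1:-1] > mag[:-2]) &
                         (mag[1:-1] > mag[2:]) &
                         (mag[1:-1] > 0.1 * mag.max()))[0] + 1
        for p in peaks:
            f = freqs[p]
            if 55.0 <= f <= 2000.0:
                midi = 69 + 12 * np.log2(f / 440.0)
                devs.append(100.0 * (midi - round(midi)))   # cents off the pitch grid
                weights.append(mag[p])
    return float(np.average(devs, weights=weights)) if devs else 0.0

A full system would then apply this offset when folding frequencies into pitch classes, and would similarly condition the chord decisions on the key, the bass note, and the metrical position.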
Open source chord detection algorithms:
Hope this helps.