As I mentioned in an earlier post, the waters get a lot deeper here, compared to the much simpler low-to-mid crossovers. Why? Two reasons:
1) The wavelength of 1 kHz is about 13.6 inches (sound travels at 345 m/s). This means polar patterns become much more important than at lower frequencies, where drivers are usually closer together than one-half wavelength. When drivers are one wavelength - or more - apart, you start getting lobing in the polar pattern, just like a TV or FM antenna. In addition, the phase angle between the drivers becomes important, since it steers the main lobe up or down - and changes in the polar pattern with frequency are very audible as a “phasey”, electronic-sounding, or artificial coloration. (The first sketch after this list puts rough numbers on the wavelength and lobe-steering geometry.)
2) Remember the Fletcher-Munson curves? They apply to the audibility of frequency-response variations, and of distortion. A 2 dB variation in frequency response at 200 Hz is less audible than a 0.5 dB variation at 2 kHz - in fact, normal listening rooms have 5 to 10 dB variations in the 100 to 500 Hz region, and we take these for granted. At higher frequencies, room modes are spaced so closely that they no longer affect the direct-sound response, so we are mainly hearing the speaker itself.
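Here is a minimal sketch of the arithmetic behind point 1), assuming a simple two-source model and the same 345 m/s figure; the 0.25 m spacing, 90-degree phase offset, and 2 kHz test frequency are illustrative values, not design recommendations.

```python
import math

C = 345.0  # speed of sound in m/s (value used in the post)

def wavelength_m(freq_hz):
    """Wavelength in metres at a given frequency."""
    return C / freq_hz

def lobe_tilt_deg(spacing_m, phase_deg, freq_hz):
    """Tilt of the main lobe (degrees off-axis) for two sources spaced
    spacing_m apart and driven with an electrical phase offset of phase_deg.
    The main lobe forms where the path-length difference cancels the
    electrical phase: spacing * sin(theta) = (phase / 360) * wavelength."""
    s = (phase_deg / 360.0) * wavelength_m(freq_hz) / spacing_m
    return math.degrees(math.asin(max(-1.0, min(1.0, s))))

wl = wavelength_m(1000.0)
print(f"1 kHz wavelength: {wl:.3f} m = {wl / 0.0254:.1f} inches")

# Illustrative case: drivers 0.25 m apart, 90 degrees of phase shift at 2 kHz
print(f"Main lobe tilted by about {lobe_tilt_deg(0.25, 90.0, 2000.0):.1f} degrees")
```

With those assumed numbers the main lobe tilts by roughly ten degrees - enough to move the listener from the top of a lobe to the edge of one, which is exactly the kind of frequency-dependent pattern change described above.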
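And a rough illustration of the mode-spacing claim in point 2): the estimate below uses only the leading (volume) term of the classic rectangular-room mode-count formula, and the 5 x 4 x 3 m room is an assumption chosen purely for illustration.

```python
import math

def modes_per_hz(volume_m3, freq_hz, c=345.0):
    """Approximate modal density dN/df (modes per Hz) from the leading
    volume term of the rectangular-room mode-count formula."""
    return 4.0 * math.pi * volume_m3 * freq_hz ** 2 / c ** 3

volume = 5.0 * 4.0 * 3.0  # assumed 60 m^3 listening room
for f in (100, 200, 500, 2000):
    d = modes_per_hz(volume, f)
    print(f"{f:5d} Hz: {d:6.2f} modes/Hz, average spacing ~{1.0 / d:.2f} Hz")
```

In that assumed room the average mode spacing shrinks from several Hz at 100 Hz to a small fraction of a Hz by 2 kHz - the modes overlap into a continuum, so the direct sound from the speaker dominates what we hear.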
Even more important than frequency-response variations, the audibility of distortion follows the Fletcher-Munson curves as well. This makes IM and harmonic distortion in the critical 1 to 5 kHz region far more audible than the same distortion at lower frequencies. Audibility of noise, for example, peaks in the 5 to 8 kHz region, which is why the dbx and Dolby companders use frequency-selective curves - as do FM broadcast de-emphasis, NAB tape-recording equalization, and the RIAA recording and playback curves. The audibility of noise and distortion in the upper-mid frequencies has been known since the Twenties, thanks to Bell Labs research on telephone transmission.
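As one concrete example of those frequency-selective weightings, here is a sketch of the 75-microsecond FM de-emphasis curve (a first-order roll-off with its corner near 2.1 kHz). The RIAA playback curve works in the same spirit with three time constants (3180, 318, and 75 microseconds); only the simpler FM case is computed here.

```python
import math

TAU = 75e-6  # 75-microsecond de-emphasis time constant (North American FM)

def deemphasis_db(freq_hz, tau=TAU):
    """Attenuation in dB of a first-order de-emphasis network at freq_hz."""
    wt = 2.0 * math.pi * freq_hz * tau
    return -10.0 * math.log10(1.0 + wt * wt)

for f in (1000, 2122, 5000, 10000, 15000):
    print(f"{f:6d} Hz: {deemphasis_db(f):6.2f} dB")
```

The curve is down about 3 dB at 2.1 kHz and roughly 17 dB at 15 kHz, which is why the matching pre-emphasis at the transmitter pushes the program material up exactly where the ear is most sensitive to hiss.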