Will, thanks for the video. Gonna take a little time and reading to fully wrap my head around this. Seems like the example around 3:00 is what I'm most curious about - why is the filter causing ringing, regardless of which mode it is using?
I get confused with DSP when pondering a synthesized signal vs. a recorded signal. Shifting phase in a synthesized signal is simple: you just change that parameter in the algorithm (usually represented by theta, if I remember correctly). But how do you do that on a recorded signal? Do you Fourier transform it into a collection of sinusoids, which you can then manipulate the same way as synthesized sine waves? Of course, in a real signal those sinusoids aren't going to be continuous over time, so it seems like any adjustment to them could introduce artifacts.
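To make that concrete, here's a minimal sketch of what I mean by "Fourier transform it and rotate the phases" (Python/numpy; the shift_phase helper is just my own toy, and it assumes you have the whole recording in memory, which a real-time box obviously doesn't):

```python
import numpy as np

def shift_phase(x, phase_rad):
    """Rotate the phase of every frequency component of a real signal.
    This is the naive offline version: FFT, rotate, inverse FFT."""
    spectrum = np.fft.rfft(x)
    spectrum *= np.exp(1j * phase_rad)      # same rotation applied to every bin
    return np.fft.irfft(spectrum, n=len(x))

# Example: a 1 kHz tone shifted by 90 degrees turns into a cosine.
fs = 48000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t)
shifted = shift_phase(tone, np.pi / 2)
```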
I feel like I understand linear-phase EQ: rather than attempting the phase adjustment above, you create many copies of the signal, each delayed by a slightly larger amount, until the last copy provides a 180° phase inversion for the lowest frequency you care about (20 Hz?). Then you amplify these copies according to how much filtering you want and mix them against the original signal. For instance, if you wanted to filter out frequencies centered on 1 Hz, you would mix the original signal with only a copy of itself delayed by half a second. To widen the Q, you also boost the copies more and more as their delays get closer and closer to that half second; to narrow the Q, you boost the same way but with phase-inverted copies of those delayed signals.
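Here's a toy version of that delay-and-mix idea (Python/numpy, my own illustration rather than anyone's actual EQ code): mixing in a copy delayed by half the period of the target frequency notches that frequency out.

```python
import numpy as np

fs = 1000                        # sample rate in Hz, arbitrary for this sketch
target = 1.0                     # frequency to notch out
delay = int(fs / (2 * target))   # half of the target's period: 0.5 s = 500 samples

t = np.arange(10 * fs) / fs
x = np.sin(2 * np.pi * target * t) + 0.5 * np.sin(2 * np.pi * 7.3 * t)

x_delayed = np.concatenate([np.zeros(delay), x[:-delay]])
y = 0.5 * (x + x_delayed)        # two-tap "EQ": original plus delayed copy

# The 1 Hz bin drops dramatically. Note this simple comb also notches
# 3 Hz, 5 Hz, ..., which is why a real linear-phase filter needs many
# more delayed copies (taps) with carefully chosen gains.
bin_1hz = int(round(target * len(t) / fs))
print(np.abs(np.fft.rfft(x))[bin_1hz], np.abs(np.fft.rfft(y))[bin_1hz])
```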
This method doesn't have the possible artifacts of the phase shifting mentioned above, but because it has to compensate for all of those delays, it can cause transient smearing, as the video noted.
Which method does the Kemper take? That's a good question. We know the Kemper can center a parametric EQ at 20 Hz, which in the delay scheme above would require half of a 20 Hz period, i.e. a 0.025 s delay. Considering you can use at least six EQs at once in the Kemper, that's a latency of 0.15 s. But the Kemper runs with a fixed latency of about 5 ms (4.95 ms maybe?), I believe. Given that, I would expect that either the KPA always uses minimum-latency filtering via the phase-shifting method, or it can operate either way but must fall back to minimum-latency phase shifting when it's in fixed-latency mode.
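Back-of-envelope version of that arithmetic (a sketch; the 20 Hz floor, the six-EQ count, and the 5 ms figure are my reading of the specs, not measurements):

```python
lowest_hz = 20            # lowest center frequency a Studio EQ can reach
eq_count = 6              # EQs running at once
fixed_latency_s = 0.005   # the KPA's fixed latency, roughly 5 ms

delay_per_eq = 1 / (2 * lowest_hz)     # half of a 20 Hz period = 0.025 s
total_delay = eq_count * delay_per_eq  # 0.15 s if every EQ were linear phase
print(delay_per_eq, total_delay, total_delay > fixed_latency_s)  # 0.025 0.15 True
```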
As the video notes, the main danger of minimum-latency filtering is that it can cause undesirable effects when correlated signals get summed, by creating phasing issues. In the KPA's case there are no correlated signals to worry about - the KPA only deals with the mono guitar signal.
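Here's a toy illustration of why summing correlated copies is the dangerous case (Python/scipy, my own construction, not anything from the video or the KPA): a phase-shifted copy measures flat on its own, but summed with the dry original it carves a notch.

```python
import numpy as np
from scipy import signal

fs = 48000
rng = np.random.default_rng(0)
x = rng.standard_normal(fs)            # stand-in for a recorded mono signal

# Build a 2nd-order allpass around 1 kHz: take the poles of a resonant peak
# filter and use the reversed denominator as the numerator (standard trick).
_, a_peak = signal.iirpeak(1000, Q=2, fs=fs)
b_ap, a_ap = a_peak[::-1], a_peak
y = signal.lfilter(b_ap, a_ap, x)      # flat magnitude, phase shifted near 1 kHz

# Alone, y measures just like x. Summed with the correlated original, the
# roughly 180 degree shift around 1 kHz produces a deep dip (phasing).
mixed = 0.5 * (x + y)
f, p_x = signal.welch(x, fs, nperseg=8192)
_, p_mix = signal.welch(mixed, fs, nperseg=8192)
dip_db = 10 * np.log10(p_mix / p_x)
print(f[np.argmin(dip_db)], dip_db.min())   # dip lands near 1 kHz
```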
So how bad is the phase shifting's effect on the signal? Sounds like experiment time! Fun, my first Kemper experiment. Here are the details.
I will take a couple of samples of sounds that don't neatly fit into a nice continuous signal - maybe a snare drum hit, human speech, and some disconnected instrument cable noise/hum. I'll run these through a blank rig and through a rig with six Studio EQs, the maximum I can put on the Kemper. Each pair of EQs will be designed to be frequency-neutral: one EQ cuts at X Hz while the other boosts inversely, and each pair will focus on a different center frequency.
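For planning, here's a rough offline sketch of what such a cut/boost pair looks like if each Studio EQ were a textbook minimum-phase peaking biquad (Python/scipy; the rbj_peaking helper and the 500 Hz / Q 1 / 6 dB settings are my hypothetical stand-ins, not the Kemper's actual filters):

```python
import numpy as np
from scipy import signal

def rbj_peaking(f0, gain_db, q, fs):
    """Textbook (RBJ audio EQ cookbook) peaking biquad. This is an assumption;
    the Kemper's Studio EQ may or may not be built this way."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    num = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
    den = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
    return num / den[0], den / den[0]

fs = 48000
f0, q, gain = 500, 1.0, 6.0          # hypothetical pair: -6 dB cut, +6 dB boost

b_cut, a_cut = rbj_peaking(f0, -gain, q, fs)
b_boost, a_boost = rbj_peaking(f0, +gain, q, fs)

w, h_cut = signal.freqz(b_cut, a_cut, worN=4096, fs=fs)
_, h_boost = signal.freqz(b_boost, a_boost, worN=4096, fs=fs)
h_pair = h_cut * h_boost             # the cut/boost pair in series

print(np.max(np.abs(20 * np.log10(np.abs(h_pair)))))   # magnitude error, dB
print(np.max(np.abs(np.angle(h_pair))))                # phase error, radians
# For this particular textbook design the pair cancels essentially exactly
# (both numbers are ~0), in phase as well as magnitude. Whether the Kemper's
# Studio EQ pairs behave the same way is part of what the experiment probes.
```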
We should expect the rig with the six EQs to possibly have a bit more noise than the blank rig, although I'm not sure to what degree. Either way, paying attention to the transients in those examples should show the degree of artifacts introduced. I probably won't be able to test this until the weekend, but I will report back.
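One way to put a number on the difference once I have the recordings would be a simple null test, something like this (a sketch; the file names are placeholders and the alignment is rough):

```python
import numpy as np
from scipy import signal
from scipy.io import wavfile

# Null-test sketch for comparing the two captures. The file names are
# placeholders for whatever I end up recording this weekend.
fs_a, blank = wavfile.read("blank_rig.wav")
fs_b, eq6 = wavfile.read("six_eq_rig.wav")
assert fs_a == fs_b

def mono(x):
    x = x.astype(np.float64)
    return x[:, 0] if x.ndim > 1 else x   # keep one channel if the file is stereo

blank, eq6 = mono(blank), mono(eq6)
n = min(len(blank), len(eq6))
blank, eq6 = blank[:n], eq6[:n]

# Line the takes up by cross-correlation in case the EQ'd path is delayed.
lag = np.argmax(signal.correlate(eq6, blank, mode="full")) - (n - 1)
eq6 = np.roll(eq6, -lag)

# Whatever is left after subtraction is what the EQ chain changed:
# phase shift, transient smearing, added noise.
residual = eq6 - blank
null_db = 20 * np.log10(np.sqrt(np.mean(residual**2)) / np.sqrt(np.mean(blank**2)))
print(f"residual relative to blank rig: {null_db:.1f} dB")
```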