It's not clear what you're doing with set_max_delay, and from reading your posts I suspect it's not doing what you think. Let's say you have some logic that runs off an 8 ns clock. You then added SignalTap to tap some of that logic, and you correctly used that same 8 ns clock for SignalTap. The acq_data_in_regs are now clocked by the same clock as your logic, TimeQuest automatically analyzes them against the 8 ns setup relationship just like any other logic, and the clock skew should be small (sub-300 ps). Is that what you're seeing in TimeQuest?
If not, the problem is that you're using the wrong clock for SignalTap. I see this all the time. Users either fix it by changing the clock, or they cut timing on those paths and accept that their SignalTap captures are not 100% trustworthy. (Many issues can still be debugged that way.)
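If you do go the "cut timing" route, it would look roughly like this in your .sdc file. This is a sketch: the wildcard pattern is an assumption, so check the actual register names reported in TimeQuest before using it.

```tcl
# Cut timing to the SignalTap acquisition registers.
# After this, captures are no longer guaranteed to be
# cycle-accurate -- acceptable for many debug scenarios.
set_false_path -to [get_registers {*sld_signaltap*acq_data_in*}]
```

The trade-off is exactly as described above: timing closure gets easier, but any value SignalTap shows you near a transition could be metastable or one cycle off.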
If the setup relationship is correct and the clock skew is low, a set_max_delay is unnecessary. You could still use one to overconstrain the path, say from 8 ns to 6 ns; that isn't "wrong", but it makes the path more likely to fail timing (a path with a 7 ns delay passes at 8 ns but fails at 6 ns). Underconstraining, such as relaxing 8 ns to 10 ns, certainly is wrong: even if the path then passes timing, you can't 100% trust your captures, because the silicon still samples every 8 ns.
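For concreteness, here is what the two cases look like as constraints. The clock name clk_125m (8 ns period) and the register pattern are assumptions for illustration; substitute your own names.

```tcl
# Default: the 8 ns clock already gives an 8 ns setup relationship,
# so no set_max_delay is needed at all.

# Overconstrain to 6 ns: legal, just a tighter requirement.
# A path with 7 ns of delay now fails, even though the hardware
# would actually work at 8 ns.
set_max_delay -from [get_clocks clk_125m] \
              -to   [get_registers {*acq_data_in*}] 6.0

# Underconstrain to 10 ns: WRONG. A 9 ns path "passes" analysis,
# but the silicon still clocks every 8 ns, so captured data
# can violate setup and cannot be trusted.
# set_max_delay -from [get_clocks clk_125m] \
#               -to   [get_registers {*acq_data_in*}] 10.0
```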
If everything is set up correctly, it's unlikely to fail timing. That said, I recently worked on a design that did. We were capturing a lot of signals (about 300) on six instances of the same block, and those six instances were spread across the largest Arria 10 die. So we were tapping locations all over the die and funneling them into M20Ks to offload, and those paths would fail timing. I fixed it by creating a separate SignalTap instance for each of the six blocks we were tapping, and it then easily met timing.