1) I hope you're not expecting clean, glitch-free behavior when switching between clocks, or at least that you've put a lot of analysis into those cases.
2) I hope you're not trying to run at extreme rates. Source-synchronous interfaces rely on the clock and data paths mimicking each other, and they just won't do a great job of that going through 8:1 muxes. (You may ask what counts as extreme; that's device- and design-dependent, but you'll see whether you have problems when you do timing analysis.)
3) Do you invert/phase-shift the clock 180° going off chip, or are you sending the clock and data edge-aligned?
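If you do forward an inverted clock, that choice usually shows up in the constraints as well. A minimal sketch, assuming the forwarded clock leaves through a port I'll call clk_out driven from an input clock I'll call inclk (both names are placeholders, not from your design):

```tcl
# Hypothetical: a generated clock on the clock output port, inverted
# (180 degree shift) so the receiver captures near the center of the
# data eye. For edge-aligned forwarding, drop -invert.
create_generated_clock -name fwd_clk_inv \
    -source [get_ports inclk] \
    -invert \
    [get_ports clk_out]
```

Whether you invert on chip or let the receiver do the centering is a system-level choice; the constraint just has to match what the hardware actually does.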
You'll want to put 8 generated clocks on the port sending the muxed clock off chip, one generated from each input clock. You'll then have 8 set_output_delay assignments, where the -clock for each is the corresponding new generated clock. Why is this important? Because you literally have 8 different interfaces going through these ports, so we're constraining all of them. You'll also want a set_clock_groups command to relate them. For example, say you have 8 input clocks called inclk[7:0]. On the clock output port you'll create 8 generated clocks based on these incoming clocks; call them ssync_clk_out[7:0]. Then you'll have 8 set_output_delay constraints. You'll also have:
set_clock_groups -exclusive -group {inclk[7] ssync_clk_out[7]} \
                            -group {inclk[6] ssync_clk_out[6]} \
                            -group {inclk[5] ssync_clk_out[5]} \
                            ...
                            -group {inclk[0] ssync_clk_out[0]}
This way the path won't get analyzed where ssync_clk_out[5] is sent off chip but inclk[4] clocks the data out. (In your mind this will never happen because the muxes are controlled by the same select lines, but static timing tools don't understand the code's behavior and will analyze all possible paths. This set_clock_groups command is basically telling TQ (TimeQuest) that this combination can never happen.)
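Putting the pieces above together, the per-interface constraints could be sketched like this. This is a sketch only: the port names (inclk[*], clk_out, data_out[*]) and the delay numbers are assumptions, not your design's values, and your tool may also want -master_clock on the generated clocks:

```tcl
# Hypothetical: one generated clock and one pair of output delays per
# input clock, all targeting the same clock output port. -add lets
# multiple generated clocks coexist on one port; -add_delay lets
# multiple output delays coexist on the data ports.
for {set i 0} {$i < 8} {incr i} {
    create_generated_clock -name ssync_clk_out[$i] \
        -source [get_ports inclk[$i]] \
        -add \
        [get_ports clk_out]
    # This interface's board-level setup/hold requirements, referenced
    # to its own generated clock (0.5/-0.3 ns are placeholder numbers)
    set_output_delay -clock ssync_clk_out[$i] -max 0.5 \
        [get_ports data_out[*]] -add_delay
    set_output_delay -clock ssync_clk_out[$i] -min -0.3 \
        [get_ports data_out[*]] -add_delay
}
```

The loop is just Tcl convenience; eight explicit create_generated_clock and set_output_delay lines say the same thing.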