Your output clock generally isn't constrained. There isn't some min/max relationship it needs to meet; it's just used as the clock for the data going out. (As a parallel, you don't specify how quickly your internal clock needs to reach each register.) TQ checks all I/O to see whether they have a set_input_delay/set_output_delay and will let you know if they don't. In this case, your output clock doesn't, and that's all right. (I'm assuming that's the check you're concerned about, as I didn't download the .qar.)
A quick look at your constraints in the post and they look good. The output delays do look a little large, though: your data window is 5 ns, and you're basically saying the external device chews up 4.2 ns of it, leaving only 800 ps for the FPGA. That being said, if it meets timing then that's fine.
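For reference, constraints along those lines might look like the sketch below. This is illustrative only: the port names, clock name, and the exact -max/-min split are assumptions on my part, not taken from your actual .sdc; only the "4.2 ns of a 5 ns window" budget comes from the numbers above.

```tcl
# Hypothetical sketch only -- names and the -max/-min split are assumed.
# A 5 ns data window (200 MHz) where the external device consumes 4.2 ns:
create_clock -name clk -period 5.000 [get_ports clk]

set_output_delay -clock clk -max  3.4 [get_ports out*]
set_output_delay -clock clk -min -0.8 [get_ports out*]

# Window consumed externally = (-max) - (-min) = 3.4 - (-0.8) = 4.2 ns,
# leaving 5.0 - 4.2 = 0.8 ns of the period for the FPGA's output paths.
```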
Be sure to do a
report_timing -setup -detail full_path -to [get_ports out*] -panel_name "s: -> out*"
report_timing -hold -detail full_path -to [get_ports out*] -panel_name "h: -> out*"
One "trick" is to highlight all the numbers in the Inc column except for the launch or latch time (the first line) and the external delay (which comes out of your .sdc). Then point to the highlighted values and a pop-up will tell you their sum. In essence this is the raw delay through the FPGA, ignoring external delays and clock phase-shifts. What you should find is that the setup and hold sums are very close to each other, which is good. Let's say they vary by 300 ps. Then most likely you would have a slack of 100 ps, since you're only giving 400 ps to the setup analysis and 400 ps to the hold analysis.
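The arithmetic behind that check can be sketched as back-of-envelope Tcl. The numbers are the example values from this thread (5 ns window, 4.2 ns external, 300 ps spread); the variable names are mine, and the assumption is that the remaining 800 ps budget is split evenly between the setup and hold analyses.

```tcl
# Back-of-envelope version of the "highlight the Inc column" check.
set window      5.0   ;# ns: the data window (clock period)
set external    4.2   ;# ns: consumed by set_output_delay -max/-min
set fpga_budget [expr {$window - $external}]   ;# 0.8 ns left for the FPGA

set spread      0.3   ;# ns: difference between the setup and hold path sums

# Assuming the budget is centered (0.4 ns to setup, 0.4 ns to hold),
# a 0.3 ns spread leaves roughly 0.1 ns of slack on each analysis:
set slack [expr {$fpga_budget / 2.0 - $spread}]
```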
(In the next release, the timing reports will be able to "roll up" these delay values into single line items for clock delay and data delay. I think this will be much more readable, and if you then want to know why a delay is what it is, you just click the + sign and unroll it. Hopefully that makes the analysis more understandable.)