Add the board trace delays from the upstream device to the FPGA into the input delay. So if those delays are 200 ps max and 100 ps min, the -max and -min values would change to 5.2 ns and 2.1 ns.
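As a sketch, assuming the original constraints were -max 5.0 ns / -min 2.0 ns on a port called din clocked by clk (both names hypothetical), folding the board delays in would look like:

```tcl
# Hypothetical port/clock names -- substitute your own.
# Original external delays assumed to be -max 5.0 / -min 2.0 ns;
# board trace adds 0.2 ns worst case and 0.1 ns best case.
set_input_delay -clock clk -max 5.2 [get_ports din*]
set_input_delay -clock clk -min 2.1 [get_ports din*]
```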
Or are you talking about clock delays? You didn't state how the clock is laid out, i.e. does a clock generator feed both devices, or does the upstream device send a clock along with the data? If it's the first case, you can roll those delays into the external delays too. For example, the max delay would be 5.2 + $Max_clk_delay_to_upstream_chip - $Min_clk_delay_to_FPGA. I add the clock delay to the upstream device because if that delay gets longer, the data just comes out later. We subtract the clock delay to the FPGA because as that delay gets longer, it becomes easier to meet timing, since the FPGA input register latches the data later in time. I paired the specific max and min values to make the -max value the worst case possible.
For the -min, do the opposite, i.e. 2.1 + $Min_clk_delay_to_upstream_chip - $Max_clk_delay_to_FPGA.
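Those two formulas can be written directly in the SDC file. The numbers below are purely hypothetical placeholders for your board's clock distribution delays:

```tcl
# Hypothetical clock-tree delays (ns) -- measure these from your board.
set Max_clk_delay_to_upstream_chip 0.3
set Min_clk_delay_to_upstream_chip 0.2
set Max_clk_delay_to_FPGA          0.4
set Min_clk_delay_to_FPGA          0.3

# -max (setup): latest possible data arrival = 5.2 ns external delay,
# plus the latest clock to the upstream chip, minus the earliest clock
# to the FPGA.
set in_max [expr {5.2 + $Max_clk_delay_to_upstream_chip - $Min_clk_delay_to_FPGA}]

# -min (hold): earliest possible data arrival uses the opposite extremes.
set in_min [expr {2.1 + $Min_clk_delay_to_upstream_chip - $Max_clk_delay_to_FPGA}]

set_input_delay -clock clk -max $in_max [get_ports din*]
set_input_delay -clock clk -min $in_min [get_ports din*]
```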
If the clock is sent along with the data (source-synchronous), please describe the relationship in more detail.
Finally, make sure you look at the analysis in TimeQuest and try to understand it. Run:
report_timing -setup -npaths 100 -detail full_path -from [get_ports din*] -panel_name "s: din -> *"
report_timing -hold -npaths 100 -detail full_path -from [get_ports din*] -panel_name "h: din -> *"
Then analyze the waveform view and the Data Path tab. The launch and latch edges come from your clock constraints, and the iExt delays are your -max and -min values; everything is reported relative to the FPGA. Understanding that view goes a long way toward having confidence in your constraints.