Note that set_max_delay and set_min_delay are with respect to the clock, i.e. clock delays are still taken into account. They are poorly named constraints, as they basically override the clock relationship. For example, if two registers are clocked by a 10ns clock, then the setup relationship is 10ns, and the data has to get there in less than 10ns, including calculations for clock skew. If you put a set_max_delay of 12ns on that path, then you've loosened the clock relationship to 12ns, but the rest of the calculation is the same, i.e. you'll get the same slack as before plus 2ns. (Basically, set_max_delay/set_min_delay override the launch and latch edges.)
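To make that concrete, here's a minimal SDC sketch (the clock and register names are made up for illustration):

```tcl
# 10ns clock: the default setup relationship between reg_a and reg_b is 10ns
create_clock -name clk -period 10.0 [get_ports clk]

# Override the launch/latch relationship to 12ns for this one path.
# Clock skew, uncertainty, etc. are still analyzed the same way, so
# the reported slack just improves by 2ns versus the default analysis.
set_max_delay -from [get_registers reg_a] -to [get_registers reg_b] 12.0
```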
Clocks are used in constraints because of their periodicity. The requirements reference clocks, but they basically say that, after accounting for clock skew, the data has to arrive in less than X delay and more than Y delay; they bound what the delay can be. Clock skew is always taken into account because it naturally has an impact: if two compiles each have a data delay of 6ns, but one has a clock skew of 1ns and the other a skew of 20ns, they will probably behave differently. (I'm not giving enough info to know for sure.)
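A sketch of bounding a path with both values (the port name and numbers are placeholders):

```tcl
# After skew is accounted for, data on this input must arrive in
# less than 8ns and more than 2ns, regardless of the clock period.
set_max_delay -from [get_ports async_in] 8.0
set_min_delay -from [get_ports async_in] 2.0
```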
Most things are a false path, or a multicycle to loosen the constraint. (You can use set_max_delay/set_min_delay if you want, but multicycles have some benefits: they change automatically if you change their clocks, and they account for rise/fall edges.)
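For example, a relaxed two-cycle path might look like this (register names are hypothetical), along with the hold exception that usually goes with it:

```tcl
# Move the setup latch edge out one cycle (two cycles total)...
set_multicycle_path -setup -end 2 -from [get_registers src_reg] -to [get_registers dst_reg]
# ...and pull the hold check back so it isn't analyzed against the moved edge.
set_multicycle_path -hold -end 1 -from [get_registers src_reg] -to [get_registers dst_reg]
```

Unlike a hard-coded set_max_delay, these exceptions scale automatically if you later change the clock period.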
The constant DIP switch is certainly a false path. The UART is most likely a false path too. In theory, if it had a super long delay it might send something, but it would take so long for the receiver to get it that the receiver makes an incorrect decision. In practice delays aren't that long, but I've seen people worry about it and always ask "how do you know?", so they end up putting constraints on these paths and avoiding false paths.
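Sketches of both approaches, with made-up port names — cutting the paths entirely, or, for the nervous, bounding them with a generous number:

```tcl
# Quasi-static input: cut it from analysis entirely
set_false_path -from [get_ports dip_sw[*]]

# UART pins: either cut them...
set_false_path -to [get_ports uart_tx]
set_false_path -from [get_ports uart_rx]
# ...or, if "how do you know" bothers you, bound them loosely instead:
# set_max_delay -to [get_ports uart_tx] 100.0
```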
The one thing that was missing when TimeQuest came out was skew constraints. Note that users ask for them for source-synchronous interfaces, which is not correct, as there is a clock and hence a more accurate way to analyze them. But skew can be independent of a clock and still useful. For example, if I have 8 data lines coming in and am oversampling them, I may not care about any of their raw delays, but I do care whether their skew is greater than my sample window. As long as it isn't, I know any bit's adjacent data is plus or minus one sample, and I can use that in my algorithm. (That's not exactly how it's done, but that's why skew is important.) A skew constraint has been added recently, although it wasn't originally part of the SDC list. That always surprised me, and I was surprised ASIC designers didn't have a constraint like this, unless they just guarantee skew by layout, which seems doable but not that rigid.
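In the Quartus timing analyzer that constraint is set_max_skew; something like the following would cover the oversampling case (the port name and the 3ns window are placeholders):

```tcl
# Don't care about the absolute delay on these 8 lines, only that
# they arrive within 3ns of each other (i.e. inside the sample window).
set_max_skew -from [get_ports data_in[*]] 3.0
```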