Hi,
The Design Space Explorer tool in Quartus will farm out multiple compilations to networked computers (with a built-in light-weight batch system or with LSF). This is the easiest, coarsest version of parallelism. Any time you can take a task and sub-divide it into large, fully independent sub-tasks, parallelism (fine or coarse-grained) works great, and is very scalable. But I don't think this is what you mean.
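To make the coarse-grained case concrete, here's a minimal sketch of the pattern: fully independent tasks (say, one compilation per seed) handed to a pool of workers, with no communication between tasks. The `compile_seed` function is a made-up stand-in for launching a real compile, not anything from Quartus itself:

```python
# Coarse-grained parallelism sketch: one fully independent task per seed.
# "compile_seed" is a hypothetical stand-in for a real compilation job --
# each task depends only on its own input, so it could just as easily run
# on another CPU or another machine, which is why this style farms out well.
from concurrent.futures import ThreadPoolExecutor


def compile_seed(seed: int) -> tuple[int, int]:
    # Stand-in workload: derives a result from the seed alone,
    # with no shared state between tasks.
    result = sum((seed * i) % 97 for i in range(1000))
    return seed, result


def explore_seeds(seeds) -> dict[int, int]:
    # Farm the independent tasks out to a worker pool and collect results.
    # (For real CPU-bound compiles you'd use processes or separate machines,
    # not threads; threads keep this sketch simple and self-contained.)
    with ThreadPoolExecutor() as pool:
        return dict(pool.map(compile_seed, seeds))


if __name__ == "__main__":
    print(explore_seeds(range(4)))
```

The key property is that the pool never has to ship any shared data between workers, which is what makes this style scale across networked computers as well as across CPUs.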
Farming out work to other computers requires coarser-grained parallelism than what is needed to take advantage of multiple CPUs in the same computer. For example, you could imagine that when performing timing analysis, the CAD software could analyse one clock domain on one CPU and another clock domain on another CPU. This is efficient in a system where the CPUs share memory -- the bandwidth from the CPUs to the memory (which contains the timing graph and other important information) is on the order of GB/s. However, if you were to try the same parallelism across computers, suddenly the software would have to send all that memory content (the timing graph) over a link (1 Gb/s, say) first... and this would probably not be worth the effort. On top of this, it's a bit more of a pain to get a program to dole out work to other computers; you require some sort of batch computing environment or computing client, and one machine must manage the scheduling of tasks across the computing resources.
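The shared-memory case above can be sketched in a few lines: several workers analyse different clock domains of one timing graph that lives in memory exactly once, so each worker reads it directly instead of receiving a copy over a network link. The graph layout, domain names, and delays here are invented purely for illustration:

```python
# Shared-memory sketch: workers analyse separate clock domains of a single
# in-memory timing graph.  No worker needs its own copy of the graph --
# the whole point of the shared-memory version of this parallelism.
# The graph structure and numbers below are made up for the example.
from concurrent.futures import ThreadPoolExecutor

# One shared timing graph: clock domain -> list of path delays (ns).
TIMING_GRAPH = {
    "clk_a": [2.1, 3.4, 1.8],
    "clk_b": [4.0, 2.2],
}


def worst_slack(domain: str, period_ns: float) -> float:
    # Reads the shared graph in place; the domains are independent,
    # so they can be analysed concurrently without any copying.
    return min(period_ns - delay for delay in TIMING_GRAPH[domain])


def analyse_all(period_ns: float) -> dict[str, float]:
    # One worker per clock domain, all reading the same graph.
    with ThreadPoolExecutor() as pool:
        futures = {d: pool.submit(worst_slack, d, period_ns)
                   for d in TIMING_GRAPH}
        return {d: f.result() for d, f in futures.items()}
```

Run the same sketch across two machines instead of two threads and `TIMING_GRAPH` would first have to be serialised and shipped over the network, which is exactly the cost that makes fine-grained parallelism a poor fit for farming out.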
I could see that perhaps if your design had multiple partitions (for example, in an incremental design flow) a CAD system could farm out synthesis and fitting for the various pieces to multiple machines. This is in effect what you are doing when you use a "team-based" design methodology -- one engineer is working on one piece on their computer, while another is working on a different piece on another machine.
Regards,
Paul