Some history might help:
The original 10M HDX over UTP included 'link test pulses': a single pulse sent at a fixed rate, whose reception would typically light a little LED; the polarity of the pulses could be used to detect mis-wired cables. Later cards would also auto-detect crossover (by detecting pulses on their tx pair).
100M HDX used a short burst of pulses to indicate support for 100M (as well as 10M); a 10M phy would treat the burst as a normal link pulse, while a 100M phy would detect it and switch to 100M. Old dual-speed hubs were actually two separate hubs (one per speed) connected via a bridge.
With the advent of FDX the burst of link test pulses was modified so that some of the pulses could be missing: the pattern of present and missing pulses encodes a 16-bit word, which enabled specific register values (not only the ANAR) to be passed to the remote PHY. A 100M HDX hub would still treat it as a valid burst and run at 100M.
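A minimal sketch of that encoding (my own illustration, not taken from any driver): a fast-link-pulse burst has 33 pulse positions, 17 fixed clock pulses with 16 data positions between them; a pulse present in a data slot is a 1, a missing pulse is a 0, sent D0 first.

```python
def decode_flp(positions):
    """Decode one FLP burst into its 16-bit link code word.

    positions: 33 booleans, True = pulse seen in that slot.
    Even indices (0, 2, ..., 32) are the 17 clock pulses, which must
    all be present; odd indices carry data bits D0..D15.
    """
    assert len(positions) == 33 and all(positions[0::2]), "malformed burst"
    word = 0
    for bit, present in enumerate(positions[1::2]):
        if present:
            word |= 1 << bit
    return word
```

A burst with every data slot filled decodes to 0xFFFF; one with only the clock pulses decodes to 0, which is how a plain 100M HDX device can still see it as a valid link indication.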
The standard MII phy interface defined 16 registers (it might have been extended). The first 8 are vendor-independent and contain the ANAR and the ANLPAR (the received ANAR value); what they don't contain is the actual operating mode - which can be difficult to determine. There are a lot of devices that don't like being connected to 10/100 hubs.
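Determining the mode means redoing the standard priority-resolution step yourself: AND the advertised and received ability bits and take the highest-priority common one. A sketch, using the standard register 4/5 technology ability bit positions (the priority order is the one from the autonegotiation annex, highest first):

```python
# Technology ability bits shared by ANAR (reg 4) and ANLPAR (reg 5),
# listed highest priority first per the 802.3 resolution order.
ABILITIES = [
    (1 << 8, "100BASE-TX full duplex"),
    (1 << 9, "100BASE-T4"),
    (1 << 7, "100BASE-TX half duplex"),
    (1 << 6, "10BASE-T full duplex"),
    (1 << 5, "10BASE-T half duplex"),
]

def resolve(anar, anlpar):
    """Return the mode both PHYs should have settled on."""
    common = anar & anlpar
    for bit, name in ABILITIES:
        if common & bit:
            return name
    return "no common mode (link came up via parallel detection or forcing)"
```

So a card advertising everything (ANAR 0x01E1) against a partner that only sent the 10BASE-T HD bit resolves to 10M half duplex - and if the "partner" never sent FLP bursts at all (a plain hub), reg 5 stays empty and you're in the parallel-detection case, which is exactly where the troublesome 10/100 hub connections come from.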
For 10M and 100M the link test pulses are sent at regular intervals.
For GbE and higher things are more complicated: once the initial negotiation has happened, the data pairs get connected to a powerful DSP. I suspect that regular link test pulses are no longer sent.