Forum Discussion
Altera_Forum
Honored Contributor
17 years ago

I don't believe the problem is as simple as you would like. I'm sure some of the wiser forum users will override me on this, but I see three issues.

1 - The physical layer. What does a loss of signal look like at the receiver? Does the signal drift to a logic high, a logic low, or somewhere in between? The deserializer is just that: at a high level, it has no concept of signal levels other than 1 or 0, and it will continue to deserialize data regardless of what is found on its input. At the physical layer you really have no concept of loss of signal. Now, in reality the transceiver does have a signal-loss threshold determined by the amplitude of the input signal, but I don't know how you can make use of this.

2 - Depending on your implementation, at some point the clock recovery unit will no longer be able to detect a clock on the input. However, there may be a significant delay between the actual loss of signal and the loss of the recovered clock, so this method will not give you a timely indication.

3 - Your encoding protocol is really the only method of signal detection you have. For example, if you are sending/receiving SDI data, the only way you know that a signal is present is the consistent presence of a predetermined sequence of bits in the data stream. If that sequence of bits is not found in the stream, then you assume there is no signal. In this case, there is a huge time delay between the loss of signal and the awareness that the signal is lost. Certain protocols may give you finer resolution on detecting when a signal was lost, but even then you would have to define signal loss as a bit error in the data stream, and your resolution on determining where in the bitstream that error occurred would depend on your bit-error-detection resolution.

Jake
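The protocol-level approach in point 3 can be sketched in software: watch the deserialized word stream for a known sync sequence and declare loss of signal once that sequence has been absent for longer than a timeout. This is a minimal behavioral model, not real transceiver logic; the TRS-style pattern and the timeout length here are illustrative assumptions, not values from the post.

```python
# Sketch of protocol-based signal detection (point 3): assume the signal
# is present only while the expected sync pattern keeps appearing.
SYNC_PATTERN = (0x3FF, 0x000, 0x000)  # SDI-like timing reference sequence (assumed)
TIMEOUT_WORDS = 16                    # words allowed between sync hits (assumed)


class SignalDetector:
    """Tracks whether a deserialized word stream still carries a signal."""

    def __init__(self, pattern=SYNC_PATTERN, timeout=TIMEOUT_WORDS):
        self.pattern = pattern
        self.timeout = timeout
        self.history = []             # last len(pattern) words seen
        self.words_since_sync = 0
        self.signal_present = False

    def push_word(self, word):
        """Feed one deserialized word; return True while a signal is assumed present."""
        self.history = (self.history + [word])[-len(self.pattern):]
        self.words_since_sync += 1
        if tuple(self.history) == self.pattern:
            self.words_since_sync = 0
            self.signal_present = True
        elif self.words_since_sync > self.timeout:
            # Sync not seen for too long: assume the signal is lost.
            self.signal_present = False
        return self.signal_present


det = SignalDetector()
# Sync followed by payload, then garbage (e.g. line noise after the
# cable is pulled): the detector drops out only after the timeout.
good = [0x3FF, 0x000, 0x000] + [0x123] * 10
noise = [0x155] * 30
states = [det.push_word(w) for w in good + noise]
print(states[2], states[12], states[-1])  # True True False
```

Note how the timeout embodies the delay Jake describes: the detector cannot distinguish "signal lost" from "sync merely late" until the whole window has elapsed, so finer timeout values trade false alarms against detection latency.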
1 - The physical layer. What does a loss of signal look like at the receiver? Does the signal drift to a logic high, low, or somewhere inbetween? The deserializer is just that. At a high level, it has no concept of signal levels other than 1 or 0. It will continue to deserialize data regardless of what is found on its input. At the physical layer you really have no concept of loss of signal. Now in reality, the transceiver does have a signal loss threshold determined by the amplitude of the input signal but I don't know how you can make use of this. 2 - Depending on your implementation, at some point the clock recovery unit is no longer going to be able to detect a clock on the input. However, there may be a significant delay between the actual loss of signal and the loss of recovered clock. This method will not give you 3 - Your encoding protocol is really the only method of signal detection you have. For example, if you are sending/receiving SDI data, the only way you know that a signal is present is by the consistent presence of a predetermined sequence of bits on the data stream. If that sequence of bits is not found in the stream, then you assume there is no signal. In this case, there is a huge time delay between the loss of signal and the awareness that the signal is lost. Certain protocols may give you finer resolution on detecting when a signal was lost. But even then, you would have to define signal loss as a bit error in the data stream. Your resolution on determining where in the bitstream that error occurred would depend on your bit error detection resolution. Jake