Forum Discussion
Altera_Forum
Honored Contributor
14 years ago

Niki,
I am responding now to say thank you for getting back to me and explaining your approach. Once I read it, it made perfect sense, and it seems like what they should have done in the CVI to begin with. We implemented this approach, and it solved our issues too. The input async FIFO shields the CVI from the input clock, and clk_vid (our equivalent of your system clock) is a constant clock that runs the CVI at a fixed rate.

I looked into the CVI code, and there is quite a bit of logic, including a state machine, running off the pixel clock that comes in with the input video. The CVI also uses synchronous resets, so it is not possible to reset the front-end logic without video (and the associated clock) present. Perhaps this is why the logic seems unresponsive to a reset after video loss. The way the CVI is implemented seems very poor to me from a fundamental design perspective, because these clock-crossing and clock-loss scenarios are inherent to many video designs.

Also, in our system we do not transition to an output pattern on loss of video. We have a background pattern that is constantly fed into the chip through another dedicated path, and we switch to that when video is lost. So the fact that the EOF does not occur until new video is detected is not an issue for us. However, I believe the EOF is only written once the FIFO overflows, which happens on a video switch: the last old video line sits in the FIFO, and then the new full video line is written in, creating the overflow scenario we each observed.

At any rate, thanks for all the assistance; it was invaluable.

Thomas
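P.S. For anyone else hitting this: here is a rough software model of the overflow scenario we observed. This is only my mental model of the behavior, not the CVI source; all names and the one-line FIFO depth are illustrative.

```python
from collections import deque

class LineFIFO:
    """Toy model of the CVI input FIFO: fixed depth in pixels.

    The key behavior we believe we observed: the EOF is not emitted
    when video is lost, only later, when a write into a full FIFO
    triggers the overflow path.
    """
    def __init__(self, depth):
        self.depth = depth
        self.buf = deque()
        self.overflowed = False  # stands in for "EOF finally written"

    def write_pixel(self, px):
        if len(self.buf) >= self.depth:
            # Write into a full FIFO: overflow is flagged, and this
            # (we believe) is the point where the EOF gets written.
            self.overflowed = True
        else:
            self.buf.append(px)

# Depth of exactly one video line, for illustration.
fifo = LineFIFO(depth=1920)

# Video is lost: the last old line sits in the FIFO, unread,
# because downstream is still waiting for an EOF that never came.
for px in range(1920):
    fifo.write_pixel(px)
assert not fifo.overflowed  # FIFO exactly full, still no EOF

# New video is detected; a full new line is written on top of it.
for px in range(1920):
    fifo.write_pixel(px)
assert fifo.overflowed  # overflow on the video switch -> EOF at last
```

The point of the model is just the ordering: loss of video alone never trips the overflow, so the stale line (and the missing EOF) persists until the next source starts writing.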