Can you explain exactly what it is that you are trying to achieve?
What are the four video inputs? Four cameras? Something else?
Are you gen-locking the cameras with a sync pulse out from the FPGA?
If you are using cameras and want them to be synchronized, then
the FPGA needs to be the timing master: it must send a gen-lock
sync pulse to each camera, so that they all start exposing simultaneously
and are all read out at exactly the same rate (frames per second).
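As a rough illustration of what that timing-master role looks like, here is a minimal Verilog sketch of a frame-rate gen-lock pulse generator. The module name, clock frequency, and frame rate are assumed example values, not anything from this thread:

```verilog
// Hypothetical sketch: FPGA as timing master, emitting one gen-lock
// pulse per frame period, fanned out to every camera.
module genlock_gen #(
    parameter CLK_HZ = 100_000_000,  // system clock, assumed 100 MHz
    parameter FPS    = 60            // assumed target frame rate
) (
    input  wire clk,
    input  wire rst_n,
    output reg  sync_pulse           // connect to each camera's sync input
);
    localparam PERIOD = CLK_HZ / FPS;
    reg [$clog2(PERIOD)-1:0] cnt;

    always @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
            cnt        <= 0;
            sync_pulse <= 1'b0;
        end else if (cnt == PERIOD - 1) begin
            cnt        <= 0;
            sync_pulse <= 1'b1;      // one-cycle pulse marks each frame start
        end else begin
            cnt        <= cnt + 1;
            sync_pulse <= 1'b0;
        end
    end
endmodule
```

Since all cameras see the same pulse from the same clock domain, their exposures and readouts stay locked to one another.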
Maybe you can upload some screenshots of the offsets you are talking about.
--- Quote Start ---
The inputs are the same at each CVI; I split the source signal in SOPC Builder.
Input => SOPC System => Self Programmed Mixer => Output
SOPC System:
CVI - Clipper - FrameBuffer - Scaler - CVO
CVI - Clipper - FrameBuffer - Scaler - CVO
CVI - Clipper - FrameBuffer - Scaler - CVO
I clipped different parts of the video and scaled them up.
If I always clip the same part of a video stream, I only get an offset of a few pixels, which I can compensate with self-written Verilog code.
But when I clip different parts, I get an offset of around ~50% of all lines.
--- Quote End ---