Re: FPGA passive serial configuration -- detect as PCIe device in Linux after configuration

Hello @Steve-Mowbray-ENL,

I had the exact same problem. On a Raspberry Pi CM4, the solution by PeartreeStudios outlined in the following link works: PCIe Hot Swap

The "rescan" idea does not work, as you discovered. I have two scripts. bind_pci contains:

    echo "Bind PCIe Driver"
    sudo bash -c 'echo fd500000.pcie > /sys/bus/platform/drivers/brcm-pcie/bind'

and unbind_pci contains:

    echo "Unbind PCIe Driver"
    sudo bash -c 'echo fd500000.pcie > /sys/bus/platform/drivers/brcm-pcie/unbind'

These are clearly system dependent. But each time I update the FPGA firmware:

1. Run unbind_pci if it was already bound.
2. Update the firmware via PS or JTAG.
3. Run bind_pci.

Re: PCIe pld_clk conflicting coreclkout statements

Hello @ventt,

Thanks for the info. So apparently both documents are in error. You can use an external clock, but not in the range specified. The 150 MHz clock should have worked according to the second one. I don't think I could meet timing at 250 MHz. They may want to update the documentation.

You mention:

    In this case, it is suggested that you drive the pld_clk with coreclkout_hip, as using a single clock simplifies timing.

It actually complicates timing for the Application Layer, so I assume you mean it simplifies it for the PCIe core? In any event, the safest solution is clearly to drive pld_clk with coreclkout_hip. It works. Might as well close this one.

Regards,
Dave

Re: How is PCIe coreclkout_hip generated?

Hello,

It's still not very clear how all this works, but apparently the information, such as the jitter and accuracy of coreclkout_hip, is not available for some reason. Please go ahead and close the ticket.

Just as a side note, Timing Analyzer does in fact report coreclkout_hip as a separate clock from the incoming PCIe refclk pin (as opposed to one generated from it).

Re: How is PCIe coreclkout_hip generated?

@Wincent_Altera Thanks for the info.
If they can really generate a stable, low jitter clock inside the FPGA, it would be really useful to make it available independent of the PCIe core. For now, I think the safest route is to use an external clock source to drive most of the FPGA logic and coreclkout_hip for the PCIe. Lots of clock domain crossings, but so be it.

Thanks,
Dave

Re: PCIe pld_clk conflicting coreclkout statements

Hello @ventt,

My main motivation was avoiding a few clock domain crossings, rather than a higher clock speed. Since the design works fine when I drive pld_clk with coreclkout_hip but fails otherwise, I have to assume it is a requirement. But the first guide mentioned went into great detail on the nature of the external clock, so it seemed like they knew what they were talking about. It will be interesting to hear the resolution.

Thanks,

Re: How is PCIe coreclkout_hip generated?

Hello,

Yes, you are correct, coreclkout_hip is sourced from the PCIe HIP. Are you saying the PCIe HIP has a clock generator independent of the incoming PCIe refclk? Are there any specs on its jitter and accuracy? I'm thinking of using it instead of an external clock.

I'm familiar with the PCIe jitter specs, but if coreclkout_hip is generated from refclk, I was wondering about the situation where PCIe Spread Spectrum Clocking is enabled (the norm on many systems), as described here: PCIe Spread Spectrum Clocking

In this case, refclk is modulated with a 35 kHz triangle wave, and the incoming PCIe refclk itself has very high jitter, apparently as high as 4 ns. This is on purpose, to reduce EMI. The PCIe IP itself is designed to handle it, but other parts of a system are not and depend on low jitter.

Thanks

How is PCIe coreclkout_hip generated?

I'm using a Cyclone V GT PCIe Avalon-ST core. It might be helpful to use coreclkout_hip to run our Application Layer, but it is not clear how that clock is generated and how much jitter it has. I had assumed this clock was generated from the 100 MHz refclk.
But in spread spectrum mode, the default (to reduce EMI) on the processor we are using, the refclk can have enormous jitter. I assume the hard IP deals with this on the PHY side, but if coreclkout_hip is generated from refclk, does it clean up the jitter? Looking at Figure 6-4 in the "Cyclone V Avalon ST User Guide", though, it looks like refclk and coreclkout_hip are not related. I realize this is just a block diagram, but if coreclkout_hip is not generated from refclk, how is it generated? Is there a clock source in the PCIe hard IP? And how much jitter does it have?

PCIe pld_clk conflicting coreclkout statements

I'm using a Cyclone V GT with the PCIe Avalon-ST core. The app works fine when I drive pld_clk with coreclkout_hip, but fails when I drive pld_clk with a 150 MHz external clock. I used 150 MHz instead of 125 MHz since it says you must use a 0 ppm accuracy clock at 125 MHz, and I don't know how to do that!

According to the "Cyclone V Streaming Interface for PCIe Solutions User Guide" (Table 4-5), I should be able to do this. But according to the "Cyclone V Hard IP for PCI Express User Guide", you must drive pld_clk with coreclkout. This seems contradictory, and the latter seems wrong: if you must drive it with coreclkout, why did they not just do that internally? So can I drive pld_clk from a PLL? This would save a lot of clock domain crossings inside my app.

Re: Looking for some details on FPGAs

In addition to the excellent free training @sstrell mentioned, you could get an inexpensive eval board, such as the Intel® MAX® 10 - 10M08 Evaluation Kit, that works with the free version of the Quartus tools. The best way to learn is by doing.

Re: Using multiple LVDS ADC in a single Arria 10 GX IO bank

@FvM, you wrote:

    - Yes, you don't necessarily need an RX PLL. You might need to set up your own deserializer because the SERDES IP probably doesn't support this operation mode, but I didn't try it on newer devices with the SERDES IP. The major drawback is the requirement of a dedicated clock input per ADC. Just wanted to mention the option.

I think I understand.
A shame to waste the SERDES, though. Also, the LTC2271 was just an example; we may go to higher speeds.

    - You don't say how you want to set the clock phase with your (single) external PLL solution. If the phase isn't measured and adjusted online (DPA/soft-CDR solution), it has to be preset to a fixed value. How is this different from my suggestion to use a common DCO and frame clock? You can at best compensate for known trace delay differences by setting an individual phase shift.

If you're referring to my question about whether I could use an fPLL, I meant that I would use an fPLL per ADC, not one for all of them. Some banks have multiple clock inputs, which would determine how many ADCs per bank. Of course, I would have to try it and see if the tools complain.

But a better way might be your last suggestion. Say I want four ADCs on a bank:

1. Create 4 ALT_LVDS cores in external PLL mode with DPA.
2. Feed the DCO from one ADC into the IOPLL on that bank.
3. Use DPA on all LVDS data lines.
4. Use a test pattern to do bit-slip per ADC.

It seems to me the DPA would be off by at most half of 45 degrees, or 22.5 degrees. In practice, the perfectly centered 90 degree offset option of the LTC2271 will have some variation anyway. This would actually save a lot of traces: no need for FRAME clocks at all (true of any solution), and you really only need the DCO from, say, a fourth of the ADCs. Of course, you never know what the tools will complain about. It's all a bit unclear still. Thanks for the help.
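As a footnote to the passive serial configuration post at the top, the unbind / reprogram / bind sequence described there can be sketched as a single script. This is only a sketch under that post's assumptions: the script name, function names, and main wrapper are mine, and fd500000.pcie / brcm-pcie are the Raspberry Pi CM4 specific values from the post; the FPGA programming command itself is board and tool specific, so it is left as a comment.

```shell
#!/bin/bash
# Sketch: combine the unbind / reprogram / bind steps from the
# passive serial post into one wrapper (hypothetical update_fpga.sh).
set -e

unbind_pci() {
    # $1 = driver sysfs directory, $2 = platform device name.
    # A bound device appears as an entry under the driver directory,
    # so this only unbinds when the device is actually bound
    # ("run unbind_pci if it was already bound").
    local drv="$1" dev="$2"
    if [ -e "$drv/$dev" ]; then
        echo "Unbind PCIe Driver"
        echo "$dev" > "$drv/unbind"
    fi
}

bind_pci() {
    local drv="$1" dev="$2"
    echo "Bind PCIe Driver"
    echo "$dev" > "$drv/bind"
}

# System-dependent values (Raspberry Pi CM4, from the post).
DRV=/sys/bus/platform/drivers/brcm-pcie
DEV=fd500000.pcie

main() {
    unbind_pci "$DRV" "$DEV"
    # Reprogram the FPGA here via PS or JTAG (tool and command are
    # board specific, so no command is shown).
    bind_pci "$DRV" "$DEV"
}

# To run for real (needs root), call: main
```

Writing to the driver's bind/unbind attributes requires root, so either run the whole script with sudo or keep the per-command `sudo bash -c` form from the original two scripts.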