User Profile
Contributions
Re: Linux driver for AVMM Arria10 PCIe is deprecated
Hi, It's an OS/compiler compatibility issue. The original driver was compiled on CentOS, and it is provided only as a reference for customers. We don't provide support for the PCIe driver. I suggest you look for a solution on the internet with a search engine. Thanks. BR/Harris
Re: SERDES to PCIe4 in Agilex
Hi Victor, Every PCIe bus works independently. In theory the total throughput can be divided into 4 parallel parts, but that depends on your design; a sketch of such a split follows below. The DMA controller only handles data movement, so it does not provide this division function itself. Thanks. BR/Harris
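As a rough illustration that the split across the four links has to come from your own design rather than from the DMA controller, the sketch below divides one transfer into four chunks, one per link. The dma_channel type and dma_submit() call are hypothetical placeholders, not an Intel API.

#include <stddef.h>
#include <stdint.h>

#define NUM_LINKS 4  /* four independent PCIe buses, as discussed above */

/* Hypothetical handle for one DMA engine attached to one PCIe link. */
typedef struct dma_channel dma_channel;

/* Hypothetical submit call: queue 'len' bytes at 'src' on one link. */
extern int dma_submit(dma_channel *ch, const void *src, size_t len);

/* Divide one large buffer into NUM_LINKS chunks, one per link.
 * The DMA engines only move the bytes they are given; how the data is
 * partitioned is decided entirely by this user logic/software. */
static int send_across_links(dma_channel *ch[NUM_LINKS],
                             const uint8_t *buf, size_t total)
{
    size_t chunk = total / NUM_LINKS;

    for (int i = 0; i < NUM_LINKS; i++) {
        size_t off = (size_t)i * chunk;
        size_t len = (i == NUM_LINKS - 1) ? total - off : chunk;
        int rc = dma_submit(ch[i], buf + off, len);
        if (rc != 0)
            return rc;   /* link i rejected the request */
    }
    return 0;
}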
Re: SERDES to PCIe4 in Agilex
Hi Victor, From your description, the FPGA should work in PCIe root port mode, and the SSD works in endpoint (EP) mode. A PCIe system also needs software for enumeration, configuration space register setup, etc. For your case, it is not possible to send data from the SerDes directly to PCIe. You need a RAM/FIFO to buffer the data from the SerDes, then design a DMA controller to move the data from the RAM/FIFO to the PCIe endpoint (SSD). Thanks. BR/Harris
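A minimal sketch of that data path (SerDes RX -> RAM/FIFO -> DMA -> PCIe endpoint), with hypothetical fifo_* and dma_* helpers standing in for whatever buffer and DMA controller you design; none of these names come from an Intel IP core.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical building blocks of the path described above. */
extern size_t fifo_fill_level(void);               /* bytes currently buffered        */
extern size_t fifo_read(uint8_t *dst, size_t max); /* drain FIFO into a bounce buffer */
extern int    dma_write_to_endpoint(uint64_t pcie_addr,
                                    const uint8_t *src, size_t len);

#define BURST_BYTES 4096   /* move data in bursts; pick a size that suits your design */

/* Move whatever the SerDes has produced so far to the endpoint.
 * The destination PCIe address must come from software after enumeration
 * has configured the endpoint (SSD) side. */
void drain_serdes_to_ssd(uint64_t pcie_dest_addr)
{
    static uint8_t bounce[BURST_BYTES];

    while (fifo_fill_level() >= BURST_BYTES) {
        size_t got = fifo_read(bounce, BURST_BYTES);
        dma_write_to_endpoint(pcie_dest_addr, bounce, got);
        pcie_dest_addr += got;
    }
}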
Re: PCIe dma transfer optimization
Hi ZVere,
1. I guess the multi-channel DMA was designed by yourself. I know nothing about your design, so I don't have any specific suggestions for it.
2. I suggest you try a polling scheme instead of an interrupt scheme (a polling sketch follows below). I suspect that so many interrupts might not be handled by the system in time, which might be the reason for the low performance.
3. You can also use Signal Tap to capture some signals and check the timing relationships, to help analyze the issue.
Thanks. Harris
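A minimal sketch of what polling for DMA completion could look like, assuming a memory-mapped status register; the register address and done-bit position below are placeholders for whatever your own DMA controller exposes, not values from a specific Intel DMA IP.

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical memory-mapped DMA status register; the address and the
 * "done" bit position depend entirely on your own DMA controller design. */
#define DMA_STATUS_REG   ((volatile uint32_t *)0xF9000010u)
#define DMA_DONE_BIT     (1u << 0)

/* Poll for completion instead of waiting for an interrupt.
 * With many small transfers, per-transfer interrupts can swamp the host;
 * a bounded busy-wait avoids the interrupt latency and handler overhead. */
static bool dma_wait_done_polling(uint32_t max_spins)
{
    for (uint32_t i = 0; i < max_spins; i++) {
        if (*DMA_STATUS_REG & DMA_DONE_BIT)
            return true;          /* descriptor finished */
        /* optionally insert a short pause/relax hint here */
    }
    return false;                 /* timed out; report an error or fall back */
}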
Re: PCIe gen 4-6 lanes Agilex: maximum number of clks ready signal disabled
Hi Avi, Yes. In your situation it is possible to use the TX flow control interface to plan the transmission in advance, instead of relying entirely on 'ready'. For the TX flow control interface, please refer to the user guide. Thanks. BR/Harris
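As an illustration of planning ahead with the credit information, the sketch below checks whether a posted write TLP fits within the currently advertised credits before committing to send it. The read_tx_credit_* helpers are hypothetical stand-ins; the actual signal names on the TX flow control interface are in the IP user guide. Per the PCIe specification, each TLP consumes one header credit plus one data credit per 16 bytes (4 DW) of payload.

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical accessors for the posted credits currently available,
 * as reported on the hard IP's TX flow control interface. */
extern uint32_t read_tx_credit_posted_header(void);  /* free posted header credits */
extern uint32_t read_tx_credit_posted_data(void);    /* free posted data credits   */

/* One PCIe data credit covers 16 bytes (4 DW) of payload;
 * each TLP also needs one header credit. */
static bool can_send_posted_write(uint32_t payload_bytes)
{
    uint32_t data_credits_needed = (payload_bytes + 15u) / 16u;

    return read_tx_credit_posted_header() >= 1u &&
           read_tx_credit_posted_data()   >= data_credits_needed;
}

/* Knowing this in advance lets user logic schedule or segment transmissions
 * instead of stalling mid-packet when 'ready' deasserts. */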
Re: PCIe gen 4 - 16 lanes in Agilex: maximum rate
Hi Avi,
1. Because there are a lot of factors impacting throughput, including 128b/130b encoding in the physical layer, acknowledge and flow control update DLLPs in the data link layer, packet overhead, payload size, the DMA design, etc., the throughput cannot reach the theoretical data rate (32 GB/s); a rough worked example follows below.
2. I suggest you try MPS 512B. From factory test results, the write throughput (without reads) can reach more than 29 GB/s with MPS 512B.
Thanks Harris
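A back-of-the-envelope estimate of where the 32 GB/s goes, assuming roughly 24 bytes of framing, header, and LCRC overhead per TLP and ignoring ACK/FC DLLP traffic and DMA/descriptor overhead; the per-TLP overhead figure is an approximation, not a value from the factory test.

#include <stdio.h>

/* Rough throughput estimate for PCIe Gen4 x16 with a 512-byte MPS. */
int main(void)
{
    const double lanes        = 16.0;
    const double gts_per_lane = 16.0;                        /* Gen4: 16 GT/s per lane */
    const double raw_gbytes   = lanes * gts_per_lane / 8.0;  /* 32 GB/s raw            */

    const double line_eff     = 128.0 / 130.0;  /* 128b/130b encoding                  */
    const double payload      = 512.0;          /* MPS = 512 bytes                     */
    const double tlp_overhead = 24.0;           /* approx. framing + header + LCRC     */
    const double tlp_eff      = payload / (payload + tlp_overhead);

    double estimate = raw_gbytes * line_eff * tlp_eff;
    printf("raw: %.1f GB/s, after encoding: %.1f GB/s, after TLP overhead: %.1f GB/s\n",
           raw_gbytes, raw_gbytes * line_eff, estimate);
    /* prints roughly 32.0 / 31.5 / 30.1 GB/s; DLLP and design overhead brings the
     * measured write throughput down toward the ~29 GB/s factory figure. */
    return 0;
}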