MCTP over PCIe VDM routing to PMCI in the OFS N6000 FIM: configuration and datapath clarification
Hi Team,
I am working on implementing PLDM over MCTP over PCIe VDM on the Intel N6000 platform OFS PCIe Attach FIM.
I have referred to the following documents:
1. OFS Linux MCTP driver documentation:
https://github.com/OFS/linux-dfl/blob/2014c95afecee3e76ca4a56956a936e23283f05b/Documentation/networking/mctp.rst
2. OFS Agilex PCIe Attach FIM architecture guide:
https://ofs.github.io/ofs-2025.1-1/hw/n6001/dev_guides/fim_dev/ug_dev_fim_ofs_n6001/#1211-top-level
From my understanding, the PMCI module inside the FIM contains the MCTP-over-VDM controller, which transmits MCTP payloads carrying PLDM commands to the MAX10 BMC; the BMC communicates with PMCI over SPI. The host can access the PMCI CSRs through PCIe MMIO, and this is working in our setup via the intel-m10-bmc driver.
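To make the question concrete, this is the packet layering I am assuming, based on my reading of DSP0238 (MCTP PCIe VDM binding) and DSP0239 (message type codes). The EIDs, target BDF, and PLDM bytes below are placeholders, and the exact bit placement should be verified against the specs and the FIM's expectations:

```python
"""Sketch of a PLDM-over-MCTP-over-PCIe-VDM packet (assumed layout)."""
import struct

DMTF_VENDOR_ID = 0x1AB4      # PCI vendor ID assigned to DMTF (DSP0238)
MSG_CODE_VDM_TYPE1 = 0x7F    # PCIe Vendor_Defined Message, Type 1 (no completion)
MCTP_MSG_TYPE_PLDM = 0x01    # DSP0239 message type code for PLDM

def mctp_vdm_packet(dest_eid, src_eid, target_bdf, mctp_msg,
                    requester_bdf=0x0000, tag=0, som=1, eom=1):
    """Build a route-by-ID MCTP VDM: 4-DW TLP header + DW-padded payload."""
    pad = (-len(mctp_msg)) % 4                              # pad payload to a DW boundary
    ndw = (len(mctp_msg) + pad) // 4
    hdr  = struct.pack(">BBH", 0x72, 0x00, ndw)             # DW0: Fmt=011 (4DW+data), Type=1_0010 (Msg, route by ID)
    hdr += struct.pack(">HBB", requester_bdf, pad & 0x3,    # DW1: requester ID, pad length, message code
                       MSG_CODE_VDM_TYPE1)
    hdr += struct.pack(">HH", target_bdf, DMTF_VENDOR_ID)   # DW2: target BDF, DMTF vendor ID
    flags = (som << 7) | (eom << 6) | (1 << 3) | (tag & 0x7)  # SOM/EOM, TO=1 (tag owner), tag
    hdr += bytes([0x01, dest_eid, src_eid, flags])          # DW3: MCTP transport header (version 0x01)
    return hdr + bytes(mctp_msg) + b"\x00" * pad

if __name__ == "__main__":
    # First payload byte is the MCTP message type; remaining bytes are an
    # illustrative PLDM request (placeholder values, not a validated command).
    msg = bytes([MCTP_MSG_TYPE_PLDM, 0x80, 0x00, 0x02])
    pkt = mctp_vdm_packet(dest_eid=9, src_eid=8, target_bdf=0x0100, mctp_msg=msg)
    print(pkt.hex())
```

If the FIM's VDM filtering keys on any of these fields (vendor ID, message code, routing type, target BDF), that would explain packets being dropped before PMCI.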
However, the datapath for MCTP over PCIe VDM from the host is unclear.
I have the following questions:
1. How are PCIe VDM packets generated from the host routed to the PMCI MCTP controller inside the FIM?
2. Is there any specific configuration or enablement required in the PCIe subsystem or FIM fabric to allow VDM packets to reach PMCI?
3. What is the role of the MCTP Management Interface / MCTP VDM controller IP in the OFS FIM design?
4. Is this a custom IP responsible for filtering or routing VDM packets, and how is it exposed to the host?
5. Are there any registers, BAR mappings, or configuration steps required to enable this datapath?
6. Does the Linux MCTP stack directly interact with PMCI, or is routing handled entirely in hardware?
Currently, we are able to generate PCIe VDM packets from the host, but they are not observed on the PMCI or BMC side. This suggests that either the ingress path is not enabled or the packets are being filtered before they reach PMCI.
Any clarification on the expected datapath and the configuration required to enable the host -> PCIe VDM -> PMCI -> BMC path would be very helpful.
Thanks and regards,
Nafiah Siddiqha