I am using a custom board with a ZynqMP configured as Root Port, connected to a Jetson AGX Orin running in Endpoint mode.
I have tested this connection and can transfer data, but I need to use DMA to transfer data from the ZynqMP PS memory to the Jetson.
I found this resource on the wiki:
https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18842008/Zynq+UltraScale+PS-PCIe+Linux+Configuration
When I run simple-test, I get this error:

```
root@zynqup:~# simple-test -c 0 -a 0x100000 -l 1024 -d s2c -b 0
failed to open data file
: No such file or directory
```
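The error suggests that for the s2c direction the test application reads its payload from an input file that does not exist yet. One way to see exactly which file it tries to open, and then to create one of the right size, is sketched below (assuming strace is available on the target; `./data_file` is only a placeholder, the real name comes out of the trace):

```
# Trace file opens to find out which data file simple-test expects:
strace -f -e trace=open,openat simple-test -c 0 -a 0x100000 -l 1024 -d s2c -b 0 2>&1 | grep -i open

# Then create an input file of at least the requested length at that path (placeholder name):
dd if=/dev/urandom of=./data_file bs=1024 count=1
```

That said, the "Unable to acquire dma channels" message below points to the DMA channels themselves not being available, which would make the test fail regardless of the data file.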
I see the following errors in the log:
```
root@zynqup:~# dmesg | grep dma
[ 2.873437] ps_pcie_dma init()
[ 2.876194] xlnx-platform-dma-driver fd0f0000.rootdma: Hardware reports channel not present
[ 2.884358] xlnx-platform-dma-driver fd0f0000.rootdma: Unable to read channel properties
[ 2.892408] xlnx-platform-dma-driver: probe of fd0f0000.rootdma failed with error -524
[ 3.372596] Unable to acquire dma channels 0
```
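Error -524 is -ENOTSUPP, returned here after the driver fails to read the channel properties it expects from the hardware block. Before changing the device tree further, a minimal sanity check is to confirm that the DMA register space at 0xfd0f0000 responds at all (assuming busybox devmem or an equivalent /dev/mem tool is on the target; the channel-present bits themselves are driver internals, so this only shows whether the block is readable):

```
# Read the first 32-bit word of the PS-PCIe DMA register block (busybox devmem syntax):
devmem 0xFD0F0000 32
# An all-ones value or a bus fault would suggest the block itself is inaccessible,
# whereas a plausible register value shifts suspicion back to the node/driver configuration.
```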
I found ps-pcie-dma.txt, which says a device tree node should be added for the DMA, but it does not specify which file to add it to.
I added the node to system-user.dtsi:
```
&amba {
    pci_rootdma: rootdma@fd0f0000 {
        compatible = "xlnx,ps_pcie_dma-1.00.a";
        reg = <0x0 0xfd0f0000 0x0 0x1000>;
        reg-names = "ps_pcie_regbase";
        interrupts = <0 117 4>;
        interrupt-names = "ps_pcie_rootdma_intr";
        interrupt-parent = <&gic>;
        rootdma;
        dma_vendorid = /bits/ 16 <0x10EE>;
        dma_deviceid = /bits/ 16 <0xD011>;
        numchannels = <0x4>;
        #size-cells = <0x5>;
        ps_pcie_channel0 = <0x1 0x7CF 0x4 0x0 0x3E8>;
        ps_pcie_channel1 = <0x0 0x7CF 0x4 0x0 0x3E8>;
        ps_pcie_channel2 = <0x1 0x7CF 0x4 0x0 0x3E8>;
        ps_pcie_channel3 = <0x0 0x7CF 0x4 0x0 0x3E8>;
    };
};
```
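It is worth double-checking that this node made it into the booted device tree with the property encodings intact (the /bits/ 16 vendor/device IDs in particular). A quick way to inspect the merged tree on the running system, assuming dtc is installed there (otherwise decompile system.dtb on the host):

```
# Decompile the live device tree and show the rootdma node exactly as the kernel sees it:
dtc -I fs -O dts /proc/device-tree 2>/dev/null | grep -A 20 'rootdma@fd0f0000'
```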
I'm not sure whether the node should go under &amba.
I also tried another variant:
```
/include/ "system-conf.dtsi"
/ {
    pci_rootdma: rootdma@fd0f0000 {
        ...
    };
};
```
but I get the same errors.
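Since both placements were tried, one quick check is to see where the node actually ended up in the booted tree; the ls output at the end suggests it landed under /amba:

```
# /proc/device-tree is a symlink to /sys/firmware/devicetree/base:
ls -d /proc/device-tree/rootdma@fd0f0000 /proc/device-tree/amba/rootdma@fd0f0000 2>/dev/null
```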
I have found a lot of posts on this topic with no replies from Xilinx, many of the links are broken, and I still cannot get the DMA working.
What could be the reasons, and how can I narrow down the error?
Here are logs with more information:
```
root@zynqup:~# dmesg | grep pci
[ 2.873437] ps_pcie_dma init()
[ 3.425690] nwl-pcie fd0e0000.pcie: Link is UP
[ 3.430159] nwl-pcie fd0e0000.pcie: host bridge /amba/pcie@fd0e0000 ranges:
[ 3.437127] nwl-pcie fd0e0000.pcie: MEM 0xe0000000..0xefffffff -> 0xe0000000
[ 3.444348] nwl-pcie fd0e0000.pcie: MEM 0x600000000..0x7ffffffff -> 0x600000000
[ 3.451933] nwl-pcie fd0e0000.pcie: PCI host bridge to bus 0000:00
[ 3.458117] pci_bus 0000:00: root bus resource [bus 00-ff]
[ 3.463597] pci_bus 0000:00: root bus resource [mem 0xe0000000-0xefffffff]
[ 3.470461] pci_bus 0000:00: root bus resource [mem 0x600000000-0x7ffffffff pref]
[ 3.477953] pci 0000:00:00.0: [10ee:d011] type 01 class 0x060400
[ 3.478009] pci 0000:00:00.0: PME# supported from D0 D1 D2 D3hot
[ 3.479449] pci 0000:01:00.0: [10de:229a] type 00 class 0x050000
[ 3.479520] pci 0000:01:00.0: reg 0x10: [mem 0x00000000-0x0fffffff 64bit pref]
[ 3.479540] pci 0000:01:00.0: reg 0x18: [mem 0x00000000-0x0001ffff 64bit pref]
[ 3.479560] pci 0000:01:00.0: reg 0x20: [mem 0x00000000-0x00000fff 64bit]
[ 3.479583] pci 0000:01:00.0: Max Payload Size set to 128 (was 256, max 256)
[ 3.486752] pci 0000:01:00.0: PME# supported from D0 D3hot
[ 3.486781] pci 0000:01:00.0: 4.000 Gb/s available PCIe bandwidth, limited by 5 GT/s x1 link at 0000:00:00.0 (capable of 126.024 Gb/s with 16 GT/s x8 link)
[ 3.501986] pci 0000:00:00.0: BAR 9: assigned [mem 0x600000000-0x617ffffff 64bit pref]
[ 3.509897] pci 0000:00:00.0: BAR 8: assigned [mem 0xe0000000-0xe00fffff]
[ 3.516676] pci 0000:01:00.0: BAR 0: assigned [mem 0x600000000-0x60fffffff 64bit pref]
[ 3.524595] pci 0000:01:00.0: BAR 2: assigned [mem 0x610000000-0x61001ffff 64bit pref]
[ 3.532520] pci 0000:01:00.0: BAR 4: assigned [mem 0xe0000000-0xe0000fff 64bit]
[ 3.539836] pci 0000:00:00.0: PCI bridge to [bus 01-0c]
[ 3.545059] pci 0000:00:00.0: bridge window [mem 0xe0000000-0xe00fffff]
[ 3.551837] pci 0000:00:00.0: bridge window [mem 0x600000000-0x617ffffff 64bit pref]
```
```
root@zynqup:~# lspci -v
00:00.0 PCI bridge: Xilinx Corporation Device d011 (prog-if 00 [Normal decode])
        Flags: fast devsel, IRQ 255
        Bus: primary=00, secondary=01, subordinate=0c, sec-latency=0
        I/O behind bridge: 00000000-00000fff [size=4K]
        Memory behind bridge: e0000000-e00fffff [size=1M]
        Prefetchable memory behind bridge: 0000000600000000-0000000617ffffff [size=384M]
        Capabilities: [40] Power Management version 3
        Capabilities: [60] Express Root Port (Slot-), MSI 00
        Capabilities: [100] Device Serial Number 00-00-00-00-00-00-00-00
        Capabilities: [10c] Virtual Channel
        Capabilities: [128] Vendor Specific Information: ID=1234 Rev=1 Len=018 <?>

01:00.0 RAM memory: NVIDIA Corporation Device 229a
        Flags: fast devsel, IRQ 255
        [virtual] Memory at 600000000 (64-bit, prefetchable) [size=256M]
        Memory at 610000000 (64-bit, prefetchable) [size=128K]
        Memory at e0000000 (64-bit, non-prefetchable) [size=4K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/16 Maskable+ 64bit+
        Capabilities: [70] Express Endpoint, MSI 00
        Capabilities: [b0] MSI-X: Enable- Count=8 Masked-
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [148] Secondary PCI Express <?>
        Capabilities: [168] Physical Layer 16.0 GT/s <?>
        Capabilities: [190] Lane Margining at the Receiver <?>
        Capabilities: [1b8] Latency Tolerance Reporting
        Capabilities: [1c0] L1 PM Substates
        Capabilities: [1d0] Vendor Specific Information: ID=0002 Rev=4 Len=100 <?>
        Capabilities: [2d0] Vendor Specific Information: ID=0001 Rev=1 Len=038 <?>
        Capabilities: [308] Data Link Feature <?>
        Capabilities: [314] Precision Time Measurement
        Capabilities: [320] Vendor Specific Information: ID=0003 Rev=1 Len=054 <?>
        Capabilities: [388] Vendor Specific Information: ID=0006 Rev=0 Len=018 <?>
```
```
root@zynqup:~# ls /sys/firmware/devicetree/base/amba/rootdma@fd0f0000
#size-cells dma_deviceid interrupt-names interrupts numchannels ps_pcie_channel1 ps_pcie_channel3 reg-names
compatible dma_vendorid interrupt-parent name ps_pcie_channel0 ps_pcie_channel2 reg rootdma
```
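Since the driver loads but its probe fails, it may also help to look at the platform device from the sysfs side, i.e. whether fd0f0000.rootdma exists, which compatible string it carries, and whether any driver ended up bound to it (standard platform-device sysfs layout, nothing driver-specific assumed):

```
# Inspect the platform device created from the rootdma node:
ls -l /sys/bus/platform/devices/fd0f0000.rootdma/
cat /sys/bus/platform/devices/fd0f0000.rootdma/of_node/compatible
# 'driver' is a symlink only when a driver is bound; its absence matches the failed probe:
ls -l /sys/bus/platform/devices/fd0f0000.rootdma/driver 2>/dev/null || echo "no driver bound"
```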