Assuming a 1 TiB NVMe SSD, I am wondering whether it's possible to map its entire capacity (1 TiB) into a PCIe BAR for memory-mapped I/O (MMIO).
My understanding is that typically only the device registers and doorbell registers of an NVMe SSD are mapped into PCIe BAR space, allowing MMIO access. Once a doorbell is rung, data transfers occur via DMA between system memory and the NVMe SSD. This makes me wonder whether it is possible to open up that limited window of device memory/registers for large-range MMIO. Also, for this post, the NVMe SSD's CMB (Controller Memory Buffer) is excluded.
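For context on what that small BAR window actually contains, here is a minimal sketch (C on Linux) of reading the controller's BAR0 registers via MMIO by mmap()ing the device's sysfs resource0 file. The device address 0000:3b:00.0 and the 16 KiB size are assumed from the lspci output further down; the register offsets (CAP at 0x00, VS at 0x08, doorbells starting at 0x1000) come from the NVMe base specification. Illustrative only, since it peeks at a device the nvme driver owns, and it needs root:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    /* BDF 0000:3b:00.0 is taken from the lspci output below; adjust
       to match your system. */
    int fd = open("/sys/bus/pci/devices/0000:3b:00.0/resource0", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* Map the whole 16 KiB BAR0 region, read-only so we don't disturb
       the nvme driver that is bound to the device. */
    volatile uint8_t *regs =
        mmap(NULL, 16 * 1024, PROT_READ, MAP_SHARED, fd, 0);
    if (regs == MAP_FAILED) { perror("mmap"); return 1; }

    /* Offsets per the NVMe base spec: CAP (controller capabilities,
       64-bit) at 0x00, VS (version, 32-bit) at 0x08; the submission/
       completion queue doorbells begin at 0x1000. */
    uint64_t cap = *(volatile uint64_t *)(regs + 0x00);
    uint32_t vs  = *(volatile uint32_t *)(regs + 0x08);
    printf("CAP = 0x%016" PRIx64 ", NVMe version %u.%u\n",
           cap, vs >> 16, (vs >> 8) & 0xff);

    munmap((void *)regs, 16 * 1024);
    close(fd);
    return 0;
}

Note that everything reachable this way is control-plane state: registers and doorbells, not the NAND capacity itself, which is exactly what prompts my question.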
Given the disparity between the small size of the NVMe SSD's PCIe BAR space and its overall storage capacity, I'm unsure whether the entire SSD can be exposed through a PCIe BAR into the physical address space.
If anyone could provide guidance or clarification on my understanding of PCIe, BAR, and NVMe, I would greatly appreciate it. Thank you for your time and assistance!
Here is an example of a 1 TiB Samsung 980 Pro SSD with only 16 KiB in its PCIe BAR:
# lspci -s 3b:00.0 -v
3b:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO (prog-if 02 [NVM Express])
Subsystem: Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO
Flags: bus master, fast devsel, latency 0, IRQ 116, NUMA node 0, IOMMU group 11
Memory at b8600000 (64-bit, non-prefetchable) [size=16K]
Capabilities: [40] Power Management version 3
Capabilities: [50] MSI: Enable- Count=1/32 Maskable- 64bit+
Capabilities: [70] Express Endpoint, MSI 00
Capabilities: [b0] MSI-X: Enable+ Count=130 Masked-
Capabilities: [100] Advanced Error Reporting
Capabilities: [168] Alternative Routing-ID Interpretation (ARI)
Capabilities: [178] Secondary PCI Express
Capabilities: [198] Physical Layer 16.0 GT/s <?>
Capabilities: [1bc] Lane Margining at the Receiver <?>
Capabilities: [214] Latency Tolerance Reporting
Capabilities: [21c] L1 PM Substates
Capabilities: [3a0] Data Link Feature <?>
Kernel driver in use: nvme
Kernel modules: nvme
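As a sanity check on the [size=16K] line above, the BAR size can also be confirmed programmatically: on Linux, the st_size of the sysfs resourceN file equals the length of that BAR region. A small sketch, again assuming the 0000:3b:00.0 address from the lspci output:

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct stat st;
    /* The sysfs resource0 file's size equals the BAR0 region length. */
    if (stat("/sys/bus/pci/devices/0000:3b:00.0/resource0", &st) != 0) {
        perror("stat");
        return 1;
    }
    printf("BAR0 size: %lld bytes\n", (long long)st.st_size);
    return 0;
}

For this drive it prints 16384 bytes, i.e. 16 KiB, many orders of magnitude short of the 1 TiB capacity in question.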