A Few Questions about riscv_iommu.c #323
Description
While playing around with Salus, I noticed that a virtual IOMMU was enabled, since there are prints like these when booting a Debian image:
Found RISC-V IOMMU version 0x2
and
[ 0.518950] iommu: Default domain type: Translated
[ 0.519827] iommu: DMA domain TLB invalidation policy: strict mode
I then found a file called riscv_iommu.c (https://github.com/rivosinc/qemu/blob/salus-integration-10312022/hw/riscv/riscv_iommu.c), and after reading the code I believe this file is the implementation of the virtual IOMMU. I noticed that the vIOMMU is modeled as a PCI device, as its TypeInfo implies:
static const TypeInfo riscv_iommu_pci = {
    .name = TYPE_RISCV_IOMMU_PCI,
    .parent = TYPE_PCI_DEVICE,
    .instance_size = sizeof(RISCVIOMMUStatePci),
    .class_init = riscv_iommu_pci_init,
    .interfaces = (InterfaceInfo[]) {
        { INTERFACE_PCIE_DEVICE },
        { },
    },
};
For comparison, the TypeInfo in intel_iommu.c (https://github.com/rivosinc/qemu/blob/salus-integration-10312022/hw/i386/intel_iommu.c) reads:
static const TypeInfo vtd_info = {
    .name = TYPE_INTEL_IOMMU_DEVICE,
    .parent = TYPE_X86_IOMMU_DEVICE,
    .instance_size = sizeof(IntelIOMMUState),
    .class_init = vtd_class_init,
};
Here comes the first question: why implement the IOMMU as a PCI device? As far as my humble knowledge goes, a typical IOMMU is usually integrated with the CPU. Maybe this was done for development convenience?
Setting the first question aside, I then noticed the following in the riscv_iommu_pci_init function:
    k->vendor_id = PCI_VENDOR_ID_RIVOS;
    k->device_id = PCI_DEVICE_ID_RIVOS_IOMMU;
    ...
    k->class_id = 0x0806;
So I decided to observe the behavior of the vIOMMU. In the QEMU monitor, info pci says:
(qemu) info pci
info pci
  Bus 0, device 0, function 0:
    Host bridge: PCI device 1b36:0008
      PCI subsystem 1af4:1100
      id ""
  Bus 0, device 1, function 0:
    Class 0264: PCI device 1b36:0010
      PCI subsystem 1af4:1100
      IRQ 0, pin A
      BAR0: 64 bit memory at 0x100000000 [0x100003fff].
      id ""
  Bus 0, device 2, function 0:
    Class 2054: PCI device 1efd:edf1
      PCI subsystem 1af4:1100
      BAR0: 64 bit memory at 0x17ffff000 [0x17fffffff].
      id ""
  Bus 0, device 3, function 0:
    Ethernet controller: PCI device 1af4:1041
      PCI subsystem 1af4:1100
      IRQ 0, pin A
      BAR1: 32 bit memory at 0x40000000 [0x40000fff].
      BAR4: 64 bit prefetchable memory at 0x100004000 [0x100007fff].
      BAR6: 32 bit memory at 0xffffffffffffffff [0x0003fffe].
      id ""
Since PCI_VENDOR_ID_RIVOS, PCI_DEVICE_ID_RIVOS_IOMMU, and 0x0806 equal 0x1efd, 0xedf1, and 2054 (decimal), respectively, I believe that device 2 is the riscv_iommu_pci device. But in Debian, trying to find the vIOMMU with lspci -v, it only says:
root@debian:~# lspci -v
00:00.0 Host bridge: Red Hat, Inc. QEMU PCIe Host bridge
        Subsystem: Red Hat, Inc. QEMU PCIe Host bridge
        Flags: fast devsel
lspci: Unable to load libkmod resources: error -2
00:01.0 Non-Volatile memory controller: Red Hat, Inc. QEMU NVM Express Controller (rev 02) (prog-if 02 [NVM Express])
        Subsystem: Red Hat, Inc. QEMU NVM Express Controller
        Flags: bus master, fast devsel, latency 0
        Memory at 100000000 (64-bit, non-prefetchable) [size=16K]
        Capabilities: [40] MSI-X: Enable+ Count=65 Masked-
        Capabilities: [80] Express Root Complex Integrated Endpoint, MSI 00
        Capabilities: [60] Power Management version 3
        Kernel driver in use: nvme
00:03.0 Ethernet controller: Red Hat, Inc. Virtio network device (rev 01)
        Subsystem: Red Hat, Inc. Virtio network device
        Flags: bus master, fast devsel, latency 0
        Memory at 40000000 (32-bit, non-prefetchable) [size=4K]
        Memory at 100004000 (64-bit, prefetchable) [size=16K]
        Capabilities: [98] MSI-X: Enable+ Count=4 Masked-
        Capabilities: [84] Vendor Specific Information: VirtIO: <unknown>
        Capabilities: [70] Vendor Specific Information: VirtIO: Notify
        Capabilities: [60] Vendor Specific Information: VirtIO: DeviceCfg
        Capabilities: [50] Vendor Specific Information: VirtIO: ISR
        Capabilities: [40] Vendor Specific Information: VirtIO: CommonCfg
        Kernel driver in use: virtio-pci
That leads to the other question: why isn't the riscv_iommu_pci device recognized correctly by pciutils, even though the QEMU monitor output looks normal? Is it possibly a driver issue, a mistaken setting, or something else?
I am currently researching CoVE (a.k.a. AP-TEE), so it would be very helpful to me if you could answer my questions. 🥲