
Conversation

@tdavenvidia

This PR adds DirectNIC QEMU support to nvidia_stable-10.1.

shankerd04 and others added 7 commits December 16, 2025 16:27
Nvidia’s next-generation GB200 platform has a Blackwell GPU and a CX8 NIC directly
connected through a PCIe Gen6 x16 link. Direct P2P PCIe traffic between the GPU
and the NIC is possible, but it fundamentally requires ATS, and the Grace CPU
does not support PCIe ATS. The GPA=HPA solution removes the need for GPA-to-HPA
address translation by programming the PCIe BARs in the VM with their HPAs. It also
enables the ACPI PCI DSM by setting ‘preserve_config’ to true so that the VM does
not reconfigure the PCI BARs during boot.

Here is an example PCIe topology showing the GPU and CX8 behind the PCIe switch:

$ lspci -vt
-[0000:00]---00.0-[01-07]----00.0-[02-07]--+-00.0-[03]--+-00.0  Mellanox Technologies CX8 Family [ConnectX-8]
                                           |            \-00.1  Mellanox Technologies CX8 Family [ConnectX-8]
                                           \-03.0-[04-07]----00.0-[05-07]--+-08.0-[06]--
                                                                           \-0c.0-[07]--
-[0002:00]---00.0-[01-07]----00.0-[02-07]--+-00.0-[03]--+-00.0  Mellanox Technologies CX8 Family [ConnectX-8]
                                           |            \-00.1  Mellanox Technologies CX8 Family [ConnectX-8]
                                           \-01.0-[04-07]----00.0-[05-07]--+-08.0-[06]--
                                                                           \-0c.0-[07]--
-[0005:00]---00.0-[01-0a]----00.0-[02-0a]--+-01.0-[03]--
                                           +-02.0-[04]--
                                           +-03.0-[05]--
                                           +-04.0-[06-07]----00.0-[07]----00.0  ASPEED Technology, Inc. ASPEED Graphics Family
                                           +-05.0-[08]----00.0  Renesas Technology Corp. uPD720201 USB 3.0 Host Controller
                                           +-06.0-[09]----00.0  Intel Corporation I210 Gigabit Network Connection
                                           \-07.0-[0a]--
-[0006:00]---00.0-[01-09]----00.0-[02-09]--+-00.0-[03]--+-00.0  Mellanox Technologies MT43244 BlueField-3 integrated ConnectX-7 network controller
                                           |            +-00.1  Mellanox Technologies MT43244 BlueField-3 integrated ConnectX-7 network controller
                                           |            \-00.2  Mellanox Technologies MT43244 BlueField-3 SoC Management Interface
                                           \-02.0-[04-09]----00.0-[05-09]--+-00.0-[06]----00.0  Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO
                                                                           +-04.0-[07]----00.0  Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO
                                                                           +-08.0-[08]----00.0  Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO
                                                                           \-0c.0-[09]----00.0  Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO
-[0008:00]---00.0-[01-06]----00.0-[02-06]--+-00.0-[03]----00.0  Mellanox Technologies Device 2100
                                           \-03.0-[04-06]----00.0-[05-06]----00.0-[06]----00.0  NVIDIA Corporation Device 2941
-[0009:00]---00.0-[01-06]----00.0-[02-06]--+-00.0-[03]----00.0  Mellanox Technologies Device 2100
                                           \-01.0-[04-06]----00.0-[05-06]----00.0-[06]----00.0  NVIDIA Corporation Device 2941
-[0010:00]---00.0-[01-07]----00.0-[02-07]--+-00.0-[03]--+-00.0  Mellanox Technologies CX8 Family [ConnectX-8]
                                           |            \-00.1  Mellanox Technologies CX8 Family [ConnectX-8]
                                           \-03.0-[04-07]----00.0-[05-07]--+-08.0-[06]--
                                                                           \-0c.0-[07]--
-[0012:00]---00.0-[01-07]----00.0-[02-07]--+-00.0-[03]--+-00.0  Mellanox Technologies CX8 Family [ConnectX-8]
                                           |            \-00.1  Mellanox Technologies CX8 Family [ConnectX-8]
                                           \-01.0-[04-07]----00.0-[05-07]--+-08.0-[06]--
                                                                           \-0c.0-[07]--
-[0015:00]---00.0-[01]----00.0  Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO
-[0016:00]---00.0-[01-09]----00.0-[02-09]--+-00.0-[03]--+-00.0  Mellanox Technologies MT43244 BlueField-3 integrated ConnectX-7 network controller
                                           |            +-00.1  Mellanox Technologies MT43244 BlueField-3 integrated ConnectX-7 network controller
                                           |            \-00.2  Mellanox Technologies MT43244 BlueField-3 SoC Management Interface
                                           \-02.0-[04-09]----00.0-[05-09]--+-00.0-[06]----00.0  Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO
                                                                           +-04.0-[07]----00.0  Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO
                                                                           +-08.0-[08]----00.0  Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO
                                                                           \-0c.0-[09]----00.0  Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO
-[0018:00]---00.0-[01-06]----00.0-[02-06]--+-00.0-[03]----00.0  Mellanox Technologies Device 2100
                                           \-03.0-[04-06]----00.0-[05-06]----00.0-[06]----00.0  NVIDIA Corporation Device 2941
-[0019:00]---00.0-[01-06]----00.0-[02-06]--+-00.0-[03]----00.0  Mellanox Technologies Device 2100
                                           \-01.0-[04-06]----00.0-[05-06]----00.0-[06]----00.0  NVIDIA Corporation Device 2941

GPA=HPA is expected to work with a PCIe topology in the VM that resembles
bare metal. In other words, for P2P PCIe traffic (using GPA=HPA) over Gen6, the
CX8 NIC (the DMA-PF) and the GPU assigned to the VM should sit under the same PCIe switch.

Note: the PCIe switch needs a special, non-conventional ACS configuration that
allows only the minimal P2P routes needed for GPUDirect RDMA.

Signed-off-by: Shanker Donthineni <sdonthineni@nvidia.com>
Signed-off-by: Tushar Dave <tdave@nvidia.com>
Signed-off-by: Matthew R. Ochs <mochs@nvidia.com>
The Grace Blackwell GPU PCIe BAR1 is a real BAR exposed to the VM that can be
used for GPUDirect RDMA [1].

This patch assigns the HPA to BAR1 in the VM for the reason described in
commit 54db2e4a632 ("hw/arm: GB200 DirectNIC GPA=HPA").

This patch also assigns an appropriate GPA to GPU BAR2 (exposed to the VM with
the same size as BAR1; it emulates the C2C cache-coherent address space)
to avoid a region conflict during PCI bus resource assignment.

[1]: https://lore.kernel.org/lkml/20241006102722.3991-1-ankita@nvidia.com/

Signed-off-by: Tushar Dave <tdave@nvidia.com>
Signed-off-by: Matthew R. Ochs <mochs@nvidia.com>
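A minimal sketch of the placement rule described in the commit above, assuming BAR1 keeps GPA = HPA and BAR2 only needs some non-conflicting, naturally aligned GPA (the helper name and arguments are hypothetical, not the patch's actual code; QEMU_ALIGN_UP is QEMU's usual alignment macro):

#include "qemu/osdep.h"   /* QEMU_ALIGN_UP() */

/* Illustrative only: pick a GPA for BAR2 just above BAR1's HPA range. */
static uint64_t place_bar2_gpa(uint64_t bar1_hpa, uint64_t bar1_size,
                               uint64_t bar2_size)
{
    uint64_t bar1_end = bar1_hpa + bar1_size - 1;   /* BAR1 stays at GPA = HPA */

    /* 64-bit memory BARs are naturally size-aligned, so align up to bar2_size. */
    return QEMU_ALIGN_UP(bar1_end + 1, bar2_size);
}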
Similar to GB200, GB300 also requires the workaround that makes GPU BAR1 GPA=HPA.

Signed-off-by: Tushar Dave <tdave@nvidia.com>
…e ports

When a PASID-capable device, for example an Nvidia GPU, is added behind a PCIe
downstream port, the downstream port must expose the ACS capability;
otherwise PASID won't get enabled.

In addition, the other use case is GPUDirect RDMA using Data Direct, which
requires specific ACS controls at the PCIe downstream ports for
P2P communication.

Signed-off-by: Tushar Dave <tdave@nvidia.com>
To support P2P in the guest, we must expose to the guest OS the actual PCIe
topology and configuration as set by the hypervisor.

Otherwise, the behavior is undefined and may fail in software or hardware.

Extend both the root port and the downstream port to take ACS capabilities that
match the hypervisor's and use them in the guest.

Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
Signed-off-by: Tushar Dave <tdave@nvidia.com>
GPUDirect RDMA using Data Direct requires a specific ACS configuration
on PCIe Root Ports and Downstream Ports.

While ACS can be configured via QEMU's 'acs-caps' property, the guest
kernel may overwrite those ACS settings during standard programming.

This change blocks all guest writes to the PCIe ACS Control register and
preserves QEMU-provided ACS settings across device resets on PCIe Root Ports
and Downstream Ports.

Signed-off-by: Tushar Dave <tdave@nvidia.com>
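For background on how such blocking works in QEMU: a config-space register is guest-writable only where the corresponding bits are set in the device's write mask. A minimal sketch of the idea (the helper name and the acs_cap_offset parameter are illustrative, not the patch's actual code):

/*
 * Clearing the ACS Control bits in dev->wmask freezes whatever ACS settings
 * QEMU programmed, regardless of what the guest kernel writes.
 */
static void acs_ctrl_make_readonly(PCIDevice *dev, uint16_t acs_cap_offset)
{
    /* The ACS Control register sits at acs_cap_offset + PCI_ACS_CTRL. */
    pci_set_word(dev->wmask + acs_cap_offset + PCI_ACS_CTRL, 0x0000);
}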
Testing command for GB200/GB300 GPUDirect RDMA using Nvidia GPU, CX8 and Data Direct Interface:

qemu-system-aarch64 \
        -object iommufd,id=iommufd0 \
        -machine hmat=on -machine virt,accel=kvm,gic-version=3,ras=on,grace-pcie-mmio-identity=on,highmem-mmio-size=4T \
        -cpu host -smp cpus=16 -m size=16G,slots=2,maxmem=256G -nographic \
        -object memory-backend-ram,size=8G,id=m0 \
        -object memory-backend-ram,size=8G,id=m1 \
        -numa node,memdev=m0,cpus=0-15,nodeid=0 -numa node,memdev=m1,nodeid=1 \
        -numa node,nodeid=2 -numa node,nodeid=3 -numa node,nodeid=4 -numa node,nodeid=5\
        -numa node,nodeid=6 -numa node,nodeid=7 -numa node,nodeid=8 -numa node,nodeid=9\
        -device pxb-pcie,id=pcie.1,bus_nr=1,bus=pcie.0 \
        -device arm-smmuv3,primary-bus=pcie.1,id=smmuv3.1,accel=on,ats=on,ril=off,pasid=on,oas=48,cmdqv=on \
        -device pcie-root-port,id=pcie.port1,bus=pcie.1,chassis=1,io-reserve=0,acs-caps=0x1C \
        -device x3130-upstream,id=upstream1,bus=pcie.port1 \
        -device xio3130-downstream,id=downstream1_1,bus=upstream1,chassis=1,slot=1,acs-caps=0x19 \
        -device vfio-pci,host=0018:03:00.0,bus=downstream1_1,id=dmapf1,iommufd=iommufd0 \
        -device xio3130-downstream,id=downstream1_2,bus=upstream1,chassis=1,slot=2,acs-caps=0x15 \
        -device vfio-pci-nohotplug,host=0018:06:00.0,bus=downstream1_2,rombar=0,id=dev0,iommufd=iommufd0 \
        -object acpi-generic-initiator,id=gi0,pci-dev=dev0,node=2 \
        -object acpi-generic-initiator,id=gi1,pci-dev=dev0,node=3 \
        -object acpi-generic-initiator,id=gi2,pci-dev=dev0,node=4 \
        -object acpi-generic-initiator,id=gi3,pci-dev=dev0,node=5 \
        -object acpi-generic-initiator,id=gi4,pci-dev=dev0,node=6 \
        -object acpi-generic-initiator,id=gi5,pci-dev=dev0,node=7 \
        -object acpi-generic-initiator,id=gi6,pci-dev=dev0,node=8 \
        -object acpi-generic-initiator,id=gi7,pci-dev=dev0,node=9 \
        -bios /usr/share/AAVMF/AAVMF_CODE.fd \
        -device nvme,drive=nvme0,serial=deadbeaf1,bus=pcie.0 \
        -drive file=/home/nvidia/tushar/tushar/ubuntu-24.04-server-cloudimg-arm64-grace-6.14.0-1007-nvidia-64k.qcow2,index=0,media=disk,format=qcow2,if=none,id=nvme0 \
        -device e1000,netdev=net0,bus=pcie.0 \
        -device pxb-pcie,id=pcie.9,bus_nr=9,bus=pcie.0 \
        -device arm-smmuv3,primary-bus=pcie.9,id=smmuv3.2,accel=on,ats=on,ril=off,pasid=on,oas=48,cmdqv=on \
        -device pcie-root-port,id=pcie.port9,bus=pcie.9,chassis=4,io-reserve=0 \
        -device x3130-upstream,id=upstream9,bus=pcie.port9 \
        -device xio3130-downstream,id=downstream9_1,bus=upstream9,chassis=4,slot=1 \
        -device vfio-pci,host=0012:03:00.1,bus=downstream9_1,id=nic1,iommufd=iommufd0 \
        -netdev user,id=net0,hostfwd=tcp::5558-:22,hostfwd=tcp::5586-:5586 \

Signed-off-by: Tushar Dave <tdave@nvidia.com>
--
@nvmochs nvmochs self-requested a review December 16, 2025 18:54
Collaborator

@nvmochs nvmochs left a comment


I confirmed that these commits match what I had previously reviewed internally. I don't think it makes sense to keep the README since end-users will be relying on documentation rather than a commit message - I plan on dropping that when merging.

Acked-by: Matthew R. Ochs <mochs@nvidia.com>

@shamiali2008

shamiali2008 commented Dec 19, 2025

I understand this is a rebase of an existing solution onto this branch and that it is highly NVIDIA-specific. Given that, I haven’t done a thorough review at this point. I’m fine proceeding based on the test and verification results; if those look good, please go ahead.


@MitchellAugustin MitchellAugustin left a comment


Thanks all - I left a few questions that I'd like your input on, as well as some minor change requests.

Since this is a somewhat large delta not intended for upstream, I'd say my main focus is to ensure that any potentially user-visible differences are well documented in relevant parts of the source and in Nvidia's end-user documentation, wherever the behavior of some common action is expected to differ between our qemu and mainline.

I only sparsely commented on areas that are concentrated with very hardware-specific changes (such as specific address mappings that I don't have the full context on), so lmk if you would like additional input from me on any of those areas.

pbar[idx].end = pbar[idx].addr + dev->io_regions[idx].size - 1;
}

/* Make sure BAR1 gets GPA=HPA, adjust other two BARs accordingly to avoind region conflict */


nit: s/avoind/avoid

Author


Sure thing.


/* Make sure BAR1 gets GPA=HPA, adjust other two BARs accordingly to avoind region conflict */
overlap = true;
while (overlap) {


I'm wondering if there's any opportunity / benefit to optimizing these nested loops - perhaps by tracking the highest address of the newly shifted BARs within an iteration, and then using that as the alignment input for pbar[j]? (If I understand correctly, this outermost while(overlap) is only needed because a given BAR could theoretically be adjusted multiple times, which I think that approach might alleviate.)

As-is, I don't think this would cause a performance issue since I don't see any expensive operations here (and since AFAIK we only have a single-digit number of BARs we're iterating through), but it seems like something that could require optimization if any expensive operations are ever added to these inner loops in the future.

WDYT, worthwhile optimization or not necessary here?
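For concreteness, the single-pass idea could look roughly like the sketch below, reusing only the pbar[].addr/.end and io_regions[].size fields visible in the excerpt above (an illustration of the suggestion, not tested code; PCI_NUM_REGIONS, MAX and QEMU_ALIGN_UP are QEMU's usual helpers):

/*
 * Track the highest end address claimed so far and bump any conflicting BAR
 * above it, so no BAR needs to be revisited and the outer while (overlap)
 * loop goes away.
 */
uint64_t limit = pbar[1].end;                    /* BAR1 is pinned to GPA = HPA */
for (j = 0; j < PCI_NUM_REGIONS; j++) {
    if (j == 1 || !dev->io_regions[j].size) {
        continue;
    }
    if (pbar[j].addr <= limit) {                 /* overlaps the claimed range */
        pbar[j].addr = QEMU_ALIGN_UP(limit + 1, dev->io_regions[j].size);
        pbar[j].end  = pbar[j].addr + dev->io_regions[j].size - 1;
    }
    limit = MAX(limit, pbar[j].end);
}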

Author


These GPA=HPA changes will be updated with a more generic solution; I will address any optimization then. Right now it looks good IMO.

4);
}

static void nvidia_dev_vfio(PCIBus *bus, PCIDevice *dev, void *opaque)


Would you mind adding a comment to the code here to briefly describe the format of what is being read from the file descriptors later in this function, and how that content is being used in the context of this function?

Author


That inconsistency is stylistic; there’s no semantic difference. I’ll unify the loop bounds for clarity when I make the generic solution. Would that be okay?


@MitchellAugustin MitchellAugustin Jan 8, 2026


Assuming this reply was meant for my above comment - in this thread, I was curious about the syntax of the data read from {vdev->vbasedev.sysfsdev}/resource here on line 292.

Author


My comment that "That inconsistency is stylistic; ..." was about the inner and outer for-loop question you asked.

Would you mind adding a comment to the code here to briefly describe the format of what is being read from the file descriptors later in this function, and how that content is being used in the context of this function?

It’s the sysfs path string for the VFIO device. In QEMU’s VFIO stack, VFIODevice has a char *sysfsdev field that points to the kernel sysfs entry of the host device (e.g., /sys/bus/pci/devices/0009:06:00.0). I don't think we need a comment here.
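For readers following the thread: the file read later in that function is the standard Linux sysfs 'resource' node under that directory (e.g. /sys/bus/pci/devices/0009:06:00.0/resource), which has one line per PCI resource holding start, end and flags as 0x-prefixed hex. A small illustrative parser (hypothetical helper, not the patch's code):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/* Read the host-physical start/end/flags of BAR 'bar' from a sysfs resource file. */
static int read_bar_hpa(const char *resource_path, int bar,
                        uint64_t *start, uint64_t *end, uint64_t *flags)
{
    FILE *fp = fopen(resource_path, "r");
    char line[128];
    int i, ret = -1;

    if (!fp) {
        return -1;
    }
    for (i = 0; fgets(line, sizeof(line), fp); i++) {
        if (i == bar) {
            ret = (sscanf(line, "%" SCNx64 " %" SCNx64 " %" SCNx64,
                          start, end, flags) == 3) ? 0 : -1;
            break;
        }
    }
    fclose(fp);
    return ret;
}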

/* Try with the extended parent window */
ncfg->rbase = QEMU_ALIGN_UP(ncfg->wlimit64 + 1, ncfg->rsize);
ncfg->wlimit64 = ncfg->rbase + ncfg->rsize - 1;
/* TODO: check conflicts with the extended window */


Is this TODO anything we should be concerned about within the scope of this PR, or for a future PR and expected to remain a TODO at this stage?

Author


No, that TODO is not necessary.

Author

@tdavenvidia tdavenvidia left a comment


Thanks for the review; I replied to your comments, let me know.

} NVIDIACfg;

#define IORESOURCE_PREFETCH 0x00002000 /* No side effects */
#define IORESOURCE_MEM_64 0x00100000
Author


Sure, I will remove IORESOURCE_MEM_64



cap_bits = PCI_ACS_SV | PCI_ACS_TB | PCI_ACS_RR |
PCI_ACS_CR | PCI_ACS_UF | PCI_ACS_DT;

if (ctrl_bits & ~cap_bits) {
Author


Thank you for your thorough review :)

pci_set_word(dev->wmask + offset + PCI_ACS_CTRL, cap_bits);

if (is_downstream && p->acs_caps) {
/* Block guest writes to ACS Control entirely to preserve QEMU ACS settings */
Author

@tdavenvidia tdavenvidia Jan 8, 2026


Mainly for my understanding: If this is only needed for GPUDirect RDMA, is it necessary to make this change for all PCI devices (as opposed to just NICs and GPUs?)

We are adding 'acs_caps' as a generic option, right? ACS is per device, primarily applicable to Root Ports and Downstream Ports, and so is 'acs_caps'.

I'd also like to have a better understanding of whether you think this kind of change might cause any unexpected end-user-visible behavior when doing passthrough operations,

The admin who is launching QEMU and using ACS should know what they are doing. Like the kernel's 'config_acs' parameter, acs_caps serves the same purpose; in this case, for example, it allows P2P between PCIe devices such as the GPU and the NIC.

Also, making the ACS read-only to the guest seems like something that might be useful to other hardware, and also might be nice to have as an explicit qemu configuration option rather than being implemented in this function - are there plans to make this a more general configurable in that future upstream PR?

acs_caps is a generic option for all PCIe ports.

FWIW, here is the Testing command for GB200/GB300 GPUDirect RDMA using Nvidia GPU, CX8 and Data Direct Interface:

qemu-system-aarch64 \
          -object iommufd,id=iommufd0 \
          -machine hmat=on -machine virt,accel=kvm,gic-version=3,ras=on,grace-pcie-mmio-identity=on,highmem-mmio-size=4T \
          -cpu host -smp cpus=16 -m size=16G,slots=2,maxmem=256G -nographic \
          -object memory-backend-ram,size=8G,id=m0 \
          -object memory-backend-ram,size=8G,id=m1 \
          -numa node,memdev=m0,cpus=0-15,nodeid=0 -numa node,memdev=m1,nodeid=1 \
          -numa node,nodeid=2 -numa node,nodeid=3 -numa node,nodeid=4 -numa node,nodeid=5\
          -numa node,nodeid=6 -numa node,nodeid=7 -numa node,nodeid=8 -numa node,nodeid=9\
          -device pxb-pcie,id=pcie.1,bus_nr=1,bus=pcie.0 \
          -device arm-smmuv3,primary-bus=pcie.1,id=smmuv3.1,accel=on,ats=on,ril=off,pasid=on,oas=48,cmdqv=on \
          -device pcie-root-port,id=pcie.port1,bus=pcie.1,chassis=1,io-reserve=0,acs-caps=0x1C \
          -device x3130-upstream,id=upstream1,bus=pcie.port1 \
          -device xio3130-downstream,id=downstream1_1,bus=upstream1,chassis=1,slot=1,acs-caps=0x19 \
          -device vfio-pci,host=0018:03:00.0,bus=downstream1_1,id=dmapf1,iommufd=iommufd0 \
          -device xio3130-downstream,id=downstream1_2,bus=upstream1,chassis=1,slot=2,acs-caps=0x15 \
          -device vfio-pci-nohotplug,host=0018:06:00.0,bus=downstream1_2,rombar=0,id=dev0,iommufd=iommufd0 \
          -object acpi-generic-initiator,id=gi0,pci-dev=dev0,node=2 \
          -object acpi-generic-initiator,id=gi1,pci-dev=dev0,node=3 \
          -object acpi-generic-initiator,id=gi2,pci-dev=dev0,node=4 \
          -object acpi-generic-initiator,id=gi3,pci-dev=dev0,node=5 \
          -object acpi-generic-initiator,id=gi4,pci-dev=dev0,node=6 \
          -object acpi-generic-initiator,id=gi5,pci-dev=dev0,node=7 \
          -object acpi-generic-initiator,id=gi6,pci-dev=dev0,node=8 \
          -object acpi-generic-initiator,id=gi7,pci-dev=dev0,node=9 \
          -bios /usr/share/AAVMF/AAVMF_CODE.fd \
          -device nvme,drive=nvme0,serial=deadbeaf1,bus=pcie.0 \
          -drive file=<IMG.qcow2>,index=0,media=disk,format=qcow2,if=none,id=nvme0 \
          -device e1000,netdev=net0,bus=pcie.0 \
          -device pxb-pcie,id=pcie.9,bus_nr=9,bus=pcie.0 \
          -device arm-smmuv3,primary-bus=pcie.9,id=smmuv3.2,accel=on,ats=on,ril=off,pasid=on,oas=48,cmdqv=on \
          -device pcie-root-port,id=pcie.port9,bus=pcie.9,chassis=4,io-reserve=0 \
          -device x3130-upstream,id=upstream9,bus=pcie.port9 \
          -device xio3130-downstream,id=downstream9_1,bus=upstream9,chassis=4,slot=1 \
          -device vfio-pci,host=0012:03:00.1,bus=downstream9_1,id=nic1,iommufd=iommufd0 \
          -netdev user,id=net0,hostfwd=tcp::5558-:22,hostfwd=tcp::5586-:5586 \
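For reference, the acs-caps values in that command decode as follows against the standard ACS bits (bit values from Linux's pci_regs.h; the per-port reading is my interpretation of the example, not text from the patch):

/*
 * PCI_ACS_SV 0x01  Source Validation
 * PCI_ACS_TB 0x02  Translation Blocking
 * PCI_ACS_RR 0x04  P2P Request Redirect
 * PCI_ACS_CR 0x08  P2P Completion Redirect
 * PCI_ACS_UF 0x10  Upstream Forwarding
 * PCI_ACS_DT 0x40  Direct Translated P2P
 *
 * acs-caps=0x1C -> RR | CR | UF   (pcie.port1, the root port)
 * acs-caps=0x19 -> SV | CR | UF   (downstream1_1, the CX8 DMA-PF)
 * acs-caps=0x15 -> SV | RR | UF   (downstream1_2, the GPU)
 *
 * Translation Blocking stays clear everywhere, and the redirect bits are set
 * asymmetrically per port, consistent with the earlier note that only the
 * minimal P2P routes needed for GPUDirect RDMA should be allowed.
 */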

}

void pcie_acs_reset(PCIDevice *dev)
void pcie_acs_reset(PCIDevice *dev, uint16_t val)
Author

"val" refers to a PCIe ACS register value, so IMO it is fine.


@MitchellAugustin MitchellAugustin left a comment


Thanks for the quick responses to my review comments/questions. This all looks good to me now.

Acked-by: Mitchell Augustin <mitchell.augustin@canonical.com>
