user/vijays/vdpa dev #2

Open
wants to merge 4 commits into base: user/vijays/vdpa_dev
Conversation

asaini-xilinx (Contributor)

vdpa/sfc: Add support for SW assisted live migration
vdpa/sfc: add support for mcdi IOVA remap

Abhimanyu Saini added 4 commits July 15, 2022 11:13
In SW assisted live migration, the vDPA driver will stop all virtqueues
and set up SW vrings to relay the communication between the virtio
driver and the vDPA device using an event-driven relay thread.

This allows the vDPA driver to assist with guest dirty page logging for
live migration.

Signed-off-by: Abhimanyu Saini <[email protected]>
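Below is a minimal sketch of such a relay loop, assuming an epoll fd
armed with the per-queue call eventfds. rte_vdpa_relay_vring_used() and
rte_vhost_vring_call() are existing DPDK helpers; relay_ctx, NR_QUEUES
and used_ring_relay() are illustrative names, not the driver's own.

#include <stdint.h>
#include <unistd.h>
#include <sys/epoll.h>
#include <rte_vdpa.h>
#include <rte_vhost.h>

#define NR_QUEUES 8			/* illustrative bound */

struct relay_ctx {
	int vid;			/* vhost device ID */
	int epfd;			/* epoll fd over the call eventfds */
	int call_fd[NR_QUEUES];		/* device completion eventfds */
	void *mediated_vring[NR_QUEUES];/* SW vrings the HW actually uses */
};

static void *
used_ring_relay(void *arg)
{
	struct relay_ctx *ctx = arg;
	struct epoll_event ev;
	uint64_t cnt;
	uint16_t qid;

	for (;;) {
		if (epoll_wait(ctx->epfd, &ev, 1, -1) < 1)
			continue;
		qid = (uint16_t)ev.data.u32;

		/* Drain the eventfd the device kicked after it updated
		 * the used ring of the mediated (SW) vring. */
		if (read(ctx->call_fd[qid], &cnt, sizeof(cnt)) < 0)
			continue;

		/* Relay used entries from the SW vring back to the
		 * guest vring; this helper also logs the dirty guest
		 * pages it writes, which enables live migration. */
		rte_vdpa_relay_vring_used(ctx->vid, qid,
					  ctx->mediated_vring[qid]);

		/* Finally, interrupt the guest virtio driver. */
		rte_vhost_vring_call(ctx->vid, qid);
	}
	return NULL;
}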
The vDPA driver allocates an MCDI buffer and maps it for DMA in the IO
address space of the VF, while the guest virtio driver independently
allocates virtqueues and uses their corresponding guest IOVA(s). There
is no guarantee that these two addresses will be distinct, so vDPA
might need to relocate the MCDI IOVA.

This patch adds a libefx API to handle MCDI IOVA remapping.

Signed-off-by: Abhimanyu Saini <[email protected]>
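A hedged sketch of how the new entry point might be declared in efx.h,
following libefx's annotation style; the exact prototype in the patch
may differ:

/* Remap the MCDI DMA buffer to a new IOVA (prototype assumed). */
extern	__checkReturn	efx_rc_t
efx_mcdi_dma_remap(
	__in		efx_nic_t *enp);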
MCDI IOVA unmap/remap is handled by the client driver. Add a dma_remap
callback to mcdi_ops so that libefx can pass control to the client
driver and move the MCDI IOVA to a new address.

Signed-off-by: Abhimanyu Saini <[email protected]>
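A minimal sketch of the dispatch path this describes, assuming libefx's
efx_mcdi_ops_t convention; the emco_dma_remap member and the em_emcop
field access are illustrative, not verbatim from the patch:

typedef struct efx_mcdi_ops_s {
	/* ... existing MCDI ops (emco_init, emco_send_request, ...) ... */
	efx_rc_t	(*emco_dma_remap)(efx_nic_t *enp);
} efx_mcdi_ops_t;

	__checkReturn	efx_rc_t
efx_mcdi_dma_remap(
	__in		efx_nic_t *enp)
{
	const efx_mcdi_ops_t *emcop = enp->en_mcdi.em_emcop;

	/* libefx does not own the MCDI DMA mapping, so hand control
	 * back to the client driver (here vdpa/sfc), which unmaps the
	 * buffer and remaps it at a non-overlapping IOVA. */
	if (emcop == NULL || emcop->emco_dma_remap == NULL)
		return (ENOTSUP);

	return (emcop->emco_dma_remap(enp));
}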
The vDPA driver allocates an MCDI buffer and maps it for DMA in the IO
address space of the virtual function, and the virtio-net driver
independently allocates virtqueues and uses their corresponding guest
IOVA(s). There is no guarantee that these two addresses will be
distinct, and vDPA might need to relocate the MCDI IOVA.

To resolve the problem of overlap between the IOVA(s) allocated by vDPA
and QEMU, three changes have been made to the vDPA driver (see the
sketch below):

1) Cache all known IOVA(s) in a linked list and add functions
   to check for and resolve IOVA overlaps.
2) Check for overlap.
3) If an overlap is found, find a new IOVA and invoke efx_mcdi_dma_remap.

Signed-off-by: Abhimanyu Saini <[email protected]>
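A minimal sketch of the bookkeeping in changes 1) to 3), using a plain
sys/queue.h list; every name here is illustrative rather than the
patch's own:

#include <stdbool.h>
#include <stdint.h>
#include <sys/queue.h>

struct iova_node {
	uint64_t iova;
	uint64_t size;
	TAILQ_ENTRY(iova_node) next;
};

TAILQ_HEAD(iova_list, iova_node);

/* 2) Check whether [iova, iova + size) overlaps any cached range. */
static bool
iova_overlaps(struct iova_list *list, uint64_t iova, uint64_t size)
{
	struct iova_node *node;

	TAILQ_FOREACH(node, list, next) {
		if (iova < node->iova + node->size &&
		    node->iova < iova + size)
			return true;
	}
	return false;
}

/* 3) Pick an address above every cached range; a real implementation
 * would also respect the device's IOVA width. */
static uint64_t
iova_find_free(struct iova_list *list, uint64_t size, uint64_t align)
{
	struct iova_node *node;
	uint64_t candidate = 0;

	TAILQ_FOREACH(node, list, next) {
		if (node->iova + node->size > candidate)
			candidate = node->iova + node->size;
	}
	(void)size;	/* assume the space above all ranges suffices */
	return (candidate + align - 1) & ~(align - 1);
}

If iova_overlaps() fires for the MCDI buffer's current mapping, the
driver would pick a fresh address with iova_find_free() and hand it to
efx_mcdi_dma_remap().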
okt-galaktionov pushed a commit that referenced this pull request Oct 21, 2022
If DPDK is built with the thread sanitizer, it reports a race when
setting the multi-process file descriptor. The fix is to use atomic
operations when updating mp_fd.
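A minimal sketch of the racing accesses and the atomic fix; mp_fd is
the real global named in the report below, but the surrounding helpers
here are illustrative:

#include <pthread.h>
#include <unistd.h>

static int mp_fd = -1;

/* Reader side (the rte_mp_handle control thread): an atomic load is
 * what ThreadSanitizer wants to see here. */
static void *
mp_handle(void *arg)
{
	(void)arg;
	while (__atomic_load_n(&mp_fd, __ATOMIC_RELAXED) >= 0) {
		/* ... recvmsg() on mp_fd and dispatch the request ... */
	}
	return NULL;
}

/* Writer side (rte_mp_channel_cleanup on the main thread): atomically
 * swap the fd out before closing it. */
static void
mp_channel_cleanup(void)
{
	int fd = __atomic_exchange_n(&mp_fd, -1, __ATOMIC_RELAXED);

	if (fd >= 0)
		close(fd);
}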

Build:
$ meson -Db_sanitize=thread build
$ ninja -C build

Simple example:
$ ./build/app/dpdk-testpmd -l 1-3 --no-huge
EAL: Detected CPU lcores: 16
EAL: Detected NUMA nodes: 1
EAL: Static memory layout is selected, amount of reserved memory can be adjusted with -m or --socket-mem
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /run/user/1000/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
testpmd: No probed ethernet devices
testpmd: create a new mbuf pool <mb_pool_0>: n=163456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
EAL: Error - exiting with code: 1
  Cause: Creation of mbuf pool for socket 0 failed: Cannot allocate memory
==================
WARNING: ThreadSanitizer: data race (pid=87245)
  Write of size 4 at 0x558e04d8ff70 by main thread:
    #0 rte_mp_channel_cleanup <null> (dpdk-testpmd+0x1e7d30c)
    #1 rte_eal_cleanup <null> (dpdk-testpmd+0x1e85929)
    #2 rte_exit <null> (dpdk-testpmd+0x1e5bc0a)
    #3 mbuf_pool_create.cold <null> (dpdk-testpmd+0x274011)
    #4 main <null> (dpdk-testpmd+0x5cc15d)

  Previous read of size 4 at 0x558e04d8ff70 by thread T2:
    #0 mp_handle <null> (dpdk-testpmd+0x1e7c439)
    #1 ctrl_thread_init <null> (dpdk-testpmd+0x1e6ee1e)

  As if synchronized via sleep:
    #0 nanosleep libsanitizer/tsan/tsan_interceptors_posix.cpp:366
    #1 get_tsc_freq <null> (dpdk-testpmd+0x1e92ff9)
    #2 set_tsc_freq <null> (dpdk-testpmd+0x1e6f2fc)
    #3 rte_eal_timer_init <null> (dpdk-testpmd+0x1e931a4)
    #4 rte_eal_init.cold <null> (dpdk-testpmd+0x29e578)
    #5 main <null> (dpdk-testpmd+0x5cbc45)

  Location is global 'mp_fd' of size 4 at 0x558e04d8ff70 (dpdk-testpmd+0x000003122f70)

  Thread T2 'rte_mp_handle' (tid=87248, running) created by main thread at:
    #0 pthread_create libsanitizer/tsan/tsan_interceptors_posix.cpp:969
    #1 rte_ctrl_thread_create <null> (dpdk-testpmd+0x1e6efd0)
    #2 rte_mp_channel_init.cold <null> (dpdk-testpmd+0x29cb7c)
    #3 rte_eal_init <null> (dpdk-testpmd+0x1e8662e)
    #4 main <null> (dpdk-testpmd+0x5cbc45)

SUMMARY: ThreadSanitizer: data race (app/dpdk-testpmd+0x1e7d30c) in rte_mp_channel_cleanup
==================
ThreadSanitizer: reported 1 warnings

Fixes: bacaa27 ("eal: add channel for multi-process communication")
Cc: [email protected]

Signed-off-by: Stephen Hemminger <[email protected]>
Acked-by: Anatoly Burakov <[email protected]>
Reviewed-by: Chengwen Feng <[email protected]>