Description
We are developing a SystemVerilog library and aiming for a reproducible build and test environment. To achieve this, we want to use a specific version of Verilator running in a Docker container, rather than relying on the system-installed version.
We are currently struggling to find a "native" or recommended way to configure FuseSoC to use containerized tools.
Current Approach (Workaround)
Our current solution involves "tricking" FuseSoC by manipulating the `PATH` to point to a wrapper script that invokes Docker.
**Core File (`project.core`):** We define our targets using the standard `verilator` tool definition:

```yaml
targets:
  lint:
    default_tool: verilator
    filesets: [rtl]
    tools:
      verilator:
        mode: lint-only
```
**Wrapper Script (`.fusesoc/verilator`):** We created a script named `verilator` that acts as a shim:

```bash
#!/bin/bash
sudo docker run --rm \
  -v "$(pwd)":/work \
  -w /work \
  verilator/verilator:latest \
  "$@"
```
**Invocation Script (`fusesoc-docker.sh`):** We run FuseSoC via a script that prepends our wrapper directory to the `PATH`:

```bash
#!/bin/bash
export PATH="$(pwd)/.fusesoc:$PATH"
fusesoc "$@"
```
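As a side note, we have sketched a hardened variant of this shim (our own idea, not an official FuseSoC recipe): it drops `sudo` by relying on docker-group membership, runs the container as the invoking user so build artifacts aren't root-owned, and mounts a configurable root at the *same* path inside the container so absolute paths stay valid. The snippet below only generates the wrapper and syntax-checks it; `FUSESOC_DOCKER_ROOT` is a convention we invented, not a FuseSoC setting:

```shell
#!/bin/bash
set -euo pipefail

# Write a hardened wrapper. FUSESOC_DOCKER_ROOT (our own variable) should
# enclose both the project and the FuseSoC build tree.
mkdir -p .fusesoc
cat > .fusesoc/verilator <<'EOF'
#!/bin/bash
set -euo pipefail
ROOT="${FUSESOC_DOCKER_ROOT:-$PWD}"
# Same-path mount: absolute host paths remain valid inside the container.
# --user keeps generated files owned by the invoking user; no sudo needed
# if that user is in the docker group.
exec docker run --rm \
  --user "$(id -u):$(id -g)" \
  -v "$ROOT":"$ROOT" \
  -w "$PWD" \
  verilator/verilator:latest \
  "$@"
EOF
chmod +x .fusesoc/verilator
bash -n .fusesoc/verilator && echo "wrapper syntax OK"
```

Mounting `$ROOT` at the same path it has on the host (rather than remapping to `/work`) is what lets FuseSoC-generated absolute paths resolve unchanged inside the container.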
The Problem
While this "works" for simple cases, it feels fragile and hacky:
- **Path Issues:** It relies on volume-mounting `$(pwd)` to `/work`. If FuseSoC passes absolute paths outside of `$(pwd)` (e.g., from a cache directory or a dependency in another location), the container won't see them.
- **Permissions:** It requires `sudo` (or a configured docker group), which complicates CI/CD environments.
- **Tool Awareness:** FuseSoC doesn't know it's running a container, so it can't intelligently handle things like environment variables or file paths that need translation.
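The path-visibility issue can be stated concretely: a host path is reachable inside the container only if it lies under some mounted directory. A tiny shell sketch (where `under_mount` is our own illustrative helper and the example paths are made up):

```shell
#!/bin/bash
set -euo pipefail

# Returns 0 iff PATH lies under MOUNT_ROOT, i.e. within what a
# "-v MOUNT_ROOT:..." bind mount exposes to the container.
under_mount() {  # usage: under_mount MOUNT_ROOT PATH
  case "$2/" in
    "$1"/*) return 0 ;;
    *)      return 1 ;;
  esac
}

MOUNT="/home/user/project"                          # what -v $(pwd):/work covers
under_mount "$MOUNT" "$MOUNT/src/top.sv"         && echo "visible"
under_mount "$MOUNT" "/home/user/.cache/fusesoc" || echo "NOT visible"
```

The second check is exactly the failure mode above: a dependency resolved into a cache directory outside the project tree simply does not exist from the container's point of view.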
The SVUnit Complication
The situation gets even more complex when integrating SVUnit. SVUnit is a framework that generates a test runner and then invokes the simulator. We run this via a `hook_script` in our core file.
**Core File Configuration (`project.core`):** We define a script that runs the tests and attach it to the `post_run` hook of our test target.

```yaml
scripts:
  run_tests:
    filesets:
      - hook_scripts
    cmd:
      - sh
      - -c
      - "$FILES_ROOT/test/run_svunit.sh"

targets:
  test:
    default_tool: verilator
    filesets:
      - rtl
      - hook_scripts
    tools:
      verilator:
        mode: lint-only
    hooks:
      post_run:
        - run_tests
    toplevel: TopLevel_pkg
```
**The Script (`test/run_svunit.sh`):** This script handles the complex Docker invocation required to run SVUnit inside the container.

```bash
#!/bin/bash
# ... checks for SVUNIT_INSTALL_PATH ...
sudo docker run --entrypoint bash \
  -e SVUNIT_INSTALL=/eda/svunit \
  -v "${SVUNIT_INSTALL_PATH}":/eda/svunit \
  -v "./${FILES_ROOT}":/work \
  -w /work/test \
  verilator/verilator:latest \
  -c "cd \$SVUNIT_INSTALL && source Setup.bsh && cd /work/test && \
      runSVUnit -s verilator --c_arg '-I../src'"
```
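For comparison, here is a variant we have been sketching (our own idea, untested against a real SVUnit checkout): it avoids `sudo` via `--user` and mounts both trees at their host paths so no remapping to `/eda` or `/work` is needed. The snippet only generates the alternative script and syntax-checks it:

```shell
#!/bin/bash
set -euo pipefail

# Generate an alternative run_svunit.sh (illustrative only) and lint it.
cat > run_svunit_alt.sh <<'EOF'
#!/bin/bash
set -euo pipefail
: "${SVUNIT_INSTALL_PATH:?set SVUNIT_INSTALL_PATH to your SVUnit checkout}"
# Same-path mounts: the container sees the SVUnit checkout and the project
# at the same absolute paths as the host, so nothing needs translating.
docker run --rm --entrypoint bash \
  --user "$(id -u):$(id -g)" \
  -e SVUNIT_INSTALL="$SVUNIT_INSTALL_PATH" \
  -v "$SVUNIT_INSTALL_PATH":"$SVUNIT_INSTALL_PATH" \
  -v "$PWD":"$PWD" \
  -w "$PWD/test" \
  verilator/verilator:latest \
  -c 'cd "$SVUNIT_INSTALL" && source Setup.bsh && cd - >/dev/null && runSVUnit -s verilator --c_arg "-I../src"'
EOF
bash -n run_svunit_alt.sh && echo "script syntax OK"
```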
This setup forces us to:
- **Mount multiple volumes:** The project code and the SVUnit installation (which lives on the host) must be mounted into the container.
- **Handle environment variables:** We need to pass `SVUNIT_INSTALL_PATH` from the host to the container.
- **Bypass FuseSoC:** The command running inside the container is a complex chain that FuseSoC is unaware of. We are essentially using FuseSoC as a glorified `make` target that calls a shell script.
Questions
1. **Is there a native way to specify a container image for a tool?**
   Ideally, we'd like something like:

   ```yaml
   tools:
     verilator:
       container: verilator/verilator:latest
   ```

   Or a global configuration to map tools to containers.
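To illustrate the second option, such a global mapping might look like the following purely hypothetical sketch, mirroring the existing INI style of `fusesoc.conf` (the section and keys below do not exist today; we invented them as a wishlist):

```ini
; NOT real FuseSoC syntax -- a wishlist sketch only.
[tool.verilator]
container = verilator/verilator:latest
runtime = docker
```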
2. **How should we handle file paths?**
   If we stick to the wrapper approach, is there a standard way to ensure all necessary files (including dependencies managed by FuseSoC) are available to the container?
3. **Are there examples of "Flow API" usage for this?**
   We looked into the Flow API but weren't sure if creating a custom flow just to wrap Docker is the right path, or if there's a simpler configuration we missed.
We would appreciate any guidance on the "best practice" way to achieve reproducible, containerized builds with FuseSoC.