Seemingly large overhead running Vivado on Docker #247

@pwaller

Description

Take this perf analysis, for example, recorded while a synthesis run was in progress in Vivado:

```
sudo yum install -y perf
sudo perf record -g -a sleep 10
sudo perf report
```

```
[snip]
   - 34.64% sys_read
      - 34.51% vfs_read
         - 34.26% seq_read
            - 34.01% proc_single_show
               - 33.81% proc_pid_status
                  + 11.45% render_sigset_t
                  + 10.17% cpuset_task_status_allowed
                  + 5.09% seq_printf
                  + 3.30% task_mem
                  + 2.15% render_cap_t
[snip]
```

It shows 34% of cycles spent in sys_read (i.e. the read syscall), and nearly all of that time goes to proc_pid_status. This makes no sense: why would we spend so much of our CPU budget generating `/proc/<pid>/status` text and formatting it in seq_printf?
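That kernel path is hit every time someone read()s `/proc/<pid>/status`, since the status text is regenerated on each read rather than cached. As a sanity check on how expensive that is, here's a minimal userspace sketch (the function name is my own, not from the issue) that times repeated reads of `/proc/self/status`:

```python
import time

def time_status_reads(n=10_000):
    """Time n full reads of /proc/self/status.

    Each read() makes the kernel regenerate the entire status text
    via proc_pid_status, so the cost measured here is dominated by
    kernel-side formatting, not disk I/O.
    """
    t0 = time.perf_counter()
    for _ in range(n):
        with open("/proc/self/status") as f:
            f.read()
    return time.perf_counter() - t0

print(f"{time_status_reads():.3f}s for 10000 reads of /proc/self/status")
```

If something in the container (a monitoring agent, a runtime shim) is polling status files for every process at a high rate, those per-read costs would add up to exactly the profile above.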

According to my mental model, if this measurement is correct, we're wasting a lot of time in the kernel that could be going to synthesis. The obvious culprit would be seccomp, which effectively runs a Berkeley Packet Filter program on every syscall invocation. However, the above measurement was taken in a Docker container invoked with --security-opt seccomp=unconfined, so that theory is questionable. I'd love to know why proc_single_show is appearing in our stack traces.
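One way to double-check the seccomp theory from inside the container is to read the `Seccomp:` field of `/proc/self/status`, which reports the current process's mode (0 = disabled, 1 = strict, 2 = filter/BPF). A minimal sketch, assuming a kernel built with CONFIG_SECCOMP:

```python
def seccomp_mode():
    """Return the seccomp mode of the current process.

    Parses the 'Seccomp:' line of /proc/self/status:
    0 = disabled, 1 = strict mode, 2 = filter (BPF) mode.
    Returns None if the kernel exposes no such field.
    """
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("Seccomp:"):
                return int(line.split()[1])
    return None

print("seccomp mode:", seccomp_mode())
```

If this prints 0 inside the container, seccomp really is off and the BPF-per-syscall theory can be ruled out for this profile.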
