Investigate performance for bigger gRPC message sizes #77

@fyrchik

Description

By default, the gRPC maximum message size is 4 MiB. This leads to big objects being split into 4 MiB chunks, so for MaxObjectSize=64 MiB we send at least 16 messages per object (not counting the signing and verification overhead for each one). For custom deployments where we have full control over the network, we could set this size depending on MaxObjectSize on both client and server.

In this task:

  1. Set grpc.MaxRecvMsgSize in node to some high value (70 MiB).
  2. Perform benchmarks of 64 MiB objects with a custom client build (see cli: Fix default buffer size for object PUT nspcc-dev/neofs-node#2243 for an example of what needs to be changed).
  3. If the observations support the hypothesis, add a config parameter for this and create tasks to support it on the client side:
  • api-go and sdk-go
  • k6
  • frostfs-cli parameter
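One possible shape for the config parameter from step 3, as a sketch only: the key names below are hypothetical (to be decided when the task is created), and the value follows the 70 MiB suggestion from step 1. In grpc-go this would feed `grpc.MaxRecvMsgSize` on the server and `grpc.MaxCallRecvMsgSize` in the client's default call options.

```yaml
# Hypothetical key names; actual naming to be settled in the follow-up tasks.
grpc:
  # Must exceed MaxObjectSize (64 MiB) plus message framing overhead.
  max_recv_msg_size: 73400320  # 70 MiB
```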

In theory this enables future optimizations, such as replicating an object from the blobstor without unmarshaling it. (In this case, also check that validation is done when the object is received; we don't want to propagate possibly corrupted data across the cluster, see https://www.usenix.org/system/files/conference/fast17/fast17-ganesan.pdf.)

Somewhat related TrueCloudLab/frostfs-api#9
