Transform processor - memory usage continues to increase when transforming high-cardinality Redis calls #2148
Closed
manojksardana
announced in
Community Discussions
Replies: 1 comment
hi @manojksardana, I'd suggest asking in the https://github.com/open-telemetry/opentelemetry-collector-contrib repo, where the spanmetricsconnector lives
Hi All,
We are using the OpenTelemetry Collector to receive traces from an application instrumented with the OTel SDK. To keep trace cardinality manageable, we use the transform processor to match traces against certain patterns and replace the matched span names with a generic URL. After this replacement, the traces are sent to the spanmetrics connector, which converts them into metrics based on a few predefined dimensions.
Below is our definition of the pipelines.
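(The actual pipeline configuration did not survive in this copy of the discussion. A minimal sketch of the layout described, with a transform processor feeding the spanmetrics connector, and with assumed receiver and exporter names, might look like this:)

```yaml
service:
  pipelines:
    traces:
      receivers: [otlp]            # assumed receiver
      processors: [transform, batch]
      exporters: [spanmetrics]     # connector acts as a trace exporter here
    metrics:
      receivers: [spanmetrics]     # same connector acts as a metrics receiver
      exporters: [prometheusremotewrite]  # assumed metrics backend
```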
Now, as part of the transform processor we have the following operations.
One of the statements above (in bold) converts all of the high-cardinality Redis calls to the same span name, redis. Since the number of calls is very high, I expect them all to be converted to that single span name and aggregated into the same metric series by spanmetrics afterwards.
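(The exact transform statements were also lost from this copy of the thread. An illustrative, hypothetical statement of the kind described, collapsing every Redis span name into one, could be written in OTTL as:)

```yaml
processors:
  transform:
    trace_statements:
      - context: span
        statements:
          # Hypothetical pattern: rename any span whose name starts
          # with "redis" to the single generic name "redis".
          - set(name, "redis") where IsMatch(name, "^redis.*")
```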
However, this results in ever-increasing memory usage and eventually an OOM error. If I remove this statement and instead use the filter processor to drop these spans, rather than passing them to spanmetrics under a single span name, memory usage remains stable and under control.
I am not sure whether the high cardinality of the Redis calls is causing a memory leak at the transform processor or spanmetrics connector level, or whether something else is wrong that I have not been able to identify.
Any help here?
Thanks
Manoj Sardana