
Conversation


@svc-rdkeportal01 svc-rdkeportal01 commented Nov 29, 2025

Fix Coverity RESOURCE_LEAK in rtConnection_CreateInternal

In the original code, when send_buffer allocation fails at line 551 or recv_buffer allocation fails at line 554, the function returns immediately without cleaning up previously allocated resources. This causes multiple resource leaks:

  • Memory leak: connection structure c (allocated at line 524)
  • Memory leak: send_buffer (when recv_buffer allocation fails)
  • Resource leak: 3 pthread mutexes (initialized at lines 533-535)
  • Resource leak: 1 pthread condition variable (initialized at line 541)

The fix adds proper cleanup before returning on allocation failures:

  1. Destroy all initialized mutexes and condition variables
  2. Free allocated buffers
  3. Free the connection structure

This ensures no resources are leaked when allocation failures occur during connection initialization.
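
Below is a minimal, self-contained sketch of the pattern described above, assuming a simplified connection struct; the real rtConnection in rtConnection.c has many more fields, and the actual patch repeats the cleanup inline at each failure site rather than using a goto label.

/* Sketch only: simplified stand-in for rtConnection_CreateInternal's
 * allocation and cleanup flow. */
#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>

typedef struct
{
  pthread_mutex_t mutex;
  pthread_mutex_t callback_message_mutex;
  pthread_mutex_t reconnect_mutex;
  pthread_cond_t  callback_message_cond;
  uint8_t*        send_buffer;
  uint8_t*        recv_buffer;
} conn_t;

static int conn_create(conn_t** out, size_t buff_size)
{
  conn_t* c = calloc(1, sizeof(*c));
  if (!c)
    return -1;

  pthread_mutex_init(&c->mutex, NULL);
  pthread_mutex_init(&c->callback_message_mutex, NULL);
  pthread_mutex_init(&c->reconnect_mutex, NULL);
  pthread_cond_init(&c->callback_message_cond, NULL);

  c->send_buffer = malloc(buff_size);
  if (!c->send_buffer)
    goto fail;

  c->recv_buffer = malloc(buff_size);
  if (!c->recv_buffer)
    goto fail;

  *out = c;
  return 0;

fail:
  /* Destroy the primitives in the same order as the existing teardown path
   * (mutex, callback_message_mutex, callback_message_cond, reconnect_mutex). */
  pthread_mutex_destroy(&c->mutex);
  pthread_mutex_destroy(&c->callback_message_mutex);
  pthread_cond_destroy(&c->callback_message_cond);
  pthread_mutex_destroy(&c->reconnect_mutex);
  free(c->send_buffer); /* free(NULL) is a no-op, so this is safe on both paths */
  free(c);
  return -1;
}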

Coverity Defect: Line 555 in src/rtmessage/rtConnection.c, function rtConnection_CreateInternal()
Coverity Issue ID: 111 (from Confluence wiki)

- Coverity issue ID: 111 (line 555)
- Fix generated by RDKDevPilot AI Bot
- Add cleanup for mutexes, condition variable, and buffers on allocation failures
Copilot AI review requested due to automatic review settings November 29, 2025 20:26
@svc-rdkeportal01 svc-rdkeportal01 requested a review from a team as a code owner November 29, 2025 20:26

Copilot AI left a comment


Pull request overview

This PR fixes a Coverity-detected resource leak in the rtConnection_CreateInternal function by adding proper cleanup of mutexes, condition variables, and memory when buffer allocations fail. The changes ensure that when either send_buffer or recv_buffer allocation fails, all previously initialized pthread synchronization primitives are properly destroyed before returning an error.

Key Changes

  • Added cleanup code for send_buffer allocation failure (lines 552-559) to destroy 3 mutexes, 1 condition variable, and free the connection structure
  • Added cleanup code for recv_buffer allocation failure (lines 562-570) to free the send_buffer, destroy 3 mutexes, 1 condition variable, and free the connection structure
  • Ensures no resource leaks occur during early initialization failures


Comment on lines 553 to 556
pthread_mutex_destroy(&c->mutex);
pthread_mutex_destroy(&c->callback_message_mutex);
pthread_mutex_destroy(&c->reconnect_mutex);
pthread_cond_destroy(&c->callback_message_cond);

Copilot AI Nov 29, 2025


[nitpick] The order of destroying mutexes and condition variables is inconsistent with the existing cleanup pattern in the codebase. At lines 770-773, the existing destroy function uses the order: mutex, callback_message_mutex, callback_message_cond, reconnect_mutex. For consistency, consider destroying the condition variable between callback_message_mutex and reconnect_mutex instead of after all three mutexes.

Comment on lines 564 to 567
pthread_mutex_destroy(&c->mutex);
pthread_mutex_destroy(&c->callback_message_mutex);
pthread_mutex_destroy(&c->reconnect_mutex);
pthread_cond_destroy(&c->callback_message_cond);

Copilot AI Nov 29, 2025


[nitpick] The order of destroying mutexes and condition variables is inconsistent with the existing cleanup pattern in the codebase. At lines 770-773, the existing destroy function uses the order: mutex, callback_message_mutex, callback_message_cond, reconnect_mutex. For consistency, consider destroying the condition variable between callback_message_mutex and reconnect_mutex instead of after all three mutexes.

Address Copilot review feedback:
- Destroy condition variable between callback_message_mutex and reconnect_mutex
- Matches existing cleanup pattern at lines 770-773
@svc-rdkeportal01

Thank you @copilot-pull-request-reviewer for the excellent review! 🙏

Issue identified: Cleanup order inconsistency

You're absolutely right - the existing cleanup pattern at lines 770-773 destroys resources in this order:

  1. mutex
  2. callback_message_mutex
  3. callback_message_cond ← condition variable between mutexes
  4. reconnect_mutex

I've updated both cleanup blocks (lines 553-556 and 564-567) to follow the same pattern for consistency.

✅ Fixed in commit 8165c75
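
To make the resulting order concrete, here is a small sketch of one way to keep the two failure-path blocks and the teardown at lines 770-773 from drifting apart again: a shared static helper owning the destroy order. The helper name and the stand-in struct are hypothetical and not part of this PR.

#include <pthread.h>

/* Hypothetical stand-in for the connection struct, for illustration only. */
struct conn
{
  pthread_mutex_t mutex;
  pthread_mutex_t callback_message_mutex;
  pthread_mutex_t reconnect_mutex;
  pthread_cond_t  callback_message_cond;
};

/* Hypothetical helper: single owner of the destroy order used by the
 * teardown at lines 770-773 and by both allocation-failure paths. */
static void conn_destroy_sync_primitives(struct conn* c)
{
  pthread_mutex_destroy(&c->mutex);
  pthread_mutex_destroy(&c->callback_message_mutex);
  pthread_cond_destroy(&c->callback_message_cond);
  pthread_mutex_destroy(&c->reconnect_mutex);
}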


Copilot AI commented Nov 29, 2025

@svc-rdkeportal01 I've opened a new pull request, #399, to work on those changes. Once the pull request is ready, I'll request review from you.

@rdkcmf-jenkins

Blackduck scan failure details

Summary: 0 violations, 0 files pending approval, 1 file pending identification.

  • Protex Server Path: /home/blackduck/github/rbus/398/rdkcentral/rbus

  • Commit: 8165c75

Report detail: gist

@rdkcmf-jenkins

WARNING: A Blackduck scan failure has been waived

A prior failure has been upvoted

  • Upvote reason: ok

  • Commit: 8165c75
