Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings.
📝 Walkthrough

Adds deterministic hashing utilities and verification, restructures audit payloads into root metadata plus nested data, updates OpenSearch mappings/settings and config (indexPrefix/refresh), changes service interfaces to use AuditLogInput/AuditLogResult/AuditLogDocument, expands tests, and bumps the package version.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    rect rgba(200,220,255,0.5)
        participant Client
    end
    rect rgba(200,255,200,0.5)
        participant AuditService
        participant OpenSearchService
    end
    rect rgba(255,230,200,0.5)
        participant HashUtility
        participant OpenSearch
    end
    Client->>AuditService: log(AuditLogInput)
    AuditService->>OpenSearchService: log(input)
    OpenSearchService->>OpenSearchService: build nested data object
    OpenSearchService->>HashUtility: computeLogHash(data)
    HashUtility-->>OpenSearchService: hex hash
    OpenSearchService->>OpenSearch: index AuditLogDocument (timestamp, serviceName, hash, eventPhase, correlationId, data) with refresh
    OpenSearch-->>OpenSearchService: ack / error
    OpenSearchService-->>AuditService: AuditLogResult
    AuditService-->>Client: AuditLogResult
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~28 minutes
🚥 Pre-merge checks: ❌ 1 failed (inconclusive) | ✅ 2 passed

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 8
🧹 Nitpick comments (5)
src/utils/interfaces/audit.ts (1)
`11-42`: `AuditLogInput` and `AuditLogData` share nearly identical fields — consider composing.

`AuditLogInput` is `AuditLogData` plus `correlationId` and `eventPhase`. Defining one in terms of the other eliminates the maintenance burden of keeping both in sync.

♻️ Proposed refactor

```diff
-export interface AuditLogInput {
-  correlationId: string;
-  eventPhase: 'INTENT' | 'SUCCESS' | 'FAILED';
-  eventType: string;
-  actorId: string;
-  actorRole: string;
-  actorName: string;
-  resourceType: string;
-  resourceId?: string;
-  sourceIp: string;
-  description: string;
-  status?: string;
-  tenantId: string;
-  durationMs?: number;
-  outcome?: Record<string, unknown>;
-  actionPerformed?: Record<string, unknown>;
-}
+export interface AuditLogInput extends AuditLogData {
+  correlationId: string;
+  eventPhase: 'INTENT' | 'SUCCESS' | 'FAILED';
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/interfaces/audit.ts` around lines 11 - 42, AuditLogInput and AuditLogData duplicate nearly all fields; refactor by making one type extend the other: define AuditLogData with the common fields and then change AuditLogInput to extend AuditLogData adding correlationId and eventPhase (or the inverse if you prefer), removing the duplicated field declarations; update any references to AuditLogInput/AuditLogData accordingly so the interface names remain the same but duplication is eliminated.

src/utils/hash-utility.ts (1)
`51-64`: `verifyHashChain` is misleadingly named — it does not implement a hash chain.

A hash chain links each entry to its predecessor (e.g., `hash_n = H(data_n || hash_{n-1})`). This function independently verifies each log's hash, which is batch verification, not chaining. The name could mislead consumers into thinking sequential integrity is guaranteed.

Consider renaming to `verifyAllLogHashes` or `verifyLogBatch`, or actually implementing chaining if sequential tamper detection is intended.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/hash-utility.ts` around lines 51 - 64, The function verifyHashChain misleadingly suggests sequential chaining but currently just batch-verifies each AuditLogDocument using verifyLogHash; either rename verifyHashChain to a clearer name such as verifyAllLogHashes or verifyLogBatch (and update all references/exports accordingly), or implement true chain verification by ordering the AuditLogDocument entries and checking that each entry's stored hash equals the computed hash over its data plus the previous entry's hash (e.g., compute expected = H(data_n || prevHash) and compare), keeping verifyLogHash only for single-entry validation; adjust unit tests and any callers that rely on verifyHashChain to the new name or new chain semantics.

src/services/opensearch.service.ts (2)
`104-131`: `data` object manually mirrors every `AuditLogData` field — fragile if the interface evolves.

If a new field is added to `AuditLogInput`/`AuditLogData`, this explicit field-by-field copy (lines 117–131) must be updated in lockstep, or the new field silently won't be hashed or stored. A destructuring approach is more maintainable:

♻️ Proposed refactor

```diff
-    const { correlationId, eventPhase, ...dataFields } = input;
-
-    const data = {
-      eventType: dataFields.eventType,
-      actorId: dataFields.actorId,
-      actorRole: dataFields.actorRole,
-      actorName: dataFields.actorName,
-      resourceType: dataFields.resourceType,
-      resourceId: dataFields.resourceId,
-      sourceIp: dataFields.sourceIp,
-      description: dataFields.description,
-      status: dataFields.status,
-      tenantId: dataFields.tenantId,
-      durationMs: dataFields.durationMs,
-      outcome: dataFields.outcome,
-      actionPerformed: dataFields.actionPerformed,
-    };
+    const { correlationId, eventPhase, ...data } = input;
```
The existing destructuring on line 115 already produces `dataFields` with exactly the right shape. Just rename it to `data` and use it directly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/opensearch.service.ts` around lines 104 - 131, The current log method in OpenSearchService manually rebuilds the AuditLogData object (the local variable data) field-by-field, which is fragile; instead use the already-destructured dataFields from the AuditLogInput directly (e.g., const data = { ...dataFields } or simply const data = dataFields) so new fields on AuditLogInput/AuditLogData are preserved automatically; update the public async log(input: AuditLogInput) implementation to stop enumerating each property and use dataFields as the data payload (keeping correlationId and eventPhase excluded as currently done).
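For illustration, a minimal sketch of the rest-destructuring approach described above (the interface is trimmed to a few fields for brevity and the helper name is hypothetical, not the package's actual method):

```typescript
interface AuditLogInput {
  correlationId: string;
  eventPhase: 'INTENT' | 'SUCCESS' | 'FAILED';
  eventType: string;
  actorId: string;
  tenantId: string;
  // ...remaining AuditLogData fields elided
}

function buildDocumentPayload(input: AuditLogInput) {
  // Everything except correlationId and eventPhase flows into `data`,
  // so fields added to the input type later are carried along automatically.
  const { correlationId, eventPhase, ...data } = input;
  return { correlationId, eventPhase, data };
}

const doc = buildDocumentPayload({
  correlationId: 'c-1',
  eventPhase: 'SUCCESS',
  eventType: 'LOGIN',
  actorId: 'user-123',
  tenantId: 'tenant-1',
});
console.log(doc.data); // { eventType: 'LOGIN', actorId: 'user-123', tenantId: 'tenant-1' }
```

The rest pattern keeps the hash input and the stored payload in lockstep with the type, which is the point of the suggestion above.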
`80-87`: Consider migrating from legacy to composable index templates.

OpenSearch 2.x+ marks the legacy `_template` APIs as deprecated in favor of composable index templates (`_index_template`). While `existsTemplate` and `putTemplate` remain supported for backward compatibility in `@opensearch-project/opensearch` `^3.5.1`, modern code should use `exists_index_template` and `put_index_template` instead.

Migrate to:

- `this.client.indices.exists_index_template({ name: templateName })`
- `this.client.indices.put_index_template({ name: templateName, body: templateBody })`

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/opensearch.service.ts` around lines 80 - 87, The code currently uses the legacy existsTemplate and putTemplate calls; replace them with the composable template APIs by calling this.client.indices.exists_index_template({ name: templateName }) and, if not found, this.client.indices.put_index_template({ name: templateName, body: templateBody }); update any checks that rely on exists.statusCode === 404 to use the new response shape (exists_index_template resolves to { body: boolean } in newer clients) or adapt to the client's returned structure, and ensure you update references to existsTemplate and putTemplate to exists_index_template and put_index_template respectively.

__tests__/opensearch.service.test.ts (1)
`92-104`: Prefer `toStrictEqual` over `toEqual` for the `data` object contract assertion.

`toEqual` ignores object keys with `undefined` properties, so if the implementation emits `{ resourceId: undefined }` (e.g., because `resourceId` was absent from the input), the assertion would still pass. `toStrictEqual` should be the default go-to method for precise contract tests like this one, ensuring no unexpected `undefined`-valued keys silently leak into the `data` payload.

♻️ Proposed refactor

```diff
-      expect(callArgs.body.data).toEqual({
+      expect(callArgs.body.data).toStrictEqual({
         eventType: 'LOGIN',
         actorId: 'user-123',
         actorRole: 'Admin',
         actorName: 'John Doe',
         resourceId: 'account-55',
         resourceType: 'UserAccount',
         sourceIp: '127.0.0.1',
         description: 'User logged in',
         tenantId: 'tenant-1',
         outcome: { extra: 'data' },
         actionPerformed: { performed: 'action' },
       });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__tests__/opensearch.service.test.ts` around lines 92 - 104, Replace the loose deep-equality assertion on the emitted event payload with a strict one: in the test where you assert expect(callArgs.body.data).toEqual({...}), change to expect(callArgs.body.data).toStrictEqual({...}) so the assertion on the data object (the payload emitted by the code under test) fails if any keys are present with undefined values; keep the same object literal contents and location in the test (__tests__/opensearch.service.test.ts) but swap toEqual for toStrictEqual to enforce exact object shape.
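The two matchers disagree specifically on `undefined`-valued keys. A runnable illustration using Node's `isDeepStrictEqual`, which shares `toStrictEqual`'s semantics on this point (the payload values here are hypothetical):

```typescript
import { isDeepStrictEqual } from 'node:util';

// Payload as the service might emit it, with an undefined-valued key:
const emitted = { eventType: 'LOGIN', resourceId: undefined };
const expected = { eventType: 'LOGIN' };

// toEqual-style comparison ignores undefined-valued keys; JSON.stringify
// drops them too, so this loose check passes:
const looseEqual = JSON.stringify(emitted) === JSON.stringify(expected);

// toStrictEqual-style comparison treats the extra undefined-valued key
// as a mismatch, catching the leak:
const strictEqual = isDeepStrictEqual(emitted, expected);

console.log(looseEqual, strictEqual); // true false
```

This is why a contract test that must pin the exact shape of the indexed document should use the strict matcher.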
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@__tests__/opensearch.service.test.ts`:
- Around line 244-262: The test is missing service initialization—before calling
logger.log(...) you must await logger.init('TestService') so serviceName is set
and the call reaches the OpenSearch client (mockIndex) path; update the failing
test to call await logger.init('TestService') prior to invoking logger.log and
keep mockIndex.mockRejectedValueOnce(...) and the .rejects.toThrow(...)
assertion to ensure the error originates from the mocked client.
- Around line 110-137: The timestamp is currently added after computing the
document hash, leaving it unprotected; modify the OpenSearchService code path
that builds the audit document (the method that calls computeHash / creates the
document before index()) to generate the timestamp first and include that
timestamp in the object passed to computeHash so the resulting body.hash covers
timestamp as well; alternatively, if timestamp exclusion is intentional, add an
explicit comment in the same method (e.g., the
log/indexDocument/createAuditDocument function) explaining the security/design
rationale so future maintainers know this is deliberate rather than a bug.
In `@README.md`:
- Around line 148-173: The README's API Reference is out of sync: update the
IAuditService and AuditLogInput definitions to match the implementation by
changing IAuditService.log's return type to Promise<AuditLogResult>, remove the
'INFO' member from the AuditLogInput.eventPhase union (leave only 'INTENT' |
'SUCCESS' | 'FAILED'), and add the missing optional fields status? and
durationMs? to AuditLogInput; ensure the README's symbol names match the code
(IAuditService, IAuditService.log, AuditLogInput, AuditLogResult, and
eventPhase) so the documentation mirrors src/utils/interfaces/audit.ts.
In `@src/index.ts`:
- Line 4: Export the hash utilities from the public API so consumers can perform
tamper detection: add exports for computeLogHash, verifyLogHash, and
verifyHashChain to the module's top-level exports (the same file that currently
exports IAuditService/AuditLog* types). Locate the functions by name
(computeLogHash, verifyLogHash, verifyHashChain) and add them to the index
export list so they become part of the package surface area alongside the
existing Audit types/exports.
In `@src/services/opensearch.service.ts`:
- Around line 46-62: The OpenSearch mapping for the `data` object is missing
`status` and `durationMs`, causing unpredictable dynamic typing; update the
mapping in `opensearch.service.ts` where `data: { properties: { ... } }` is
defined to include `status` (e.g., type 'keyword') and `durationMs` (e.g., type
'long' or another appropriate numeric type) to match `AuditLogData` and ensure
consistent index mappings across deployments.
- Around line 146-151: The current call to this.client.index in
src/services/opensearch.service.ts includes refresh: 'wait_for', which will
bottleneck writes; update the indexing logic (the method that calls
this.client.index, referencing this.client.index, indexName and document) to
stop forcing wait_for by default: either remove the refresh option entirely or
add a configurable flag (e.g., a class or method-level option like
refreshOnWrite / refreshPolicy read from service constructor/config) so callers
can opt into immediate visibility only when needed; ensure the default preserves
cluster throughput (no wait_for) and propagate the new config into the
this.client.index call.
In `@src/utils/hash-utility.ts`:
- Around line 14-31: computeLogHash produces non-deterministic hashes for nested
objects and is vulnerable to delimiter-injection because it builds a custom
delimited string; replace the custom join logic by producing a canonical JSON
representation of the entire AuditLogData (recursively sort object keys,
including nested objects like outcome and actionPerformed) and hash that
canonical JSON; do this either by using a deterministic serializer (e.g.,
json-stringify-deterministic) or implement a deep key-sort + stable stringify
utility and then feed the resulting canonical string into
crypto.createHash('sha256') in computeLogHash so values like
description/tenantId cannot collide via injected delimiters.
In `@src/utils/interfaces/audit.ts`:
- Around line 1-4: The AuditLogResult type is missing from the public exports so
consumers can't import the return type of log(); update the module exports in
src/index.ts to include AuditLogResult alongside the other audit types (the same
place where AuditLog and related audit types are exported) so the
Promise<AuditLogResult> return type from log() can be properly imported and
annotated by callers.
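The canonical-JSON hashing suggested for `hash-utility.ts` above can be sketched as follows. This is a minimal illustration under the assumption of a deep key-sort plus stable stringify; it is not the package's actual `computeLogHash`, and the `canonicalize` helper is hypothetical:

```typescript
import { createHash } from 'node:crypto';

// Recursively sort object keys so two semantically identical payloads
// always serialize to the same canonical JSON string.
function canonicalize(value: unknown): string {
  if (Array.isArray(value)) {
    return `[${value.map(canonicalize).join(',')}]`;
  }
  if (value !== null && typeof value === 'object') {
    const obj = value as Record<string, unknown>;
    const entries = Object.keys(obj)
      .sort()
      .map((key) => `${JSON.stringify(key)}:${canonicalize(obj[key])}`);
    return `{${entries.join(',')}}`;
  }
  // JSON.stringify(undefined) yields undefined; normalize to null.
  return JSON.stringify(value) ?? 'null';
}

function computeLogHash(data: Record<string, unknown>): string {
  return createHash('sha256').update(canonicalize(data)).digest('hex');
}

// Key order no longer affects the digest, and values cannot collide via
// injected delimiters because every value is JSON-escaped.
const h1 = computeLogHash({ actorId: 'u1', outcome: { b: 2, a: 1 } });
const h2 = computeLogHash({ outcome: { a: 1, b: 2 }, actorId: 'u1' });
console.log(h1 === h2); // true
```

A deterministic serializer library (e.g., json-stringify-deterministic) achieves the same result without a hand-rolled helper.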
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- `package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (9)
- README.md
- __tests__/hash-utility.test.ts
- __tests__/opensearch.service.test.ts
- package.json
- src/index.ts
- src/services/audit.service.ts
- src/services/opensearch.service.ts
- src/utils/hash-utility.ts
- src/utils/interfaces/audit.ts
Actionable comments posted: 3
♻️ Duplicate comments (4)
src/index.ts (1)
`4-4`: `AuditLogResult` is still not exported from the public API.

Consumers need to import `AuditLogResult` to annotate the return value of `IAuditService.log()`. This was flagged in a prior review and remains unresolved.

```diff
-export type { IAuditService, AuditLogInput, AuditLogDocument } from './utils/interfaces/audit';
+export type { IAuditService, AuditLogInput, AuditLogDocument, AuditLogResult } from './utils/interfaces/audit';
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/index.ts` at line 4, Export the missing AuditLogResult type from the public API: add AuditLogResult to the existing export list alongside IAuditService, AuditLogInput, and AuditLogDocument so consumers can type the return value of IAuditService.log(); update the export line that currently exports IAuditService, AuditLogInput, AuditLogDocument to also include AuditLogResult.

__tests__/opensearch.service.test.ts (1)
`110-137`: Timestamp is still outside hash protection — design decision not documented.

The deterministic-hash test confirms `timestamp` is excluded: two calls with identical `data` but different wall-clock times produce the same hash. This was raised in a prior review. The service adds a comment `// Compute hash on data object only` but does not explain why, leaving it ambiguous for future maintainers whether this is intentional or an oversight. If timestamp exclusion is by design (e.g., to enable de-duplication checks), please add an explicit rationale comment at the hash-computation site in `opensearch.service.ts`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__tests__/opensearch.service.test.ts` around lines 110 - 137, Update the inline comment at the hash-computation site in opensearch.service.ts (the line currently reading "// Compute hash on data object only") to explicitly state the design rationale: that timestamp is intentionally excluded to allow deterministic de-duplication/idempotency across repeated logs (describe the intended behavior and any caveats, e.g., collisions when other mutable fields change), and reference the function/method where this occurs (e.g., the compute/hash routine used by log or computeDeterministicHash in the logging flow) so future maintainers understand this is intentional.

src/utils/interfaces/audit.ts (1)
`1-4`: `AuditLogResult` is still missing from the public exports in `src/index.ts`.

`IAuditService.log` returns `Promise<AuditLogResult>`, so callers need to import this type to annotate results. It is not currently re-exported from `src/index.ts`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/interfaces/audit.ts` around lines 1 - 4, The public export barrel is missing the AuditLogResult type required by callers (IAuditService.log returns Promise<AuditLogResult>); fix by adding a re-export for AuditLogResult from the root export file (the package's public export barrel) so consumers can import it, and prefer a type-only export (e.g., export type { AuditLogResult }) if using TypeScript to avoid runtime emits; ensure the new export is grouped with other interface/type exports.

src/services/opensearch.service.ts (1)
`33-34`: ⚠️ Potential issue | 🟡 Minor

Hardcoded template name silently breaks if `indexPrefix` is ever changed.

`templateName` is always `'audit-logs-template'` regardless of `this.indexPrefix`. If the prefix is changed in config (e.g., `'my-audit'`), `existsTemplate('audit-logs-template')` returns `200` (the old template still exists), so `putTemplate` is skipped and the new `index_patterns: ['my-audit-*']` is never registered. Newly created indices won't match any template and will use dynamic mappings.

🐛 Proposed fix — tie the template name to the prefix

```diff
-const templateName = 'audit-logs-template';
+const templateName = `${this.indexPrefix}-template`;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/opensearch.service.ts` around lines 33 - 34, The template name is hardcoded as 'audit-logs-template' causing existsTemplate/putTemplate to miss changes to this.indexPrefix; change templateName to derive from the configured prefix (e.g., const templateName = `${this.indexPrefix}-template` or similar) and use that computed templateName in the existsTemplate and putTemplate calls so the index pattern in templateBody (index_patterns: [`${this.indexPrefix}-*`]) and template lifecycle are always tied to this.indexPrefix (references: templateName, templateBody, this.indexPrefix, existsTemplate, putTemplate).
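Combining this fix with the composable-template migration suggested earlier, the bootstrap might look like the sketch below. The client shape is assumed from the comments above, the helper name is hypothetical, and the fake client exists only to show the call flow:

```typescript
// Assumed (not verified) shape of the OpenSearch client's indices API,
// narrowed to the two composable-template calls discussed above.
interface IndicesApi {
  exists_index_template(params: { name: string }): Promise<{ body: boolean }>;
  put_index_template(params: { name: string; body: unknown }): Promise<unknown>;
}

async function ensureIndexTemplate(indices: IndicesApi, indexPrefix: string): Promise<string> {
  // Derive the template name from the prefix instead of hardcoding it,
  // so changing indexPrefix always registers a matching template.
  const templateName = `${indexPrefix}-template`;
  const exists = await indices.exists_index_template({ name: templateName });
  if (!exists.body) {
    await indices.put_index_template({
      name: templateName,
      body: { index_patterns: [`${indexPrefix}-*`] },
    });
  }
  return templateName;
}

// Minimal fake client that records the call flow:
const calls: string[] = [];
const fake: IndicesApi = {
  async exists_index_template({ name }) {
    calls.push(`exists:${name}`);
    return { body: false };
  },
  async put_index_template({ name }) {
    calls.push(`put:${name}`);
    return {};
  },
};

const pending = ensureIndexTemplate(fake, 'my-audit');
pending.then((name) => console.log(name, calls));
```

Because the name is computed from the prefix, a prefix change makes the existence check miss and the new pattern gets registered.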
🧹 Nitpick comments (2)
src/services/opensearch.service.ts (1)
`117-131`: `data` object construction is verbose and will silently miss new `AuditLogData` fields.

`dataFields` already has the exact shape of `AuditLogData` (it's `Omit<AuditLogInput, 'correlationId' | 'eventPhase'>`). The explicit property-by-property copy means any future field added to `AuditLogInput`/`AuditLogData` must also be manually added here.

♻️ Proposed simplification

```diff
-const data = {
-  eventType: dataFields.eventType,
-  actorId: dataFields.actorId,
-  actorRole: dataFields.actorRole,
-  actorName: dataFields.actorName,
-  resourceType: dataFields.resourceType,
-  resourceId: dataFields.resourceId,
-  sourceIp: dataFields.sourceIp,
-  description: dataFields.description,
-  tenantId: dataFields.tenantId,
-  outcome: dataFields.outcome,
-  actionPerformed: dataFields.actionPerformed,
-};
+const data: AuditLogData = dataFields;
```

This also requires importing `AuditLogData` at the top of the file:

```diff
-import type { IAuditService, AuditLogInput, AuditLogResult } from '../utils/interfaces/audit';
+import type { IAuditService, AuditLogInput, AuditLogData, AuditLogResult } from '../utils/interfaces/audit';
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/opensearch.service.ts` around lines 117 - 131, The explicit construction of the data object copies each AuditLogData field manually and will miss future fields; instead, replace the manual property list with a typed spread from dataFields (e.g., const data: AuditLogData = { ...dataFields }) and add the AuditLogData import at the top of the file; keep correlationId and eventPhase destructuring as-is and ensure the new data variable type matches AuditLogData to catch mismatches at compile time.

src/utils/interfaces/audit.ts (1)
`11-38`: `AuditLogInput` and `AuditLogData` share 10 identical fields — consider interface extension.

`AuditLogInput` is structurally `AuditLogData` plus `correlationId` and `eventPhase`. The current layout duplicates every field across both interfaces.

♻️ Proposed refactor — eliminate duplication via extension

```diff
-export interface AuditLogInput {
-  correlationId: string; // Client-generated UUID for request tracking
-  eventPhase: 'INTENT' | 'SUCCESS' | 'FAILED';
-  eventType: string;
-  actorId: string;
-  actorRole: string;
-  actorName: string;
-  resourceType: string;
-  resourceId?: string;
-  sourceIp: string;
-  description: string;
-  tenantId: string;
-  outcome?: Record<string, unknown>;
-  actionPerformed?: Record<string, unknown>;
-}
 export interface AuditLogData {
   eventType: string;
   actorId: string;
   actorRole: string;
   actorName: string;
   resourceType: string;
   resourceId?: string;
   sourceIp: string;
   description: string;
   tenantId: string;
   outcome?: Record<string, unknown>;
   actionPerformed?: Record<string, unknown>;
 }
+
+export interface AuditLogInput extends AuditLogData {
+  correlationId: string; // Client-generated UUID for request tracking
+  eventPhase: 'INTENT' | 'SUCCESS' | 'FAILED';
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/interfaces/audit.ts` around lines 11 - 38, AuditLogInput duplicates many fields already present in AuditLogData; refactor by extracting the shared fields into a single base interface (e.g., AuditLogBase) or by making AuditLogInput extend AuditLogData so the common properties are declared once; update AuditLogData (or the new AuditLogBase) to contain the shared properties (eventType, actorId, actorRole, actorName, resourceType, resourceId?, sourceIp, description, tenantId, outcome?, actionPerformed?) and adjust AuditLogInput to only add correlationId and eventPhase (or have AuditLogData extend the base and AuditLogInput extend AuditLogData), removing the duplicated property declarations.
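The extension pattern suggested above, reduced to a compilable sketch (fields trimmed for brevity):

```typescript
interface AuditLogData {
  eventType: string;
  actorId: string;
  tenantId: string;
  resourceId?: string;
  // ...remaining shared fields elided
}

// Only the request-tracking fields are declared here; everything else is
// inherited, so the two types cannot drift apart.
interface AuditLogInput extends AuditLogData {
  correlationId: string;
  eventPhase: 'INTENT' | 'SUCCESS' | 'FAILED';
}

const input: AuditLogInput = {
  correlationId: 'c-1',
  eventPhase: 'INTENT',
  eventType: 'LOGIN',
  actorId: 'user-123',
  tenantId: 'tenant-1',
};

// Widening compiles because every AuditLogInput is an AuditLogData:
const data: AuditLogData = input;
console.log(data.eventType); // LOGIN
```

Adding a field to `AuditLogData` then automatically flows into `AuditLogInput` with no second edit.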
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@README.md`:
- Around line 81-118: The example passes durationMs to auditLogger.log but
AuditLogInput lacks that field; either add durationMs?: number to the
AuditLogInput type (src/utils/interfaces/audit.ts) and propagate/mapping it
through the audit service and OpenSearch template, ensuring the audit logging
method (auditLogger.log) and any index templates accept and store durationMs, or
remove the durationMs entries from the example usage in README.md so it matches
the current AuditLogInput shape; pick one approach and update the corresponding
service mapping and OpenSearch template if you choose to add the field.
- Line 36: The README documents OPENSEARCH_REFRESH but the code in
openSearch.config.ts hardcodes refresh: 'wait_for'; update openSearch.config.ts
to read process.env.OPENSEARCH_REFRESH (or use your existing env helper) and map
its allowed values ('false' -> false, 'true' -> true, 'wait_for' -> 'wait_for')
into the export used by the OpenSearch client (the refresh option used in the
index/update/create calls), validate unknown values (fallback to the current
default 'wait_for' or the README default) and ensure the config symbol refresh
is set from that mapped value so operators can control behaviour via
OPENSEARCH_REFRESH.
In `@src/config/openSearch.config.ts`:
- Line 32: openSearchConfig currently hardcodes refresh: 'wait_for' instead of
honoring the OPENSEARCH_REFRESH env var or the passed config.refresh; update
openSearchConfig to read process.env.OPENSEARCH_REFRESH (or a parsed
boolean/string) and use config.refresh ?? process.env.OPENSEARCH_REFRESH ??
'false' when setting the refresh property so client.index() uses the
environment/config value instead of always 'wait_for'; reference the
openSearchConfig function and the refresh property in your change.
📒 Files selected for processing (8)
- README.md
- __tests__/opensearch.service.test.ts
- src/config/openSearch.config.ts
- src/index.ts
- src/services/opensearch.service.ts
- src/utils/hash-utility.ts
- src/utils/interfaces/audit.ts
- src/utils/interfaces/opensearch.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- src/utils/hash-utility.ts
Actionable comments posted: 1
♻️ Duplicate comments (3)
README.md (2)
26-37: Verify `OPENSEARCH_REFRESH` is actually consumed by config.

The README advertises `OPENSEARCH_REFRESH`, but I can’t confirm from this file whether `openSearch.config.ts` reads it. Please verify to prevent misleading operators.

```bash
#!/bin/bash
# Verify OPENSEARCH_REFRESH is read and mapped in config
rg -n -C3 'OPENSEARCH_REFRESH'
rg -n -C3 'refresh' -g '*openSearch*.ts'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 26 - 37, README advertises OPENSEARCH_REFRESH but the codebase may not consume it; check the OpenSearch config module (openSearch.config.ts) for reading process.env.OPENSEARCH_REFRESH and mapping it to whatever config property is used (e.g., refresh, defaultRefresh, or clientOptions); if it's missing, add logic in the exported config function/object (e.g., getOpenSearchConfig or export const openSearchConfig) to parse process.env.OPENSEARCH_REFRESH (handle 'false'|'true'|'wait_for' with a sensible default) and ensure that property is passed into the OpenSearch client initialization or any index request options so the README value is actually honored.
151-171: Cross-check `AuditLogInput` fields against source interfaces.

Ensure the documented `AuditLogInput` matches `src/utils/interfaces/audit.ts` (past comments flagged missing fields). If the code includes additional optional fields (e.g., `status`, `durationMs`), update the README or adjust the interface.

```bash
#!/bin/bash
# Compare README API reference with the actual interface definition
rg -n -C3 'interface AuditLogInput' README.md
rg -n -C3 'interface AuditLogInput' -g 'src/utils/interfaces/audit.ts'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 151 - 171, The README's AuditLogInput type must be synchronized with the actual AuditLogInput interface in the source; open the AuditLogInput definition in the codebase (the AuditLogInput interface in src) and reconcile any missing or extra optional fields (e.g., status, durationMs or others) by either adding them to the README snippet or updating the source interface if the docs are authoritative; ensure field names, types and optionality (e.g., outcome?:, resourceId?:) match exactly and update the README block to reflect the canonical AuditLogInput shape.__tests__/opensearch.service.test.ts (1)
116-143: Timestamp exclusion from hash scope confirmed — prior unresolved concern.

This test explicitly proves that two `log()` calls with identical `data` produce the same `hash` regardless of when they run (i.e., `timestamp` is excluded from hash computation). The prior review comment flagged this as a design gap for an audit library: a tampered `timestamp` (e.g., to suppress sequence or attribution of an event) would not be detected by the current hash.

If this exclusion is intentional (e.g., because timestamps are environment-controlled and trusted), add a comment in both the test and in `computeLogHash`/`opensearch.service.ts` explicitly documenting the design decision.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__tests__/opensearch.service.test.ts` around lines 116 - 143, The test currently demonstrates that computeLogHash excludes timestamp from the hash, which can be surprising for an audit library; if this behavior is intentional, add an explicit clarifying comment in the test (in __tests__/opensearch.service.test.ts near the deterministic-hash case) and add/update the documentation comment above computeLogHash in opensearch.service.ts explaining that timestamp is intentionally omitted from the hash (and why — e.g., trusted/variable environment timestamps or to ensure idempotent hashing across retries), including the function name computeLogHash so future reviewers can locate the logic.
🧹 Nitpick comments (3)
__tests__/hash-utility.test.ts (2)
25-41: Missing canonicalization key-order test.

The `computeLogHash` function uses recursive key-sorting (canonicalization) as its core determinism guarantee. There's no test verifying that two `AuditLogData` objects with identical key-value pairs but different key insertion orders produce the same hash. This is the most important property to exercise.

✅ Suggested test

```typescript
it('should produce the same hash regardless of key insertion order', () => {
  const data1: AuditLogData = {
    eventType: 'LOGIN',
    actorId: 'user-123',
    actorRole: 'Admin',
    actorName: 'John Doe',
    resourceType: 'UserAccount',
    sourceIp: '127.0.0.1',
    description: 'User login',
    tenantId: 'tenant-1',
  };
  // Same fields, different insertion order
  const data2: AuditLogData = {
    tenantId: 'tenant-1',
    description: 'User login',
    sourceIp: '127.0.0.1',
    resourceType: 'UserAccount',
    actorName: 'John Doe',
    actorRole: 'Admin',
    actorId: 'user-123',
    eventType: 'LOGIN',
  };

  expect(computeLogHash(data1)).toBe(computeLogHash(data2));
});
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__tests__/hash-utility.test.ts` around lines 25 - 41, Add a unit test that verifies computeLogHash is order-independent by creating two AuditLogData objects with identical key/value pairs but different insertion orders (e.g., one literal with eventType first and another with tenantId first), call computeLogHash on both, and assert the hashes are equal; place this alongside the existing deterministic-hash test and name it something like "should produce the same hash regardless of key insertion order" to exercise the recursive key-sorting canonicalization logic of computeLogHash.
22-22: Redundant length assertion.

`.toMatch(/^[a-f0-9]{64}$/)` already guarantees exactly 64 characters; the subsequent `.toBe(64)` check is redundant. Same pattern at lines 130–131.

🧹 Proposed cleanup

```diff
-      expect(hash).toMatch(/^[a-f0-9]{64}$/);
-      expect(hash.length).toBe(64);
+      expect(hash).toMatch(/^[a-f0-9]{64}$/);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__tests__/hash-utility.test.ts` at line 22, Remove the redundant length assertions that follow the regex checks: since expect(hash).toMatch(/^[a-f0-9]{64}$/) already enforces a 64-character hex string, delete the subsequent expect(hash.length).toBe(64) calls (and the identical pair at the later assertion). Keep the regex assertions and remove only the .toBe(64) length checks to clean up the tests.src/config/openSearch.config.ts (1)
18-21: `?.` and `?? 'false'` are dead code.

`validateEnvVar` is typed `string | number | boolean` and never returns `null` or `undefined`. When the optional env var is absent it returns `''`; `''.toString()` is `''`, which is not nullish, so `?? 'false'` never triggers. The actual default of `'false'` comes from `includes('')` returning `false` on lines 19–21, which is the correct path. The operators are misleading here.

🧹 Proposed cleanup

```diff
-  const rawRefresh = validateEnvVar('OPENSEARCH_REFRESH', 'string', true)?.toString() ?? 'false';
+  const rawRefresh = validateEnvVar('OPENSEARCH_REFRESH', 'string', true).toString();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/config/openSearch.config.ts` around lines 18 - 21, The current rawRefresh assignment uses optional chaining and nullish coalescing which are dead since validateEnvVar never returns null/undefined; replace the expression to explicitly coerce the returned value to a string (use String(...) or .toString() without `?.`/`??`) so rawRefresh truly represents the env value (possibly empty string) and then keep the existing validation that computes refresh from rawRefresh; update references to OPENSEARCH_REFRESH, validateEnvVar, rawRefresh, and refresh accordingly.
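A minimal sketch of how the config could consume `OPENSEARCH_REFRESH` as the prompt suggests; the helper name `parseRefresh` is an assumption, not the library's actual API, and the accepted values mirror OpenSearch's documented refresh options:

```typescript
// Hypothetical helper for openSearch.config.ts: normalize OPENSEARCH_REFRESH
// to one of the values the OpenSearch client accepts for `refresh`.
type RefreshOption = 'true' | 'false' | 'wait_for';

function parseRefresh(raw: string | undefined): RefreshOption {
  const value = (raw ?? '').trim().toLowerCase();
  const allowed: readonly string[] = ['true', 'false', 'wait_for'];
  // Fall back to 'false' (no forced refresh) for absent or invalid input.
  return allowed.includes(value) ? (value as RefreshOption) : 'false';
}

// Example: read from the environment at config-build time.
const refresh = parseRefresh(process.env.OPENSEARCH_REFRESH);
```

The parsed value would then be passed into the index request options so the README value is actually honored.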
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@__tests__/hash-utility.test.ts`:
- Around line 219-296: verifyHashChain currently only calls verifyLogHash on
each AuditLogDocument independently (no linking), so it cannot detect
reordering/insertion/deletion; either 1) rename verifyHashChain to
verifyAllHashes (or verifyLogIntegrity), update its JSDoc/comment and tests to
document the limitation and add a new test that asserts reordering is NOT
detected (use verifyAllHashes/verifyHashChain with [logDoc2, logDoc1] expecting
true), or 2) implement a cryptographic chain: add previousHash?: string to
AuditLogDocument, include previousHash when computing the per-log value in
computeLogHash, and change verifyHashChain to validate that for each consecutive
pair the later log's previousHash equals the prior log's hash and that each
log's own hash matches computeLogHash; update tests to assert reordering fails
and add a test for missing/inserted entries; pick one option and apply
consistent renaming/JSdoc, update computeLogHash, verifyLogHash/verifyHashChain,
and test cases accordingly.
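For option 2 above, a minimal sketch of a `previousHash`-linked chain; the field names mirror the suggestion, but the canonicalization here (top-level key sort only, no nested objects) is a simplified assumption rather than the library's actual `computeLogHash`:

```typescript
import { createHash } from 'crypto';

interface ChainedLogDocument {
  hash: string;
  previousHash?: string;
  data: Record<string, unknown>;
}

// Hash the data together with the previous link so reordering or insertion
// breaks the chain. Top-level keys are sorted for determinism.
function computeChainedHash(data: Record<string, unknown>, previousHash?: string): string {
  const sorted = Object.fromEntries(Object.entries(data).sort(([a], [b]) => a.localeCompare(b)));
  return createHash('sha256')
    .update(JSON.stringify(sorted) + (previousHash ?? ''))
    .digest('hex');
}

// Each log must reference the prior log's hash and its own hash must recompute.
function verifyHashChain(logs: ChainedLogDocument[]): boolean {
  for (let i = 0; i < logs.length; i++) {
    const expectedPrev = i === 0 ? undefined : logs[i - 1].hash;
    if (logs[i].previousHash !== expectedPrev) return false;
    if (logs[i].hash !== computeChainedHash(logs[i].data, logs[i].previousHash)) return false;
  }
  return true;
}
```

With this shape, swapping two documents fails verification because the second document's `previousHash` no longer matches its predecessor.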
---
Duplicate comments:
In `@__tests__/opensearch.service.test.ts`:
- Around line 116-143: The test currently demonstrates that computeLogHash
excludes timestamp from the hash, which can be surprising for an audit library;
if this behavior is intentional, add an explicit clarifying comment in the test
(in __tests__/opensearch.service.test.ts near the deterministic-hash case) and
add/update the documentation comment above computeLogHash in
opensearch.service.ts explaining that timestamp is intentionally omitted from
the hash (and why — e.g., trusted/variable environment timestamps or to ensure
idempotent hashing across retries), including the function name computeLogHash
so future reviewers can locate the logic.
In `@README.md`:
- Around line 26-37: README advertises OPENSEARCH_REFRESH but the codebase may
not consume it; check the OpenSearch config module (openSearch.config.ts) for
reading process.env.OPENSEARCH_REFRESH and mapping it to whatever config
property is used (e.g., refresh, defaultRefresh, or clientOptions); if it's
missing, add logic in the exported config function/object (e.g.,
getOpenSearchConfig or export const openSearchConfig) to parse
process.env.OPENSEARCH_REFRESH (handle 'false'|'true'|'wait_for' with a sensible
default) and ensure that property is passed into the OpenSearch client
initialization or any index request options so the README value is actually
honored.
- Around line 151-171: The README's AuditLogInput type must be synchronized with
the actual AuditLogInput interface in the source; open the AuditLogInput
definition in the codebase (the AuditLogInput interface in src) and reconcile
any missing or extra optional fields (e.g., status, durationMs or others) by
either adding them to the README snippet or updating the source interface if the
docs are authoritative; ensure field names, types and optionality (e.g.,
outcome?:, resourceId?:) match exactly and update the README block to reflect
the canonical AuditLogInput shape.
---
Nitpick comments:
In `@__tests__/hash-utility.test.ts`:
- Around line 25-41: Add a unit test that verifies computeLogHash is
order-independent by creating two AuditLogData objects with identical key/value
pairs but different insertion orders (e.g., one literal with eventType first and
another with tenantId first), call computeLogHash on both, and assert the hashes
are equal; place this alongside the existing deterministic-hash test and name it
something like "should produce the same hash regardless of key insertion order"
to exercise the recursive key-sorting canonicalization logic of computeLogHash.
- Line 22: Remove the redundant length assertions that follow the regex checks:
since expect(hash).toMatch(/^[a-f0-9]{64}$/) already enforces a 64-character hex
string, delete the subsequent expect(hash.length).toBe(64) calls (and the
identical pair at the later assertion). Keep the regex assertions and remove
only the .toBe(64) length checks to clean up the tests.
In `@src/config/openSearch.config.ts`:
- Around line 18-21: The current rawRefresh assignment uses optional chaining
and nullish coalescing which are dead since validateEnvVar never returns
null/undefined; replace the expression to explicitly coerce the returned value
to a string (use String(...) or .toString() without `?.`/`??`) so rawRefresh
truly represents the env value (possibly empty string) and then keep the
existing validation that computes refresh from rawRefresh; update references to
OPENSEARCH_REFRESH, validateEnvVar, rawRefresh, and refresh accordingly.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- .gitignore
- README.md
- __tests__/hash-utility.test.ts
- __tests__/opensearch.service.test.ts
- src/config/openSearch.config.ts
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/utils/hash-utility.ts`:
- Around line 14-56: The canonicalize function currently returns the string
'undefined' causing hash mismatches after JSON serialization; update
canonicalize (used by computeLogHash for AuditLogData) to mirror JSON semantics:
treat undefined as null (i.e., return 'null'), when processing objects skip keys
whose value is undefined (do not include them in pairs), and when processing
arrays convert any undefined items to canonicalize(null) so undefined array
elements become null; keep boolean/number/string handling and sorted object keys
otherwise unchanged so the pre-storage canonical string matches the post-storage
JSON form.
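A sketch of the JSON-aligned canonicalization the prompt describes; the names `canonicalize` and `computeLogHash` mirror the library's, but these bodies are illustrative assumptions:

```typescript
import { createHash } from 'crypto';

// Canonical string form matching JSON.stringify semantics:
// undefined -> null, undefined-valued object keys dropped, object keys sorted.
function canonicalize(value: unknown): string {
  if (value === undefined || value === null) return 'null';
  if (typeof value === 'boolean' || typeof value === 'number' || typeof value === 'string') {
    return JSON.stringify(value);
  }
  if (Array.isArray(value)) {
    // undefined array items become null, as JSON.stringify does
    return `[${value.map((v) => canonicalize(v === undefined ? null : v)).join(',')}]`;
  }
  const obj = value as Record<string, unknown>;
  const pairs = Object.keys(obj)
    .sort()
    .filter((k) => obj[k] !== undefined) // skip undefined-valued keys
    .map((k) => `${JSON.stringify(k)}:${canonicalize(obj[k])}`);
  return `{${pairs.join(',')}}`;
}

function computeLogHash(data: Record<string, unknown>): string {
  return createHash('sha256').update(canonicalize(data)).digest('hex');
}
```

Because undefined is handled the way JSON serialization handles it, the hash computed before storage matches one recomputed from the stored JSON form.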
🧹 Nitpick comments (2)
src/utils/hash-utility.ts (2)
68-71: Consider constant-time comparison in `verifyLogHash`.

`===` on hex strings is not constant-time. While the practical exploitation risk here is low (this is integrity verification, not HMAC/password auth, and SHA-256 is 256-bit), `crypto.timingSafeEqual` is the idiomatic Node.js approach for any security-sensitive hash comparison.

🔒 Proposed fix — constant-time comparison

```diff
 export function verifyLogHash(logDocument: AuditLogDocument): boolean {
   const recomputedHash = computeLogHash(logDocument.data);
-  return recomputedHash === logDocument.hash;
+  const a = Buffer.from(recomputedHash, 'hex');
+  const b = Buffer.from(logDocument.hash, 'hex');
+  return a.length === b.length && crypto.timingSafeEqual(a, b);
 }
```

The `a.length === b.length` guard handles the case where `logDocument.hash` is not a valid 64-character hex string (which would produce a buffer of different length, causing `timingSafeEqual` to throw).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/hash-utility.ts` around lines 68 - 71, The verifyLogHash function currently compares hex strings with ===; change it to perform a constant-time comparison by converting recomputedHash and logDocument.hash into Buffers and using crypto.timingSafeEqual, but first check that both buffers have the same length to avoid timingSafeEqual throwing; update references in verifyLogHash (and continue to use computeLogHash to generate recomputedHash) so the function returns the boolean result of timingSafeEqual when lengths match and false otherwise.
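An executable sketch of that constant-time comparison, standalone for illustration; the real `verifyLogHash` takes an `AuditLogDocument`, so the simplified signatures here are assumptions:

```typescript
import { createHash, timingSafeEqual } from 'crypto';

function sha256Hex(payload: string): string {
  return createHash('sha256').update(payload).digest('hex');
}

// Constant-time comparison of a stored hex hash against a recomputed one.
function hashesMatch(storedHash: string, payload: string): boolean {
  const a = Buffer.from(sha256Hex(payload), 'hex');
  const b = Buffer.from(storedHash, 'hex');
  // Length guard: timingSafeEqual throws on unequal-length buffers, e.g. when
  // storedHash is not a valid 64-character hex string.
  return a.length === b.length && timingSafeEqual(a, b);
}
```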
89-102: `verifyAllHashes` can be simplified with `Array.every`.

The `EMPTY_ARRAY_LENGTH` constant and the early-return guard are only needed because of the explicit `for` loop. `Array.every` returns `true` vacuously for an empty array, eliminating both.

♻️ Proposed simplification

```diff
-const EMPTY_ARRAY_LENGTH = 0;
-
 export function verifyAllHashes(logs: AuditLogDocument[]): boolean {
-  if (logs.length === EMPTY_ARRAY_LENGTH) {
-    return true;
-  }
-
-  // Verify each log's hash independently
-  for (const log of logs) {
-    if (!verifyLogHash(log)) {
-      return false;
-    }
-  }
-
-  return true;
+  return logs.every((log) => verifyLogHash(log));
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/hash-utility.ts` around lines 89 - 102, Replace the manual for-loop and explicit empty-array guard in verifyAllHashes with Array.every: remove the EMPTY_ARRAY_LENGTH check and the early return, and return logs.every(log => verifyLogHash(log)) instead (referencing the verifyAllHashes function and verifyLogHash helper) so empty arrays are handled vacuously and each log is verified succinctly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@src/utils/hash-utility.ts`:
- Around line 68-71: The verifyLogHash function currently compares hex strings
with ===; change it to perform a constant-time comparison by converting
recomputedHash and logDocument.hash into Buffers and using
crypto.timingSafeEqual, but first check that both buffers have the same length
to avoid timingSafeEqual throwing; update references in verifyLogHash (and
continue to use computeLogHash to generate recomputedHash) so the function
returns the boolean result of timingSafeEqual when lengths match and false
otherwise.
- Around line 89-102: Replace the manual for-loop and explicit empty-array guard
in verifyAllHashes with Array.every: remove the EMPTY_ARRAY_LENGTH check and the
early return, and return logs.every(log => verifyLogHash(log)) instead
(referencing the verifyAllHashes function and verifyLogHash helper) so empty
arrays are handled vacuously and each log is verified succinctly.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (2)
- package.json
- src/utils/hash-utility.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- package.json
```diff
 ```typescript
 import { Inject, Injectable } from '@nestjs/common';
-import { IAuditService, AuditLogData } from '@tazama-lf/audit-lib';
+import { IAuditService, AuditLogInput } from '@tazama-lf/audit-lib';
```
Importing interface from audit-lib, naming convention for AuditLogInput as similar to IAuditService.
```diff
@@ -1,15 +1,17 @@
 // SPDX-License-Identifier: Apache-2.0
-import type { IAuditService, AuditLogData } from '../utils/interfaces/audit';
+import type { IAuditService, AuditLogInput, AuditLogResult } from '../utils/interfaces/audit';
```
Please use the correct naming conventions for the interfaces `AuditLogInput` and `AuditLogResult`.
```diff
-export interface AuditLogData {
+export interface AuditLogInput {
+  correlationId: string; // Client-generated UUID for request tracking
+  eventPhase: 'INTENT' | 'SUCCESS' | 'FAILED';
```
You can use an enum here instead of the string union: `eventPhase: 'INTENT' | 'SUCCESS' | 'FAILED';`
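A sketch of that enum; the name `EventPhase` is an assumption:

```typescript
// String enum replacing the inline union; the values stay wire-compatible.
enum EventPhase {
  Intent = 'INTENT',
  Success = 'SUCCESS',
  Failed = 'FAILED',
}

interface AuditLogInput {
  correlationId: string; // Client-generated UUID for request tracking
  eventPhase: EventPhase;
}

const input: AuditLogInput = { correlationId: 'abc-123', eventPhase: EventPhase.Intent };
```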
```diff
     return OpenSearchService.instance;
   }

   private async ensureSchema(): Promise<void> {
```
From a clean-code perspective, I would suggest:

- Writing the schema in a separate file.
- Inside `ensureSchema()`, importing the schema model into the `templateBody` object.
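A sketch of that separation; the file layout and mapping fields here are assumptions, not the library's actual schema:

```typescript
// --- schema/audit-log.schema.ts (hypothetical module) ---
const auditLogMappings = {
  properties: {
    timestamp: { type: 'date' },
    serviceName: { type: 'keyword' },
    hash: { type: 'keyword' },
    eventPhase: { type: 'keyword' },
    correlationId: { type: 'keyword' },
  },
} as const;

// --- inside OpenSearchService.ensureSchema(), import and reuse it ---
const templateBody = {
  index_patterns: ['audit-logs-*'],
  template: { mappings: auditLogMappings },
};
```

This keeps `ensureSchema()` focused on applying the template, while the mapping definition can be reviewed and versioned on its own.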
```diff
   }

-  public async log(data: AuditLogData): Promise<void> {
+  public async log(input: AuditLogInput): Promise<AuditLogResult> {
```
Here we can write a function with the single responsibility of the date conversion:

```diff
 const date = new Date();
 // Monthly Index: audit-logs-YYYY.MM
 const MONTH_OFFSET = 1;
 const PAD_WIDTH = 2;
 const PAD_CHAR = '0';
 const month = date.getUTCMonth() + MONTH_OFFSET;
 const monthStr = String(month).padStart(PAD_WIDTH, PAD_CHAR);
-const config = openSearchConfig();
-const indexName = `${config.indexPrefix}-${date.getUTCFullYear()}.${monthStr}`;
+const indexName = `${this.indexPrefix}-${date.getUTCFullYear()}.${monthStr}`;
```
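That date-to-index-name logic could be extracted into a single-responsibility helper; the name `monthlyIndexName` is an assumption:

```typescript
// Build the monthly index name audit-logs-YYYY.MM from a prefix and a date.
function monthlyIndexName(indexPrefix: string, date: Date = new Date()): string {
  const MONTH_OFFSET = 1; // getUTCMonth() is zero-based
  const PAD_WIDTH = 2;
  const PAD_CHAR = '0';
  const monthStr = String(date.getUTCMonth() + MONTH_OFFSET).padStart(PAD_WIDTH, PAD_CHAR);
  return `${indexPrefix}-${date.getUTCFullYear()}.${monthStr}`;
}
```

The `log()` method would then call `monthlyIndexName(this.indexPrefix)` instead of inlining the conversion.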
```diff
-    const doc = {
-      ...data,
+    const data = {
```
Do you think it would be better to use the spread operator to populate all the data? It would make your code more concise: `const data = {}`
```diff
       data,
     };

     try {
```
I think it's better to write all the code inside the try block if you are handling exceptions.
SPDX-License-Identifier: Apache-2.0
What did we change?
We updated the audit log data model to improve end-to-end traceability and integrity:
eventPhase: each business event is now logged in two phases:
intent (the action was attempted/started)
outcome (the final result: success or failed)
correlationId: a single ID used to link the intent and outcome logs for the same event. The same correlationId must be sent in both records so they can be queried and stitched together.
hash: a hash computed from the audit log payload to provide tamper detection / integrity validation for stored audit records.
Why are we doing this?
We’re adding a reusable audit logging library so Tazama services can produce consistent, centralized, searchable audit trails in OpenSearch—supporting compliance/forensics, fast investigation/monitoring queries, and an extensible provider-based API for future backends.
How was it tested?
Summary by CodeRabbit

- New Features
- Documentation
- Improvements
- Tests
- Chores