This project uses Vitest for unit and integration testing.
The test suite is organized as follows:

```
chains-api/
├── tests/
│   ├── unit/                    # Unit tests for individual modules
│   │   ├── dataService.test.js  # Data service function tests
│   │   └── rpcMonitor.test.js   # RPC monitoring function tests
│   ├── integration/             # Integration tests for API endpoints
│   │   └── api.test.js          # REST API endpoint tests
│   ├── fixtures/                # Reusable test data and mocks
│   │   └── README.md
│   ├── helpers/                 # Test utility functions
│   │   └── README.md
│   └── README.md                # Tests directory overview
├── src files (index.js, etc.)
├── vitest.config.js             # Vitest configuration
├── TESTING.md                   # This file
└── TEST_SUMMARY.md              # Test results summary
```
**rpcMonitor.test.js** (14 tests)

- URL validation and filtering
- RPC call handling (success, errors, timeouts)
- Real-time monitoring updates
- Chain endpoint limiting
- Error handling

**dataService.test.js** (31 tests)

- Data caching and retrieval
- Chain search (by ID, name, case-insensitive)
- Relations mapping (l1Of, l2Of, testnetOf)
- Endpoints extraction (RPC, firehose, substreams)
- Data transformation and merging
- SLIP-0044 parsing and tag handling

**api.test.js** (19 tests)

- All REST API endpoints
- Query parameter validation
- Error responses (400, 404)
- Request/response cycles
- Tag filtering
- Search functionality
The most common test commands:

```shell
npm test                                       # Run all tests
npx vitest run tests/unit                      # Unit tests only
npx vitest run tests/integration               # Integration tests only
npx vitest run tests/unit/dataService.test.js  # A single test file
npm run test:watch                             # Watch mode
npm run test:coverage                          # With coverage
npx vitest run -t "RPC Monitor"                # Tests matching a name
```

Coverage reports are generated in the `coverage/` directory and include:
- **Line coverage** - Percentage of lines executed
- **Branch coverage** - Percentage of branches (if/else) taken
- **Function coverage** - Percentage of functions called
- **Statement coverage** - Percentage of statements executed
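If coverage needs to be (re)configured, these reports are enabled through the `coverage` block of `vitest.config.js`. A minimal sketch — the provider and reporter names are standard Vitest options, but this project's actual configuration may differ:

```javascript
// vitest.config.js — illustrative coverage setup, not this project's actual config
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      provider: 'v8',                      // or 'istanbul'
      reporter: ['text', 'html', 'lcov'],  // console summary plus coverage/ reports
      reportsDirectory: './coverage'
    }
  }
});
```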
Current test summary:

- **Total Tests**: 64
- **Test Files**: 3
- **Status**: ✅ All Passing
Create in `tests/unit/<module>.test.js`:

```javascript
import { describe, it, expect, vi } from 'vitest';
import { functionName } from '../../moduleName.js';

describe('Module Name', () => {
  describe('functionName', () => {
    it('should do something when condition', () => {
      // Arrange
      const input = 'test';

      // Act
      const result = functionName(input);

      // Assert
      expect(result).toBe('expected');
    });
  });
});
```

Create in `tests/integration/<feature>.test.js`:
```javascript
import { describe, it, expect, beforeAll, afterAll } from 'vitest';

describe('API Feature', () => {
  let server;

  beforeAll(async () => {
    // Setup test server
  });

  afterAll(async () => {
    // Cleanup
    await server.close();
  });

  it('should return expected response', async () => {
    const response = await server.inject({
      method: 'GET',
      url: '/endpoint'
    });
    expect(response.statusCode).toBe(200);
  });
});
```

Create reusable test data in `tests/fixtures/`:
```javascript
// tests/fixtures/chains.fixture.js
export const mockChains = [
  { chainId: 1, name: 'Ethereum' },
  { chainId: 137, name: 'Polygon' }
];
```

```javascript
// In a test file
import { mockChains } from '../fixtures/chains.fixture.js';
```

Create utilities in `tests/helpers/`:
```javascript
// tests/helpers/server.helper.js
export async function createTestServer() {
  // Setup logic
}
```

```javascript
// In a test file
import { createTestServer } from '../helpers/server.helper.js';
```

Mock modules and functions with `vi`:

```javascript
import { vi } from 'vitest';

// Mock an entire module
vi.mock('../../dataService.js', () => ({
  getCachedData: vi.fn(() => ({ chains: [] }))
}));

// Create a standalone mock function and assert on its calls
const mockFn = vi.fn(() => 'mocked value');
mockFn('arg');
expect(mockFn).toHaveBeenCalledWith('arg');

// Mock global fetch so tests make no real network requests
global.fetch = vi.fn(() =>
  Promise.resolve({
    ok: true,
    json: async () => ({ result: 'data' })
  })
);
```

Add to your CI/CD pipeline:
```yaml
# .github/workflows/test.yml
- name: Run tests
  run: npm test

- name: Generate coverage
  run: npm run test:coverage
```

Exit codes:

- `0` = All tests passed ✅
- `1` = Tests failed ❌
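For context, those steps sit inside a complete GitHub Actions workflow. A minimal sketch — the runner image, Node version, and action versions here are assumptions, not this project's actual workflow:

```yaml
# .github/workflows/test.yml — hypothetical complete workflow
name: Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - name: Run tests
        run: npm test
      - name: Generate coverage
        run: npm run test:coverage
```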
- **Group related tests** - use nested `describe` blocks
- **One assertion per test** - when possible
- **Clear test names** - "should X when Y"
- **AAA pattern** - Arrange, Act, Assert
- **Independent tests** - don't rely on test order
- **Clean state** - reset mocks in `beforeEach`
- **Cleanup** - use `afterEach`/`afterAll`
- **Mock external calls** - no real network requests
- **Mock at boundaries** - API calls, database, file system
- **Don't over-mock** - test real internal logic
Coverage goals:

- **Unit tests**: 80%+ coverage
- **Integration tests**: all endpoints covered
- **Critical paths**: 100% coverage
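These goals can also be enforced automatically so CI fails when coverage regresses. A sketch using Vitest's coverage thresholds — the 80% figures mirror the unit-test goal above, and this is an illustration rather than this project's actual config:

```javascript
// vitest.config.js — sketch: fail the run when coverage drops below the thresholds
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      thresholds: {
        lines: 80,
        functions: 80,
        branches: 80,
        statements: 80
      }
    }
  }
});
```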
To debug failing tests, run with verbose output or filter to a single test:

```shell
npx vitest run --reporter=verbose
npx vitest run -t "specific test name"
```

To debug in VS Code, add to `.vscode/launch.json`:
```json
{
  "type": "node",
  "request": "launch",
  "name": "Debug Vitest Tests",
  "runtimeExecutable": "npm",
  "runtimeArgs": ["run", "test"],
  "console": "integratedTerminal",
  "internalConsoleOptions": "neverOpen"
}
```

Tests suppress logs by default. To see them:
```javascript
console.log('Debug info'); // Will show in test output
```

**Tests timing out?**

- Increase the timeout in `vitest.config.js`
- Use `{ timeout: 30000 }` in specific tests
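Raising the timeout globally is a one-line config change; a sketch using Vitest's `testTimeout` option (value in milliseconds):

```javascript
// vitest.config.js — sketch: raise the default per-test timeout to 30s
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    testTimeout: 30000
  }
});
```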
**Module not found?**

- Check relative paths: `../../module.js`
- Ensure mocks match import paths

**Mocks not working?**

- Mock before importing the module
- Check the mock path matches the import path

**Async tests failing?**

- Use `async`/`await` properly
- Return promises from tests