Comprehensive guide for AI agents and developers scaffolding new Go projects that depend on github.com/jasoet/pkg/v2.
Audience: AI code-generation agents (Claude, Cursor, Copilot) and human developers. Scope: Consumer projects — applications built with this library, not contributions to it. Prerequisites: Nix (with flakes enabled), go-task (global via Homebrew). Go 1.24+ is provided by the Nix flake.
Each layer handles a different concern. They don't overlap.
| Layer | Tool | What It Manages | Examples |
|---|---|---|---|
| Global CLI + GUI apps | Homebrew | Always-available tools, GUI apps, system utilities | fish, git, go-task, direnv, gh |
| Per-project dev tools | Nix (flake.nix) | Compilers, linters, language runtimes, build tools | go, golangci-lint, gofumpt, buf, bun |
| Services | Podman/Docker | Databases, message brokers, infrastructure | PostgreSQL, Redis, Temporal |
Nix provides per-project, reproducible development environments. Each project declares its exact tool versions in a flake.nix file. The flake.lock file pins those versions — commit it to git so all machines (macOS ARM, Linux x86_64) get identical tooling. When you enter the project directory with direnv, tools activate automatically. When you leave, they deactivate.
Without Nix, tool versions drift across machines, global upgrades break unrelated projects, and onboarding requires manually installing the right versions of everything.
go-task is the entry point that runs Taskfile commands. Taskfile commands invoke Nix (nix develop -c ...), so if go-task were inside the flake, you'd need Nix to run tasks, but you'd need tasks to run Nix — a chicken-and-egg problem. Install go-task globally via Homebrew. Same reasoning applies to gh (GitHub CLI).
Every Taskfile command that uses a Nix-provided tool is prefixed with nix develop -c via a variable:
```yaml
vars:
  N: "nix develop -c"

tasks:
  test:
    cmds:
      - '{{.N}} go test ./...'

  lint:
    cmds:
      - '{{.N}} golangci-lint run'
```

This is the default because most commands are executed by AI agents through the Taskfile — tasks work without requiring direnv to be active. direnv with `.envrc` (`use flake`) is optional for interactive shell use.
Infrastructure commands (docker compose, podman) run bare — they are system-level, not Nix-provided.
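For example, a minimal `docker/compose.yml` sketch for the dev database (the image tag, credentials, and port mapping are illustrative — adjust to your project):

```yaml
# docker/compose.yml — dev services run bare, outside Nix
services:
  postgres:
    image: postgres:16          # hypothetical version pin
    environment:
      POSTGRES_USER: myapp
      POSTGRES_PASSWORD: myapp
      POSTGRES_DB: myapp
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
```

Run it with `docker compose -f docker/compose.yml up -d` (or the podman equivalent).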
```nix
{
  description = "Project development environment";

  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    flake-utils.url = "github:numtide/flake-utils";
  };

  outputs = { self, nixpkgs, flake-utils }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        pkgs = nixpkgs.legacyPackages.${system};
      in {
        devShells.default = pkgs.mkShell {
          packages = [
            pkgs.go
            pkgs.golangci-lint
            pkgs.gofumpt
            pkgs.jq
            # Add project-specific tools here (buf, grpcurl, etc.)
            # go-task is global (Homebrew), not in flake
          ];
          shellHook = ''
            export GOPATH="$HOME/go"
            export PATH="$GOPATH/bin:$PATH"
            echo "Dev environment ready — Go $(go version | awk '{print $3}')"
          '';
        };
      });
}
```

```bash
# .envrc (optional — for interactive shell auto-activation)
use flake
```

Add to .gitignore:

```
# Nix / direnv
.direnv/
```

No Nix? If you skip Nix, install Go, golangci-lint, and other tools globally and remove the `N:` variable prefix from the Taskfile. Everything else in this template still applies.
Two variants are provided: module-based (recommended for projects with 3+ domains) and flat (suitable for small projects with 1-2 domains). Both share the same outer structure (cmd/, migrations/, test/, docs/, http/, docker/).
Each domain concept is a self-contained package under internal/. All layers for a domain (handler, service, repository, DTOs, errors, interfaces) live together, making it easy to navigate and maintain as the project grows.
```
myapp/
├── flake.nix                  # Nix: per-project dev tools
├── flake.lock                 # Nix: pinned versions (committed)
├── .envrc                     # direnv: auto-activate nix (optional)
├── cmd/
│   ├── server/
│   │   └── main.go            # API server entry point
│   └── worker/
│       └── main.go            # Temporal worker entry point
├── internal/
│   ├── config/
│   │   └── config.go          # AppConfig struct + loader
│   ├── shared/
│   │   ├── model/             # All GORM models (centralized — single source of truth)
│   │   │   ├── user.go
│   │   │   ├── vessel.go
│   │   │   └── incident.go
│   │   ├── client/            # External API clients (weather, AIS, etc.)
│   │   │   ├── weather_client.go
│   │   │   └── ais_client.go
│   │   └── dto/               # Cross-module DTOs (only when needed)
│   ├── user/                  # User module
│   │   ├── handler.go         # HTTP handlers
│   │   ├── service.go         # Business logic
│   │   ├── repository.go      # Database access
│   │   ├── dto.go             # Module-local request/response DTOs
│   │   ├── errors.go          # Domain-specific errors
│   │   └── interfaces.go      # Consumer-defined interfaces
│   ├── vessel/                # Vessel module (same structure)
│   │   ├── handler.go
│   │   ├── service.go
│   │   ├── repository.go
│   │   ├── dto.go
│   │   ├── errors.go
│   │   └── interfaces.go
│   ├── temporal/
│   │   ├── workflows.go       # Workflow definitions
│   │   └── activities.go      # Activity implementations
│   └── testutil/
│       └── db.go              # Shared test helpers (testcontainer DB, etc.)
├── migrations/
│   ├── embed.go               # //go:embed for migration files
│   ├── 000001_create_users.up.sql
│   └── 000001_create_users.down.sql
├── test/
│   └── e2e/
│       ├── setup_test.go      # testServer, HTTP helpers, startTestServer()
│       └── api_test.go        # Full-stack API tests
├── docs/
│   ├── docs.go                # Generated by swag init
│   ├── swagger.json
│   └── swagger.yaml
├── http/
│   └── api.http               # IntelliJ/VS Code REST Client file
├── docker/
│   └── compose.yml            # Podman/Docker: dev services (PostgreSQL, etc.)
├── tools.go                   # //go:build tools — pin swag, testcontainers, etc.
├── Taskfile.yml               # Task runner (commands prefixed with nix develop -c)
├── config.yaml                # Default configuration
├── go.mod
└── go.sum
```
| Directory | Purpose |
|---|---|
| `flake.nix` | Nix flake — declares project dev tools (Go, linters, etc.) |
| `flake.lock` | Pinned Nix package versions — commit this file |
| `.envrc` | direnv auto-activation (optional — `use flake`) |
| `cmd/server/` | API server entry point — config load, wiring, server start |
| `cmd/worker/` | Temporal worker entry point — registers workflows/activities |
| `internal/config/` | AppConfig struct with YAML + env var support |
| `internal/shared/model/` | All GORM model structs (centralized to avoid circular imports) |
| `internal/shared/client/` | External API clients (may be used by multiple modules) |
| `internal/shared/dto/` | Cross-module DTOs (only when referenced by 2+ modules) |
| `internal/<module>/` | Domain module — handler, service, repository, DTOs, errors, interfaces |
| `internal/temporal/` | Workflow definitions and activity implementations |
| `internal/testutil/` | Shared test helpers (testcontainer setup, fixtures) |
| `migrations/` | SQL migration files with embed.FS |
| `test/e2e/` | End-to-end API tests against real HTTP + real DB |
| `docs/` | Generated Swagger/OpenAPI files (committed) |
| `http/` | `.http` files for manual API testing |
| `docker/` | Podman/Docker Compose for dev services (PostgreSQL, Temporal, etc.) |
- One module per domain concept — each module maps to a business domain (user, vessel, incident, communication).
- Each module is a Go package — the package name matches the directory name (e.g., `package user`).
- Modules import `shared/model` — all GORM model structs live in `shared/model/` so relationships between entities are visible in one place and circular imports are avoided.
- Modules import `shared/client` — external API clients live in `shared/client/` since they may be used by multiple modules.
- DTOs are module-local — each module defines its own request/response DTOs in `dto.go`. If a DTO is needed by multiple modules, move it to `shared/dto/`.
- Errors are module-local — each module defines its own domain errors in `errors.go`.
- Interfaces are consumer-defined — each module defines the interfaces it depends on in `interfaces.go` (see Section 4a).
- Cross-module communication — if a service needs data from another module, it depends on that module's repository or service interface, declared in the consuming module's `interfaces.go`.
| Directory | Contains | Why Shared |
|---|---|---|
| `shared/model/` | All GORM model structs | Foreign key relationships span modules; centralizing avoids circular imports and makes the data model visible at a glance |
| `shared/client/` | External API clients (weather, AIS, etc.) | Multiple modules may need the same external API |
| `shared/dto/` | Cross-module DTOs (only when needed) | DTOs referenced by more than one module |
Create a new module when you have a distinct business domain that will have its own API endpoints, business logic, and data access. If a piece of functionality is just a helper used by many modules, it belongs in shared/.
Each file within a module has a specific purpose:
| File | Purpose |
|---|---|
| `handler.go` | HTTP handlers — request parsing, response formatting, error-to-status-code mapping |
| `service.go` | Business logic, orchestration, transactions |
| `repository.go` | Database access — one query per method |
| `dto.go` | Request/response structs for the HTTP layer |
| `errors.go` | Domain-specific error variables |
| `interfaces.go` | Consumer-defined interfaces for dependencies (repositories, clients, other services) |
If a module grows large enough that any single file becomes unwieldy, split by sub-concern within the module (e.g., vessel_tracking_service.go, vessel_registry_service.go).
For small projects with 1-2 domains, a flat layout groups files by layer instead of by domain:
```
myapp/
├── flake.nix                  # Nix: per-project dev tools
├── flake.lock                 # Nix: pinned versions (committed)
├── .envrc                     # direnv: auto-activate nix (optional)
├── cmd/
│   └── server/
│       └── main.go
├── internal/
│   ├── config/
│   │   └── config.go
│   ├── model/
│   │   └── user.go            # GORM models
│   ├── repository/
│   │   └── user_repo.go       # Data access layer
│   ├── service/
│   │   └── user_service.go    # Business logic
│   ├── handler/
│   │   ├── user_handler.go    # HTTP handlers with swagger annotations
│   │   └── dto.go             # Request/Response DTOs (exported for swag)
│   └── testutil/
│       └── db.go
├── migrations/
│   └── ...
└── ...
```
| Directory | Purpose |
|---|---|
| `internal/model/` | GORM model structs with table name methods |
| `internal/repository/` | Database access, one repo per aggregate |
| `internal/service/` | Business logic, orchestrates repositories |
| `internal/handler/` | Echo HTTP handlers + DTOs + swagger annotations |
When to switch from flat to module-based: When you find yourself prefixing files by domain (e.g., user_repo.go, vessel_repo.go, incident_repo.go all in repository/), it's time to migrate to module-based layout.
```go
type AppConfig struct {
	Server struct {
		Port            int           `yaml:"port" mapstructure:"port" validate:"required,min=1,max=65535"`
		ShutdownTimeout time.Duration `yaml:"shutdownTimeout" mapstructure:"shutdownTimeout"`
	} `yaml:"server" mapstructure:"server"`
	Database db.ConnectionConfig `yaml:"database" mapstructure:"database"`
	Temporal temporal.Config     `yaml:"temporal" mapstructure:"temporal"`
	Auth     struct {
		SessionDuration time.Duration `yaml:"sessionDuration" mapstructure:"sessionDuration"`
		BcryptCost      int           `yaml:"bcryptCost" mapstructure:"bcryptCost"`
	} `yaml:"auth" mapstructure:"auth"`
}
```

```go
cfg, err := config.LoadString[AppConfig](yamlContent, "APP")
```

- `envPrefix` is variadic — defaults to `"ENV"` if omitted.
- Struct tags: always include `yaml`, `mapstructure`, and `validate`.

Automatic via Viper: `APP_SERVER_PORT=9090` overrides `server.port`.

For deeply nested structs, use:

```go
config.NestedEnvVars("APP", 3, "database", viperInstance)
```

```go
pool, err := cfg.Database.Pool() // OTelConfig injected at runtime, not from YAML
```

Rule: `OTelConfig *otel.Config` fields must always use `yaml:"-" mapstructure:"-"` tags. Never serialize OTel config — inject it at runtime via functional options or direct assignment.
```go
// Create OTel config with service name
otelCfg := otel.NewConfig("myapp")

// Optionally attach real providers (nil = no-op, zero overhead)
otelCfg = otelCfg.
	WithTracerProvider(tracerProvider).
	WithMeterProvider(meterProvider)

// For OTel-based logging (replaces zerolog global)
loggerProvider, err := otel.NewLoggerProviderWithOptions("myapp",
	otel.WithConsoleOutput(true),
	otel.WithLogLevel(otel.LogLevel("info")),
)
otelCfg = otelCfg.WithLoggerProvider(loggerProvider)

// Store in context for downstream access
ctx = otel.ContextWithConfig(ctx, otelCfg)
```

```go
// Database — direct field assignment
cfg.Database.OTelConfig = otelCfg
pool, err := cfg.Database.Pool()

// REST client — functional option
client := rest.NewClient(
	rest.WithOTelConfig(otelCfg),
	rest.WithRestConfig(restCfg),
)

// Retry — builder method (value receiver)
retryCfg := retry.DefaultConfig().
	WithName("db.connect").
	WithOTel(otelCfg)
```

When OTelConfig is nil or providers are nil, all instrumentation becomes no-op with zero runtime overhead. No nil-checks needed in application code.
```
HTTP Request → Handler → Service → Repository → Database
                  ↓          ↓           ↓
            StartHandler StartService StartRepository
```
Each layer gets its own LayerContext from otel.Layers:
```go
func (s *UserService) Create(ctx context.Context, req CreateUserRequest) (*User, error) {
	lc := otel.Layers.StartService(ctx, "user", "Create",
		otel.F("username", req.Username))
	defer lc.End()

	lc.Logger.Info("Creating user")

	user, err := s.repo.Save(lc.Context(), req.ToModel())
	if err != nil {
		return nil, lc.Error(err, "failed to save user")
	}

	lc.Success("User created")
	return user, nil
}
```

Available start methods:
| Method | Layer | SpanKind |
|---|---|---|
| `Layers.StartHandler(ctx, component, operation, fields...)` | Handler | Server |
| `Layers.StartService(ctx, component, operation, fields...)` | Service | Internal |
| `Layers.StartRepository(ctx, component, operation, fields...)` | Repository | Client |
| `Layers.StartOperations(ctx, component, operation, fields...)` | Operations | Internal |
| `Layers.StartMiddleware(ctx, component, operation, fields...)` | Middleware | Server |
Naming convention:

- Tracer name: `{layer}.{component}` (e.g., `service.user`)
- Span name: `{component}.{operation}` (e.g., `user.Create`)

LayerContext methods:

| Method | Returns | Purpose |
|---|---|---|
| `lc.Context()` | `context.Context` | Pass to downstream calls |
| `lc.Error(err, msg, fields...)` | `error` | Log error + set span error + return the error |
| `lc.Success(msg, fields...)` | (void) | Log success at info level |
| `lc.End()` | (void) | End the span (always defer) |
Interfaces are defined in the consumer module, not the provider — this is idiomatic Go (accept interfaces, return structs). Each module's interfaces live in its interfaces.go file.
Services define the repository interfaces they depend on. Handlers define the service interfaces they depend on.
```go
// internal/user/interfaces.go
package user

import (
	"context"

	"myapp/internal/shared/model"
)

// Repository interface — consumed by the service
type UserRepository interface {
	FindAll(ctx context.Context) ([]model.User, error)
	FindByID(ctx context.Context, id string) (*model.User, error)
	FindByUsername(ctx context.Context, username string) (*model.User, error)
	Create(ctx context.Context, user *model.User) error
	Delete(ctx context.Context, id string) error
}

// Service interface — consumed by the handler
type ServiceInterface interface {
	ListUsers(ctx context.Context) ([]model.User, error)
	GetUser(ctx context.Context, id string) (*model.User, error)
	CreateUser(ctx context.Context, username, password string) (*model.User, error)
	DeleteUser(ctx context.Context, id string) error
}
```

The service depends on the interface, not the concrete type:
```go
// internal/user/service.go
type Service struct {
	repo UserRepository
}

func NewService(repo UserRepository) *Service {
	return &Service{repo: repo}
}
```

When a module needs data from another module, define the interface in the consuming module's `interfaces.go`:
```go
// internal/dashboard/interfaces.go
package dashboard

type VesselRepository interface {
	CountByStatus(ctx context.Context) (map[string]int, error)
}

type IncidentRepository interface {
	FindRecent(ctx context.Context, since time.Duration) ([]model.Incident, error)
}

type WeatherClient interface {
	GetCurrent(ctx context.Context, lat, lon float64) (*WeatherData, error)
}
```

- Repositories and clients stay simple structs — no interface boilerplate in their packages.
- Each consumer defines only the methods it actually uses (Interface Segregation Principle).
- Mocking becomes straightforward — mock only what the consumer calls.
- Adding a new repository method doesn't force updating interfaces elsewhere unless a consumer needs it.
Each module defines its own domain-specific errors in errors.go. These errors are used by services and repositories within the module, and mapped to HTTP status codes by handlers.
```go
// internal/user/errors.go
package user

import "errors"

var (
	ErrUserNotFound   = errors.New("user not found")
	ErrUsernameExists = errors.New("username already exists")
	ErrInvalidInput   = errors.New("invalid input")
)
```

```go
// internal/vessel/errors.go
package vessel

import "errors"

var (
	ErrVesselNotFound = errors.New("vessel not found")
	ErrDuplicateIMO   = errors.New("duplicate IMO number")
)
```

Handler error mapping pattern:
```go
func (h *Handler) Get(c echo.Context) error {
	user, err := h.svc.GetUser(c.Request().Context(), c.Param("id"))
	if err != nil {
		switch {
		case errors.Is(err, ErrUserNotFound):
			return c.JSON(http.StatusNotFound, ErrorResponse{Error: err.Error()})
		default:
			return c.JSON(http.StatusInternalServerError, ErrorResponse{Error: "internal error"})
		}
	}
	return c.JSON(http.StatusOK, toUserResponse(user))
}
```

Services should never return HTTP-specific errors — keep the domain clean.
```go
pool, err := db.ConnectionConfig{
	DbType:     db.Postgresql,
	Host:       cfg.Database.Host,
	Port:       cfg.Database.Port,
	Username:   cfg.Database.Username,
	Password:   cfg.Database.Password,
	DbName:     cfg.Database.DbName,
	OTelConfig: otelCfg, // Automatic query tracing
}.Pool()
```

```go
// migrations/embed.go
package migrations

import "embed"

//go:embed *.sql
var FS embed.FS
```

```go
// In main.go
err := db.RunPostgresMigrationsWithGorm(ctx, pool, migrations.FS, ".")
```

Migration file naming: `{sequence}_{description}.{up|down}.sql`
```
000001_create_users.up.sql
000001_create_users.down.sql
000002_add_sessions.up.sql
000002_add_sessions.down.sql
```
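For illustration, a matching up/down pair might look like this (the schema is hypothetical — shape it to your domain):

```sql
-- 000001_create_users.up.sql
CREATE TABLE users (
    id            TEXT PRIMARY KEY,
    username      TEXT NOT NULL UNIQUE,
    password_hash TEXT NOT NULL,
    created_at    TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- 000001_create_users.down.sql
DROP TABLE users;
```

Every up migration should have a down migration that exactly reverses it.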
All database access follows the layered architecture. The key principle: a repository method maps to exactly one database hit.
| Layer | Responsibility | Owns Transactions? | Depends On |
|---|---|---|---|
| Handler | HTTP request/response, input validation, status codes | No | Service |
| Service | Business logic, orchestration, data transformation | Yes | Repository, Client, *gorm.DB (for transactions) |
| Repository | Single database operation per method | No | *gorm.DB |
| Client | Single external API wrapper | No | rest.Client |
- One database hit per method — no multi-step queries, no loops with queries inside.
- Always use `r.db.WithContext(lc.Context())` — never skip context. This ensures OTel tracing and timeout propagation.
- Always wrap in `otel.Layers.StartRepository` with a `db.operation` attribute (`select`, `insert`, `update`, `delete`, `upsert`).
- Always use `lc.Error()` / `lc.Success()` — never return raw errors without recording them.
- Always use parameterized queries (`?` placeholders) — never string concatenation or `fmt.Sprintf` for SQL values.
- Define domain-specific errors in the module's `errors.go` file.
- If a GORM chain exceeds 3-4 method calls or requires subqueries, switch to raw SQL for clarity.
| Use Case | Approach | Example |
|---|---|---|
| Simple CRUD (insert, update, delete, find by ID/field) | GORM methods (`.Create()`, `.Save()`, `.First()`, `.Find()`, `.Delete()`) | `r.db.WithContext(ctx).Create(&user)` |
| Filtering with dynamic conditions | GORM query builder (`.Where()`, `.Order()`, `.Limit()`) | `r.db.Where("status = ?", s).Find(&list)` |
| Queries with joins, CTEs, window functions, aggregations | Raw SQL via `db.Raw().Scan()` | See raw SQL pattern below |
| Bulk inserts, upserts, or batch updates | Raw SQL via `db.Exec()` | `r.db.Exec("INSERT INTO ... ON CONFLICT ...")` |
```go
func (r *Repository) FindByID(ctx context.Context, id string) (*model.Vessel, error) {
	lc := otel.Layers.StartRepository(ctx, "vessel", "FindByID",
		otel.F("db.operation", "select"),
		otel.F("vessel.id", id))
	defer lc.End()

	var vessel model.Vessel
	if err := r.db.WithContext(lc.Context()).First(&vessel, "id = ?", id).Error; err != nil {
		if errors.Is(err, gorm.ErrRecordNotFound) {
			return nil, lc.Error(ErrVesselNotFound, "vessel not found")
		}
		return nil, lc.Error(err, "query failed")
	}

	lc.Success("vessel found")
	return &vessel, nil
}
```

Use `db.Raw().Scan()` for complex queries. It is still one database hit.
```go
func (r *Repository) FindNearby(ctx context.Context, lat, lon, radiusKm float64) ([]model.Vessel, error) {
	lc := otel.Layers.StartRepository(ctx, "vessel", "FindNearby",
		otel.F("db.operation", "select"),
		otel.F("query.lat", lat),
		otel.F("query.lon", lon))
	defer lc.End()

	var vessels []model.Vessel
	err := r.db.WithContext(lc.Context()).Raw(`
		SELECT v.*
		FROM vessels v
		WHERE ST_DWithin(v.position, ST_MakePoint(?, ?)::geography, ?)
		ORDER BY v.name
	`, lon, lat, radiusKm*1000).Scan(&vessels).Error
	if err != nil {
		return nil, lc.Error(err, "query failed")
	}

	lc.Success("nearby vessels found", otel.F("result.count", len(vessels)))
	return vessels, nil
}
```

When a raw SQL query returns a shape that doesn't match an existing model, define a result struct in the same `repository.go` file. Use `gorm:"column:..."` tags to map columns. These structs do not go in `shared/model/` — they are query-specific.
```go
type ActivitySummary struct {
	VesselID   string    `gorm:"column:vessel_id"`
	VesselName string    `gorm:"column:vessel_name"`
	TripCount  int       `gorm:"column:trip_count"`
	LastSeen   time.Time `gorm:"column:last_seen"`
}

func (r *Repository) GetActivitySummary(ctx context.Context, since time.Time) ([]ActivitySummary, error) {
	lc := otel.Layers.StartRepository(ctx, "vessel", "GetActivitySummary",
		otel.F("db.operation", "select"))
	defer lc.End()

	var summaries []ActivitySummary
	err := r.db.WithContext(lc.Context()).Raw(`
		SELECT v.id AS vessel_id, v.name AS vessel_name,
		       COUNT(t.id) AS trip_count, MAX(t.ended_at) AS last_seen
		FROM vessels v
		LEFT JOIN trips t ON t.vessel_id = v.id AND t.started_at >= ?
		GROUP BY v.id, v.name
		ORDER BY trip_count DESC
	`, since).Scan(&summaries).Error
	if err != nil {
		return nil, lc.Error(err, "query failed")
	}

	lc.Success("summary loaded", otel.F("result.count", len(summaries)))
	return summaries, nil
}
```

Within a `repository.go` file, follow this order:
1. Repository struct — `type Repository struct { db *gorm.DB }`
2. Constructor — `func NewRepository(db *gorm.DB) *Repository`
3. Result structs — `type XxxSummary struct { ... }` (if any)
4. Read methods — `FindAll`, `FindByID`, `FindByXxx`, search/filter methods
5. Write methods — `Create`, `Update`, `Delete`, bulk operations

A repository must never contain:

- Transactions (belongs in the service layer)
- Multi-query orchestration or loops with queries (belongs in the service layer)
- Business logic or validation (belongs in the service layer)
- HTTP request/response handling (belongs in the handler layer)
- Direct `fmt.Println` or raw `zerolog` logging (use `lc.Logger`)
- Calling other repositories (coordinate from the service layer)
External API wrappers live in internal/shared/client/. Each client wraps a single third-party API and provides typed Go methods. Clients are in shared/ because multiple modules may depend on the same external API.
Use jasoet/pkg/v2/rest for HTTP calls with automatic OTel instrumentation and retry support:
```go
// internal/shared/client/weather_client.go
package client

type WeatherData struct {
	Temperature float64 `json:"temperature"`
	WindSpeed   float64 `json:"wind_speed"`
	Condition   string  `json:"condition"`
}

type WeatherClient struct {
	client  *rest.Client
	baseURL string
	apiKey  string
}

func NewWeatherClient(otelCfg *otel.Config, baseURL, apiKey string) *WeatherClient {
	return &WeatherClient{
		client:  rest.NewClient(rest.WithOTelConfig(otelCfg)),
		baseURL: baseURL,
		apiKey:  apiKey,
	}
}

func (c *WeatherClient) GetCurrent(ctx context.Context, lat, lon float64) (*WeatherData, error) {
	lc := otel.Layers.StartRepository(ctx, "weather", "GetCurrent",
		otel.F("api.service", "weather"),
		otel.F("query.lat", lat),
		otel.F("query.lon", lon))
	defer lc.End()

	var data WeatherData
	url := fmt.Sprintf("%s/current?lat=%f&lon=%f", c.baseURL, lat, lon)

	resp, err := c.client.R().
		SetContext(lc.Context()).
		SetHeader("X-API-Key", c.apiKey).
		SetResult(&data).
		Get(url)
	if err != nil {
		return nil, lc.Error(err, "weather API request failed")
	}
	if resp.IsError() {
		return nil, lc.Error(fmt.Errorf("weather API returned %d", resp.StatusCode()), "unexpected status")
	}

	lc.Success("weather data fetched")
	return &data, nil
}
```

- One client per external service — don't combine multiple APIs in one client.
- Use `rest.NewClient()` with `rest.WithOTelConfig()` — never raw `net/http` or plain resty.
- Use `otel.Layers.StartRepository` for span creation (clients are data-access boundaries, same as repositories).
- Handle API-specific errors — translate HTTP errors into meaningful Go errors.
- Never expose HTTP details to the caller — return typed Go structs, not raw responses.
- Store API keys/credentials in config — inject via constructor, never hardcode.
- Prefer header-based authentication (`Authorization`, `X-API-Key`) over query-string API keys. Query params appear in server logs and browser history.
When a service needs data from multiple sources, it coordinates the calls:
```go
func (s *Service) GetOverview(ctx context.Context) (*Overview, error) {
	lc := otel.Layers.StartService(ctx, "dashboard", "GetOverview")
	defer lc.End()

	vessels, err := s.vesselRepo.CountByStatus(lc.Context())
	if err != nil {
		return nil, lc.Error(err, "failed to count vessels")
	}

	incidents, err := s.incidentRepo.FindRecent(lc.Context(), 24*time.Hour)
	if err != nil {
		return nil, lc.Error(err, "failed to fetch recent incidents")
	}

	weather, err := s.weatherClient.GetCurrent(lc.Context(), s.defaultLat, s.defaultLon)
	if err != nil {
		return nil, lc.Error(err, "failed to fetch weather")
	}

	lc.Success("dashboard overview loaded")
	return &Overview{
		VesselCounts:    vessels,
		RecentIncidents: incidents,
		CurrentWeather:  weather,
	}, nil
}
```

When multiple writes must succeed or fail together, the service owns the transaction. The service holds a `*gorm.DB` reference alongside its repository/client dependencies and uses `db.Transaction()` directly.
```go
// Service constructor with transaction support
type Service struct {
	db           *gorm.DB
	incidentRepo IncidentRepository
	auditRepo    AuditLogRepository
}

func NewService(db *gorm.DB, incidentRepo IncidentRepository, auditRepo AuditLogRepository) *Service {
	return &Service{db: db, incidentRepo: incidentRepo, auditRepo: auditRepo}
}

func (s *Service) ReportWithAudit(ctx context.Context, req ReportRequest) (*model.Incident, error) {
	lc := otel.Layers.StartService(ctx, "incident", "ReportWithAudit")
	defer lc.End()

	var incident model.Incident
	err := s.db.WithContext(lc.Context()).Transaction(func(tx *gorm.DB) error {
		incident = model.Incident{
			Title:    req.Title,
			Severity: req.Severity,
		}
		if err := tx.Create(&incident).Error; err != nil {
			return err
		}

		auditLog := model.AuditLog{
			EntityType: "incident",
			EntityID:   incident.ID,
			Action:     "created",
			ActorID:    req.ReportedBy,
		}
		return tx.Create(&auditLog).Error
	})
	if err != nil {
		return nil, lc.Error(err, "transaction failed")
	}

	lc.Success("incident reported", otel.F("incident.id", incident.ID))
	return &incident, nil
}
```

- Keep transactions short — no external API calls, no long-running work inside them.
- Let errors propagate by returning them from the `Transaction` callback; GORM handles rollback automatically.
- Don't nest transactions — if a service calls another service, the outer service should own the transaction scope.
- Use `tx` (not `r.db`) for all operations inside the callback.
- Direct `tx.Create()` / `tx.Save()` is acceptable inside transaction callbacks — the callback receives a `*gorm.DB` (`tx`), so you use it directly for model operations. Repository methods are not used here because they hold their own `r.db` reference, which is outside the transaction scope. This is the one place where the service layer performs direct GORM operations.
```go
serverCfg := server.Config{
	Port:            cfg.Server.Port,
	ShutdownTimeout: cfg.Server.ShutdownTimeout,
	Middleware: []echo.MiddlewareFunc{
		middleware.Recover(),
		middleware.Logger(),
	},
	EchoConfigurer: func(e *echo.Echo) {
		// Register all routes here
		e.GET("/swagger/*", echoSwagger.WrapHandler)
		apiV1 := e.Group("/api/v1")
		userHandler.RegisterRoutes(apiV1.Group("/users"))
	},
	Operation: func(e *echo.Echo) {
		// Additional startup operations
	},
	Shutdown: func(e *echo.Echo) {
		// Cleanup: close DB pools, flush telemetry, etc.
		sqlDB, _ := pool.DB()
		_ = sqlDB.Close()
	},
}

server.StartWithConfig(serverCfg)
```

The server package automatically registers:
| Endpoint | Response |
|---|---|
| `GET /` | `200 "Home"` |
| `GET /health` | `200 {"status": "UP"}` |
| `GET /health/ready` | `200 {"status": "READY"}` |
| `GET /health/live` | `200 {"status": "ALIVE"}` |
Note: server.Config does not have an OTelConfig field. Inject OTel middleware through the Middleware slice or inside EchoConfigurer.
Use this instead of (or alongside) the HTTP server when your project exposes gRPC services. The `grpc` package supports two modes: H2C (gRPC + HTTP on a single port, default) and Separate (gRPC and HTTP on different ports). Both include an Echo-based HTTP gateway automatically.
Single port serves both gRPC and REST via HTTP/2 cleartext:
```go
import (
	"github.com/jasoet/pkg/v2/grpc"

	pb "myapp/proto/gen"
)

server, err := grpc.New(
	grpc.WithGRPCPort("8080"),
	grpc.WithOTelConfig(otelCfg),
	grpc.WithServiceRegistrar(func(s *grpc.Server) {
		pb.RegisterMyServiceServer(s, &myServiceImpl{})
	}),
	grpc.WithEchoConfigurer(func(e *echo.Echo) {
		// Additional REST routes alongside gRPC
		e.GET("/swagger/*", echoSwagger.WrapHandler)
	}),
	grpc.WithShutdownHandler(func() error {
		sqlDB, _ := pool.DB()
		return sqlDB.Close()
	}),
)
if err != nil {
	log.Fatal(err)
}
server.Start()
```

gRPC on one port, HTTP gateway on another:
```go
server, err := grpc.New(
	grpc.WithSeparateMode("9090", "8080"), // gRPC:9090, HTTP:8080
	grpc.WithOTelConfig(otelCfg),
	grpc.WithServiceRegistrar(func(s *grpc.Server) {
		pb.RegisterMyServiceServer(s, &myServiceImpl{})
	}),
)
```

```go
// H2C mode — single call, blocks until signal
grpc.StartH2C("8080", func(s *grpc.Server) {
	pb.RegisterMyServiceServer(s, &myServiceImpl{})
}, grpc.WithOTelConfig(otelCfg))

// Separate mode
grpc.StartSeparate("9090", "8080", func(s *grpc.Server) {
	pb.RegisterMyServiceServer(s, &myServiceImpl{})
})
```

Feature defaults (toggle with the `With*()` / `Without*()` options):
| Feature | Option | Default |
|---|---|---|
| Health checks (`/health`, `/health/ready`, `/health/live`) | `WithHealthCheck()` / `WithoutHealthCheck()` | Enabled |
| OpenTelemetry (traces, metrics, logs) | `WithOTelConfig(cfg)` | Disabled (nil) |
| gRPC reflection | `WithReflection()` / `WithoutReflection()` | Disabled |
| CORS | `WithCORS()` | Disabled |
| Rate limiting | `WithRateLimit(rps)` | Disabled |
```go
// Ports & Mode
grpc.WithGRPCPort("9090")
grpc.WithSeparateMode("9090", "8080")
grpc.WithH2CMode()

// Service registration
grpc.WithServiceRegistrar(func(s *grpc.Server) { ... })

// HTTP gateway customization
grpc.WithEchoConfigurer(func(e *echo.Echo) { ... })
grpc.WithGatewayBasePath("/api/v1") // default
grpc.WithMiddleware(mw1, mw2)

// Lifecycle
grpc.WithShutdownHandler(func() error { ... })
grpc.WithShutdownTimeout(30 * time.Second)

// Observability
grpc.WithOTelConfig(otelCfg) // Replaces Prometheus with OTel when set
```

When using gRPC, add a proto/ directory to your project:
```
myapp/
├── proto/
│   ├── myservice.proto        # Protobuf definitions
│   └── gen/                   # Generated Go code (buf generate or protoc)
│       ├── myservice.pb.go
│       └── myservice_grpc.pb.go
```
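If you generate with buf, a minimal `buf.gen.yaml` sketch (assuming buf's v1 config format and the `proto/gen/` output path shown above):

```yaml
version: v1
plugins:
  - plugin: go
    out: proto/gen
    opt: paths=source_relative
  - plugin: go-grpc
    out: proto/gen
    opt: paths=source_relative
```

Run `nix develop -c buf generate` from the project root (buf must be listed in the flake's packages).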
If your application needs async jobs, background processing, or scheduled tasks, use the temporal package.
temporal.Config is a plain serializable struct (no OTelConfig field — unlike most other packages):
import "github.com/jasoet/pkg/v2/temporal"
// In AppConfig:
Temporal temporal.Config `yaml:"temporal" mapstructure:"temporal"`# config.yaml
temporal:
hostPort: localhost:7233
namespace: default
metricsListenAddress: "0.0.0.0:9090"// internal/temporal/workflows.go
package temporal
import (
"go.temporal.io/sdk/workflow"
)
func OrderProcessingWorkflow(ctx workflow.Context, orderID string) (*OrderResult, error) {
opts := workflow.ActivityOptions{
StartToCloseTimeout: 30 * time.Second,
}
ctx = workflow.WithActivityOptions(ctx, opts)
// Execute activities in sequence
var validated bool
if err := workflow.ExecuteActivity(ctx, ValidateOrderActivity, orderID).Get(ctx, &validated); err != nil {
return nil, err
}
var result OrderResult
if err := workflow.ExecuteActivity(ctx, ProcessPaymentActivity, orderID).Get(ctx, &result); err != nil {
return nil, err
}
return &result, nil
}

// internal/temporal/activities.go
package temporal
import (
	"context"

	"myapp/internal/repository"
	"myapp/internal/service"
)
// Activities struct holds injected dependencies (repos, services, etc.)
type Activities struct {
orderRepo *repository.OrderRepo
paymentSvc *service.PaymentService
}
func NewActivities(orderRepo *repository.OrderRepo, paymentSvc *service.PaymentService) *Activities {
return &Activities{orderRepo: orderRepo, paymentSvc: paymentSvc}
}
func (a *Activities) ValidateOrderActivity(ctx context.Context, orderID string) (bool, error) {
order, err := a.orderRepo.FindByID(ctx, orderID)
if err != nil {
return false, err
}
return order.IsValid(), nil
}
func (a *Activities) ProcessPaymentActivity(ctx context.Context, orderID string) (*OrderResult, error) {
return a.paymentSvc.Process(ctx, orderID)
}

package main
import (
"context"
"flag"
"log"
"go.temporal.io/sdk/worker"
appconfig "myapp/internal/config"
apptemporal "myapp/internal/temporal"
"myapp/internal/repository"
"myapp/internal/service"
"myapp/migrations"
"github.com/jasoet/pkg/v2/db"
"github.com/jasoet/pkg/v2/temporal"
)
const taskQueue = "myapp-tasks"
func main() {
configPath := flag.String("config", "config.yaml", "path to config file")
flag.Parse()
cfg, err := appconfig.Load(*configPath)
if err != nil {
log.Fatalf("failed to load config: %v", err)
}
// Database (activities need repos)
pool, err := cfg.Database.Pool()
if err != nil {
log.Fatalf("failed to connect to database: %v", err)
}
if err := db.RunPostgresMigrationsWithGorm(context.Background(), pool, migrations.FS, "."); err != nil {
log.Fatalf("failed to run migrations: %v", err)
}
// Build activity dependencies
orderRepo := repository.NewOrderRepo(pool)
paymentSvc := service.NewPaymentService(orderRepo)
activities := apptemporal.NewActivities(orderRepo, paymentSvc)
// Create WorkerManager (owns its own Temporal client)
wm, err := temporal.NewWorkerManager(&cfg.Temporal)
if err != nil {
log.Fatalf("failed to create worker manager: %v", err)
}
defer wm.Close()
// Register worker with workflows and activities
w := wm.Register(taskQueue, worker.Options{})
w.RegisterWorkflow(apptemporal.OrderProcessingWorkflow)
w.RegisterActivity(activities) // registers all methods on the struct
// Start worker (blocks until interrupted)
log.Printf("Starting worker on task queue: %s", taskQueue)
if err := wm.StartAll(context.Background()); err != nil {
log.Fatalf("worker failed: %v", err)
}
}

// In your handler or service:
temporalClient, err := temporal.NewClient(&cfg.Temporal)
if err != nil {
return err
}
defer temporalClient.Close()
run, err := temporalClient.ExecuteWorkflow(ctx,
client.StartWorkflowOptions{
ID: fmt.Sprintf("order-%s", orderID),
TaskQueue: "myapp-tasks",
},
apptemporal.OrderProcessingWorkflow, orderID,
)
if err != nil {
	return err
}
log.Printf("started workflow %s", run.GetID())

For recurring jobs (e.g., periodic sync, cleanup):
sm := temporal.NewScheduleManager(temporalClient)
handle, err := sm.CreateWorkflowSchedule(ctx, "daily-cleanup", temporal.WorkflowScheduleOptions{
WorkflowID: "cleanup-workflow",
Workflow: apptemporal.CleanupWorkflow,
TaskQueue: "myapp-tasks",
Interval: 24 * time.Hour,
Args: []interface{}{"arg1"},
})
// List, update, delete schedules
schedules, _ := sm.ListSchedules(ctx, 10)
sm.DeleteSchedule(ctx, "daily-cleanup")

For integration tests against a real Temporal server:
//go:build integration

package temporal_test

import (
	"context"
	"testing"

	"github.com/jasoet/pkg/v2/temporal/testcontainer"
	"github.com/stretchr/testify/require"
)
func TestWorkflow(t *testing.T) {
ctx := context.Background()
// One-call setup: container + client + cleanup
container, temporalClient, cleanup, err := testcontainer.Setup(
ctx,
testcontainer.ClientConfig{Namespace: "default"},
testcontainer.Options{Logger: t},
)
require.NoError(t, err)
defer cleanup()
// Use temporalClient to start workflows, create workers, etc.
}

# docker/compose.yml (add alongside PostgreSQL)
services:
temporal:
image: temporalio/auto-setup:latest
ports:
- "7233:7233"
environment:
DB: postgresql
DB_PORT: 5432
POSTGRES_USER: postgres
POSTGRES_PWD: postgres
POSTGRES_SEEDS: postgres
depends_on:
- postgres
temporal-ui:
image: temporalio/ui:latest
ports:
- "8233:8080"
environment:
TEMPORAL_ADDRESS: temporal:7233
depends_on:
- temporal

package service_test
func TestUserService_Create(t *testing.T) {
// No Docker, no network — pure logic tests
result, err := svc.Create(ctx, req)
require.NoError(t, err)
assert.Equal(t, "alice", result.Username)
}

Run: `go test ./... -short`
//go:build integration
package repository_test
func TestUserRepo_Integration(t *testing.T) {
pool := testutil.SetupTestDB(t) // Testcontainer PostgreSQL
repo := repository.NewUserRepo(pool)
user, err := repo.Create(ctx, model)
require.NoError(t, err)
assert.NotZero(t, user.ID)
}

Run: `go test ./internal/... -tags=integration -v -count=1 -timeout 300s`
Full-stack API tests: real HTTP server + real database. See Section 10 below.
Run: `go test ./test/e2e/ -tags=integration -v -count=1 -timeout 300s`
//go:build example
package main
func main() {
// Runnable demonstration
}

Run: `go run -tags=example ./examples/...`
Always use github.com/stretchr/testify:
import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
require.NoError(t, err) // Fail immediately if error
assert.Equal(t, want, got) // Continue on failure

This is the pattern AI agents most commonly miss. E2E tests verify the full stack — real HTTP requests against a real server backed by a real database.
//go:build integration
package e2e
import (
"bytes"
"encoding/json"
"fmt"
"io"
"net"
"net/http"
"testing"
"github.com/labstack/echo/v4"
"github.com/labstack/echo/v4/middleware"
"myapp/internal/handler"
"myapp/internal/repository"
"myapp/internal/service"
"myapp/internal/testutil"
)
// testServer holds everything needed to interact with a running test API server.
type testServer struct {
BaseURL string
Echo *echo.Echo
}
// startTestServer boots the full API stack against a real PostgreSQL container.
// It replicates the wiring from cmd/server/main.go but listens on a random port.
func startTestServer(t *testing.T) *testServer {
t.Helper()
// Real database via testcontainer
pool := testutil.SetupTestDB(t)
// Wire the same layers as cmd/server/main.go
userRepo := repository.NewUserRepo(pool)
userSvc := service.NewUserService(userRepo)
userHandler := handler.NewUserHandler(userSvc)
e := echo.New()
e.HideBanner = true
e.Use(middleware.Recover())
// Health (replicates server package behavior)
e.GET("/health", func(c echo.Context) error {
return c.JSON(http.StatusOK, map[string]string{"status": "UP"})
})
// App routes (same wiring as production)
apiV1 := e.Group("/api/v1")
userHandler.RegisterRoutes(apiV1.Group("/users"))
// Listen on random port
ln, err := net.Listen("tcp", "127.0.0.1:0")
if err != nil {
t.Fatalf("failed to listen: %v", err)
}
e.Listener = ln
go func() {
if err := e.Start(""); err != nil && err != http.ErrServerClosed {
// Server stopped
}
}()
t.Cleanup(func() {
_ = e.Close()
})
return &testServer{
BaseURL: fmt.Sprintf("http://%s", ln.Addr().String()),
Echo: e,
}
}
// --- HTTP helpers ---
func (ts *testServer) get(t *testing.T, path, token string) *http.Response {
t.Helper()
req, _ := http.NewRequest(http.MethodGet, ts.BaseURL+path, nil)
if token != "" {
req.Header.Set("Authorization", "Bearer "+token)
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
t.Fatalf("request failed: %v", err)
}
return resp
}
func (ts *testServer) postJSON(t *testing.T, path string, body interface{}, token string) *http.Response {
t.Helper()
jsonBody, _ := json.Marshal(body)
req, _ := http.NewRequest(http.MethodPost, ts.BaseURL+path, bytes.NewReader(jsonBody))
req.Header.Set("Content-Type", "application/json")
if token != "" {
req.Header.Set("Authorization", "Bearer "+token)
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
t.Fatalf("request failed: %v", err)
}
return resp
}
func (ts *testServer) putJSON(t *testing.T, path string, body interface{}, token string) *http.Response {
t.Helper()
jsonBody, _ := json.Marshal(body)
req, _ := http.NewRequest(http.MethodPut, ts.BaseURL+path, bytes.NewReader(jsonBody))
req.Header.Set("Content-Type", "application/json")
if token != "" {
req.Header.Set("Authorization", "Bearer "+token)
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
t.Fatalf("request failed: %v", err)
}
return resp
}
func (ts *testServer) delete(t *testing.T, path, token string) *http.Response {
t.Helper()
req, _ := http.NewRequest(http.MethodDelete, ts.BaseURL+path, nil)
if token != "" {
req.Header.Set("Authorization", "Bearer "+token)
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
t.Fatalf("request failed: %v", err)
}
return resp
}
func readBody(t *testing.T, resp *http.Response) []byte {
t.Helper()
defer resp.Body.Close()
body, err := io.ReadAll(resp.Body)
if err != nil {
t.Fatalf("failed to read body: %v", err)
}
return body
}
func parseJSON(t *testing.T, resp *http.Response) map[string]interface{} {
t.Helper()
body := readBody(t, resp)
var result map[string]interface{}
if err := json.Unmarshal(body, &result); err != nil {
t.Fatalf("failed to parse JSON: %v\nbody: %s", err, string(body))
}
return result
}
func parseJSONArray(t *testing.T, resp *http.Response) []map[string]interface{} {
t.Helper()
body := readBody(t, resp)
var result []map[string]interface{}
if err := json.Unmarshal(body, &result); err != nil {
t.Fatalf("failed to parse JSON array: %v\nbody: %s", err, string(body))
}
return result
}
// login authenticates and returns the session token.
func (ts *testServer) login(t *testing.T, username, password string) string {
t.Helper()
resp := ts.postJSON(t, "/api/v1/auth/login", map[string]string{
"username": username,
"password": password,
}, "")
if resp.StatusCode != http.StatusOK {
body := readBody(t, resp)
t.Fatalf("login failed: %d %s", resp.StatusCode, string(body))
}
result := parseJSON(t, resp)
return result["token"].(string)
}

//go:build integration
package e2e
import (
"net/http"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestHealthEndpoint(t *testing.T) {
ts := startTestServer(t)
resp := ts.get(t, "/health", "")
assert.Equal(t, http.StatusOK, resp.StatusCode)
result := parseJSON(t, resp)
assert.Equal(t, "UP", result["status"])
}
func TestUserCRUD(t *testing.T) {
ts := startTestServer(t)
token := ts.login(t, "admin", "changeme")
// Create
resp := ts.postJSON(t, "/api/v1/users", map[string]string{
"username": "alice",
"email": "alice@example.com",
"password": "secret123",
}, token)
require.Equal(t, http.StatusCreated, resp.StatusCode)
created := parseJSON(t, resp)
assert.Equal(t, "alice", created["username"])
// List
resp = ts.get(t, "/api/v1/users", token)
require.Equal(t, http.StatusOK, resp.StatusCode)
users := parseJSONArray(t, resp)
assert.GreaterOrEqual(t, len(users), 1)
// Delete (assumes a fresh DB where the seeded admin is ID 1 and alice is ID 2)
resp = ts.delete(t, "/api/v1/users/2", token)
assert.Equal(t, http.StatusNoContent, resp.StatusCode)
_ = readBody(t, resp)
}

Key design decisions:
- Each test calls `startTestServer(t)` — full isolation via a fresh testcontainer DB.
- Uses the `//go:build integration` tag (same as integration tests) since it needs Docker.
- HTTP helpers (`get`, `postJSON`, `putJSON`, `delete`) live on the `testServer` struct.
- The `login()` helper handles auth token extraction.
- Test scenarios: CRUD lifecycle, auth lifecycle, role enforcement, edge cases (duplicates, invalid input).
Second most commonly missed pattern. Swagger provides auto-generated API docs and a test UI.
// tools.go
//go:build tools
package tools
import _ "github.com/swaggo/swag/cmd/swag"

// cmd/server/main.go
import _ "myapp/docs" // swagger generated docs
// @title My App API
// @version 1.0
// @description REST API for My App
// @host localhost:8080
// @BasePath /
// @securityDefinitions.apikey BearerAuth
// @in header
// @name Authorization
// @description Enter your session token with the `Bearer ` prefix
func main() {
// ...
}

// @Summary Create user
// @Description Create a new user account
// @Tags users
// @Accept json
// @Produce json
// @Param body body CreateUserRequest true "User data"
// @Success 201 {object} UserResponse
// @Failure 400 {object} ErrorResponse
// @Failure 409 {object} ErrorResponse
// @Security BearerAuth
// @Router /api/v1/users [post]
func (h *UserHandler) Create(c echo.Context) error {
// ...
}

Define exported structs so swag can parse them:
package handler
// ErrorResponse represents an error response.
type ErrorResponse struct {
Error string `json:"error" example:"error message"`
}
// CreateUserRequest represents the create user request body.
type CreateUserRequest struct {
Username string `json:"username" example:"johndoe"`
Email string `json:"email" example:"john@example.com"`
Password string `json:"password" example:"secret123"`
Role string `json:"role" example:"viewer"`
}
// UserResponse represents a user in API responses.
type UserResponse struct {
ID int `json:"id" example:"1"`
Username string `json:"username" example:"admin"`
Email string `json:"email" example:"admin@example.com"`
Role string `json:"role" example:"admin"`
}

- Use `example` struct tags for Swagger example values.
- Keep DTOs separate from GORM models — the handler layer maps between them.
import echoSwagger "github.com/swaggo/echo-swagger"
// Inside EchoConfigurer or Operation callback:
e.GET("/swagger/*", echoSwagger.WrapHandler)

swag init -g cmd/server/main.go -o docs --parseDependency --parseInternal

Commit the generated docs/ directory. Re-run after any annotation changes.
Third most commonly missed pattern.
`.http` files enable one-click API testing in IntelliJ and VS Code (REST Client extension).
Place in http/api.http:
### Variables
@host = http://localhost:8080
@contentType = application/json
### ==========================================================
### Auth
### ==========================================================
### Login as admin
POST {{host}}/api/v1/auth/login
Content-Type: {{contentType}}
{
"username": "admin",
"password": "changeme"
}
> {%
client.global.set("token", response.body.token);
%}
### Get current user
GET {{host}}/api/v1/auth/me
Authorization: Bearer {{token}}
### Logout
POST {{host}}/api/v1/auth/logout
Authorization: Bearer {{token}}
### ==========================================================
### Users (Admin only)
### ==========================================================
### List all users
GET {{host}}/api/v1/users
Authorization: Bearer {{token}}
### Create user
POST {{host}}/api/v1/users
Content-Type: {{contentType}}
Authorization: Bearer {{token}}
{
"username": "viewer1",
"email": "viewer1@example.com",
"password": "password123",
"role": "viewer"
}
### Update user (change ID as needed)
PUT {{host}}/api/v1/users/2
Content-Type: {{contentType}}
Authorization: Bearer {{token}}
{
"role": "admin"
}
### Delete user (change ID as needed)
DELETE {{host}}/api/v1/users/2
Authorization: Bearer {{token}}
### ==========================================================
### Health
### ==========================================================
### Health check
GET {{host}}/health
### Swagger UI
GET {{host}}/swagger/index.html

Conventions:
- `@host` variable at the top — change once for different environments.
- `> {% ... %}` response handler scripts capture tokens after login.
- `### =====` section headers group endpoints by resource.
- Cover every API endpoint with realistic example payloads.
- Keep request bodies minimal but complete.
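To switch `host` per environment without editing the `.http` file, the JetBrains HTTP Client (and the VS Code REST Client extension, with similar syntax) can read an `http-client.env.json` placed next to `api.http`. A sketch with placeholder hostnames — when you use it, drop the `@host` file variable so the selected environment supplies the value:

```json
{
  "local": {
    "host": "http://localhost:8080"
  },
  "staging": {
    "host": "https://staging.example.com"
  }
}
```

Select the environment (e.g. `local`) when running a request and `{{host}}` resolves from this file.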
Commands that use Nix-provided tools (Go, linters, formatters) are prefixed with {{.N}} which expands to nix develop -c. Infrastructure commands (docker compose, podman) run bare since they are system-level.
version: "3"
vars:
N: "nix develop -c"
BIN_DIR: ./bin
tasks:
default:
desc: List all available tasks
cmds:
- task --list-all
# --- Infrastructure (system-level — no Nix prefix) ---
infra:up:
desc: Start dev services (PostgreSQL, Temporal, etc.)
cmds:
- docker compose -f docker/compose.yml up -d
infra:down:
desc: Stop dev services
cmds:
- docker compose -f docker/compose.yml down
# --- Development ---
dev:api:
desc: Run the API server locally
cmds:
- '{{.N}} go run ./cmd/server'
dev:worker:
desc: Run the Temporal worker locally
cmds:
- '{{.N}} go run ./cmd/worker'
# --- Test ---
test:unit:
desc: Run unit tests (short mode, no Docker)
cmds:
- '{{.N}} go test ./... -short -v'
test:integration:
desc: Run integration tests (Docker required)
cmds:
- '{{.N}} go test ./internal/... -tags=integration -v -count=1 -timeout 300s'
test:e2e:
desc: Run E2E API tests (full server + real DB)
cmds:
- '{{.N}} go test ./test/e2e/ -tags=integration -v -count=1 -timeout 300s'
test:all:
desc: Run all tests (unit + integration + e2e)
cmds:
- task: test:unit
- task: test:integration
- task: test:e2e
# --- Documentation ---
docs:swagger:
desc: Generate Swagger/OpenAPI docs
cmds:
- '{{.N}} swag init -g cmd/server/main.go -o docs --parseDependency --parseInternal'
# --- Quality ---
lint:
desc: Run golangci-lint
cmds:
- '{{.N}} golangci-lint run'
fmt:
desc: Format code with gofumpt
cmds:
- '{{.N}} gofumpt -w .'
# --- Dependencies ---
vendor:
desc: Tidy and vendor dependencies
cmds:
- '{{.N}} go mod tidy'
- '{{.N}} go mod vendor'
# --- Nix ---
nix:check:
desc: Verify Nix environment and tool availability
cmds:
- '{{.N}} go version'
- '{{.N}} golangci-lint --version'
- echo "All tools available"
nix:update:
desc: Update flake inputs (bump tool versions)
cmds:
- nix flake update
- echo "Flake inputs updated. Run 'task nix:check' to verify."
clean:
desc: Remove build artifacts
cmds:
- rm -rf {{.BIN_DIR}} output/

- `flake.nix` exists with project-specific dev tools.
- `flake.lock` is committed.
- `go-task` is not in `flake.nix` — it's global via Homebrew (chicken-and-egg). Same for `gh`.
- Taskfile uses the `{{.N}}` prefix (`nix develop -c`) for all Nix-provided tool commands. Infrastructure commands (docker compose) run bare.
- `.direnv/` is in `.gitignore` if `.envrc` is present.
- Config tags: Every config struct field has `yaml`, `mapstructure`, and `validate` tags.
- OTelConfig isolation: `OTelConfig *otel.Config` is always tagged `yaml:"-" mapstructure:"-"`. Never serialized.
- OTel injection: OTelConfig is set at runtime via functional options or direct assignment — never from YAML/env.
- LayerContext usage: Every service/repository method starts with `otel.Layers.Start*()` and defers `lc.End()`.
- Context propagation: Always pass `lc.Context()` to downstream calls, never the original `ctx`.
- Error returns: `lc.Error(err, msg)` returns `error` — use it as a return value. `lc.Success(msg)` returns nothing.
- Layer separation: Handler → Service → Repository. No skipping layers. Handlers never touch `*gorm.DB` directly.
- Consumer-defined interfaces: Interfaces live in the consuming module's `interfaces.go`. Each consumer defines only the methods it uses.
- Module-local errors: Each module defines domain errors in `errors.go` (`var ErrXxx = errors.New(...)`). Handlers map these to HTTP status codes.
- DTO mapping: Handlers use DTOs (`dto.go`) for request/response. GORM models stay in `shared/model/`. Map between them in handlers or services.
- Exported DTOs: Request/Response structs must be exported (capitalized) for `swag` to parse.
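The DTO-mapping rule in a minimal, self-contained form. Type and field names are illustrative, not from the library:

```go
package main

import "fmt"

// User mirrors a GORM model (normally in shared/model/). Note the field that
// must never reach API responses.
type User struct {
	ID           int
	Username     string
	Email        string
	PasswordHash string
	Role         string
}

// UserResponse is the exported DTO (normally in the module's dto.go) that swag
// can parse and handlers return.
type UserResponse struct {
	ID       int    `json:"id"`
	Username string `json:"username"`
	Email    string `json:"email"`
	Role     string `json:"role"`
}

// toUserResponse maps model → DTO in the handler layer; PasswordHash is
// deliberately dropped so it can never leak into a response body.
func toUserResponse(u User) UserResponse {
	return UserResponse{ID: u.ID, Username: u.Username, Email: u.Email, Role: u.Role}
}

func main() {
	u := User{ID: 1, Username: "admin", Email: "admin@example.com", PasswordHash: "x", Role: "admin"}
	fmt.Printf("%+v\n", toUserResponse(u))
}
```

Keeping the mapping in one small function per module makes it obvious which model fields are exposed — and which are not.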
- One DB hit per repository method — no multi-step queries, no loops with queries inside.
- Always use `r.db.WithContext(lc.Context())` — never skip context for OTel tracing.
- Always use parameterized queries (`?` placeholders) — never string concatenation for SQL values.
- Service owns transactions — repositories never start transactions. Use `db.Transaction()` in the service.
- Transaction callbacks use `tx` directly — not repository methods (which hold their own `r.db` outside the tx scope).
- Keep transactions short — no external API calls or long-running work inside them.
- One client per external service — each in `shared/client/`, using `rest.NewClient()` with OTel.
- Never expose HTTP details — return typed Go structs, not raw responses.
- Prefer header-based auth over query-string API keys.
- Swagger annotations: Every handler method has `@Summary`, `@Tags`, `@Param`, `@Success`, `@Failure`, `@Router`, and `@Security` (where applicable).
- `.http` file exists: `http/api.http` covers every API endpoint with example payloads.
- E2E tests exist: `test/e2e/` contains full-stack API tests using `startTestServer(t)` with a testcontainer DB.
- Migration naming: `{6-digit sequence}_{description}.{up|down}.sql` with `embed.FS`.
- Test assertions: Use `testify/assert` and `testify/require` — never `if err != nil { t.Fatal() }` patterns.
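The `embed.FS` mentioned above is what `migrations.FS` in the worker and server mains refers to. A sketch of the one-file `migrations` package (it only compiles inside a project where matching `.sql` files exist; file names are examples following the naming rule):

```go
// migrations/embed.go
package migrations

import "embed"

// FS exposes the SQL files (e.g. 000001_create_users.up.sql and
// 000001_create_users.down.sql) to db.RunPostgresMigrationsWithGorm.
//
//go:embed *.sql
var FS embed.FS
```

The `//go:embed` directive must sit directly above the `var` declaration, and fails the build if no file matches the pattern — which doubles as a guard against an empty migrations directory.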
- Temporal separation: Workflow functions use `workflow.Context`, activity functions use `context.Context`. Activities hold injected dependencies via a struct. The worker binary lives in `cmd/worker/`, separate from the API server.
- Temporal config: `temporal.Config` has no `OTelConfig` field — it is fully serializable. Embed it as a value type in `AppConfig`, not a pointer.
| Need | Package | Key API |
|---|---|---|
| Configuration | `config` | `config.LoadString[T](yaml, prefix...)` |
| OpenTelemetry | `otel` | `otel.NewConfig(name)`, `otel.Layers.Start*()`, `otel.F(k, v)` |
| OTel Logging | `otel` | `otel.NewLoggerProviderWithOptions(name, opts...)` |
| Legacy Logging | `logging` | `logging.Initialize(name, debug)` |
| Database Pool | `db` | `db.ConnectionConfig{...}.Pool()` |
| Migrations | `db` | `db.RunPostgresMigrationsWithGorm(ctx, pool, fs, path)` |
| HTTP Server | `server` | `server.StartWithConfig(cfg)`, `server.DefaultConfig(port, op, shut)` |
| gRPC Server | `grpc` | `grpc.New(opts...)`, `grpc.Start(port, registrar, opts...)` |
| REST Client | `rest` | `rest.NewClient(opts...)`, `client.MakeRequestWithTrace(...)` |
| Retry | `retry` | `retry.Do(ctx, cfg, op)`, `retry.DefaultConfig().WithName(n).WithOTel(c)` |
| Concurrency | `concurrent` | `concurrent.ExecuteConcurrently(ctx, funcs)` |
| Temporal Client | `temporal` | `temporal.NewClient(cfg)` |
| Temporal Worker | `temporal` | `temporal.NewWorkerManager(cfg)`, `wm.Register(queue, opts)` |
| Temporal Schedule | `temporal` | `temporal.NewScheduleManager(client)`, `sm.CreateWorkflowSchedule(...)` |
| Temporal Test | `temporal/testcontainer` | `testcontainer.Setup(ctx, cfg, opts)` |
| Docker | `docker` | `docker.New(opts...)`, `docker.NewFromRequest(req)` |
With the module-based layout, each module exports its own constructors. Import modules by alias to avoid name collisions:
package main
import (
"context"
"flag"
"log"
"github.com/labstack/echo/v4"
"github.com/labstack/echo/v4/middleware"
echoSwagger "github.com/swaggo/echo-swagger"
"myapp/internal/config"
"myapp/internal/shared/client"
usermod "myapp/internal/user"
vesselmod "myapp/internal/vessel"
dashboardmod "myapp/internal/dashboard"
"myapp/migrations"
"github.com/jasoet/pkg/v2/db"
"github.com/jasoet/pkg/v2/otel"
"github.com/jasoet/pkg/v2/server"
_ "myapp/docs" // swagger generated docs
)
// @title My App API
// @version 1.0
// @description REST API for My App
// @host localhost:8080
// @BasePath /
// @securityDefinitions.apikey BearerAuth
// @in header
// @name Authorization
// @description Enter your session token with the `Bearer ` prefix
func main() {
configPath := flag.String("config", "config.yaml", "path to config file")
flag.Parse()
// --- Config ---
cfg, err := config.Load(*configPath)
if err != nil {
log.Fatalf("failed to load config: %v", err)
}
// --- OpenTelemetry ---
otelCfg := otel.NewConfig("myapp")
// --- Database ---
cfg.Database.OTelConfig = otelCfg
pool, err := cfg.Database.Pool()
if err != nil {
log.Fatalf("failed to connect to database: %v", err)
}
// --- Migrations ---
if err := db.RunPostgresMigrationsWithGorm(context.Background(), pool, migrations.FS, "."); err != nil {
log.Fatalf("failed to run migrations: %v", err)
}
// --- 1. Repositories (depend on *gorm.DB) ---
userRepo := usermod.NewRepository(pool)
vesselRepo := vesselmod.NewRepository(pool)
// --- 2. Clients (depend on *otel.Config + config values) ---
weatherClient := client.NewWeatherClient(otelCfg, cfg.Weather.BaseURL, cfg.Weather.APIKey)
// --- 3. Services (depend on repositories, clients, optionally *gorm.DB for transactions) ---
userSvc := usermod.NewService(userRepo)
dashboardSvc := dashboardmod.NewService(vesselRepo, weatherClient)
// --- 4. Handlers (depend on services) ---
userHandler := usermod.NewHandler(userSvc)
dashboardHandler := dashboardmod.NewHandler(dashboardSvc)
// --- Server ---
server.StartWithConfig(server.Config{
Port: cfg.Server.Port,
ShutdownTimeout: cfg.Server.ShutdownTimeout,
Middleware: []echo.MiddlewareFunc{
middleware.Recover(),
middleware.Logger(),
},
EchoConfigurer: func(e *echo.Echo) {
e.GET("/swagger/*", echoSwagger.WrapHandler)
apiV1 := e.Group("/api/v1")
userHandler.RegisterRoutes(apiV1.Group("/users"))
dashboardHandler.RegisterRoutes(apiV1.Group("/dashboard"))
},
Operation: func(e *echo.Echo) {},
Shutdown: func(e *echo.Echo) {
log.Println("Shutting down...")
if sqlDB, err := pool.DB(); err == nil {
_ = sqlDB.Close()
}
_ = otelCfg.Shutdown(context.Background())
},
})
}

For the simpler flat layout:
import (
"myapp/internal/config"
"myapp/internal/handler"
"myapp/internal/repository"
"myapp/internal/service"
)
// Repositories
userRepo := repository.NewUserRepo(pool)
// Services
userSvc := service.NewUserService(userRepo)
// Handlers
userHandler := handler.NewUserHandler(userSvc)

Both examples demonstrate the same bootstrap sequence: config → OTel → database → migrations → repositories → clients → services → handlers → routes → server start.