Alpaca over NATS, with k8s support via Helm. Nalpaca sits in front of the Alpaca API and exposes it to you over NATS. It takes on the onus of trade retries, trade updates, backoffs, failures, metrics, and logging: essentially everything you would otherwise build into your own code that talks to Alpaca. It uses protobuf to keep messaging as fast and small as possible. Some example things it can do:
- Trades: push trades onto the NATS message bus and the trader will execute them for you, retrying as many times as you want
- Trade updates: when a trade is filled, a notification is pushed
- Positions: positions are available in a KV store
- Trade cancels: cancel trades (in progress)
- Streaming: stream stocks, and eventually options
Nalpaca is built on NATS resources: it creates two primary streams, a number of consumers, and a single KV store
- Action stream: the stream you write to when you want nalpaca to perform actions on your behalf, such as executing trades. View the docs for the component you wish to use to see which subjects to publish to. An action consumer is also created; it is configured as a `workqueue` and is reserved only for nalpaca to use
- Data stream: the stream nalpaca writes to in order to publish live updates, which include updates on trades, stock bars, and more in future releases. Nalpaca writes to this stream, and you subscribe to one of the consumers it offers in order to receive the data. See the component you want to use for details. This stream is configured as an `interest` stream
- KV store: a global store for the app, where nalpaca writes useful things like your current position list. You can subscribe to it via a key watcher, or just access it on the fly if needed. See the docs for the component you want to use for more
To create trades, build a protobuf message of type `tradesvc.v0.Trade`, create an idempotency ID, and publish it on the subject `nalpaca.action.v0.orders.<string client order ID (<=128 chars)>`.
To execute a cancel, publish an empty message on `<prefix>.action.v0.cancel.<order ID or special keyword "ALL">`. Using the special keyword `ALL` initiates a cancel of all orders.
TODO docs
Stream bar data. Possibly the most useful feature: it lets you broadcast market data across your architecture to many listeners.
Updates on trades can be received through a consumer. Connect to `nalpaca-tradeupdater-consumer-v0`; once connected, the consumer will receive updates on subject `<prefix>.data.v0.tradeupdates.<TICKER>` with message type `tradesvc.v0.TradeUpdate`.
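Since the ticker is the last subject token, a handler can recover it from the incoming message's subject. A minimal sketch (the `tickerFromSubject` helper is hypothetical; a real handler would also unmarshal `msg.Data` into a `tradesvc.v0.TradeUpdate`):

```go
package main

import (
	"fmt"
	"strings"
)

// tickerFromSubject extracts the ticker from a trade-update subject of
// the form <prefix>.data.v0.tradeupdates.<TICKER>. It returns an error
// if the subject does not match that shape.
func tickerFromSubject(subject string) (string, error) {
	parts := strings.Split(subject, ".")
	if len(parts) < 5 || parts[len(parts)-2] != "tradeupdates" {
		return "", fmt.Errorf("not a trade-update subject: %q", subject)
	}
	return parts[len(parts)-1], nil
}

func main() {
	t, err := tickerFromSubject("nalpaca.data.v0.tradeupdates.AAPL")
	if err != nil {
		panic(err)
	}
	fmt.Println(t) // AAPL
}
```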
Positions for the account are stored in a KV store under the bucket specified in the Helm deployment (default bucket name: `nalpaca`). Once you connect to the bucket, the key `positions` holds the positions of the account with type `tradesvc.v0.Positions`. Because it's a KV store, you can subscribe to changes or fetch the value whenever desired.
Deployment is geared toward k8s but does not require it. Right now this only works with a single pod, so it can't handle any degree of scaling yet because of the nature of Alpaca websockets. Future releases may be able to.
The only real dependencies for nalpaca are an Alpaca account and a NATS JetStream server to connect to. If you have those two things, you're good to go.
- Set the NATS URL you want to connect to in `nats.url`
- Set the API key in `alpaca.apiKey`
- Set the `secrets.name` value (default: `nalpaca-secrets`) to a k8s secret containing the key `ALPACA_API_SECRET`
- Optional: set a subject prefix so all subjects are namespaced for nalpaca (default: `nalpaca`)
- Optional: if you want to go right into live trading, uncomment the production API URL
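Put together, a values override might look like the fragment below. The key names follow the ones above; the subject-prefix key and live-trading URL lines are illustrative placeholders, so check the chart's `values.yaml` for the exact layout:

```yaml
nats:
  url: nats://nats.default.svc.cluster.local:4222
alpaca:
  apiKey: <your alpaca API key>
secrets:
  name: nalpaca-secrets   # k8s secret containing the key ALPACA_API_SECRET
# subjectPrefix: nalpaca  # hypothetical key name for the subject prefix
# Uncomment the production API URL in the chart to go into live trading
```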
Adding user/password:
- Set the username in the NATS config block at `nats.user`
- Add `NATS_PASSWORD` to the secret located at `secrets.name`
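The secret referenced by `secrets.name` could then be declared roughly as follows (an illustrative manifest; the placeholder values are yours to fill in):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: nalpaca-secrets   # must match secrets.name in your values
stringData:
  ALPACA_API_SECRET: <your alpaca API secret>
  NATS_PASSWORD: <your NATS password>   # only needed with user/password auth
```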
Make tasks

| Target | Description |
| --- | --- |
| `(target)` | Build a target binary for the current arch for running locally: `nalpaca` |
| `all` | Build all targets |
| `docker` | Build docker image |
| `compose` | Build docker compose |
| `up` | Run docker compose |
| `down` | Tear down docker compose |
| `run-compose` | Run a binary with docker compose |
| `proto` | `buf generate` |
| `clean` | `gofmt`, `go generate`, then `go mod tidy`, and finally `rm -rf bin/` |
| `test` | Run `go vet`, then test all files |
| `help` | Print help |
- `cp .env.tmpl .env`
- Set all your secrets in `.env` (NATS user/pass not required)
- Choose the method:
Docker compose
- Optional: if you feel the need to use a different NATS connection, edit `compose.yaml` to do so
- Run `make compose up`
Terminal
This option requires NATS running elsewhere that you can connect to, and requires running the init script
- Run the `scripts/nats/init.sh` script; you can edit its options to your liking for testing
- `make all run-nalpaca` will start the server
Debugging protobuf messages requires some scripting. Some scripts exist in the `scripts` directory to debug certain things, e.g. the positions KV store.
