# Cosmos Stack Developer Documentation Source: https://docs.cosmos.network/index

Cosmos Stack Developer Documentation

The most widely adopted, battle-tested Layer 1 blockchain stack, trusted by 200+ chains live in production.

How it works

The stack is modular. Leverage pre-built components or integrate custom features for your specific use case, from consensus mechanisms to governance and compliance.

Fast and Reliable

Performant, customizable, and EVM-compatible, the Cosmos stack offers engineers full control of their blockchain infrastructure and implementation. Its stable and secure codebase enables blockchains to achieve up to 10,000 transactions per second.

Maintainers and Contributors

Cosmos Labs maintains the core components of the stack: Cosmos SDK, CometBFT, IBC, Cosmos EVM, and various developer tools and frameworks. In addition to developing and maintaining the Cosmos Stack, Cosmos Labs provides advisory and engineering services for blockchain solutions.

 

Cosmos Labs is a wholly-owned subsidiary of the Interchain Foundation, the Swiss nonprofit responsible for treasury management, funding public goods, and supporting governance for Cosmos.

 

The Cosmos Stack is supported by a global community of open-source contributors.

# Changelog

Source: https://docs.cosmos.network/sdk/latest/changelog/release-notes

Release history and changelog for Cosmos SDK. This page tracks releases and changes for v0.54.1. For the full release history, see the [CHANGELOG](https://github.com/cosmos/cosmos-sdk/blob/main/CHANGELOG.md) on GitHub.

## Improvements

* (x/auth) [#26297](https://github.com/cosmos/cosmos-sdk/pull/26297) Cap the pagination limit in `GetBlockWithTxs` at the number of txs within the block instead of 100.

## Breaking Changes

* (x/consensus) [#25607](https://github.com/cosmos/cosmos-sdk/pull/25607) Add `AuthorityParams` to consensus params. When set, the consensus params authority takes precedence over per-keeper authority for all module parameter updates. Keeper constructor signatures are unchanged.
* (x/staking) [#25724](https://github.com/cosmos/cosmos-sdk/issues/25724) Validate `BondDenom` in `MsgUpdateParams` to prevent setting non-existent or zero-supply denoms.
* [#25778](https://github.com/cosmos/cosmos-sdk/pull/25778) Update `log` to log v2.
* [#25090](https://github.com/cosmos/cosmos-sdk/pull/25090) Moved deprecated modules to `./contrib`. These modules are still available but will no longer be actively maintained or supported in the Cosmos SDK Bug Bounty program.
  * `x/group`
  * `x/nft`
  * `x/circuit`
  * `x/crisis`
* (crypto) [#24414](https://github.com/cosmos/cosmos-sdk/pull/24414) Remove sr25519 support, since it was removed in CometBFT v1.x (see: CometBFT [#3646](https://github.com/cometbft/cometbft/pull/3646)).
* (x/mint) [#25599](https://github.com/cosmos/cosmos-sdk/pull/25599) Add max supply param.
* (x/gov) [#25615](https://github.com/cosmos/cosmos-sdk/pull/25615) Decouple `x/gov` from `x/staking` by making `CalculateVoteResultsAndVotingPowerFn` a required parameter to `keeper.NewKeeper` instead of `StakingKeeper`.
* (x/gov) [#25617](https://github.com/cosmos/cosmos-sdk/pull/25617) The `AfterProposalSubmission` hook now includes the proposer address as a parameter.
* (x/gov) [#25616](https://github.com/cosmos/cosmos-sdk/pull/25616) The `x/distribution` `DistrKeeper` is now optional. Genesis validation ensures `distrKeeper` is set if the distribution module is used as the proposal cancel destination.
* (systemtests) [#25930](https://github.com/cosmos/cosmos-sdk/pull/25930) Move `systemtests` into `testutil` and no longer under its own `go.mod`.
* (baseapp) [#26060](https://github.com/cosmos/cosmos-sdk/pull/26060) Remove `BaseApp.SetStoreMetrics`. The `StoreMetrics` interface never worked, so this removes dead code.
* (store) [#26061](https://github.com/cosmos/cosmos-sdk/pull/26061) Remove the store tracing API and all related plumbing:
  * Remove `SetTracer`, `SetTracingContext`, and `TracingEnabled` from the `MultiStore` interface.
  * Remove `CacheWrapWithTrace` from the `CacheWrapper` interface.
  * Remove `BaseApp.SetCommitMultiStoreTracer` and tracing context logic from `BaseApp.cacheTxContext` and `FinalizeBlock`.
  * Remove the `io.Writer` parameter from `servertypes.AppCreator` and `traceWriter io.Writer` from `servertypes.AppExporter`.
  * Remove the `traceStore io.Writer` parameter from `simapp.NewSimApp` and all enterprise simapp constructors.
  * Remove `traceStore io.Writer` from all `testutil/simsx` app factory signatures.
* (store) [#26042](https://github.com/cosmos/cosmos-sdk/pull/26042) The store package is now imported as `github.com/cosmos/cosmos-sdk/store/v2` instead of `cosmossdk.io/store`, and all import paths have changed.
* (baseapp) [#26138](https://github.com/cosmos/cosmos-sdk/pull/26138) Default the block gas meter to disabled. Adds a check that panics during parameter assignment if the block gas meter is enabled while block-stm parallel execution is configured.

## Features

* [#25471](https://github.com/cosmos/cosmos-sdk/pull/25471) Full BLS 12-381 support enabled.
* [#24872](https://github.com/cosmos/cosmos-sdk/pull/24872) Support BLS 12-381 for the CLI commands `init`, `gentx`, and `collect-gentx`.
* (crypto) [#24919](https://github.com/cosmos/cosmos-sdk/pull/24919) Add a `NewPubKeyFromBytes` function to the `secp256r1` package to create a `PubKey` from bytes.
* (server) [#24720](https://github.com/cosmos/cosmos-sdk/pull/24720) Add a `verbose_log_level` flag for configuring the log level when switching to verbose logging mode during sensitive operations (such as chain upgrades).
* (crypto) [#24861](https://github.com/cosmos/cosmos-sdk/pull/24861) Add a `PubKeyFromCometTypeAndBytes` helper function to convert from `comet/v2` PubKeys to the `cryptotypes.Pubkey` interface.
* (abci_utils) [#25008](https://github.com/cosmos/cosmos-sdk/pull/25008) Add the ability to assign a custom signer extraction adapter in `DefaultProposalHandler`.
* (x/distribution) [#25650](https://github.com/cosmos/cosmos-sdk/pull/25650) Add new gRPC query endpoints and CLI commands for `DelegatorStartingInfo`, `ValidatorHistoricalRewards`, and `ValidatorCurrentRewards`.
* [#25745](https://github.com/cosmos/cosmos-sdk/pull/25745) Add DiskIO telemetry via gopsutil.
* (grpc) [#25648](https://github.com/cosmos/cosmos-sdk/pull/25648) Add `earliest_block_height` and `latest_block_height` fields to `GetSyncingResponse`.
* (collections/codec) [#25827](https://github.com/cosmos/cosmos-sdk/pull/25827) Add `TimeValue` (`ValueCodec[time.Time]`) to collections/codec.
* (enterprise/poa) [#25838](https://github.com/cosmos/cosmos-sdk/pull/25838) Add the `poa` module under the `enterprise` directory.
* (grpc) [#25850](https://github.com/cosmos/cosmos-sdk/pull/25850) Add `GetBlockResults` and `GetLatestBlockResults` gRPC endpoints to expose CometBFT block results, including `finalize_block_events`.

## Improvements

* (ci) Use softprops/action-gh-release for main-nightly instead of custom gh/git to avoid repository ruleset conflicts.
* (telemetry) [#26006](https://github.com/cosmos/cosmos-sdk/pull/26006) Export the `ExtensionOptions` type for programmatic otel.yaml generation.
* [#25955](https://github.com/cosmos/cosmos-sdk/pull/25955) Use cosmos/btree directly instead of replacing it in `go.mod` files.
* (types) [#25342](https://github.com/cosmos/cosmos-sdk/pull/25342) Undeprecated `EmitEvent` and `EmitEvents` on the `EventManager`. These functions will continue to be maintained.
* (types) [#24668](https://github.com/cosmos/cosmos-sdk/pull/24668) Scope the global config to a particular binary so that multiple SDK binaries can be properly run on the same machine.
* (baseapp) [#24655](https://github.com/cosmos/cosmos-sdk/pull/24655) Add mutex locks for `state` and make `lastCommitInfo` atomic to prevent race conditions between `Commit` and `CreateQueryContext`.
* (proto) [#24161](https://github.com/cosmos/cosmos-sdk/pull/24161) Remove unnecessary annotations from the `x/staking` authz proto.
* (x/bank) [#24660](https://github.com/cosmos/cosmos-sdk/pull/24660) Improve performance of the `GetAllBalances` and `GetAccountsBalances` keeper methods.
* (collections) [#25464](https://github.com/cosmos/cosmos-sdk/pull/25464) Add an `IterateRaw` method to the `Multi` index type to satisfy the query `Collection` interface.
* (api) [#25613](https://github.com/cosmos/cosmos-sdk/pull/25613) Separated deprecated modules into the contrib directory, distinct from api, to enable and unblock new proto changes without affecting legacy code.
* (server) [#25740](https://github.com/cosmos/cosmos-sdk/pull/25740) Add a variadic `grpc.DialOption` parameter to `StartGrpcServer` for custom gRPC client connection options.
* (blockstm) [#25765](https://github.com/cosmos/cosmos-sdk/pull/25765) Minor code readability improvement in block-stm.
* (blockstm) [#25786](https://github.com/cosmos/cosmos-sdk/pull/25786) Add pre-state checking in transaction state transition.
* (server/config) [#25807](https://github.com/cosmos/cosmos-sdk/pull/25807) Reject overlapping historical gRPC block ranges.
* [#25857](https://github.com/cosmos/cosmos-sdk/pull/25857) Reduce the scope of the mutex in `PriorityNonceMempool.Remove`.
* (baseapp) [#25862](https://github.com/cosmos/cosmos-sdk/pull/25862) Skip running validateBasic when rechecking txs. (Backport of [#20208](https://github.com/cosmos/cosmos-sdk/pull/20208).)
* (blockstm) [#25883](https://github.com/cosmos/cosmos-sdk/pull/25883) Re-use the decoded tx object in pre-estimates.
* (blockstm) [#25788](https://github.com/cosmos/cosmos-sdk/pull/25788) Only validate transactions that executed at least once.
* (blockstm) [#25767](https://github.com/cosmos/cosmos-sdk/pull/25767) Optimize block-stm MVMemory with a bitmap index.

## Bug Fixes

* (baseapp) [#25331](https://github.com/cosmos/cosmos-sdk/issues/25331) Avoid noisy errors when gRPC response headers are already sent; set block height as a header when possible and fall back to a trailer.
* (blockstm) [#25789](https://github.com/cosmos/cosmos-sdk/issues/25789) Wake up suspended executors when the scheduler doesn't complete, to prevent goroutine leaks.
* (grpc) [#25647](https://github.com/cosmos/cosmos-sdk/pull/25647) Return the actual `earliest_store_height` in the `node.Status` gRPC endpoint instead of a hardcoded `0`.
* (types/query) [#25665](https://github.com/cosmos/cosmos-sdk/issues/25665) Fix the pagination offset when querying a collection with a predicate function.
* (x/staking) [#25649](https://github.com/cosmos/cosmos-sdk/pull/25649) Add missing `defer iterator.Close()` calls in `IterateDelegatorRedelegations` and `GetRedelegations` to prevent resource leaks.
* (mempool) [#25563](https://github.com/cosmos/cosmos-sdk/pull/25563) Clean up sender indices in case of tx replacement.
* (x/epochs) [#25425](https://github.com/cosmos/cosmos-sdk/pull/25425) Fix `InvokeSetHooks` being called with a nil keeper and `AppModule` containing a copy instead of a pointer (hooks set after creating the `AppModule`, e.g. with depinject, didn't apply because it's a different instance).
* (client, client/rpc, x/auth/tx) [#24551](https://github.com/cosmos/cosmos-sdk/pull/24551) Handle cancellation properly when supplying context to client methods.
* (x/authz) [#24638](https://github.com/cosmos/cosmos-sdk/pull/24638) Fixed a minor bug where the grant key was cast as a string and dumped directly into the error message, leading to an error string possibly containing invalid UTF-8.
* (x/epochs) [#24770](https://github.com/cosmos/cosmos-sdk/pull/24770) Fix registration of epoch hooks in `InvokeSetHooks`.
* (x/epochs) [#25087](https://github.com/cosmos/cosmos-sdk/pull/25087) Remove a redundant error check in BeginBlocker.
* [GHSA-p22h-3m2v-cmgh](https://github.com/cosmos/cosmos-sdk/security/advisories/GHSA-p22h-3m2v-cmgh) Fix a bug where x/distribution could halt when historical rewards overflow.
* (x/staking) [#25258](https://github.com/cosmos/cosmos-sdk/pull/25258) Add the delegator address to the redelegate event.
* (x/bank) [#25751](https://github.com/cosmos/cosmos-sdk/pull/25751) Fix the recipient address in events.
* (client) [#25811](https://github.com/cosmos/cosmos-sdk/pull/25811) Fix file handle leaks in snapshot commands.
* (server/config) [#25806](https://github.com/cosmos/cosmos-sdk/pull/25806) Add missing commas in the historical gRPC config template.
* (client) [#25804](https://github.com/cosmos/cosmos-sdk/pull/25804) Add a `GetHeightFromMetadataStrict` API to the `grpc` client for better error handling.
* (x/staking) [#25829](https://github.com/cosmos/cosmos-sdk/pull/25829) Validate case sensitivity on authz grants in x/staking.
* (mempool) [#25869](https://github.com/cosmos/cosmos-sdk/pull/25869) Add thread safety to `NextSenderTx`.
* (blockstm) [#25912](https://github.com/cosmos/cosmos-sdk/pull/25912) Remove the `SigVerificationDecorator` signature incarnation cache, which caused state divergence under blockstm.
* (x/group) [#25922](https://github.com/cosmos/cosmos-sdk/pull/25922) Add a zero-total-weight check for ThresholdDecisionPolicy.
* (x/group) [#25917](https://github.com/cosmos/cosmos-sdk/pull/25917) Prevent creation of zero-weight groups.
* (x/group) [#25919](https://github.com/cosmos/cosmos-sdk/pull/25919) Add safer type assertions to group `DecisionPolicy` getter calls.
* (x/group) [#25920](https://github.com/cosmos/cosmos-sdk/pull/25920) Expand the voting period check to verify the period is positive instead of nonzero.
* (baseapp) [#26063](https://github.com/cosmos/cosmos-sdk/pull/26063) Fixes an issue where values embedded in context during ante handling were wiped after the handlers returned.

## Deprecated

* [#25948](https://github.com/cosmos/cosmos-sdk/pull/25948) Change the default `app.go` code to not use `depinject`, as we are phasing it out.
* (baseapp) [#26107](https://github.com/cosmos/cosmos-sdk/pull/26170) Deprecate the baseapp test helper `app.NewUncachedContext`; consider using `app.NewNextBlockContext` or `app.NewContext` instead. See `UPGRADING.md` for more details.

# Block-STM: Parallel Transaction Execution

Source: https://docs.cosmos.network/sdk/latest/experimental/blockstm

**Synopsis**
Block-STM enables parallel execution of transactions during `FinalizeBlock`, using optimistic concurrency control to improve block processing throughput.
**Prerequisite Readings**

* [BaseApp](/sdk/latest/learn/concepts/baseapp)
* [Transactions](/sdk/latest/learn/concepts/transactions)
* [Store](/sdk/latest/learn/concepts/store)

## Background

Block-STM is an algorithm originally published in the [Block-STM paper](https://arxiv.org/pdf/2203.06871) and implemented for the Aptos blockchain. It was later implemented in Go for Cosmos SDK-compatible chains by developers of the Cronos blockchain in [go-block-stm](https://github.com/crypto-org-chain/go-block-stm). That library was forked and integrated directly into the Cosmos SDK, with accompanying changes to the `baseapp` and `store` packages. Subsequent changes and improvements have been made on top of the original implementation to further optimize performance in both memory and time.

## Algorithm Overview

Block-STM implements a form of optimistic concurrency control to enable parallel execution of transactions. It does this by implementing read- and write-set tracking on top of the SDK's IAVL storage layer. This tracking, combined with the absolute ordering of transactions provided by the block proposal, feeds a validation phase that determines whether any two executed transactions have conflicting storage access. When a conflict is found, the algorithm re-executes and re-validates the conflicting transactions based on their order in the proposal.

Block-STM is currently only integrated into the `FinalizeBlock` phase of execution, meaning the code path is never reached until the block is agreed upon in consensus. The algorithm may be extended in the future to support different execution models, but for now it expects a complete block and returns its result after the entire block has been executed. For this reason, Block-STM is expected to produce results identical to serial execution. In other words, the `AppHash` produced by Block-STM's parallel execution should equal the `AppHash` produced by the default serial transaction runner.

## Safe Deployment Practices

Since the Block-STM executor is a general-purpose parallel execution engine, we recommend a phased rollout with extensive testing for each application individually.

* *Phased Rollout*: Since parallel execution is purely a performance optimization, applications should expect to calculate the same AppHash when using Block-STM as when serial execution is enabled via the default TxRunner. This allows teams to turn on parallel execution for a fraction of their nodes (for example, API nodes instead of validators, or a portion of a distributed validator cluster). Running a mixed fleet of parallel and serial execution for an extended time minimizes the blast radius in the event of a failure.
* *Message Type Support*: We have tested as many of the core SDK message types as possible, but since the Cosmos SDK allows arbitrary message creation, it is impossible to validate every message type that exists and every combination of workflows. Each team integrating Block-STM in production should validate their own message types for both correctness and performance. NOTE: We specifically have **not** validated support for CosmWasm message types run using Block-STM. Run independent validation before enabling it if your chain uses CosmWasm.
* *Caching Risks*: Block-STM works via dependency tracking within the SDK's `MultiStore` interface. Any data that could cause stateful changes to execution but lives outside the store poses the biggest risk of indeterminism. In general, avoid cached data, in-memory stores, or persisting any state outside the scope of a `Store`.

## App Integration

Integration of parallel execution is abstracted into two interfaces: `DeliverTxFunc` and `TxRunner`.
```go
// DeliverTxFunc is the function called for each transaction in order to produce
// a single ExecTxResult. `memTx` is an optional in-memory representation of
// the transaction, which can be used to avoid decoding the transaction.
type DeliverTxFunc func(
	tx []byte,
	memTx Tx,
	ms storetypes.MultiStore,
	txIndex int,
	incarnationCache map[string]any,
) *abci.ExecTxResult

// TxRunner defines an interface for types which can be used to execute the
// DeliverTxFunc. It should return an array of *abci.ExecTxResult corresponding
// to the result of executing each transaction provided to the Run function.
type TxRunner interface {
	Run(
		ctx context.Context,
		ms storetypes.MultiStore,
		txs [][]byte,
		deliverTx DeliverTxFunc,
	) ([]*abci.ExecTxResult, error)
}
```

The `TxRunner` is the primary interface that developers wire into their application. The `baseapp` package provides an option to set it up:

```go
func (app *BaseApp) SetBlockSTMTxRunner(txRunner sdk.TxRunner) {
	app.txRunner = txRunner
}
```

### Runner Implementations

Two implementations of `TxRunner` are provided in the `baseapp/txnrunner` package:

```go
NewDefaultRunner(txDecoder sdk.TxDecoder) *DefaultRunner

NewSTMRunner(
	txDecoder sdk.TxDecoder,
	stores []storetypes.StoreKey,
	workers int,
	estimate bool,
	coinDenom func(storetypes.MultiStore) string,
) *STMRunner
```

`NewDefaultRunner` is used by `BaseApp` by default and provides serial execution without using the Block-STM code paths. You do not need to wire this in explicitly. `NewSTMRunner` constructs a runner which uses parallel execution. Its parameters are:

* **`txDecoder`** — A standard `sdk.TxDecoder`, readily available in any SDK application.
* **`stores`** — A list of every store key used in your application. Since Block-STM needs to track store usage across transactions, it must be passed all module-level store keys. Here is an example taken from the Cosmos EVM:

```go
keys := storetypes.NewKVStoreKeys(
	authtypes.StoreKey, banktypes.StoreKey, stakingtypes.StoreKey,
	minttypes.StoreKey, distrtypes.StoreKey, slashingtypes.StoreKey,
	govtypes.StoreKey, consensusparamtypes.StoreKey, upgradetypes.StoreKey,
	feegrant.StoreKey, evidencetypes.StoreKey, authzkeeper.StoreKey,
	// IBC keys
	ibcexported.StoreKey, ibctransfertypes.StoreKey,
	// Cosmos EVM store keys
	evmtypes.StoreKey, feemarkettypes.StoreKey, erc20types.StoreKey,
)

oKeys := storetypes.NewObjectStoreKeys(
	banktypes.ObjectStoreKey,
	evmtypes.ObjectKey,
)

var nonTransientKeys []storetypes.StoreKey
for _, k := range keys {
	nonTransientKeys = append(nonTransientKeys, k)
}
for _, k := range oKeys {
	nonTransientKeys = append(nonTransientKeys, k)
}
```

* **`workers`** — The number of parallel workers. Experimentation has shown diminishing returns above your system's hardware parallelism. The recommended value is:

```go
import "runtime"

workers := min(runtime.GOMAXPROCS(0), runtime.NumCPU())
```

* **`estimate`** — Controls whether the system should proactively determine transaction read/write conflicts before execution. Set this to `true` in all cases.
* **`coinDenom`** — A function that returns the staking coin denom at runtime. This is used during estimation to reason about which keys in the `bank` module will be modified when fees are collected. A hard-coded value is acceptable; the value should be your chain's bond denom.
### Full Wiring Example

Here is a complete example taken from the Cosmos EVM's `evmd` application:

```go
bApp.SetBlockSTMTxRunner(txnrunner.NewSTMRunner(
	encodingConfig.TxConfig.TxDecoder(),
	nonTransientKeys,
	min(goruntime.GOMAXPROCS(0), goruntime.NumCPU()),
	true,
	func(ms storetypes.MultiStore) string { return sdk.DefaultBondDenom },
))
```

## Parallel Transaction Optimization

Once Block-STM is wired in, you may initially notice that most blocks execute slower than with serial execution. This is due to the overhead of re-executing transactions when any two have conflicting reads or writes. To realize performance gains, you need to reduce storage access conflicts between transactions.

An example of this can be seen in [PR #26005](https://github.com/cosmos/cosmos-sdk/pull/26005), where new account creation involved assigning an ID whose value was retrieved and incremented via a single key in the `x/auth` module. The linked PR converts account ID generation to use deterministic UUID generation instead of relying on a conflicting storage location. As a result, multiple transactions in the same block that each create new accounts no longer access this key and can run in parallel without re-executions.

Work has already been done within the SDK and Cosmos EVM for common transaction types such as bank sends and EVM gas sends. The following steps describe the additional configuration needed.

### Enable Virtual Fee Collection (EVM-specific)

This alters how fee collection works for EVM transactions, accumulating fees to the fee collector module in the `EndBlocker` instead of using regular sends during transaction execution.
```go
app.EVMKeeper.EnableVirtualFeeCollection()
```

### Set Up the Object Store in the Bank Keeper

This enables the bank keeper to collect fees in the `EndBlocker` instead of requiring every transaction to send fees directly to the `FeeCollector` module account.

```go
app.BankKeeper = app.BankKeeper.WithObjStoreKey(oKeys[banktypes.ObjectStoreKey])
```

### Custom Modules

All other changes to parallelize common transactions were done in a way that does not require configuration. For custom transaction types or custom modules, additional changes to KV store access patterns may be required. There is no generalized approach for this yet. The common pattern for functionality that requires access to the same storage key is to write intermediate values to transient or object storage and use an `EndBlocker` to collect the values after all transaction execution completes.
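This accumulate-then-fold pattern can be illustrated without any SDK dependency. The sketch below is a minimal, SDK-agnostic model: `blockScratch` stands in for a transient/object store, and `endBlocker` stands in for the module's `EndBlocker`. All names here are hypothetical, not SDK APIs.

```go
package main

import "fmt"

// blockScratch models per-transaction writes to transient/object storage.
// Each tx records its contribution under its own index, so no two txs
// touch the same key while executing in parallel.
type blockScratch struct {
	perTx map[int]uint64 // txIndex -> fees accrued by that tx (illustrative)
}

func (b *blockScratch) recordFee(txIndex int, amount uint64) {
	b.perTx[txIndex] = amount
}

// endBlocker folds the per-tx values into the single shared total exactly
// once, after all transactions have executed, so the shared key is never a
// source of read/write conflicts during the block.
func (b *blockScratch) endBlocker() uint64 {
	var total uint64
	for _, amt := range b.perTx {
		total += amt
	}
	return total
}

func main() {
	s := &blockScratch{perTx: make(map[int]uint64)}
	// Simulate three transactions, each paying fees, executed in parallel.
	s.recordFee(0, 10)
	s.recordFee(1, 25)
	s.recordFee(2, 5)
	fmt.Println(s.endBlocker()) // 40
}
```

The real implementation would write to an SDK transient or object store keyed by transaction index rather than a Go map, but the shape is the same: disjoint writes during execution, one aggregation at end of block.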
## Benchmarks

### Environment

* **Machine:** Apple M3 Pro, 11 cores
* **OS:** macOS (Darwin 25.3.0)
* **Package:** `github.com/cosmos/cosmos-sdk/internal/blockstm`

### Random Workload (10k txs, 100 keys)

| Workers | ns/op | B/op | allocs/op | Speedup |
| ---------- | ------ | ----- | --------- | -------- |
| sequential | 1,169M | 9.9M | 220K | 1.0x |
| 1 | 1,208M | 36.3M | 544K | 0.97x |
| 5 | 324M | 37.1M | 552K | **3.6x** |
| 10 | 218M | 43.7M | 621K | **5.4x** |
| 15 | 211M | 77.4M | 975K | **5.5x** |
| 20 | 226M | 78.0M | 982K | **5.2x** |

### No-Conflict Workload (10k txs)

| Workers | ns/op | B/op | allocs/op | Speedup |
| ---------- | ------ | ----- | --------- | -------- |
| sequential | 1,381M | 11.7M | 221K | 1.0x |
| 1 | 1,358M | 80.3M | 1,095K | 1.0x |
| 5 | 291M | 81.1M | 1,103K | **4.8x** |
| 10 | 209M | 83.5M | 1,131K | **6.6x** |
| 15 | 200M | 83.7M | 1,135K | **6.9x** |
| 20 | 204M | 83.8M | 1,136K | **6.8x** |

### Worst-Case Workload (full conflict, 10k txs)

| Workers | ns/op | B/op | allocs/op | Speedup |
| ---------- | ------ | ----- | --------- | -------- |
| sequential | 1,239M | 9.6M | 220K | 1.0x |
| 1 | 1,280M | 34.5M | 520K | 0.97x |
| 5 | 295M | 42.8M | 607K | **4.2x** |
| 10 | 224M | 70.1M | 899K | **5.5x** |
| 15 | 246M | 77.3M | 980K | **5.0x** |
| 20 | 262M | 77.4M | 980K | **4.7x** |

### Iterate Workload (10k txs, 100 keys)

| Workers | ns/op | B/op | allocs/op | Speedup |
| ---------- | ------ | ------ | --------- | -------- |
| sequential | 1,286M | 16.5M | 290K | 1.0x |
| 1 | 1,332M | 75.2M | 843K | 0.97x |
| 5 | 288M | 76.5M | 855K | **4.5x** |
| 10 | 252M | 123.1M | 1,280K | **5.1x** |
| 15 | 317M | 359.5M | 3,405K | **4.1x** |
| 20 | 319M | 363.5M | 3,419K | **4.0x** |

### Key Takeaways

* Peak speedup is **6.9x** at 15 workers on the no-conflict workload.
* Diminishing returns beyond 10–15 workers, with memory usage increasing significantly.
* Even the worst-case (full conflict) scenario achieves ~5x speedup at 10 workers.
* The iterate workload shows performance degradation beyond 10 workers, likely due to increased contention on range reads (memory usage jumps ~3x at 15+ workers).

# ABCI Overview

Source: https://docs.cosmos.network/sdk/latest/guides/abci/abci

## What is ABCI?

ABCI, the Application Blockchain Interface, is the interface between CometBFT and the application. More information about ABCI can be found [here](/cometbft/latest/spec/abci/Overview). CometBFT version 0.38 introduced ABCI 2.0, which added several new methods:

* `PrepareProposal`
* `ProcessProposal`
* `ExtendVote`
* `VerifyVoteExtension`
* `FinalizeBlock`

The Cosmos SDK's `BaseApp` implements the full ABCI interface. The source lives in [`baseapp/abci.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/baseapp/abci.go).

## CheckTx

```mermaid
graph TD
    subgraph SDK[Cosmos SDK]
        B[BaseApp]
        A[AnteHandlers]
        B <-->|Validate TX| A
    end
    C[CometBFT] <-->|CheckTx| SDK
    U((User)) -->|Submit TX| C
    N[P2P] -->|Receive TX| C
```

`CheckTx` is called whenever CometBFT receives a transaction from a client, over the p2p network, or via RPC. Its sole job is to decide whether the transaction is valid enough to enter the mempool. It does not execute messages. The default implementation runs the transaction through the `AnteHandler` chain, which performs signature verification, fee checks, and other stateless or lightweight stateful validation. If the `AnteHandler` returns an error, the transaction is rejected and never reaches the mempool. See the implementation at [`baseapp/abci.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/baseapp/abci.go).
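To make the contract concrete, here is a toy, SDK-independent sketch of the admission decision `CheckTx` makes: validate cheaply (signatures, fees), never execute messages or write state. The `tx` struct and `checkTx` function are illustrative only, not SDK types.

```go
package main

import (
	"errors"
	"fmt"
)

// tx is a toy transaction; the real SDK type carries messages, signatures,
// and fee information.
type tx struct {
	signed bool
	fee    uint64
}

var (
	errUnsigned = errors.New("missing signature")
	errLowFee   = errors.New("fee below minimum")
)

// checkTx mirrors CheckTx's contract: cheap validation only. Signatures and
// fees are checked, but no message handlers run and no state is written.
// A nil return means the transaction is admitted to the mempool.
func checkTx(t tx, minFee uint64) error {
	if !t.signed {
		return errUnsigned
	}
	if t.fee < minFee {
		return errLowFee
	}
	return nil
}

func main() {
	fmt.Println(checkTx(tx{signed: true, fee: 100}, 50)) // <nil>
	fmt.Println(checkTx(tx{signed: true, fee: 10}, 50))  // fee below minimum
}
```

In the SDK the equivalent gate is the `AnteHandler` chain; the point of the sketch is only that admission is a yes/no validity decision, separate from execution.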
### Custom CheckTx handler

`CheckTxHandler` lets you replace the default `CheckTx` logic entirely. The type is defined in [`types/abci.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/types/abci.go):

```go
type CheckTxHandler func(runTx RunTx, req *abci.RequestCheckTx) (*abci.ResponseCheckTx, error)
```

Where `RunTx` is:

```go
type RunTx = func(txBytes []byte, tx Tx) (gInfo GasInfo, result *Result, anteEvents []abci.Event, err error)
```

The handler receives the `runTx` closure from `BaseApp` (bound to the correct execution mode) and the raw ABCI request. It must return deterministic results for the same input bytes. Register a custom handler from `app.go`:

```go
app.SetCheckTxHandler(myCheckTxHandler)
```

## PrepareProposal

Based on validator voting power, CometBFT selects a block proposer and calls `PrepareProposal` on that validator's application. The proposer collects outstanding transactions from the mempool and returns a proposal to CometBFT. CometBFT's own mempool uses FIFO ordering. `PrepareProposal` gives the application full control to reorder, drop, or inject transactions before the proposal is sent. For example, an application can inject vote extension data from the previous block as synthetic transactions. What the application does here has no effect on CometBFT's mempool state. `PrepareProposal` MAY be non-deterministic and is only executed by the current block proposer. The Cosmos SDK provides `DefaultProposalHandler` in [`baseapp/abci_utils.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/baseapp/abci_utils.go), which selects transactions from the app-side mempool up to `req.MaxTxBytes` and the block gas limit.
If you implement a custom `PrepareProposal` handler, the selected transactions MUST NOT exceed the maximum block gas (if set) or `req.MaxTxBytes`. To wire the default handler (or swap in a custom one) from `app.go`:

```go
abciPropHandler := baseapp.NewDefaultProposalHandler(mempool, app)
app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler())
```

Vote extensions are only available at the height after they are enabled. See [Vote Extensions](/sdk/latest/guides/abci/vote-extensions) for details.

## ProcessProposal

After the block proposer broadcasts a proposal, every validator calls `ProcessProposal` to accept or reject it. The default implementation checks that each transaction decodes correctly and passes the `AnteHandler`. `ProcessProposal` MUST be deterministic. Non-deterministic results cause apphash mismatches across validators. If the handler panics or returns an error, honest validators prevote nil and CometBFT starts a new round with a new proposal. See the default implementation in [`baseapp/abci_utils.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/baseapp/abci_utils.go). To wire a custom handler:

```go
app.SetProcessProposal(myProcessProposalHandler)
```

## ExtendVote and VerifyVoteExtensions

These methods allow applications to extend the voting process by requiring validators to perform additional actions beyond simply validating blocks. If vote extensions are enabled, `ExtendVote` is called on every validator and each one returns its vote extension — an arbitrary byte slice. This data is only available in the next block height during `PrepareProposal`. Common use cases include prices for a price oracle or encryption shares for an encrypted transaction mempool. `ExtendVote` CAN be non-deterministic.
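Because the extension is an opaque byte slice, the application must define its own canonical encoding so that every validator can parse and check it deterministically. The following is a minimal, self-contained sketch of that data flow; `priceExtension`, `encode`, and `verify` are hypothetical names, not SDK APIs.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// priceExtension is a hypothetical oracle payload a validator might return
// from ExtendVote. Consensus treats the extension as opaque bytes.
type priceExtension struct {
	Pair  uint32 // e.g. an index into a known trading-pair list
	Price uint64 // price in minimal units
}

// encode produces a fixed-width, canonical byte layout so that every
// validator parses the extension the same way.
func (p priceExtension) encode() []byte {
	buf := make([]byte, 12)
	binary.BigEndian.PutUint32(buf[0:4], p.Pair)
	binary.BigEndian.PutUint64(buf[4:12], p.Price)
	return buf
}

// verify mimics the shape of a VerifyVoteExtension check: reject anything
// that is not exactly the expected canonical size before trusting its
// contents. The check is a pure function of the bytes, so it is deterministic.
func verify(ext []byte) (priceExtension, error) {
	if len(ext) != 12 {
		return priceExtension{}, fmt.Errorf("bad extension length %d", len(ext))
	}
	return priceExtension{
		Pair:  binary.BigEndian.Uint32(ext[0:4]),
		Price: binary.BigEndian.Uint64(ext[4:12]),
	}, nil
}

func main() {
	ext := priceExtension{Pair: 1, Price: 4200}.encode()
	got, err := verify(ext)
	fmt.Println(got, err)
}
```

A fixed-width binary layout is chosen here deliberately: formats with nondeterministic serialization (such as map ordering in some encoders) would make verification flaky across validators.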
`VerifyVoteExtension` is called on every validator to verify other validators' vote extensions. It MUST be deterministic.

Applications must keep vote extension data concise, as large extensions degrade chain performance. See the [CometBFT QA results](/cometbft/latest/docs/qa/CometBFT-QA-38#vote-extensions-testbed) for benchmarks.

See [Vote Extensions](/sdk/latest/guides/abci/vote-extensions) for implementation details.

## FinalizeBlock

`FinalizeBlock` is called once consensus is reached on a proposal. It executes all transactions in the block, runs `BeginBlock`/`EndBlock` equivalents, and commits the resulting state. It replaces the old `BeginBlock`, `DeliverTx`, and `EndBlock` methods from ABCI 1.0.

See the implementation at [`baseapp/abci.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/baseapp/abci.go).

# Application Mempool Source: https://docs.cosmos.network/sdk/latest/guides/abci/app-mempool

**Synopsis** This section describes how the app-side mempool can be used and replaced.

Since `v0.47` the application has had its own mempool, allowing much more granular block building than in previous versions. This change was enabled by [ABCI 1.0](https://github.com/cometbft/cometbft/blob/v0.37.0/spec/abci). Notably, it introduced the `PrepareProposal` and `ProcessProposal` steps of ABCI++.

**Prerequisite Readings**

* [BaseApp](/sdk/latest/learn/concepts/baseapp)
* [ABCI](/sdk/latest/guides/abci/abci)

## Mempool

There are countless mempool designs that an application developer could write; the SDK opted to provide only simple mempool implementations.
Namely, the SDK provides the following mempools:

* [No-op Mempool](#no-op-mempool)
* [Sender Nonce Mempool](#sender-nonce-mempool)
* [Priority Nonce Mempool](#priority-nonce-mempool)

By default, the SDK uses the [No-op Mempool](#no-op-mempool), but it can be replaced by the application developer in `app.go`:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
nonceMempool := mempool.NewSenderNonceMempool()
mempoolOpt := baseapp.SetMempool(nonceMempool)

baseAppOptions = append(baseAppOptions, mempoolOpt)
```

### No-op Mempool

A no-op mempool is a mempool where transactions are completely discarded and ignored when BaseApp interacts with the mempool. When this mempool is used, it is assumed that an application will rely on CometBFT's transaction ordering defined in `RequestPrepareProposal`, which is FIFO-ordered by default.

> Note: If a no-op mempool is used, `PrepareProposal` and `ProcessProposal` should both account for this, as `PrepareProposal` could include transactions that fail verification in `ProcessProposal`.

### Sender Nonce Mempool

The sender nonce mempool keeps each sender's transactions sorted by nonce to avoid nonce-ordering issues. It works by storing transactions in a list sorted by transaction nonce. When the proposer asks for transactions to include in a block, it randomly selects a sender and takes the first transaction in that sender's list, repeating until the mempool is empty or the block is full. It is configurable with the following parameters:

#### MaxTxs

An integer value that sets the mempool in one of three modes, *bounded*, *unbounded*, or *disabled*:

* **negative**: Disabled; the mempool does not insert new transactions and returns early.
* **zero**: Unbounded; the mempool has no transaction limit and will never fail with `ErrMempoolTxMaxCapacity`.
* **positive**: Bounded; insertion fails with `ErrMempoolTxMaxCapacity` once `CountTx()` reaches the `maxTx` value.

#### Seed

Sets the seed for the random number generator used to select transactions from the mempool.

### Priority Nonce Mempool

The [priority nonce mempool](https://github.com/cosmos/cosmos-sdk/blob/main/types/mempool/priority_nonce_spec.md) is a mempool implementation that stores txs in a partially ordered set across two dimensions:

* priority
* sender-nonce (sequence number)

Internally it uses one priority-ordered [skip list](https://pkg.go.dev/github.com/huandu/skiplist) and one skip list per sender ordered by sender-nonce (sequence number). Because multiple txs from the same sender are not always comparable by priority to txs from other senders, they must be partially ordered by both sender-nonce and priority.

It is configurable with the following parameters:

#### MaxTxs

An integer value that sets the mempool in one of three modes, *bounded*, *unbounded*, or *disabled*:

* **negative**: Disabled; the mempool does not insert new transactions and returns early.
* **zero**: Unbounded; the mempool has no transaction limit and will never fail with `ErrMempoolTxMaxCapacity`.
* **positive**: Bounded; insertion fails with `ErrMempoolTxMaxCapacity` once `CountTx()` reaches the `maxTx` value.

#### Callback

The priority nonce mempool provides options that let the application set callbacks:

* **OnRead**: Sets a callback to be called when a transaction is read from the mempool.
* **TxReplacement**: Sets a callback to be called when a duplicate transaction nonce is detected during mempool insert. The application can define a transaction replacement rule based on tx priority or certain transaction fields.

More information on the SDK mempool implementation can be found in the [godocs](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/types/mempool).
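The three `MaxTxs` modes described above can be modeled in a few lines. This is a toy illustration of the insert semantics, not the SDK implementation:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package main

import (
	"errors"
	"fmt"
)

var ErrMempoolTxMaxCapacity = errors.New("pool reached max tx capacity")

// toyMempool models only the MaxTxs insert semantics described above.
type toyMempool struct {
	maxTxs int // negative: disabled, zero: unbounded, positive: bounded
	txs    [][]byte
}

func (m *toyMempool) Insert(tx []byte) error {
	switch {
	case m.maxTxs < 0:
		return nil // disabled: return early without inserting
	case m.maxTxs > 0 && len(m.txs) >= m.maxTxs:
		return ErrMempoolTxMaxCapacity // bounded and full
	}
	m.txs = append(m.txs, tx) // unbounded, or bounded with room left
	return nil
}

func (m *toyMempool) CountTx() int { return len(m.txs) }

func main() {
	bounded := &toyMempool{maxTxs: 1}
	fmt.Println(bounded.Insert([]byte("a"))) // first insert succeeds
	fmt.Println(bounded.Insert([]byte("b"))) // second insert hits capacity

	disabled := &toyMempool{maxTxs: -1}
	_ = disabled.Insert([]byte("a"))
	fmt.Println("disabled count:", disabled.CountTx())
}
```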
# Vote Extensions Source: https://docs.cosmos.network/sdk/latest/guides/abci/vote-extensions Vote extensions are arbitrary bytes that validators can attach to their pre-commit vote at block height `H`. They are part of ABCI 2.0 and are available starting from CometBFT v0.38 and Cosmos SDK v0.50. ## Enabling vote extensions Vote extensions are controlled by the `VoteExtensionsEnableHeight` consensus parameter. At the configured height, CometBFT begins calling `ExtendVote` and `VerifyVoteExtension` on every validator. Extensions produced at height `H` are available to the block proposer at height `H+1` via `PrepareProposal`. To check whether vote extensions are active in a handler: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cp := ctx.ConsensusParams() if cp.Abci != nil && req.Height > cp.Abci.VoteExtensionsEnableHeight { // vote extensions are available } ``` `ConsensusParams().Abci` is a pointer and must be nil-checked before use. ## ExtendVote The Cosmos SDK defines [`ExtendVoteHandler`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/types/abci.go#L48): ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type ExtendVoteHandler func(Context, *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) ``` Register a handler in `app.go` via `baseapp.SetExtendVoteHandler` (defined in [`baseapp/options.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/baseapp/options.go)): ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} app.SetExtendVoteHandler(myExtendVoteHandler) ``` If `ExtendVoteHandler` is set, it **must** return a non-nil `VoteExtension`. An empty byte slice is valid. `ExtendVote` is called only on the local validator and does **not** need to be deterministic. 
Common uses include: * Submitting prices for an oracle * Sharing encryption shares for an encrypted mempool Keep extensions small — large extensions increase consensus latency. See [CometBFT QA results](/cometbft/latest/docs/qa/CometBFT-QA-38#vote-extensions-testbed) for benchmarks. ## VerifyVoteExtension The SDK defines [`VerifyVoteExtensionHandler`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/types/abci.go#L52): ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type VerifyVoteExtensionHandler func(Context, *abci.RequestVerifyVoteExtension) (*abci.ResponseVerifyVoteExtension, error) ``` Register it in `app.go`: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} app.SetVerifyVoteExtensionHandler(myVerifyVoteExtensionHandler) ``` `VerifyVoteExtension` is called on every validator for every peer's pre-commit. It **must** be deterministic — the same extension must produce the same result on every validator. If an application defines `ExtendVoteHandler`, it should also define a `VerifyVoteExtensionHandler`. Always validate the size of incoming extensions in this handler. ## Validating vote extension signatures Before processing vote extensions in `PrepareProposal` or `ProcessProposal`, validate that they are properly signed. The SDK provides [`baseapp.ValidateVoteExtensions`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/baseapp/abci_utils.go) for this: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} err := baseapp.ValidateVoteExtensions(ctx, valStore, req.Height, ctx.ChainID(), req.LocalLastCommit) if err != nil { return nil, err } ``` `ValidateVoteExtensions` verifies that each vote extension in the commit is correctly signed by its validator. 
`valStore` is a [`baseapp.ValidatorStore`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/baseapp/abci_utils.go), an interface with a single method: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type ValidatorStore interface { GetPubKeyByConsAddr(context.Context, sdk.ConsAddress) (cmtprotocrypto.PublicKey, error) } ``` Call `ValidateVoteExtensions` in both `PrepareProposal` (on `req.LocalLastCommit`) and `ProcessProposal` (on the `ExtendedCommitInfo` recovered from the injected transaction) before trusting any extension data. ## Vote extension propagation Vote extensions from height `H` are provided only to the block proposer at height `H+1` via `req.LocalLastCommit` in `PrepareProposal`. They are **not** provided to other validators during `ProcessProposal`. If all validators need to use extension data at `H+1`, the proposer must inject it into the block proposal. Since the `Txs` field in `PrepareProposal` is a `[][]byte`, any byte slice — including a serialized extensions summary — can be prepended to the proposal: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} injectedVoteExtTx := StakeWeightedPrices{ StakeWeightedPrices: stakeWeightedPrices, ExtendedCommitInfo: req.LocalLastCommit, } bz, err := json.Marshal(injectedVoteExtTx) if err != nil { return nil, err } proposalTxs = append([][]byte{bz}, proposalTxs...) ``` `FinalizeBlock` ignores any byte slice that does not implement `sdk.Tx`, so injected extensions are safely skipped during message execution. For more details on propagation design, see the [ABCI 2.0 ADR](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/docs/architecture/adr-064-abci-2.0.md#vote-extension-propagation--verification). ## Recovery via PreBlocker The SDK's `PreBlocker` runs before any message execution in `FinalizeBlock`. 
Use it to recover injected vote extensions and make the results available to modules during the block: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (h *ProposalHandler) PreBlocker(ctx sdk.Context, req *abci.RequestFinalizeBlock) (*sdk.ResponsePreBlock, error) { res := &sdk.ResponsePreBlock{} if len(req.Txs) == 0 { return res, nil } cp := ctx.ConsensusParams() if cp.Abci != nil && req.Height > cp.Abci.VoteExtensionsEnableHeight { var injectedVoteExtTx StakeWeightedPrices if err := json.Unmarshal(req.Txs[0], &injectedVoteExtTx); err != nil { return nil, err } if err := h.keeper.SetOraclePrices(ctx, injectedVoteExtTx.StakeWeightedPrices); err != nil { return nil, err } } return res, nil } ``` Register the PreBlocker in `app.go` (see [`baseapp/options.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/baseapp/options.go)): ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} app.SetPreBlocker(proposalHandler.PreBlocker) ``` The `sdk.PreBlocker` type is defined in [`types/abci.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/types/abci.go): ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type PreBlocker func(Context, *abci.RequestFinalizeBlock) (*ResponsePreBlock, error) ``` State written to the context inside `PreBlocker` is available to all `BeginBlock` and message handlers in the same block. # Guides Overview Source: https://docs.cosmos.network/sdk/latest/guides/guides Deep dives into specific Cosmos SDK topics for developers who have completed the tutorial. These guides go deeper on specific topics. If you've completed the [Build a Chain Tutorial](/sdk/latest/tutorials/example/00-overview) and want to learn more about a particular area, this is where to look. ## Module Design Best practices and architectural patterns for building well-structured modules. 
* [Module Design Considerations](/sdk/latest/guides/module-design/module-design-considerations): module boundaries, state layout, privileged operations, and inter-module dependencies * [Object-Capability Model](/sdk/latest/guides/module-design/ocap): how the SDK uses keeper interfaces to scope access between modules ## ABCI How your application interacts with CometBFT at the protocol level, including advanced features such as mempool design and vote extensions. * [ABCI Overview](/sdk/latest/guides/abci/abci): CheckTx, PrepareProposal, ProcessProposal, FinalizeBlock * [Application Mempool](/sdk/latest/guides/abci/app-mempool): custom mempool implementations and transaction ordering * [Vote Extensions](/sdk/latest/guides/abci/vote-extensions): injecting application data into the consensus process ## Tooling Tools available to Cosmos SDK developers. * [Tool Guide](/sdk/latest/guides/tooling/tool-guide): overview of all available tools by category * [Writing CLI Commands](/sdk/latest/guides/tooling/autocli): AutoCLI and hand-written commands * [Confix](/sdk/latest/guides/tooling/confix): managing and migrating node configuration ## State How modules store and access state. * [Module Store Internals](/sdk/latest/guides/state/store): KVStore, prefix stores, and the multistore * [Collections API](/sdk/latest/guides/state/collections): the modern Collections framework for module state ## Upgrades and Migrations How to upgrade modules and chains without downtime. * [Upgrading Modules](/sdk/latest/guides/upgrades/upgrade): consensus versions, migration handlers, and store upgrades * [Cosmovisor](/sdk/latest/guides/upgrades/cosmovisor): automated binary upgrade management ## Testing and Observability Testing your modules and monitoring a running chain. 
* [Module Simulation](/sdk/latest/guides/testing/simulator): fuzz testing with the SDK's simulation framework * [Telemetry](/sdk/latest/guides/testing/telemetry): metrics and instrumentation * [Log v2](/sdk/latest/guides/testing/log): structured logging with zerolog and OpenTelemetry # Module Design Considerations Source: https://docs.cosmos.network/sdk/latest/guides/module-design/module-design-considerations **Synopsis** Modules define most of the logic of Cosmos SDK applications. Developers compose modules together using the Cosmos SDK to build their custom application-specific blockchains. This document outlines the basic concepts behind SDK modules and how to approach module management. This page discusses some of the design considerations for building modules in the Cosmos SDK. For more in-depth information on modules, see the following pages: Deep dive into how modules work -- keepers, message handlers, query services, and the module manager. Follow a step-by-step tutorial to build a custom module from scratch on an example Cosmos SDK chain. ## Design Considerations Before writing any code, these are the key design decisions that shape how a module will behave, interoperate, and evolve. ### Define clear module boundaries A module should own a single, well-scoped piece of application state. Resist the temptation to bundle unrelated functionality into one module because it is convenient. Narrow modules are easier to audit, re-use across chains, and upgrade independently. Ask: could a different chain reasonably use this module without modification? If the answer depends on removing half the features, the module is probably doing too much. ### Plan your state structure early Every `KVStore` key your module defines is permanent: removing or renaming keys requires a migration. 
Use the [Collections](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/collections/README.md) library for structured state management, and name keys to be collision-resistant and self-documenting. Consider what your module needs to index. A value that is only ever looked up by a single key is simple. A value looked up by multiple dimensions (e.g. by owner and by ID) requires secondary indexes, which add complexity and storage overhead. ### Design your message and query surface Keep the `Msg` service minimal. Every message your module accepts becomes part of your public API and must be handled across upgrades. Prefer fewer, general-purpose messages over many narrow ones. Queries are cheaper to add later than messages, but consider what clients need from day one. Poorly designed queries often lead to excessive on-chain state that exists solely to support a query no one else needs. ### Decide how privileged operations are controlled Most modules have parameters that governance should be able to update. Use the standard `MsgUpdateParams` pattern with an `Authority` field, and set that authority to the governance module address at genesis. This ensures parameter changes go through on-chain governance rather than being hardcoded or requiring a chain upgrade. If your module needs to call into another module's privileged functions, establish those permissions through keeper references at app initialization -- not through dynamic lookups at runtime. ### Model inter-module dependencies carefully List every other module your module needs access to. Each dependency becomes a keeper reference injected into your keeper at construction. Avoid circular dependencies: if module A needs B and B needs A, one of them is doing too much. Introduce a third module or restructure the shared logic. Prefer accepting interfaces over concrete keeper types. This makes your module testable in isolation and re-usable across chains with different module implementations. 
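The interface-over-concrete-type advice can be sketched with a toy module: the keeper accepts any `BankKeeper` implementation, so tests can inject a mock. All names here are illustrative, not SDK APIs:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package main

import "fmt"

// BankKeeper is a narrow expected-keeper interface: only the single method
// this hypothetical module actually calls, not the full bank keeper.
type BankKeeper interface {
	Balance(addr string) int64
}

// Keeper depends on the interface, so any implementation -- the real bank
// keeper or a test double -- can be injected at construction.
type Keeper struct {
	bank BankKeeper
}

func (k Keeper) CanPay(addr string, amount int64) bool {
	return k.bank.Balance(addr) >= amount
}

// mockBank is a test double satisfying BankKeeper without a real chain.
type mockBank struct{ balances map[string]int64 }

func (m mockBank) Balance(addr string) int64 { return m.balances[addr] }

func main() {
	k := Keeper{bank: mockBank{balances: map[string]int64{"alice": 50}}}
	fmt.Println(k.CanPay("alice", 30)) // true
	fmt.Println(k.CanPay("alice", 80)) // false
}
```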
### Plan for upgrades from the start If your module defines state, it will eventually need a migration. Write migration logic in `x//migrations/` from the first version, even if v1 to v2 is a no-op. Establish the pattern early so upgrades are not an afterthought. See [Module Upgrades](/sdk/latest/guides/upgrades/upgrade) for implementation details. ## Role of Modules in a Cosmos SDK Application The Cosmos SDK can be thought of as the Ruby-on-Rails of blockchain development. It comes with a core that provides the basic functionalities every blockchain application needs, like a [boilerplate implementation of the ABCI](/sdk/latest/learn/concepts/baseapp) to communicate with the underlying consensus engine, a [`multistore`](/sdk/latest/learn/concepts/store#multistore) to persist state, a [server](/sdk/latest/node/run-node) to form a full-node and interfaces to handle queries. On top of this core, the Cosmos SDK enables developers to build modules that implement the business logic of their application. In other words, SDK modules implement the bulk of the logic of applications, while the core does the wiring and enables modules to be composed together. The end goal is to build a robust ecosystem of open-source Cosmos SDK modules, making it increasingly easier to build complex blockchain applications. Cosmos SDK modules can be seen as little state-machines within the state-machine. They generally define a subset of the state using one or more `KVStore`s in the [main multistore](/sdk/latest/learn/concepts/store#multistore), as well as a subset of [message types](/sdk/latest/learn/concepts/transactions#messages). These messages are routed by one of the main components of Cosmos SDK core, [`BaseApp`](/sdk/latest/learn/concepts/baseapp), to a module Protobuf [`Msg` service](/sdk/latest/learn/concepts/transactions#messages) that defines them. 
```mermaid expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
flowchart TD
    A[Transaction relayed from the full-node's consensus engine to the node's application via FinalizeBlock]
    A --> B[APPLICATION]
    B --> C["Using baseapp's methods: Decode the Tx, extract and route the message(s)"]
    C --> D[Message routed to the correct module to be processed]
    D --> E[AUTH MODULE]
    D --> F[BANK MODULE]
    D --> G[STAKING MODULE]
    D --> H[GOV MODULE]
    H --> I[Handles message, Updates state]
    E --> I
    F --> I
    G --> I
    I --> J["Return result to the underlying consensus engine (e.g. CometBFT) (0=Ok, 1=Err)"]
```

As a result of this architecture, building a Cosmos SDK application usually revolves around writing modules to implement the specialized logic of the application and composing them with existing modules to complete the application. Developers will generally work on modules that implement logic needed for their specific use case that do not exist yet, and will use existing modules for more generic functionalities like staking, accounts, or token management.

### Modules as super-users

Modules have the ability to perform actions that are not available to regular users, because the state machine grants modules sudo permissions. A module can reject another module's attempt to execute one of its functions, but this logic must be explicit.
Examples of this can be seen when modules create functions to modify parameters: ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} package keeper import ( "context" "github.com/hashicorp/go-metrics" errorsmod "cosmossdk.io/errors" "cosmossdk.io/x/bank/types" "github.com/cosmos/cosmos-sdk/telemetry" sdk "github.com/cosmos/cosmos-sdk/types" sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" ) type msgServer struct { Keeper } var _ types.MsgServer = msgServer{ } // NewMsgServerImpl returns an implementation of the bank MsgServer interface // for the provided Keeper. func NewMsgServerImpl(keeper Keeper) types.MsgServer { return &msgServer{ Keeper: keeper } } func (k msgServer) Send(ctx context.Context, msg *types.MsgSend) (*types.MsgSendResponse, error) { var ( from, to []byte err error ) if base, ok := k.Keeper.(BaseKeeper); ok { from, err = base.ak.AddressCodec().StringToBytes(msg.FromAddress) if err != nil { return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid from address: %s", err) } to, err = base.ak.AddressCodec().StringToBytes(msg.ToAddress) if err != nil { return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid to address: %s", err) } } else { return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid keeper type: %T", k.Keeper) } if !msg.Amount.IsValid() { return nil, errorsmod.Wrap(sdkerrors.ErrInvalidCoins, msg.Amount.String()) } if !msg.Amount.IsAllPositive() { return nil, errorsmod.Wrap(sdkerrors.ErrInvalidCoins, msg.Amount.String()) } if err := k.IsSendEnabledCoins(ctx, msg.Amount...); err != nil { return nil, err } if k.BlockedAddr(to) { return nil, errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", msg.ToAddress) } err = k.SendCoins(ctx, from, to, msg.Amount) if err != nil { return nil, err } defer func() { for _, a := range msg.Amount { if a.Amount.IsInt64() { telemetry.SetGaugeWithLabels( []string{"tx", "msg", "send" }, float32(a.Amount.Int64()), []metrics.Label{ 
telemetry.NewLabel("denom", a.Denom) }, ) } } }() return &types.MsgSendResponse{ }, nil } func (k msgServer) MultiSend(ctx context.Context, msg *types.MsgMultiSend) (*types.MsgMultiSendResponse, error) { if len(msg.Inputs) == 0 { return nil, types.ErrNoInputs } if len(msg.Inputs) != 1 { return nil, types.ErrMultipleSenders } if len(msg.Outputs) == 0 { return nil, types.ErrNoOutputs } if err := types.ValidateInputOutputs(msg.Inputs[0], msg.Outputs); err != nil { return nil, err } // NOTE: totalIn == totalOut should already have been checked for _, in := range msg.Inputs { if err := k.IsSendEnabledCoins(ctx, in.Coins...); err != nil { return nil, err } } for _, out := range msg.Outputs { if base, ok := k.Keeper.(BaseKeeper); ok { accAddr, err := base.ak.AddressCodec().StringToBytes(out.Address) if err != nil { return nil, err } if k.BlockedAddr(accAddr) { return nil, errorsmod.Wrapf(sdkerrors.ErrUnauthorized, "%s is not allowed to receive funds", out.Address) } } else { return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid keeper type: %T", k.Keeper) } } err := k.InputOutputCoins(ctx, msg.Inputs[0], msg.Outputs) if err != nil { return nil, err } return &types.MsgMultiSendResponse{ }, nil } func (k msgServer) UpdateParams(ctx context.Context, req *types.MsgUpdateParams) (*types.MsgUpdateParamsResponse, error) { if k.GetAuthority() != req.Authority { return nil, errorsmod.Wrapf(types.ErrInvalidSigner, "invalid authority; expected %s, got %s", k.GetAuthority(), req.Authority) } if err := req.Params.Validate(); err != nil { return nil, err } if err := k.SetParams(ctx, req.Params); err != nil { return nil, err } return &types.MsgUpdateParamsResponse{ }, nil } func (k msgServer) SetSendEnabled(ctx context.Context, msg *types.MsgSetSendEnabled) (*types.MsgSetSendEnabledResponse, error) { if k.GetAuthority() != msg.Authority { return nil, errorsmod.Wrapf(types.ErrInvalidSigner, "invalid authority; expected %s, got %s", k.GetAuthority(), msg.Authority) } seen := 
map[string]bool{ } for _, se := range msg.SendEnabled { if _, alreadySeen := seen[se.Denom]; alreadySeen { return nil, sdkerrors.ErrInvalidRequest.Wrapf("duplicate denom entries found for %q", se.Denom) } seen[se.Denom] = true if err := se.Validate(); err != nil { return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid SendEnabled denom %q: %s", se.Denom, err) } } for _, denom := range msg.UseDefaultFor { if err := sdk.ValidateDenom(denom); err != nil { return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid UseDefaultFor denom %q: %s", denom, err) } } if len(msg.SendEnabled) > 0 { k.SetAllSendEnabled(ctx, msg.SendEnabled) } if len(msg.UseDefaultFor) > 0 { k.DeleteSendEnabled(ctx, msg.UseDefaultFor...) } return &types.MsgSetSendEnabledResponse{ }, nil } func (k msgServer) Burn(goCtx context.Context, msg *types.MsgBurn) (*types.MsgBurnResponse, error) { var ( from []byte err error ) var coins sdk.Coins for _, coin := range msg.Amount { coins = coins.Add(sdk.NewCoin(coin.Denom, coin.Amount)) } if base, ok := k.Keeper.(BaseKeeper); ok { from, err = base.ak.AddressCodec().StringToBytes(msg.FromAddress) if err != nil { return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid from address: %s", err) } } else { return nil, sdkerrors.ErrInvalidRequest.Wrapf("invalid keeper type: %T", k.Keeper) } if !coins.IsValid() { return nil, errorsmod.Wrap(sdkerrors.ErrInvalidCoins, coins.String()) } if !coins.IsAllPositive() { return nil, errorsmod.Wrap(sdkerrors.ErrInvalidCoins, coins.String()) } err = k.BurnCoins(goCtx, from, coins) if err != nil { return nil, err } return &types.MsgBurnResponse{ }, nil } ``` ## How to Approach Building Modules as a Developer While there are no definitive guidelines for writing modules, here are some important design principles developers should keep in mind when building them: * **Composability**: Cosmos SDK applications are almost always composed of multiple modules. 
This means developers need to carefully consider the integration of their module not only with the core of the Cosmos SDK, but also with other modules. The former is achieved by following standard design patterns outlined [here](#main-components-of-cosmos-sdk-modules), while the latter is achieved by properly exposing the store(s) of the module via the [`keeper`](/sdk/latest/learn/concepts/modules#keeper). * **Specialization**: A direct consequence of the **composability** feature is that modules should be **specialized**. Developers should carefully establish the scope of their module and not batch multiple functionalities into the same module. This separation of concerns enables modules to be re-used in other projects and improves the upgradability of the application. **Specialization** also plays an important role in the [object-capabilities model](/sdk/latest/guides/module-design/ocap) of the Cosmos SDK. * **Capabilities**: Most modules need to read and/or write to the store(s) of other modules. However, in an open-source environment, it is possible for some modules to be malicious. That is why module developers need to carefully think not only about how their module interacts with other modules, but also about how to give access to the module's store(s). The Cosmos SDK takes a capabilities-oriented approach to inter-module security. This means that each store defined by a module is accessed by a `key`, which is held by the module's [`keeper`](/sdk/latest/learn/concepts/modules#keeper). This `keeper` defines how to access the store(s) and under what conditions. Access to the module's store(s) is done by passing a reference to the module's `keeper`. ## Main Components of Cosmos SDK Modules Modules are by convention defined in the `./x/` subfolder (e.g. the `bank` module will be defined in the `./x/bank` folder). 
They generally share the same core components: * A [`keeper`](/sdk/latest/learn/concepts/modules#keeper), used to access the module's store(s) and update the state. * A [`Msg` service](/sdk/latest/learn/concepts/transactions#messages), used to process messages when they are routed to the module by [`BaseApp`](/sdk/latest/learn/concepts/baseapp#message-routing) and trigger state-transitions. * A [query service](/sdk/latest/learn/concepts/transactions#queries), used to process user queries when they are routed to the module by [`BaseApp`](/sdk/latest/learn/concepts/baseapp#query-routing). * Interfaces, for end users to query the subset of the state defined by the module and create `message`s of the custom types defined in the module. In addition to these components, modules implement the `AppModule` interface in order to be managed by the [`module manager`](/sdk/latest/learn/concepts/app-go#module-manager). # Object-Capability Model Source: https://docs.cosmos.network/sdk/latest/guides/module-design/ocap How the Cosmos SDK uses object capabilities to isolate modules and limit the blast radius of faulty or malicious code. The Cosmos SDK is built around the **object-capability model** (ocap) — a security model designed for systems that compose untrusted components. The threat model is explicit: a thriving ecosystem of Cosmos SDK modules will eventually include faulty or malicious ones. Ocap limits the damage any single module can do. ## How it works The model has two rules: 1. An object can send a message to another object only if it holds a reference to it. 2. An object can obtain a reference to another object only by receiving it through a message. In practice: a module can only affect the state it has been explicitly handed access to. If the bank keeper was not passed to your module, your module cannot touch balances — full stop. There is no global registry to reach into. This makes security analysis local. 
You can audit what a module can do by looking at what references it was given at wiring time, without reading its implementation. ## Pointer vs. value Only pass what a module needs. If you pass a pointer, you grant write access. If you pass a value, you grant read access. This code violates the principle — passing a pointer to an external module grants it the ability to mutate the account: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} account := &AppAccount{ Address: pub.Address(), Coins: sdk.Coins{sdk.NewInt64Coin("ATM", 100)}, } sumValue := externalModule.ComputeSumValue(account) // can modify account ``` Pass a copy instead: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} sumValue := externalModule.ComputeSumValue(*account) // read-only ``` ## Keeper interfaces The most common place to apply ocap in SDK modules is at keeper boundaries. Instead of accepting a concrete keeper type from another module, define a narrow interface containing only the methods your module actually calls. For example, `x/distribution` needs to query balances and send coins, but it does not need the full bank keeper. It defines its own interface: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // x/distribution/types/expected_keepers.go type BankKeeper interface { GetAllBalances(ctx context.Context, addr sdk.AccAddress) sdk.Coins SpendableCoins(ctx context.Context, addr sdk.AccAddress) sdk.Coins SendCoinsFromModuleToModule(ctx context.Context, senderModule, recipientModule string, amt sdk.Coins) error SendCoinsFromModuleToAccount(ctx context.Context, senderModule string, recipientAddr sdk.AccAddress, amt sdk.Coins) error SendCoinsFromAccountToModule(ctx context.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins) error BlockedAddr(addr sdk.AccAddress) bool } ``` By convention these live in `types/expected_keepers.go`. 
The benefit is twofold: the interface documents exactly what cross-module access your module requires, and it makes the dependency easy to mock in tests. ## Store isolation Modules do not receive direct access to the global multistore. Instead, each module gets a `store.KVStoreService` scoped to its own prefix — it can only read and write within that namespace. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type Keeper struct { storeService store.KVStoreService // ... } ``` This means a bug or malicious call in one module's keeper cannot read or corrupt another module's state. The scoping is enforced at the store layer, not by convention. ## Authority Some operations — updating parameters, pausing a module, triggering emergency actions — should only be callable by governance or another trusted account. The SDK handles this with an explicit `authority` string stored in the keeper. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type Keeper struct { // the address capable of executing privileged messages, // typically the x/gov module account authority string } ``` Message handlers check the caller against this address before proceeding: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} if msg.Authority != k.authority { return nil, errors.Wrapf(sdkerrors.ErrUnauthorized, "expected %s, got %s", k.authority, msg.Authority) } ``` The authority address is set at wiring time in `app.go` and cannot be changed at runtime. This is ocap applied to governance: privileged capability is a reference, and only the holder of that reference can exercise it. See [`simapp/app.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/simapp/app.go) for how keeper dependencies and authorities are wired in a complete application. 
For background, see the [Wikipedia article on object-capability model](https://en.wikipedia.org/wiki/Object-capability_model). # Address Encoding Source: https://docs.cosmos.network/sdk/latest/guides/reference/bech32 The Cosmos SDK uses the Bech32 address format for all user-facing addresses. Bech32 encoding provides robust integrity checks through checksums and includes a human-readable prefix (HRP) that provides contextual information about the address type. ## Address Types The SDK defines three distinct address types, each with its own Bech32 prefix: | Address Type | Bech32 Prefix | Example | Purpose | | -------------------------- | --------------- | ------------------------- | ------------------------------------------------ | | Account Address | `cosmos` | `cosmos1r5v5sr...` | User accounts, balances, transactions | | Validator Operator Address | `cosmosvaloper` | `cosmosvaloper1r5v5sr...` | Validator operator identity, staking operations | | Consensus Address | `cosmosvalcons` | `cosmosvalcons1r5v5sr...` | Validator consensus participation, block signing | Each address type also has a corresponding public key prefix: * Account public keys: `cosmospub` * Validator public keys: `cosmosvaloperpub` * Consensus public keys: `cosmosvalconspub` ## Supported Key Schemes The Cosmos SDK supports three key schemes. The choice of scheme affects address length and whether it can be used for transactions or consensus: | | Address length in bytes | Public key length in bytes | Used for transaction authentication | Used for consensus (CometBFT) | | :----------: | :---------------------: | :------------------------: | :---------------------------------: | :---------------------------: | | `secp256k1` | 20 | 33 | yes | no | | `secp256r1` | 32 | 33 | yes | no | | `tm-ed25519` | -- not used -- | 32 | no | yes | `secp256k1` is the default for user accounts. `secp256r1` is supported as an alternative and produces longer addresses (32 bytes). 
`tm-ed25519` is used exclusively for validator consensus keys and does not produce a user-facing address. ## Address Derivation Addresses are derived from public keys through cryptographic hashing. The process differs based on the key algorithm: ### Secp256k1 Keys (Account Addresses) Account addresses use Bitcoin-style address derivation: ``` 1. Public Key: 33 bytes (compressed secp256k1 public key) 2. SHA-256 hash of public key: 32 bytes 3. RIPEMD-160 hash of result: 20 bytes (final address) ``` **Implementation:** `crypto/keys/secp256k1/secp256k1.go` ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (pubKey *PubKey) Address() crypto.Address { sha := sha256.Sum256(pubKey.Key) // Step 1: SHA-256 hasherRIPEMD160 := ripemd160.New() hasherRIPEMD160.Write(sha[:]) return hasherRIPEMD160.Sum(nil) // Step 2: RIPEMD-160 = 20 bytes } ``` ### Ed25519 Keys (Consensus Addresses) Consensus addresses use truncated SHA-256: ``` 1. Public Key: 32 bytes (Ed25519 public key) 2. 
SHA-256 hash, truncated to first 20 bytes ``` **Implementation:** `crypto/keys/ed25519/ed25519.go` ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (pubKey *PubKey) Address() crypto.Address { return crypto.Address(tmhash.SumTruncated(pubKey.Key)) // SHA-256-20 } ``` ## Bech32 Encoding Process Once address bytes are derived, they're converted to Bech32 format: **Step 1: Convert from 8-bit to 5-bit encoding** ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Address bytes (20 bytes = 160 bits) addressBytes := []byte{0x12, 0x34, ..., 0xab} // 20 bytes // Convert to 5-bit groups for Bech32 converted, _ := bech32.ConvertBits(addressBytes, 8, 5, true) ``` **Step 2: Encode with Human-Readable Prefix** ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Combine HRP with converted bytes bech32Address, _ := bech32.Encode("cosmos", converted) // Result: "cosmos1r5v5srda7xfth3uckstjst6k05kmeyzptewwdk" ``` **Implementation:** `types/bech32/bech32.go` ## Configuring Bech32 prefixes Every Cosmos SDK application sets its Bech32 prefixes and SLIP-44 coin type once at startup via `sdk.GetConfig()`, then seals the config so it cannot be changed at runtime. The defaults (`cosmos`, `cosmosvaloper`, etc.) are defined in [`types/config.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/types/config.go). Chain developers override them before the app starts: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} config := sdk.GetConfig() config.SetBech32PrefixForAccount("cosmos", "cosmospub") config.SetBech32PrefixForValidator("cosmosvaloper", "cosmosvaloperpub") config.SetBech32PrefixForConsensusNode("cosmosvalcons", "cosmosvalconspub") config.SetCoinType(118) // SLIP-44 coin type config.Seal() ``` ## Address Validation The SDK validates addresses through: 1. 
**Format validation**: Ensures valid Bech32 encoding 2. **Prefix validation**: Confirms correct HRP for address type 3. **Length validation**: Verifies address is exactly 20 bytes when decoded ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (bc Bech32Codec) StringToBytes(text string) ([]byte, error) { hrp, bz, err := bech32.DecodeAndConvert(text) if err != nil { return nil, err } if hrp != bc.Bech32Prefix { return nil, fmt.Errorf("invalid prefix") } return bz, sdk.VerifyAddressFormat(bz) // Checks length = 20 bytes } ``` ## Module Addresses Module accounts use deterministic address derivation defined in [ADR-028](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-028-public-key-addresses.md): ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Module address without derivation keys func Module(moduleName string) []byte { return crypto.AddressHash([]byte(moduleName)) } // Module address with derivation keys (new method) func Module(moduleName string, derivationKeys ...[]byte) []byte { mKey := append([]byte(moduleName), 0) // Null byte separator addr := Hash("module", append(mKey, derivationKeys[0]...)) return addr // 32 bytes (not 20 bytes like user addresses) } ``` Module addresses are longer (32 bytes vs 20 bytes) to reduce collision probability. ## Validator Address Relationships A validator has three related addresses: 1. **Operator Address** (`cosmosvaloper1...`): The validator's operational identity, derived from the operator's account key 2. **Consensus Address** (`cosmosvalcons1...`): Derived from the validator's consensus public key (Ed25519), used for block signing 3. 
**Account Address** (`cosmos1...`): The operator's account for receiving rewards ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Validator stores its consensus pubkey type Validator struct { OperatorAddress string // cosmosvaloper1... (from operator's account) ConsensusPubkey *Any // Ed25519 public key for signing // ... } // Consensus address is derived from the consensus pubkey func (v Validator) GetConsAddr() ([]byte, error) { pk := v.ConsensusPubkey.GetCachedValue().(cryptotypes.PubKey) return pk.Address().Bytes(), nil // SHA-256-20 of Ed25519 pubkey } ``` ## Performance: Address Caching The SDK caches Bech32-encoded addresses to optimize repeated conversions: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} var ( accAddrCache *simplelru.LRU // 60,000 entries valAddrCache *simplelru.LRU // 500 entries consAddrCache *simplelru.LRU // 500 entries ) ``` When `Address.String()` is called, the SDK: 1. Checks the LRU cache for the encoded address 2. Returns cached value if found 3. Otherwise, performs Bech32 encoding and caches the result This significantly improves performance during block processing and state queries. ## Complete Example Here's the full pipeline for creating an account address: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // 1. Generate keypair privKey := secp256k1.GenPrivKey() // 32 bytes pubKey := privKey.PubKey() // 33 bytes (compressed) // 2. Derive address bytes sha := sha256.Sum256(pubKey.Bytes()) // 32 bytes hasher := ripemd160.New() hasher.Write(sha[:]) addrBytes := hasher.Sum(nil) // 20 bytes // 3. Create AccAddress type accAddr := sdk.AccAddress(addrBytes) // 4. Convert to Bech32 string // Internally: bech32.ConvertAndEncode("cosmos", addrBytes) addressStr := accAddr.String() // Result: "cosmos1r5v5srda7xfth3uckstjst6k05kmeyzptewwdk" // 5.
Use in account account := auth.NewBaseAccount(accAddr, pubKey, accountNumber, sequence) ``` ## Related Concepts * [Accounts](/sdk/latest/learn/concepts/accounts) - Understanding account types and management * [Store](/sdk/latest/learn/concepts/store) - How addresses are used as keys in state storage * [Transactions](/sdk/latest/learn/concepts/transactions) - How addresses are used in transaction signing # SDK Go Packages Source: https://docs.cosmos.network/sdk/latest/guides/reference/packages The Cosmos SDK is a collection of Go modules. This section provides documentation on various packages that can be used when developing a Cosmos SDK chain. It lists all standalone Go modules that are part of the Cosmos SDK. For more information on SDK modules, see the [SDK Modules](/sdk/latest/modules/modules) section. For more information on SDK tooling, see the [Tooling](/sdk/latest/guides/tooling/tool-guide) section. ## Core * [Core](https://pkg.go.dev/cosmossdk.io/core) - Core library defining SDK interfaces ([ADR-063](/sdk/latest/reference/architecture/adr-063-core-module-api)) * [API](https://pkg.go.dev/cosmossdk.io/api) - API library containing generated SDK Pulsar API * [Store](https://pkg.go.dev/cosmossdk.io/store) - Implementation of the Cosmos SDK store ## State Management * [Collections](https://pkg.go.dev/cosmossdk.io/collections) - Typed state management library with automatic key encoding, iteration, and secondary indexes. See the [Collections guide](/sdk/latest/guides/state/collections). * [ORM](https://pkg.go.dev/cosmossdk.io/orm) - ORM-style state layer built on top of collections, providing table abstractions with primary and secondary indexes. Based on [ADR-055](/sdk/latest/reference/architecture/adr-055-orm).
## Automation * [Client/v2](https://pkg.go.dev/cosmossdk.io/client/v2) - Library powering [AutoCLI](/sdk/latest/guides/tooling/autocli) ## Transactions * [x/tx](https://pkg.go.dev/cosmossdk.io/x/tx) - Transaction signing types, sign mode implementations (direct, amino JSON, textual), and transaction decoder utilities. ## Utilities * [Log](https://pkg.go.dev/cosmossdk.io/log) - Logging library * [Errors](https://pkg.go.dev/cosmossdk.io/errors) - Error handling library * [Math](https://pkg.go.dev/cosmossdk.io/math) - Math library for SDK arithmetic operations ## SimApp * [SimApp](https://pkg.go.dev/cosmossdk.io/simapp) - SimApp is a sample Cosmos SDK chain used for testing and development. # Cosmos Protobuf Docs Source: https://docs.cosmos.network/sdk/latest/guides/reference/proto-docs # Protobuf Annotations Source: https://docs.cosmos.network/sdk/latest/guides/reference/protobuf-annotations This document explains the various protobuf scalars that have been added to make working with protobuf easier for Cosmos SDK application developers. ### Gogoproto Modules are encouraged to utilize Protobuf encoding for their respective types. In the Cosmos SDK, we use the [Gogoproto](https://github.com/cosmos/gogoproto) specific implementation of the Protobuf spec that offers speed and developer experience improvements compared to the official [Google protobuf implementation](https://github.com/protocolbuffers/protobuf).
### Guidelines for protobuf message definitions In addition to [following official Protocol Buffer guidelines](https://developers.google.com/protocol-buffers/docs/proto3#simple), we recommend using these annotations in `.proto` files when dealing with interfaces: * Use `cosmos_proto.accepts_interface` to annotate `Any` fields that accept interfaces: * Pass the same fully qualified name as `protoName` to `InterfaceRegistry.RegisterInterface`. * Example: `(cosmos_proto.accepts_interface) = "cosmos.gov.v1beta1.Content"` (not just `Content`). * Annotate interface implementations with `cosmos_proto.implements_interface`: * Pass the same fully qualified name as `protoName` to `InterfaceRegistry.RegisterInterface`. * Example: `(cosmos_proto.implements_interface) = "cosmos.authz.v1beta1.Authorization"` (not just `Authorization`). Code generators can then match the `accepts_interface` and `implements_interface` annotations to determine whether some Protobuf messages are allowed to be packed in a given `Any` field. ## Signer Signer specifies which field should be used to determine the signer of a message for the Cosmos SDK. Clients can also use this field to infer which field identifies the signer of a message. Read more about the signer field [here](/sdk/latest/learn/concepts/encoding#message-signers). ```proto theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/proto/cosmos/bank/v1beta1/tx.proto#L40 option (cosmos.msg.v1.signer) = "from_address"; ``` ## Scalar The scalar type defines a way for clients to understand how to construct protobuf messages according to what is expected by the module and SDK.
```proto theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} (cosmos_proto.scalar) = "cosmos.AddressString" ``` Example of account address string scalar: ```proto theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/proto/cosmos/bank/v1beta1/tx.proto#L46 string from_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; ``` Example of validator address string scalar: ```proto theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/proto/cosmos/distribution/v1beta1/query.proto#L108 string validator_address = 1 [(cosmos_proto.scalar) = "cosmos.ValidatorAddressString"]; ``` Example of Dec scalar: ```proto theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/proto/cosmos/distribution/v1beta1/distribution.proto#L17 string community_tax = 1 [(cosmos_proto.scalar) = "cosmos.Dec"]; ``` Example of Int scalar: ```proto theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/proto/cosmos/gov/v1/gov.proto#L127 string yes_count = 1 [(cosmos_proto.scalar) = "cosmos.Int"]; ``` There are a few options for what can be provided as a scalar: `cosmos.AddressString`, `cosmos.ValidatorAddressString`, `cosmos.ConsensusAddressString`, `cosmos.Int`, `cosmos.Dec`. ## Implements\_Interface Implement interface is used to provide information to client tooling like [telescope](https://github.com/cosmology-tech/telescope) on how to encode and decode protobuf messages. 
```proto theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} option (cosmos_proto.implements_interface) = "cosmos.auth.v1beta1.AccountI"; ``` ## Method, Field, Message Added In `method_added_in`, `field_added_in` and `message_added_in` are annotations that indicate to clients the version in which a method, field, or message was added. This is useful when new methods or fields are added in later versions and the client needs to be aware of what it can call. The annotations are used as follows: ```proto theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} option (cosmos_proto.method_added_in) = "cosmos-sdk 0.50.1"; option (cosmos_proto.field_added_in) = "cosmos-sdk 0.50.1"; option (cosmos_proto.message_added_in) = "cosmos-sdk 0.50.1"; ``` ## Amino The amino codec was removed in `v0.50+`, which means there is no longer a need to register a `legacyAminoCodec`. In its place, Amino protobuf annotations provide information to the amino codec on how to encode and decode protobuf messages. Amino annotations are only used for backwards compatibility with amino; new modules are not required to use them. The annotations below tell the amino codec how to encode and decode protobuf messages in a backwards-compatible manner. ### Name Name specifies the amino name shown to the user so they can see which message they are signing. ```proto theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/proto/cosmos/bank/v1beta1/tx.proto#L41 option (amino.name) = "cosmos-sdk/MsgSend"; ``` ### Field\_Name Field name specifies the amino name shown to the user so they can see which field they are signing.
```proto theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/proto/cosmos/distribution/v1beta1/distribution.proto#L165 uint64 height = 3 [(amino.field_name) = "creation_height"]; ``` ### Dont\_OmitEmpty Dont omitempty specifies that the field should not be omitted when encoding to amino. ```proto theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/proto/cosmos/bank/v1beta1/tx.proto#L48 repeated cosmos.base.v1beta1.Coin amount = 3 [(amino.dont_omitempty) = true]; ``` ### Encoding Encoding instructs the amino json marshaler how to encode certain fields that may differ from the standard encoding behavior. The most common example of this is how `repeated cosmos.base.v1beta1.Coin` is encoded when using the amino json encoding format. The `legacy_coins` option tells the json marshaler [how to encode a null slice](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/x/tx/signing/aminojson/json_marshal.go#L85) of `cosmos.base.v1beta1.Coin`. ```proto theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/proto/cosmos/bank/v1beta1/genesis.proto#L23 (amino.encoding) = "legacy_coins", ``` ## Module Query Safe The `cosmos.query.v1.module_query_safe` annotation ([source](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/proto/cosmos/query/v1/query.proto)) marks a query method as safe to call from within the state machine — for example from another module's keeper, via ADR-033 intermodule calls, or from CosmWasm contracts. 
```proto theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} rpc Balance(QueryBalanceRequest) returns (QueryBalanceResponse) { option (cosmos.query.v1.module_query_safe) = true; } ``` When set to `true`, the annotation asserts that the query is: 1. **Deterministic**: given a block height, it returns the exact same response on every call and does not introduce state-machine-breaking changes across SDK patch versions. 2. **Gas-tracked**: gas consumption is correctly accounted for, preventing attack vectors where high-computation queries consume no gas. If you add this annotation to your own query, you must ensure both conditions hold. For queries that may consume significant gas (for example those with pagination that could be misconfigured), add a Protobuf comment warning downstream module developers. This annotation was introduced in v0.47. # Collections API Source: https://docs.cosmos.network/sdk/latest/guides/state/collections Collections is a library meant to simplify the experience with respect to module state handling. Cosmos SDK modules handle their state using the `KVStore` interface. The problem with working with `KVStore` is that it forces you to think of state as raw byte key-value pairings, when in reality the majority of state comes from complex, concrete Go objects (strings, ints, structs, etc.). Collections allows you to work with state as if it were normal Go objects and removes the need to think of your state as raw bytes in your code. It also allows you to migrate your existing state without causing any state breakage that would force you into tedious and complex chain state migrations.
## Installation To install collections in your cosmos-sdk chain project, run the following command: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} go get cosmossdk.io/collections ``` ## Core types Collections offers five different APIs to work with state, which will be explored in the next sections: * `Map`: works with arbitrary typed key-value pairings. * `KeySet`: works with just typed keys. * `Item`: works with just one typed value. * `Sequence`: a monotonically increasing number. * `IndexedMap`: combines `Map` and `KeySet` to provide a `Map` with indexing capabilities. ## Preliminary components Before exploring the different collection types and their capabilities, it is necessary to introduce the three components that every collection shares. When instantiating a collection type via, for example, `collections.NewMap/collections.NewItem/...`, you will find yourself having to pass some common arguments. For example, in code: ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} package collections import ( "cosmossdk.io/collections" store "cosmossdk.io/core/store" ) var AllowListPrefix = collections.NewPrefix(0) type Keeper struct { Schema collections.Schema AllowList collections.KeySet[string] } func NewKeeper(storeService store.KVStoreService) Keeper { sb := collections.NewSchemaBuilder(storeService) return Keeper{ AllowList: collections.NewKeySet(sb, AllowListPrefix, "allow_list", collections.StringKey), } } ``` Let's analyze the shared arguments, what they do, and why we need them. ### SchemaBuilder The first argument passed is the `SchemaBuilder`. `SchemaBuilder` is a structure that keeps track of all the state of a module. It is not required by collections to deal with state, but it offers a dynamic and reflective way for clients to explore a module's state.
We instantiate a `SchemaBuilder` by passing it a `store.KVStoreService`, which is the module's store service obtained via dependency injection or `runtime.NewKVStoreService`. We then need to pass the schema builder to every collection type we instantiate in our keeper, in our case the `AllowList`. After creating all collections, call `sb.Build()` to validate prefix uniqueness and finalize the schema. Store the returned `collections.Schema` in the keeper's `Schema` field: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} k := Keeper{ AllowList: collections.NewKeySet(sb, AllowListPrefix, "allow_list", collections.StringKey), } schema, err := sb.Build() if err != nil { panic(err) } k.Schema = schema return k ``` The code examples in this document show the collection instantiation patterns but omit the `sb.Build()` call for brevity. In production code, `sb.Build()` is required. ### Prefix The second argument passed to our `KeySet` is a `collections.Prefix`, a prefix represents a partition of the module's `KVStore` where all the state of a specific collection will be saved. Since a module can have multiple collections, the following is expected: * module params will become a `collections.Item` * the `AllowList` is a `collections.KeySet` We don't want a collection to write over the state of the other collection so we pass it a prefix, which defines a storage partition owned by the collection. 
If you already built modules, the prefix translates to the items you were creating in your `types/keys.go` file, example: [Link](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/x/feegrant/key.go#L16-L22) your old: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} var ( // FeeAllowanceKeyPrefix is the set of the kvstore for fee allowance data // - 0x00: allowance FeeAllowanceKeyPrefix = []byte{0x00 } // FeeAllowanceQueueKeyPrefix is the set of the kvstore for fee allowance keys data // - 0x01: FeeAllowanceQueueKeyPrefix = []byte{0x01 } ) ``` becomes: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} var ( // FeeAllowanceKeyPrefix is the set of the kvstore for fee allowance data // - 0x00: allowance FeeAllowanceKeyPrefix = collections.NewPrefix(0) // FeeAllowanceQueueKeyPrefix is the set of the kvstore for fee allowance keys data // - 0x01: FeeAllowanceQueueKeyPrefix = collections.NewPrefix(1) ) ``` #### Rules `collections.NewPrefix` accepts either `int`, `string` or `[]byte`. It is good practice to use a monotonically increasing `int` (values 0–255) for disk space efficiency. A collection **MUST NOT** share the same prefix as another collection in the same module, and a collection prefix **MUST NEVER** start with the same prefix as another, examples: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} prefix1 := collections.NewPrefix("prefix") prefix2 := collections.NewPrefix("prefix") // THIS IS BAD! ``` ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} prefix1 := collections.NewPrefix("a") prefix2 := collections.NewPrefix("aa") // prefix2 starts with the same as prefix1: BAD!!! ``` ### Human-Readable Name The third parameter we pass to a collection is a string, which is a human-readable name. 
It is needed to make the role of a collection understandable by clients who have no clue about what a module is storing in state. #### Rules Each collection in a module **MUST** have a unique humanized name. ## Key and Value Codecs A collection is generic over the type you can use as keys or values. This makes collections dumb, but it also means that hypothetically we can store any Go type in a collection. We are not bound to any type of encoding (be it proto, JSON, or anything else). So a collection needs to be given a way to understand how to convert your keys and values to bytes. This is achieved through `KeyCodec` and `ValueCodec`, which are arguments that you pass to your collections when you're instantiating them using the `collections.NewMap/collections.NewItem/...` instantiation functions. NOTE: Generally speaking you will never be required to implement your own `Key/ValueCodec`, as the SDK and collections libraries already come with default, safe, and fast implementations of those. You might need to implement them only if you're migrating to collections and there are state layout incompatibilities. Let's explore an example: ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} package collections import ( "cosmossdk.io/collections" store "cosmossdk.io/core/store" ) var IDsPrefix = collections.NewPrefix(0) type Keeper struct { Schema collections.Schema IDs collections.Map[string, uint64] } func NewKeeper(storeService store.KVStoreService) Keeper { sb := collections.NewSchemaBuilder(storeService) return Keeper{ IDs: collections.NewMap(sb, IDsPrefix, "ids", collections.StringKey, collections.Uint64Value), } } ``` We're now instantiating a map where the key is a string and the value is a `uint64`. We already know the first three arguments of the `NewMap` function.
The fourth parameter is our `KeyCodec`: we know that the `Map` has `string` as its key, so we pass it a `KeyCodec` that handles strings as keys. The fifth parameter is our `ValueCodec`: we know that the `Map` has a `uint64` as its value, so we pass it a `ValueCodec` that handles uint64. Collections already comes with all the required implementations for Go primitive types. Let's make another example, closer to what we build using the Cosmos SDK. Say we want to create a `collections.Map` that maps account addresses to their base account, i.e. map an `sdk.AccAddress` to an `auth.BaseAccount` (which is a proto): ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} package collections import ( "cosmossdk.io/collections" store "cosmossdk.io/core/store" "github.com/cosmos/cosmos-sdk/codec" sdk "github.com/cosmos/cosmos-sdk/types" authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" ) var AccountsPrefix = collections.NewPrefix(0) type Keeper struct { Schema collections.Schema Accounts collections.Map[sdk.AccAddress, authtypes.BaseAccount] } func NewKeeper(storeService store.KVStoreService, cdc codec.BinaryCodec) Keeper { sb := collections.NewSchemaBuilder(storeService) return Keeper{ Accounts: collections.NewMap(sb, AccountsPrefix, "accounts", sdk.AccAddressKey, codec.CollValue[authtypes.BaseAccount](cdc)), } } ``` As we can see, since our `collections.Map` maps `sdk.AccAddress` to `authtypes.BaseAccount`, we use `sdk.AccAddressKey`, which is the `KeyCodec` implementation for `AccAddress`, and we use `codec.CollValue` to encode our proto type `BaseAccount`. Generally speaking, you will find the respective key and value codecs for types in the `go.mod` path you're using to import that type. If you want to encode proto values, refer to the `codec.CollValue` function, which allows you to encode any type implementing the `proto.Message` interface.
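Conceptually, a value codec is just a pair of encode/decode functions that round-trip a typed value through bytes. The sketch below is an illustrative stand-in, not the real `cosmossdk.io/collections` interface, and the big-endian layout is an assumption made here for the example:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// valueCodec mirrors the shape of a collections value codec: a typed value
// in, store bytes out, and back again.
type valueCodec[V any] struct {
	encode func(V) ([]byte, error)
	decode func([]byte) (V, error)
}

// uint64Value encodes a uint64 as 8 big-endian bytes; big-endian keeps
// byte-wise ordering consistent with numeric ordering.
var uint64Value = valueCodec[uint64]{
	encode: func(v uint64) ([]byte, error) {
		b := make([]byte, 8)
		binary.BigEndian.PutUint64(b, v)
		return b, nil
	},
	decode: func(b []byte) (uint64, error) {
		if len(b) != 8 {
			return 0, fmt.Errorf("expected 8 bytes, got %d", len(b))
		}
		return binary.BigEndian.Uint64(b), nil
	},
}

func main() {
	raw, _ := uint64Value.encode(42)
	v, _ := uint64Value.decode(raw)
	fmt.Println(len(raw), v) // 8 42
}
```

This is all a codec is: the collection calls `encode` on `Set` and `decode` on `Get`, and never sees raw bytes elsewhere.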
## Map

We analyze the first and most important collection type, the `collections.Map`. This is the type that everything else builds on top of.

### Use case

A `collections.Map` is used to map arbitrary keys to arbitrary values.

### Example

It's easier to explain the capabilities of a `collections.Map` through an example:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package collections

import (
	"fmt"

	"cosmossdk.io/collections"
	store "cosmossdk.io/core/store"
	"github.com/cosmos/cosmos-sdk/codec"
	sdk "github.com/cosmos/cosmos-sdk/types"
	authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
)

var AccountsPrefix = collections.NewPrefix(0)

type Keeper struct {
	Schema   collections.Schema
	Accounts collections.Map[sdk.AccAddress, authtypes.BaseAccount]
}

func NewKeeper(storeService store.KVStoreService, cdc codec.BinaryCodec) Keeper {
	sb := collections.NewSchemaBuilder(storeService)
	return Keeper{
		Accounts: collections.NewMap(sb, AccountsPrefix, "accounts", sdk.AccAddressKey, codec.CollValue[authtypes.BaseAccount](cdc)),
	}
}

func (k Keeper) CreateAccount(ctx sdk.Context, addr sdk.AccAddress, account authtypes.BaseAccount) error {
	has, err := k.Accounts.Has(ctx, addr)
	if err != nil {
		return err
	}
	if has {
		return fmt.Errorf("account already exists: %s", addr)
	}

	err = k.Accounts.Set(ctx, addr, account)
	if err != nil {
		return err
	}
	return nil
}

func (k Keeper) GetAccount(ctx sdk.Context, addr sdk.AccAddress) (authtypes.BaseAccount, error) {
	acc, err := k.Accounts.Get(ctx, addr)
	if err != nil {
		return authtypes.BaseAccount{}, err
	}
	return acc, nil
}

func (k Keeper) RemoveAccount(ctx sdk.Context, addr sdk.AccAddress) error {
	err := k.Accounts.Remove(ctx, addr)
	if err != nil {
		return err
	}
	return nil
}
```

#### Set method

Set maps the provided `AccAddress` (the key) to the `auth.BaseAccount` (the value).
Under the hood the `collections.Map` will convert the key and value to bytes using the [key and value codec](#key-and-value-codecs). It will prepend the [prefix](#prefix) to our key bytes and store it in the KVStore of the module.

#### Has method

The `Has` method reports whether the provided key exists in the store.

#### Get method

The `Get` method accepts the `AccAddress` and returns the associated `auth.BaseAccount` if it exists, otherwise it errors.

#### Remove method

The `Remove` method accepts the `AccAddress` and removes it from the store. It won't report an error if the key does not exist; to check for existence before removal, use the `Has` method.

#### Iteration

Iteration has a separate section.

## KeySet

The second type of collection is `collections.KeySet`. As the name suggests, it maintains only a set of keys, without values.

#### Implementation curiosity

A `collections.KeySet` is just a `collections.Map` with a key but no value. Internally the value is always the same and is represented as an empty byte slice `[]byte{}`.
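The implementation curiosity above can be modeled with plain Go maps. This is a hypothetical sketch of the idea (not the actual collections code), where every key maps to the same empty value and only key presence carries information:

```go
package main

import "fmt"

// toyKeySet models a KeySet as a map whose values are always the
// empty byte slice: only the presence of a key carries information.
type toyKeySet struct {
	store map[string][]byte
}

func newToyKeySet() toyKeySet {
	return toyKeySet{store: map[string][]byte{}}
}

// Set records the key; the stored value is always []byte{}.
func (s toyKeySet) Set(key string) { s.store[key] = []byte{} }

// Has reports whether the key was recorded.
func (s toyKeySet) Has(key string) bool {
	_, ok := s.store[key]
	return ok
}

// Remove deletes the key; removing an absent key is a no-op.
func (s toyKeySet) Remove(key string) { delete(s.store, key) }

func main() {
	set := newToyKeySet()
	set.Set("valoper1")
	fmt.Println(set.Has("valoper1"))
	set.Remove("valoper1")
	fmt.Println(set.Has("valoper1"))
}
```

The real `KeySet` stores its keys in the module's KVStore through a `KeyCodec`, but the value side behaves exactly like this: constant and empty.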
### Example

As always, we explore the collection type through an example:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package collections

import (
	"fmt"

	"cosmossdk.io/collections"
	store "cosmossdk.io/core/store"
	sdk "github.com/cosmos/cosmos-sdk/types"
)

var ValidatorsSetPrefix = collections.NewPrefix(0)

type Keeper struct {
	Schema        collections.Schema
	ValidatorsSet collections.KeySet[sdk.ValAddress]
}

func NewKeeper(storeService store.KVStoreService) Keeper {
	sb := collections.NewSchemaBuilder(storeService)
	return Keeper{
		ValidatorsSet: collections.NewKeySet(sb, ValidatorsSetPrefix, "validators_set", sdk.ValAddressKey),
	}
}

func (k Keeper) AddValidator(ctx sdk.Context, validator sdk.ValAddress) error {
	has, err := k.ValidatorsSet.Has(ctx, validator)
	if err != nil {
		return err
	}
	if has {
		return fmt.Errorf("validator already in set: %s", validator)
	}

	err = k.ValidatorsSet.Set(ctx, validator)
	if err != nil {
		return err
	}
	return nil
}

func (k Keeper) RemoveValidator(ctx sdk.Context, validator sdk.ValAddress) error {
	err := k.ValidatorsSet.Remove(ctx, validator)
	if err != nil {
		return err
	}
	return nil
}
```

The first difference we notice is that `KeySet` requires us to specify only one type parameter: the key (`sdk.ValAddress` in this case). The second difference is that the `NewKeySet` function does not require a `ValueCodec`, only a `KeyCodec`. This is because a `KeySet` only saves keys, not values.

Let's explore the methods.

#### Has method

Has allows us to check whether a key is present in the `collections.KeySet`; it functions in the same way as `collections.Map.Has`.

#### Set method

Set inserts the provided key in the `KeySet`.

#### Remove method

Remove removes the provided key from the `KeySet`; it does not error if the key does not exist. If an existence check before removal is required, couple it with the `Has` method.
## Item

The third type of collection is the `collections.Item`. It stores a single item; it's useful, for example, for module parameters, of which there is always exactly one instance in state.

#### Implementation curiosity

A `collections.Item` is just a `collections.Map` with no key, only a value. The key is the prefix of the collection!

### Example

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package collections

import (
	"cosmossdk.io/collections"
	store "cosmossdk.io/core/store"
	"github.com/cosmos/cosmos-sdk/codec"
	sdk "github.com/cosmos/cosmos-sdk/types"
	stakingtypes "cosmossdk.io/x/staking/types"
)

var ParamsPrefix = collections.NewPrefix(0)

type Keeper struct {
	Schema collections.Schema
	Params collections.Item[stakingtypes.Params]
}

func NewKeeper(storeService store.KVStoreService, cdc codec.BinaryCodec) Keeper {
	sb := collections.NewSchemaBuilder(storeService)
	return Keeper{
		Params: collections.NewItem(sb, ParamsPrefix, "params", codec.CollValue[stakingtypes.Params](cdc)),
	}
}

func (k Keeper) UpdateParams(ctx sdk.Context, params stakingtypes.Params) error {
	err := k.Params.Set(ctx, params)
	if err != nil {
		return err
	}
	return nil
}

func (k Keeper) GetParams(ctx sdk.Context) (stakingtypes.Params, error) {
	return k.Params.Get(ctx)
}
```

The first key difference we notice is that we specify only one type parameter, which is the value we're storing. The second key difference is that we don't specify a `KeyCodec`: since we store only one item, the key is known and constant.

## Iteration

One of the key features of the `KVStore` is iterating over keys. Collections that deal with keys (so `Map`, `KeySet` and `IndexedMap`) allow you to iterate over keys in a safe and typed way. They all share the same API, the only difference being that `KeySet` returns a different type of `Iterator`, because `KeySet` only deals with keys. Every collection shares the same `Iterator` semantics.
Let's have a look at the `Map.Iterate` method:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (m Map[K, V]) Iterate(ctx context.Context, ranger Ranger[K]) (Iterator[K, V], error)
```

It accepts a `collections.Ranger[K]`, which is an API that instructs the map on how to iterate over keys. As always, we don't need to implement anything here, as `collections` already provides generic `Ranger` implementations that expose all you need to work with ranges.

### Example

We have a `collections.Map` that maps accounts using `uint64` IDs.

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package collections

import (
	"cosmossdk.io/collections"
	store "cosmossdk.io/core/store"
	"github.com/cosmos/cosmos-sdk/codec"
	sdk "github.com/cosmos/cosmos-sdk/types"
	authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
)

var AccountsPrefix = collections.NewPrefix(0)

type Keeper struct {
	Schema   collections.Schema
	Accounts collections.Map[uint64, authtypes.BaseAccount]
}

func NewKeeper(storeService store.KVStoreService, cdc codec.BinaryCodec) Keeper {
	sb := collections.NewSchemaBuilder(storeService)
	return Keeper{
		Accounts: collections.NewMap(sb, AccountsPrefix, "accounts", collections.Uint64Key, codec.CollValue[authtypes.BaseAccount](cdc)),
	}
}

func (k Keeper) GetAllAccounts(ctx sdk.Context) ([]authtypes.BaseAccount, error) {
	// passing a nil Ranger equals to: iterate over every possible key
	iter, err := k.Accounts.Iterate(ctx, nil)
	if err != nil {
		return nil, err
	}

	accounts, err := iter.Values()
	if err != nil {
		return nil, err
	}
	return accounts, nil
}

func (k Keeper) IterateAccountsBetween(ctx sdk.Context, start, end uint64) ([]authtypes.BaseAccount, error) {
	// The collections.Range API offers a lot of capabilities
	// like defining where the iteration starts or ends.
	rng := new(collections.Range[uint64]).
		StartInclusive(start).
		EndExclusive(end).
		Descending()

	iter, err := k.Accounts.Iterate(ctx, rng)
	if err != nil {
		return nil, err
	}

	accounts, err := iter.Values()
	if err != nil {
		return nil, err
	}
	return accounts, nil
}

func (k Keeper) IterateAccounts(ctx sdk.Context, do func(id uint64, acc authtypes.BaseAccount) (stop bool)) error {
	iter, err := k.Accounts.Iterate(ctx, nil)
	if err != nil {
		return err
	}
	defer iter.Close()

	for ; iter.Valid(); iter.Next() {
		kv, err := iter.KeyValue()
		if err != nil {
			return err
		}
		if do(kv.Key, kv.Value) {
			break
		}
	}
	return nil
}
```

Let's analyze each method in the example and how it makes use of `Iterate` and the returned `Iterator` API.

#### GetAllAccounts

In `GetAllAccounts` we pass a nil `Ranger` to `Iterate`. This means that the returned `Iterator` will include all the existing keys within the collection. Then we use the `Values` method from the returned `Iterator` API to collect all the values into a slice. `Iterator` offers other methods, such as `Keys()` to collect only the keys and `KeyValues` to collect all the keys and values.

#### IterateAccountsBetween

Here we make use of the `collections.Range` helper to specialize our range. We make it start at one point through `StartInclusive` and end at another with `EndExclusive`, then we instruct it to report results in reverse order through `Descending`. Then we pass the range instruction to `Iterate` and get an `Iterator`, which will contain only the results we specified in the range. Then we again use the `Values` method of the `Iterator` to collect all the results. `collections.Range` also offers a `Prefix` API, which is not applicable to all key types; for example, a `uint64` cannot be prefixed because it is of constant size, but a `string` key can be.

#### IterateAccounts

Here we showcase how to lazily collect values from an `Iterator`. `Keys`/`Values`/`KeyValues` fully consume and close the `Iterator`; since we are not using them here, we need an explicit `defer iter.Close()` call.
`Iterator` also exposes `Key` and `Value` methods to collect only the current key or value, if collecting both is not needed. For this callback pattern, collections expose a `Walk` API.

## Composite keys

So far we've worked only with simple keys, like `uint64`, the account address, etc. There are more complex cases in which we need to deal with composite keys. A key is composite when it is composed of multiple keys. For example, bank balances are stored under the composite key `(AccAddress, string)`, where the first part is the address holding the coins and the second part is the denom. For example, say address `BOB` holds `10atom,15osmo`; this is how it is stored in state:

```javascript theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
(bob, atom) => 10
(bob, osmo) => 15
```

This allows us to efficiently get a specific denom balance of an address by simply getting `(address, denom)`, or to get all the balances of an address by prefixing over `(address)`. Let's now see how we can work with composite keys using collections.

### Example

In our example we will showcase how to use collections when dealing with balances. Similar to bank, a balance is a mapping `(address, denom) => math.Int`, so the composite key in our case is `(address, denom)`.
### Instantiation of a composite key collection

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package collections

import (
	"cosmossdk.io/collections"
	"cosmossdk.io/math"
	store "cosmossdk.io/core/store"
	sdk "github.com/cosmos/cosmos-sdk/types"
)

var BalancesPrefix = collections.NewPrefix(1)

type Keeper struct {
	Schema   collections.Schema
	Balances collections.Map[collections.Pair[sdk.AccAddress, string], math.Int]
}

func NewKeeper(storeService store.KVStoreService) Keeper {
	sb := collections.NewSchemaBuilder(storeService)
	return Keeper{
		Balances: collections.NewMap(
			sb, BalancesPrefix, "balances",
			collections.PairKeyCodec(sdk.AccAddressKey, collections.StringKey),
			sdk.IntValue,
		),
	}
}
```

#### The Map Key definition

First of all, we can see that in order to define a composite key of two elements we use the `collections.Pair` type:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
collections.Map[collections.Pair[sdk.AccAddress, string], math.Int]
```

`collections.Pair` defines a key composed of two other keys; in our case the first part is `sdk.AccAddress` and the second part is `string`.

#### The Key Codec instantiation

The instantiation arguments are the same as before; the only thing that changes is how we instantiate the `KeyCodec`. Since this key is composed of two keys, we use `collections.PairKeyCodec`, which generates a `KeyCodec` composed of two key codecs: the first encodes the first part of the key, the second encodes the second part.
### Working with composite key collections Let's expand on the example we used before: ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} var BalancesPrefix = collections.NewPrefix(1) type Keeper struct { Schema collections.Schema Balances collections.Map[collections.Pair[sdk.AccAddress, string], math.Int] } func NewKeeper(storeService store.KVStoreService) Keeper { sb := collections.NewSchemaBuilder(storeService) return Keeper{ Balances: collections.NewMap( sb, BalancesPrefix, "balances", collections.PairKeyCodec(sdk.AccAddressKey, collections.StringKey), sdk.IntValue, ), } } func (k Keeper) SetBalance(ctx sdk.Context, address sdk.AccAddress, denom string, amount math.Int) error { key := collections.Join(address, denom) return k.Balances.Set(ctx, key, amount) } func (k Keeper) GetBalance(ctx sdk.Context, address sdk.AccAddress, denom string) (math.Int, error) { return k.Balances.Get(ctx, collections.Join(address, denom)) } func (k Keeper) GetAllAddressBalances(ctx sdk.Context, address sdk.AccAddress) (sdk.Coins, error) { balances := sdk.NewCoins() rng := collections.NewPrefixedPairRange[sdk.AccAddress, string](address) iter, err := k.Balances.Iterate(ctx, rng) if err != nil { return nil, err } kvs, err := iter.KeyValues() if err != nil { return nil, err } for _, kv := range kvs { balances = balances.Add(sdk.NewCoin(kv.Key.K2(), kv.Value)) } return balances, nil } func (k Keeper) GetAllAddressBalancesBetween(ctx sdk.Context, address sdk.AccAddress, startDenom, endDenom string) (sdk.Coins, error) { rng := collections.NewPrefixedPairRange[sdk.AccAddress, string](address). StartInclusive(startDenom). EndInclusive(endDenom) iter, err := k.Balances.Iterate(ctx, rng) if err != nil { return nil, err } ... } ``` #### SetBalance As we can see here we're setting the balance of an address for a specific denom. We use the `collections.Join` function to generate the composite key. 
`collections.Join` returns a `collections.Pair` (which is the key of our `collections.Map`). `collections.Pair` contains the two keys we have joined; it also exposes two methods: `K1` to fetch the first part of the key and `K2` to fetch the second part.

As always, we use the `collections.Map.Set` method to map the composite key to our value (`math.Int` in this case).

#### GetBalance

To get a value in a composite key collection, we simply use `collections.Join` to compose the key.

#### GetAllAddressBalances

We use `collections.PrefixedPairRange` to iterate over all the keys starting with the provided address. Concretely, the iteration will report all the balances belonging to the provided address. First we instantiate a `PrefixedPairRange`, which is a `Ranger` implementation that helps with `Pair` key iterations.

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
rng := collections.NewPrefixedPairRange[sdk.AccAddress, string](address)
```

As we can see, we're passing the type parameters of the `collections.Pair` because Go's type inference with respect to generics is not as permissive as in other languages, so we need to explicitly state the types of the pair key.

#### GetAllAddressBalancesBetween

This showcases how we can further specialize our range to limit the results, by specifying a range over the second part of the key (in our case the denoms, which are strings).

## IndexedMap

`collections.IndexedMap` is a collection that uses a `collections.Map` under the hood, together with a struct that contains the indexes we need to define.
### Example

Let's say we have an `auth.BaseAccount` struct which looks like the following:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type BaseAccount struct {
	AccountNumber uint64 `protobuf:"varint,3,opt,name=account_number,json=accountNumber,proto3" json:"account_number,omitempty"`
	Sequence      uint64 `protobuf:"varint,4,opt,name=sequence,proto3" json:"sequence,omitempty"`
}
```

First of all, when we save our accounts in state we map them using the primary key `sdk.AccAddress`. If it were a `collections.Map`, it would be `collections.Map[sdk.AccAddress, authtypes.BaseAccount]`.

Then we also want to be able to get an account not only by its `sdk.AccAddress`, but also by its `AccountNumber`. So we can say we want to create an `Index` that maps our `BaseAccount` to its `AccountNumber`. We also know that this `Index` is unique: there can only be one `BaseAccount` that maps to a specific `AccountNumber`.

First of all, we start by defining the object that contains our index:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
var AccountsNumberIndexPrefix = collections.NewPrefix(1)

type AccountsIndexes struct {
	Number *indexes.Unique[uint64, sdk.AccAddress, authtypes.BaseAccount]
}

func NewAccountIndexes(sb *collections.SchemaBuilder) AccountsIndexes {
	return AccountsIndexes{
		Number: indexes.NewUnique(
			sb, AccountsNumberIndexPrefix, "accounts_by_number",
			collections.Uint64Key, sdk.AccAddressKey,
			func(_ sdk.AccAddress, v authtypes.BaseAccount) (uint64, error) {
				return v.AccountNumber, nil
			},
		),
	}
}
```

We create an `AccountsIndexes` struct which contains a field: `Number`. This field represents our `AccountNumber` index. `AccountNumber` is a field of `authtypes.BaseAccount` and it's a `uint64`.
Then we can see in our `AccountsIndexes` struct that the `Number` field is defined as:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
*indexes.Unique[uint64, sdk.AccAddress, authtypes.BaseAccount]
```

where the first type parameter is `uint64`, the field type of our index; the second type parameter is the primary key, `sdk.AccAddress`; and the third type parameter is the actual object we're storing, `authtypes.BaseAccount`.

Then we create a `NewAccountIndexes` function that instantiates and returns the `AccountsIndexes` struct. The function takes a `SchemaBuilder`.

Then we instantiate our `indexes.Unique`. Let's analyze the arguments we pass to `indexes.NewUnique`.

#### NOTE: indexes list

The `AccountsIndexes` struct contains the indexes; the `NewIndexedMap` function will infer the indexes from that struct using reflection. This happens only at init and is not computationally expensive. If you want to declare the indexes explicitly, implement the `Indexes` interface in the `AccountsIndexes` struct:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (a AccountsIndexes) IndexesList() []collections.Index[sdk.AccAddress, authtypes.BaseAccount] {
	return []collections.Index[sdk.AccAddress, authtypes.BaseAccount]{a.Number}
}
```

#### Instantiating a `indexes.Unique`

The first three arguments we already know: the `SchemaBuilder`, the `Prefix` which is our index prefix (the partition where the key relationships for the `Number` index will be maintained), and the human name for the `Number` index. The fourth argument is `collections.Uint64Key`, a key codec for `uint64` keys; we pass it because the field we're indexing (the account number) is a `uint64`. As the fifth argument we pass the primary key codec, which in our case is `sdk.AccAddressKey` (remember: we're mapping `sdk.AccAddress` => `BaseAccount`).
Then, as the last parameter, we pass a function that, given a `BaseAccount`, returns its `AccountNumber`.

After this we can proceed to instantiate our `IndexedMap`.

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
var AccountsPrefix = collections.NewPrefix(0)

type Keeper struct {
	Schema   collections.Schema
	Accounts *collections.IndexedMap[sdk.AccAddress, authtypes.BaseAccount, AccountsIndexes]
}

func NewKeeper(storeService store.KVStoreService, cdc codec.BinaryCodec) Keeper {
	sb := collections.NewSchemaBuilder(storeService)
	return Keeper{
		Accounts: collections.NewIndexedMap(
			sb, AccountsPrefix, "accounts",
			sdk.AccAddressKey, codec.CollValue[authtypes.BaseAccount](cdc),
			NewAccountIndexes(sb),
		),
	}
}
```

As we can see, what we do here is, so far, the same as for a `collections.Map`: we pass the `SchemaBuilder`, the `Prefix` where we plan to store the mapping between `sdk.AccAddress` and `authtypes.BaseAccount`, the human name, and the respective `sdk.AccAddress` key codec and `authtypes.BaseAccount` value codec. Then we pass an instance of our `AccountsIndexes` created through `NewAccountIndexes`.
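Conceptually, a unique index simply maintains a second mapping from the indexed field back to the primary key, kept in sync with the primary map. Here is a minimal, hypothetical sketch using plain Go maps (not the real `indexes.Unique` implementation):

```go
package main

import "fmt"

type account struct {
	Number   uint64
	Sequence uint64
}

// toyIndexedMap keeps a primary map (address -> account) plus a unique
// reverse index (account number -> address), updated together on Set.
type toyIndexedMap struct {
	primary  map[string]account // address -> account
	byNumber map[uint64]string  // account number -> address
}

func newToyIndexedMap() *toyIndexedMap {
	return &toyIndexedMap{
		primary:  map[string]account{},
		byNumber: map[uint64]string{},
	}
}

// Set writes to the primary map and the index atomically, enforcing
// uniqueness of the indexed field.
func (m *toyIndexedMap) Set(addr string, acc account) error {
	if owner, ok := m.byNumber[acc.Number]; ok && owner != addr {
		return fmt.Errorf("account number %d already taken by %s", acc.Number, owner)
	}
	m.primary[addr] = acc
	m.byNumber[acc.Number] = addr
	return nil
}

// MatchExact mirrors the idea behind Indexes.Number.MatchExact:
// indexed field -> primary key.
func (m *toyIndexedMap) MatchExact(number uint64) (string, bool) {
	addr, ok := m.byNumber[number]
	return addr, ok
}

func main() {
	m := newToyIndexedMap()
	_ = m.Set("cosmos1abc", account{Number: 7})
	addr, _ := m.MatchExact(7)
	fmt.Println(addr)
}
```

The real `IndexedMap` does the same bookkeeping against the KVStore, which is why every `Set` and `Remove` transparently updates the index partitions as well.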
Full example: ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} package docs import ( "cosmossdk.io/collections" "cosmossdk.io/collections/indexes" store "cosmossdk.io/core/store" "github.com/cosmos/cosmos-sdk/codec" sdk "github.com/cosmos/cosmos-sdk/types" authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" ) var AccountsNumberIndexPrefix = collections.NewPrefix(1) type AccountsIndexes struct { Number *indexes.Unique[uint64, sdk.AccAddress, authtypes.BaseAccount] } func (a AccountsIndexes) IndexesList() []collections.Index[sdk.AccAddress, authtypes.BaseAccount] { return []collections.Index[sdk.AccAddress, authtypes.BaseAccount]{ a.Number } } func NewAccountIndexes(sb *collections.SchemaBuilder) AccountsIndexes { return AccountsIndexes{ Number: indexes.NewUnique( sb, AccountsNumberIndexPrefix, "accounts_by_number", collections.Uint64Key, sdk.AccAddressKey, func(_ sdk.AccAddress, v authtypes.BaseAccount) (uint64, error) { return v.AccountNumber, nil }, ), } } var AccountsPrefix = collections.NewPrefix(0) type Keeper struct { Schema collections.Schema Accounts *collections.IndexedMap[sdk.AccAddress, authtypes.BaseAccount, AccountsIndexes] } func NewKeeper(storeService store.KVStoreService, cdc codec.BinaryCodec) Keeper { sb := collections.NewSchemaBuilder(storeService) return Keeper{ Accounts: collections.NewIndexedMap( sb, AccountsPrefix, "accounts", sdk.AccAddressKey, codec.CollValue[authtypes.BaseAccount](cdc), NewAccountIndexes(sb), ), } } ``` ### Working with IndexedMaps While instantiating `collections.IndexedMap` is tedious, working with them is extremely smooth. Let's take the full example, and expand it with some use-cases. 
```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} package docs import ( "cosmossdk.io/collections" "cosmossdk.io/collections/indexes" store "cosmossdk.io/core/store" "github.com/cosmos/cosmos-sdk/codec" sdk "github.com/cosmos/cosmos-sdk/types" authtypes "github.com/cosmos/cosmos-sdk/x/auth/types" ) var AccountsNumberIndexPrefix = collections.NewPrefix(1) type AccountsIndexes struct { Number *indexes.Unique[uint64, sdk.AccAddress, authtypes.BaseAccount] } func (a AccountsIndexes) IndexesList() []collections.Index[sdk.AccAddress, authtypes.BaseAccount] { return []collections.Index[sdk.AccAddress, authtypes.BaseAccount]{ a.Number } } func NewAccountIndexes(sb *collections.SchemaBuilder) AccountsIndexes { return AccountsIndexes{ Number: indexes.NewUnique( sb, AccountsNumberIndexPrefix, "accounts_by_number", collections.Uint64Key, sdk.AccAddressKey, func(_ sdk.AccAddress, v authtypes.BaseAccount) (uint64, error) { return v.AccountNumber, nil }, ), } } var AccountsPrefix = collections.NewPrefix(0) type Keeper struct { Schema collections.Schema Accounts *collections.IndexedMap[sdk.AccAddress, authtypes.BaseAccount, AccountsIndexes] } func NewKeeper(storeService store.KVStoreService, cdc codec.BinaryCodec) Keeper { sb := collections.NewSchemaBuilder(storeService) return Keeper{ Accounts: collections.NewIndexedMap( sb, AccountsPrefix, "accounts", sdk.AccAddressKey, codec.CollValue[authtypes.BaseAccount](cdc), NewAccountIndexes(sb), ), } } func (k Keeper) CreateAccount(ctx sdk.Context, addr sdk.AccAddress) error { nextAccountNumber := k.getNextAccountNumber() newAcc := authtypes.BaseAccount{ AccountNumber: nextAccountNumber, Sequence: 0, } return k.Accounts.Set(ctx, addr, newAcc) } func (k Keeper) RemoveAccount(ctx sdk.Context, addr sdk.AccAddress) error { return k.Accounts.Remove(ctx, addr) } func (k Keeper) GetAccountByNumber(ctx sdk.Context, accNumber uint64) (sdk.AccAddress, authtypes.BaseAccount, error) { 
	accAddress, err := k.Accounts.Indexes.Number.MatchExact(ctx, accNumber)
	if err != nil {
		return nil, authtypes.BaseAccount{}, err
	}

	acc, err := k.Accounts.Get(ctx, accAddress)
	if err != nil {
		return nil, authtypes.BaseAccount{}, err
	}
	return accAddress, acc, nil
}

func (k Keeper) GetAccountsByNumber(ctx sdk.Context, startAccNum, endAccNum uint64) ([]authtypes.BaseAccount, error) {
	rng := new(collections.Range[uint64]).
		StartInclusive(startAccNum).
		EndInclusive(endAccNum)

	iter, err := k.Accounts.Indexes.Number.Iterate(ctx, rng)
	if err != nil {
		return nil, err
	}
	return indexes.CollectValues(ctx, k.Accounts, iter)
}

func (k Keeper) getNextAccountNumber() uint64 {
	return 0
}
```

## Collections with interfaces as values

Although the Cosmos SDK is shifting away from the use of the interface registry, there are still some places where it is used. In order to support old code, we have to support collections with interface values. The generic `codec.CollValue` is not able to handle interface values, so we need to use the special type `codec.CollInterfaceValue`. `codec.CollInterfaceValue` takes a `codec.BinaryCodec` as an argument and uses it to marshal and unmarshal values as interfaces. It lives in the `codec` package, whose import path is `github.com/cosmos/cosmos-sdk/codec`.

### Instantiating Collections with interface values

In order to instantiate a collection with interface values, we need to use `codec.CollInterfaceValue` instead of `codec.CollValue`.
```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package example

import (
	"cosmossdk.io/collections"
	store "cosmossdk.io/core/store"
	"github.com/cosmos/cosmos-sdk/codec"
	sdk "github.com/cosmos/cosmos-sdk/types"
	authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
)

var AccountsPrefix = collections.NewPrefix(0)

type Keeper struct {
	Schema   collections.Schema
	Accounts collections.Map[sdk.AccAddress, sdk.AccountI]
}

func NewKeeper(cdc codec.BinaryCodec, storeService store.KVStoreService) Keeper {
	sb := collections.NewSchemaBuilder(storeService)
	return Keeper{
		Accounts: collections.NewMap(
			sb, AccountsPrefix, "accounts",
			sdk.AccAddressKey, codec.CollInterfaceValue[sdk.AccountI](cdc),
		),
	}
}

func (k Keeper) SaveBaseAccount(ctx sdk.Context, account authtypes.BaseAccount) error {
	return k.Accounts.Set(ctx, account.GetAddress(), &account)
}

func (k Keeper) SaveModuleAccount(ctx sdk.Context, account authtypes.ModuleAccount) error {
	return k.Accounts.Set(ctx, account.GetAddress(), &account)
}

func (k Keeper) GetAccount(ctx sdk.Context, addr sdk.AccAddress) (sdk.AccountI, error) {
	return k.Accounts.Get(ctx, addr)
}
```

## Triple key

The `collections.Triple` is a special type of key composed of three keys; it works in the same way as `collections.Pair`, just with three parts. Let's see an example.

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package example

import (
	"context"

	"cosmossdk.io/collections"
	store "cosmossdk.io/core/store"
)

type AccAddress = string
type ValAddress = string

type Keeper struct {
	// let's simulate we have redelegations which are stored as a triple key composed of
	// the delegator, the source validator and the destination validator.
	Redelegations collections.KeySet[collections.Triple[AccAddress, ValAddress, ValAddress]]
}

func NewKeeper(storeService store.KVStoreService) Keeper {
	sb := collections.NewSchemaBuilder(storeService)
	return Keeper{
		Redelegations: collections.NewKeySet(sb, collections.NewPrefix(0), "redelegations", collections.TripleKeyCodec(collections.StringKey, collections.StringKey, collections.StringKey)),
	}
}

// RedelegationsByDelegator iterates over all the redelegations of a given delegator and calls onResult providing
// each redelegation from source validator towards the destination validator.
func (k Keeper) RedelegationsByDelegator(ctx context.Context, delegator AccAddress, onResult func(src, dst ValAddress) (stop bool, err error)) error {
	rng := collections.NewPrefixedTripleRange[AccAddress, ValAddress, ValAddress](delegator)
	return k.Redelegations.Walk(ctx, rng, func(key collections.Triple[AccAddress, ValAddress, ValAddress]) (stop bool, err error) {
		return onResult(key.K2(), key.K3())
	})
}

// RedelegationsByDelegatorAndValidator iterates over all the redelegations of a given delegator and its source validator and calls onResult for each
// destination validator.
func (k Keeper) RedelegationsByDelegatorAndValidator(ctx context.Context, delegator AccAddress, validator ValAddress, onResult func(dst ValAddress) (stop bool, err error)) error {
	rng := collections.NewSuperPrefixedTripleRange[AccAddress, ValAddress, ValAddress](delegator, validator)
	return k.Redelegations.Walk(ctx, rng, func(key collections.Triple[AccAddress, ValAddress, ValAddress]) (stop bool, err error) {
		return onResult(key.K3())
	})
}
```

## Advanced Usages

### Alternative Value Codec

The `codec.AltValueCodec` allows a collection to decode values using a different codec than the one used to encode them. In essence, it enables decoding two different byte representations of the same concrete value.
It can be used to lazily migrate values from one byte representation to another, as long as the new codec cannot successfully decode the old representation (so the fallback is unambiguous). A concrete example can be found in `x/bank`, where the balance was initially stored as `Coin` and then migrated to `Int`.

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
var BankBalanceValueCodec = codec.NewAltValueCodec(sdk.IntValue, func(b []byte) (sdk.Int, error) {
	coin := sdk.Coin{}
	err := coin.Unmarshal(b)
	if err != nil {
		return sdk.Int{}, err
	}
	return coin.Amount, nil
})
```

The above example shows how to create an `AltValueCodec` that can decode both `sdk.Int` and `sdk.Coin` values. The provided decoder function is used as a fallback in case the default decoder fails. When the value is encoded back into state, the default encoder is used. This allows values to be lazily migrated to the new byte representation.

# Module Store Internals Source: https://docs.cosmos.network/sdk/latest/guides/state/store

The store package defines the interfaces, types and abstractions for Cosmos SDK modules to read and write to Merkleized state within a Cosmos SDK application. The store package provides many primitives for developers to use in order to work with both state storage and state commitment. Below we describe the various abstractions.

## Types

### `Store`

The bulk of the store interfaces are defined [here](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/store/types/store.go), where the base primitive interface, which other interfaces build off of, is the `Store` type. The `Store` interface defines the ability to tell the type of the implementing store and the ability to cache wrap via the `CacheWrapper` interface.

### `CacheWrapper` & `CacheWrap`

One of the most important capabilities a store provides is cache wrapping.
Cache wrapping is essentially the underlying store wrapping itself within another store type that performs caching for both reads and writes, with the ability to flush writes via `Write()`.

### `KVStore` & `CacheKVStore`

One of the most important interfaces that both developers and modules interact with, and which provides the basis of most state storage and commitment operations, is the `KVStore`. The `KVStore` interface provides basic CRUD abilities and prefix-based iteration, including reverse iteration.

Typically, each module has its own dedicated `KVStore` instance, which it can access via the `sdk.Context` and the use of a pointer-based named key -- `KVStoreKey`. The `KVStoreKey` provides pseudo-OCAP. How exactly a `KVStoreKey` maps to a `KVStore` will be illustrated below through the `CommitMultiStore`.

Note, a `KVStore` cannot directly commit state. Instead, a `KVStore` can be wrapped by a `CacheKVStore`, which extends a `KVStore` and provides the ability for the caller to execute `Write()`, which flushes pending writes to the parent `KVStore` in memory. Note, this doesn't actually flush writes to disk, as writes are held in memory until `Commit()` is called on the `CommitMultiStore`.

### `CommitMultiStore`

The `CommitMultiStore` interface exposes the top-level interface that is used to manage state commitment and storage by an SDK application and abstracts the concept of multiple `KVStore`s which are used by multiple modules. Specifically, it supports the following high-level primitives:

* Allows a caller to retrieve a `KVStore` by providing a `KVStoreKey`.
* Exposes pruning mechanisms to remove state pinned against a specific height/version in the past.
* Allows for loading state storage at a particular height/version in the past to provide current head and historical queries.
* Provides the ability to rollback state to a previous height/version.
* Provides the ability to load state storage at a particular height/version while also performing store upgrades, which are used during live hard-fork application state migrations.
* Provides the ability to commit all current accumulated state to disk and perform Merkle commitment.

## Implementation Details

While there are many interfaces that the `store` package provides, there is typically a core implementation, defined in the Cosmos SDK, for each main interface that modules and developers interact with.

### `iavl.Store`

The `iavl.Store` provides the core implementation for state storage and commitment by implementing the following interfaces:

* `KVStore`
* `CommitStore`
* `CommitKVStore`
* `Queryable`
* `StoreWithInitialVersion`

It allows all CRUD operations to be performed along with current and historical state queries, prefix iteration, and state commitment with Merkle proof operations. The `iavl.Store` also provides the ability to remove historical state from the state commitment layer. An overview of the IAVL implementation can be found [here](https://github.com/cosmos/iavl/blob/master/docs/overview.md). It is important to note that the IAVL store provides both state commitment and logical storage operations, which comes with drawbacks: the operations mentioned above carry various performance impacts, some of them drastic. When dealing with state management in modules and clients, the Cosmos SDK provides various layers of abstraction, or "store wrapping", where the `iavl.Store` is the bottommost layer.
When requesting a store to perform reads or writes in a module, the typical abstraction layer in order is defined as follows:

```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
rootmulti.Store -> cachemulti.Store -> gaskv.Store -> cachekv.Store -> iavl.Store
```

### Concurrent use of IAVL store

The tree under `iavl.Store` is not safe for concurrent use. It is the responsibility of the caller to ensure that concurrent access to the store is not performed. The main issue with concurrent use is when data is written at the same time as it's being iterated over. Doing so will cause an irrecoverable fatal error because of concurrent reads and writes to an internal map.

Although it's not recommended, you can iterate through values while writing to them by disabling "FastNode", **without guarantees that the values being written will be returned during the iteration** (if you need this, you might want to reconsider the design of your application). This is done by setting `iavl-disable-fastnode` to `true` in the config TOML file.

### `cachekv.Store`

The `cachekv.Store` store wraps an underlying `KVStore`, typically an `iavl.Store`, and contains an in-memory cache for storing pending writes to the underlying `KVStore`. `Set` and `Delete` calls are executed on the in-memory cache. `Has` checks the cache first, falling through to the underlying `KVStore` only on a cache miss.

One of the most important calls to a `cachekv.Store` is `Write()`, which ensures that key-value pairs are written to the underlying `KVStore` in a deterministic and ordered manner by sorting the keys first. The store keeps track of "dirty" keys and uses these to determine what keys to sort. Deletions are represented as zero-value (nil) entries; `Write()` detects these and calls `Delete` on the underlying `KVStore` for each one. The `cachekv.Store` also provides the ability to perform iteration and reverse iteration.
Iteration is performed through the `cacheMergeIterator` type and uses both the dirty cache and the underlying `KVStore` to iterate over key-value pairs. Note, all calls to CRUD and iteration operations on a `cachekv.Store` are thread-safe.

### `gaskv.Store`

The `gaskv.Store` store provides a simple implementation of a `KVStore`. Specifically, it just wraps an existing `KVStore`, such as a cache-wrapped `iavl.Store`, and incurs configurable gas costs for CRUD operations via `ConsumeGas()` calls on a `GasMeter` passed at construction time, then proxies the underlying CRUD call to the wrapped store.

### `cachemulti.Store` & `rootmulti.Store`

The `rootmulti.Store` acts as an abstraction around a series of stores. Namely, it implements the `CommitMultiStore` and `Queryable` interfaces. Through the `rootmulti.Store`, an SDK module can request access to a `KVStore` to perform state CRUD operations and queries by holding access to a unique `KVStoreKey`. The `rootmulti.Store` ensures these queries and state operations are performed through cache-wrapped instances of `cachekv.Store`, which is described above.

The `rootmulti.Store` implementation is also responsible for committing all accumulated state from each `KVStore` to disk and returning an application state Merkle root. Queries can be performed to return state data along with associated state commitment proofs for both previous heights/versions and the current state root. Queries are routed based on store name, i.e. a module, along with other parameters defined in the SDK's `RequestQuery` type.

The `rootmulti.Store` also provides primitives for pruning data at a given height/version from state storage. When a height is committed, the `rootmulti.Store` will determine if other previous heights should be considered for removal based on the operator's pruning settings defined by `PruningOptions`, which define how many recent versions to keep on disk and the interval at which to remove "staged" pruned heights from disk.
During each interval, the staged heights are removed from each `KVStore`. Note, it is up to the underlying `KVStore` implementation to determine how pruning is actually performed. The `PruningOptions` are defined as follows:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type PruningOptions struct {
	// KeepRecent defines how many recent heights to keep on disk.
	KeepRecent uint64

	// Interval defines when the pruned heights are removed from disk.
	Interval uint64

	// Strategy defines the kind of pruning strategy. See below for more information on each.
	Strategy PruningStrategy
}
```

The Cosmos SDK defines a preset number of pruning "strategies": `default`, `everything`, `nothing`, and `custom`.

It is important to note that the `rootmulti.Store` considers each `KVStore` as a separate logical store. In other words, they do not share a Merkle tree or comparable data structure. This means that when state is committed via `rootmulti.Store`, each store is committed in sequence and thus is not atomic.

In terms of store construction and wiring, each Cosmos SDK application contains a `BaseApp` instance which internally has a reference to a `CommitMultiStore` that is implemented by a `rootmulti.Store`. The application then registers one or more `KVStoreKey`s that each pertain to a unique module and thus a `KVStore`. Through the use of an `sdk.Context` and a `KVStoreKey`, each module can get direct access to its respective `KVStore` instance. Example:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func NewApp(...) Application {
	// ...

	bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...)
	bApp.SetVersion(version.Version)
	bApp.SetInterfaceRegistry(interfaceRegistry)

	// ...

	keys := sdk.NewKVStoreKeys(...)
	transientKeys := sdk.NewTransientStoreKeys(...)
	memKeys := sdk.NewMemoryStoreKeys(...)

	// ...
	// initialize stores
	app.MountKVStores(keys)
	app.MountTransientStores(transientKeys)
	app.MountMemoryStores(memKeys)

	// ...
}
```

The `rootmulti.Store` itself can be cache-wrapped, which returns an instance of a `cachemulti.Store`. For each block, `BaseApp` ensures that the proper abstractions are created on the `CommitMultiStore`, i.e. it cache-wraps the `rootmulti.Store` and sets the resulting `cachemulti.Store` on the `sdk.Context`, which is then used for block and transaction execution. As a result, all state mutations due to block and transaction execution are actually held ephemerally until `Commit()` is called by the ABCI client. This concept is further expanded upon when the AnteHandler is executed per transaction to ensure state is not committed for transactions that failed CheckTx.

# Log v2

Source: https://docs.cosmos.network/sdk/latest/guides/testing/log

`cosmossdk.io/log/v2` is the Cosmos SDK logging package. At a high level, there are three pieces to understand:

1. `log.NewLogger(...)` creates the default Cosmos SDK logger. It is backed by `zerolog`.
2. `cosmossdk.io/log/v2/slog` lets you satisfy the same SDK `Logger` interface with a standard library `*slog.Logger`.
3. `log.NewMultiLogger(...)` fans one log call out to multiple SDK loggers. The SDK uses this during server startup when OpenTelemetry log exporting is enabled.

To learn more about how we support OpenTelemetry, read the [Telemetry docs](/sdk/latest/guides/testing/telemetry). If you only need ordinary SDK logging, you usually only need `log.NewLogger`, which is automatically provisioned and set on `sdk.Context`.

## Default Logger

The default implementation is a small wrapper around `zerolog`.
```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
logger := log.NewLogger(os.Stderr)

logger.Info("starting app", "chain_id", chainID)
logger.Error("failed to load state", "err", err)
```

`NewLogger` writes human-readable console output by default. The server command wiring switches options based on CLI configuration, for example:

* `OutputJSONOption()` for JSON logs
* `LevelOption(...)` for a global log level
* `FilterOption(...)` for module-based filtering
* `TraceOption(true)` to include stack traces on error logs
* `VerboseLevelOption(...)` for temporary verbose mode

The SDK also uses the `module` field consistently. The package exposes `log.ModuleKey` for this:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
logger = logger.With(log.ModuleKey, "bank")
logger.Info("send coins", "from", from, "to", to)
```

That matters because the log filter implementation keys off the `module` field when parsing values such as `consensus:debug,*:error`.

## Structured Context

`Logger.With(...)` returns a derived logger with additional fields:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
keeperLogger := logger.With(log.ModuleKey, "staking", "component", "keeper")
keeperLogger.Info("validator updated", "operator", valAddr)
```

This is the normal way to attach stable metadata to a logger instance.
## Context-Aware Logging

The v2 `Logger` interface adds `*Context` methods:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type Logger interface {
	Info(msg string, keyVals ...any)
	InfoContext(ctx context.Context, msg string, keyVals ...any)

	Warn(msg string, keyVals ...any)
	WarnContext(ctx context.Context, msg string, keyVals ...any)

	Error(msg string, keyVals ...any)
	ErrorContext(ctx context.Context, msg string, keyVals ...any)

	Debug(msg string, keyVals ...any)
	DebugContext(ctx context.Context, msg string, keyVals ...any)

	With(keyVals ...any) Logger
	Impl() any
}
```

The important distinction is:

* `Info`, `Warn`, `Error`, and `Debug` log without inspecting a `context.Context`
* `InfoContext`, `WarnContext`, `ErrorContext`, and `DebugContext` use the provided context for trace correlation

For the default `zerolog` implementation, the `*Context` methods extract the active OpenTelemetry span from `ctx` and add:

* `trace_id`
* `span_id`
* `trace_flags` when present

If there is no valid span in the context, they behave like normal log calls.

## Trace Correlation

When you want logs to line up with spans, use the context-aware methods.

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (k Keeper) UpdateBalance(ctx sdk.Context, addr sdk.AccAddress, coins sdk.Coins) error {
	ctx, span := ctx.StartSpan(tracer, "UpdateBalance")
	defer span.End()

	logger := ctx.Logger().With(log.ModuleKey, "bank")
	logger.InfoContext(ctx, "updating balance", "address", addr.String())

	return nil
}
```

Two details matter here:

1. `sdk.Context.StartSpan(...)` returns a new `sdk.Context` with the Go `context.Context` updated to include the span.
2. The logger only sees trace information when you call one of the logger's `*Context` methods with that updated context.

Without the `*Context` call, the default logger will not add trace fields to the log record.
## `log/slog`

`cosmossdk.io/log/v2/slog` is an adapter for code that already has a standard library `*slog.Logger`.

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
base := slog.New(handler)
logger := sdklogSlog.NewCustomLogger(base)
```

This does not add extra SDK behavior by itself. It simply makes a `*slog.Logger` satisfy the Cosmos SDK `Logger` interface. Filtering, formatting, sinks, and handler behavior are whatever the underlying `slog.Logger` is configured to do.

## `MultiLogger`

`log.NewMultiLogger(loggers...)` returns a logger that dispatches each log call to every wrapped logger. That includes:

* ordinary log methods such as `Info(...)`
* context-aware methods such as `InfoContext(...)`
* `With(...)`, which derives a child logger for each wrapped logger

If an underlying logger implements `VerboseModeLogger`, `SetVerboseMode(...)` is also forwarded. In other words, `MultiLogger` is just fanout. It does not merge records or add new fields on its own.

## When The SDK Configures `MultiLogger`

`MultiLogger` is not created for every app automatically. During the node's server start, the SDK first builds the normal server logger from CLI/config flags. That logger is the usual `zerolog`-backed logger. Then the SDK initializes OpenTelemetry from `config/otel.yaml`. If `telemetry.IsOtelLoggerEnabled()` reports that the global OpenTelemetry logger provider has active log processors/exporters, the SDK wraps the existing server logger like this:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
otelLogger := sdkSlog.NewCustomLogger(otelslog.NewLogger(""))
svrCtx.Logger = log.NewMultiLogger(svrCtx.Logger, otelLogger)
```

So when OpenTelemetry log exporting is enabled, one log call is sent to:

* the existing console/stdout logger
* an OpenTelemetry-backed logger for export

If OpenTelemetry logging is not enabled, the server continues using only the normal logger.
## What `otelslog` Is

`otelslog` is an OpenTelemetry bridge for Go's `log/slog` package. More specifically, it provides a `slog.Handler` and `slog.Logger` that convert `slog.Record` values into OpenTelemetry log records and send them to the configured OpenTelemetry logger provider.

In the Cosmos SDK startup path:

* `otelslog.NewLogger("")` creates an `*slog.Logger` backed by that bridge
* `cosmossdk.io/log/v2/slog.NewCustomLogger(...)` wraps it so it satisfies the SDK `Logger` interface
* `log.NewMultiLogger(...)` fans logs out to both the normal `zerolog` logger and the OpenTelemetry bridge

Because `slog` has native `InfoContext`/`WarnContext`/`ErrorContext`/`DebugContext` methods, the `otelslog` side receives the context directly. That means trace/span correlation is handled by the OpenTelemetry logging pipeline without the SDK needing to manually inject `trace_id` fields into that branch.

## Two Common Setups

### 1. Stdout only

If you do not configure an OpenTelemetry logger provider, logs only go to the normal SDK logger output. This does not prevent log correlation, however. For trace correlation in tools such as Grafana Tempo and Loki, you can:

1. Emit JSON logs to stdout/stderr.
2. Scrape those logs with an agent such as the OpenTelemetry Collector filelog receiver.
3. Forward them to Loki.
4. Query by the `trace_id` field in the logs.

Remember, `trace_id` is only injected into the log if a contextual method was called with a context that contains an active span.

### 2. OpenTelemetry log exporter enabled

If `otel.yaml` enables an OpenTelemetry log pipeline with real log processors/exporters, the SDK configures a `MultiLogger`.
In that setup:

* console logging still works as before
* logs are also exported through OpenTelemetry
* context-aware log calls carry trace context into the OpenTelemetry branch as well

This is the path to use when you want the SDK to write logs directly into an OpenTelemetry logging backend, which eliminates the need to set up scraping infrastructure.

## Future Direction

Today the SDK uses a `MultiLogger` because the default logger is `zerolog`, while OpenTelemetry currently offers a bridge for `slog` rather than `zerolog`. If a first-class `zerolog` bridge becomes available and suitable, that would likely be a simpler export path than maintaining a separate fanout logger. Relevant discussion:

* [https://github.com/rs/zerolog/pull/682](https://github.com/rs/zerolog/pull/682)
* [https://github.com/open-telemetry/opentelemetry-go-contrib/issues/5969](https://github.com/open-telemetry/opentelemetry-go-contrib/issues/5969)

# Module Simulation

Source: https://docs.cosmos.network/sdk/latest/guides/testing/simulator

**Prerequisite Readings**

* [Testing in the SDK](/sdk/latest/learn/concepts/testing)

## Synopsis

This document guides developers on integrating their custom modules with the Cosmos SDK `Simulations`. Simulations are useful for testing edge cases in module implementations.

* [Simulation Package](#simulation-package)
* [Simulation App Module](#simulation-app-module)
* [SimsX](#simsx)
* [Example Implementations](#example-implementations)
* [Store decoders](#store-decoders)
* [Randomized genesis](#randomized-genesis)
* [Random weighted operations](#random-weighted-operations)
* [Using Simsx](#using-simsx)
* [App Simulator manager](#app-simulator-manager)
* [Running Simulations](#running-simulations)

## Simulation Package

The Cosmos SDK suggests organizing your simulation-related code in an `x/<module>/simulation` package.

## Simulation App Module

To integrate with the Cosmos SDK `SimulationManager`, app modules must implement the `AppModuleSimulation` interface.
```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// AppModuleSimulation defines the standard functions that every module should expose
// for the SDK blockchain simulator
type AppModuleSimulation interface {
	// randomized genesis states
	GenerateGenesisState(input *SimulationState)

	// register a func to decode each module's defined types from their corresponding store key
	RegisterStoreDecoder(simulation.StoreDecoderRegistry)

	// simulation operations (i.e msgs) with their respective weight
	WeightedOperations(simState SimulationState) []simulation.WeightedOperation
}

// HasProposalMsgs defines the messages that can be used to simulate governance (v1) proposals
type HasProposalMsgs interface {
	// msg functions used to simulate governance proposals
	ProposalMsgs(simState SimulationState) []simulation.WeightedProposalMsg
}
```

See the full source at [`types/module/simulation.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/types/module/simulation.go). See an example implementation of these methods from `x/distribution` [here](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/x/distribution/module.go#L170-L194).

## SimsX

Cosmos SDK v0.53.0 introduced a new package, `simsx`, providing improved DevX for writing simulation code. It exposes the following extension interfaces that modules may implement to integrate with the new `simsx` runner.
```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type (
	HasWeightedOperationsX interface {
		WeightedOperationsX(weight WeightSource, reg Registry)
	}

	HasWeightedOperationsXWithProposals interface {
		WeightedOperationsX(weights WeightSource, reg Registry, proposals WeightedProposalMsgIter,
			legacyProposals []simtypes.WeightedProposalContent)
	}

	HasProposalMsgsX interface {
		ProposalMsgsX(weights WeightSource, reg Registry)
	}
)
```

See the full source at [`testutil/simsx/runner.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/testutil/simsx/runner.go).

`SimMsgFactoryFn` is the default factory for most cases. It does not create future operations but ensures successful message delivery:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// SimMsgFactoryFn is the default factory for most cases. It does not create future operations but ensures successful message delivery.
type SimMsgFactoryFn[T sdk.Msg] func(ctx context.Context, testData *ChainDataSource, reporter SimulationReporter) (signer []SimAccount, msg T)
```

See the full source at [`testutil/simsx/msg_factory.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/testutil/simsx/msg_factory.go).

These methods allow constructing randomized messages and/or proposal messages. Note that modules should **not** implement both `HasWeightedOperationsX` and `HasWeightedOperationsXWithProposals`. See the runner code [here](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/testutil/simsx/runner.go#L330-L339) for details.

If the module does **not** have message handlers or governance proposal handlers, these interface methods do **not** need to be implemented.
### Example Implementations

* `HasWeightedOperationsXWithProposals`: [x/gov](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/x/gov/module.go#L242-L261)
* `HasWeightedOperationsX`: [x/bank](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/x/bank/module.go#L201-L205)
* `HasProposalMsgsX`: [x/bank](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/x/bank/module.go#L196-L199)

## Store decoders

Registering the store decoders is required for the `AppImportExport` simulation. This allows the key-value pairs from the stores to be decoded to their corresponding types. In particular, it matches the key to a concrete type and then unmarshals the value from the `KVPair` to the type provided.

Modules using [collections](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/collections/README.md) can use the `NewStoreDecoderFuncFromCollectionsSchema` function that builds the decoder for you:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// RegisterStoreDecoder registers a decoder for supply module's types
func (am AppModule) RegisterStoreDecoder(sdr simtypes.StoreDecoderRegistry) {
	sdr[types.StoreKey] = simtypes.NewStoreDecoderFuncFromCollectionsSchema(am.keeper.(keeper.BaseKeeper).Schema)
}
```

See the full source at [`types/simulation/collections.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/types/simulation/collections.go) and the bank module example at [`x/bank/module.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/x/bank/module.go#L183-L186).

Modules not using collections must manually build the store decoder. See the implementation [here](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/x/distribution/simulation/decoder.go) from the distribution module for an example.

## Randomized genesis

The simulator tests different scenarios and values for genesis parameters.
App modules must implement a `GenerateGenesisState` method to generate the initial random `GenesisState` from a given seed. See an example from `x/auth` [here](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/x/auth/module.go#L174-L177).

Once the module's genesis parameters are generated randomly (or with the key and values defined in a `params` file), they are marshaled to JSON format and added to the app genesis JSON for the simulation.

## Random weighted operations

Operations are one of the crucial parts of the Cosmos SDK simulation. They are the transactions (`Msg`) that are simulated with random field values. The sender of the operation is also assigned randomly. Operations on the simulation are simulated using the full [transaction cycle](/sdk/latest/learn/concepts/lifecycle) of an `ABCI` application that exposes the `BaseApp`.

### Using Simsx

Simsx introduces the ability to define a `MsgFactory` for each of a module's messages. These factories are registered in `WeightedOperationsX` and/or `ProposalMsgsX`.

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// ProposalMsgsX registers governance proposal messages in the simulation registry.
func (AppModule) ProposalMsgsX(weights simsx.WeightSource, reg simsx.Registry) {
	reg.Add(weights.Get("msg_update_params", 100), simulation.MsgUpdateParamsFactory())
}

// WeightedOperationsX registers weighted distribution module operations for simulation.
func (am AppModule) WeightedOperationsX(weights simsx.WeightSource, reg simsx.Registry) {
	reg.Add(weights.Get("msg_set_withdraw_address", 50), simulation.MsgSetWithdrawAddressFactory(am.keeper))
	reg.Add(weights.Get("msg_withdraw_delegation_reward", 50), simulation.MsgWithdrawDelegatorRewardFactory(am.keeper, am.stakingKeeper))
	reg.Add(weights.Get("msg_withdraw_validator_commission", 50), simulation.MsgWithdrawValidatorCommissionFactory(am.keeper, am.stakingKeeper))
}
```

Note that the name passed to `weights.Get` must match the name of the operation set in the `WeightedOperations`. For example, if the module contains an operation `op_weight_msg_set_withdraw_address`, the name passed to `weights.Get` should be `msg_set_withdraw_address`. See `x/distribution` for an example of implementing message factories [here](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/x/distribution/simulation/msg_factory.go).

## App Simulator manager

The next step is setting up the `SimulationManager` at the app level. This is required for the simulation test files in the next step.

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type CoolApp struct {
	...
	sm *module.SimulationManager
}
```

Within the constructor of the application, construct the simulation manager using the modules from `ModuleManager` and call the `RegisterStoreDecoders` method.

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
overrideModules := map[string]module.AppModuleSimulation{
	authtypes.ModuleName: auth.NewAppModule(app.appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil),
}
app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules)

app.sm.RegisterStoreDecoders()
```

Note that you may override some modules. This is useful if the existing module configuration in the `ModuleManager` should be different in the `SimulationManager`.
Finally, the application should expose the `SimulationManager` via the following method defined in the `AppI` interface:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// SimulationManager implements the SimulationApp interface
func (app *SimApp) SimulationManager() *module.SimulationManager {
	return app.sm
}
```

See the full simapp setup at [`simapp/app.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/simapp/app.go).

## Running Simulations

To run the simulation, use the `simsx` runner. Call its `Run` function to begin simulating with the default seeds, or `RunWithSeeds` to provide specific seeds:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func TestFullAppSimulation(t *testing.T) {
	sims.Run(t, NewSimApp, setupStateFactory)
}

func TestAppImportExport(t *testing.T) {
	sims.Run(t, NewSimApp, setupStateFactory, func(tb testing.TB, ti sims.TestInstance[*SimApp], accs []simtypes.Account) {
		// post-run assertions: export and compare stores
	})
}
```

These functions should be called in tests (i.e., `app_test.go`, `app_sim_test.go`, etc.). See the full simapp test file at [`simapp/sim_test.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/simapp/sim_test.go).

### Simulation test types

The simulation framework provides four test functions, each testing a different failure scenario:

* `TestFullAppSimulation`: General simulation mode. Runs the chain and specified operations for a given number of blocks, checking for panics.
* `TestAppImportExport`: Exports the initial app state and creates a new app with the exported `genesis.json` as input, checking for store inconsistencies between the two.
* `TestAppSimulationAfterImport`: Chains two simulations -- the first provides its app state to the second. Useful for testing software upgrades or hard-forks from a live chain.
* `TestAppStateDeterminism`: Checks that all nodes return the same values in the same order.

### Simulator modes

Simulations run in three modes:

1. **Fully random** -- initial state, module parameters, and simulation parameters are all pseudo-randomly generated.
2. **From a `genesis.json` file** -- initial state and module parameters are defined by the file. Useful for testing against a known state such as a live network export.
3. **From a `params.json` file** -- initial state is pseudo-randomly generated but module and simulation parameters are set manually. Available parameters are listed [here](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/x/simulation/client/cli/flags.go#L43-L70).

These modes are not mutually exclusive. For example, you can combine a randomly generated genesis state (mode 1) with manually defined simulation params (mode 3).

### Running via go test

Simulations can be run directly with `go test`:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
go test -mod=readonly github.com/cosmos/cosmos-sdk/simapp \
  -run=TestApp \
  ... \
  -v -timeout 24h
```

The full list of available flags is defined [here](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/x/simulation/client/cli/flags.go#L43-L70). For Makefile examples, see the Cosmos SDK [`Makefile`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/Makefile#L280-L340).

### Debugging tips

When encountering a simulation failure:

* **Export app state** at the failure height using the `-ExportStatePath` flag.
* **Use `-Verbose` logs** for a fuller picture of all operations involved.
* **Try a different `-Seed`**. If the same error reproduces sooner, you will spend less time on each run.
* **Reduce `-NumBlocks`** to isolate what the app state looks like at the block before failure.
* **Add a [`Logger`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/x/staking/keeper/keeper.go#L78-L82)** to operations that are not being logged. # Telemetry Source: https://docs.cosmos.network/sdk/latest/guides/testing/telemetry Gather relevant insights about your application and modules with custom metrics and telemetry. ## Overview The `telemetry` package provides observability tooling for Cosmos SDK applications using [OpenTelemetry](https://opentelemetry.io/docs/). It offers a unified initialization point for traces, metrics, and logs via the OpenTelemetry declarative configuration API. This package: * Initializes OpenTelemetry SDK using YAML configuration files * Provides backward compatibility with Cosmos SDK's legacy `go-metrics` wrapper API * Includes built-in instrumentation for host, runtime, and disk I/O metrics ## Quick Start ### 1. Start a Local Telemetry Backend ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} docker run -p 3000:3000 -p 4317:4317 -p 4318:4318 --rm -ti grafana/otel-lgtm ``` ### 2. Create Configuration File Create an `otel.yaml` file: ```yaml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} file_format: "1.0-rc.3" resource: attributes: - name: service.name value: my-cosmos-app tracer_provider: processors: - batch: exporter: otlp_grpc: endpoint: http://localhost:4317 meter_provider: readers: - pull: exporter: prometheus/development: host: 0.0.0.0 port: 9464 logger_provider: processors: - batch: exporter: otlp_grpc: endpoint: http://localhost:4317 extensions: instruments: host: {} runtime: {} diskio: {} propagators: - tracecontext ``` ### 3. Initialize Telemetry **Option A: Environment Variable (Recommended)** Set `OTEL_EXPERIMENTAL_CONFIG_FILE` to your config path. This initializes the SDK before any meters/tracers are created, avoiding atomic load overhead. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} export OTEL_EXPERIMENTAL_CONFIG_FILE=/path/to/otel.yaml ``` **Option B: Node Config Directory** An empty `otel.yaml` is generated in the node's `config/` directory (e.g. `~/.simapp/config/`). Place the desired configuration in `otel.yaml`. **Option C: Programmatic Initialization** The SDK first attempts to initialize via the environment variable, then via the config in the node's home directory. You may optionally initialize telemetry yourself using the `telemetry.InitializeOpenTelemetry` function: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} err := telemetry.InitializeOpenTelemetry("/path/to/otel.yaml") if err != nil { log.Fatal(err) } defer telemetry.Shutdown(context.Background()) ``` ## Configuration ### OpenTelemetry Configuration The package uses the [OpenTelemetry declarative configuration spec](https://opentelemetry.io/docs/languages/sdk-configuration/declarative-configuration/). Key sections: | Section | Purpose | | ----------------- | ------------------------------- | | `resource` | Service identity and attributes | | `tracer_provider` | Trace export configuration | | `meter_provider` | Metrics export configuration | | `logger_provider` | Log export configuration | For examples of the available options, see the [OpenTelemetry configuration examples](https://github.com/open-telemetry/opentelemetry-configuration/tree/main/examples).
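Putting the options together, application wiring might look like the following sketch. This is illustrative, not the SDK's canonical setup: the `telemetry` import path and the config file location are assumptions to adapt to your app.

```go
package main

import (
	"context"
	"log"
	"os"
	"time"

	// Assumed import path; adjust to the telemetry package of your SDK version.
	"github.com/cosmos/cosmos-sdk/telemetry"
)

func main() {
	// Option A: if OTEL_EXPERIMENTAL_CONFIG_FILE is set, the SDK initializes
	// itself from that file. Otherwise fall back to explicit initialization.
	if os.Getenv("OTEL_EXPERIMENTAL_CONFIG_FILE") == "" {
		if err := telemetry.InitializeOpenTelemetry("/path/to/otel.yaml"); err != nil {
			log.Fatal(err)
		}
	}
	defer func() {
		// Bound shutdown so a stuck exporter cannot block process exit.
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		telemetry.Shutdown(ctx)
	}()

	// ... run the node ...
}
```

The timeout around `Shutdown` is a defensive choice, not an SDK requirement: it guarantees a hung OTLP endpoint cannot delay process exit indefinitely.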
### Extensions The `extensions` section of the `otel.yaml` configuration file provides additional features not yet supported by the standard otelconf: ```yaml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} extensions: # Optional file-based exporters trace_file: "/path/to/traces.json" metrics_file: "/path/to/metrics.json" metrics_file_interval: "10s" logs_file: "/path/to/logs.json" # Custom instrumentation additions instruments: host: {} runtime: {} diskio: disable_virtual_device_filter: true # removes the automatic filtering of virtual disks. Operating systems such as Linux typically add virtual disks, which can add duplication to disk io data. These disks usually take the form of loopback, RAID, partitions, etc. # Trace context propagation propagators: - tracecontext - baggage - b3 - jaeger ``` ## Custom Instruments ### Host Instrumentation (`host`) Reports host-level metrics using `go.opentelemetry.io/contrib/instrumentation/host`: * CPU usage * Memory usage * Network I/O ```yaml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} extensions: instruments: host: {} ``` ### Runtime Instrumentation (`runtime`) Reports Go runtime metrics using `go.opentelemetry.io/contrib/instrumentation/runtime`: * Goroutine count * GC statistics * Memory allocations ```yaml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} extensions: instruments: runtime: {} ``` ### Disk I/O Instrumentation (`diskio`) Reports disk I/O metrics using gopsutil: | Metric | Description | | ---------------------------- | ----------------------------- | | `system.disk.io` | Bytes read/written | | `system.disk.operations` | Read/write operation counts | | `system.disk.io_time` | Time spent on I/O operations | | `system.disk.operation_time` | Time per read/write operation | | `system.disk.merged` | Merged read/write operations | ```yaml 
theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} extensions: instruments: diskio: {} # Or with options: diskio: disable_virtual_device_filter: true # Include loopback, RAID, partitions on Linux ``` By default, virtual devices (loopback, RAID, partitions) are filtered out on Linux to avoid double-counting I/O. ## Propagators Configure trace context propagation for distributed tracing: | Propagator | Description | | -------------- | --------------------------- | | `tracecontext` | W3C Trace Context (default) | | `baggage` | W3C Baggage | | `b3` | Zipkin B3 single header | | `b3multi` | Zipkin B3 multi-header | | `jaeger` | Jaeger propagation | ## Developer Usage ### Using Meters and Tracers After initialization, use standard OpenTelemetry APIs: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} import ( "context" "go.opentelemetry.io/otel" "go.opentelemetry.io/otel/metric" ) var ( tracer = otel.Tracer("my-package") meter = otel.Meter("my-package") myCounter metric.Int64Counter ) func init() { var err error myCounter, err = meter.Int64Counter("my.counter") if err != nil { panic(err) } } func MyFunction(ctx context.Context) error { ctx, span := tracer.Start(ctx, "MyFunction") defer span.End() myCounter.Add(ctx, 1) // ... your code return nil } ``` ### Shutdown Always call `Shutdown()` when the application exits: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (a *App) Close() { telemetry.Shutdown(context.Background()) } ``` ## Legacy API (Deprecated) The package provides backward-compatible wrappers for `github.com/hashicorp/go-metrics`. These are **deprecated** and users should migrate to OpenTelemetry APIs directly. ### OpenTelemetry Bridge Cosmos SDK v0.54.0+ provides a bridge to send existing go-metrics to the meter provider defined in your OpenTelemetry config. To bridge your metrics, set the `metrics-sink` in `app.toml` to "otel".
```toml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} ############################################################################### ### Telemetry Configuration ### ############################################################################### [telemetry] # other fields... metrics-sink = "otel" ``` ### Legacy Configuration ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cfg := telemetry.Config{ ServiceName: "my-service", Enabled: true, EnableHostname: true, EnableHostnameLabel: true, EnableServiceLabel: true, PrometheusRetentionTime: 60, // seconds GlobalLabels: [][]string{{"chain_id", "cosmoshub-1"}}, MetricsSink: "otel", // "mem", "statsd", "dogstatsd", "otel" StatsdAddr: "localhost:8125", } m, err := telemetry.New(cfg) ``` ### Legacy Metrics Functions All are deprecated; prefer OpenTelemetry: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Counters telemetry.IncrCounter(1.0, "tx", "count") telemetry.IncrCounterWithLabels([]string{"tx", "count"}, 1.0, labels) // Gauges telemetry.SetGauge(42.0, "mempool", "size") telemetry.SetGaugeWithLabels([]string{"mempool", "size"}, 42.0, labels) // Timing start := telemetry.Now() // ... operation telemetry.MeasureSince(start, "tx", "process_time") // Module-specific helpers telemetry.ModuleMeasureSince("bank", start, "send", "time") telemetry.ModuleSetGauge("bank", 100.0, "balance", "total") ``` ## Metrics Sink Types | Sink | Description | | ----------- | --------------------------------------------------- | | `mem` | In-memory sink with SIGUSR1 dump support (default) | | `statsd` | StatsD protocol | | `dogstatsd` | Datadog DogStatsD | | `otel` | OpenTelemetry (bridges to configured MeterProvider) | ## Best Practices 1. **Use environment variable initialization** for production to avoid atomic load overhead 2. **Always call `Shutdown()`** to ensure metrics/traces are flushed 3. 
**Thread `context.Context`** properly for correct span correlation ## Viewing Telemetry Data With Grafana LGTM running: 1. Open [http://localhost:3000](http://localhost:3000) 2. Use the Drilldown views to explore: * **Traces**: Distributed trace visualization * **Metrics**: Query and dashboard metrics * **Logs**: Structured log search ## Related Documentation * [OpenTelemetry Go SDK](https://opentelemetry.io/docs/languages/go/) * [OpenTelemetry Configuration Spec](https://opentelemetry.io/docs/languages/sdk-configuration/declarative-configuration/) * [otelconf Go Package](https://pkg.go.dev/go.opentelemetry.io/contrib/otelconf) ## Cosmos SDK Metrics The following metrics are emitted from the Cosmos SDK. | Metric | Description | Unit | Type | | :------------------------------ | :---------------------------------------------------------------------------------------- | :-------------- | :------ | | `tx_count` | Total number of txs processed via `FinalizeBlock` | tx | counter | | `tx_successful` | Total number of successful txs processed via `FinalizeBlock` | tx | counter | | `tx_failed` | Total number of failed txs processed via `FinalizeBlock` | tx | counter | | `tx_gas_used` | The total amount of gas used by a tx | gas | gauge | | `tx_gas_wanted` | The total amount of gas requested by a tx | gas | gauge | | `tx_msg_send` | The total amount of tokens sent in a `MsgSend` (per denom) | token | gauge | | `tx_msg_withdraw_reward` | The total amount of tokens withdrawn in a `MsgWithdrawDelegatorReward` (per denom) | token | gauge | | `tx_msg_withdraw_commission` | The total amount of tokens withdrawn in a `MsgWithdrawValidatorCommission` (per denom) | token | gauge | | `tx_msg_delegate` | The total amount of tokens delegated in a `MsgDelegate` | token | gauge | | `tx_msg_begin_unbonding` | The total amount of tokens undelegated in a `MsgUndelegate` | token | gauge | | `tx_msg_begin_redelegate` | The total amount of tokens redelegated in a `MsgBeginRedelegate` | 
token | gauge | | `tx_msg_ibc_transfer` | The total amount of tokens transferred via IBC in a `MsgTransfer` (source or sink chain) | token | gauge | | `ibc_transfer_packet_receive` | The total amount of tokens received in a `FungibleTokenPacketData` (source or sink chain) | token | gauge | | `new_account` | Total number of new accounts created | account | counter | | `gov_proposal` | Total number of governance proposals | proposal | counter | | `gov_vote` | Total number of governance votes for a proposal | vote | counter | | `gov_deposit` | Total number of governance deposits for a proposal | deposit | counter | | `staking_delegate` | Total number of delegations | delegation | counter | | `staking_undelegate` | Total number of undelegations | undelegation | counter | | `staking_redelegate` | Total number of redelegations | redelegation | counter | | `ibc_transfer_send` | Total number of IBC transfers sent from a chain (source or sink) | transfer | counter | | `ibc_transfer_receive` | Total number of IBC transfers received to a chain (source or sink) | transfer | counter | | `ibc_client_create` | Total number of clients created | create | counter | | `ibc_client_update` | Total number of client updates | update | counter | | `ibc_client_upgrade` | Total number of client upgrades | upgrade | counter | | `ibc_client_misbehaviour` | Total number of client misbehaviors | misbehaviour | counter | | `ibc_connection_open-init` | Total number of connection `OpenInit` handshakes | handshake | counter | | `ibc_connection_open-try` | Total number of connection `OpenTry` handshakes | handshake | counter | | `ibc_connection_open-ack` | Total number of connection `OpenAck` handshakes | handshake | counter | | `ibc_connection_open-confirm` | Total number of connection `OpenConfirm` handshakes | handshake | counter | | `ibc_channel_open-init` | Total number of channel `OpenInit` handshakes | handshake | counter | | `ibc_channel_open-try` | Total number of channel `OpenTry` 
handshakes | handshake | counter | | `ibc_channel_open-ack` | Total number of channel `OpenAck` handshakes | handshake | counter | | `ibc_channel_open-confirm` | Total number of channel `OpenConfirm` handshakes | handshake | counter | | `ibc_channel_close-init` | Total number of channel `CloseInit` handshakes | handshake | counter | | `ibc_channel_close-confirm` | Total number of channel `CloseConfirm` handshakes | handshake | counter | | `tx_msg_ibc_recv_packet` | Total number of IBC packets received | packet | counter | | `tx_msg_ibc_acknowledge_packet` | Total number of IBC packets acknowledged | acknowledgement | counter | | `ibc_timeout_packet` | Total number of IBC timeout packets | timeout | counter | | `store_iavl_get` | Duration of an IAVL `Store#Get` call | ms | summary | | `store_iavl_set` | Duration of an IAVL `Store#Set` call | ms | summary | | `store_iavl_has` | Duration of an IAVL `Store#Has` call | ms | summary | | `store_iavl_delete` | Duration of an IAVL `Store#Delete` call | ms | summary | | `store_iavl_commit` | Duration of an IAVL `Store#Commit` call | ms | summary | | `store_iavl_query` | Duration of an IAVL `Store#Query` call | ms | summary | # Writing CLI Commands Source: https://docs.cosmos.network/sdk/latest/guides/tooling/autocli For a conceptual overview of how CLI, gRPC, and REST fit together in a Cosmos SDK app, see [CLI, gRPC & REST](/sdk/latest/learn/concepts/cli-grpc-rest). ## Overview `autocli` generates CLI commands and flags for each method defined in your gRPC service. By default, it generates a command for each gRPC service method. The commands are named based on the name of the service method. For example, given the following protobuf definition for a service: ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} service MyService { rpc MyMethod(MyRequest) returns (MyResponse) {} } ``` The `autocli` package will generate a command named `my-method` for the `MyMethod` method. 
The command will have flags for each field in the `MyRequest` message. It is possible to customize the generation of transactions and queries by defining options for each service. ## Application Wiring Here are the steps to use AutoCLI: 1. Ensure your app's modules implement the `appmodule.AppModule` interface. 2. (optional) Configure how `autocli` behaves during command generation, by implementing the `func (am AppModule) AutoCLIOptions() *autocliv1.ModuleOptions` method on the module. 3. Call `app.AutoCliOpts()` to get an `autocli.AppOptions` populated from the module manager, then set `ClientCtx` on it to wire in the keyring. 4. Call `EnhanceRootCommand()` to add the generated CLI commands to your root command. AutoCLI is additive only, meaning *enhancing* the root command will only add subcommands that are not already registered. This means that you can use AutoCLI alongside other custom commands within your app. In practice this looks like (from the [example chain](https://github.com/cosmos/example/blob/main/exampled/cmd/root.go)): ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} autoCliOpts := app.AutoCliOpts() autoCliOpts.ClientCtx = initClientCtx // wires keyring + node connection if err := autoCliOpts.EnhanceRootCommand(rootCmd); err != nil { panic(err) } ``` ### Keyring AutoCLI resolves key names and signs transactions using the keyring from `client.Context`. At runtime, it reads the keyring from the command's live context (set by `SetCmdClientContextHandler` in `PersistentPreRunE` — see [Root Command Setup](#root-command-setup)) and adapts it to the [`cosmossdk.io/client/v2/autocli/keyring`](https://pkg.go.dev/cosmossdk.io/client/v2/autocli/keyring) interface via `keyring.NewAutoCLIKeyring` internally. If no keyring is provided, AutoCLI-generated commands can still query the chain but cannot sign transactions. 
Because AutoCLI resolves key names from the keyring, you can use account names directly instead of addresses: ```sh theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} q bank balances alice tx bank send alice bob 1000denom ``` ## Signing `autocli` supports signing transactions with the keyring. The [`cosmos.msg.v1.signer` protobuf annotation](/sdk/latest/guides/reference/protobuf-annotations) defines the signer field of the message. This field is automatically filled when using the `--from` flag or defining the signer as a positional argument. AutoCLI currently supports only one signer per transaction. ## Module wiring & Customization The `AutoCLIOptions()` method on your module lets you specify custom commands, sub-commands, or flags for each service, as if it were a `cobra.Command` instance, via the `RpcCommandOptions` struct. Defining such options customizes the behavior of the `autocli` command generation, which by default generates a command for each method in your gRPC service. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} autocliv1.RpcCommandOptions{ RpcMethod: "Params", // The name of the gRPC method Use: "params", // Command usage that is displayed in the help Short: "Query the parameters of the governance process", // Short description of the command Long: "Query the parameters of the governance process. Specify specific param types (voting|tallying|deposit) to filter results.", // Long description of the command PositionalArgs: []*autocliv1.PositionalArgDescriptor{ { ProtoField: "params_type", Optional: true }, // Transform a flag into a positional argument }, } ``` AutoCLI can create a gov proposal of any tx by simply setting the `GovProposal` field to `true` in the `autocliv1.RpcCommandOptions` struct. Users can, however, use the `--no-proposal` flag to disable the proposal creation (which is useful if the authority isn't the gov module on a chain).
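As a concrete sketch of the proposal wiring described above (the method name and usage string here are illustrative, not taken from a real module):

```go
autocliv1.RpcCommandOptions{
	RpcMethod: "UpdateParams",
	Use:       "update-params-proposal [params]",
	Short:     "Submit a proposal to update the module parameters",
	// Wrap the generated message in a governance proposal.
	// Users can opt out at the CLI with --no-proposal.
	GovProposal: true,
}
```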
### Specifying Subcommands By default, `autocli` generates a command for each method in your gRPC service. However, you can specify subcommands to group related commands together. To specify subcommands, use the `autocliv1.ServiceCommandDescriptor` struct. For a real-world example, see the `gov` module's [`autocli.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/x/gov/autocli.go) in the Cosmos SDK. It demonstrates `ServiceCommandDescriptor` with `RpcCommandOptions`, `PositionalArgs`, `SubCommands`, `EnhanceCustomCommand`, and `GovProposal` all in one file. ### Positional Arguments By default `autocli` generates a flag for each field in your protobuf message. However, you can choose to use positional arguments instead of flags for certain fields. To add positional arguments to a command, use the `autocliv1.PositionalArgDescriptor` struct, as seen in the example below. Specify the `ProtoField` parameter, which is the name of the protobuf field that should be used as the positional argument. In addition, if the parameter is a variable-length argument, you can specify the `Varargs` parameter as `true`. This can only be applied to the last positional parameter, and the `ProtoField` must be a repeated field. For a real-world example, see the `auth` module's [`autocli.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/x/auth/autocli.go) in the Cosmos SDK. It shows positional args wired for every query method, with `address` as a positional argument on the `Account` method. After wiring positional args, the command can be used as follows, instead of having to specify the `--address` flag: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} query auth account cosmos1abcd...xyz ``` #### Flattened Fields in Positional Arguments AutoCLI also supports flattening nested message fields as positional arguments. This means you can access nested fields using dot notation in the `ProtoField` parameter. 
This is particularly useful when you want to directly set nested message fields as positional arguments. For example, if you have a nested message structure like this: ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} message Permissions { string level = 1; repeated string limit_type_urls = 2; } message MsgAuthorizeCircuitBreaker { string grantee = 1; Permissions permissions = 2; } ``` You can flatten the fields in your AutoCLI configuration: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { RpcMethod: "AuthorizeCircuitBreaker", Use: "authorize [grantee] [level] [msg_type_urls]", PositionalArgs: []*autocliv1.PositionalArgDescriptor{ {ProtoField: "grantee"}, {ProtoField: "permissions.level"}, {ProtoField: "permissions.limit_type_urls", Varargs: true}, }, } ``` This allows users to provide values for nested fields directly as positional arguments: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} tx circuit authorize cosmos1... super-admin "/cosmos.bank.v1beta1.MsgSend" "/cosmos.bank.v1beta1.MsgMultiSend" ``` Instead of having to provide a complex JSON structure for nested fields, flattening makes the CLI more user-friendly by allowing direct access to nested fields. #### Customizing Flag Names By default, `autocli` generates flag names based on the names of the fields in your protobuf message. However, you can customize the flag names by providing a `FlagOptions`. This parameter allows you to specify custom names for flags based on the names of the message fields. 
For example, if you have a message with the fields `test` and `test1`, you can use the following naming options to customize the flags: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} autocliv1.RpcCommandOptions{ FlagOptions: map[string]*autocliv1.FlagOptions{ "test": { Name: "custom_name", }, "test1": { Name: "other_name", }, }, } ``` ### Combining AutoCLI with Other Commands Within A Module AutoCLI can be used alongside other commands within a module. For example, the `gov` module uses AutoCLI for its query commands while also keeping hand-written tx commands for `submit-proposal`, `weighted-vote`, and similar. Set `EnhanceCustomCommand: true` on each `ServiceCommandDescriptor` where you want AutoCLI to add generated commands alongside existing ones: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (am AppModule) AutoCLIOptions() *autocliv1.ModuleOptions { return &autocliv1.ModuleOptions{ Query: &autocliv1.ServiceCommandDescriptor{ Service: govv1.Query_ServiceDesc.ServiceName, EnhanceCustomCommand: true, // keep hand-written gov query commands RpcCommandOptions: []*autocliv1.RpcCommandOptions{ /* ... */ }, }, Tx: &autocliv1.ServiceCommandDescriptor{ Service: govv1.Msg_ServiceDesc.ServiceName, EnhanceCustomCommand: true, // keep hand-written gov tx commands }, } } ``` If `EnhanceCustomCommand` is not set to `true`, AutoCLI skips command generation for any service that already has commands registered via `GetTxCmd()` or `GetQueryCmd()`. ### Skip a command AutoCLI checks the [`cosmos_proto.method_added_in` protobuf annotation](/sdk/latest/guides/reference/protobuf-annotations) and skips commands that were introduced in a newer SDK version than the one currently running. 
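For reference, the version gate is declared on the RPC method in the protobuf definition. A sketch, assuming the standard `cosmos_proto` import and an illustrative method name; check the exact version-string format your SDK expects:

```protobuf
import "cosmos_proto/cosmos.proto";

service Query {
  // AutoCLI skips this command when the running SDK predates the version below.
  rpc NewFeature(QueryNewFeatureRequest) returns (QueryNewFeatureResponse) {
    option (cosmos_proto.method_added_in) = "cosmos-sdk v0.54";
  }
}
```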
Additionally, a command can be manually skipped using the `autocliv1.RpcCommandOptions`: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} autocliv1.RpcCommandOptions{ RpcMethod: "Params", // The name of the gRPC method Skip: true, } ``` ### Use AutoCLI for non module commands It is possible to use `AutoCLI` for non-module commands. The pattern is to add the options directly to `autoCliOpts.ModuleOptions` after calling `AutoCliOpts()`: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} nodeCmds := nodeservice.NewNodeCommands() autoCliOpts.ModuleOptions[nodeCmds.Name()] = nodeCmds.AutoCLIOptions() ``` `AutoCliOpts()` only picks up modules registered with the module manager — non-module commands always need to be added to `ModuleOptions` manually, as the example chain does with `nodeservice.NewNodeCommands()`. For a more complete example of this pattern, see [`client/grpc/cmtservice/autocli.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/client/grpc/cmtservice/autocli.go) and [`client/grpc/node/autocli.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/client/grpc/node/autocli.go) in the Cosmos SDK. ## Root Command Setup For AutoCLI-generated commands (and hand-written commands) to work correctly — signing transactions, querying the chain, reading configuration — the root command must set up the `client.Context` and `server.Context` in a `PersistentPreRunE` function. This runs before every subcommand and makes both contexts available to all child commands. See [`simapp/simd/cmd/root.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/simapp/simd/cmd/root.go#L50-L93) for a complete example. The two key calls inside `PersistentPreRun` are: * `SetCmdClientContextHandler` reads persistent flags via `ReadPersistentCommandFlags`, creates a `client.Context`, and sets it on the command context. 
This is what AutoCLI and hand-written commands use to sign transactions and connect to a node. * `InterceptConfigsPreRunHandler` creates the `server.Context`, loads `app.toml` and `config.toml` from the node home directory, and binds them to the server context's viper instance. This is what makes application configuration available at startup. ### Custom logger By default, `InterceptConfigsPreRunHandler` sets the default SDK logger. To use a custom logger, use `InterceptConfigsAndCreateContext` instead and set the logger manually: ```diff expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} -return server.InterceptConfigsPreRunHandler(cmd, customAppTemplate, customAppConfig, customCMTConfig) +serverCtx, err := server.InterceptConfigsAndCreateContext(cmd, customAppTemplate, customAppConfig, customCMTConfig) +if err != nil { + return err +} +// overwrite default server logger +logger, err := server.CreateSDKLogger(serverCtx, cmd.OutOrStdout()) +if err != nil { + return err +} +serverCtx.Logger = logger.With(log.ModuleKey, "server") +// set server context +return server.SetCmdServerContext(cmd, serverCtx) ``` ## Environment Variables Every CLI flag is automatically bound to an environment variable. The variable name is the app's `basename` in uppercase followed by the flag name, with `-` replaced by `_`. For example, for an app with basename `gaia`, the `--node` flag binds to `GAIA_NODE`. This lets you pre-configure common flags instead of passing them on every command: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} # set once in .env or shell profile GAIA_HOME= GAIA_NODE= GAIA_CHAIN_ID="cosmoshub-4" GAIA_KEYRING_BACKEND="test" # then just run gaiad tx bank send alice bob 1000uatom --fees 500uatom ``` ## Hand-Written Commands AutoCLI covers the standard case: one protobuf RPC method maps to one CLI command. 
For commands that don't fit that model, you can write Cobra commands manually and combine them with AutoCLI using `EnhanceCustomCommand: true`. Common reasons to write a command manually: * **Complex argument parsing** — multiple positional args that require custom validation or coin parsing before the message is built * **Commands that span multiple RPC calls** — e.g., building a transaction from inputs that require a preceding query * **Non-standard UX** — interactive prompts, offline signing flows, or commands that generate output rather than broadcast ### Pattern A manual transaction command uses `client.GetClientTxContext` to retrieve the signing context, constructs a message, and passes it to `tx.GenerateOrBroadcastTxCLI`: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func NewSendTxCmd(ac address.Codec) *cobra.Command { cmd := &cobra.Command{ Use: "send [from_key_or_address] [to_address] [amount]", Short: "Send tokens from one account to another", Args: cobra.ExactArgs(3), RunE: func(cmd *cobra.Command, args []string) error { // set --from from the first positional arg if err := cmd.Flags().Set(flags.FlagFrom, args[0]); err != nil { return err } clientCtx, err := client.GetClientTxContext(cmd) if err != nil { return err } toAddr, err := ac.StringToBytes(args[1]) if err != nil { return err } coins, err := sdk.ParseCoinsNormalized(args[2]) if err != nil { return err } msg := types.NewMsgSend(clientCtx.GetFromAddress(), toAddr, coins) return tx.GenerateOrBroadcastTxCLI(clientCtx, cmd.Flags(), msg) }, } flags.AddTxFlagsToCmd(cmd) return cmd } ``` Key elements: * `client.GetClientTxContext(cmd)` retrieves the client context (signer, node connection, codec) * `flags.AddTxFlagsToCmd(cmd)` adds standard transaction flags (`--from`, `--fees`, `--gas`, etc.) 
* `tx.GenerateOrBroadcastTxCLI` handles both `--generate-only` (offline) and live broadcast modes # Confix Source: https://docs.cosmos.network/sdk/latest/guides/tooling/confix Confix is a configuration management tool that allows you to manage your configuration via CLI. `Confix` is a CLI tool for managing your application's configuration files. It is based on the [CometBFT RFC 019](https://github.com/cometbft/cometbft/blob/5013bc3f4a6d64dcc2bf02ccc002ebc9881c62e4/docs/rfc/rfc-019-config-version.md). ## Installation ### Add Config Command To add the confix tool, add the `ConfigCommand` to your application's root command file (e.g. `/cmd/root.go`). Import the `confixCmd` package: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} import confixcmd "cosmossdk.io/tools/confix/cmd" ``` Inside your `initRootCmd` function, add the command to the root: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} rootCmd.AddCommand( confixcmd.ConfigCommand(), ) ``` The `ConfigCommand` function builds the `config` root command and is defined in the `confixcmd` package (`cosmossdk.io/tools/confix/cmd`). An implementation example can be found in `simapp`. The command will be available as `simd config`. Confix embedded in an application may have fewer features than the standalone tool, because the embedded version is pinned to the SDK release while the standalone `latest` tracks the newest confix. ### Using Confix Standalone To use Confix standalone, without having to add it in your application, install it with the following command: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} go install cosmossdk.io/tools/confix/cmd/confix@latest ``` Alternatively, to build from source, run `make confix`. The binary will be located in `tools/confix`. 
## Usage

Use standalone:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
confix --help
```

Use in simd:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd config --help
```

### Get

Get a configuration value, e.g.:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd config get app pruning # gets the value of pruning from app.toml
simd config get client chain-id # gets the value of chain-id from client.toml
```

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
confix get ~/.simapp/config/app.toml pruning # gets the value of pruning from app.toml
confix get ~/.simapp/config/client.toml chain-id # gets the value of chain-id from client.toml
```

### Set

Set a configuration value, e.g.:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd config set app pruning "enabled" # sets the value of pruning in app.toml
simd config set client chain-id "foo-1" # sets the value of chain-id in client.toml
```

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
confix set ~/.simapp/config/app.toml pruning "enabled" # sets the value of pruning in app.toml
confix set ~/.simapp/config/client.toml chain-id "foo-1" # sets the value of chain-id in client.toml
```

### Migrate

Migrate a configuration file to a new version. The config type defaults to `app.toml`; to migrate `client.toml` instead, indicate it by adding the optional flag, e.g.:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd config migrate v0.53 # migrates defaultHome/config/app.toml to the latest v0.53 config
simd config migrate v0.53 --client # migrates defaultHome/config/client.toml to the latest v0.53 config
```

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
confix migrate v0.53 ~/.simapp/config/app.toml # migrates ~/.simapp/config/app.toml to the latest v0.53 config
confix migrate v0.53 ~/.simapp/config/client.toml --client # migrates ~/.simapp/config/client.toml to the latest v0.53 config
```

### Diff

Get the diff between a given configuration file and the default configuration file, e.g.:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd config diff v0.53 # gets the diff between defaultHome/config/app.toml and the latest v0.53 config
simd config diff v0.53 --client # gets the diff between defaultHome/config/client.toml and the latest v0.53 config
```

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
confix diff v0.53 ~/.simapp/config/app.toml # gets the diff between ~/.simapp/config/app.toml and the latest v0.53 config
confix diff v0.53 ~/.simapp/config/client.toml --client # gets the diff between ~/.simapp/config/client.toml and the latest v0.53 config
```

### View

View a configuration file, e.g.:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd config view client # views the current client config
```

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
confix view ~/.simapp/config/client.toml # views the current client config
```

### Maintainer

Whenever the SDK's default configuration changes, add the new default config under `data/vXX-app.toml`. This allows users to use the tool standalone.

## Credits

This project is based on the [CometBFT RFC 019](https://github.com/cometbft/cometbft/blob/5013bc3f4a6d64dcc2bf02ccc002ebc9881c62e4/docs/rfc/rfc-019-config-version.md) and CometBFT's own, never-released implementation of [confix](https://github.com/cometbft/cometbft/blob/v0.36.x/scripts/confix/confix.go).
# Tool Guide Source: https://docs.cosmos.network/sdk/latest/guides/tooling/tool-guide What tools should I use and for what? A practical guide to the Cosmos SDK developer toolbox. A practical reference for Cosmos chain and module developers: what each tool does and when to reach for it. ## Code generation **[Buf](https://buf.build/cosmos/cosmos-sdk/docs/main)** — Compiles `.proto` files into Go types, gRPC stubs, and REST gateway code. The standard way to run `proto-gen` in a Cosmos project. Also lints and formats proto files, and publishes generated docs to the Buf registry. See the [Protobuf Documentation](https://buf.build/cosmos/cosmos-sdk/docs/main) on the Buf registry. **[Protobuf Annotations](/sdk/latest/guides/reference/protobuf-annotations)** — Cosmos SDK-specific proto field options (scalar descriptors, amino names, query pagination, etc.) that affect code generation output. Consult this when writing `.proto` files for a new module. **[AutoCLI](/sdk/latest/guides/tooling/autocli)** — Generates CLI commands and gRPC-gateway routes for your module's messages and queries directly from proto definitions. Use it instead of hand-writing CLI commands — it also handles pagination, output formatting, and custom flag mappings. ## Client library **[CosmJS](https://github.com/cosmos/cosmjs)** — The official JavaScript and TypeScript library for building clients, frontends, and scripts that interact with Cosmos chains. Handles transaction signing, broadcasting, querying, and wallet integration in browser and Node.js environments. ## State management **[Collections](/sdk/latest/guides/state/collections)** — A typed abstraction over raw `KVStore` access. Handles key encoding, prefix isolation, iteration, and secondary indexes. Also produces a schema used automatically by simulation decoders. Use it for all new module state instead of raw byte keys. 
**[Store](/sdk/latest/guides/state/store)** — Reference documentation for the SDK store layer: `KVStore`, `CommitMultiStore`, `CacheKVStore`, IAVL, pruning strategies, and store versioning. Read this when you need to understand what is happening under the collections abstraction or need to work with stores directly. ## Testing **[Testing](/sdk/latest/learn/concepts/testing)** — The SDK's testing conventions: unit tests for keepers and message servers, integration tests wired with `depinject`, and end-to-end tests using the `testnet` package. **[Module Simulation](/sdk/latest/guides/testing/simulator)** — A fuzz-testing framework that runs your module's messages with randomized inputs and genesis states. Checks for panics, non-determinism, and import/export inconsistencies. Use it to catch edge cases that unit tests miss. ## Node setup and operations **[Prerequisites](/sdk/latest/node/prerequisites)** — Required software and environment setup before running a node. **[Run a Node](/sdk/latest/node/run-node)** — How to initialize a chain, configure genesis, and start a node with `simd`. **[Run a Testnet](/sdk/latest/node/run-testnet)** — Running a local multi-node testnet using `simd testnet`. **[Production Deployment](/sdk/latest/node/run-production)** — Hardening and deployment guidance for running a node in production: systemd, state sync, backup strategies, and security considerations. **[Cosmovisor](/sdk/latest/guides/upgrades/cosmovisor)** — A process manager for your chain binary that watches for on-chain upgrade proposals and automatically swaps in the new binary at the correct upgrade height. Required for zero-downtime upgrades in production. **[Confix](/sdk/latest/guides/tooling/confix)** — A CLI tool for reading, setting, migrating, and diffing `app.toml` and `client.toml` configuration files across SDK versions. Use it when upgrading a node between SDK versions or scripting config changes. 
## Keys and transactions **[Keyring](/sdk/latest/node/keyring)** — The SDK's key management layer. Covers keyring backends (`os`, `file`, `test`, `memory`), key types, and how to manage keys via `simd keys`. Use this to understand key storage security trade-offs in production deployments. **[Building Transactions](/sdk/latest/node/txs)** — How to programmatically construct, sign, encode, and broadcast transactions using the SDK's `TxBuilder` and `TxConfig` APIs. **[Interacting with a Node](/sdk/latest/node/interact-node)** — Using the CLI and gRPC to query state and broadcast transactions against a running node. ## Observability **[Telemetry](/sdk/latest/guides/testing/telemetry)** — OpenTelemetry-based metrics for the SDK and your modules. Emit counters, gauges, and histograms from keeper methods. Integrates with Prometheus and any OTLP-compatible backend. **[Logging](/sdk/latest/guides/testing/log)** — Structured logging via `cosmossdk.io/log` (backed by zerolog). Use it in keepers and servers to emit structured log lines, with support for log correlation and OpenTelemetry log export. ## IBC **[IBC Go](/ibc)** — The canonical IBC implementation for Cosmos SDK chains. Use it to add cross-chain token transfers and arbitrary message passing to your chain. # Cosmovisor Source: https://docs.cosmos.network/sdk/latest/guides/upgrades/cosmovisor `cosmovisor` is a process manager for Cosmos SDK application binaries that automates application binary switch at chain upgrades. It polls the `upgrade-info.json` file that is created by the x/upgrade module at upgrade height, and then can automatically download the new binary, stop the current binary, switch from the old binary to the new one, and finally restart the node with the new binary. 
* [Design](#design)
* [Contributing](#contributing)
* [Setup](#setup)
  * [Installation](#installation)
  * [Command Line Arguments And Environment Variables](#command-line-arguments-and-environment-variables)
  * [Folder Layout](#folder-layout)
* [Usage](#usage)
  * [Initialization](#initialization)
  * [Detecting Upgrades](#detecting-upgrades)
  * [Adding Upgrade Binary](#adding-upgrade-binary)
  * [Auto-Download](#auto-download)
  * [Preparing for an Upgrade](#preparing-for-an-upgrade)
* [Example: SimApp Upgrade](#example-simapp-upgrade)
  * [Chain Setup](#chain-setup)
    * [Prepare Cosmovisor and Start the Chain](#prepare-cosmovisor-and-start-the-chain)
  * [Update App](#update-app)

## Design

Cosmovisor is designed to be used as a wrapper for a `Cosmos SDK` app:

* it will pass arguments to the associated app (configured by the `DAEMON_NAME` env variable). Running `cosmovisor run arg1 arg2 ...` will run `app arg1 arg2 ...`;
* it will manage an app by restarting and upgrading if needed;
* it is configured using environment variables, not positional arguments.

*Note: If new versions of the application are not set up to run in-place store migrations, migrations will need to be run manually before restarting `cosmovisor` with the new binary. For this reason, we recommend applications adopt in-place store migrations.*

Only the latest version of cosmovisor is actively developed/maintained. Versions prior to v1.0.0 have a vulnerability that could lead to a DoS. Please upgrade to the latest version.

## Contributing

Cosmovisor is part of the Cosmos SDK monorepo, but it's a separate module with its own release schedule. Release branches have the format `release/cosmovisor/vA.B.x`, where A and B are numbers (e.g. `release/cosmovisor/v1.3.x`). Releases are tagged using the format `cosmovisor/vA.B.C`.

## Setup

### Installation

You can download Cosmovisor from the [GitHub releases](https://github.com/cosmos/cosmos-sdk/releases/tag/cosmovisor%2Fv1.5.0).
To install the latest version of `cosmovisor`, run the following command:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
go install cosmossdk.io/tools/cosmovisor/cmd/cosmovisor@latest
```

To install a specific version, specify it explicitly:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
go install cosmossdk.io/tools/cosmovisor/cmd/cosmovisor@v1.5.0
```

Run `cosmovisor version` to check the cosmovisor version.

Alternatively, to build from source, simply run `make cosmovisor`. The binary will be located in `tools/cosmovisor`.

Installing cosmovisor using `go install` will display the correct `cosmovisor` version. Building from source (`make cosmovisor`) or installing `cosmovisor` by other means won't display the correct version.

### Command Line Arguments And Environment Variables

The first argument passed to `cosmovisor` is the action for `cosmovisor` to take. Options are:

* `help`, `--help`, or `-h` - Output `cosmovisor` help information and check your `cosmovisor` configuration.
* `run` - Run the configured binary using the rest of the provided arguments.
* `version` - Output the `cosmovisor` version and also run the binary with the `version` argument.
* `config` - Display the current `cosmovisor` configuration, i.e. the values of the environment variables that `cosmovisor` is using.
* `add-upgrade` - Add an upgrade manually to `cosmovisor`. This command allows you to easily add the binary corresponding to an upgrade in cosmovisor.
* `add-batch-upgrade` - Add multiple upgrades at once.
* `show-upgrade-info` - Show the current upgrade info from the upgrade-info.json file.

All arguments passed to `cosmovisor run` will be passed to the application binary (as a subprocess). `cosmovisor` will return `/dev/stdout` and `/dev/stderr` of the subprocess as its own.
For this reason, `cosmovisor run` cannot accept any command-line arguments other than those available to the application binary.

`cosmovisor` reads its configuration from environment variables, or from its configuration file (use `--cosmovisor-config <path>`):

* `DAEMON_HOME` is the location where the `cosmovisor/` directory is kept that contains the genesis binary, the upgrade binaries, and any additional auxiliary files associated with each binary (e.g. `$HOME/.gaiad`, `$HOME/.regend`, `$HOME/.simd`, etc.).
* `DAEMON_NAME` is the name of the binary itself (e.g. `gaiad`, `regend`, `simd`, etc.).
* `DAEMON_ALLOW_DOWNLOAD_BINARIES` (*optional*), if set to `true`, will enable auto-downloading of new binaries (for security reasons, this is intended for full nodes rather than validators). By default, `cosmovisor` will not auto-download new binaries.
* `DAEMON_DOWNLOAD_MUST_HAVE_CHECKSUM` (*optional*, default = `false`), if `true`, cosmovisor will require that a checksum is provided in the upgrade plan for the binary to be downloaded. If `false`, cosmovisor will not require a checksum, but will still verify the checksum if one is provided.
* `DAEMON_RESTART_AFTER_UPGRADE` (*optional*, default = `true`), if `true`, restarts the subprocess with the same command-line arguments and flags (but with the new binary) after a successful upgrade. Otherwise (`false`), `cosmovisor` stops running after an upgrade and requires the system administrator to manually restart it. Note that the restart only happens after an upgrade; `cosmovisor` does not auto-restart the subprocess after an error occurs.
* `DAEMON_RESTART_DELAY` (*optional*, default none), allows a node operator to define a delay between the node halt (for upgrade) and the backup. The value must be a duration (e.g. `1s`).
* `DAEMON_SHUTDOWN_GRACE` (*optional*, default none), if set, sends an interrupt to the binary and waits the specified time to allow for cleanup/cache flush to disk before sending the kill signal.
The value must be a duration (e.g. `1s`).

* `DAEMON_POLL_INTERVAL` (*optional*, default 300 milliseconds), is the interval length for polling the upgrade plan file. The value must be a duration (e.g. `1s`).
* `DAEMON_DATA_BACKUP_DIR`, option to set a custom backup directory. If not set, `DAEMON_HOME` is used.
* `UNSAFE_SKIP_BACKUP` (defaults to `false`), if set to `true`, upgrades directly without performing a backup. Otherwise (`false`, default), backs up the data before trying the upgrade. The default value of `false` is useful and recommended in case of failures and when a backup is needed to roll back. We recommend using the default backup option `UNSAFE_SKIP_BACKUP=false`.
* `DAEMON_PREUPGRADE_MAX_RETRIES` (defaults to `0`). The maximum number of times to retry [`pre-upgrade`](#pre-upgrade-handling) after an exit status of `31`. With the default of `0`, a single exit-31 result immediately fails the upgrade; once retries are exhausted, Cosmovisor fails the upgrade.
* `DAEMON_GRPC_ADDRESS` (*optional*, default `localhost:9090`). The gRPC address of the node, used by the `prepare-upgrade` command and the batch upgrade watcher.
* `COSMOVISOR_DISABLE_LOGS` (defaults to `false`). If set to `true`, this will disable Cosmovisor logs (but not the underlying process) completely. This may be useful, for example, when a Cosmovisor subcommand you are executing returns valid JSON that you are then parsing, as logs added by Cosmovisor would make the output invalid JSON.
* `COSMOVISOR_COLOR_LOGS` (defaults to `true`). If set to `true`, this will colorize Cosmovisor logs (but not the underlying process).
* `COSMOVISOR_TIMEFORMAT_LOGS` (defaults to `kitchen`). If set to a value (`layout|ansic|unixdate|rubydate|rfc822|rfc822z|rfc850|rfc1123|rfc1123z|rfc3339|rfc3339nano|kitchen`), this will add a timestamp prefix to Cosmovisor logs (but not the underlying process).
* `COSMOVISOR_CUSTOM_PREUPGRADE` (defaults to empty). If set, this will run `$DAEMON_HOME/cosmovisor/$COSMOVISOR_CUSTOM_PREUPGRADE` prior to the upgrade with the arguments `[ upgrade.Name, upgrade.Height ]`. This executes a custom script, separate from and prior to the chain daemon's pre-upgrade command.
* `COSMOVISOR_DISABLE_RECASE` (defaults to `false`). If set to `true`, the upgrade directory will be expected to match the upgrade plan name exactly, without any case changes.

### Folder Layout

`$DAEMON_HOME/cosmovisor` is expected to belong completely to `cosmovisor` and the subprocesses that are controlled by it. The folder content is organized as follows:

```text expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
.
├── current -> genesis or upgrades/<name>
├── genesis
│   └── bin
│       └── $DAEMON_NAME
├── upgrades
│   └── <name>
│       ├── bin
│       │   └── $DAEMON_NAME
│       └── upgrade-info.json
└── preupgrade.sh (optional)
```

The `cosmovisor/` directory includes a subdirectory for each version of the application (i.e. `genesis` or `upgrades/<name>`). Within each subdirectory is the application binary (i.e. `bin/$DAEMON_NAME`) and any additional auxiliary files associated with each binary. `current` is a symbolic link to the currently active directory (i.e. `genesis` or `upgrades/<name>`). The `name` variable in `upgrades/<name>` is the lowercased URI-encoded name of the upgrade as specified in the upgrade module plan. Note that upgrade name paths are normalized to be lowercased: for instance, `MyUpgrade` is normalized to `myupgrade`, and its path is `upgrades/myupgrade`.

Please note that `$DAEMON_HOME/cosmovisor` only stores the *application binaries*. The `cosmovisor` binary itself can be stored in any typical location (e.g. `/usr/local/bin`). The application will continue to store its data in the default data directory (e.g. `$HOME/.simapp`) or the data directory specified with the `--home` flag.
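The name normalization described above (lowercase the plan name, then URI-encode it) can be sketched in Go. `upgradeBinPath` is a hypothetical helper for illustration, not cosmovisor's actual code:

```go
package main

import (
	"fmt"
	"net/url"
	"path/filepath"
	"strings"
)

// upgradeBinPath returns where an upgrade binary is expected to live:
// the plan name is lowercased and URI-encoded before being used as a
// directory name under $DAEMON_HOME/cosmovisor/upgrades.
func upgradeBinPath(daemonHome, daemonName, planName string) string {
	name := url.PathEscape(strings.ToLower(planName))
	return filepath.Join(daemonHome, "cosmovisor", "upgrades", name, "bin", daemonName)
}

func main() {
	// "MyUpgrade" is normalized to "myupgrade", as in the example above.
	fmt.Println(upgradeBinPath("/home/user/.simapp", "simd", "MyUpgrade"))
	// → /home/user/.simapp/cosmovisor/upgrades/myupgrade/bin/simd
}
```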
If `$DAEMON_HOME` is set to the same directory as the application's data directory, you will end up with a configuration like the following:

```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
.simapp
├── config
├── data
└── cosmovisor
```

## Usage

The system administrator is responsible for:

* installing the `cosmovisor` binary
* configuring the host's init system (e.g. `systemd`, `launchd`, etc.)
* appropriately setting the environment variables
* creating the `<DAEMON_HOME>/cosmovisor` directory
* creating the `<DAEMON_HOME>/cosmovisor/genesis/bin` folder
* creating the `<DAEMON_HOME>/cosmovisor/upgrades/<name>/bin` folders
* placing the different versions of the `<DAEMON_NAME>` executable in the appropriate `bin` folders.

`cosmovisor` will set the `current` link to point to `genesis` at first start (i.e. when no `current` link exists) and then handle switching binaries at the correct points in time so that the system administrator can prepare days in advance and relax at upgrade time.

In order to support downloadable binaries, a tarball for each upgrade binary will need to be packaged up and made available through a canonical URL. Additionally, a tarball that includes the genesis binary and all available upgrade binaries can be packaged up and made available so that all the necessary binaries required to sync a full node from scratch can be easily downloaded.

The `DAEMON`-specific code and operations (e.g. CometBFT config, the application db, syncing blocks, etc.) all work as expected. The application binaries' directives such as command-line flags and environment variables also work as expected.

### Initialization

The `cosmovisor init <path to executable>` command creates the folder structure required for using cosmovisor.
It does the following:

* creates the `<DAEMON_HOME>/cosmovisor` folder if it doesn't yet exist
* creates the `<DAEMON_HOME>/cosmovisor/genesis/bin` folder if it doesn't yet exist
* copies the provided executable file to `<DAEMON_HOME>/cosmovisor/genesis/bin/`
* creates the `current` link, pointing to the `genesis` folder

It uses the `DAEMON_HOME` and `DAEMON_NAME` environment variables for folder location and executable name.

The `cosmovisor init` command is specifically for initializing cosmovisor, and should not be confused with a chain's `init` command (e.g. `cosmovisor run init`).

### Detecting Upgrades

`cosmovisor` polls the `$DAEMON_HOME/data/upgrade-info.json` file for new upgrade instructions. The file is created by the x/upgrade module in `BeginBlocker` when an upgrade is detected and the blockchain reaches the upgrade height. The following heuristic is applied to detect the upgrade:

* When starting, `cosmovisor` doesn't know much about the currently running upgrade, except the binary, which is `current/bin/`. It tries to read the `current/upgrade-info.json` file to get information about the current upgrade name.
* If neither `cosmovisor/current/upgrade-info.json` nor `data/upgrade-info.json` exists, then `cosmovisor` will wait for the `data/upgrade-info.json` file to trigger an upgrade.
* If `cosmovisor/current/upgrade-info.json` doesn't exist but `data/upgrade-info.json` exists, then `cosmovisor` assumes that whatever is in `data/upgrade-info.json` is a valid upgrade request. In this case `cosmovisor` immediately tries to make an upgrade according to the `name` attribute in `data/upgrade-info.json`.
* Otherwise, `cosmovisor` waits for changes in `upgrade-info.json`. As soon as a new upgrade name is recorded in the file, `cosmovisor` will trigger an upgrade mechanism.

When the upgrade mechanism is triggered, `cosmovisor` will:

1. if `DAEMON_ALLOW_DOWNLOAD_BINARIES` is enabled, start by auto-downloading a new binary into `cosmovisor/<name>/bin` (where `<name>` is the `upgrade-info.json:name` attribute);
2. update the `current` symbolic link to point to the new directory and save `data/upgrade-info.json` to `cosmovisor/current/upgrade-info.json`.

### Adding Upgrade Binary

`cosmovisor` has an `add-upgrade` command that allows you to easily link a binary to an upgrade. It creates a new folder in `cosmovisor/upgrades/<name>` and copies the provided executable file to `cosmovisor/upgrades/<name>/bin/`.

Using the `--upgrade-height` flag allows you to specify at which height the binary should be switched, without going via a governance proposal. This enables support for emergency coordinated upgrades where the binary must be switched at a specific height, but there is no time to go through a governance proposal.

`--upgrade-height` creates an `upgrade-info.json` file. This means that if a chain upgrade via governance proposal is executed before the height specified with `--upgrade-height`, the governance proposal will overwrite the `upgrade-info.json` plan created by `add-upgrade --upgrade-height <height>`. Take this into consideration when using `--upgrade-height`.

### Auto-Download

Generally, `cosmovisor` requires that the system administrator place all relevant binaries on disk before the upgrade happens. However, for people who don't need such control and want an automated setup (maybe they are syncing a non-validating full node and want to do little maintenance), there is another option.

**NOTE: we don't recommend using auto-download** because it doesn't verify in advance that a binary is available. If there is any issue downloading a binary, cosmovisor will stop and won't restart the app (which could lead to a chain halt).

If `DAEMON_ALLOW_DOWNLOAD_BINARIES` is set to `true`, and no local binary can be found when an upgrade is triggered, `cosmovisor` will attempt to download and install the binary itself based on the instructions in the `info` attribute in the `data/upgrade-info.json` file.
The file is constructed by the x/upgrade module and contains data from the upgrade `Plan` object. The `Plan` has an `info` field that is expected to have one of the following two valid formats to specify a download:

1. Store an os/architecture -> binary URI map in the upgrade plan info field as JSON under the `"binaries"` key. For example:

```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "binaries": {
    "linux/amd64": "https://example.com/gaia.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f"
  }
}
```

You can include multiple binaries at once to ensure more than one environment will receive the correct binaries:

```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "binaries": {
    "linux/amd64": "https://example.com/gaia.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f",
    "linux/arm64": "https://example.com/gaia.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f",
    "darwin/amd64": "https://example.com/gaia.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f"
  }
}
```

When submitting this as a proposal ensure there are no spaces.
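To show how a consumer of this format resolves the right artifact, the following sketch unmarshals the `info` JSON and looks up the URL for an os/arch pair. `binaryURLFor` is a hypothetical helper for illustration, not cosmovisor's implementation:

```go
package main

import (
	"encoding/json"
	"fmt"
	"runtime"
)

// planInfo mirrors the "binaries" JSON layout shown above.
type planInfo struct {
	Binaries map[string]string `json:"binaries"`
}

// binaryURLFor returns the download URL listed for the given
// "os/arch" platform key, or an error if none is listed.
func binaryURLFor(info []byte, platform string) (string, error) {
	var p planInfo
	if err := json.Unmarshal(info, &p); err != nil {
		return "", err
	}
	url, ok := p.Binaries[platform]
	if !ok {
		return "", fmt.Errorf("no binary listed for %s", platform)
	}
	return url, nil
}

func main() {
	info := []byte(`{"binaries":{"linux/amd64":"https://example.com/gaia.zip?checksum=sha256:aec0..."}}`)
	// Look up the binary for the platform this process is running on.
	url, err := binaryURLFor(info, runtime.GOOS+"/"+runtime.GOARCH)
	fmt.Println(url, err)
}
```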
An example command using `gaiad` could look like:

```shell expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
> gaiad tx upgrade software-upgrade Vega \
--title Vega \
--deposit 100uatom \
--upgrade-height 7368420 \
--upgrade-info '{"binaries":{"linux/amd64":"https://github.com/cosmos/gaia/releases/download/v6.0.0-rc1/gaiad-v6.0.0-rc1-linux-amd64","linux/arm64":"https://github.com/cosmos/gaia/releases/download/v6.0.0-rc1/gaiad-v6.0.0-rc1-linux-arm64","darwin/amd64":"https://github.com/cosmos/gaia/releases/download/v6.0.0-rc1/gaiad-v6.0.0-rc1-darwin-amd64"}}' \
--summary "upgrade to Vega" \
--gas 400000 \
--from user \
--chain-id test \
--home test/val2 \
--node tcp://localhost:36657 \
--yes
```

2. Store a link to a file that contains all information in the above format (e.g. if you want to specify lots of binaries, changelog info, etc. without filling up the blockchain). For example:

```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
https://example.com/testnet-1001-info.json?checksum=sha256:deaaa99fda9407c4dbe1d04bd49bab0cc3c1dd76fa392cd55a9425be074af01e
```

When `cosmovisor` is triggered to download the new binary, it will parse the `"binaries"` field, download the new binary with [go-getter](https://github.com/hashicorp/go-getter), and unpack it into the `upgrades/<name>` folder so that it can be run as if it had been installed manually.

Note that for this mechanism to provide strong security guarantees, all URLs should include a SHA-256/512 checksum. This ensures that no false binary is run, even if someone hacks the server or hijacks the DNS. `go-getter` will always verify that the downloaded file matches the checksum if one is provided. `go-getter` will also handle unpacking archives into directories (in this case the download link should point to a `zip` file of all data in the `bin` directory).
To properly create a sha256 checksum on linux, you can use the `sha256sum` utility. For example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} sha256sum ./testdata/repo/zip_directory/autod.zip ``` The result will look something like the following: `29139e1381b8177aec909fab9a75d11381cab5adf7d3af0c05ff1c9c117743a7`. You can also use `sha512sum` if you would prefer to use longer hashes, or `md5sum` if you would prefer to use broken hashes. Whichever you choose, make sure to set the hash algorithm properly in the checksum argument to the URL. ### Preparing for an Upgrade To prepare for an upgrade, use the `prepare-upgrade` command: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmovisor prepare-upgrade ``` This command performs the following actions: 1. Retrieves upgrade information directly from the blockchain about the next scheduled upgrade. 2. Downloads the new binary specified in the upgrade plan. 3. Verifies the binary's checksum (if required by configuration). 4. Places the new binary in the appropriate directory for Cosmovisor to use during the upgrade. This command requires gRPC to be enabled on the node (configured via `DAEMON_GRPC_ADDRESS`, default `localhost:9090`). 
The `prepare-upgrade` command logs the following: * The name and height of the upcoming upgrade * The URL from which the new binary is being downloaded * Confirmation of successful completion Example output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} INFO Preparing for upgrade name=v1.0.0 height=1000000 INFO Downloading upgrade binary url=https://example.com/binary/v1.0.0?checksum=sha256:339911508de5e20b573ce902c500ee670589073485216bee8b045e853f24bce8 INFO Upgrade preparation complete name=v1.0.0 height=1000000 ``` *Note: The current way of downloading manually and placing the binary at the right place would still work.* ## Example: SimApp Upgrade The following instructions provide a demonstration of `cosmovisor` using the simulation application (`simapp`) shipped with the Cosmos SDK's source code. The following commands are to be run from within the `cosmos-sdk` repository. ### Chain Setup Let's create a new chain using the `v0.47.4` version of simapp (the Cosmos SDK demo app): ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} git checkout v0.47.4 make build ``` Clean `~/.simapp` (never do this in a production environment): ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} ./build/simd tendermint unsafe-reset-all ``` Set up app config: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} ./build/simd config chain-id test ./build/simd config keyring-backend test ./build/simd config broadcast-mode sync ``` Initialize the node and overwrite any previous genesis file (never do this in a production environment): ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} ./build/simd init test --chain-id test --overwrite ``` For the sake of this demonstration, amend `voting_period` in `genesis.json` to a reduced time of 20 seconds 
(`20s`):

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
cat <<< $(jq '.app_state.gov.params.voting_period = "20s"' $HOME/.simapp/config/genesis.json) > $HOME/.simapp/config/genesis.json
```

Create a validator and set up the genesis transaction:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
./build/simd keys add validator
./build/simd genesis add-genesis-account validator 1000000000stake --keyring-backend test
./build/simd genesis gentx validator 1000000stake --chain-id test
./build/simd genesis collect-gentxs
```

#### Prepare Cosmovisor and Start the Chain

Set the required environment variables:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
export DAEMON_NAME=simd
export DAEMON_HOME=$HOME/.simapp
```

Set the optional environment variable to trigger an automatic app restart:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
export DAEMON_RESTART_AFTER_UPGRADE=true
```

Initialize cosmovisor with the current binary:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
cosmovisor init ./build/simd
```

Now you can run cosmovisor with simapp v0.47.4:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
cosmovisor run start
```

### Update App

Update the app to the latest version (e.g. v0.50.0). Migration plans are defined using the `x/upgrade` module and described in [Upgrading Modules](/sdk/latest/guides/upgrades/upgrade). Migrations can perform any deterministic state change. The migration plan to upgrade the simapp from v0.47 to v0.50 is defined in `simapp/upgrade.go`.
Build the new version `simd` binary: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} make build ``` Register the new `simd` binary under the upgrade name (the name must match the one defined in the migration plan): ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmovisor add-upgrade v047-to-v050 ./build/simd ``` Open a new terminal window and submit an upgrade proposal along with a deposit and a vote (these commands must be run within 20 seconds of each other): ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} ./build/simd tx upgrade software-upgrade v047-to-v050 --title upgrade --summary upgrade --upgrade-height 200 --upgrade-info "{}" --no-validate --from validator --yes ./build/simd tx gov deposit 1 10000000stake --from validator --yes ./build/simd tx gov vote 1 yes --from validator --yes ``` The upgrade will occur automatically at height 200. Note: you may need to change the upgrade height in the snippet above if your test run takes more time. ## Pre-Upgrade Handling Cosmovisor supports custom pre-upgrade handling. Use pre-upgrade handling when you need to implement application config changes that are required in the newer version before you perform the upgrade. If pre-upgrade handling is not implemented, the upgrade continues normally. Before the application binary is upgraded, Cosmovisor calls a `pre-upgrade` command that can be implemented by the application. The `pre-upgrade` command does not take in any command-line arguments and is expected to terminate with the following exit codes: | Exit status code | How it is handled in Cosmovisor | | ---------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `0` | `pre-upgrade` command executed successfully. 
Cosmovisor continues the upgrade. | | `1` | `pre-upgrade` command is not implemented. Cosmovisor continues the upgrade normally. | | `30` | `pre-upgrade` command failed. Cosmovisor fails the entire upgrade. | | `31` | `pre-upgrade` command failed. Cosmovisor retries until exit code `1` or `30` is returned, or until `DAEMON_PREUPGRADE_MAX_RETRIES` retries are exhausted (at which point the upgrade fails). | The number of allowed retries for exit code `31` is configured via `DAEMON_PREUPGRADE_MAX_RETRIES` (defaults to `0`, meaning no retries -- a single exit-31 result immediately fails the upgrade). Sample `pre-upgrade` command implementation: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func preUpgradeCommand() *cobra.Command { return &cobra.Command{ Use: "pre-upgrade", Short: "Pre-upgrade command", Run: func(cmd *cobra.Command, args []string) { if err := HandlePreUpgrade(); err != nil { os.Exit(30) } os.Exit(0) }, } } ``` Register it in the root command: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} rootCmd.AddCommand( // .. preUpgradeCommand(), ) ``` When not using Cosmovisor, install the new binary first, then run `pre-upgrade` before starting it. The pre-upgrade command is part of the new binary, not the old one. # Upgrades and Store Migrations Source: https://docs.cosmos.network/sdk/latest/guides/upgrades/upgrade Read and understand all of this page before running a migration on a live chain. **Synopsis** In-place store migrations allow modules to upgrade to new versions that include breaking changes. This document covers both the module-side (writing migrations) and the app-side (running migrations during an upgrade). The Cosmos SDK supports two approaches to chain upgrades: exporting the entire application state to JSON and starting fresh with a modified genesis file, or performing in-place store migrations that update state directly. 
In-place migrations are significantly faster for chains with large state and are the standard approach for live networks. This page covers how to write module migrations and how to run them inside an upgrade handler in your app. ## Consensus Version Successful upgrades of existing modules require each `AppModule` to implement the function `ConsensusVersion() uint64`. * The versions must be hard-coded by the module developer. * The initial version **must** be set to 1. Consensus versions serve as state-breaking versions of app modules and must be incremented when the module introduces breaking changes. ## Registering Migrations Modules register their migrations in the `Configurator` using its `RegisterMigration` method. The `AppModule` receives the configurator in its `RegisterServices` method. You can register one or more migrations. If you register more than one migration script, list them in increasing order and ensure the chain of migrations reaches the desired consensus version. For example, to migrate to version 3 of a module, register separate migrations for version 1 and version 2 as shown in the following example: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (am AppModule) RegisterServices(cfg module.Configurator) { // --snip-- if err := cfg.RegisterMigration(types.ModuleName, 1, func(ctx sdk.Context) error { // Perform in-place store migrations from ConsensusVersion 1 to 2. return nil }); err != nil { panic(fmt.Sprintf("failed to migrate %s from version 1 to 2: %v", types.ModuleName, err)) } if err := cfg.RegisterMigration(types.ModuleName, 2, func(ctx sdk.Context) error { // Perform in-place store migrations from ConsensusVersion 2 to 3. 
return nil }); err != nil { panic(fmt.Sprintf("failed to migrate %s from version 2 to 3: %v", types.ModuleName, err)) } } ``` Since these migrations are functions that need access to a Keeper's store, use a wrapper around the keepers called `Migrator` as shown in this example: ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} package keeper import ( sdk "github.com/cosmos/cosmos-sdk/types" "github.com/cosmos/cosmos-sdk/x/bank/exported" v2 "github.com/cosmos/cosmos-sdk/x/bank/migrations/v2" v3 "github.com/cosmos/cosmos-sdk/x/bank/migrations/v3" v4 "github.com/cosmos/cosmos-sdk/x/bank/migrations/v4" ) // Migrator is a struct for handling in-place store migrations. type Migrator struct { keeper BaseKeeper legacySubspace exported.Subspace } // NewMigrator returns a new Migrator. func NewMigrator(keeper BaseKeeper, legacySubspace exported.Subspace) Migrator { return Migrator{keeper: keeper, legacySubspace: legacySubspace} } // Migrate1to2 migrates from version 1 to 2. func (m Migrator) Migrate1to2(ctx sdk.Context) error { return v2.MigrateStore(ctx, m.keeper.storeService, m.keeper.cdc) } // Migrate2to3 migrates x/bank storage from version 2 to 3. func (m Migrator) Migrate2to3(ctx sdk.Context) error { return v3.MigrateStore(ctx, m.keeper.storeService, m.keeper.cdc) } // Migrate3to4 migrates x/bank storage from version 3 to 4. func (m Migrator) Migrate3to4(ctx sdk.Context) error { m.MigrateSendEnabledParams(ctx) return v4.MigrateStore(ctx, m.keeper.storeService, m.legacySubspace, m.keeper.cdc) } ``` ## Writing Migration Scripts To define the functionality that takes place during an upgrade, write a migration script and place the functions in a `migrations/` directory. For example, to write migration scripts for the bank module, place the functions in `x/bank/migrations/`. 
Import each version package and call its `MigrateStore` function from the corresponding `Migrator` method: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Migrating bank module from version 1 to 2 func (m Migrator) Migrate1to2(ctx sdk.Context) error { return v2.MigrateStore(ctx, m.keeper.storeService, m.keeper.cdc) // v2 is package `x/bank/migrations/v2`. } ``` To see example code of changes that were implemented in a migration of balance keys, check out [migrateBalanceKeys](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/x/bank/migrations/v2/store.go#L55-L76). For context, this code introduced migrations of the bank store that updated addresses to be prefixed by their length in bytes as outlined in [ADR-028](/sdk/latest/reference/architecture/adr-028-public-key-addresses). ## Running Migrations in the App Once modules have registered their migrations, the app runs them inside an `UpgradeHandler`. The upgrade handler type is: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type UpgradeHandler func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) ``` The handler receives the `VersionMap` stored by `x/upgrade` (reflecting the consensus versions from the previous binary), performs any additional upgrade logic, and must return the updated `VersionMap` from `RunMigrations`. 
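Conceptually, the version-map mechanics can be sketched as a toy model. This is purely illustrative (the names `runMigrations`, `scripts`, and the module/version values are invented for this sketch; the real logic lives in the SDK's `module.Manager.RunMigrations`): each module's stored version is walked up to the new binary's `ConsensusVersion()`, running one registered script per step.

```go
package main

import "fmt"

// runMigrations is a toy model of consensus-version-driven migrations.
// For each module, the script at index i migrates version i+1 to i+2.
func runMigrations(fromVM, target map[string]uint64, scripts map[string][]func() error) (map[string]uint64, error) {
	updated := map[string]uint64{}
	for name, from := range fromVM {
		for v := from; v < target[name]; v++ {
			if err := scripts[name][v-1](); err != nil {
				return nil, err
			}
		}
		// The updated version map is what the upgrade handler returns,
		// and x/upgrade persists it for the next upgrade.
		updated[name] = target[name]
	}
	return updated, nil
}

func main() {
	scripts := map[string][]func() error{
		"bank": {
			func() error { fmt.Println("bank: migrate 1 -> 2"); return nil },
			func() error { fmt.Println("bank: migrate 2 -> 3"); return nil },
		},
	}
	fromVM := map[string]uint64{"bank": 1}  // versions stored by x/upgrade (old binary)
	target := map[string]uint64{"bank": 3}  // ConsensusVersion() of the new binary
	vm, err := runMigrations(fromVM, target, scripts)
	if err != nil {
		panic(err)
	}
	fmt.Println(vm["bank"]) // 3
}
```

The sketch shows why registering every intermediate migration matters: skipping from version 1 to 3 requires both the 1-to-2 and 2-to-3 scripts to be present.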
Register the handler in `app.go`: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { // optional: additional upgrade logic here return app.ModuleManager.RunMigrations(ctx, app.Configurator(), fromVM) }) ``` `RunMigrations` iterates over all registered modules in order, checks each module's version in the `VersionMap`, and runs all registered migration scripts for modules whose consensus version has increased. The updated `VersionMap` is returned to the upgrade keeper, which persists it in the `x/upgrade` store. ### Order of migrations By default, migrations run in alphabetical order by module name, with one exception: `x/auth` runs last due to state dependencies with other modules (see [cosmos/cosmos-sdk#10591](https://github.com/cosmos/cosmos-sdk/issues/10591)). To change the order, call `app.ModuleManager.SetOrderMigrations(module1, module2, ...)` in `app.go`. The function panics if any registered module is omitted. ### Adding new modules during an upgrade New modules are recognized because they have no entry in the `x/upgrade` `VersionMap` store. `RunMigrations` calls `InitGenesis` for them automatically. 
If you need to add stores for a new module, configure the store loader before the upgrade runs: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} upgradeInfo, err := app.UpgradeKeeper.ReadUpgradeInfoFromDisk() if err != nil { panic(err) } if upgradeInfo.Name == "my-plan" && !app.UpgradeKeeper.IsSkipHeight(upgradeInfo.Height) { storeUpgrades := storetypes.StoreUpgrades{ Added: []string{"newmodule"}, } app.SetStoreLoader(upgradetypes.UpgradeStoreLoader(upgradeInfo.Height, &storeUpgrades)) } ``` To skip `InitGenesis` for a new module (for example, if you are manually initializing state in the handler), set its version in `fromVM` before calling `RunMigrations`: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} fromVM["newmodule"] = newmodule.AppModule{}.ConsensusVersion() return app.ModuleManager.RunMigrations(ctx, app.Configurator(), fromVM) ``` ### Genesis state When starting a new chain, the consensus version of each module must be saved to state during genesis. Add this to `InitChainer` in `app.go`: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (app *MyApp) InitChainer(ctx sdk.Context, req *abci.RequestInitChain) (*abci.ResponseInitChain, error) { // ... app.UpgradeKeeper.SetModuleVersionMap(ctx, app.ModuleManager.GetVersionMap()) // ... } ``` This lets the Cosmos SDK detect when modules with newer consensus versions are introduced in a future upgrade. ### Overwriting genesis functions The SDK provides modules that app developers can import, and those modules often already have an `InitGenesis` function. If you want to run a custom genesis function for one of those modules during an upgrade instead of the default one, you must both call your custom function in the handler AND manually set that module's consensus version in `fromVM`. 
Without the second step, `RunMigrations` will call the module's default `InitGenesis` in addition to your custom one. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} import foo "github.com/my/module/foo" app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { // Prevent RunMigrations from calling foo's default InitGenesis. fromVM["foo"] = foo.AppModule{}.ConsensusVersion() // Run your custom genesis initialization for foo. app.ModuleManager.Modules["foo"].(module.HasGenesis).InitGenesis(ctx, app.appCodec, myCustomGenesisState) return app.ModuleManager.RunMigrations(ctx, app.Configurator(), fromVM) }) ``` ## Syncing a Full Node to an Upgraded Blockchain A full node joining an already-upgraded chain must start from the initial binary that the chain used at genesis and replay all historical upgrades. If all upgrade plans include binary download instructions, Cosmovisor's auto-download mode handles this automatically. Otherwise, you must provide each historical binary manually. See the [Cosmovisor](/sdk/latest/guides/upgrades/cosmovisor) guide for setup and configuration. # Cosmos SDK Docs Source: https://docs.cosmos.network/sdk/latest/learn Version: v0.54 The Cosmos SDK is the most widely adopted, battle-tested Layer 1 blockchain stack, trusted by 200+ chains live in production. This modular framework enables you to build secure, high-performance blockchains with comprehensive guides covering everything from core concepts to advanced implementation patterns. New to the Cosmos SDK? Find the right starting point based on your background and what you want to build. 
Learn essential concepts including application anatomy, transaction lifecycles, accounts, and gas mechanics. Build and run a Cosmos chain from scratch, with step-by-step guidance from setup to a working custom module. Develop custom modules with comprehensive guides on module architecture, message handling, and state management. Set up, configure, and maintain nodes from local development environments to production deployments. Understand the fundamentals of Cosmos SDK, application-specific blockchains, and the SDK's architecture. # Accounts Source: https://docs.cosmos.network/sdk/latest/learn/concepts/accounts In [Cosmos Architecture](/sdk/latest/learn/intro/sdk-app-architecture), you learned that transactions change state and must be signed and validated. But who creates and signs these transactions? The answer is **accounts**. Accounts represent identities on a Cosmos SDK chain. They hold balances, authorize transactions with digital signatures, and prevent transaction replay using sequence numbers. Accounts are managed by the auth module (`x/auth`), which tracks account metadata like addresses, public keys, account numbers, and sequence numbers. Every account is controlled by a cryptographic keypair derived from a seed phrase. A seed phrase yields one or more private keys, each of which produces a public key and an account address. ## What is an account An account is an on-chain identity used to authorize transactions. 
Each account stores an address, a public key, an account number, and a sequence number, as defined by [`BaseAccount`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/x/auth/types/auth.pb.go#L32) in the `x/auth` module: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type BaseAccount struct { Address string `protobuf:"bytes,1,opt,name=address,proto3" json:"address,omitempty"` PubKey *anypb.Any `protobuf:"bytes,2,opt,name=pub_key,json=pubKey,proto3" json:"pub_key,omitempty"` AccountNumber uint64 `protobuf:"varint,3,opt,name=account_number,json=accountNumber,proto3" json:"account_number,omitempty"` Sequence uint64 `protobuf:"varint,4,opt,name=sequence,proto3" json:"sequence,omitempty"` } ``` Accounts can be used in other modules to associate on-chain state with an identity. For example, the bank module (`x/bank`) maps account addresses to token balances, and the staking module maps them to delegations. The private key and [seed phrase](#seed-phrases) are never stored on-chain; they are kept locally by the user or wallet. An account does not execute logic itself; instead, it authorizes [transactions](/sdk/latest/learn/concepts/transactions). Balance changes for accounts are handled by the modules that process the transaction's messages. An account's sequence number is used for [replay protection](#sequences-and-replay-protection) during transaction processing. ## Public and private keys Accounts are rooted in cryptographic keypairs. Cosmos SDK uses asymmetric cryptography, where a private key and public key form a pair. This is a fundamental concept in cryptography and is used to secure data and transactions. * A **private key** is used to sign transactions. Before signing, the transaction data is serialized and hashed; the private key then produces a digital signature over this hash. This signature proves ownership of the private key without revealing it. Private keys must always remain secret. 
* A **public key** is derived mathematically from the private key. The network uses it to verify signatures produced by the corresponding private key. Because the public key is derived through a one-way function, it is not possible to derive the private key from the public key. ## Seed phrases Most wallets do not generate raw private keys directly. Instead, they start from a seed phrase (mnemonic), a list of human-readable words such as: ```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} apple maple river stone cloud frame picnic ladder jungle orbit solar velvet ``` A private key is then derived from the seed phrase using a deterministic algorithm. Cosmos wallets follow common standards such as: * [BIP-39 (mnemonic phrases)](https://github.com/bitcoin/bips/blob/master/bip-0039.mediawiki) * [BIP-32 (hierarchical deterministic wallets)](https://github.com/bitcoin/bips/blob/master/bip-0032.mediawiki) * [BIP-44 (multi-account derivation paths)](https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki) From the seed phrase, a binary seed is computed and used to derive a master private key. From that master key, specific private keys are derived along a path (for example: `m/44'/118'/0'/0/0`, where `118` is the Cosmos coin type). Each private key produces a public key. Control of the seed phrase means control of the derived private keys and therefore control of the corresponding accounts. Losing the seed phrase without backing it up means losing access to the account forever. ## Addresses An address is a shortened identifier derived from the public key. The public key is hashed and encoded, typically in [Bech32](/sdk/latest/guides/reference/bech32) format, with a prefix that indicates the chain, for example `cosmos`. 
This address is what users share and what appears in state and transactions: ```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos1qnk2n4nlkpw9xfqntladh74er2xa62wgas7mv0 ``` An address is not the same as a public key. Because an address is only a hash of the public key, users can generate addresses and receive funds entirely offline. The public key is revealed on-chain the first time the account signs a transaction, at which point validators can verify the signature and the chain stores the public key alongside the account metadata. ```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} Seed Phrase ↓ (BIP-39/BIP-32/BIP-44) Private Key (secp256k1) ↓ (elliptic curve math) Public Key ↓ (hash + Bech32 encoding) Address ``` ## Sequences and replay protection There are two types of transactions in the Cosmos SDK: ordered and unordered. Ordered transactions are the default. Each account tracks a sequence number starting at zero that increments with each transaction. The network rejects any transaction whose sequence number does not match the current value, preventing replay attacks and ensuring that dependent transactions from the same account execute in order (for example, sending tokens then immediately staking them). Unordered transactions bypass this check and use a timeout-based mechanism instead. Example: ```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} Initial state: sequence = 0 After first accepted transaction: sequence = 1 After second accepted transaction: sequence = 2 ``` If a signed transaction carries `sequence = 1` but the account's current sequence is `2`, the transaction is rejected, ensuring that ordered transactions are applied in order and cannot be reused. 
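The sequence check above can be sketched as a toy model. This is not the actual `x/auth` ante handler; the `account` type and `checkSequence` helper are invented for illustration:

```go
package main

import "fmt"

// account is a simplified stand-in for x/auth's BaseAccount.
type account struct {
	sequence uint64
}

// checkSequence mirrors the ordered-transaction replay check: the
// transaction's sequence must equal the account's current sequence,
// which increments once the transaction is accepted.
func checkSequence(acc *account, txSeq uint64) error {
	if txSeq != acc.sequence {
		return fmt.Errorf("account sequence mismatch: expected %d, got %d", acc.sequence, txSeq)
	}
	acc.sequence++
	return nil
}

func main() {
	acc := &account{}
	fmt.Println(checkSequence(acc, 0)) // <nil> (accepted; sequence is now 1)
	fmt.Println(checkSequence(acc, 0)) // error: a replayed tx reuses an old sequence
}
```

A replayed transaction carries the same signed sequence as the original, so it can never match the incremented value again.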
The Cosmos SDK also supports optional unordered transactions, which allow transactions from the same account to be submitted and processed without strict sequence ordering. When a chain enables unordered transactions, replay protection uses a timeout timestamp and unordered nonce tracking instead of the normal per-signer sequence check. See [Transactions, Messages, and Queries](/sdk/latest/learn/concepts/transactions#message-execution-and-atomicity) for more information. ## Balances Accounts are associated with token balances stored on-chain. Balances are managed by the bank module (`x/bank`) and indexed by account address. While account metadata (address, public key, sequence number) is stored in the auth module's state, token balances are stored separately in the bank module's state. When tokens are sent from one account to another, the bank module updates balances in state. Conceptually, a token transfer decreases the sender's balance and increases the recipient's balance. An account must have sufficient balance to cover the tokens being sent and any associated transaction fees. If the balance is insufficient, the transaction is rejected during validation. ## Types of accounts Cosmos SDK supports several account types that extend the base account model: * **Base account**: A standard account that holds balances and signs transactions. This is the most common account type for users. * **Module account**: Owned by a [module](/sdk/latest/learn/concepts/modules) rather than a user. Module accounts are derived from the module name and cannot be controlled by a private key. For example, the staking module uses a module account to hold all delegated tokens, and the distribution module uses a module account to hold rewards before they are distributed. This design allows protocol logic to custody tokens without requiring a private key holder, which is essential for decentralized operations. 
For a working example of adding a module account to receive fees, see [Module accounts](/sdk/latest/tutorials/example/04-counter-walkthrough#module-accounts) in the Full Counter Module Walkthrough. * **Vesting account**: Holds tokens that unlock gradually over time according to a schedule. Vesting accounts are often used for team allocations or investor tokens that vest over months or years. They restrict spending to only unlocked tokens while still allowing the account to participate in staking and governance. All account types rely on the same key and address structure but may impose additional rules on balance usage. ## Accounts and transaction authorization Accounts authorize [transactions](/sdk/latest/learn/concepts/transactions) by producing digital signatures. A transaction includes: * One or more messages * A signature created using the private key * A sequence number * Associated fees When a transaction is signed, the transaction bytes are serialized and hashed. The private key then generates a digital signature over that hash. This signature proves that the holder of the private key approved the transaction, without revealing the private key itself. During execution of a standard ordered transaction: 1. The signature is verified using the account's public key. 2. The sequence number is checked against the account's current sequence. 3. Fees are deducted from the account's balance. 4. If validation passes, messages execute and may update state. 5. If execution succeeds, the sequence number increments and state updates are committed. High-level flow: ``` Seed Phrase ↓ Private Key ↓ signs Transaction ↓ verified with Public Key ↓ identifies Address ↓ updates State ``` Accounts provide identity and authorization, transactions carry intent, and modules execute the logic. The result is stored in state. 
To learn more about the transaction flow in a Cosmos blockchain, visit the [Transaction Lifecycle page](/sdk/latest/learn/concepts/lifecycle) ## Summary Accounts are the foundation of user interaction with a Cosmos SDK chain. They connect cryptographic keys to on-chain identity, authorize transaction execution, and prevent replay attacks. Understanding keys, addresses, balances, and sequence numbers provides the basis for understanding how transactions flow through the system. The next page, [Transactions, Messages, and Queries](/sdk/latest/learn/concepts/transactions), explains how accounts authorize the actions a transaction carries. # app.go Overview Source: https://docs.cosmos.network/sdk/latest/learn/concepts/app-go `app.go` is where an application is assembled into a working chain. It creates the `BaseApp` instance that talks to CometBFT, allocates store keys, initializes keepers, registers modules, configures execution ordering, mounts stores, and sets lifecycle hooks and the `AnteHandler`. Finally, it seals the application with `LoadLatestVersion`. The result is a single constructor, `NewExampleApp`, that returns a fully wired, ready-to-run chain. Most examples on this page come from the counter module example in the `example` repo, where `x/counter` is wired into a fuller chain. The minimal counter module example shows the smaller `app.go` delta needed to add `x/counter` to a stripped-down app. See [Step 10: Wire into app.go](/sdk/latest/tutorials/example/03-build-a-module#step-10-wire-into-appgo) in the Build a Module tutorial. ## What `app.go` does `app.go` performs a one-time, ordered initialization of the entire chain: ``` 1. Create BaseApp and codecs 2. Allocate store keys 3. Initialize keepers 4. Create the ModuleManager 5. Configure execution ordering 6. Register module services 7. Mount KV stores 8. Set lifecycle hooks (InitChainer, PreBlocker, BeginBlocker, EndBlocker, AnteHandler) 9. 
Load latest version ``` This sequence is strict: * Keepers require store keys, so keys come first. * The `ModuleManager` depends on keepers, so modules come after keeper construction. * Lifecycle hooks depend on the `ModuleManager`, so hook wiring comes later. * `LoadLatestVersion` seals `BaseApp`, so it runs last. ## The app struct The application struct embeds [`BaseApp`](/sdk/latest/learn/concepts/baseapp) and holds all keepers and the `ModuleManager`: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type ExampleApp struct { *baseapp.BaseApp appCodec codec.Codec interfaceRegistry codectypes.InterfaceRegistry keys map[string]*storetypes.KVStoreKey // representative keepers AccountKeeper authkeeper.AccountKeeper BankKeeper bankkeeper.Keeper ConsensusParamsKeeper consensusparamkeeper.Keeper CounterKeeper *counterkeeper.Keeper // application wiring helpers ModuleManager *module.Manager BasicModuleManager module.BasicManager configurator module.Configurator } ``` Embedding `*baseapp.BaseApp` gives `ExampleApp` the full `BaseApp` interface: ABCI methods, message and query routers, store management, and lifecycle hooks. The keeper fields are exported so test code and CLI helpers can reference them. The `keys` map holds the KV store keys allocated during initialization. The real example app includes additional keepers and helper fields; this excerpt shows the part of the struct that matters for understanding the wiring pattern. ## Creating `BaseApp` `NewExampleApp` begins by setting up codecs and creating the `BaseApp` instance: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} appCodec := codec.NewProtoCodec(interfaceRegistry) txConfig := authtx.NewTxConfig(appCodec, authtx.DefaultSignModes) bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...) 
bApp.SetVersion(version.Version) bApp.SetInterfaceRegistry(interfaceRegistry) bApp.SetTxEncoder(txConfig.TxEncoder()) ``` `baseapp.NewBaseApp` creates the `BaseApp` with a name, logger, database, and `TxDecoder`. The `TxDecoder` is how `BaseApp` turns raw transaction bytes from CometBFT into an `sdk.Tx` it can inspect and route. Additional functional options (`baseAppOptions`) let callers configure pruning, minimum gas prices, chain ID, and optimistic execution without modifying `NewExampleApp` directly. The full example app also wires legacy Amino support, tracing, and interface registration around this excerpt. See [`BaseApp` Overview](/sdk/latest/learn/concepts/baseapp) for a fuller description of its fields and behavior. ## Allocating store keys Each module that persists state needs a dedicated KV store key. All keys are allocated together before any keeper is created: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} keys := storetypes.NewKVStoreKeys( authtypes.StoreKey, banktypes.StoreKey, stakingtypes.StoreKey, distrtypes.StoreKey, slashingtypes.StoreKey, govtypes.StoreKey, consensusparamtypes.StoreKey, countertypes.StoreKey, ) ``` Each module defines its store key name as a string constant in `types/keys.go` (for example, `countertypes.StoreKey = "counter"` — see [Step 4: Types](/sdk/latest/tutorials/example/03-build-a-module#step-4-types) in the Build a Module tutorial). `NewKVStoreKeys` takes those names and allocates a `*storetypes.KVStoreKey` for each one. Keys are passed to keeper constructors and later mounted on the `CommitMultiStore` via `MountKVStores`. No two modules share a key; that isolation is what keeps module state separate. ## Initializing keepers Each keeper is initialized with its store key, codec, and any dependencies on other keepers. 
`ConsensusParamsKeeper` is initialized first because it must call `bApp.SetParamStore` before any other keeper is created: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper(...) bApp.SetParamStore(app.ConsensusParamsKeeper.ParamsStore) app.AccountKeeper = authkeeper.NewAccountKeeper(...) app.BankKeeper = bankkeeper.NewBaseKeeper(..., app.AccountKeeper, ...) app.CounterKeeper = counterkeeper.NewKeeper( runtime.NewKVStoreService(keys[countertypes.StoreKey]), appCodec, app.BankKeeper, ) ``` `runtime.NewKVStoreService(key)` wraps the raw store key in a service interface that keepers use to open their store from a context. This keeps keepers from holding direct references to the underlying store. Instead, they retrieve it at runtime from the context passed into each method. The keeper initialization order matters: `BankKeeper` receives `app.AccountKeeper` as an argument, so `AccountKeeper` must be initialized first. The same dependency ordering applies throughout. The counter module example also passes `app.BankKeeper` into `counterkeeper.NewKeeper`, showing how custom modules depend on existing module services (see [Expected keepers and fee collection](/sdk/latest/tutorials/example/04-counter-walkthrough#expected-keepers-and-fee-collection) in the Full Counter Module Walkthrough). Where modules are interdependent, hooks connect them after both keepers exist: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} app.StakingKeeper.SetHooks( stakingtypes.NewMultiStakingHooks( app.DistrKeeper.Hooks(), app.SlashingKeeper.Hooks(), ), ) ``` The authority address passed to most keepers (`authtypes.NewModuleAddress(govtypes.ModuleName).String()`) is the address that is allowed to call privileged messages such as `MsgUpdateParams`. Governance controls parameter changes by sending messages from the governance module account. 
See [Params](/sdk/latest/learn/concepts/modules#params) for how this pattern works. ## Registering modules After all keepers are initialized, the module manager is created with every module the application uses: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} app.ModuleManager = module.NewManager( auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), bank.NewAppModule(appCodec, app.BankKeeper, app.AccountKeeper, nil), consensus.NewAppModule(appCodec, app.ConsensusParamsKeeper), counter.NewAppModule(appCodec, app.CounterKeeper), // ...other modules... ) ``` `module.NewManager` takes a list of `AppModule` implementations. Each `AppModule` wraps a keeper and satisfies the interfaces the `ModuleManager` uses: genesis, block hooks, message and query service registration, and simulation support. The real example app includes the full built-in module set around `x/counter` — see [Step 10: Wire into app.go](/sdk/latest/tutorials/example/03-build-a-module#step-10-wire-into-appgo) for a walkthrough of module registration; this excerpt shows the basic registration pattern. The `BasicModuleManager` is then derived from the `ModuleManager` for codec registration and default genesis handling. ## Module Manager The `ModuleManager` is the application's registry of modules. It holds references to all `AppModule` instances and coordinates their participation in the block lifecycle. When `BaseApp` fires a lifecycle hook (`PreBlock`, `BeginBlock`, `EndBlock`, `InitGenesis`), it delegates to the `ModuleManager`, which calls each module's corresponding method in the configured order. The `ModuleManager` is also responsible for service registration: it iterates all modules and calls each module's `RegisterServices` to register `MsgServer` and `QueryServer` implementations with `BaseApp`'s routers. 
For the execution-model view, see [Module Manager in `BaseApp`](/sdk/latest/learn/concepts/baseapp#module-manager). ## Execution ordering The order in which modules run their block hooks and genesis initialization matters. Some modules depend on others having already updated state. Ordering is configured explicitly after the module manager is created: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} app.ModuleManager.SetOrderPreBlockers( authtypes.ModuleName, ) app.ModuleManager.SetOrderBeginBlockers( distrtypes.ModuleName, slashingtypes.ModuleName, stakingtypes.ModuleName, countertypes.ModuleName, genutiltypes.ModuleName, ) app.ModuleManager.SetOrderEndBlockers( banktypes.ModuleName, govtypes.ModuleName, stakingtypes.ModuleName, countertypes.ModuleName, genutiltypes.ModuleName, ) ``` Genesis initialization order is separate and equally important: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} genesisModuleOrder := []string{ authtypes.ModuleName, banktypes.ModuleName, distrtypes.ModuleName, stakingtypes.ModuleName, slashingtypes.ModuleName, govtypes.ModuleName, consensusparamtypes.ModuleName, vestingtypes.ModuleName, countertypes.ModuleName, genutiltypes.ModuleName, } app.ModuleManager.SetOrderInitGenesis(genesisModuleOrder...) app.ModuleManager.SetOrderExportGenesis(exportModuleOrder...) ``` `SetOrderExportGenesis` controls the order modules serialize their state when the chain is exported to a genesis file, for example during a hard fork or when creating a snapshot-based testnet. The export order can differ from the init genesis order; in the example chain they use different orderings. The comments in the example app explain the reasoning: `genutil` must run after `staking` so that staking pools are initialized before genesis transactions are processed, and after `auth` so that it can access auth parameters. 
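The effect of the `SetOrder*` calls can be modeled as a manager that walks a configured name order and invokes each module's hook in sequence. This is a simplified stand-in for `module.Manager`, not its real implementation:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package main

import "fmt"

// manager models how the ModuleManager iterates its configured order and
// calls each registered module's hook in turn.
type manager struct {
	hooks map[string]func() string
	order []string
}

func (m *manager) SetOrderBeginBlockers(names ...string) { m.order = names }

func (m *manager) BeginBlock() []string {
	var events []string
	for _, name := range m.order { // configured order, not registration order
		events = append(events, m.hooks[name]())
	}
	return events
}

func main() {
	m := &manager{hooks: map[string]func() string{
		"distribution": func() string { return "distribute rewards" },
		"slashing":     func() string { return "process evidence" },
		"staking":      func() string { return "update validators" },
	}}
	m.SetOrderBeginBlockers("distribution", "slashing", "staking")
	fmt.Println(m.BeginBlock())
}
```

Registration order is irrelevant; only the explicitly configured order determines execution, which is why the `SetOrder*` calls in `app.go` must list every module that implements the corresponding hook.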
Each hook type has its own ordering constraint: * [`PreBlock`](/sdk/latest/learn/concepts/lifecycle#preblock): runs before `BeginBlock`. Used for upgrades and consensus parameter changes that must take effect before the block begins. * [`BeginBlock`](/sdk/latest/learn/concepts/lifecycle#beginblock): runs at the start of each block. Used for per-block housekeeping such as minting inflation rewards and distributing staking rewards. * [`EndBlock`](/sdk/latest/learn/concepts/lifecycle#endblock): runs after all transactions in the block. Used for logic that depends on cumulative block state, such as tallying governance votes or recalculating validator power. * [`InitGenesis`](/sdk/latest/learn/concepts/store#genesis-and-chain-initialization): runs once at chain start, populating each module's store from `genesis.json`. For a worked example of implementing these hooks in a custom module, see [BeginBlock and EndBlock](/sdk/latest/tutorials/example/04-counter-walkthrough#beginblock-and-endblock) in the Full Counter Module Walkthrough. ## Routing setup After execution ordering is configured, module services are registered with `BaseApp`'s routers: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) err := app.ModuleManager.RegisterServices(app.configurator) ``` `RegisterServices` iterates all modules and calls each module's `RegisterServices(cfg)` method. Each module uses the configurator to register its `MsgServer` with the message router and its `QueryServer` with the gRPC query router. After this step, `BaseApp` can route any registered message type to the correct module handler, and any registered query to the correct query handler. The example app also registers an AutoCLI query service after this step. 
The AutoCLI query service registration lets the CLI introspect module options without requiring per-module CLI command boilerplate. For how these services are exposed to clients, see [CLI, gRPC, and REST](/sdk/latest/learn/concepts/cli-grpc-rest).

## Block proposal and vote extension handlers

`BaseApp` exposes four handlers for the ABCI 2.0 proposal phase: `SetPrepareProposal`, `SetProcessProposal`, `SetExtendVoteHandler`, and `SetVerifyVoteExtensionHandler`. All have sensible defaults. Chains that need custom behavior wire their handlers in `app.go` after the module manager is configured:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
app.SetPrepareProposal(myPrepareProposalHandler)
app.SetProcessProposal(myProcessProposalHandler)
```

For a full explanation of what each handler does, see [Block proposal and vote extensions](/sdk/latest/learn/concepts/baseapp#block-proposal-and-vote-extensions).

## Mounting stores and setting hooks

With routing configured, stores are mounted and the application's lifecycle hooks are set:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// initialize stores
app.MountKVStores(keys)

// initialize BaseApp
app.SetInitChainer(app.InitChainer)
app.SetPreBlocker(app.PreBlocker)
app.SetBeginBlocker(app.BeginBlocker)
app.SetEndBlocker(app.EndBlocker)
app.setAnteHandler(txConfig)
```

`MountKVStores` registers each key with `BaseApp`'s `CommitMultiStore`. The hooks delegate to the `ModuleManager`:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (app *ExampleApp) BeginBlocker(ctx sdk.Context) (sdk.BeginBlock, error) {
	return app.ModuleManager.BeginBlock(ctx)
}
```

`EndBlocker` delegates in the same way, and `InitChainer` delegates to `ModuleManager.InitGenesis` after decoding `genesis.json`.
The `AnteHandler` is configured separately because it takes a `TxConfig` dependency: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (app *ExampleApp) setAnteHandler(txConfig client.TxConfig) { anteHandler, err := ante.NewAnteHandler( ante.HandlerOptions{ AccountKeeper: app.AccountKeeper, BankKeeper: app.BankKeeper, SignModeHandler: txConfig.SignModeHandler(), SigGasConsumer: ante.DefaultSigVerificationGasConsumer, }, ) if err != nil { panic(err) } app.SetAnteHandler(anteHandler) } ``` The `AnteHandler` runs before any message in a transaction executes. It verifies signatures, validates and increments the account sequence number, deducts fees, and meters gas. If it fails, the transaction is rejected before any module logic runs. A `PostHandler` can also be registered with `app.SetPostHandler`. It runs after all messages in a transaction execute (regardless of whether they succeeded), in the same state branch, and is reverted if it fails. The SDK's default `PostHandler` chain is minimal. For a deeper look at how the `AnteHandler` fits into transaction execution, see [AnteHandler](/sdk/latest/learn/concepts/baseapp#antehandler). ## Sealing with LoadLatestVersion The final step in `NewExampleApp` is loading the latest committed state: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} if loadLatest { if err := app.LoadLatestVersion(); err != nil { panic(fmt.Errorf("error loading last version: %w", err)) } } ``` `LoadLatestVersion` calls `storeLoader` to load the latest committed store state from the database, then calls `Init`, which validates that required components are configured, initializes the check state, and sets `BaseApp.sealed` to `true`. Any setter called after this point panics. This enforces that all wiring happens before the application starts serving requests. 
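The sealing discipline can be sketched as a struct whose setters panic once a flag flips — the same configure-then-seal pattern `BaseApp` enforces, in miniature (illustrative code, not the SDK's implementation):

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package main

import "fmt"

// app models BaseApp's lifecycle: setters work during wiring, and
// LoadLatestVersion seals the struct so later setter calls panic.
type app struct {
	anteHandler string
	sealed      bool
}

func (a *app) SetAnteHandler(h string) {
	if a.sealed {
		panic("SetAnteHandler called on sealed app")
	}
	a.anteHandler = h
}

func (a *app) LoadLatestVersion() { a.sealed = true }

// trySet reports whether a setter call panicked.
func trySet(a *app, h string) (panicked bool) {
	defer func() {
		if recover() != nil {
			panicked = true
		}
	}()
	a.SetAnteHandler(h)
	return false
}

func main() {
	a := &app{}
	fmt.Println(trySet(a, "ante-v1")) // wiring phase: setter succeeds
	a.LoadLatestVersion()
	fmt.Println(trySet(a, "ante-v2")) // sealed: setter panics
}
```

Panicking rather than returning an error is deliberate: misconfiguration after startup is a programming bug in `app.go`, not a runtime condition to recover from.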
On first launch, CometBFT calls `InitChain` which triggers `InitChainer`, which calls `ModuleManager.InitGenesis` to populate each module's state from `genesis.json`. ## How everything fits together A Cosmos SDK chain is assembled into a single constructor function that returns a fully wired, ready-to-run chain. Each step builds on the previous: ``` NewBaseApp ↓ NewKVStoreKeys → one key per module ↓ NewKeeper(key, ...) → one keeper per module, dependencies wired explicitly ↓ NewManager(modules) → module manager holds all AppModule instances ↓ SetOrder*(...) → configure hook and genesis execution ordering ↓ RegisterServices(configurator) → wire MsgServer and QueryServer into BaseApp routers ↓ MountKVStores(keys) → attach module stores to CommitMultiStore ↓ SetInitChainer / SetPreBlocker / SetBeginBlocker / SetEndBlocker / SetAnteHandler ↓ LoadLatestVersion → seal BaseApp, ready to serve ``` At runtime, CometBFT drives the application through ABCI. Each ABCI call dispatches through `BaseApp`: * `InitChain` calls `InitChainer`, which runs `ModuleManager.InitGenesis`. * `FinalizeBlock` calls `PreBlocker`, `BeginBlocker`, each transaction's `AnteHandler` and message handlers, then `EndBlocker`. * `CheckTx` validates a transaction through the `AnteHandler` and writes to the internal `CheckTx` state if it passes. * `Commit` persists the finalized block state. Modules never call each other directly. They interact through keeper interfaces wired at initialization time, and they participate in the block lifecycle through hooks that the `ModuleManager` coordinates in a fixed, declared order. See [BaseApp Overview](/sdk/latest/learn/concepts/baseapp) for how ABCI calls flow through the application, and [Intro to Modules](/sdk/latest/learn/concepts/modules) for how individual modules are structured. The next section, [CLI, gRPC, and REST API](/sdk/latest/learn/concepts/cli-grpc-rest), explains how clients interact with the chain once that wiring is in place. 
# BaseApp Overview Source: https://docs.cosmos.network/sdk/latest/learn/concepts/baseapp `BaseApp` is the execution engine of every Cosmos SDK chain. It implements [ABCI (Application Blockchain Interface)](/sdk/latest/learn/intro/sdk-app-architecture#abci-application-blockchain-interface), the protocol CometBFT uses to communicate with the application, and translates those calls into module execution, transaction processing, and state transitions. Every Cosmos SDK chain embeds `BaseApp`. Your `app.go` creates a `BaseApp` instance, configures it with modules, keepers, and middleware, and the resulting struct is what CometBFT communicates with directly. `BaseApp` provides the base layer of execution infrastructure to your blockchain application. Without it, every chain would need to independently implement ABCI handling, signature verification, gas metering, message routing, block hook orchestration, and state commitment. ## Architectural position `BaseApp` sits between CometBFT and the modules: ``` CometBFT (consensus engine) ↓ ABCI (InitChain, CheckTx, FinalizeBlock, Commit, ...) BaseApp ↓ orchestrates block execution ModuleManager ↓ dispatches to individual modules Modules (x/auth, x/bank, x/staking, ...) ↓ read/write State (KVStores) ``` CometBFT drives the block lifecycle by calling ABCI methods on `BaseApp`. `BaseApp` handles each call, delegating to registered lifecycle hooks and routing messages to the appropriate module handlers. Modules contain the business logic, and KVStores hold the resulting state. ## Key fields [`BaseApp`](https://github.com/cosmos/cosmos-sdk/blob/main/baseapp/baseapp.go) is defined in `baseapp/baseapp.go`. 
It holds references to everything needed to run a chain: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type BaseApp struct { logger log.Logger name string // application name from abci.BlockInfo db dbm.DB // common DB backend cms storetypes.CommitMultiStore // Main (uncached) state storeLoader StoreLoader // function to handle store loading grpcQueryRouter *GRPCQueryRouter // router for redirecting gRPC query calls msgServiceRouter *MsgServiceRouter // router for redirecting Msg service messages txDecoder sdk.TxDecoder // unmarshal []byte into sdk.Tx mempool mempool.Mempool anteHandler sdk.AnteHandler // ante handler for fee and auth postHandler sdk.PostHandler // post handler, optional // ... sealed bool // ... chainID string // ... } ``` For a complete list of fields, see the [`BaseApp` struct definition](https://github.com/cosmos/cosmos-sdk/blob/main/baseapp/baseapp.go). * `cms` (CommitMultiStore): the root state store. All module substores are mounted here, and all state reads and writes during block execution pass through it. * `storeLoader`: a function that opens and mounts the individual module stores at application startup. * `grpcQueryRouter`: routes incoming gRPC queries to the correct module's query handler. * `msgServiceRouter`: routes each message in a transaction to the correct module's `MsgServer` handler. * `txDecoder`: decodes raw transaction bytes from CometBFT into an `sdk.Tx`. * `anteHandler`: runs before message execution to handle cross-cutting concerns: signature verification, sequence validation, and fee deduction. * `postHandler`: optional middleware that runs after message execution — used for tasks such as tipping or post-execution state adjustments. * `sealed`: set to `true` after `LoadLatestVersion` is called. Setter methods panic if called after sealing. 
## Initialization and sealing `BaseApp` enforces a configuration lifecycle: setter methods must be called before `LoadLatestVersion` is invoked. When `LoadLatestVersion` runs, it validates required components, initializes the check state, and sets `sealed` to `true`. Any setter called after sealing panics. On first launch, CometBFT calls `InitChain`. It stores `ConsensusParams` from the genesis file — block gas limit, max block size, evidence rules — in the `ParamStore`, where they can later be adjusted via on-chain governance. It initializes all volatile states by branching the root store, sets the block gas meter to infinite so genesis transactions are not gas-constrained, and calls the application's `initChainer`, which runs each module's `InitGenesis` to populate initial state. How this shapes the structure of `app.go` is covered in the next section. ## Transaction decoding Transactions arrive from CometBFT as raw bytes. Before `BaseApp` can validate or execute them, it must decode them into the SDK's transaction type using the `TxDecoder`: ``` []byte tx ↓ TxDecoder ↓ sdk.Tx ``` This step happens before the transaction enters the execution pipeline. Without it, `BaseApp` cannot inspect messages, run the `AnteHandler`, or route execution to the correct module. ## Execution modes `BaseApp` does not execute everything against the same mutable state. It maintains branched, copy-on-write views of the committed root state for different execution contexts: * `CheckTx` (`ExecModeCheck`): validates a transaction before it enters the mempool, without committing state. * `FinalizeBlock` (`ExecModeFinalize`): executes transactions in a proposed block against a branched state that is committed at the end. * `PrepareProposal` (`ExecModePrepareProposal`): runs when the node is the block proposer, assembling a candidate block. Executes against a branched state that is never committed. 
* `ProcessProposal` (`ExecModeProcessProposal`): runs on every validator to validate an incoming proposal. Also executes against a branched state that is never committed. * `Simulate` (`ExecModeSimulate`): runs a transaction for gas estimation without committing state. This separation ensures that validation, proposal handling, and simulation cannot accidentally mutate committed application state. ## The transaction execution pipeline When `BaseApp` processes a transaction, it runs through a structured pipeline: ``` RunTx ├─ DecodeTx → raw bytes → sdk.Tx ├─ AnteHandler → signatures, sequence, fees, gas setup ├─ RunMsgs → route each message to the correct module handler └─ PostHandler → optional post-execution middleware ``` If the `AnteHandler` fails, message execution does not begin. If any message fails, message execution reverts atomically; all message writes commit or none do. ## `AnteHandler` The `AnteHandler` is middleware that runs before any message in a transaction executes. It verifies cryptographic signatures, validates and increments the account sequence number, deducts transaction fees, and sets up the gas meter for the transaction. For the application wiring side, including `SetAnteHandler`, `HandlerOptions`, and constructor ordering, see [Mounting stores and setting hooks in `app.go`](/sdk/latest/learn/concepts/app-go#mounting-stores-and-setting-hooks). If the `AnteHandler` fails, the transaction is rejected and its messages never execute. If the `AnteHandler` succeeds but a message later fails, the `AnteHandler`'s state writes, such as fee deduction and sequence increment for ordered transactions, are already flushed to `finalizeBlockState` and will be committed with the block. Fees are charged even for transactions whose messages fail. `BaseApp.runTx()` also handles Go panics that occur during execution — for example, when a keeper encounters an invalid state. By default, panics are caught and logged as errors. 
Applications can register custom panic recovery logic via `BaseApp.AddRunTxRecoveryHandler`, which adds a `RecoveryHandler` to the chain. See [ADR-022](/sdk/latest/reference/architecture/adr-022-custom-panic-handling) and [`baseapp/recovery.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/baseapp/recovery.go) for details. ## Message routing When a transaction contains messages, `BaseApp` routes each one to the appropriate module handler using the `MsgServiceRouter`. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type MsgServiceRouter struct { routes map[string]MsgServiceHandler // ... } ``` The routing process has three steps: 1. Registration: During app startup, each module calls `RegisterService`, which registers its message handlers keyed by message type URL (e.g., `/cosmos.bank.v1beta1.MsgSend`). 2. Lookup: At execution time, `Handler` looks up the registered handler for the incoming message's type URL. 3. Execution: The retrieved handler invokes the module's `MsgServer` implementation, which validates inputs, applies business rules, and updates state through the keeper. This routing is entirely type-URL-based. Modules do not need to know about each other at the routing level; `BaseApp` is the neutral coordinator. ## Queries For read-only access to application state, `BaseApp` uses the **`GRPCQueryRouter`** to route incoming gRPC queries to the correct module query service. Queries bypass the transaction execution pipeline and directly read committed state. They do not go through the `AnteHandler`, do not consume gas in the same way, and do not mutate state. ## Store management `BaseApp` owns the `CommitMultiStore` that holds all module state. 
At app startup, each module registers its store key, and `BaseApp` mounts the corresponding store: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} app.MountKVStores(keys) ``` Before executing each transaction, `BaseApp` creates a cached, copy-on-write view of the multistore. All writes during that transaction occur in the cache. If the transaction succeeds, the cache is flushed to the underlying store. If the transaction fails at any point, the cache is discarded and no state changes are applied. ## CheckTx and mempool validation Before a transaction reaches block execution, it goes through `CheckTx`. `BaseApp` runs the `AnteHandler` in `CheckTx` mode to validate signatures, check sequence numbers, and verify fees. Each validator also enforces a configurable `minGasPrices` floor, and transactions offering less than the minimum gas price are rejected here as a spam protection measure. Transactions that fail `CheckTx` are rejected and do not enter the mempool. `CheckTx` does not execute messages. It does run the `AnteHandler`, and if ante succeeds the resulting writes are persisted to `BaseApp`'s internal `CheckTx` state rather than to committed chain state. This is how the mempool tracks transaction validity before block execution. After each block commits, CometBFT triggers a recheck pass (`ReCheckTx`) that re-validates all pending mempool transactions against the new state, and any transactions that became invalid (for example, because their sequence number was consumed by a competing transaction) are evicted at this point. 
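The branch-and-flush discipline described above can be modeled with a cache layer over a committed map — a toy version of the multistore's copy-on-write behavior, not the actual `CacheMultiStore` implementation:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package main

import "fmt"

// cacheStore models BaseApp's per-transaction branch: writes land in the
// cache and reach the committed store only when Write is called.
type cacheStore struct {
	committed map[string]int
	dirty     map[string]int
}

func newCache(committed map[string]int) *cacheStore {
	return &cacheStore{committed: committed, dirty: map[string]int{}}
}

func (c *cacheStore) Set(key string, v int) { c.dirty[key] = v }

func (c *cacheStore) Get(key string) int {
	if v, ok := c.dirty[key]; ok {
		return v // read-your-writes within the branch
	}
	return c.committed[key]
}

// Write flushes the branch. Dropping the cache without calling Write is
// the failure path: committed state is untouched.
func (c *cacheStore) Write() {
	for k, v := range c.dirty {
		c.committed[k] = v
	}
}

func main() {
	committed := map[string]int{"balance": 100}

	failed := newCache(committed)
	failed.Set("balance", 0) // tx fails: cache discarded, never flushed

	ok := newCache(committed)
	ok.Set("balance", 90) // tx succeeds: flush to committed state
	ok.Write()

	fmt.Println(committed["balance"])
}
```

The same mechanism gives `CheckTx` its own branched state: ante-handler writes during mempool validation land in a branch over committed state rather than in committed state itself.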
## Coordinating block execution When CometBFT calls `FinalizeBlock`, `BaseApp` runs the full block execution pipeline in order: ``` FinalizeBlock ├─ PreBlock → module pre-block hooks ├─ BeginBlock → module begin-block hooks ├─ For each transaction: │ ├─ AnteHandler (signature verification, fee deduction, gas setup) │ ├─ Message routing and execution │ └─ Commit or revert (atomic per-transaction) └─ EndBlock → module end-block hooks → returns AppHash ``` [`PreBlock`](/sdk/latest/learn/concepts/lifecycle#preblock) runs before any block logic. It handles changes that must take effect before the block begins, such as activating a chain upgrade or modifying consensus parameters. [`BeginBlock`](/sdk/latest/learn/concepts/modules#beginblock) runs after [`PreBlock`](/sdk/latest/learn/concepts/lifecycle#preblock) and handles per-block housekeeping: minting inflation rewards, distributing staking rewards, resetting per-block counters. [Transactions](/sdk/latest/learn/concepts/transactions) execute sequentially in block order. Message execution for each transaction is atomic: if any message fails, the message execution branch reverts. `AnteHandler` side effects may already have been applied. [`EndBlock`](/sdk/latest/learn/concepts/modules#endblock) runs after all transactions. It handles logic that depends on the block's cumulative state — for example, tallying governance votes after all vote messages have been processed, or updating validator power after all delegation changes. After `FinalizeBlock` completes, `BaseApp` computes and returns the app hash — the Merkle root of all committed state. See [App hash](/sdk/latest/learn/concepts/store#app-hash) for how it relates to the multistore and deterministic execution. When CometBFT subsequently calls `Commit`, `BaseApp` writes `finalizeBlockState` to the root store, resets `checkState` to the newly committed state, and clears `finalizeBlockState` to `nil` in preparation for the next block. 
## Module Manager `BaseApp` exposes `PreBlock`, `BeginBlock`, and `EndBlock` as lifecycle hook points. Every standard SDK application wires these to a `ModuleManager`, which holds the full set of registered modules and their execution ordering. When a hook fires, `ModuleManager` iterates its ordered module list and calls each module's corresponding hook in sequence. Ordering matters: some modules depend on others having already updated state before they run. The `app.go` page shows how the application constructs the `ModuleManager`, wires it into `BaseApp`, and configures ordering in practice. See [Module Manager in `app.go`](/sdk/latest/learn/concepts/app-go#module-manager). ## Block proposal and vote extensions [ABCI 2.0](/sdk/latest/guides/abci/abci#abci-20) added a proposal phase that runs during consensus rounds, before `FinalizeBlock` executes. `BaseApp` exposes four handlers for this phase, with default implementations wired at construction: * `PrepareProposal`: called on the current block proposer to assemble a block from the mempool. The default selects transactions up to the block gas limit. Chains can override this to implement custom ordering, filtering, or injection of protocol-level transactions. * `ProcessProposal`: called on every validator to validate an incoming proposal. The default accepts any structurally valid proposal. Chains that use `PrepareProposal` to inject data typically also override this to verify that data is present and valid. * `ExtendVote` / `VerifyVoteExtension`: allow validators to attach arbitrary data to their precommit votes and verify other validators' extensions. One major use case is oracle price feeds: validators inject off-chain data into consensus so it becomes available on-chain at block start. All four are configurable in `app.go` via `SetPrepareProposal`, `SetProcessProposal`, `SetExtendVoteHandler`, and `SetVerifyVoteExtensionHandler`. Chains that do not need custom behavior can leave the defaults in place. 
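The gas-limited selection that `PrepareProposal` performs can be sketched as a greedy pass over the mempool. This is a simplification for illustration — the SDK's default handler also accounts for byte limits and mempool ordering:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package main

import "fmt"

type tx struct {
	id  string
	gas uint64
}

// selectTxs models a PrepareProposal-style selection: take mempool
// transactions in order, skipping any whose gas would overflow the
// block gas limit.
func selectTxs(mempool []tx, maxBlockGas uint64) []string {
	var picked []string
	var used uint64
	for _, t := range mempool {
		if used+t.gas > maxBlockGas {
			continue // this tx would exceed the limit; try the next one
		}
		used += t.gas
		picked = append(picked, t.id)
	}
	return picked
}

func main() {
	mempool := []tx{
		{"tx1", 400_000},
		{"tx2", 700_000},
		{"tx3", 300_000},
	}
	fmt.Println(selectTxs(mempool, 1_000_000))
}
```

A chain overriding `PrepareProposal` replaces exactly this selection logic — for example, to prioritize protocol-injected transactions or enforce custom ordering before filling the remaining gas budget.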
## Putting it all together `BaseApp` is the execution engine of a Cosmos SDK chain: ``` CometBFT → ABCI → BaseApp → Modules → State ``` It implements ABCI, coordinates the block lifecycle (`PreBlock` → `BeginBlock` → transactions → `EndBlock`), routes messages to module handlers via the `MsgServiceRouter`, routes queries via the `GRPCQueryRouter`, runs the `AnteHandler` before each transaction, and manages the multistore with copy-on-write caching for atomicity. State changes are committed at block end; validation and simulation run against branched state and never touch committed data. The next section, [app.go Overview](/sdk/latest/learn/concepts/app-go), explains how `BaseApp` is instantiated, configured, and wired with modules to produce a complete, running chain. # CLI, gRPC, and REST API Source: https://docs.cosmos.network/sdk/latest/learn/concepts/cli-grpc-rest A Cosmos SDK chain exposes three external interfaces for interacting with it: a command-line interface (CLI), a gRPC API, and a REST API. Each is a different surface over the same underlying chain logic. Users and developers can choose whichever interface suits their use case without affecting how the chain processes or validates transactions. ## How users interact with a chain Every operation a user performs falls into one of two categories: * **Transactions**: state-changing operations broadcast to the network and included in blocks (send tokens, delegate stake, vote on a proposal) * **Queries**: read-only requests that return data from the current chain state without going through consensus Both categories are accessible through the CLI, gRPC, and REST interfaces. The interfaces differ in how requests are constructed and transmitted, not in what they can do. None of these interfaces affect consensus. Transactions are validated and ordered by the consensus engine (CometBFT); the interfaces are simply delivery mechanisms that carry signed transactions to the network and return results. 
For the transaction and query model underneath these interfaces, see [Transactions, Messages, and Queries](/sdk/latest/learn/concepts/transactions). ## Interface comparison All endpoints default to `localhost` and must be configured to be accessible over the public internet. | Interface | Default port | Best for | Notes | | ---------------- | ------------ | ---------------------------------------------------------------- | --------------------------------------------------------- | | **CLI** | — | Development, testing, and node operations | Best for operator and developer workflows | | **gRPC** | 9090 | Wallets, backend services, and SDK clients | Not supported in browsers (requires HTTP/2) | | **REST** | 1317 | Web applications, scripts, and environments without gRPC support | Use when gRPC is unavailable; REST is disabled by default | | **CometBFT RPC** | 26657 | Consensus and blockchain data queries | Limited to consensus-layer data | ## CLI The CLI is the primary tool for developers and operators interacting with a chain from the terminal. Most Cosmos SDK chains ship a single binary that acts as both the server process and the CLI client. It is common to append a `d` suffix to the binary name to indicate that it is a daemon process, such as `exampled` or `simd`. For a hands-on walkthrough of running a local chain and using the CLI, see the [Running and Testing](/sdk/latest/tutorials/example/05-run-and-test) tutorial. When used as a client, the CLI constructs a transaction or query, signs it if required, and submits it through the node client interface. To learn how to run a local node and use the CLI, see [Run a Local Node](/sdk/latest/node/prerequisites). 
### Using the CLI CLI commands are organized into two categories: * `query` commands retrieve information from chain state * `tx` commands construct and broadcast transactions Example commands: ``` exampled query counter count ``` ``` exampled tx counter add 10 \ --from mykey \ --chain-id example-1 \ --gas auto \ --gas-adjustment 1.3 \ --fees 1000stake ``` * `--from` specifies the signing key * `--gas auto` asks the CLI to estimate gas usage * `--gas-adjustment` applies a safety multiplier to the estimate * `--fees` specifies the transaction fee Gas limits the computational work a transaction can perform. The full gas model is explained in [Execution Context, Gas, and Events](/sdk/latest/learn/concepts/context-gas-events). For a full CLI reference for the example chain, see [CLI reference](/sdk/latest/tutorials/example/05-run-and-test#cli-reference) in the Running and Testing tutorial. ### How modules expose CLI commands with `AutoCLI` In modern Cosmos SDK applications, modules expose CLI commands through **`AutoCLI`**. `AutoCLI` reads a module's protobuf service definitions and generates CLI commands automatically, without requiring modules to hand-write Cobra command boilerplate. The counter snippets in this section are from the minimal counter module example. See the [Build a Module from Scratch](/sdk/latest/tutorials/example/03-build-a-module) tutorial. 
A module opts into `AutoCLI` by implementing `AutoCLIOptions()` on its `AppModule`: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (a AppModule) AutoCLIOptions() *autocliv1.ModuleOptions { return &autocliv1.ModuleOptions{ Query: &autocliv1.ServiceCommandDescriptor{ Service: "example.counter.Query", EnhanceCustomCommand: true, RpcCommandOptions: []*autocliv1.RpcCommandOptions{ { RpcMethod: "Count", Use: "count", Short: "Query the current counter value", }, }, }, Tx: &autocliv1.ServiceCommandDescriptor{ Service: "example.counter.Msg", EnhanceCustomCommand: true, RpcCommandOptions: []*autocliv1.RpcCommandOptions{ { RpcMethod: "Add", Use: "add [amount]", Short: "Add to the counter", PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ProtoField: "add"}}, }, }, }, } } ``` `AutoCLI` uses this configuration to generate the `exampled tx counter add` and `exampled query counter count` commands. The `Service` field names the protobuf service, and `RpcCommandOptions` maps individual RPC methods to CLI subcommands with positional arguments, flags, and help text. 
The `AutoCliOpts()` method on the application struct collects these options from all modules and passes them to the `AutoCLI` framework at startup: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (app *ExampleApp) AutoCliOpts() autocli.AppOptions { modules := make(map[string]appmodule.AppModule) for _, m := range app.ModuleManager.Modules { if moduleWithName, ok := m.(module.HasName); ok { moduleName := moduleWithName.Name() if appModule, ok := moduleWithName.(appmodule.AppModule); ok { modules[moduleName] = appModule } } } return autocli.AppOptions{ Modules: modules, ModuleOptions: runtimeservices.ExtractAutoCLIOptions(app.ModuleManager.Modules), AddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32AccountAddrPrefix()), ValidatorAddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32ValidatorAddrPrefix()), ConsensusAddressCodec: authcodec.NewBech32Codec(sdk.GetConfig().GetBech32ConsensusAddrPrefix()), } } ``` This collects module options and address codecs and hands them to `AutoCLI`, which wires the generated commands into the root command. ## gRPC gRPC is the primary programmatic interface for interacting with a Cosmos chain. It uses Protocol Buffers to define strongly typed request and response structures and supports generated clients for many programming languages. Each module exposes its functionality through two protobuf services: * A `Query` service for read-only access to module state * A `Msg` service for state-changing operations These services are defined in the module's `query.proto` and `tx.proto` files. The protobuf definitions for the Cosmos SDK are published at [buf.build/cosmos/cosmos-sdk](https://buf.build/cosmos/cosmos-sdk). 
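As a sketch of what these definitions look like, here are minimal `Query` and `Msg` services for the counter example (the method and request/response type names follow the counter tutorial used elsewhere on this page; the exact proto package layout is assumed):

```protobuf
// query.proto — read-only access to module state
service Query {
  rpc Count(QueryCountRequest) returns (QueryCountResponse);
}

// tx.proto — state-changing operations
service Msg {
  rpc Add(MsgAddRequest) returns (MsgAddResponse);
}
```

The gRPC stubs, the `AutoCLI` commands, and the REST routes described below are all generated from these same service definitions.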
### How modules expose gRPC services Modules register their gRPC services during application startup via `RegisterServices`: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} app.configurator = module.NewConfigurator(app.appCodec, app.MsgServiceRouter(), app.GRPCQueryRouter()) err := app.ModuleManager.RegisterServices(app.configurator) ``` Each module implements `RegisterServices` to connect service implementations to the application routers: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (am AppModule) RegisterServices(cfg module.Configurator) { types.RegisterMsgServer(cfg.MsgServer(), keeper.NewMsgServerImpl(am.keeper)) types.RegisterQueryServer(cfg.QueryServer(), keeper.NewQueryServer(am.keeper)) } ``` `RegisterMsgServer` routes incoming `Msg` service calls to the module's `MsgServer` implementation. `RegisterQueryServer` routes incoming `Query` service calls to the module's `QueryServer` implementation. 
### How to interact with gRPC

Connect to the node's gRPC endpoint (default: `localhost:9090`) using a generated client:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
conn, err := grpc.NewClient("localhost:9090", grpc.WithTransportCredentials(insecure.NewCredentials()))
if err != nil {
	return err
}
defer conn.Close()

queryClient := countertypes.NewQueryClient(conn)
resp, err := queryClient.Count(ctx, &countertypes.QueryCountRequest{})
if err != nil {
	return err
}
fmt.Println(resp.Count)
```

The gRPC server can be configured in `app.toml`:

* `grpc.enable = true|false` — enables or disables the gRPC server (default: `true`)
* `grpc.address = {string}` — the `ip:port` the server binds to (default: `localhost:9090`)
* `grpc.max-recv-msg-size` — maximum message size in bytes the server can receive (default: 10MB)
* `grpc.max-send-msg-size` — maximum message size in bytes the server can send (default: `math.MaxInt32`)

For archive node setups, `grpc.historical-grpc-address-block-range` maps gRPC backend addresses to inclusive block height ranges, so historical queries are routed to the node holding that slice of chain history. The value is a JSON string, for example: `'{"archive-node-1:9090": [0, 1000000]}'`. Leave it empty (the default) to disable.

For more usage examples, see [Interact with the Node](/sdk/latest/node/interact-node#using-grpc).

## REST via gRPC-gateway

The Cosmos SDK also exposes a REST API. REST endpoints are not written by hand; they are generated automatically from the same protobuf definitions used by gRPC, using **gRPC-gateway**. gRPC-gateway reads HTTP annotations in the `.proto` files and generates a reverse proxy that translates REST requests into gRPC calls:

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
service Query {
  rpc Count(QueryCountRequest) returns (QueryCountResponse) {
    option (google.api.http) = {
      get: "/example/counter/v1/count"
    };
  }
}
```

This annotation causes gRPC-gateway to generate a `GET /example/counter/v1/count` HTTP endpoint.

The gateway receives the HTTP request, marshals it into a `QueryCountRequest`, calls the gRPC `Count` handler, and returns the response as JSON.

### Registering REST routes

REST routes are registered in `RegisterAPIRoutes`:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (app *ExampleApp) RegisterAPIRoutes(apiSvr *api.Server, apiConfig config.APIConfig) {
	clientCtx := apiSvr.ClientCtx

	// Register new tx routes from grpc-gateway.
	authtx.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter)

	// Register new CometBFT queries routes from grpc-gateway.
	cmtservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter)

	// Register node gRPC service for grpc-gateway.
	nodeservice.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter)

	// Register grpc-gateway routes for all modules.
	app.BasicModuleManager.RegisterGRPCGatewayRoutes(clientCtx, apiSvr.GRPCGatewayRouter)
}
```

The REST server can be configured in `app.toml`:

* `api.enable = true|false` — enables or disables the REST server (default: `false`)
* `api.address = {string}` — the `ip:port` the server binds to (default: `tcp://localhost:1317`)

### Swagger

When the REST server and Swagger are both enabled, the node exposes a Swagger (OpenAPI v2) specification at `http://localhost:1317/swagger/`. Swagger lists all REST endpoints, request parameters, and response schemas, and provides a browser-based interface for exploring the REST API.

Both are disabled by default. Enable them in `app.toml`:

```
api.enable = true
api.swagger = true
```

To generate Swagger documentation for your own custom modules, see the [`protoc-swagger-gen` script](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/scripts/protoc-swagger-gen.sh) in the Cosmos SDK.

## CometBFT RPC

CometBFT also exposes its own RPC server, independent of the Cosmos SDK.
It serves consensus and blockchain data and is configured under the `rpc` table in `config.toml` (default: `tcp://localhost:26657`). An OpenAPI specification of all CometBFT RPC endpoints is available in the [CometBFT documentation](/cometbft/latest/docs/core/RPC). Some CometBFT RPC endpoints are directly related to the Cosmos SDK: * `/abci_query` — queries the application for state. The `path` parameter accepts: * any protobuf fully-qualified service method, for example `/cosmos.bank.v1beta1.Query/AllBalances` * `/app/simulate` — simulate a transaction and return gas usage * `/app/version` — return the application version * `/store/{storeName}/key` — direct key lookup in a named store * `/store/{storeName}/subspace` — prefix scan in a named store * `/p2p/filter/addr/{addr}` and `/p2p/filter/id/{id}` — filter peers by address or node ID * `/broadcast_tx_sync`, `/broadcast_tx_async`, `/broadcast_tx_commit` — broadcast a signed transaction to peers. The CLI, gRPC, and REST interfaces all use these CometBFT RPCs under the hood. Two gRPC methods on the CometBFT service return ABCI block results: `GetBlockResults` (by height) and `GetLatestBlockResults`. These expose `finalize_block_events` and per-transaction results. ## End-to-end interaction flow To illustrate how these interfaces connect, here is the path of a `counter add` transaction from the user's terminal to a state change on the chain. This example follows the minimal counter module example's CLI shape. See the [Build a Module from Scratch](/sdk/latest/tutorials/example/03-build-a-module#step-9-autocli) tutorial. 
``` User runs: exampled tx counter add 10 --from mykey --chain-id example-1 ↓ CLI (AutoCLI generated command) Constructs MsgAddRequest{Sender: mykey, Add: 10} Signs the transaction with mykey Encodes to protobuf bytes ↓ Broadcast through the node client interface ↓ Node: CheckTx AnteHandler verifies signature, deducts fee, meters gas Transaction enters the mempool ↓ CometBFT: block proposal and consensus ↓ FinalizeBlock: transaction executed AnteHandler runs again (finalizeBlock mode) MsgServiceRouter routes MsgAddRequest → counter module MsgServer MsgServer.Add calls keeper.AddCount Keeper reads current count, adds 10, writes new count ↓ Commit: state change persisted ↓ User receives TxResponse with code 0 ``` A query follows a shorter path that bypasses consensus entirely: ``` User runs: exampled query counter count ↓ CLI (AutoCLI generated command) Constructs QueryCountRequest{} Sends directly to node gRPC query endpoint ↓ GRPCQueryRouter routes to counter QueryServer QueryServer.Count calls keeper.GetCount Keeper reads current count from store ↓ QueryCountResponse{Count: 10} returned to user ``` Queries do not enter the mempool, are not included in blocks, and do not pass through the `AnteHandler`. They read committed state and return immediately. ## Interfaces and consensus The CLI, gRPC, and REST interfaces are transport layers. They construct, sign, and deliver messages, but they do not participate in consensus and cannot affect the determinism of block execution. * Transactions become part of consensus only after they pass `CheckTx` and are included in a proposed block. The interface used to submit the transaction has no bearing on how it is validated or ordered. * Queries bypass the transaction pipeline entirely. They read committed state from a node and never reach the consensus engine. * Any node in the network can serve queries or accept transaction submissions. 
The result is always the same committed state, regardless of which node or which interface is used. This separation means that changing the CLI or REST surface of a module (renaming a command, adding a new query) never requires a chain upgrade. Only changes to message types, keeper logic, or state schema affect consensus. The next section, [Testing in the SDK](/sdk/latest/learn/concepts/testing), shows how to test those behaviors once they are wired up. # Execution Context, Gas, and Events Source: https://docs.cosmos.network/sdk/latest/learn/concepts/context-gas-events In the previous section, [Encoding and Protobuf](/sdk/latest/learn/concepts/encoding) explained how data is serialized and why every validator must encode state identically. This page covers the runtime environment that modules execute within: the context object that carries block metadata and state access, the gas system that limits computation, and the event system that allows modules to emit observable signals. ## What is `sdk.Context` Every message handler, keeper method, and block hook in the Cosmos SDK receives an `sdk.Context`. It is the execution environment for a single unit of work (a transaction, a query, or a block hook) and carries everything that code needs to read state, emit events, and consume gas. Rather than passing the store, gas meter, and block header as separate arguments to every function, `Context` bundles them into a single value. The `Context` struct is defined in [`types/context.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/types/context.go): ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type Context struct { ms storetypes.MultiStore chainID string gasMeter storetypes.GasMeter blockGasMeter storetypes.GasMeter eventManager EventManagerI // ... additional fields } ``` Context is a value type. It is passed by value and mutated through `With*` methods that return a new copy. 
This means a module can safely derive a sub-context (for example, with a different gas meter) without affecting the caller's context. ### Block metadata Context exposes read-only access to the current block's metadata (see [`types/context.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/types/context.go)): * `ctx.BlockHeight()` returns the current block number. * `ctx.BlockTime()` returns the block's timestamp. * `ctx.ChainID()` returns the chain identifier string. * `ctx.Logger()` returns a structured logger scoped to the current execution context. Modules use this for operational logging (e.g., logging an upgrade activation or an unexpected state) without affecting consensus. These values are populated by [`BaseApp`](/sdk/latest/learn/concepts/baseapp) from the block header provided by CometBFT before any block logic runs. Modules read them to implement time-dependent logic (for example, checking whether a vesting period has elapsed) or to tag events with the block height. `ctx.IsCheckTx()` returns true when the context is being used for mempool validation rather than block execution. For finer-grained branching, `ctx.ExecMode()` returns the precise execution mode: `ExecModeCheck`, `ExecModeReCheck`, `ExecModeSimulate`, `ExecModePrepareProposal`, `ExecModeProcessProposal`, `ExecModeFinalize`, and others (see [`types/context.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/types/context.go#L21) for more details). Modules that need to behave differently during simulation or proposal handling use `ExecMode()` instead of `IsCheckTx()`. ### Context and state access State is accessed through context. 
The context holds a reference to the multistore, and each keeper opens its own store through the context: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (k Keeper) GetCount(ctx context.Context) (uint64, error) { return k.counter.Get(ctx) } ``` The keeper does not hold a direct reference to the live multistore; it opens its module's store from the context on each call. This is why context must be passed to every keeper method: it is the gateway to the current block's state, the gas meter, and the event manager for that execution unit. ### Atomic sub-execution with `CacheContext` Modules that need to attempt a sub-operation and revert it on failure can call [`ctx.CacheContext()`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/types/context.go#L412), which returns a branched copy of the context and a `writeCache` function. All state changes in the sub-operation go into the branch. Calling `writeCache()` flushes them to the parent context; not calling it discards them atomically. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cacheCtx, writeCache := ctx.CacheContext() if err := doRiskyOperation(cacheCtx); err != nil { return err // branch is discarded, no state changes applied } writeCache() // flush branch to parent context ``` ## Gas metering ### What gas measures Gas is a unit of computation. In the Cosmos SDK, gas accounts for both computation and state access. Every store read, store write, and iterator step costs gas. Complex computations such as signature verification in the `AnteHandler` also cost gas. The gas system exists to prevent abuse. Without a gas limit, a single transaction could exhaust a node's resources with an unbounded computation or an unindexed state scan. ### Gas limit and the transaction gas meter Every transaction specifies a gas limit in its `auth_info.fee.gas_limit` field. 
When `BaseApp` begins executing a transaction, it creates a `GasMeter` initialized with that limit and attaches it to the context. The [`GasMeter`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/store/types/gas.go#L42) interface provides two key methods: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type GasMeter interface { GasConsumed() Gas ConsumeGas(amount Gas, descriptor string) // ... } ``` `GasConsumed` returns the total gas used so far in the current execution unit. `ConsumeGas` adds to the running total and panics with `ErrorOutOfGas` if consumption exceeds the limit. When submitting a transaction, users specify two of the three values `fees`, `gas`, and `gas-prices` — the third is derived from the equation `fees = gas * gas-prices`. The `gas` value becomes `GasWanted`: the maximum gas the transaction is allowed to consume. The actual gas consumed during execution is `GasUsed`. Both `GasWanted` and `GasUsed` are returned to CometBFT when `FinalizeBlock` completes. ### How gas is consumed Gas is consumed automatically at the store layer. Every read and write through the [`GasKVStore`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/store/gaskv/store.go#L12) wrapper charges gas before delegating to the underlying store: * A `Get` (store read) charges a flat read cost plus a per-byte cost for the key and value. * A `Set` (store write) charges a flat write cost plus a per-byte cost for the key and value. Modules do not need to manually track gas for ordinary state access — the store layer handles it automatically. Modules call `ctx.GasMeter().ConsumeGas(...)` directly only for computation costs that are not captured by store operations (for example, a module that performs a cryptographic operation outside the store). ### When gas runs out If gas is exhausted during execution, `ConsumeGas` panics with `ErrorOutOfGas`. 
`BaseApp` recovers from this panic, discards the current message execution branch, and returns an error to the user. Fees may still be charged for the gas consumed up to the point of failure, and `AnteHandler` side effects may already have been applied before message execution started. ### Block gas limit In addition to the per-transaction gas meter, there is a block-level gas meter that tracks total gas consumed by all transactions in a block. The block gas limit prevents a single block from consuming unbounded computation. If a transaction would cause the block's gas total to exceed the limit, it is excluded from the block. The block gas limit and minimum gas prices are configured in [`app.toml`](/sdk/latest/tutorials/example/05-run-and-test#apptoml). ## Events ### What events are Events are observable signals emitted during transaction and block execution. A module emits events to describe what happened: tokens were transferred, a validator was slashed, a governance proposal passed. Events carry structured key-value data alongside a type string. Events are not part of consensus state. They are not stored in the KVStore, do not affect the app hash, and are not required for deterministic execution. Instead, they are collected by `BaseApp` and included in the block result, where indexers, explorers, and relayers consume them. ### EventManager Modules emit events through the [`EventManager`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/types/events.go#L25), which is attached to the context. The `EventManager` is created fresh for each transaction and collects all events emitted during that execution. 
### Standard event types

The SDK automatically emits a `message` event for every transaction, with these attributes set by `BaseApp`:

* `message.action` — the full type URL of the message (e.g., `/cosmos.bank.v1beta1.MsgSend`)
* `message.module` — the module name, derived from the type URL
* `message.sender` — the signer address, if present

These are defined as constants in [`types/events.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/types/events.go#L249-L263). Modules follow the same convention when emitting their own events.

### Emitting events

Modules emit events using [`EmitEvent`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/types/events.go#L35) or [`EmitTypedEvent`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/types/events.go#L58):

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// emit an untyped event
ctx.EventManager().EmitEvent(sdk.NewEvent(
	"increment",
	sdk.NewAttribute("new_count", strconv.FormatUint(newCount, 10)),
))
```

`EmitEvent` appends a raw key-value event to the manager's accumulated list. For events backed by protobuf message types, `EmitTypedEvent` serializes the message's fields into event attributes automatically:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
ctx.EventManager().EmitTypedEvent(&types.EventCounterIncremented{
	NewCount: newCount,
})
```

Using `EmitTypedEvent` is the modern approach. It provides type safety and makes the event schema explicit through proto definitions, allowing clients to deserialize events back into typed structs.

### Block events and transaction events

Events emitted during `BeginBlock` or `EndBlock` hooks are **block events**: they describe things that happened at the block level (inflation minted, validator updates applied). Events emitted inside a message handler are **transaction events**: they describe what a specific transaction did.
Both types are included in the `FinalizeBlock` response that CometBFT returns to the network, but they are reported separately so clients can distinguish block-level activity from per-transaction activity. ### Who consumes events Events are consumed outside the node: * **Block explorers** index events to show users what happened in a transaction (which tokens moved, which validator was slashed, which proposal passed). * **Relayers** (IBC) subscribe to specific event types to detect packet sends and acknowledgments. * **Indexers and off-chain services** build queryable databases of chain activity from event streams. Events can also be queried via the node's REST API and WebSocket endpoint. * **Wallets and UIs** display event data to users as transaction receipts. Events are included in the block result that CometBFT returns after each block. They are not replayed or reprocessed; once a block is finalized, its events are fixed. ### Querying events Events are indexed using the format `{type}.{key}={value}` and can be filtered when querying transactions. String values must be wrapped in single quotes. 
| Filter | Description |
| ----------------------------------------------- | ------------------------------------------- |
| `tx.height=23` | All transactions at block height 23 |
| `message.action='/cosmos.bank.v1beta1.MsgSend'` | Transactions containing a bank Send message |
| `message.module='bank'` | Transactions from the x/bank module |

## Putting it together

During transaction execution, context, gas, and events work together as the runtime layer:

```
BaseApp creates Context for the transaction
  ↓
AnteHandler runs
  → signature verification, fee deduction, gas meter initialized
  ↓
Message handler runs
  → each store read/write consumes gas via GasKVStore
  → module logic emits events via EventManager
  ↓
If gas exhausted      → panic → state reverted, fees charged for gas consumed
If execution succeeds → state changes committed, events returned in block result
```

The context carries the gas meter and event manager into every keeper call. Gas is consumed transparently at the store layer. Events accumulate and are returned as part of the block result once execution completes.

The next section, [Intro to SDK Structure](/sdk/latest/learn/concepts/sdk-structure), explains how an SDK application is structured as a codebase: where modules live, what goes in `app/`, and how all the pieces are assembled.

# Protobuf and Signing

Source: https://docs.cosmos.network/sdk/latest/learn/concepts/encoding

As described in [State, Storage, and Genesis](/sdk/latest/learn/concepts/store), modules write structured state values into the KV store as raw bytes. Encoding defines how those structured values are serialized into bytes; consensus requires that every validator produce exactly the same bytes. This page explains how that encoding works, why the Cosmos SDK chose Protocol Buffers, and what that means for module development.

## What is Protobuf?

[Protocol Buffers](https://protobuf.dev/) (protobuf) is a language-neutral, binary serialization format developed by Google.
You define your data structures in `.proto` files using a schema language, then generate code in your target language from that schema. The generated code handles serialization (converting structured data into bytes) and deserialization (converting bytes back into structured data). A simple protobuf message looks like this: ```proto theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} message MsgSend { string from_address = 1; string to_address = 2; repeated Coin amount = 3; } ``` Each field has a name, a type, and a field number. The field numbers are what protobuf actually uses during encoding; field names are only present in the schema, not in the serialized bytes. ## Why the Cosmos SDK uses protobuf The Cosmos SDK uses protobuf for a fundamental reason: consensus requires determinism. Every validator in the network independently executes each block. After execution, each validator computes the [app hash](/sdk/latest/learn/concepts/store#app-hash), a cryptographic hash of the application state. For validators to agree on the app hash, they must all produce exactly the same bytes for every piece of state they write. Protobuf alone does not guarantee this. The Cosmos SDK uses protobuf **with additional deterministic encoding rules** formalized in [ADR-027 (Deterministic Protobuf Serialization)](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/docs/architecture/adr-027-deterministic-protobuf-serialization.md). ADR-027 specifies constraints such as requiring fields to appear in ascending field-number order and varint encodings to be as short as possible. The SDK validates incoming transactions against these rules before processing them, so a non-deterministically encoded transaction is rejected rather than producing divergent state. Every validator encoding the same data under these rules produces an identical byte sequence. 
Beyond determinism, protobuf provides: * **Compact encoding**: binary wire format is smaller than JSON or XML, which matters for transaction throughput and block size. * **Schema evolution**: fields can be added or deprecated without breaking existing clients, which is critical for chain upgrades. * **Code generation**: `.proto` files generate Go structs, gRPC service stubs, and REST gateway handlers automatically. * **Cross-language support**: clients in any language can interact with the chain by generating code from the same `.proto` files. ## Binary and JSON encoding The Cosmos SDK uses protobuf in two encoding modes: **Binary encoding** is the default for everything that participates in consensus: transactions written to blocks, state stored in KV stores, and genesis data. Binary encoding is compact and deterministic. When a transaction is broadcast to the network, it travels as protobuf binary. When a module writes state, it serializes values to protobuf binary before calling `Set` on the store. **JSON encoding** is used for human-readable output: the [CLI, gRPC-gateway REST endpoints](/sdk/latest/learn/concepts/cli-grpc-rest), and off-chain tooling. The Cosmos SDK uses protobuf's JSON encoding (`ProtoMarshalJSON`) rather than standard Go JSON, which preserves field names from the `.proto` schema and handles special types like `Any` correctly. It is important to keep in mind that **binary encoding is consensus-critical**. Two validators must produce identical binary bytes for identical data. JSON is only used where humans or external clients need to read the data; it never influences the AppHash. 
```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} Consensus-critical path Human-readable path ───────────────────────── ───────────────────────── Transaction bytes (binary) CLI output (JSON) State KV values (binary) REST API responses (JSON) Genesis KV state (binary) Block explorers (JSON) ``` Note: genesis data is distributed as JSON in `genesis.json`, but during chain initialization `InitGenesis` deserializes that JSON into protobuf structs and writes them to the KV store as binary. The KV store (and therefore the AppHash) only ever contains the binary form. ## Transaction encoding Transactions are protobuf messages defined in [`cosmos.tx.v1beta1`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/proto/cosmos/tx/v1beta1/tx.proto). A transaction is composed of three parts: ```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} Tx ├─ TxBody │ └─ repeated google.protobuf.Any messages ├─ AuthInfo │ ├─ repeated SignerInfo (each with sequence) │ └─ Fee └─ repeated bytes signatures ``` * **TxBody** contains the messages to execute, serialized as `repeated google.protobuf.Any messages`. * **AuthInfo** contains signer information (including the per-signer sequence number) and fee. * **signatures** contains the cryptographic signatures, one per signer. Messages inside the transaction are stored as `google.protobuf.Any` values so that a single transaction can contain multiple message types from different modules. When a user submits a transaction, the SDK encodes it as a `TxRaw`—a flat structure with the `TxBody` bytes, `AuthInfo` bytes, and signatures already serialized. It then broadcasts that binary representation over the network. ## Transaction signing and `SignDoc` Transactions are not signed directly. 
Instead, the SDK constructs a deterministic structure called a **`SignDoc`**, which defines exactly what bytes the signer commits to: ```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} SignDoc ├─ body_bytes (serialized TxBody) ├─ auth_info_bytes (serialized AuthInfo, includes sequence per signer) ├─ chain_id (prevents cross-chain replay) └─ account_number (ties the signature to a specific on-chain account) ``` The `SignDoc` is serialized to protobuf binary and then signed with the user's private key: ```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} signature = Sign(proto.Marshal(SignDoc)) ``` Because `SignDoc` is serialized deterministically, all validators verify the exact same bytes when checking transaction signatures. The per-signer sequence number lives in `AuthInfo.SignerInfo.sequence` and is included in `auth_info_bytes`, which is part of `SignDoc`—this is what prevents replay attacks. ## Sign modes A **sign mode** determines what bytes a signer commits to when signing a transaction. The SDK supports multiple sign modes to accommodate different clients and hardware: * `SIGN_MODE_DIRECT` (default): the signer signs over the protobuf-binary-serialized `SignDoc` described above. This is compact, deterministic, and the correct choice for all new development. * `SIGN_MODE_LEGACY_AMINO_JSON`: the signer signs over an Amino JSON-encoded `StdSignDoc` instead of the protobuf `SignDoc`. This exists for backward compatibility with hardware wallets (e.g., older Ledger firmware) and client tooling that predates protobuf. New modules and chains should not depend on it. * `SIGN_MODE_TEXTUAL`: the signer signs over a human-readable CBOR-encoded representation of the transaction, designed to display legibly on hardware wallet screens (introduced in v0.50, see [ADR-050](/sdk/latest/reference/architecture/adr-050-sign-mode-textual)). 
This is the SDK's newer direction for human-readable signing on hardware wallets, intended to replace `SIGN_MODE_LEGACY_AMINO_JSON` over time. Its specification is versioned and has evolved across SDK releases. * `SIGN_MODE_DIRECT_AUX`: allows N-1 signers in a multi-signer transaction to sign over only `TxBody` and their own `SignerInfo`, without specifying fees. The designated fee payer signs last using `SIGN_MODE_DIRECT`. This simplifies multi-signature UX. The sign mode is negotiated at transaction construction time and does not affect how state is stored or how validators execute transactions. It only affects what bytes are signed. The full list of sign modes is defined in [`signing.proto`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/proto/cosmos/tx/signing/v1beta1/signing.proto#L17). **For module developers:** `SIGN_MODE_DIRECT` requires no extra work. If you want your module's messages to be signable on Ledger hardware wallets using `SIGN_MODE_LEGACY_AMINO_JSON`, register your message types with the Amino codec via `RegisterLegacyAminoCodec` in your module's `codec.go`. ## Message signers Every transaction message must declare which addresses are authorized to sign it. In v0.50+, this is done via the `cosmos.msg.v1.signer` protobuf annotation — the SDK reads the annotation at startup and automatically extracts signer addresses from that field. See [Protobuf Annotations](/sdk/latest/guides/reference/protobuf-annotations) for the full annotation reference. 
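For the counter example, the annotation might look like this (a sketch: the field layout follows the tutorial's `MsgAddRequest`; `cosmos/msg/v1/msg.proto` is the standard import that defines the annotation):

```protobuf
import "cosmos/msg/v1/msg.proto";

message MsgAddRequest {
  // the SDK reads this option at startup and treats the
  // `sender` field as the message's required signer
  option (cosmos.msg.v1.signer) = "sender";

  string sender = 1;
  uint64 add = 2;
}
```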
For messages that cannot use the annotation — for example, messages with non-standard signing logic such as EVM-compatible transactions — you can register a custom signer function using [`signing.CustomGetSigner`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/x/tx/signing/context.go#L127): ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} signer := signing.CustomGetSigner{ MsgType: proto.MessageName(&MyMsg{}), Fn: func(msg proto.Message) ([][]byte, error) { m := msg.(*MyMsg) // extract and return signer address bytes return [][]byte{m.SignerBytes()}, nil }, } ``` To register it, call `signingOptions.DefineCustomGetSigners(msgType, fn)` on the `txsigning.Options` you pass to `authtx.NewTxConfigWithOptions` when building your app's `TxConfig`. ## How protobuf is used in modules Most public and persisted data types in modern SDK modules are defined in `.proto` files and serialized with protobuf. This covers the core API surface: transaction messages, query request/response types, stored state values, and genesis state. ### Messages and transactions Each module defines its transaction messages in a `tx.proto` file. The `MsgSend` definition above is an example. When a user submits a transaction, the SDK serializes the transaction body (including its messages) to binary using protobuf before broadcasting it. For a hands-on example, see [tx.proto](/sdk/latest/tutorials/example/03-build-a-module#txproto) in the Build a Module tutorial. ### Queries Modules define their query services in `query.proto`. Request and response types are protobuf messages. The SDK uses gRPC for queries, and gRPC uses protobuf as its serialization format by definition. For a hands-on example, see [query.proto](/sdk/latest/tutorials/example/03-build-a-module#queryproto) in the Build a Module tutorial. ### State types Data stored in the KV store is protobuf-encoded. 
A module that stores a custom struct first marshals it to bytes using the codec, then writes those bytes to the store. When reading, it unmarshals the bytes back into the struct. Note that only *values* are protobuf-encoded; *keys* are manually constructed byte sequences, not protobuf. Key layout is covered in the [State, Storage, and Genesis](/sdk/latest/learn/concepts/store) section. ### Genesis Genesis state is defined in `genesis.proto`. `InitGenesis` and `ExportGenesis` use protobuf to deserialize genesis state from `genesis.json` and serialize it back. A concrete example shows how a module reads and writes typed state as bytes: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // write: marshal the coin amount to bytes, then set in store bz, err := k.cdc.Marshal(&amount) store.Set(key, bz) // read: get bytes from store, unmarshal back to coin var amount sdk.Coin bz := store.Get(key) k.cdc.Unmarshal(bz, &amount) ``` The codec (`k.cdc`) is the protobuf codec described in the next section. ## The codec and interface registry The Cosmos SDK wraps protobuf in a **codec** that modules use for marshaling and unmarshaling. The primary implementation is [`ProtoCodec`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/codec/proto_codec.go), which calls protobuf's `Marshal` and `Unmarshal` under the hood. 
```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type ProtoCodec struct { interfaceRegistry types.InterfaceRegistry } func (pc *ProtoCodec) Marshal(o ProtoMarshaler) ([]byte, error) func (pc *ProtoCodec) Unmarshal(bz []byte, ptr ProtoMarshaler) error ``` Keepers hold a reference to the codec and use it to encode and decode state: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type Keeper struct { cdc codec.BinaryCodec store storetypes.StoreKey } ``` The codec is initialized once at app startup and passed to each keeper during initialization. ### Interface types and `Any` Protobuf is strongly typed. You cannot store a field as "some implementation of an interface" directly in a protobuf message. The Cosmos SDK solves this using protobuf's [`google.protobuf.Any`](https://protobuf.dev/programming-guides/proto3/#any), which wraps an arbitrary message type alongside a URL that identifies what type it contains. `Any` is used anywhere the SDK needs to serialize a value whose concrete type is not known at compile time. The most common example is public keys. An account might use a secp256k1 key, an ed25519 key, or a multisig key. The `BaseAccount` stores the public key as `Any`: ```proto theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} message BaseAccount { string address = 1; google.protobuf.Any pub_key = 2; uint64 account_number = 3; uint64 sequence = 4; } ``` The `Any` field holds the serialized public key bytes plus a type URL like `/cosmos.crypto.secp256k1.PubKey`. When the SDK reads the account, it uses the type URL to look up the concrete Go type, then unmarshals the bytes into that type. #### Messages inside transactions Transaction messages are the most common use of `Any` in the SDK. A transaction can carry multiple message types from different modules (`bank.MsgSend`, `staking.MsgDelegate`, `gov.MsgVote`) in a single `TxBody`. 
Because protobuf requires concrete types at the field level, each message is packed into an `Any` before being placed inside the transaction: ```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} MsgSend ↓ pack into Any Any { type_url: "/cosmos.bank.v1beta1.MsgSend" value: } ↓ placed in TxBody.messages repeated google.protobuf.Any messages ``` During decoding, the SDK reads the `type_url`, looks up the concrete type in the interface registry, and unmarshals the bytes into the correct message struct. This is why every `sdk.Msg` implementation must be registered with `RegisterInterfaces` before the application starts. The Cosmos SDK uses type URLs with a leading `/` but without the `type.googleapis.com` prefix (e.g. `/cosmos.bank.v1beta1.MsgSend`, not `type.googleapis.com/cosmos.bank.v1beta1.MsgSend`). If you need to pack a value into an `Any` manually, use `anyutil.New` from `github.com/cosmos/cosmos-proto/anyutil` rather than `anypb.New` from `google.golang.org/protobuf/types/known/anypb` — the standard library helper inserts the `type.googleapis.com` prefix, which breaks SDK type resolution. This lookup is handled by the **interface registry**. ### Interface registry The [`InterfaceRegistry`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/codec/types/interface_registry.go) is a runtime map from type URLs to Go types. When the SDK encounters an `Any` value, it queries the registry with the type URL to find the concrete Go type, then uses protobuf to unmarshal the bytes. ```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} Any { type_url, value_bytes } ↓ InterfaceRegistry.Resolve(type_url) ↓ concrete Go type ↓ proto.Unmarshal(value_bytes, concreteType) ``` Without the interface registry, the SDK cannot decode `Any` values. This is why types must be explicitly registered before they can be deserialized. 
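The registry's role can be sketched as a plain map from type URLs to constructors. This is a toy stand-in to show the lookup shape, not the SDK's actual `InterfaceRegistry` API:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package main

import (
	"errors"
	"fmt"
)

// MsgSend is a stand-in for a registered concrete message type.
type MsgSend struct{ From, To string }

// registry is a toy version of the interface registry: type URL → constructor.
type registry struct {
	types map[string]func() any
}

func (r *registry) Register(url string, ctor func() any) {
	r.types[url] = ctor
}

// Resolve returns a fresh instance of the concrete type for a type URL,
// mirroring the step that runs before the Any's bytes are unmarshaled.
func (r *registry) Resolve(url string) (any, error) {
	ctor, ok := r.types[url]
	if !ok {
		return nil, errors.New("unable to resolve type URL " + url)
	}
	return ctor(), nil
}

func main() {
	r := &registry{types: map[string]func() any{}}
	r.Register("/cosmos.bank.v1beta1.MsgSend", func() any { return &MsgSend{} })

	msg, err := r.Resolve("/cosmos.bank.v1beta1.MsgSend")
	fmt.Printf("%T %v\n", msg, err)

	_, err = r.Resolve("/cosmos.bank.v1beta1.MsgUnknown")
	fmt.Println(err)
}
```

An unregistered URL fails at `Resolve` time, which is the toy analogue of the codec error you see when a module's types were never passed to `RegisterInterfaces`.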
## Registering interface implementations Because the interface registry is a runtime lookup table, every concrete type that implements an SDK interface must be registered before the application starts. This is done with `RegisterInterfaces`: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // in codec registration, typically in module.go or types/codec.go func RegisterInterfaces(registry codectypes.InterfaceRegistry) { registry.RegisterImplementations( (*cryptotypes.PubKey)(nil), &secp256k1.PubKey{}, &ed25519.PubKey{}, ) } ``` This tells the registry: "a `PubKey` interface can be a `secp256k1.PubKey` or an `ed25519.PubKey`." If a type is used in an `Any` field anywhere in the application and is not registered, the codec will fail to unmarshal it and return an error. Each module calls `RegisterInterfaces` during app initialization, and `app.go` calls these registration functions through the module manager when building the app. Custom types that implement SDK interfaces must follow the same pattern. ### `codec.go` By convention, modules collect all codec registration in a single file: `x/mymodule/types/codec.go`. This file typically contains two functions: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // RegisterInterfaces registers protobuf interface implementations with the registry. // Called during app initialization so the SDK can decode Any values at runtime. func RegisterInterfaces(registry codectypes.InterfaceRegistry) { registry.RegisterImplementations((*sdk.Msg)(nil), &MsgAdd{}, &MsgUpdateParams{}, ) } // RegisterLegacyAminoCodec registers message types for Amino JSON encoding. // Required only if you want messages signable via SIGN_MODE_LEGACY_AMINO_JSON // (e.g., Ledger hardware wallets using older firmware). 
func RegisterLegacyAminoCodec(cdc *codec.LegacyAmino) { cdc.RegisterConcrete(&MsgAdd{}, "mymodule/Add", nil) } ``` `RegisterInterfaces` is required for every module that defines message types. Without it, the SDK cannot decode those messages from transactions. `RegisterLegacyAminoCodec` is optional and only needed for Ledger hardware wallet support via `SIGN_MODE_LEGACY_AMINO_JSON`. For an example of interface registration in a working module, see [Interface Registration](/sdk/latest/tutorials/example/03-build-a-module#interface-registration) in the Build a Module tutorial. ## Proto-to-code generation workflow Writing `.proto` files produces `.pb.go` files through a code generation step. The generated Go code contains struct definitions, marshal/unmarshal methods, and gRPC service stubs. You never edit these generated files directly. The workflow is: **1. Write the `.proto` file** Proto files for a module live in the `proto/` directory at the repository root: ``` proto/myapp/mymodule/v1/ ├── tx.proto # message types (MsgAdd, MsgAddResponse, ...) ├── query.proto # query service (QueryCount, ...) ├── state.proto # on-chain state types └── genesis.proto # genesis state ``` A message definition: ```proto theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} syntax = "proto3"; package myapp.mymodule.v1; message MsgAdd { string sender = 1; uint64 add = 2; } message MsgAddResponse { uint64 updated_count = 1; } service Msg { rpc Add(MsgAdd) returns (MsgAddResponse); } ``` **2. Run code generation** ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} # example — the exact target varies by project make proto-gen ``` This runs [`buf`](https://buf.build/cosmos/cosmos-sdk/docs/main) (or `protoc` with plugins) against the `.proto` files and produces Go code under the module's `types/` directory. 
The full generated API reference for the Cosmos SDK is published at [buf.build/cosmos/cosmos-sdk/docs/main](https://buf.build/cosmos/cosmos-sdk/docs/main). ``` x/mymodule/types/ ├── tx.pb.go # generated: MsgAdd, MsgAddResponse, Marshal/Unmarshal methods ├── query.pb.go # generated: query request/response types ├── query.pb.gw.go # generated: gRPC-gateway REST handlers └── state.pb.go # generated: on-chain state types ``` **3. Use the generated types** The generated structs implement `proto.Message` and can be passed directly to the codec for marshaling, registered with the interface registry, and used in keeper methods and message handlers: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // handler receives the generated type func (m msgServer) Add(ctx context.Context, req *types.MsgAdd) (*types.MsgAddResponse, error) { count, err := m.AddCount(ctx, req.Sender, req.Add) if err != nil { return nil, err } return &types.MsgAddResponse{UpdatedCount: count}, nil } ``` The generated gRPC service stub is registered with BaseApp's message router, connecting the handler to the transaction execution pipeline automatically. To learn how to build a module from scratch using this workflow, visit the [Module building tutorial](/sdk/latest/tutorials/example/00-overview). ## Legacy Amino encoding Before protobuf, the Cosmos SDK used a custom serialization format called **Amino** for transaction encoding, JSON signing documents, and interface serialization. Protobuf has replaced it in all of those roles. The `LegacyAmino` codec still exists for backward compatibility, but is not used in the consensus-critical path. 
Some legacy components still reference it: * `LegacyAmino` is still present in the codec package for backward-compatibility * `LegacyAminoPubKey` (multisig) is registered alongside protobuf public key types * Some older chains, hardware wallets, and client tooling depend on Amino JSON signing New modules and chains should use protobuf exclusively. ## Encoding in context Every layer of the Cosmos SDK depends on encoding: ``` Transaction (binary protobuf) ↓ broadcast over p2p CometBFT ↓ passes raw bytes to application BaseApp ↓ decodes transaction, extracts messages Module MsgServer ↓ processes message, calls keeper Keeper ↓ marshals state value to bytes KVStore (raw bytes) ↓ committed to disk AppHash (Merkle root over all KV bytes) ``` Determinism comes from the combination of canonical transaction encoding (ADR-027), deterministic application logic, and consistent protobuf serialization of stored state. Two validators executing the same transactions under these rules always produce the same bytes at every layer, and therefore always arrive at the same AppHash. The next section, [Execution Context, Gas, and Events](/sdk/latest/learn/concepts/context-gas-events), explains the runtime execution environment that modules operate within: `sdk.Context`, gas metering, and events. # Transaction Lifecycle Source: https://docs.cosmos.network/sdk/latest/learn/concepts/lifecycle In the [Transactions, Messages, and Queries](/sdk/latest/learn/concepts/transactions) page, you learned that transactions are the actual mechanism that authorizes and executes logic on the chain. This page explains how transactions are validated, executed, and committed in the Cosmos SDK. Before building with the Cosmos SDK, it's important to connect the high-level architecture from [SDK Application Architecture](/sdk/latest/learn/intro/sdk-app-architecture) with how blocks and transactions actually execute in code. 
The following components are essential for understanding the lifecycle of a transaction in the Cosmos SDK: * [CometBFT](/cometbft) (consensus engine) — orders and proposes blocks * [ABCI](/sdk/latest/learn/intro/sdk-app-architecture#abci-application-blockchain-interface) (Application-Blockchain Interface) — the protocol CometBFT uses to talk to the Cosmos SDK application * SDK application ([`BaseApp`](/sdk/latest/learn/concepts/baseapp) + [modules](/sdk/latest/learn/concepts/modules)) — the deterministic state machine that executes transactions * [Protobuf schemas](/sdk/latest/learn/concepts/encoding) — define transactions, messages, state, and query types This page maps the block and transaction lifecycle back to those components. ## ABCI overview CometBFT and the SDK application are two separate processes with distinct responsibilities. * [CometBFT](/cometbft/latest/docs/introduction/intro) handles consensus: ordering transactions, managing validators, and driving block production. * The [SDK application](/sdk/latest/learn/concepts/sdk-structure) handles state: executing transactions and updating the chain's data. The [ABCI](/sdk/latest/learn/intro/sdk-app-architecture#abci-application-blockchain-interface) (Application Blockchain Interface) is the protocol that connects them: CometBFT calls ABCI methods on the application to drive each phase of the block lifecycle, and the application responds. [`BaseApp`](/sdk/latest/learn/concepts/baseapp) is the SDK's implementation of the ABCI interface. It receives these calls from CometBFT and orchestrates execution across modules. Modules plug into `BaseApp` and execute their logic during the appropriate phases. 
```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
+---------------------+          +-------------------------+
|      CometBFT       |          |     SDK Application     |
|     (Consensus)     |   ABCI   |   (BaseApp + modules)   |
+---------------------+    |     +-------------------------+
                           |
                           | InitChain (once)
Chain start ---------------|------> InitGenesis per module
                           |
                           | CheckTx (per submitted tx)
Mempool validation --------|------> decode · verify · validate
                           |<------ accept → Mempool
                           |
                           | PrepareProposal (proposer only)
Build block proposal ------|------> select txs (MaxTxBytes, MaxGas)
                           |
                           | ProcessProposal (all validators)
Evaluate proposal ---------|------> verify txs → ACCEPT / REJECT
                           |
                           | FinalizeBlock (per block)
Execute block -------------|------> PreBlock hooks
                           |        BeginBlock hooks
                           |        For each tx:
                           |          AnteHandler
                           |          → message routing
                           |          → MsgServer (module logic)
                           |        EndBlock hooks
                           |        Return AppHash
                           |
                           | Commit
Persist state -------------|------> persist state to disk
                           |<------ return AppHash
```

## InitChain (genesis only)

`InitChain` runs once when the chain starts for the first time. `BaseApp` loads `genesis.json`, which defines the chain's initial state, and calls each module's `InitGenesis` to populate its store. The initial validator set is established. Genesis runs before the first block begins.

For how `genesis.json` becomes module state, see [Genesis and chain initialization](/sdk/latest/learn/concepts/store#genesis-and-chain-initialization).

## CheckTx and the mempool

Before a transaction can enter a block, it goes through `CheckTx`:

```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
User
 ↓
Node
 ↓
ABCI: CheckTx
 ↓
Mempool
```

Transactions are sent as raw protobuf-encoded bytes. For how those bytes are encoded deterministically, see [Encoding and Protobuf](/sdk/latest/learn/concepts/encoding).
During `CheckTx`, the SDK application's `BaseApp` decodes the transaction, verifies signatures and sequences, validates fees and gas, and performs basic message validation. For the account sequence model, see [Accounts](/sdk/latest/learn/concepts/accounts). For gas metering and fee-related execution details, see [Execution Context, Gas, and Events](/sdk/latest/learn/concepts/context-gas-events). If validation fails, the transaction is rejected. If it passes, it enters the mempool. The mempool is a node's in-memory pool of validated transactions waiting to be included in a block. Validated transactions wait in the mempool until CometBFT selects a block proposer for the next round. ## PrepareProposal Each round, [CometBFT](/cometbft/latest/docs/introduction/intro#intro-to-abci) selects one validator to propose a block. `PrepareProposal` is called on that validator only. `BaseApp` selects transactions from the mempool respecting the block's `MaxTxBytes` and `MaxGas` limits and returns the final transaction list. For where this handler is configured, see [Block proposal and vote extensions](/sdk/latest/learn/concepts/baseapp#block-proposal-and-vote-extensions). ## ProcessProposal Once the other validators receive the proposed block, CometBFT calls `ProcessProposal`. `BaseApp` verifies each transaction and returns `ACCEPT` or `REJECT`. No state is written. Once more than two-thirds of voting power accepts the block and consensus is reached, CometBFT calls `FinalizeBlock`. For the execution-model view of these handlers, see [Block proposal and vote extensions](/sdk/latest/learn/concepts/baseapp#block-proposal-and-vote-extensions). ## FinalizeBlock CometBFT calls `FinalizeBlock` once per block. 
Inside `FinalizeBlock`, `BaseApp` runs these phases in order: ```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} PreBlock → BeginBlock → transaction execution → EndBlock ``` ### `PreBlock` `PreBlock` runs before `BeginBlock` and is generally used for logic that must affect consensus-critical state before the block begins, such as activating a chain upgrade or modifying consensus parameters. Because these changes need to take effect before any block logic runs, they cannot happen inside `BeginBlock`. Modules may implement this via the `HasPreBlocker` extension interface on their `AppModule` (typically in `x//module.go`), and the application's `ModuleManager` invokes all registered PreBlockers during `FinalizeBlock`. If a `PreBlocker` modifies consensus parameters, it signals this by returning `ConsensusParamsChanged=true` in its `ResponsePreBlock`. `BaseApp` then refreshes the consensus params in the current context before proceeding to `BeginBlock`: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} app.finalizeBlockState.ctx = app.finalizeBlockState.ctx.WithConsensusParams(app.GetConsensusParams()) ``` ### `BeginBlock` `BeginBlock` runs after `PreBlock` and handles per-block housekeeping that must happen before any transactions execute, regardless of the transactions in the block. Common uses include minting inflation rewards, distributing staking rewards, and resetting per-block state. Modules implement this via the `BeginBlock` function in `x//module.go`. Because `BeginBlock` and `EndBlock` run on every block, complex or expensive logic in these hooks can slow block execution; keep their work lightweight. ### Transaction execution After `BeginBlock`, `BaseApp` iterates over each transaction in the block and runs it through a fixed pipeline. 
#### Step 1: `AnteHandler`

Configured in a Cosmos SDK chain's [`app.go`](/sdk/latest/learn/concepts/app-go), the `AnteHandler` runs first for every transaction. For standard ordered transactions, it verifies signatures, checks sequence numbers, deducts fees, and meters gas. See [BaseApp](/sdk/latest/learn/concepts/baseapp#antehandler) for the full middleware model.

If the `AnteHandler` fails, the transaction aborts and its messages do not execute.

#### Step 2: Message routing and execution

Each message in a transaction is routed via `BaseApp`'s `MsgServiceRouter` to the appropriate module's protobuf `Msg` service. Messages are module-specific and typically defined in a module's `tx.proto`. The router dispatches each message to the module's registered `Msg` service handler, which calls the module's `MsgServer` implementation. See [Message routing](/sdk/latest/learn/concepts/baseapp#message-routing) for the router's role in the execution pipeline.

The `MsgServer` contains the execution logic for that message type. It validates the message content, applies business rules, and updates state. State is read and written through the module's keeper, which manages access to the module's KV store and encapsulates its storage keys. [Intro to Modules](/sdk/latest/learn/concepts/modules) explains how `MsgServer` and `Keeper` divide responsibilities.

Messages execute sequentially in the order they appear in the transaction.

#### Step 3: Atomicity

Message execution is atomic: either all messages in a transaction succeed, or none of their state writes are committed.

```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
Tx
├─ Msg 1
├─ Msg 2
└─ Msg 3
```

If any message fails, the message execution branch for that transaction is discarded and the transaction returns an error. The next transaction in the block is then executed. `BaseApp` uses cached stores internally to implement this.
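The branch-and-discard behavior can be sketched with a toy cache-wrapped store. This is hypothetical simplified code to show the shape of the technique, not the SDK's actual `cachekv` implementation:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package main

import "fmt"

// Store is a toy stand-in for a module's committed KV state.
type Store map[string]string

// CacheStore branches a parent store: reads fall through to the parent,
// writes stay in the branch until Write() flushes them.
type CacheStore struct {
	parent Store
	dirty  map[string]string
}

func NewCache(parent Store) *CacheStore {
	return &CacheStore{parent: parent, dirty: map[string]string{}}
}

func (c *CacheStore) Set(k, v string) { c.dirty[k] = v }

func (c *CacheStore) Get(k string) string {
	if v, ok := c.dirty[k]; ok {
		return v
	}
	return c.parent[k]
}

// Write flushes branch writes into the parent. If a message fails,
// the branch is simply dropped and Write is never called.
func (c *CacheStore) Write() {
	for k, v := range c.dirty {
		c.parent[k] = v
	}
}

func main() {
	state := Store{"count": "1"}

	// failing tx: writes stay in the discarded branch, parent untouched
	failed := NewCache(state)
	failed.Set("count", "2")
	fmt.Println(state["count"])

	// successful tx: branch is flushed into the parent
	ok := NewCache(state)
	ok.Set("count", "2")
	ok.Write()
	fmt.Println(state["count"])
}
```

The same branching idea nests: a per-transaction branch over the block's state, discarded on failure and flushed on success.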
`AnteHandler` side effects may already have been applied before message execution begins. If the chain enables unordered transactions, the normal sequence check is bypassed and replay protection uses a timeout timestamp plus unordered nonce tracking. For the client-facing flow, see [Generating an Unordered Transaction](/sdk/latest/node/txs#generating-an-unordered-transaction). ### `EndBlock` `EndBlock` runs after all transactions in the block have executed. It is used for logic that depends on the block's cumulative state, like tallying governance votes after all vote transactions have been processed, or recalculating validator power after all delegation changes in the block. Modules implement this via the `EndBlock` function in `x//module.go`. ## Commit After `FinalizeBlock` returns, CometBFT calls `Commit`. This persists the state changes to the node's local disk. ## Deterministic execution Across all validators, the block execution is deterministic. Blocks must contain the same ordered transactions, and transactions must use canonical protobuf binary encoding. State transitions must be deterministic, which ensures that every validator computes the same app hash during `FinalizeBlock`, which guarantees consensus safety. If validators holding more than 1/3 of voting power disagree on the app hash, consensus halts. 
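Why identical state yields an identical hash can be sketched with a toy app-hash computation. This is illustrative only: the SDK derives the real AppHash from Merkle tree roots over the stores, not a flat hash, but the determinism requirement is the same:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// appHash hashes KV state in sorted key order, so every node holding
// the same state bytes computes the same digest regardless of how the
// map was built.
func appHash(state map[string]string) [32]byte {
	keys := make([]string, 0, len(state))
	for k := range state {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic iteration order

	h := sha256.New()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte(state[k]))
	}
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	a := map[string]string{"balances/alice": "100", "balances/bob": "50"}
	b := map[string]string{"balances/bob": "50", "balances/alice": "100"}
	fmt.Println(appHash(a) == appHash(b)) // insertion order does not matter
}
```

Any nondeterminism (map iteration order, wall-clock time, floating point) would make validators diverge at exactly this step, which is why state encoding must be canonical.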
## Complete lifecycle overview

```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
CometBFT
  ↓ ABCI InitChain
  BaseApp → x//InitGenesis

For each submitted transaction (async):
  ↓ ABCI CheckTx
  → decode, verify, validate
  → insert into mempool

For every block:
  ↓ ABCI PrepareProposal (proposer only)
  → select txs from mempool (MaxTxBytes, MaxGas)
  → return tx list to CometBFT

  ↓ ABCI ProcessProposal (all validators)
  → verify txs, check gas limit
  → ACCEPT or REJECT

  ↓ ABCI FinalizeBlock
  → PreBlock
  → x//BeginBlock
  For each tx (in the block):
    → AnteHandler
    → Message routing
    → Message execution (atomic)
  → x//EndBlock

  ↓ ABCI Commit
  BaseApp commits KVStores
```

The hooks that run at each phase (the `AnteHandler`, `BeginBlocker`, `EndBlocker`, and `InitChainer`) are registered in your chain's [`app.go`](/sdk/latest/learn/concepts/app-go) before any block executes. `app.go` is the configuration layer that wires modules into `BaseApp`.

CometBFT drives block processing through ABCI. `BaseApp` implements ABCI and orchestrates execution.

* Transactions are validated in `CheckTx` before entering the mempool
* `PrepareProposal` runs on the proposer to build the final tx set for the block
* `ProcessProposal` runs on all validators to accept or reject the proposed block
* Each block is executed inside a single `FinalizeBlock` call
* Within `FinalizeBlock`: `PreBlock` → `BeginBlock` → transactions → `EndBlock`
* Each transaction runs through `AnteHandler` → message routing → message execution
* Message execution within a transaction is atomic: all messages commit or none do
* `FinalizeBlock` computes and returns the app hash; `Commit` persists state to disk

[Protobuf](/sdk/latest/learn/concepts/encoding) ensures canonical encoding so all validators interpret transactions identically.

The next section, [Intro to Modules](/sdk/latest/learn/concepts/modules), turns from block execution to the module structure that actually implements chain logic.
# Intro to Modules Source: https://docs.cosmos.network/sdk/latest/learn/concepts/modules In the previous section, you saw how blocks and transactions are processed. But where does the actual application logic of a blockchain live? In the Cosmos SDK, **modules** define that logic. Modules are the fundamental building blocks of a Cosmos SDK application. Each module encapsulates a specific piece of functionality, such as accounts, token transfers, validator management, governance, or any custom logic you define. To see a complete working module, follow the [Build a module tutorial series](/sdk/latest/tutorials/example/00-overview). ## Why modules exist A blockchain application needs to manage many independent concerns (accounts, balances, validator management, etc). Instead of placing all logic in a single monolithic state machine, the Cosmos SDK divides the application into modules. The SDK provides a base layer that allows these modules to operate together as a cohesive blockchain. Each module owns a slice of state, defines its messages and queries, implements its business rules, and hooks into the block lifecycle and genesis as needed. This keeps the application organized, composable, and easier to reason about. It also separates safety concerns between modules, creating a more secure system. ## What a module defines A module is a self-contained unit of state and logic. At a high level, a module defines: * [State](#state): a `KVStore` namespace that contains the module's data * [Messages](#messages): the actions the module allows * [Queries](#queries): read-only access to the module's state * [`MsgServer`](#message-execution-msgserver): validates, applies business logic, and delegates to the keeper * [Keeper](#keeper): the state access layer: the only sanctioned path to the store * [Params](#params): governance-controlled configuration stored on-chain ### State State is the data persisted on the chain: everything that transactions can read from or write to. 
When a transaction is executed, modules apply state transitions or deterministic updates to this stored data. Each module owns its own part of the blockchain state. For example: * The `x/auth` module stores account metadata. * The `x/bank` module stores account balances. Modules do not share storage directly. Each module has its own key-value store namespace located in a `multistore`. For example, the bank module's store might contain entries like: ``` // x/bank store (conceptual) balances | cosmos1abc...xyz | uatom → 1000000 balances | cosmos1def...uvw | uatom → 500000 ``` The key encodes the namespace, address, and denomination. The value is the encoded amount. No other module can read or write these entries directly, only the [module's keeper](#keeper) can. To learn more about how state is stored and accessed, see [Store](/sdk/latest/learn/concepts/store). ### Messages As you learned in the [Transactions, Messages, and Queries](/sdk/latest/learn/concepts/transactions) section, each module defines the actions it allows via messages (`sdk.Msg`). Messages are defined in the module's `tx.proto` file under a `service Msg` block, and implemented by that module's `MsgServer`. Here is a simplified example from the bank module: ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Message type definition message MsgSend { string from_address = 1; string to_address = 2; repeated cosmos.base.v1beta1.Coin amount = 3; } // Msg service — groups all messages for this module service Msg { rpc Send(MsgSend) returns (MsgSendResponse); } ``` The `service Msg` block is referred to as the `Msg service` in the Cosmos SDK. The protobuf compiler generates a `MsgServer` interface from it, which the module implements in Go. [`BaseApp`](/sdk/latest/learn/concepts/baseapp) routes each incoming message by its type URL (for example, `/cosmos.bank.v1beta1.MsgSend`) to the correct implementation. 
`MsgSend` above is an example of a message type definition. It represents a request to transfer tokens from one account to another. When included in a transaction and executed: 1. The sender's balance is checked. 2. The amount is deducted from `from_address`. 3. The amount is credited to `to_address`. ### Queries Modules expose read-only access to their state through query services, defined in `query.proto`. Queries do not modify state and do not go through block execution. For example: ``` // query.proto rpc Balance(QueryBalanceRequest) returns (QueryBalanceResponse); ``` In this case, the caller provides an address and denomination; the query reads the balance from the x/bank keeper and returns it without modifying state. ### Business logic While messages define intent, the `MsgServer` and `Keeper` work together to execute that intent and apply state transitions to the module. Business logic is conceptually split across two layers: * The [`MsgServer`](#message-execution-msgserver) handles the transaction-facing logic: it validates inputs, applies message-level business rules, and where required checks authorization. Once satisfied, it delegates state transitions to the keeper. * The [`Keeper`](#keeper) owns the module's state: it defines the storage schema and provides the only authorized path for reading and writing it. Message handlers, block hooks, and governance proposals all go through keeper methods to make state changes. All state changes must go through the keeper, nothing accesses the store directly. ### Message execution (`MsgServer`) Each module implements a `MsgServer`, which is invoked by `BaseApp`'s message router when a message is routed to that module. 
The `MsgServer` is responsible for:

* Checking authorization when required
* Delegating to keeper methods that validate inputs, enforce business rules, and perform state transitions
* Returning a response

The following example is from the minimal counter module tutorial, which walks you through building a module that lets users increment a shared counter. See the [Build a Module from Scratch](/sdk/latest/tutorials/example/03-build-a-module) tutorial.

The following is the counter module's `Add` handler, the `MsgServer` method that processes a request to increment the counter:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (m msgServer) Add(ctx context.Context, request *types.MsgAddRequest) (*types.MsgAddResponse, error) {
	newCount, err := m.AddCount(ctx, request.GetAdd())
	if err != nil {
		return nil, err
	}

	return &types.MsgAddResponse{UpdatedCount: newCount}, nil
}
```

In the minimal tutorial, `Add` is permissionless and delegates directly to the keeper. The handler itself contains almost no business logic. That keeps `msg_server.go` focused on request handling, while the keeper owns the actual state transition.

The full counter module example adds richer patterns on top of that minimal shape. For privileged messages like `MsgUpdateParams`, the `MsgServer` checks the caller against the stored authority before proceeding. See the [Full Counter Module Walkthrough](/sdk/latest/tutorials/example/04-counter-walkthrough#params-and-authority).
```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (m msgServer) UpdateParams(ctx context.Context, msg *types.MsgUpdateParams) (*types.MsgUpdateParamsResponse, error) {
	if m.authority != msg.Authority {
		return nil, sdkerrors.Wrapf(
			govtypes.ErrInvalidSigner,
			"invalid authority; expected %s, got %s",
			m.authority, msg.Authority,
		)
	}

	if err := m.SetParams(ctx, msg.Params); err != nil {
		return nil, err
	}

	return &types.MsgUpdateParamsResponse{}, nil
}
```

## Keeper

A module's **`Keeper`** is its state access layer. It owns the module's `KVStore` and provides typed methods for reading and writing state. The store fields are unexported, so nothing outside the `keeper` package can access them directly. The `MsgServer` and `QueryServer` both embed the keeper. It is best practice for business logic to be implemented in keeper methods rather than the `MsgServer`, so the same rules apply whether the caller is a message handler, a block hook, or a governance proposal.

The following keeper example is from the minimal counter module example. It holds a single state item and shows the smallest useful keeper shape. See [Step 5: Keeper](/sdk/latest/tutorials/example/03-build-a-module#step-5-keeper) in the tutorial.

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type Keeper struct {
	Schema  collections.Schema
	counter collections.Item[uint64]
}

func (k *Keeper) AddCount(ctx context.Context, amount uint64) (uint64, error) {
	count, err := k.GetCount(ctx)
	if err != nil {
		return 0, err
	}

	newCount := count + amount
	return newCount, k.counter.Set(ctx, newCount)
}
```

`counter` uses `collections.Item[uint64]`, a typed single-value entry backed by the module's KV store using the [Collections API](/sdk/latest/learn/concepts/store#collections-api-typed-state-access). `AddCount` reads the current value, increments it, and writes it back.
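To see that shape end to end, here is a self-contained toy version of the same keeper with no SDK dependencies. The plain map stands in for the module's KVStore, and the `"counter"` key is an illustrative choice, not an SDK internal:

```go
package main

import "fmt"

// keeper is a toy version of the counter keeper: it owns a private store and
// exposes typed methods, so callers never touch the map directly. The map
// stands in for the module's KVStore.
type keeper struct {
	store map[string]uint64
}

func (k *keeper) GetCount() uint64 { return k.store["counter"] }

// AddCount reads the current value, adds amount, writes the result back, and
// returns the new count — the same read-modify-write shape as the SDK example.
func (k *keeper) AddCount(amount uint64) uint64 {
	newCount := k.GetCount() + amount
	k.store["counter"] = newCount
	return newCount
}

func main() {
	k := &keeper{store: map[string]uint64{}}
	k.AddCount(2)
	fmt.Println(k.AddCount(3)) // reads 2, writes and returns 5
}
```

Because the store field is unexported, code outside the package can only mutate the counter through `AddCount` — the same encapsulation the real keeper enforces.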
All state access goes through the keeper: nothing outside the `keeper` package can reach `counter` directly. For details on how the keeper opens its store from the context it receives, see [Execution Context](/sdk/latest/learn/concepts/context-gas-events).

### Inter-module access

Modules are isolated by default. Each module owns its state, and direct access to that state is gated by the module's keeper. Other modules cannot arbitrarily mutate another module's storage. Instead, modules interact through explicitly defined keeper interfaces. For example:

* The staking module calls methods on the bank keeper to transfer tokens.
* The governance module calls parameter update methods on other modules.

Each module defines an `expected_keepers.go` file that declares the interfaces it requires from other modules. This makes cross-module dependencies explicit: a module can only call methods the other module has chosen to expose. This design keeps dependencies auditable and prevents accidental or unsafe cross-module state mutation.

To see expected keepers and cross-module fee collection in practice, see [Expected keepers and fee collection](/sdk/latest/tutorials/example/04-counter-walkthrough#expected-keepers-and-fee-collection) in the Full Counter Module Walkthrough.

### Params

Most modules expose a `Params` struct: a set of configuration values stored on-chain that control the module's behavior. Unlike regular state, params are intentionally stable: they only change through governance, not through user transactions. Examples include the minimum governance deposit, the maximum number of validators, or mint module inflation bounds.

Params are stored under a single key (typically a `collections.Item[Params]`) and updated through governance: a proposal submits a [`MsgUpdateParams`](/sdk/latest/modules/consensus/README#msgupdateparams) message.
The `MsgServer` checks that the caller is the designated authority address (usually the governance module account) before writing the new values to the store. See [Message execution (`MsgServer`)](#message-execution-msgserver) for a code example of this pattern. The authority address is set at keeper construction time in [`app.go`](/sdk/latest/learn/concepts/app-go#initializing-keepers).

For chain-level consensus parameters, the [`x/consensus`](/sdk/latest/modules/consensus) module manages them centrally; it also supports [`AuthorityParams`](/sdk/latest/modules/consensus/README#authorityparams), which lets governance update the authority address on-chain without a software upgrade.

To see params implemented in a working module, see [Params and authority](/sdk/latest/tutorials/example/04-counter-walkthrough#params-and-authority) in the Full Counter Module Walkthrough.

### Block hooks

Modules may execute logic at specific points in the block lifecycle by implementing optional hook interfaces in the `AppModule` struct in `module.go`:

* `HasBeginBlocker` — runs logic at the start of each block
* `HasEndBlocker` — runs logic at the end of each block
* `HasPreBlocker` — runs logic before `BeginBlock`, used for consensus parameter changes

Hooks are optional, and modules should only implement the hooks they need. These hooks are invoked during block execution by the `ModuleManager` in [`BaseApp`](/sdk/latest/learn/concepts/baseapp#module-manager), which calls each registered module's hooks in a configured order. For the application wiring side, see [Module Manager in `app.go`](/sdk/latest/learn/concepts/app-go#module-manager).
A module implements a hook by defining the corresponding method on its `AppModule` struct in `module.go`:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (a AppModule) PreBlock(ctx context.Context) (appmodule.ResponsePreBlock, error) {
	// runs before BeginBlock; used for logic that must take effect before block execution
	return &sdk.ResponsePreBlock{}, nil
}

func (a AppModule) BeginBlock(ctx context.Context) error {
	// runs at the start of each block
	return nil
}

func (a AppModule) EndBlock(ctx context.Context) error {
	// runs at the end of each block
	return nil
}
```

Defining these methods opts the module into the corresponding block hooks. The module also needs to be registered with the `ModuleManager` in [`app.go`](/sdk/latest/learn/concepts/app-go#module-manager) for the hooks to be called.

### Genesis initialization

Modules define how their state is initialized when the chain starts. Each module implements:

* `DefaultGenesis`: returns the module's default genesis state
* `ValidateGenesis`: validates the genesis state before the chain starts
* `InitGenesis`: writes the genesis state into the module's store at chain start
* `ExportGenesis`: reads the module's current state and serializes it as genesis data

During `InitChain`, `BaseApp` calls each module's `InitGenesis` to populate its state from `genesis.json`. For where this happens in the block lifecycle, see [InitChain (genesis only)](/sdk/latest/learn/concepts/lifecycle#initchain-genesis-only).

For a walkthrough of genesis implementation, see [Step 2: Proto files](/sdk/latest/tutorials/example/03-build-a-module#step-2-proto-files) and [Step 8: module.go](/sdk/latest/tutorials/example/03-build-a-module#step-8-modulego) in the Build a Module tutorial.

## Built-in and custom modules

The Cosmos SDK ships with a set of core modules that most chains use.
Click any module to learn more:

| Module                                             | What it does                                      |
| -------------------------------------------------- | ------------------------------------------------- |
| [x/auth](/sdk/latest/modules/auth)                 | Accounts, authentication, and transaction signing |
| [x/bank](/sdk/latest/modules/bank)                 | Token balances and transfers                      |
| [x/staking](/sdk/latest/modules/staking)           | Validator set and delegations                     |
| [x/gov](/sdk/latest/modules/gov)                   | On-chain governance and proposals                 |
| [x/distribution](/sdk/latest/modules/distribution) | Staking reward distribution                       |
| [x/slashing](/sdk/latest/modules/slashing)         | Validator penalty enforcement                     |
| [x/mint](/sdk/latest/modules/mint)                 | Token issuance                                    |
| [x/evidence](/sdk/latest/modules/evidence)         | Submission and handling of validator misbehavior  |
| [x/upgrade](/sdk/latest/modules/upgrade)           | Coordinated chain upgrades                        |
| [x/authz](/sdk/latest/modules/authz)               | Delegated message authorization                   |
| [x/feegrant](/sdk/latest/modules/feegrant)         | Fee allowances between accounts                   |
| [x/consensus](/sdk/latest/modules/consensus)       | On-chain management of CometBFT consensus params  |

The above are just a few of the modules available. Applications can include any subset of these modules and can define entirely new custom modules. For the full list, see [List of Modules](/sdk/latest/modules/modules).
[Cosmos Enterprise](/enterprise/overview) provides additional hardened modules for production networks with more demanding requirements:

| Module                                                    | What it does                                                                                |
| --------------------------------------------------------- | ------------------------------------------------------------------------------------------- |
| [Proof of Authority](/enterprise/components/poa/overview) | Permissioned validator set managed by an on-chain authority, replacing token-based staking   |
| [Group](/enterprise/components/group/overview)            | On-chain multisig accounts and collective decision-making with configurable voting policies  |
| [Network Manager](/enterprise/components/network-manager) | Infrastructure management for high-availability node operations                              |

A Cosmos SDK blockchain is ultimately a collection of modules assembled into a single application. This customization of modules and application logic down to the lowest levels of a chain is what makes the Cosmos SDK so flexible and powerful.

## Anatomy of a module (high-level)

Modules live under the `x/` directory of an SDK application:

```
x/
├── auth/       # Accounts and authentication
├── bank/       # Token balances and transfers
├── poa/        # Proof-of-authority validator management
├── gov/        # Governance system
└── mymodule/   # Your custom module
```

Each subdirectory under `x/` is a self-contained module. An application composes multiple modules together to form a complete blockchain.
Inside a module, you will typically see a structure like this:

```
x/mymodule/
├── keeper/
│   ├── keeper.go           # Keeper struct and state access methods
│   ├── msg_server.go       # MsgServer implementation
│   └── query_server.go     # QueryServer implementation
├── types/
│   ├── expected_keepers.go # Interfaces for other modules' keepers
│   ├── keys.go             # Store key definitions
│   └── *.pb.go             # Generated from proto definitions
└── module.go               # AppModule implementation and hook registration
```

Proto files live separately at the repository root, not inside `x/`:

```
proto/myapp/mymodule/v1/
├── tx.proto        # Message definitions
├── query.proto     # Query service definitions
├── state.proto     # On-chain state types
└── genesis.proto   # Genesis state definition
```

* `keeper/`: Contains the `Keeper` struct (state access) and implementations of the `MsgServer` and `QueryServer` interfaces.
* `types/`: Defines the module's public types: generated protobuf structs, store keys, and the `expected_keepers.go` interfaces that declare what this module needs from other modules.
* `module.go`: Connects the module to the application and registers genesis handlers, block hooks, and message and query services.
* `.proto` files: Define messages, queries, state schemas, and genesis state. Go code is generated from these files and used throughout the module.

Check out the [Module Tutorial](/sdk/latest/tutorials/example/00-overview) to learn how to build a module from scratch.

## Modules in context

Putting together everything you've learned so far:

* **Accounts** authorize transactions.
* **Transactions** carry messages and execution constraints.
* **Blocks** order transactions and define commit boundaries.
* **Modules** define the business rules of execution.
* **MsgServer** validates messages and orchestrates state transitions.
* **Keeper** performs controlled reads and writes to module state.
* **State** persists the deterministic result of execution.
```
Account → Transaction → Message → Module → MsgServer → Keeper → State
```

In the next section, [State, Storage, and Genesis](/sdk/latest/learn/concepts/store), you will look more closely at how module state is stored, how genesis initializes it, and how the application commits deterministic state transitions.

Before writing a module, review [Module Design Considerations](/sdk/latest/guides/module-design/module-design-considerations) for guidance on state structure, message surface, inter-module dependencies, and upgrade planning. When you are ready to build, follow the [Build a Module tutorial](/sdk/latest/tutorials/example/00-overview) for a step-by-step walkthrough.

# Cosmos Blockchain Structure

Source: https://docs.cosmos.network/sdk/latest/learn/concepts/sdk-structure

Before writing a module or chain, it helps to understand how the Cosmos SDK organizes code and how the pieces connect. This page maps the directory structure of a Cosmos SDK application, explains what lives inside a module, and shows how modules are assembled into a running application.

## What is an SDK application

A Cosmos SDK application is a Go binary that implements a deterministic state machine. It runs alongside CometBFT inside the node daemon process. CometBFT drives consensus and the SDK application executes transactions and maintains state.

Every SDK application is composed of three main elements:

* [`BaseApp`](/sdk/latest/learn/concepts/baseapp): the execution engine that implements the ABCI and orchestrates transaction processing
* [Modules](/sdk/latest/learn/concepts/modules): self-contained units of business logic, state, messages, and queries
* [`app.go`](/sdk/latest/learn/concepts/app-go): the wiring layer that instantiates `BaseApp`, registers modules, and configures the application at startup

`BaseApp` and `app.go` are covered in depth in the next two sections. This page focuses on how they are organized in the codebase.
To see all of this in action, follow the [Build a Chain tutorial series](/sdk/latest/tutorials/example/00-overview).

## Repository structure

SDK application repositories generally use the following layout:

```
myapp/
├── app/
│   └── app.go        # Application wiring: configures BaseApp, registers modules
├── cmd/
│   └── main.go       # Node binary entrypoint
├── x/
│   ├── mymodule/     # Custom module
│   └── ...
└── proto/
    └── myapp/
        └── mymodule/
            └── v1/
                ├── tx.proto
                ├── query.proto
                ├── state.proto
                └── genesis.proto
```

Each directory has a distinct responsibility:

* `x/` contains the modules. Each subdirectory is a separate, self-contained module. Built-in Cosmos SDK modules (`x/auth`, `x/bank`, `x/staking`, etc.) follow the same layout and are imported as Go packages. Your custom modules live alongside them.
* `app/` contains `app.go`, which assembles the application: creating `BaseApp`, mounting stores, initializing keepers, and registering all modules with the `ModuleManager`.
* `cmd/` contains `main.go`, the entrypoint for the node binary. It parses command-line flags, reads configuration files, and starts the daemon process that runs both the CometBFT node and the SDK application.
* `proto/` contains the Protobuf definitions for all custom types: messages, queries, state schemas, and genesis. Go code is generated from these files and consumed throughout the module. Proto files live at the repository root, not inside `x/`, so they can be shared across languages and tooling.

## What lives inside a module

Each module under `x/` follows a consistent internal layout:

```
x/mymodule/
├── keeper/      # Keeper (state access), MsgServer, QueryServer
├── types/       # Generated proto types, store keys, expected_keepers.go
└── module.go    # AppModule: wires the module into the application
```

Proto definitions live separately in `proto/`, not inside `x/`.
See [Intro to Modules](/sdk/latest/learn/concepts/modules#anatomy-of-a-module-high-level) for a complete walkthrough of each file and the role it plays.

## How modules are assembled into an application

Modules are assembled through the `ModuleManager` in [`app.go`](/sdk/latest/learn/concepts/app-go). The `ModuleManager` holds the full set of registered modules and coordinates their lifecycle hooks (`InitGenesis`, `BeginBlock`, `EndBlock`, and service registration) across the application. Module ordering is configured explicitly in `app.go` and matters: for example, in `simapp` the distribution module runs before slashing in `BeginBlock` so validator rewards are handled before slashing updates are applied. See [Module Manager](/sdk/latest/learn/concepts/baseapp#module-manager) for details on how `BaseApp` integrates with it.

`BaseApp` implements the ABCI interface that CometBFT calls to drive block execution. When CometBFT calls `FinalizeBlock`, `BaseApp` runs the block through all its phases (`PreBlock`, `BeginBlock`, transactions, `EndBlock`) and returns the resulting app hash. `BaseApp` is covered in detail in [BaseApp Overview](/sdk/latest/learn/concepts/baseapp).

## The role of `app.go`

[`app.go`](/sdk/latest/learn/concepts/app-go) is the single file that defines a specific chain. It is where the application is assembled from its parts. Here is a breakdown of the steps it takes:

1. Create a `BaseApp` instance with the application name, logger, database, and codec.
2. Create a `StoreKey` for each module and mount it to the multistore.
3. Instantiate each `Keeper`, passing in the codec, store key, and references to other keepers the module depends on.
4. Create the `ModuleManager` with all module instances.
5. Configure execution ordering: which modules run first during genesis, `BeginBlock`, and `EndBlock`.
6. Register all gRPC services (message and query handlers) through the `ModuleManager`.
7. Set the `AnteHandler` and other middleware.
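The ordering responsibility described above can be sketched with a toy manager. This is a self-contained illustration of the idea, not the SDK's `module.Manager` API; the `module` and `manager` types here are hypothetical:

```go
package main

import "fmt"

// module is a minimal stand-in for AppModule: a name plus a BeginBlock hook.
type module struct {
	name       string
	beginBlock func(log *[]string)
}

// manager mirrors the ModuleManager's role: it holds all registered modules
// and calls their hooks in an explicitly configured order.
type manager struct {
	modules map[string]module
	order   []string // explicit BeginBlock ordering, set by the app
}

// runBeginBlock invokes each module's BeginBlock hook in the configured order.
func (m manager) runBeginBlock(log *[]string) {
	for _, name := range m.order {
		m.modules[name].beginBlock(log)
	}
}

func main() {
	mk := func(name string) module {
		return module{name: name, beginBlock: func(l *[]string) { *l = append(*l, name) }}
	}
	mgr := manager{
		modules: map[string]module{
			"distribution": mk("distribution"),
			"slashing":     mk("slashing"),
		},
		// distribution before slashing, mirroring the simapp ordering above
		order: []string{"distribution", "slashing"},
	}
	var log []string
	mgr.runBeginBlock(&log)
	fmt.Println(log)
}
```

Because Go map iteration order is random, the explicit `order` slice is what makes hook execution deterministic — the same reason `app.go` configures ordering explicitly rather than relying on registration order.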
Because `app.go` is plain Go code, it is fully customizable. A chain includes exactly the modules it needs, wires keepers together as required, and controls the execution order of all lifecycle hooks.

## Other files in a chain

A complete SDK chain repository contains more than just `x/`, `app/`, `cmd/`, and `proto/`. Below are some other files you will typically find in a Cosmos SDK chain repository:

### Additional files in `app/`

Real-world applications typically split the `app/` directory across multiple files to keep `app.go` focused on wiring:

```
app/
├── app.go        # Main wiring: BaseApp, keepers, module registration
├── export.go     # Exports current state as a genesis file (hard forks, snapshots)
├── upgrades.go   # Upgrade handlers for consensus-breaking software changes
└── genesis.go    # Helpers for genesis state initialization (optional)
```

* `export.go`: Implements `ExportAppStateAndValidators`, which serializes all module state into a `genesis.json`. This is used when migrating to a new chain version (hard fork) or creating a testnet from a live chain snapshot.
* `upgrades.go`: Registers named upgrade handlers consumed by the `x/upgrade` module. Each handler runs exactly once, at the block where the governance-approved upgrade height is reached, and performs any necessary state migrations.

### At the repository root

```
myapp/
├── go.mod      # Go module definition: SDK version and all dependencies
├── go.sum      # Cryptographic checksums for all dependencies
├── Makefile    # Build, test, and codegen tasks
└── scripts/    # Automation scripts (proto generation, linting)
```

* `go.mod` / `go.sum`: Standard Go module files. `go.mod` declares the Cosmos SDK version and all other imported packages. `go.sum` provides verifiable checksums for the full dependency tree.
* `Makefile`: The standard entry point for development tasks: `make build` compiles the binary, `make test` runs unit tests, `make proto-gen` regenerates Go code from `.proto` files.
Most SDK chains include targets for linting, simulation tests, and Docker builds.
* `scripts/`: Shell scripts and configuration for tooling that the Makefile invokes.

### The node binary (`cmd/`)

```
cmd/
└── myappdaemon/
    ├── main.go   # Binary entrypoint
    └── root.go   # Root Cobra command: subcommands (start, tx, query, keys, ...)
```

The `cmd/` directory produces the node daemon binary (e.g., `simd`, `gaiad`, `wasmd`). It uses [Cobra](https://github.com/spf13/cobra) to expose subcommands for starting the node, submitting transactions, querying state, managing keys, and running genesis initialization. The `start` command spins up CometBFT and the SDK application together in a single process.

### Node home directory

A typical repository contains the source code for a chain. When you actually run a node, the binary generates a separate home directory on disk that holds runtime configuration and chain data. Running `myappdaemon init` creates this directory:

```
~/.myapp/                  # Node home directory (configurable with --home)
├── config/
│   ├── app.toml           # SDK server configuration
│   ├── config.toml        # CometBFT configuration
│   ├── client.toml        # CLI client defaults
│   └── genesis.json       # Initial chain state
└── data/                  # Database files (block store, state store, snapshots)
```

The location defaults to `~/.myapp` but can be overridden with the `--home` flag or the `MYAPP_HOME` environment variable. Each configuration file controls a distinct layer of the node:

* `app.toml`: SDK-level server settings. Controls whether the gRPC server and REST API are enabled, their bind addresses, state sync configuration, pruning strategy, and mempool parameters.
* `config.toml`: CometBFT-level settings. Controls P2P networking (seeds, peers, listen address), consensus timeouts, the CometBFT RPC server address, and block size limits.
* `client.toml`: Default values for CLI client commands.
Stores the chain ID, keyring backend, and the node RPC address so you don't have to pass `--chain-id` and `--node` on every command.
* `genesis.json`: The initial state of the chain at block 0. It is distributed out-of-band when joining a network, or generated locally for a new chain. Once the chain starts, this file is no longer read.

## Summary

An SDK application is a deterministic state machine composed of modules assembled in `app.go`. The codebase follows a conventional layout: modules in `x/`, application wiring in `app/`, the binary entrypoint in `cmd/`, and Protobuf definitions in `proto/`. The `ModuleManager` assembles modules and coordinates their lifecycle hooks across the application. `BaseApp` provides the ABCI implementation that connects the state machine to CometBFT's consensus engine.

The next section, [BaseApp Overview](/sdk/latest/learn/concepts/baseapp), explains what `BaseApp` is and how it coordinates transaction execution in detail.

# State, Storage, and Genesis

Source: https://docs.cosmos.network/sdk/latest/learn/concepts/store

In the previous section, you learned that modules define business logic and that keepers are responsible for reading and writing module state. This page explains how that state is actually stored, committed, and made verifiable across the network.

## What is state?

State is the persistent data of the blockchain: account balances, delegations, governance proposals, [module parameters](/sdk/latest/learn/concepts/modules#params), and any other data that survives between blocks.

When a transaction executes, modules update state. When a block is committed, that updated state becomes the starting point for the next block:

```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
State0
  ↓ apply Block 1
State1
  ↓ apply Block 2
State2
```

## The KVStore model

At its lowest level, the Cosmos SDK stores state as **key-value pairs**. Both keys and values are byte arrays.
Modules encode structured data into those bytes using Protocol Buffers, and decode them back when reading. See [Encoding and Protobuf](/sdk/latest/learn/concepts/encoding) for details on how modules serialize data into bytes.

The following is a conceptual example of how the bank module stores balances:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
key: 0x2 | len(address) | address_bytes | denom_bytes
value: ProtocolBuffer(amount)

# example
key: 0x2 | 20 | cosmos1abc...xyz | uatom
value: ProtocolBuffer(1000000)
```

The key encodes the store prefix, address length, address, and denomination. The value is a Protocol Buffer-encoded amount. See [`x/bank/types/keys.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/x/bank/types/keys.go) for the actual implementation.

Each module owns its own namespace in the key-value store. Keys are defined by the module and typically begin with a byte prefix that distinguishes them from other module keys.

## Multistore

A single module store is only part of the picture. At the application level, all module stores are committed together. Every module has its own KVStore, and all module stores are mounted inside a [multistore](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/store/rootmulti/store.go) that is committed as a single state root.

A module can only read and write to its own store through its keeper. Access is gated by a `StoreKey`, which is a typed capability object registered at app startup. Modules that don't hold the key cannot open the store. This isolation follows an object-capabilities model:

1. Modules cannot directly mutate another module's state
2. Cross-module interaction must go through exposed keeper methods

When a block finishes executing, the multistore computes a new root hash (the **app hash**) that represents the entire application state.
That hash is returned to CometBFT, included in the block header, and is what makes the chain's state verifiable. [Transaction Lifecycle](/sdk/latest/learn/concepts/lifecycle) explains where that app hash is produced and committed.

The full storage stack from top to bottom is:

```
Module keeper
    ↓
KVStore (namespaced, wrapped with gas/trace)
    ↓
CommitMultiStore (multistore, computes app hash)
    ↓
IAVL tree (versioned Merkle tree)
    ↓
Database backend (goleveldb by default)
```

## How state is stored (IAVL and commit stores)

Each module's KVStore is backed by a [`CommitKVStore`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/store/iavl/store.go#L36). [See the store spec for more details.](/sdk/latest/guides/state/store)

In the current SDK store implementation described here, the Cosmos SDK uses [IAVL](https://github.com/cosmos/iavl), a versioned AVL Merkle tree. IAVL gives every read and write of the tree `O(log n)` complexity, meaning the time to read or write a key scales with the height of the tree, not the total number of keys. It also versions state on each block commit, and produces deterministic root hashes that can be used to generate Merkle proofs for light clients.

Each block commit produces a new tree version with a new root hash:

```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
      Block 1                  Block 2                  Block 3
     [root h1]                [root h2]                [root h3]
     /       \                /       \                /       \
[branch1] [branch2]    [branch1] [branch2']     [branch1] [branch2'']
 /   \     /   \        /   \     /   \          /   \     /   \
[a]  [b] [c]  [d]      [a]  [b] [c]  [d']       [a]  [b] [c'] [d']
                                      ↑                     ↑
                                  (updated)             (updated)

// branch1 and branch2 are internal nodes (they store hashes, not data).
// Leaf nodes (a, b, c, d) are actual key-value entries.
// When a leaf changes, only nodes on the path to the root are rewritten (marked ').
// Unmodified subtrees (branch1, a, b) are shared across all three versions.
```

Only modified nodes are rewritten, and unchanged nodes are shared across versions.
The root hash changes any time any leaf changes. All validators must compute the same root hash. If they disagree, consensus halts. Because of this, state transitions must be deterministic, encoding must be deterministic, and transaction ordering must be consistent.

### App hash

The **app hash** is the cryptographic root hash of the application's committed state. It summarizes all module stores together through the `CommitMultiStore`. Because every validator executes the same state transitions deterministically, they should all compute the same app hash for a given block.

### Database backend

The IAVL tree does not store data in memory. It writes versioned nodes to a database backend, which is a key-value store on disk. The Cosmos SDK uses [CometBFT's `db` package](https://github.com/cometbft/cometbft-db) to abstract over the database implementation. The default backend is [goleveldb](https://github.com/syndtr/goleveldb). Other supported backends include [PebbleDB](https://github.com/cockroachdb/pebble), [RocksDB](https://github.com/facebook/rocksdb), and memDB (in-memory, for testing).

The database backend is selected at node startup and configured in [`app.toml`](/sdk/latest/tutorials/example/05-run-and-test#apptoml). Application code never interacts with it directly; the store layer owns that boundary.

## Store types in the SDK

Beyond the base KVStore, the SDK provides several specialized store wrappers.

* [CommitKVStore](#commitkvstore-persistent-store)
* [CacheMultiStore](#cachemultistore-transaction-isolation)
* [Ephemeral store types](#ephemeral-store-types)
* [Gas and trace store wrappers](#gas-and-trace-store-wrappers)
* [Prefix store](#prefix-store)

### CommitKVStore (persistent store)

The [`CommitKVStore`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/store/iavl/store.go#L36) is the main persistent store backed by IAVL. It persists across blocks, produces versioned commits, and contributes to the app hash.
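The versioned-commit behavior can be sketched with a toy store that snapshots state and records a root hash per version. This is a drastic simplification that ignores the IAVL tree entirely; only the commit/version semantics and hash determinism are illustrated:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// commitStore is a toy stand-in for a CommitKVStore: each Commit() produces a
// new version with its own root hash, and older versions remain queryable.
type commitStore struct {
	live     map[string]string
	versions []map[string]string // snapshot per committed version
	hashes   [][32]byte          // root hash per committed version
}

func (s *commitStore) Set(k, v string) { s.live[k] = v }

// Commit snapshots the live state, hashes it in sorted key order (for
// determinism), and records both under a new version number.
func (s *commitStore) Commit() (version int, hash [32]byte) {
	snap := map[string]string{}
	keys := []string{}
	for k, v := range s.live {
		snap[k] = v
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic iteration order
	h := sha256.New()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte(snap[k]))
	}
	copy(hash[:], h.Sum(nil))
	s.versions = append(s.versions, snap)
	s.hashes = append(s.hashes, hash)
	return len(s.versions), hash
}

func main() {
	s := &commitStore{live: map[string]string{}}
	s.Set("a", "1")
	v1, _ := s.Commit()
	s.Set("a", "2")
	v2, _ := s.Commit()
	fmt.Println(v1, v2, s.versions[0]["a"], s.versions[1]["a"])
}
```

Sorting keys before hashing is the toy analogue of the determinism requirement above: every validator hashing the same state must arrive at the same root, regardless of map iteration order. Real IAVL achieves versioning far more cheaply by sharing unmodified subtrees instead of copying the whole state.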
### CacheMultiStore (transaction isolation)

Before executing each transaction, the Cosmos SDK's `BaseApp` creates a [`CacheMultiStore`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/store/cachemulti/store.go) — a cached, copy-on-write view of the multistore. All writes during that transaction occur in this cached layer:

```
Multistore
    ↓
CacheWrap (per transaction)
    ↓
Execute tx
    → Success → commit changes
    → Failure → discard
```

* If the transaction succeeds, changes are written to the underlying store.
* If the transaction fails, the cache is discarded and no state changes are committed.

This is how transaction atomicity is implemented in the store layer.

### Ephemeral store types

[Transient stores](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/store/transient/store.go) are cleared at the end of each block. They are used for temporary per-block data such as counters or intermediate calculations, and do not affect the app hash.

[Memory stores](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/store/mem/store.go) survive block commits but reset when the node restarts — their `Commit()` is a no-op and data is never written to disk. They are used for in-process caching of data that is expensive to recompute each block but does not need to survive a restart. Modules access them via `MemoryStoreKey`, mounted with `MountMemoryStores` in `app.go`.

| Store type           | Survives block commit   | Survives restart |
| -------------------- | ----------------------- | ---------------- |
| Transient            | No (cleared each block) | No               |
| Memory               | Yes                     | No               |
| IAVL (CommitKVStore) | Yes                     | Yes              |

### Gas and trace store wrappers

All store accesses are wrapped with additional behavior by the [`GasKVStore`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/store/gaskv/store.go) and [`TraceKVStore`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/store/tracekv/store.go) wrappers.
* `GasKVStore` charges gas for each read and write * `TraceKVStore` logs each store operation for debugging Because state access is the dominant cost of transaction execution, the SDK charges gas at the store layer so that expensive reads and writes are reflected in transaction fees. Every read and write of a KVStore costs gas, and expensive operations naturally cost more. [Execution Context, Gas, and Events](/sdk/latest/learn/concepts/context-gas-events) explains how gas metering works at runtime. ### Prefix store A [**prefix store**](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/store/prefix/store.go) wraps a KVStore and automatically prepends a fixed byte prefix to every key. This lets keepers scope their reads and writes to a sub-namespace without manually constructing prefixed keys on every call. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} prefixStore := prefix.NewStore(kvStore, types.KeyPrefix("balances")) prefixStore.Set(key, value) // stored as "balances" + key ``` This is how modules avoid key collisions within their own store. ## Collections API (typed state access) In the Cosmos SDK, modules commonly use the collections API to define typed state access. Instead of manually constructing byte keys, modules define typed collections such as: * `collections.Item[T]` * `collections.Map[K, V]` * `collections.Sequence` Example: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // declared in the keeper struct Counter collections.Item[uint64] // read and write in message handlers count, _ := k.Counter.Get(ctx) k.Counter.Set(ctx, count+1) ``` The Collections API defines the storage schema, handles encoding and decoding, ensures consistent key construction, and makes state access type-safe. Under the hood, collections still store data in a KVStore. Collections are used to provide a safer abstraction over raw byte keys. 
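The idea behind collections — typed accessors layered over a raw byte-keyed store — can be sketched in plain Go with generics. This is an illustrative toy, assuming an in-memory map in place of a real KVStore; the actual API lives in the SDK's `collections` package:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package main

import (
	"encoding/binary"
	"fmt"
)

// rawStore stands in for a module's KVStore: raw bytes in, raw bytes out.
type rawStore map[string][]byte

// Item wraps one fixed key with typed encode/decode, echoing the shape of
// collections.Item. Toy sketch, not the real collections implementation.
type Item[T any] struct {
	store  rawStore
	key    []byte
	encode func(T) []byte
	decode func([]byte) T
}

func (i Item[T]) Set(v T) { i.store[string(i.key)] = i.encode(v) }
func (i Item[T]) Get() T  { return i.decode(i.store[string(i.key)]) }

// newCounter fixes the key and codec once, so callers only ever see uint64.
func newCounter(s rawStore) Item[uint64] {
	return Item[uint64]{
		store:  s,
		key:    []byte{0x01}, // fixed key, as a collections schema would assign
		encode: func(v uint64) []byte { return binary.BigEndian.AppendUint64(nil, v) },
		decode: func(b []byte) uint64 {
			if len(b) == 0 {
				return 0 // empty store decodes to the zero value
			}
			return binary.BigEndian.Uint64(b)
		},
	}
}

func main() {
	store := rawStore{}
	counter := newCounter(store)
	counter.Set(counter.Get() + 1)
	counter.Set(counter.Get() + 1)
	fmt.Println(counter.Get()) // 2
}
```

Key construction and encoding happen in exactly one place, which is the same property the real collections API guarantees for module state.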
See [`collections/collections.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/collections/collections.go) for the base interface definitions. For the full package guide, see [Collections](/sdk/latest/guides/state/collections). ## How modules access state Modules do not interact with the multistore directly. Instead, each module defines a keeper that opens its KVStore through the execution `Context` it receives on each call. For details on how `Context` carries the store reference at runtime, see [Execution Context](/sdk/latest/learn/concepts/context-gas-events). A keeper typically holds: * the module's **store key** (an object-capability used to open the module's `KVStore` from `Context`), * a **Protobuf codec** used to encode and decode values stored as bytes, * **references (interfaces) to other keepers** the module depends on. State access typically flows through the keeper: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} MsgServer / QueryServer ↓ Keeper ↓ KVStore ``` The keeper exposes high-level methods that construct keys, encode values, and enforce business logic: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (k Keeper) GetBalance(ctx sdk.Context, addr sdk.AccAddress) sdk.Coins func (k Keeper) SetParams(ctx sdk.Context, params types.Params) ``` For the keeper's role within a module, see [Keeper](/sdk/latest/learn/concepts/modules#keeper). ## Genesis and chain initialization Before the first block executes, the chain must start with an initial state called **genesis**, defined in `genesis.json`. Genesis is the first write to the KVStores — it is how every module's state exists before any transaction runs. 
During `InitChain`, `BaseApp` calls each module's `InitGenesis` to populate its store: ```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} genesis.json ↓ BaseApp.InitChain ↓ Module.InitGenesis ↓ KVStores populated ``` For information on how modules define their genesis methods (`DefaultGenesis`, `ValidateGenesis`, `InitGenesis`, `ExportGenesis`) and initialization ordering, see [Intro to Modules](/sdk/latest/learn/concepts/modules) and [Transaction Lifecycle](/sdk/latest/learn/concepts/lifecycle). For a walkthrough of genesis implementation in a module, see [Step 2: Proto files](/sdk/latest/tutorials/example/03-build-a-module#step-2-proto-files) and [Step 8: module.go](/sdk/latest/tutorials/example/03-build-a-module#step-8-modulego) in the Build a Module tutorial. ## Next steps For more information on stores, pruning strategies, and store configuration, see the [store spec](/sdk/latest/guides/state/store). For the full store interface definitions, see [`store/types/store.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/store/types/store.go) in the SDK source. Because KV stores only hold raw bytes, modules must serialize structured data before writing it. The next section, [Encoding and Protobuf](/sdk/latest/learn/concepts/encoding), explains how the Cosmos SDK uses Protocol Buffers to encode that data deterministically, and why every validator must produce exactly the same bytes. # Testing in the SDK Source: https://docs.cosmos.network/sdk/latest/learn/concepts/testing The Cosmos SDK provides a layered testing approach that mirrors the architecture of the framework itself. Tests are organized into three levels, each testing a progressively larger slice of the application. This page uses the counter module example in the `example` repo, not the minimal counter module example, because the fuller module includes the testing surfaces needed for these examples. 
The examples on this page come from the [Full Counter Module Walkthrough](/sdk/latest/tutorials/example/04-counter-walkthrough#unit-tests) and [Running and Testing](/sdk/latest/tutorials/example/05-run-and-test) tutorials. ## Three testing levels ### Keeper unit tests Keeper unit tests verify keeper logic in isolation, without starting a full application. They construct a minimal in-memory context with a real KV store, initialize the keeper under test, and call its methods directly. No server, no network, no block processing. The counter module keeper tests live in `x/counter/keeper/keeper_test.go`. The test suite sets up a keeper with a live store and mock dependencies: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type KeeperTestSuite struct { suite.Suite ctx sdk.Context keeper *keeper.Keeper queryClient types.QueryClient msgServer types.MsgServer bankKeeper *MockBankKeeper authority string } func (s *KeeperTestSuite) SetupTest() { key := storetypes.NewKVStoreKey("counter") storeService := runtime.NewKVStoreService(key) testCtx := testutil.DefaultContextWithDB(s.T(), key, storetypes.NewTransientStoreKey("transient_test")) ctx := testCtx.Ctx.WithBlockHeader(cmtproto.Header{Time: cmttime.Now()}) encCfg := moduletestutil.MakeTestEncodingConfig() s.authority = "cosmos10d07y265gmmuvt4z0w9aw880jnsr700j6zn9kn" s.bankKeeper = &MockBankKeeper{} k := keeper.NewKeeper(storeService, encCfg.Codec, s.bankKeeper, keeper.WithAuthority(s.authority)) s.ctx = ctx s.keeper = k queryHelper := baseapp.NewQueryServerTestHelper(ctx, encCfg.InterfaceRegistry) types.RegisterQueryServer(queryHelper, keeper.NewQueryServer(k)) s.queryClient = types.NewQueryClient(queryHelper) s.msgServer = keeper.NewMsgServerImpl(k) } ``` `testutil.DefaultContextWithDB` creates a real KV store backed by an in-memory database. `moduletestutil.MakeTestEncodingConfig` returns a codec configured for the test. 
The `MockBankKeeper` replaces the real bank keeper with a struct whose behavior can be controlled per test case: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type MockBankKeeper struct { SendCoinsFromAccountToModuleFn func(ctx context.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins) error } ``` A typical keeper test case covers the happy path and the error conditions with table-driven tests: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (s *KeeperTestSuite) TestAddCount() { testCases := []struct { name string setup func() sender string amount uint64 expErr bool expErrMsg string expPostCount uint64 }{ { name: "add to zero counter", setup: func() { err := s.keeper.InitGenesis(s.ctx, &types.GenesisState{ Count: 0, Params: types.Params{MaxAddValue: 100}, }) s.Require().NoError(err) }, sender: "cosmos1test", amount: 10, expErr: false, expPostCount: 10, }, { name: "add exceeds max_add_value - should error", setup: func() { err := s.keeper.InitGenesis(s.ctx, &types.GenesisState{ Count: 0, Params: types.Params{MaxAddValue: 50}, }) s.Require().NoError(err) }, sender: "cosmos1test", amount: 100, expErr: true, expErrMsg: "exceeds max allowed", }, } for _, tc := range testCases { s.Run(tc.name, func() { s.SetupTest() tc.setup() newCount, err := s.keeper.AddCount(s.ctx, tc.sender, tc.amount) if tc.expErr { s.Require().Error(err) if tc.expErrMsg != "" { s.Require().Contains(err.Error(), tc.expErrMsg) } } else { s.Require().NoError(err) s.Require().Equal(tc.expPostCount, newCount) count, err := s.keeper.GetCount(s.ctx) s.Require().NoError(err) s.Require().Equal(tc.expPostCount, count) } }) } } ``` `msg_server_test.go` uses the same suite to test the `MsgServer` layer, including event emission: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (s *KeeperTestSuite) TestMsgAddEmitsEvent() { 
s.SetupTest() err := s.keeper.InitGenesis(s.ctx, &types.GenesisState{ Count: 0, Params: types.Params{MaxAddValue: 100}, }) s.Require().NoError(err) _, err = s.msgServer.Add(s.ctx, &types.MsgAddRequest{Sender: "cosmos1test", Add: 42}) s.Require().NoError(err) events := s.ctx.EventManager().Events() s.Require().NotEmpty(events) found := false for _, event := range events { if event.Type == "count_increased" { found = true } } s.Require().True(found, "count_increased event not found") } ``` Keeper unit tests are fast, deterministic, and surgical. They are the right level for testing business logic, error conditions, edge cases, and event emission. ### Integration tests Integration tests verify behavior across the full application stack. They start a real in-memory network with one or more validators, wait for blocks to be produced, broadcast actual signed transactions via gRPC, and query the resulting state. These tests exercise the AnteHandler, message routing, block execution, and state commitment together. The counter module integration tests live in `tests/counter_test.go`. 
The test suite uses `testutil/network` from the Cosmos SDK to spin up a full in-memory chain: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type E2ETestSuite struct { suite.Suite cfg network.Config network *network.Network conn *grpc.ClientConn } func (s *E2ETestSuite) SetupSuite() { s.T().Log("setting up e2e test suite") var err error s.cfg = network.DefaultConfig(NewTestNetworkFixture) s.cfg.NumValidators = 1 // Customize counter genesis to set initial count and permissive params genesisState := s.cfg.GenesisState counterGenesis := countertypes.GenesisState{ Count: 0, Params: countertypes.Params{ MaxAddValue: 1000, AddCost: nil, }, } counterGenesisBz, err := s.cfg.Codec.MarshalJSON(&counterGenesis) s.Require().NoError(err) genesisState[countertypes.ModuleName] = counterGenesisBz s.cfg.GenesisState = genesisState s.network, err = network.New(s.T(), s.T().TempDir(), s.cfg) s.Require().NoError(err) _, err = s.network.WaitForHeight(2) s.Require().NoError(err) val0 := s.network.Validators[0] s.conn, err = grpc.NewClient( val0.AppConfig.GRPC.Address, grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithDefaultCallOptions(grpc.ForceCodec(codec.NewProtoCodec(s.cfg.InterfaceRegistry).GRPCCodec())), ) s.Require().NoError(err) } ``` `NewTestNetworkFixture` (in `tests/test_helpers.go`) constructs the `ExampleApp` with `dbm.NewMemDB()` and returns a `network.TestFixture` that configures the in-memory validator. This lets the SDK's network test helper start a real application with real consensus. 
A test that exercises the full transaction path: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (s *E2ETestSuite) TestAddCounter() { val := s.network.Validators[0] initialCount := s.getCurrentCount() txBuilder := s.mkCounterAddTx(val, 42) txBytes, err := val.ClientCtx.TxConfig.TxEncoder()(txBuilder.GetTx()) s.Require().NoError(err) txClient := txtypes.NewServiceClient(s.conn) grpcRes, err := txClient.BroadcastTx( context.Background(), &txtypes.BroadcastTxRequest{ Mode: txtypes.BroadcastMode_BROADCAST_MODE_SYNC, TxBytes: txBytes, }, ) s.Require().NoError(err) s.Require().Equal(uint32(0), grpcRes.TxResponse.Code, "tx failed: %s", grpcRes.TxResponse.RawLog) s.Require().NoError(s.network.WaitForNextBlock()) finalCount := s.getCurrentCount() s.Require().Equal(initialCount+42, finalCount) } ``` Integration tests are slower than keeper unit tests because they start a real consensus engine and wait for blocks. They exist to catch failures at the boundaries: AnteHandler rejections, routing errors, genesis state mismatches, and cross-module interactions that only manifest when the full stack is running. ### Simulation tests Simulation tests are property-based tests. Instead of testing specific inputs, they generate large volumes of random operations and verify that the application's invariants hold throughout. They catch bugs that deterministic test cases miss: unexpected ordering effects, state corruption under high load, and invariant violations that only appear after many sequential operations. The Cosmos SDK simulation framework drives this through [`simsx`](#simsx-and-simd). 
The counter module defines a message factory that generates random `MsgAddRequest` messages: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // x/counter/simulation/msg_factory.go func MsgAddFactory() simsx.SimMsgFactoryFn[*types.MsgAddRequest] { return func(ctx context.Context, testData *simsx.ChainDataSource, reporter simsx.SimulationReporter) ([]simsx.SimAccount, *types.MsgAddRequest) { sender := testData.AnyAccount(reporter) if reporter.IsSkipped() { return nil, nil } r := testData.Rand() addAmount := uint64(r.Intn(100) + 1) msg := &types.MsgAddRequest{ Sender: sender.AddressBech32, Add: addAmount, } return []simsx.SimAccount{sender}, msg } } ``` The simulation runner selects a random account and a random add amount within valid bounds, then executes the message against the live application. This runs thousands of times across a simulated block sequence. The top-level simulation test in `sim_test.go` wires everything together: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} //go:build sims func TestFullAppSimulation(t *testing.T) { simsx.Run(t, NewExampleApp, setupStateFactory) } func setupStateFactory(app *ExampleApp) simsx.SimStateFactory { return simsx.SimStateFactory{ Codec: app.AppCodec(), AppStateFn: simtestutil.AppStateFn(app.AppCodec(), app.SimulationManager(), app.DefaultGenesis()), BlockedAddr: BlockedAddresses(), AccountSource: app.AccountKeeper, BalanceSource: app.BankKeeper, } } ``` The `//go:build sims` build tag means simulation tests are excluded from regular `go test` runs and only execute when explicitly requested with `-tags sims`. This keeps CI fast. 
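The core mechanic of simulation testing — random operations with an invariant checked after every step — can be sketched independently of the `simsx` framework. The counter rules below are simplified stand-ins for the real module logic:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package main

import (
	"fmt"
	"math/rand"
)

// toyCounter mimics the counter module's core rule: an add larger than
// maxAdd is rejected. Simplified stand-in, not the real keeper.
type toyCounter struct {
	count  uint64
	maxAdd uint64
}

func (c *toyCounter) Add(amount uint64) error {
	if amount > c.maxAdd {
		return fmt.Errorf("amount %d exceeds max allowed %d", amount, c.maxAdd)
	}
	c.count += amount
	return nil
}

// simulate applies n random operations and checks an invariant after each:
// the stored count must equal the sum of all accepted adds.
func simulate(seed int64, n int) error {
	r := rand.New(rand.NewSource(seed)) // seeded for reproducible failures
	c := &toyCounter{maxAdd: 50}
	var accepted uint64
	for i := 0; i < n; i++ {
		amount := uint64(r.Intn(100) + 1) // random input, may exceed maxAdd
		if err := c.Add(amount); err == nil {
			accepted += amount
		}
		if c.count != accepted { // invariant check after every operation
			return fmt.Errorf("invariant violated at op %d: count=%d want %d",
				i, c.count, accepted)
		}
	}
	return nil
}

func main() {
	fmt.Println(simulate(1, 10000))
}
```

The seeded random source is the important design choice: a failing run can be replayed exactly by reusing its seed, which is also how the SDK's simulation framework makes randomized failures reproducible.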
The simulation manager is initialized in `app.go`: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} overrideModules := map[string]module.AppModuleSimulation{ authtypes.ModuleName: auth.NewAppModule(appCodec, app.AccountKeeper, authsims.RandomGenesisAccounts, nil), } app.sm = module.NewSimulationManagerFromAppModules(app.ModuleManager.Modules, overrideModules) app.sm.RegisterStoreDecoders() ``` `NewSimulationManagerFromAppModules` collects simulation support from all modules that implement `AppModuleSimulation`. `RegisterStoreDecoders` registers human-readable decoders for each module's store entries, used when the simulation framework logs state for debugging. ## Test utilities ### testutil The [`testutil`](https://github.com/cosmos/cosmos-sdk/tree/main/testutil) package provides helpers for constructing in-memory contexts for unit tests: * `testutil.DefaultContextWithDB` creates a real `sdk.Context` backed by an in-memory KV store. Keeper unit tests use this to get a realistic execution context without starting a full node. * `moduletestutil.MakeTestEncodingConfig` returns a codec with standard interface registration, suitable for keeper tests. * `baseapp.NewQueryServerTestHelper` creates a `QueryServiceTestHelper` that implements both the gRPC Server and ClientConn interfaces, allowing keeper tests to register query services and invoke them directly without a network connection. ### testify suite The SDK's test files use the [`testify/suite`](https://pkg.go.dev/github.com/stretchr/testify/suite) package. A `suite.Suite` groups test setup, teardown, and test methods into a single struct. `SetupTest` runs before each test method; `SetupSuite` runs once before all tests in the suite. 
```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func TestKeeperTestSuite(t *testing.T) { suite.Run(t, new(KeeperTestSuite)) } ``` `suite.Run` discovers methods on the struct whose names start with `Test` and runs them as individual test cases. `s.Require()` returns assertion helpers that stop the test immediately on failure, while `s.Assert()` continues after a failure. ## simsx and simd For a full guide on configuring and running simulations, see the [Module Simulation](/sdk/latest/guides/testing/simulator) page. [`simsx`](https://github.com/cosmos/cosmos-sdk/tree/main/testutil/simsx) is the simulation execution framework. It provides: * `SimMsgFactoryFn`: a function type that implements the `SimMsgFactoryX` interface for message factories. Each factory selects random accounts and parameters, constructs a message, and returns it for execution. * `ChainDataSource`: provides access to random accounts, balances, and other chain data during message construction. * `SimulationReporter`: allows a factory to signal that it should be skipped (for example, if no suitable account exists). * `simsx.Run`: the top-level entry point that drives a full simulation run against the application. [`simd`](/sdk/latest/node/prerequisites) is the reference simulation binary provided by the Cosmos SDK. It is a fully configured simapp (`simapp`) compiled as a standalone binary, used to run simulations against the SDK's own module set without setting up a custom chain. For a custom chain like the example app, you use your own binary with the `sims` build tag. To learn how to run an example chain, visit the [simd node tutorial](/sdk/latest/node/run-node). To run simulations against the example app: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} go test -tags sims -run TestFullAppSimulation ./... 
``` To add simulation support to your own module — implementing `AppModuleSimulation`, writing message factories, and wiring the `SimulationManager` — see [Module Simulation](/sdk/latest/guides/testing/simulator). ## Telemetry The counter module uses OpenTelemetry to emit metrics from keeper operations: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} var ( meter = otel.Meter("github.com/cosmos/example/x/counter") countMetric metric.Int64Counter ) ``` The SDK provides a telemetry package built around OpenTelemetry, with legacy support for go-metrics. Modules can emit counters, gauges, and histograms from keeper methods to expose runtime behavior for monitoring. See [Telemetry](/sdk/latest/guides/testing/telemetry) for full details. # Transactions, Messages, and Queries Source: https://docs.cosmos.network/sdk/latest/learn/concepts/transactions In the previous section, you learned that [accounts](/sdk/latest/learn/concepts/accounts) authorize activity on a chain using digital signatures and sequence numbers. Accounts provide identity and permission, but transactions are the actual mechanism that authorizes and executes logic on the chain. ## Interacting with a chain A Cosmos SDK blockchain is a deterministic state machine. Its state changes only when transactions are executed and committed in blocks. Users and applications interact with the blockchain in two fundamental ways: * **Transactions** modify state and are included in blocks. When a user wants to **change** something (transfer tokens, delegate stake, submit a governance proposal), they submit a transaction. * **Queries** read state and are not included in blocks. When a user wants to **inspect** something (check a balance, view delegations, read proposal details), they perform a query. Only transactions affect consensus state. ## Transactions A **transaction** is a signed container that carries one or more actions to be executed on the blockchain. 
A transaction includes: * Messages: one or more actions you want to execute (send tokens, delegate stake, vote on a proposal) * Signatures: cryptographic proof that you authorize these actions * Sequence number: prevents someone from resubmitting your transaction (replay protection) * Gas limit: the maximum computational resources you're willing to spend * Fees: what you pay for the transaction to be processed The transaction itself does not define business logic. Instead, it packages intent (messages) to change state, proves authorization (signatures), and specifies execution limits (gas and fees). You can think of a transaction as an envelope you send to the blockchain, with a message inside containing instructions, a signature to prove authenticity, and a stamp to pay for postage. ```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} Transaction ├── Message 1 ├── Message 2 ├── ... ├── Signature(s) ├── Sequence ├── Gas limit └── Fees ``` In the Cosmos SDK, account metadata and transaction authorization are handled by the `x/auth` module. Transaction construction and encoding are configured through the SDK's transaction system (commonly via `x/auth/tx`). ## Messages A **message** (`sdk.Msg`) is the actual instruction inside a transaction. Each message is defined by a specific module and represents a single action. Messages are located in that module's `types` package (like `x/bank/types` or `x/staking/types`). Modules define which messages they support and the rules for executing them. While the transaction provides the envelope with signatures and fees, the message defines the specific action to execute. Examples include `MsgSend` (transfer tokens), `MsgDelegate` (delegate stake), and `MsgVote` (vote on proposals). If a transaction contains multiple messages, they execute in order. See [Message execution and atomicity](#message-execution-and-atomicity) below for details. 
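The ordered, all-or-nothing execution of a transaction's messages can be sketched in plain Go. This is a toy model — a balance map standing in for the multistore and a closure standing in for `sdk.Msg` — not the SDK's actual execution path:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package main

import (
	"errors"
	"fmt"
)

// state is a toy account-balance store standing in for the multistore.
type state map[string]uint64

// msg is a toy message: a single state transition that can fail.
type msg func(state) error

// execTx mirrors the cache-wrap pattern: run every message in order against
// a copy, and write the copy back only if all of them succeed.
func execTx(committed state, msgs []msg) error {
	cache := state{}
	for k, v := range committed {
		cache[k] = v
	}
	for _, m := range msgs {
		if err := m(cache); err != nil {
			return err // discard cache: no partial writes reach committed state
		}
	}
	for k, v := range cache {
		committed[k] = v
	}
	return nil
}

func send(from, to string, amt uint64) msg {
	return func(s state) error {
		if s[from] < amt {
			return errors.New("insufficient balance")
		}
		s[from] -= amt
		s[to] += amt
		return nil
	}
}

func main() {
	s := state{"alice": 100}
	// The second message fails, so the first message's write is discarded too.
	err := execTx(s, []msg{send("alice", "bob", 30), send("bob", "carol", 99)})
	fmt.Println(err, s["alice"]) // insufficient balance 100
}
```

Even though the first transfer succeeded against the cached copy, the committed state is untouched: message execution within a transaction is atomic.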
### How messages are defined Messages in the Cosmos SDK are defined in each module's [`tx.proto` file](/sdk/latest/tutorials/example/03-build-a-module#txproto) using [Protocol Buffers (protobuf)](/sdk/latest/learn/concepts/encoding), which provides deterministic serialization, backward compatibility, and cross-language support. Each message is defined in a `.proto` file that specifies its fields, data types, and unique identifiers. From this schema, code is generated that allows the message to be constructed, serialized, and validated. Here's an example of a transaction in JSON format: ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "body": { "messages": [ { "@type": "/cosmos.bank.v1beta1.MsgSend", "from_address": "cosmos1...", "to_address": "cosmos1...", "amount": [{"denom": "uatom", "amount": "1000000"}] } ], "memo": "", "timeout_height": "0", "extension_options": [], "non_critical_extension_options": [] }, "auth_info": { "signer_infos": [ { "public_key": { "@type": "/cosmos.crypto.secp256k1.PubKey", "key": "A..." }, "mode_info": {"single": {"mode": "SIGN_MODE_DIRECT"}}, "sequence": "0" } ], "fee": { "amount": [{"denom": "uatom", "amount": "500"}], "gas_limit": "200000", "payer": "", "granter": "" } }, "signatures": ["MEUCIQDx..."] } ``` This transaction transfers 1 ATOM (1,000,000 uatom) from one account to another. You can see the message in the `body.messages` array, the sender's public key and sequence in `auth_info.signer_infos`, the fee and gas limit in `auth_info.fee`, and the cryptographic signature in the `signatures` array. When broadcast, this JSON is serialized into bytes using protobuf, ensuring every validator interprets the transaction identically. ### Message execution and atomicity When a transaction contains multiple messages, they are executed **in the order they appear** in the transaction. For example, a transaction might: 1. Send tokens to another account. 2. 
Delegate those tokens to a validator. If the order were reversed, the delegation could fail due to insufficient balance.

At execution time, messages inside a transaction are applied sequentially. The transaction succeeds only if all messages execute successfully. Conceptually:

```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
Transaction
├── Msg 1 → execute
├── Msg 2 → execute
├── Msg 3 → execute
```

If any message fails, the transaction returns an error and none of the message execution writes from that transaction are committed. Message execution inside a transaction is atomic: all messages commit or none do. The [transaction lifecycle](/sdk/latest/learn/concepts/lifecycle) page covers this execution pipeline in more detail.

Since v0.53, transactions support an optional **unordered** mode. When `unordered=true`, the normal per-signer sequence check is bypassed and replay protection is handled through `timeout_timestamp` plus unordered nonce tracking in `x/auth`. This enables fire-and-forget and concurrent transaction submission without coordinating sequence numbers. Unordered transactions must have a `timeout_timestamp` set and a sequence of `0`. For how clients build and submit them, see [Generating an Unordered Transaction](/sdk/latest/node/txs#generating-an-unordered-transaction).

## Blocks and transactions

A blockchain can be understood as a sequence of blocks. Each block contains an ordered list of transactions. When a new block is committed:

1. Each transaction in the block is applied to the current state.
2. Each transaction executes its messages in order.
3. Modules update their portion of state.
4. The resulting state becomes the starting point for the next block.

Conceptually:

```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
State₀
  ↓ apply Block 1 (Tx₁, Tx₂, Tx₃)
State₁
  ↓ apply Block 2 (Tx₄, Tx₅)
State₂
  ↓ apply Block 3 (...)
State₃ ``` In this way, the blockchain is a deterministic sequence of state transitions driven entirely by transactions. Blocks group transactions, transactions drive execution, and execution updates state. ## Queries A **query** retrieves data from the blockchain's state without modifying it. Queries are read-only. They don't require signatures, aren't included in blocks, and don't affect consensus state. Modules define query services using protobuf in a [`query.proto` file](/sdk/latest/tutorials/example/03-build-a-module#queryproto), exposed over gRPC and REST. For example: * Query an account's balance (the `x/bank` module) * Query staking delegations (the `x/staking` module) * Query governance proposal details (the `x/gov` module) ## Transaction and query flow
```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
Transaction Flow                  Query Flow

User                              User
  ↓ signs                           ↓
Transaction                       Query
  ↓ contains                        ↓
Message(s)                        Module
  ↓ handled by                      ↓
Module(s)                         State (read-only)
  ↓ update
State
```
Transactions modify the blockchain. Messages define what modifications occur. Modules execute those modifications in order. Queries allow anyone to observe the resulting state. To see this flow in action with a working chain, see the [Quickstart](/sdk/latest/tutorials/example/02-quickstart) tutorial. The next section, [Transaction Lifecycle](/sdk/latest/learn/concepts/lifecycle), follows a transaction from broadcast through validation, block inclusion, execution, and state commitment to show how these components work together in practice. # Blockchain Basics Source: https://docs.cosmos.network/sdk/latest/learn/intro/blockchain-basics Learn the fundamentals of blockchains, state machines, and how Cosmos SDK applications work. ## What Is a Blockchain? A blockchain is a decentralized ledger that multiple independent computers (called nodes) maintain together. Instead of relying on a single authority to track transactions and maintain state, blockchain networks distribute this responsibility across many nodes. Each node keeps its own copy of the ledger and works with other nodes to agree on what transactions are valid and in what order they should be applied. You can think of a blockchain or decentralized ledger as a shared spreadsheet that dozens of people maintain independently. Everyone has their own copy, and they all follow the same rules for updating it. When someone wants to make a change, the group agrees on whether that change is valid and what order it should happen in. If everyone follows the rules correctly, all copies end up identical. If someone tries to modify their copy without following the consensus rules, the other nodes will reject their version because it doesn't match what the network agreed upon. This makes blockchains resistant to tampering: you'd need to control a majority of the network to force through an invalid change. ## Why Blockchains? Traditional digital systems usually rely on a central authority to maintain accurate records. 
A bank, for example, maintains the definitive record of account balances. Users trust the bank to process transactions correctly and prevent problems like spending the same money twice (also known as the double-spend problem). Blockchains solve a more difficult challenge: maintaining accurate, trustworthy records without relying on a singular, central authority. In a decentralized network, no single entity has the final say. Instead, independent nodes must agree on the state of the ledger even though they don't trust each other. This requires solving several problems simultaneously: * [Agreement through consensus](#consensus): How do nodes agree on which transactions are included and in what order they’re applied? * [Security through tamper-evident cryptography](#how-blocks-are-linked): How can the network prevent malicious nodes from creating fraudulent transactions or rewriting history? * [Consistency through deterministic execution](#why-“deterministic”): How do all nodes maintain identical copies of the ledger despite network delays and potential failures? Blockchains address these challenges through cryptographic linking, deterministic execution, and decentralized consensus mechanisms. The result is a system where no single party controls the ledger, yet all participants can verify its accuracy and trust its contents. ## State Machines: The Foundation of Blockchains At their core, blockchains are **replicated, deterministic state machines**. ### What Is a State Machine? In computer science, **State** represents all the current data in a system at a specific point in time. For example, in a bank application, the state includes all account balances. In the context of a blockchain or decentralized ledger, the state includes all account balances, smart contract data, and other information the chain tracks. A **state machine** is a system that moves from one state to another by applying transactions. 
Each transaction describes an action that should change the state. Here's a simple example of a state machine using a bank account: ```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} Current State: User A's balance: $100 User B's balance: $50 Transaction: User A sends $30 to User B New State: User A's balance: $70 User B's balance: $80 ``` The state machine takes the current state (User A has \$100, User B has \$50), applies a transaction (transfer \$30), and produces a new state (User A has \$70, User B has \$80). ### Why "Deterministic"? **Deterministic** means that the same transaction applied to the same state will always produce the same result. This property is critical for blockchains and decentralized ledgers. Using the bank example: if User A starts with \$100 and sends User B \$30, their balance will always become \$70. It doesn't matter who processes this transaction, when they process it, or how many times they recalculate it from the initial state: the result will always be the same. In a blockchain, determinism ensures that all nodes independently arrive at the same final state. If the logic weren't deterministic, different nodes would end up with different versions of the ledger, and the network would break down. In practice, blockchain applications must avoid sources of non-determinism such as local time, floating-point math, or external network calls. ### Why "Replicated"? **Replicated** refers to the fact that many independent nodes each run their own copy of the same state machine. Instead of one central server maintaining the state, multiple independent nodes each maintain their own complete copy. When a new block is added to the blockchain, every node: 1. Receives the block with its ordered list of transactions 2. Independently executes each transaction through their local state machine 3. Arrives at the same new state (thanks to determinism). 
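The flow above, a deterministic transfer applied identically by several replicas, can be sketched in a few lines. This is an illustrative Python sketch, not Cosmos SDK code (real SDK state machines are Go modules), and all names are hypothetical:

```python
# Illustrative sketch of a replicated, deterministic state machine.

def apply_transfer(state, tx):
    """Pure state transition: same state + same tx always -> same new state."""
    sender, receiver, amount = tx
    if state.get(sender, 0) < amount:
        return state                      # invalid tx leaves state unchanged
    new_state = dict(state)
    new_state[sender] -= amount
    new_state[receiver] = new_state.get(receiver, 0) + amount
    return new_state

def apply_block(state, block):
    """Apply an ordered list of transactions to the state."""
    for tx in block:
        state = apply_transfer(state, tx)
    return state

genesis = {"A": 100, "B": 50}
block = [("A", "B", 30)]

# Three independent "nodes" execute the same block on their own copy...
replicas = [apply_block(dict(genesis), block) for _ in range(3)]

# ...and determinism guarantees they all arrive at the same new state.
assert replicas[0] == replicas[1] == replicas[2] == {"A": 70, "B": 80}
```

Because the transition function is pure and avoids non-deterministic inputs, every replica that starts from the same genesis and applies the same blocks converges on the same state.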
This replication is what makes blockchains decentralized and resilient. If any single node fails, goes offline, or acts maliciously, the network continues operating as long as a majority of the network's consensus power still have complete, accurate copies of the state. The network doesn't depend on any one node being available or trustworthy. ## How Blockchains Work With an understanding of state machines, the next step is to see how blockchains use them to maintain a shared ledger across many independent nodes. ### Nodes A **node** is a computer that participates in the blockchain network. Each node stores a complete copy of the blockchain's state, receives and validates new transactions, participates in consensus to agree on new blocks, and executes transactions to update its local state. Some nodes, called validators, participate directly in consensus by proposing and voting on blocks, while other nodes simply replicate and verify the chain. In public, permissionless blockchains, anyone can typically run a node, which makes the network decentralized: no single entity controls the ledger. ### Transactions A **transaction (tx)** is a request to change the blockchain's state. In Cosmos SDK blockchains, transactions contain one or more **messages** that represent the specific actions to be executed. These messages can represent many different actions: * Transferring tokens from one account to another * Creating or updating a smart contract * Staking tokens to become a validator * Voting on a governance proposal When a user creates a transaction, it gets broadcast to nodes in the network. Nodes verify that the transaction is valid (proper signature, sufficient balance, etc.) before accepting it into their mempool. ### Blocks Transactions are grouped together into **blocks** for efficiency. A block is a batch of transactions that the network processes together. Each block is cryptographically linked to the previous block, forming a **chain of blocks**. 
This chain structure creates a permanent, tamper-evident history: if someone tries to alter a past transaction, it would break the cryptographic link to all subsequent blocks, making the tampering obvious to the network. ### From Transactions to Blocks Rather than processing transactions one at a time, blockchains group them into **blocks** for efficiency. Here's how it works: 1. **Transaction pool (Mempool)**: Nodes collect valid transactions into a waiting area called the mempool 2. **Block proposal**: A designated node (called a validator or block proposer) selects transactions from the mempool and proposes them as the next block 3. **Consensus**: Nodes run a consensus algorithm to agree on which proposed block to accept and in what order 4. **Block commitment**: Once consensus is reached, the block becomes final and is added to the blockchain 5. **State transition**: Each node applies the transactions in the new block to their local state machine, updating their copy of the state ```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} Mempool (pending txs) ↓ Block B [Tx1, Tx2, Tx3, ...] ↓ Consensus ↓ Apply to State Machine ↓ New State ``` This process repeats for every block, creating a chain of blocks, or a "blockchain". ### Consensus **Consensus** is the mechanism by which nodes agree on a single, authoritative version of the blockchain despite operating independently. In step 3 above, nodes must reach consensus on which block to add next and in what order. Transaction ordering is critical. Consider two transactions: "User A sends 100 tokens to User B" and "User A sends 100 tokens to User C." If User A only has 100 tokens, the order matters—only the first transaction can succeed. Different nodes might receive these transactions in different orders, so consensus is used to establish a single, canonical ordering that all nodes follow. 
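The ordering problem above can be made concrete with a small sketch (illustrative Python, not SDK code; the transaction format is hypothetical):

```python
# Why transaction ordering matters: with only 100 tokens, "A->B 100" and
# "A->C 100" cannot both succeed; whichever the canonical order puts first wins.

def apply_block(state, block):
    """Apply transactions in order; underfunded transactions simply fail."""
    state = dict(state)
    for sender, receiver, amount in block:
        if state.get(sender, 0) >= amount:
            state[sender] -= amount
            state[receiver] = state.get(receiver, 0) + amount
    return state

genesis = {"A": 100, "B": 0, "C": 0}

# Order 1: B's transfer is applied first, so C's transaction fails.
assert apply_block(genesis, [("A", "B", 100), ("A", "C", 100)]) == \
    {"A": 0, "B": 100, "C": 0}

# Order 2: C's transfer is applied first, so B's transaction fails.
assert apply_block(genesis, [("A", "C", 100), ("A", "B", 100)]) == \
    {"A": 0, "B": 0, "C": 100}
```

Both orders are internally valid but produce different states; consensus removes the ambiguity by fixing one canonical order that every node applies.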
This prevents the double-spend problem and ensures that deterministic execution produces identical results on every node.

Consensus algorithms ensure that:

* All honest nodes agree on the same sequence of blocks
* The network can continue operating even if some nodes are offline or malicious
* Transactions are ordered consistently across all nodes

Most Cosmos SDK blockchains use the CometBFT consensus engine, which implements a Byzantine Fault Tolerant (BFT) consensus algorithm. This means the network can reach agreement as long as more than two-thirds of the voting power comes from honest validators. The specifics of how consensus works are covered in the [Blockchain Architecture](/sdk/latest/learn/intro/sdk-app-architecture) section.

It's important to note that consensus only determines the ordering and inclusion of transactions into blocks. Whether a transaction is valid is ultimately determined by the application's state machine when the block is executed.

### How Blocks Are Linked

Each block contains a **block header** with metadata about the block. Critically, every block header includes a cryptographic hash of the previous block's header.

A **hash** is like a digital fingerprint: it takes data of any size and produces a fixed-length string of characters. For example, hashing the text "Hello World" might produce something like "a591a6d4...". The key property is that even a tiny change to the input (like changing "Hello World" to "Hello World!") produces a completely different hash. Hash functions are one-way, which means you can't reverse a hash back to the original data. Hash functions are also collision-resistant: it is computationally infeasible to find two different inputs that produce the same hash.

Cosmos blockchains use [SHA-256](https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.180-4.pdf) as the hash function for block headers and other cryptographic operations to securely link blocks together.
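These properties are easy to observe with SHA-256 from Python's standard `hashlib` (an illustrative snippet, not SDK code):

```python
import hashlib

def sha256_hex(data: str) -> str:
    """Hex digest of SHA-256 over UTF-8 text."""
    return hashlib.sha256(data.encode()).hexdigest()

# Fixed-length output (64 hex characters) regardless of input size.
assert len(sha256_hex("Hello World")) == len(sha256_hex("x" * 100_000)) == 64

# The "Hello World" digest really does begin with a591a6d4...
assert sha256_hex("Hello World").startswith("a591a6d4")

# Avalanche effect: a one-character change yields an unrelated digest.
assert sha256_hex("Hello World") != sha256_hex("Hello World!")
```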
This provides **cryptographic security**: finding a different input that produces the same hash output is computationally infeasible, making it virtually impossible to tamper with block data without detection. Block headers also include Merkle roots that commit to the block’s transactions and state, allowing nodes and light clients to verify data efficiently. This hashing mechanism creates a tamper-evident chain. You can see this in action in the demo in the next section. ### Blockchain Demo: Immutability The demo below shows a blockchain with three blocks. You can see how each block is linked to the previous block by the hash in the block header. Try changing the data in a block to see how it changes the hash of that block and invalidates all subsequent blocks. You can add new blocks to the chain by clicking the "Add Block" button. This is a simplified demonstration. Actual Cosmos SDK blocks include additional security features like validator signatures, timestamps, consensus information, and Merkle roots for transaction verification. The cryptographic linking shown here is just one part of blockchain security. If someone tries to alter a transaction in Block 1, it would change the contents of Block 1, which would change Block 1's hash. But Block 2 stores Block 1's original hash in its header. The mismatch would be immediately obvious, and Block 2 would be pointing to a hash that no longer matches Block 1. This broken link would invalidate Block 2 and all subsequent blocks, making the tampering evident to the entire network. This is why blockchains are resistant to any changes: you’d need to control a supermajority of the network’s consensus power to rewrite history. This cryptographic linking is what makes blockchain history **immutable**, or unchangeable. The further back in history a block is, the more subsequent blocks depend on it remaining unchanged, making older blocks increasingly difficult to tamper with. 
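The tamper-evidence described above can be reproduced in a few lines. This is an illustrative sketch only: real block headers carry far more metadata (signatures, timestamps, Merkle roots), and the field names here are hypothetical:

```python
import hashlib

GENESIS_HASH = "0" * 64   # placeholder predecessor for the first block

def block_hash(prev_hash: str, data: str) -> str:
    """Hash a block's contents together with its predecessor's hash."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(payloads):
    """Link each block to the hash of the block before it."""
    chain, prev = [], GENESIS_HASH
    for data in payloads:
        chain.append({"prev": prev, "data": data})
        prev = block_hash(prev, data)
    return chain

def valid(chain):
    """Recompute every link; any mismatch reveals tampering."""
    prev = GENESIS_HASH
    for blk in chain:
        if blk["prev"] != prev:
            return False
        prev = block_hash(prev, blk["data"])
    return True

chain = build_chain(["tx: A->B 30", "tx: B->C 10", "tx: C->A 5"])
assert valid(chain)

chain[0]["data"] = "tx: A->B 3000"   # tamper with history in block 1
assert not valid(chain)              # every subsequent link now fails
```

Altering one block changes its hash, which no longer matches the `prev` reference stored in the next block, so the tampering is detected by anyone re-validating the chain.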
In BFT-based systems like CometBFT, blocks have instant finality: once a block is committed, it cannot be reverted without violating consensus assumptions.

## What's Next?

Now that you understand blockchain fundamentals (state machines, deterministic execution, replication, and cryptographic linking), the next step is to learn how the Cosmos SDK actually implements these concepts. In [Blockchain Architecture](/sdk/latest/learn/intro/sdk-app-architecture), you'll explore:

* How CometBFT handles consensus and networking to maintain the replicated state machine
* The Application Blockchain Interface (ABCI) that connects consensus to application logic
* How the Cosmos SDK implements the state machine layer
* The complete architecture of a Cosmos blockchain application

# The Cosmos Stack

Source: https://docs.cosmos.network/sdk/latest/learn/intro/cosmos-stack

Understanding the modular architecture of the Cosmos blockchain stack

Performant, customizable, and EVM-compatible, the Cosmos stack offers builders full control of their blockchain infrastructure and implementation. Its stable and secure open-source codebase enables blockchains to achieve high throughput (10,000+ TPS when tuned) and fast finality for instant transaction settlement. Development on the Cosmos stack began in 2016, and today, hundreds of public and private blockchains use the Cosmos stack in production.

The stack is modular: leverage pre-built components or integrate custom features for your specific use case, from consensus mechanisms to governance and compliance. The components of the stack work together to create a complete blockchain network solution that is secure, performant, scalable, and endlessly customizable.
Cosmos Stack Architecture

At its core, the Cosmos stack is composed of several interoperable layers: the [Cosmos SDK](/sdk/latest/learn/intro/overview) for application logic, [CometBFT](/cometbft/latest/docs/README) for consensus and networking, [Cosmos EVM](/evm/v0.5.0/documentation/overview) for Ethereum compatibility, and [the Inter-Blockchain Communication Protocol (IBC)](/ibc) for trust-minimized cross-chain communication.

For teams running enterprise systems on blockchains that require a high degree of security, control, and compliance, [Cosmos Enterprise](/enterprise/overview) offers resilient features that meet enterprise operational standards, robust additional infrastructure capabilities, and proactive, hands-on engineering support.

Together, these components and services form a flexible, battle-tested stack for developing performant, reliable, interoperable, and secure blockchains.

## Cosmos SDK: Business Logic Layer

The Cosmos SDK is the business logic layer of the Cosmos stack. It provides a customizable base layer for building blockchains and digital ledgers, made up of interoperable modules that work together to define how a blockchain behaves, from accounts and transactions to tokenization, compliance, and custom application logic.

Builders can compose [pre-built modules](/sdk/latest/modules/modules) and [develop bespoke ones](/sdk/latest/guides/module-design/module-design-considerations) to embed their unique business logic directly into the foundation of the chain, rather than deploying it as isolated smart contracts. This approach enables a level of customization, interoperability, and performance that traditional smart contract platforms and Layer-2 blockchains cannot offer.
Developers gain access to block lifecycle hooks ([BeginBlocker and EndBlocker](/sdk/latest/learn/intro/sdk-app-architecture#block-lifecycle-hooks)), fine-grained state separation, and scoped permissions through a security-first [Object Capability Model](/sdk/latest/guides/module-design/ocap). Native execution alongside [CometBFT](/cometbft/latest/docs/README) consensus unlocks significantly higher throughput and deterministic behavior, while upgrade tools like [Cosmovisor](/sdk/latest/guides/upgrades/cosmovisor) make chains easy to maintain and evolve over time. [Explore the Cosmos SDK →](/sdk/latest/learn) ## Cosmos Enterprise: Enterprise-Grade Infrastructure & Support Cosmos enterprise solutions give you a comprehensive suite of blockchain services and technologies to future-proof your organization’s digital ledger capabilities. It combines hardened protocol modules, on-premises and managed infrastructure components, and proactive support from the engineers building the Cosmos technology stack. Cosmos Enterprise is built for organizations that require reliability, security, and operational confidence as they scale critical blockchain infrastructure in enterprise production environments. [Learn about Cosmos Enterprise →](/enterprise/overview) ## Cosmos EVM: Ethereum Compatibility Layer Cosmos EVM enables plug-and-play Ethereum Virtual Machine compatibility for Cosmos SDK–based chains. It allows developers to deploy Solidity smart contracts, use familiar Ethereum tooling, and interact with native Cosmos modules (including IBC) through precompiles and extensions. It provides software engineers with functionality beyond standard EVM for new use cases and workflows by allowing them to run existing Ethereum contracts without modification while also extending the EVM with new capabilities at the chain level. 
[Learn about Cosmos EVM →](/evm) ## IBC Protocol: Interoperability Layer The Inter-Blockchain Communication (IBC) protocol is the interoperability layer of the Cosmos stack, enabling blockchains to securely transfer tokens, messages, and arbitrary data. Blockchains communicate over IBC with self-hosted infrastructure through point-to-point connections. It connects them into an interoperable network through trust-minimized communication with configurable permissioning while maintaining secure, independent execution. [Explore IBC Documentation →](/ibc) ## CometBFT: Highly-Performant Consensus Layer CometBFT is the consensus layer of the Cosmos stack and one of the most widely adopted, battle-tested consensus engines for building blockchains and decentralized ledger networks. It is a Byzantine Fault Tolerant (BFT) middleware that takes a deterministic state transition machine (which can be written in any programming language) and securely replicates it across a distributed set of nodes. By separating consensus from application logic, CometBFT allows developers to build custom blockchains without implementing their own networking or consensus protocols. Responsible for proposing blocks, ordering transactions, and finalizing state transitions, CometBFT ensures that all nodes reach agreement on the canonical state of the chain. Highly performant and deterministic, it provides fast finality and can achieve throughput of up to 10,000 transactions per second (TPS), making it well-suited for high-performance, application-specific blockchains. [Learn about CometBFT →](/cometbft/latest/docs/README) # What is the Cosmos SDK Source: https://docs.cosmos.network/sdk/latest/learn/intro/overview The [Cosmos SDK](https://github.com/cosmos/cosmos-sdk) is a secure, open-source framework for building application-specific blockchains and digital ledgers. 
It gives engineers full control over access control, security, business logic, and governance, while providing a robust set of pre-built modules covering common blockchain functionality. Organizations can use the Cosmos SDK to easily build and maintain any type of blockchain network, including private permissioned networks, public networks, and consortia. Cosmos SDK blockchains are natively interoperable through [IBC](/ibc). As the business logic layer of the Cosmos stack, the Cosmos SDK provides a fully customizable foundation for building blockchains from customizable modules that define how a chain behaves, from accounts and transactions to tokenization, compliance, and custom application logic. Development on the Cosmos SDK has been continuous since 2016. Today, companies in banking and finance, SaaS, AI, and other industries use the Cosmos SDK’s open-source codebase with proven consensus via [CometBFT](/cometbft) and native interoperability through IBC for business use cases like interbank networks, asset tokenization, and business automation. It offers fast performance of 10,000+ transactions per second, strong security, and resiliency in production. ## Purpose of the Cosmos SDK At its core, the Cosmos SDK is designed to give developers full flexibility across the entire blockchain stack. Business logic can run natively at the protocol level, be exposed through modular components, or be extended with optional [virtual machine layers](/evm), depending on the needs of the application. The Cosmos SDK is interoperable by design. Chains built with the SDK can communicate with other blockchains via [IBC](/ibc) while maintaining independent execution and security. ## Modularity of the Cosmos SDK ## Interoperable Modules Blockchains built with the Cosmos SDK are composed of interoperable modules, each responsible for a specific function, such as [accounts](/sdk/latest/learn/concepts/accounts), transactions, governance, tokenization, or compliance logic. 
These modules are built on top of the [SDK's base application framework](/sdk/latest/learn/intro/sdk-app-architecture), which provides the shared execution environment that allows modules to operate together as a cohesive blockchain.

The Cosmos SDK offers engineers a robust set of [predefined modules](/sdk/latest/modules/modules) that cover standard blockchain features, such as consensus, accounts, transfers, governance, permissioning, fee distribution, and more. In addition, engineers can [build their own modules](/sdk/latest/guides/module-design/module-design-considerations) tailored to their application's requirements. By composing and extending modules, developers can build blockchains and ledgers that are optimized for performance, security, and long-term maintainability.

The SDK's base layer handles core concerns such as [message routing, module lifecycle orchestration, and interaction with the underlying consensus engine](/sdk/latest/learn/intro/sdk-app-architecture). It defines clear boundaries between modules by isolating their state into [independent stores](/sdk/latest/learn/intro/sdk-app-architecture), while providing secure, well-defined interfaces for [cross-module communication](/sdk/latest/learn/concepts/modules#keeper). This structure allows modules to operate and evolve independently while preserving the integrity of the overall system.

## Cosmos SDK and the modularity of the Cosmos stack

While SDK modules define how application logic is composed within a chain, the Cosmos SDK also supports modularity at the component level. Consensus, networking, and data availability are provided by external components like CometBFT. Execution logic is defined by SDK modules.

We recommend building Cosmos SDK-based blockchains using [CometBFT](/cometbft/latest/docs/README) for consensus because it offers best-in-class performance and out-of-the-box interoperability between components.
Alternatively, engineers can pair the Cosmos SDK with other consensus engines, or with modular execution and settlement architectures, depending on their performance and security requirements. This flexibility allows chains and ledgers to evolve alongside their application and operational needs.

## Application-Specific Blockchains

A common development paradigm in blockchain ecosystems is the use of general-purpose virtual machine chains, where applications are deployed as smart contracts on top of a shared execution environment. While this approach is well suited for some use cases, it imposes constraints around performance, customization, and protocol-level control, which impact an organization's infrastructure costs, security posture, and compliance profile.

Application-specific blockchains offer a different model. An application-specific blockchain is a blockchain or digital ledger that runs custom business logic at the protocol or chain level to accomplish a particular business use case. With the Cosmos SDK, developers can tailor execution logic, fee models, governance rules, and state transitions directly at the protocol level, enabling greater flexibility and performance.

## Virtual Machine Layers

The Cosmos SDK allows for the use of smart contracts. Developers can add a virtual machine layer to enable smart contract support. The [Cosmos EVM](/evm) supports Ethereum-compatible smart contracts and tooling. Engineers can also choose other VMs.

## Security via the Object-Capability Model

The Cosmos SDK uses a capabilities-based security model to enforce strict boundaries between modules. Rather than granting broad access to shared state, modules are given only the specific capabilities they require, making it easier to reason about authority, permissions, and potential attack surfaces. This design improves the security and composability of complex blockchain applications. For a deeper dive into this model, see the [Object-Capability Model](/sdk/latest/guides/module-design/ocap).
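As a rough illustration of the capability idea (plain Python, not the SDK's Go keeper pattern; all names here are hypothetical), a module can be handed a narrowed interface instead of the full keeper:

```python
# Object-capability sketch: hand each module only the authority it needs.

class BankKeeper:
    """Full authority over balances (held by the module that owns the store)."""

    def __init__(self):
        self._balances = {"alice": 100}

    def send(self, sender, receiver, amount):
        if self._balances.get(sender, 0) < amount:
            raise ValueError("insufficient funds")
        self._balances[sender] -= amount
        self._balances[receiver] = self._balances.get(receiver, 0) + amount

    def balance(self, addr):
        return self._balances.get(addr, 0)

class ViewOnlyBank:
    """A restricted capability: can read balances, cannot move funds."""

    def __init__(self, bank: BankKeeper):
        self._bank = bank

    def balance(self, addr):
        return self._bank.balance(addr)

bank = BankKeeper()
view = ViewOnlyBank(bank)          # this is all a read-only module receives

assert view.balance("alice") == 100
assert not hasattr(view, "send")   # the view capability cannot transfer funds
```

A module holding only `ViewOnlyBank` simply has no way to move funds, so its authority can be audited by looking at which capabilities it was constructed with.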
## Cosmos Enterprise Cosmos enterprise solutions give you a comprehensive suite of blockchain services and technology to future-proof your organization’s digital ledger capabilities. It combines hardened protocol modules, on-premises and managed infrastructure components, and proactive support from the engineers building the Cosmos technology stack. Cosmos Enterprise is built for organizations that require reliability, security, and operational confidence as they scale critical blockchain infrastructure in enterprise production environments. [Learn about Cosmos Enterprise →](/enterprise/overview) ## Why Build with the Cosmos SDK The Cosmos SDK is one of the most mature and widely adopted frameworks for building custom, modular blockchains. Key advantages include: * **Proven in production**: 200+ blockchains use the Cosmos SDK in production today for use cases such as interbank networks, regulated lending, and banking asset tokenization. * **Strong security foundations**: Capabilities-based security informed by years of production experience. * **Protocol-level customization**: Define application logic, governance, and economic models directly in the blockchain, not just in smart contracts. * **Built-in interoperability**: Native interoperability through [IBC](/ibc) and extensible chain-level integration. * **Flexible execution models**: Combine native modules with optional VM layers such as [Cosmos EVM](/evm/v0.5.0/documentation/overview). ## Getting Started with the Cosmos SDK * Learn about the [architecture of a Cosmos SDK application](/sdk/latest/learn/intro/sdk-app-architecture) * Run a blockchain in under 5 minutes with the [Cosmos SDK Node Tutorial](/sdk/latest/tutorials) # Cosmos Architecture Source: https://docs.cosmos.network/sdk/latest/learn/intro/sdk-app-architecture How Cosmos SDK implements blockchains through separation of consensus, interface, and application logic. 
In [Blockchain Basics](/sdk/latest/learn/intro/blockchain-basics), you learned that a blockchain is a replicated, deterministic state machine maintained by independent nodes through consensus. The Cosmos SDK implements this model through a clean separation of concerns: CometBFT handles consensus, networking, and block production; ABCI (Application Blockchain Interface) defines the boundary between consensus and application; and the Cosmos SDK implements the application logic and state machine. This page explains the high-level architecture of Cosmos SDK blockchains: how components interact, what each layer is responsible for, and why this separation matters. ## Cosmos Application Architecture A Cosmos blockchain consists of 2 distinct layers: CometBFT for consensus and block production, and the Cosmos SDK layer which contains the application logic, modules, and transaction execution logic. Between these layers is the ABCI (Application Blockchain Interface), which is the interface that connects them. ```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} +---------------------------------------+ | | | Cosmos SDK Application | <- State machine | (Modules, Keepers, State) | - Transaction execution | | - Business logic +------------------+--------------------+ | ABCI <- Application Blockchain Interface | +------------------+--------------------+ | | | CometBFT | <- Consensus engine | (Consensus, Networking, | - Block production | Block Replication) | - p2p networking | | +---------------------------------------+ ``` * **[CometBFT](#cometbft)**: the consensus engine that handles networking, block production, and Byzantine Fault Tolerant consensus. * **[ABCI](#abci-application-blockchain-interface)**: the interface boundary that defines when and how CometBFT invokes the application, enforcing separation between consensus and execution. The ABCI is implemented in a Cosmos application via [`BaseApp`](#baseapp-and-appgo). 
* **[Cosmos SDK](#cosmos-sdk-application)**: the framework for building the blockchain application (often simply called "the application"), which is a state machine assembled from composable [modules](/sdk/latest/learn/concepts/modules). Each module owns a specific domain of logic, state, and transactions; together, modules define the chain's business logic, execute transactions, and produce cryptographic state commitments. This layer is defined in the [`app.go`](#baseapp-and-appgo) file of a Cosmos application.

### Separating Consensus and Application Logic

Separating consensus from application logic provides several benefits. Security improves because isolating consensus logic prevents application bugs from affecting block production or network stability. Flexibility increases as developers can build application-specific blockchains without reimplementing consensus. The modular design allows consensus and application layers to evolve independently, and reusability means one consensus engine (CometBFT) can power many different blockchains without modification.

## Nodes and Daemons

A blockchain is made up of nodes, or participants in the blockchain network. Each node runs a daemon process that includes both a CometBFT instance for consensus and networking, and a Cosmos SDK application for the state machine and execution logic. The daemon participates in networking, reaches consensus with other nodes, and executes transactions to update the application state.

Nodes can operate in different roles. There are two main types of nodes, **validators** and **full nodes**:

* **Validators** are nodes that participate in consensus by proposing and voting on blocks, with consensus power staked or attributed to them, making them responsible for block production. They are in charge of validating blocks before voting, ensuring that only valid blocks are finalized.
* **Full nodes** replicate and verify blocks without participating in consensus voting, maintaining complete state and answering queries but not voting on proposals. Although anyone can typically run a full node, becoming a validator depends on the chain's staking, governance, or permissioning rules. In a proof-of-stake blockchain, validator candidates are selected based on their stake, and they must meet certain criteria to be elected as validators. In a proof-of-authority blockchain, validators are selected based on permissioned criteria. To learn how to run a node, visit the [Cosmos Node Tutorial](/sdk/latest/tutorials). ## CometBFT [CometBFT](/cometbft/latest/docs/README) is a Byzantine Fault Tolerant consensus engine that provides fast, deterministic finality. It's used by Cosmos SDK blockchains to replicate the state machine across a decentralized network. ### Core Responsibilities CometBFT handles several core responsibilities: * **Peer-to-peer networking**: CometBFT manages node discovery, establishes and maintains connections with other validators and full nodes, and implements gossip protocols for propagating information across the network. * **Transaction propagation and mempool management**: When users submit transactions to a node, CometBFT gossips them to other nodes. Each node maintains a **mempool** (memory pool), which is a waiting area for valid transactions that have not yet been included in a block. The mempool holds transactions temporarily until a validator includes them in a block proposal. * **Block proposal and transaction ordering**: CometBFT uses deterministic proposer selection to choose which validator will propose the next block. The proposer selects transactions from the mempool, orders them, and packages them into a block proposal. * **Byzantine Fault Tolerant consensus**: CometBFT coordinates voting rounds where validators vote on block proposals. 
When a proposal receives votes from more than two-thirds of voting power, consensus is reached and the block is finalized. Validators cryptographically sign their votes, guarding against double-voting and other forms of malicious behavior.

* **Block replication**: Once finalized, CometBFT ensures the block is replicated across all nodes in the network, maintaining a consistent, ordered history of all committed blocks.

To learn more about how CometBFT works, visit the [CometBFT Documentation](/cometbft/latest/docs/README).

### Byzantine Fault Tolerance and Finality

Byzantine Fault Tolerance (BFT) is a system's ability to reach consensus even when some participants crash, send conflicting messages, or act maliciously. The name comes from the [Byzantine Generals Problem](https://lamport.azurewebsites.net/pubs/byz.pdf). CometBFT tolerates less than one-third of voting power being faulty while still producing valid blocks, and maintains liveness as long as more than two-thirds of voting power is online. Unlike proof-of-work blockchains, CometBFT provides instant finality: once a block is committed, it cannot be reverted.

### Orchestrating Block Production

CometBFT drives the entire block production lifecycle. It determines when blocks are produced (maintaining consistent block times), which validator proposes each block (through deterministic proposer selection), and the order in which transactions are included in blocks. CometBFT coordinates the consensus process by managing the voting rounds where validators evaluate and vote on block proposals. To learn more about how consensus in CometBFT works, visit the [CometBFT Documentation](/cometbft/latest/docs/introduction/intro).

### Content Agnosticism

CometBFT is agnostic to block content and application implementation. It treats transactions as opaque byte arrays, ensuring all nodes receive the same ordered sequence without interpreting what they mean.
All application-specific logic lives in the SDK layer, which uses [Protocol Buffers](/sdk/latest/learn/concepts/encoding) to serialize structured messages into bytes CometBFT can carry.

## ABCI (Application Blockchain Interface)

ABCI is a strict request/response interface between CometBFT and a Cosmos SDK blockchain application. It defines the block execution lifecycle and enforces a clean separation between consensus and application logic, so that block production remains secure and reliable even in the presence of application faults or bugs.

The ABCI is unidirectional: all calls flow from CometBFT to the application, and the application cannot call into or control CometBFT. The Cosmos SDK application must respond deterministically to all ABCI calls, producing the same results given the same inputs.

The ABCI itself is a stateless protocol containing no business logic. It only provides the interface definitions between the CometBFT consensus engine, which handles block production, and the Cosmos SDK application, which defines the business logic and state machine.

### Block Lifecycle Methods

As the driver of block production, CometBFT controls when the Cosmos SDK application is invoked. It calls the application through ABCI at specific points during block production to validate transactions for the mempool (CheckTx), to construct or evaluate block proposals (PrepareProposal and ProcessProposal), to execute finalized blocks (FinalizeBlock), and to persist state (Commit). The SDK application responds to these calls but cannot initiate them. This means CometBFT, not the SDK application, determines the timing and cadence of block production and state transitions. This ABCI boundary also prevents application logic from influencing consensus.
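The one-way, deterministic call pattern described above can be sketched as a simplified Go interface. This is a toy illustration with invented types (`Application`, `CounterApp`); the real ABCI interface in CometBFT has more methods and richer request/response structures:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// Application is a hypothetical, heavily simplified slice of the ABCI
// surface: CometBFT calls these methods; the application never calls back.
type Application interface {
	CheckTx(tx []byte) error           // mempool admission check
	FinalizeBlock(txs [][]byte) []byte // execute a block, return the AppHash
	Commit()                           // persist state
}

// CounterApp is a toy deterministic state machine: its entire state is
// the number of transactions ever executed.
type CounterApp struct {
	count uint64
}

func (a *CounterApp) CheckTx(tx []byte) error {
	if len(tx) == 0 {
		return fmt.Errorf("empty tx") // reject spam before it reaches a block
	}
	return nil
}

func (a *CounterApp) FinalizeBlock(txs [][]byte) []byte {
	for range txs {
		a.count++ // deterministic state transition
	}
	// AppHash: a cryptographic commitment to the resulting state.
	h := sha256.Sum256([]byte(fmt.Sprintf("count=%d", a.count)))
	return h[:]
}

func (a *CounterApp) Commit() {} // a real app would flush state to disk here

func main() {
	var app Application = &CounterApp{}
	appHash := app.FinalizeBlock([][]byte{[]byte("tx1"), []byte("tx2")})
	fmt.Printf("AppHash prefix: %x\n", appHash[:4])
}
```

Because `FinalizeBlock` is deterministic, two independent nodes executing the same transactions against the same initial state produce the same AppHash, which is exactly the property consensus relies on.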
```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
+---------------+         +--------------------------+
|   CometBFT    |         |     SDK Application      |
|  (Consensus)  |  ABCI   |  (State Machine Logic)   |
+---------------+    |    +--------------------------+
                     |
1) CheckTx           |
Mempool validation --|--> Validate tx (sigs, fees)
                     |
2) PrepareProposal   |
Proposer builds -----|--> Construct block proposal
block                |
3) ProcessProposal   |
Validators evaluate -|--> Validate proposal
                     |
4) Consensus         |
BFT voting,          |    (SDK not involved)
finalizing block     |
                     |
5) FinalizeBlock     |
Execute block -------|--> PreBlock hooks
                     |    BeginBlock hooks
                     |    Execute transactions
                     |    EndBlock hooks
                     |<-- Return AppHash
6) Commit            |
Persist state -------|--> Persist to disk
                     |<-- Return AppHash
```

1. **CheckTx** validates transactions before adding them to the mempool. It checks that transactions are well-formed and economically viable (proper signature, sufficient fees) without making state changes. This protects the mempool against spam.
2. **PrepareProposal** is invoked when a validator constructs a new block proposal. This gives the application limited control over block construction, allowing it to reorder transactions or add application-specific data.
3. **ProcessProposal** is called when validators evaluate a block proposal from another validator. The application can validate the proposed block according to application-specific rules before voting to accept it.
4. **Consensus** occurs entirely within CometBFT. After validators evaluate the proposal (via ProcessProposal), they participate in BFT voting rounds. If the proposal receives votes from more than two-thirds of voting power, consensus is reached and the block is finalized.
5. **FinalizeBlock** is invoked after consensus on a block is reached. This is where state transitions occur, executing the entire block atomically.
`BaseApp` invokes lifecycle hooks in this order: PreBlock hooks, BeginBlock hooks, transaction execution, and EndBlock hooks. The application returns the new AppHash (a cryptographic commitment to the state) and any validator set changes.
6. **Commit** is called after FinalizeBlock to persist the finalized state to the node's local disk and return the [AppHash](#state) that gets included in the next block header.

**PreBlock** hooks, introduced in SDK v0.50, run before BeginBlock. PreBlockers must be explicitly ordered using `SetOrderPreBlockers`. Some core modules (notably `x/auth`) require PreBlock execution; missing PreBlock wiring will cause runtime errors.

CometBFT decides when blocks happen and what order transactions appear in; the Cosmos SDK decides whether those transactions are valid and how they change state; and the ABCI defines the interface between the two.

For a more in-depth look at the ABCI, visit the [ABCI page](/sdk/latest/guides/abci/abci).

## Cosmos SDK Application

A Cosmos SDK application is a deterministic state machine that defines a blockchain's behavior. It focuses entirely on defining what state the blockchain tracks, what transactions are valid, and how transactions change state. The application defines transaction and message formats using Protocol Buffers for serialization, validates transactions by checking signatures and fees, executes message handlers to apply state transitions, maintains state across all modules, and produces the AppHash that cryptographically commits to the current state.

### Application Structure

A typical Cosmos SDK application consists of:

* **[`BaseApp`](#baseapp-and-app-go)**: boilerplate code that provides the ABCI implementation and execution framework for a chain to interact with CometBFT.
* **[Modules](#modules-transactions-and-application-logic)**: building blocks of domain-specific logic such as transactions, custom business logic, and governance/permissioning.
* **[State Multistore](#kv-stores-and-multistore)**: a collection of key-value stores that hold the application's state, isolated by module.
* **[Keepers](#keepers)**: interfaces for accessing module state while enforcing access control.
* **[app.go](#baseapp-and-app-go)**: the composition root that wires everything together.

For a complete overview of Cosmos SDK structure, visit the [Intro to SDK Structure](/sdk/latest/learn/concepts/sdk-structure).

### BaseApp and app.go

`BaseApp` is the Cosmos SDK's standard implementation of the ABCI interface. It handles all ABCI method calls from CometBFT, routes messages to the appropriate [module handlers](#modules-transactions-and-application-logic), manages state versioning and caching, and enforces transaction execution semantics. Developers do not implement ABCI directly; instead, they extend `BaseApp` and register their modules, handlers, and execution logic with it. To learn more about `BaseApp`, visit the [`BaseApp` page](/sdk/latest/learn/concepts/baseapp).

The `app.go` file is the composition root of a Cosmos SDK application. This is where a specific blockchain is assembled by creating the `BaseApp` instance, instantiating all module keepers with their dependencies, registering store keys for each module's state, wiring module [lifecycle hooks](#block-lifecycle-hooks), and configuring transaction processing. To learn more about `app.go`, visit the [`app.go` page](/sdk/latest/learn/concepts/app-go).

Modules also define genesis state initialization and migration logic to support chain upgrades, allowing application state to evolve safely over time.

### Modules, Transactions, and Application Logic

Modules are the building blocks of Cosmos SDK applications. Each module implements a specific domain of functionality: the bank module handles token transfers, the staking module manages validator delegation, the governance module implements on-chain proposals, and so on.
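This per-module division of logic can be sketched with a hypothetical message router, where each module registers handlers for the message types it owns. All names here (`Router`, `Msg`, `bank/MsgSend`) are invented for illustration and are much simpler than `BaseApp`'s actual routing:

```go
package main

import "fmt"

// Msg is a minimal stand-in for an SDK message; real messages are
// Protobuf types with strongly typed fields.
type Msg struct {
	Type string // e.g. "bank/MsgSend"
	Data string
}

// Handler runs a module's business logic for one message type.
type Handler func(msg Msg) (string, error)

// Router maps message types to the handlers of the modules that own them.
type Router struct {
	routes map[string]Handler
}

func NewRouter() *Router { return &Router{routes: map[string]Handler{}} }

func (r *Router) Register(msgType string, h Handler) { r.routes[msgType] = h }

// Route dispatches a message to its owning module, or fails if no
// module has registered a handler for that type.
func (r *Router) Route(msg Msg) (string, error) {
	h, ok := r.routes[msg.Type]
	if !ok {
		return "", fmt.Errorf("unrecognized message type: %s", msg.Type)
	}
	return h(msg)
}

func main() {
	r := NewRouter()
	// Each module registers handlers for the message types it owns.
	r.Register("bank/MsgSend", func(m Msg) (string, error) {
		return "transferred: " + m.Data, nil
	})
	out, _ := r.Route(Msg{Type: "bank/MsgSend", Data: "100stake"})
	fmt.Println(out)
}
```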
Every module acts as its own mini state machine, processing transactions and updating state according to its own rules. Together, the modules of an application form a single, cohesive state machine.

A module provides its business logic through message handlers. Messages work like function calls that specify an operation (like "send 100 tokens to address X") with typed parameters. Users invoke module logic by submitting transactions that contain these messages. During execution, each message gets routed to its module's handler, which runs the business logic and updates state.

In the Cosmos SDK, a **transaction** is a signed, serialized container that wraps one or more **messages**. Messages represent the actual operations to execute (like "send tokens" or "delegate stake"), while the transaction adds metadata like signatures, fees, and gas limit. Blocks contain transactions, and during block execution (FinalizeBlock), each transaction's messages are extracted and routed to the appropriate module handlers for execution. Transactions contain signatures from their creators authorizing the requested state change.

Modules define message types using [Protocol Buffers (Protobuf)](/sdk/latest/learn/concepts/encoding), which provide type-safe, cross-language serialization. Since CometBFT treats transactions as raw bytes and [KV stores](#kv-stores-and-multistore) only accept byte arrays, Protobuf serializes structured messages and state data into bytes for transmission and storage. Modules also define state schemas (what data the module stores), state transitions (how messages modify state), queries (allowing clients to read module state), and optional lifecycle hooks for tasks that run at block boundaries.

To learn more about transactions and messages, visit the [Transactions page](/sdk/latest/learn/concepts/transactions).
For more in-depth information on modules, visit the [Intro to Modules page](/sdk/latest/learn/concepts/modules) or check out [the Module Tutorial](/sdk/latest/tutorials/example/00-overview) to learn how to build a module from scratch.

### Block Lifecycle Hooks

Beyond processing individual transactions, modules can define lifecycle hooks that run at specific points during block execution. These hooks allow modules to perform tasks at block boundaries, such as minting rewards, updating validator sets, or preparing state before transactions execute.

During FinalizeBlock, `BaseApp` invokes module hooks in this sequence:

1. **PreBlock** - Prepare state before block execution begins
2. **BeginBlock** - Perform tasks at block start (e.g., minting rewards)
3. **Transaction execution** - For each transaction: run AnteHandler, execute message handlers, run PostHandler (if configured)
4. **EndBlock** - Perform tasks at block end (e.g., updating validator sets)

Modules are coordinated by a [`ModuleManager`](/sdk/latest/learn/concepts/baseapp#module-manager), which orchestrates these lifecycle events along with genesis initialization and module upgrades.

### State

In a Cosmos SDK application, state is stored as a collection of key-value pairs in a **multistore**. State changes occur during block execution. After consensus finalizes a block, `FinalizeBlock` is invoked by CometBFT via the ABCI, which executes each transaction in the Cosmos SDK application in order. For each transaction, the messages are extracted and routed to the `MsgServer` of the corresponding module. The `MsgServer` validates messages, and the [keepers](#keepers) execute the business logic of the module and update the state.

After executing the transactions in the block and updating state, each node computes the `AppHash` from its local state. The **`AppHash`** is a cryptographic proof of the state of the application at the end of the block, and is included in the next block header.
This ensures that state updates only take effect once consensus is reached. By design, all state changes in a Cosmos SDK application are deterministic and replayable: executing the same block against the same initial state will always produce the same final state.

#### Keepers

Keepers are the gatekeepers to module state. They provide the only interface for accessing and mutating a module's state, enforce access control between modules, and encapsulate state access logic. This aligns with the [object capability model](/sdk/latest/guides/module-design/ocap), where modules can only access capabilities (other keepers) explicitly passed to them during initialization. Modules interact through keeper interfaces rather than directly accessing each other's state, enforcing modularity and preventing tight coupling.

To learn more about keepers, visit the [Intro to Modules page](/sdk/latest/learn/concepts/modules#keeper).

#### KV Stores and Multistore

The state of a Cosmos SDK application is stored in a **multistore**, which is a collection of key-value stores. Each module owns a namespaced key-value store, isolating its data from other modules. The multistore combines all module stores into a single, unified state representation with height-based versioning for historical queries.

KV stores only accept byte arrays (`[]byte`) as values, so any custom data structures must be marshaled using a [codec](/sdk/latest/learn/concepts/encoding) before being stored. This ensures consistent serialization across the application, typically using Protocol Buffers.

```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
+-----------------------------------------+
|            Multistore (Root)            |
|                                         |
|  +----------+  +----------+  +-----+    |
|  |   Bank   |  | Staking  |  | ... |    |
|  |  Store   |  |  Store   |  |     |    |
|  +----------+  +----------+  +-----+    |
|                                         |
+-----------------------------------------+
```

To learn more, visit the [Store page](/sdk/latest/learn/concepts/store).
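The namespace isolation between module stores can be sketched with a small in-memory toy (this is not the SDK's actual store implementation; `Multistore` and `ModuleStore` are invented names):

```go
package main

import "fmt"

// Multistore holds one byte-keyed store per module, isolated by prefix,
// mirroring how each SDK module owns a namespaced KV store.
type Multistore struct {
	data map[string][]byte
}

func NewMultistore() *Multistore {
	return &Multistore{data: map[string][]byte{}}
}

// Store returns a view of the multistore restricted to one module's namespace.
func (m *Multistore) Store(module string) ModuleStore {
	return ModuleStore{root: m, prefix: module + "/"}
}

type ModuleStore struct {
	root   *Multistore
	prefix string
}

// Set and Get accept only raw bytes, as SDK KV stores do; structured
// values must be marshaled (e.g. with Protobuf) before storage.
func (s ModuleStore) Set(key string, value []byte) { s.root.data[s.prefix+key] = value }
func (s ModuleStore) Get(key string) []byte        { return s.root.data[s.prefix+key] }

func main() {
	ms := NewMultistore()
	bank := ms.Store("bank")
	staking := ms.Store("staking")

	// The same key in two namespaces refers to two independent values.
	bank.Set("balance/alice", []byte("100"))
	staking.Set("balance/alice", []byte("40"))

	fmt.Println(string(bank.Get("balance/alice")), string(staking.Get("balance/alice")))
}
```

Prefixing every key with the module name is what keeps one module from clobbering another's data, even though everything ultimately lives in one underlying store.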
#### Merkle Trees and Commitments

Application state is committed using Merkle trees. Each module store is organized as a Merkle tree (implemented as an IAVL tree), and the multistore root is a tree of module store roots. The AppHash is the root hash of this multistore structure, uniquely identifying a state version.

```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
              AppHash (Root)
                    |
      +-------------+-------------+
      |                           |
  BankRoot                  StakingRoot
      |                           |
 +----+----+                 +----+----+
 |         |                 |         |
Key1     Key2              Key3     Key4
```

CometBFT includes the AppHash in block headers, allowing anyone to verify state commitments. This is crucial for light clients, which can verify state without downloading all blocks.

Visit the [Store page](/sdk/latest/learn/concepts/store#how-state-is-stored-iavl-and-commit-stores) to learn more about how Cosmos uses IAVL trees.

### How State Replicates Across Nodes

The state changes described above happen independently on every node in the network. CometBFT does not replicate application state directly; instead, it replicates blocks (ordered transactions) and consensus decisions. Each node then independently executes these blocks through its local Cosmos SDK application.

This design relies entirely on deterministic execution. When all nodes execute the same ordered transactions, deterministic execution guarantees they arrive at identical state. Nodes verify this agreement by comparing AppHash commitments. If execution were non-deterministic, nodes would compute different AppHashes and consensus would fail.

After executing a block, each node computes the AppHash from its local state. This AppHash is included in the next block header. Validators cryptographically sign block headers that include the AppHash, attesting that they executed the block and arrived at that state. Nodes with divergent state produce different AppHashes, and validators will not sign blocks with incorrect commitments.
This makes state disagreement detectable and ensures that consensus implies state agreement across all honest nodes.

To maintain determinism, applications should avoid common pitfalls: use block time instead of local timestamps (which vary across nodes), use integer math instead of floating-point arithmetic (which can differ by hardware), use deterministic randomness seeded with block data, and include all necessary data in transactions rather than making external API calls.

## Communication Layers

Cosmos SDK blockchains use different communication mechanisms depending on the context.

* **Node-to-Node Communication**: Nodes communicate with each other using CometBFT's custom peer-to-peer gossip protocol. This handles transaction propagation across the network, block replication to all nodes, and consensus messages like votes and proposals. This communication is entirely within CometBFT and is application-agnostic.
* **Consensus to Application Communication**: CometBFT communicates with the Cosmos SDK application through ABCI using local function calls. This is in-process communication within the same daemon, not a network protocol. CometBFT invokes the application at key points in the block lifecycle, and the application returns execution results and state commitments.
* **Client to Node Communication**: Clients (wallets, explorers, and other external applications) interact with nodes through the Cosmos SDK's gRPC API, with HTTP/REST available via gRPC-Gateway. This external API allows clients to query application state and submit transactions for mempool inclusion. This communication is completely separate from consensus; clients never directly interact with CometBFT's consensus protocols.

To learn more, visit [the CLI, gRPC, and REST API page](/sdk/latest/learn/concepts/cli-grpc-rest).

## Summary

The architecture of a Cosmos SDK blockchain cleanly separates three concerns.
CometBFT determines when blocks happen and which transactions they contain through its consensus mechanism. ABCI defines how and when the application is invoked as part of the block lifecycle. The Cosmos SDK defines what those transactions mean and how they change state deterministically.

## What's Next

* The [Transaction Lifecycle](/sdk/latest/learn/concepts/lifecycle) page follows a transaction from submission through mempool admission, consensus, and execution.
* The [Application Anatomy](/sdk/latest/learn/intro/sdk-app-architecture) page provides a deep dive into building a Cosmos SDK application, exploring modules, keepers, and the composition of app.go in detail.

# Start Here

Source: https://docs.cosmos.network/sdk/latest/learn/start-here

Pick the path that matches what you want to do with the Cosmos SDK. These docs are designed to get you building quickly. Choose the path that best matches your goal:

## What do you want to do?

* Understand how the SDK works → [Concepts + Tutorial](#learn-+-build-recommended-path)
* New to Cosmos or Blockchain? → [Start here](#new-to-cosmos)
* Build something quickly → [Quickstart](#get-running-fast)
* Run a node → [Operations](#run-a-node)
* Explore modules → [Module Directory](#browse-modules)
* Go deeper on advanced topics → [In-depth Guides](#go-deeper)

## Learn + Build (Recommended Path)

Get a solid mental model of the SDK and build your first chain.

1. Read the [intro pages](/sdk/latest/learn/intro/overview) and [Cosmos Architecture](/sdk/latest/learn/intro/sdk-app-architecture) for an overview of how a Cosmos chain is structured
2. Read the [Concepts](/sdk/latest/learn/concepts/accounts) section (Fundamentals, Modules, SDK Internals)
3. Follow the [Build a Chain Tutorial](/sdk/latest/tutorials/example/00-overview)

This is the most complete path for developers new to the Cosmos SDK.

## Get running fast

Spin up a local chain in minutes.

1. [Prerequisites](/sdk/latest/tutorials/example/01-prerequisites) 2.
[Chain Quickstart](/sdk/latest/tutorials/example/02-quickstart)

From there, continue the tutorial to learn more about building modules:

* [Build a Module from Scratch](/sdk/latest/tutorials/example/03-build-a-module): write your first custom module with messages, queries, and state
* [Full Counter Module Walkthrough](/sdk/latest/tutorials/example/04-counter-walkthrough): add advanced features like fees, events, and module accounts
* [Run, Test, and Configure](/sdk/latest/tutorials/example/05-run-and-test): test your chain and configure it for different environments

## New to Cosmos?

Start here if you're new to the ecosystem.

* [The Cosmos Stack](/sdk/latest/learn/intro/cosmos-stack)
* [What is the Cosmos SDK](/sdk/latest/learn/intro/overview)

New to blockchain development? Read [Blockchain Basics](/sdk/latest/learn/intro/blockchain-basics)

For a technical overview of how a Cosmos chain is structured: [Cosmos Architecture](/sdk/latest/learn/intro/sdk-app-architecture)

## Run a node

Set up and operate a Cosmos node (validators, operators).

[Run a Node](/sdk/latest/tutorials)

## Browse modules

Explore production-ready SDK modules and their documentation.

[Module Directory](/sdk/latest/modules/modules)

## Go deeper

Dive into advanced topics like ABCI++, vote extensions, mempool design, upgrades, and observability.

[In-depth Guides](/sdk/latest/guides/guides)

# x/auth

Source: https://docs.cosmos.network/sdk/latest/modules/auth/auth

This document specifies the auth module of the Cosmos SDK.

## Abstract

This document specifies the auth module of the Cosmos SDK.

The auth module is responsible for specifying the base transaction and account types for an application, since the SDK itself is agnostic to these particulars. It contains the middlewares, where all basic transaction validity checks (signatures, nonces, auxiliary fields) are performed, and exposes the account keeper, which allows other modules to read, write, and modify accounts.
This module is used in the Cosmos Hub.

## Contents

* [Concepts](#concepts)
  * [Gas & Fees](#gas-&-fees)
* [State](#state)
  * [Accounts](#accounts)
* [AnteHandlers](#antehandlers)
* [Keepers](#keepers)
  * [Account Keeper](#account-keeper)
* [Parameters](#parameters)
* [Client](#client)
  * [CLI](#cli)
  * [gRPC](#grpc)
  * [REST](#rest)

## Concepts

**Note:** The auth module is different from the [authz module](/sdk/latest/modules/authz/README). The differences are:

* `auth` - authentication of accounts and transactions for Cosmos SDK applications and is responsible for specifying the base transaction and account types.
* `authz` - authorization for accounts to perform actions on behalf of other accounts and enables a granter to grant authorizations to a grantee that allows the grantee to execute messages on behalf of the granter.

### Gas & Fees

Fees serve two purposes for an operator of the network. Fees limit the growth of the state stored by every full node and allow for general purpose censorship of transactions of little economic value. Fees are best suited as an anti-spam mechanism where validators are disinterested in the use of the network and identities of users.

Fees are determined by the gas limits and gas prices transactions provide, where `fees = ceil(gasLimit * gasPrices)`. Txs incur gas costs for all state reads/writes, signature verification, as well as costs proportional to the tx size. Operators should set minimum gas prices when starting their nodes. They must set the unit costs of gas in each token denomination they wish to support:

`simd start ... --minimum-gas-prices=0.00001stake,0.05photinos`

When adding transactions to the mempool or gossiping transactions, validators check if the transaction's gas prices, which are determined by the provided fees, meet any of the validator's minimum gas prices. In other words, a transaction must provide a fee of at least one denomination that matches a validator's minimum gas price.
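The fee formula and the "at least one matching denom" rule above can be sketched as follows. This is a simplified illustration: gas prices are modeled as integer rationals to keep the ceiling computation exact, whereas the SDK uses its own decimal types, and all function names here are invented:

```go
package main

import "fmt"

// GasPrice is a price per unit of gas in one denomination, expressed as
// a rational number Num/Den (a stand-in for the SDK's decimal type).
type GasPrice struct {
	Denom string
	Num   uint64
	Den   uint64
}

// RequiredFee computes fees = ceil(gasLimit * gasPrice) with integer math.
func RequiredFee(gasLimit uint64, p GasPrice) uint64 {
	return (gasLimit*p.Num + p.Den - 1) / p.Den
}

// MeetsMinGasPrices reports whether the fee offered in some denomination
// covers at least one of the node's minimum gas prices.
func MeetsMinGasPrices(gasLimit uint64, offered map[string]uint64, minPrices []GasPrice) bool {
	for _, p := range minPrices {
		if offered[p.Denom] >= RequiredFee(gasLimit, p) {
			return true
		}
	}
	return false
}

func main() {
	// A node configured with a minimum gas price of 0.00001stake.
	minPrices := []GasPrice{{Denom: "stake", Num: 1, Den: 100000}}
	// A 200000-gas tx therefore needs ceil(200000 * 0.00001) = 2stake.
	fmt.Println(RequiredFee(200000, minPrices[0]))
	fmt.Println(MeetsMinGasPrices(200000, map[string]uint64{"stake": 2}, minPrices))
}
```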
CometBFT does not currently provide fee-based mempool prioritization, and fee-based mempool filtering is local to each node and not part of consensus. But with minimum gas prices set, such a mechanism could be implemented by node operators.

Because the market value for tokens will fluctuate, validators are expected to dynamically adjust their minimum gas prices to a level that would encourage the use of the network.

## State

### Accounts

Accounts contain authentication information for a uniquely identified external user of an SDK blockchain, including public key, address, and account number / sequence number for replay protection. For efficiency, since account balances must also be fetched to pay fees, account structs also store the balance of a user as `sdk.Coins`.

Accounts are exposed externally as an interface, and stored internally as either a base account or vesting account. Module clients wishing to add more account types may do so.

* `0x01 | Address -> ProtocolBuffer(account)`

#### Account Interface

The account interface exposes methods to read and write standard account information. Note that all of these methods operate on an account struct conforming to the interface - in order to write the account to the store, the account keeper will need to be used.

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// AccountI is an interface used to store coins at a given address within state.
// It presumes a notion of sequence numbers for replay protection,
// a notion of account numbers for replay protection for previously pruned accounts,
// and a pubkey for authentication purposes.
//
// Many complex conditions can be used in the concrete struct which implements AccountI.
type AccountI interface {
	proto.Message

	GetAddress() sdk.AccAddress
	SetAddress(sdk.AccAddress) error // errors if already set.

	GetPubKey() crypto.PubKey // can return nil.
	SetPubKey(crypto.PubKey) error

	GetAccountNumber() uint64
	SetAccountNumber(uint64) error

	GetSequence() uint64
	SetSequence(uint64) error

	// Ensure that account implements stringer
	String() string
}
```

##### Base Account

A base account is the simplest and most common account type, which just stores all requisite fields directly in a struct.

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// BaseAccount defines a base account type. It contains all the necessary fields
// for basic account functionality. Any custom account type should extend this
// type for additional functionality (e.g. vesting).
message BaseAccount {
  string address = 1;
  google.protobuf.Any pub_key = 2;
  uint64 account_number = 3;
  uint64 sequence = 4;
}
```

### Vesting Account

See [Vesting](/sdk/latest/modules/auth/auth).

## AnteHandlers

The `x/auth` module presently has no transaction handlers of its own, but does expose the special `AnteHandler`, used for performing basic validity checks on a transaction, such that it could be thrown out of the mempool. The `AnteHandler` can be seen as a set of decorators that check transactions within the current context, per [ADR 010](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-010-modular-antehandler.md).

Note that the `AnteHandler` is called on both `CheckTx` and `DeliverTx`, as CometBFT proposers presently have the ability to include in their proposed block transactions which fail `CheckTx`.

### Decorators

The auth module provides `AnteDecorator`s that are recursively chained together into a single `AnteHandler` in the following order:

* `SetUpContextDecorator`: Sets the `GasMeter` in the `Context` and wraps the next `AnteHandler` with a defer clause to recover from any downstream `OutOfGas` panics in the `AnteHandler` chain to return an error with information on gas provided and gas used.
* `RejectExtensionOptionsDecorator`: Rejects all extension options which can optionally be included in protobuf transactions.
* `MempoolFeeDecorator`: Checks if the `tx` fee is above local mempool `minFee` parameter during `CheckTx`.
* `ValidateBasicDecorator`: Calls `tx.ValidateBasic` and returns any non-nil error.
* `TxTimeoutHeightDecorator`: Check for a `tx` height timeout.
* `ValidateMemoDecorator`: Validates `tx` memo with application parameters and returns any non-nil error.
* `ConsumeGasTxSizeDecorator`: Consumes gas proportional to the `tx` size based on application parameters.
* `DeductFeeDecorator`: Deducts the `FeeAmount` from the first signer of the `tx`. If the `x/feegrant` module is enabled and a fee granter is set, it deducts fees from the fee granter account.
* `SetPubKeyDecorator`: Sets the pubkey from a `tx`'s signers that does not already have its corresponding pubkey saved in the state machine and in the current context.
* `ValidateSigCountDecorator`: Validates the number of signatures in `tx` based on app-parameters.
* `SigGasConsumeDecorator`: Consumes parameter-defined amount of gas for each signature. This requires pubkeys to be set in context for all signers as part of `SetPubKeyDecorator`.
* `SigVerificationDecorator`: Verifies all signatures are valid. This requires pubkeys to be set in context for all signers as part of `SetPubKeyDecorator`.
* `IncrementSequenceDecorator`: Increments the account sequence for each signer to prevent replay attacks.

## Keepers

The auth module only exposes one keeper, the account keeper, which can be used to read and write accounts.

### Account Keeper

Presently only one fully-permissioned account keeper is exposed, which has the ability to both read and write all fields of all accounts, and to iterate over all stored accounts.

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// AccountKeeperI is the interface contract that x/auth's keeper implements.
type AccountKeeperI interface {
	// Return a new account with the next account number and the specified address. Does not save the new account to the store.
	NewAccountWithAddress(sdk.Context, sdk.AccAddress) types.AccountI

	// Return a new account with the next account number. Does not save the new account to the store.
	NewAccount(sdk.Context, types.AccountI) types.AccountI

	// Check if an account exists in the store.
	HasAccount(sdk.Context, sdk.AccAddress) bool

	// Retrieve an account from the store.
	GetAccount(sdk.Context, sdk.AccAddress) types.AccountI

	// Set an account in the store.
	SetAccount(sdk.Context, types.AccountI)

	// Remove an account from the store.
	RemoveAccount(sdk.Context, types.AccountI)

	// Iterate over all accounts, calling the provided function. Stop iteration when it returns true.
	IterateAccounts(sdk.Context, func(types.AccountI) bool)

	// Fetch the public key of an account at a specified address
	GetPubKey(sdk.Context, sdk.AccAddress) (crypto.PubKey, error)

	// Fetch the sequence of an account at a specified address.
	GetSequence(sdk.Context, sdk.AccAddress) (uint64, error)

	// Fetch the next account number, and increment the internal counter.
	NextAccountNumber(sdk.Context) uint64
}
```

## Parameters

The auth module contains the following parameters:

| Key                    | Type   | Example |
| ---------------------- | ------ | ------- |
| MaxMemoCharacters      | uint64 | 256     |
| TxSigLimit             | uint64 | 7       |
| TxSizeCostPerByte      | uint64 | 10      |
| SigVerifyCostED25519   | uint64 | 590     |
| SigVerifyCostSecp256k1 | uint64 | 1000    |

## Client

### CLI

A user can query and interact with the `auth` module using the CLI.

### Query

The `query` commands allow users to query `auth` state.

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd query auth --help
```

#### account

The `account` command allows users to query for an account by its address.
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query auth account [address] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query auth account cosmos1... ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} '@type': /cosmos.auth.v1beta1.BaseAccount account_number: "0" address: cosmos1zwg6tpl8aw4rawv8sgag9086lpw5hv33u5ctr2 pub_key: '@type': /cosmos.crypto.secp256k1.PubKey key: ApDrE38zZdd7wLmFS9YmqO684y5DG6fjZ4rVeihF/AQD sequence: "1" ``` #### accounts The `accounts` command allow users to query all the available accounts. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query auth accounts [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query auth accounts ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} accounts: - '@type': /cosmos.auth.v1beta1.BaseAccount account_number: "0" address: cosmos1zwg6tpl8aw4rawv8sgag9086lpw5hv33u5ctr2 pub_key: '@type': /cosmos.crypto.secp256k1.PubKey key: ApDrE38zZdd7wLmFS9YmqO684y5DG6fjZ4rVeihF/AQD sequence: "1" - '@type': /cosmos.auth.v1beta1.ModuleAccount base_account: account_number: "8" address: cosmos1yl6hdjhmkf37639730gffanpzndzdpmhwlkfhr pub_key: null sequence: "0" name: transfer permissions: - minter - burner - '@type': /cosmos.auth.v1beta1.ModuleAccount base_account: account_number: "4" address: cosmos1fl48vsnmsdzcv85q5d2q4z5ajdha8yu34mf0eh pub_key: null sequence: "0" name: bonded_tokens_pool permissions: - burner - staking - '@type': /cosmos.auth.v1beta1.ModuleAccount base_account: account_number: "5" address: cosmos1tygms3xhhs3yv487phx3dw4a95jn7t7lpm470r pub_key: null sequence: "0" name: not_bonded_tokens_pool permissions: - 
burner - staking - '@type': /cosmos.auth.v1beta1.ModuleAccount base_account: account_number: "6" address: cosmos10d07y265gmmuvt4z0w9aw880jnsr700j6zn9kn pub_key: null sequence: "0" name: gov permissions: - burner - '@type': /cosmos.auth.v1beta1.ModuleAccount base_account: account_number: "3" address: cosmos1jv65s3grqf6v6jl3dp4t6c9t9rk99cd88lyufl pub_key: null sequence: "0" name: distribution permissions: [] - '@type': /cosmos.auth.v1beta1.BaseAccount account_number: "1" address: cosmos147k3r7v2tvwqhcmaxcfql7j8rmkrlsemxshd3j pub_key: null sequence: "0" - '@type': /cosmos.auth.v1beta1.ModuleAccount base_account: account_number: "7" address: cosmos1m3h30wlvsf8llruxtpukdvsy0km2kum8g38c8q pub_key: null sequence: "0" name: mint permissions: - minter - '@type': /cosmos.auth.v1beta1.ModuleAccount base_account: account_number: "2" address: cosmos17xpfvakm2amg962yls6f84z3kell8c5lserqta pub_key: null sequence: "0" name: fee_collector permissions: [] pagination: next_key: null total: "0" ``` #### params The `params` command allows users to query the current auth parameters. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query auth params [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query auth params ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} max_memo_characters: "256" sig_verify_cost_ed25519: "590" sig_verify_cost_secp256k1: "1000" tx_sig_limit: "7" tx_size_cost_per_byte: "10" ``` ### Transactions The `auth` module supports transaction commands for signing and more. Unlike other modules, the `auth` module's transaction commands are accessed directly through the top-level `tx` command. Use the `--help` flag to get more information about the `tx` command. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx --help ``` #### `sign` The `sign` command allows users to sign transactions that were generated offline. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx sign tx.json --from $ALICE > tx.signed.json ``` The result is a signed transaction that can be broadcast to the network using the `broadcast` command. More information about the `sign` command can be found running `simd tx sign --help`. #### `sign-batch` The `sign-batch` command allows users to sign multiple offline-generated transactions. The transactions can be in one file, with one tx per line, or in multiple files. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx sign-batch txs.json --from $ALICE > tx.signed.json ``` or ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx sign-batch tx1.json tx2.json tx3.json --from $ALICE > tx.signed.json ``` The result is multiple signed transactions. To combine the signed transactions into one transaction, use the `--append` flag. More information about the `sign-batch` command can be found running `simd tx sign-batch --help`. #### `multi-sign` The `multi-sign` command allows users to sign transactions that were generated offline by a multisig account. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx multisign transaction.json k1k2k3 k1sig.json k2sig.json k3sig.json ``` Where `k1k2k3` is the multisig account address, `k1sig.json` is the signature of the first signer, `k2sig.json` is the signature of the second signer, and `k3sig.json` is the signature of the third signer. 
##### Nested multisig transactions To allow transactions to be signed by nested multisigs, meaning that a participant of a multisig account can be another multisig account, the `--skip-signature-verification` flag must be used. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} # First aggregate signatures of the multisig participants simd tx multi-sign transaction.json ms1 ms1p1sig.json ms1p2sig.json --signature-only --skip-signature-verification > ms1sig.json # Then use the aggregated signatures and the other signatures to sign the final transaction simd tx multi-sign transaction.json k1ms1 k1sig.json ms1sig.json --skip-signature-verification ``` Where `ms1` is the nested multisig account address, `ms1p1sig.json` is the signature of the first participant of the nested multisig account, `ms1p2sig.json` is the signature of the second participant of the nested multisig account, and `ms1sig.json` is the aggregated signature of the nested multisig account. `k1ms1` is a multisig account comprised of an individual signer and another nested multisig account (`ms1`). `k1sig.json` is the signature of the individual signer. More information about the `multi-sign` command can be found running `simd tx multi-sign --help`. #### `multisign-batch` The `multisign-batch` command works the same way as `sign-batch`, but for multisig accounts, with the difference that it requires all transactions to be in one file and that the `--append` flag does not exist. More information about the `multisign-batch` command can be found running `simd tx multisign-batch --help`. #### `validate-signatures` The `validate-signatures` command allows users to validate the signatures of a signed transaction. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} $ simd tx validate-signatures tx.signed.json Signers: 0: cosmos1l6vsqhh7rnwsyr2kyz3jjg3qduaz8gwgyl8275 Signatures: 0: cosmos1l6vsqhh7rnwsyr2kyz3jjg3qduaz8gwgyl8275 [OK] ``` More information about the `validate-signatures` command can be found running `simd tx validate-signatures --help`. #### `broadcast` The `broadcast` command allows users to broadcast a signed transaction to the network. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx broadcast tx.signed.json ``` More information about the `broadcast` command can be found running `simd tx broadcast --help`. ### gRPC A user can query the `auth` module using gRPC endpoints. #### Account The `account` endpoint allows users to query an account by its address. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.auth.v1beta1.Query/Account ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"address":"cosmos1.."}' \ localhost:9090 \ cosmos.auth.v1beta1.Query/Account ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "account":{ "@type":"/cosmos.auth.v1beta1.BaseAccount", "address":"cosmos1zwg6tpl8aw4rawv8sgag9086lpw5hv33u5ctr2", "pubKey":{ "@type":"/cosmos.crypto.secp256k1.PubKey", "key":"ApDrE38zZdd7wLmFS9YmqO684y5DG6fjZ4rVeihF/AQD" }, "sequence":"1" } } ``` #### Accounts The `accounts` endpoint allows users to query all available accounts. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.auth.v1beta1.Query/Accounts ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ localhost:9090 \ cosmos.auth.v1beta1.Query/Accounts ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "accounts":[ { "@type":"/cosmos.auth.v1beta1.BaseAccount", "address":"cosmos1zwg6tpl8aw4rawv8sgag9086lpw5hv33u5ctr2", "pubKey":{ "@type":"/cosmos.crypto.secp256k1.PubKey", "key":"ApDrE38zZdd7wLmFS9YmqO684y5DG6fjZ4rVeihF/AQD" }, "sequence":"1" }, { "@type":"/cosmos.auth.v1beta1.ModuleAccount", "baseAccount":{ "address":"cosmos1yl6hdjhmkf37639730gffanpzndzdpmhwlkfhr", "accountNumber":"8" }, "name":"transfer", "permissions":[ "minter", "burner" ] }, { "@type":"/cosmos.auth.v1beta1.ModuleAccount", "baseAccount":{ "address":"cosmos1fl48vsnmsdzcv85q5d2q4z5ajdha8yu34mf0eh", "accountNumber":"4" }, "name":"bonded_tokens_pool", "permissions":[ "burner", "staking" ] }, { "@type":"/cosmos.auth.v1beta1.ModuleAccount", "baseAccount":{ "address":"cosmos1tygms3xhhs3yv487phx3dw4a95jn7t7lpm470r", "accountNumber":"5" }, "name":"not_bonded_tokens_pool", "permissions":[ "burner", "staking" ] }, { "@type":"/cosmos.auth.v1beta1.ModuleAccount", "baseAccount":{ "address":"cosmos10d07y265gmmuvt4z0w9aw880jnsr700j6zn9kn", "accountNumber":"6" }, "name":"gov", "permissions":[ "burner" ] }, { "@type":"/cosmos.auth.v1beta1.ModuleAccount", "baseAccount":{ "address":"cosmos1jv65s3grqf6v6jl3dp4t6c9t9rk99cd88lyufl", "accountNumber":"3" }, "name":"distribution" }, { "@type":"/cosmos.auth.v1beta1.BaseAccount", "accountNumber":"1", "address":"cosmos147k3r7v2tvwqhcmaxcfql7j8rmkrlsemxshd3j" }, { "@type":"/cosmos.auth.v1beta1.ModuleAccount", "baseAccount":{ "address":"cosmos1m3h30wlvsf8llruxtpukdvsy0km2kum8g38c8q", "accountNumber":"7" }, "name":"mint", 
"permissions":[ "minter" ] }, { "@type":"/cosmos.auth.v1beta1.ModuleAccount", "baseAccount":{ "address":"cosmos17xpfvakm2amg962yls6f84z3kell8c5lserqta", "accountNumber":"2" }, "name":"fee_collector" } ], "pagination":{ "total":"9" } } ``` #### Params The `params` endpoint allows users to query the current auth parameters. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.auth.v1beta1.Query/Params ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ localhost:9090 \ cosmos.auth.v1beta1.Query/Params ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "params": { "maxMemoCharacters": "256", "txSigLimit": "7", "txSizeCostPerByte": "10", "sigVerifyCostEd25519": "590", "sigVerifyCostSecp256k1": "1000" } } ``` ### REST A user can query the `auth` module using REST endpoints. #### Account The `account` endpoint allows users to query an account by its address. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/auth/v1beta1/account?address={address} ``` #### Accounts The `accounts` endpoint allows users to query all available accounts. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/auth/v1beta1/accounts ``` #### Params The `params` endpoint allows users to query the current auth parameters. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/auth/v1beta1/params ``` # x/auth/tx Source: https://docs.cosmos.network/sdk/latest/modules/auth/tx **Prerequisite Readings** * [Transactions](/sdk/latest/learn/concepts/lifecycle#transaction-generation) * [Encoding](/sdk/latest/learn/concepts/encoding#transaction-encoding) ## Abstract This document specifies the `x/auth/tx` package of the Cosmos SDK. 
This package represents the Cosmos SDK implementation of the `client.TxConfig`, `client.TxBuilder`, `client.TxEncoder` and `client.TxDecoder` interfaces. ## Contents * [Transactions](#transactions) * [`TxConfig`](#txconfig) * [`TxBuilder`](#txbuilder) * [`TxEncoder`/`TxDecoder`](#txencoder-txdecoder) * [Client](#client) * [CLI](#cli) * [gRPC](#grpc) ## Transactions ### `TxConfig` `client.TxConfig` defines an interface a client can utilize to generate an application-defined concrete transaction type. The interface defines a set of methods for creating a `client.TxBuilder`. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/client/tx_config.go#L25-L31 ``` The default implementation of `client.TxConfig` is instantiated by `NewTxConfig` in the `x/auth/tx` module. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/x/auth/tx/config.go#L22-L28 ``` ### `TxBuilder` ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/client/tx_config.go#L33-L50 ``` The [`client.TxBuilder`](/sdk/latest/learn/concepts/lifecycle#transaction-generation) interface is also implemented by `x/auth/tx`. A `client.TxBuilder` can be accessed with `TxConfig.NewTxBuilder()`. ### `TxEncoder`/`TxDecoder` More information about `TxEncoder` and `TxDecoder` can be found [here](/sdk/latest/learn/concepts/encoding#transaction-encoding). ## Client ### CLI #### Query The `x/auth/tx` module provides a CLI command to query any transaction, given its hash, account sequence, or signature. Without any argument, the command will query the transaction using the transaction hash. 
```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query tx DFE87B78A630C0EFDF76C80CD24C997E252792E0317502AE1A02B9809F0D8685 ``` When querying a transaction from an account given its sequence, use the `--type=acc_seq` flag: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query tx --type=acc_seq cosmos1u69uyr6v9qwe6zaaeaqly2h6wnedac0xpxq325/1 ``` When querying a transaction given its signature, use the `--type=signature` flag: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query tx --type=signature Ofjvgrqi8twZfqVDmYIhqwRLQjZZ40XbxEamk/veH3gQpRF0hL2PH4ejRaDzAX+2WChnaWNQJQ41ekToIi5Wqw== ``` When querying a transaction given its events, use the `--type=events` flag: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query txs --events 'message.sender=cosmos...' --page 1 --limit 30 ``` The CLI also provides a command to query any block, given its hash, height, or events. When querying a block by its hash, use the `--type=hash` flag: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query block --type=hash DFE87B78A630C0EFDF76C80CD24C997E252792E0317502AE1A02B9809F0D8685 ``` When querying a block by its height, use the `--type=height` flag: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query block --type=height 1357 ``` When querying a block by its events, use the `--query` flag: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query blocks --query 'message.sender=cosmos...' --page 1 --limit 30 ``` #### Transactions The `x/auth/tx` module provides convenient CLI commands for encoding and decoding transactions. 
#### `encode` The `encode` command encodes a transaction created with the `--generate-only` flag or signed with the `sign` command. The transaction is serialized to Protobuf and returned as base64. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} $ simd tx encode tx.json Co8BCowBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEmwKLWNvc21vczFsNnZzcWhoN3Jud3N5cjJreXozampnM3FkdWF6OGd3Z3lsODI3NRItY29zbW9zMTU4c2FsZHlnOHBteHU3Znd2dDBkNng3amVzd3A0Z3d5a2xrNnkzGgwKBXN0YWtlEgMxMDASBhIEEMCaDA== $ simd tx encode tx.signed.json ``` More information about the `encode` command can be found running `simd tx encode --help`. #### `decode` The `decode` command decodes a transaction encoded with the `encode` command. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx decode Co8BCowBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEmwKLWNvc21vczFsNnZzcWhoN3Jud3N5cjJreXozampnM3FkdWF6OGd3Z3lsODI3NRItY29zbW9zMTU4c2FsZHlnOHBteHU3Znd2dDBkNng3amVzd3A0Z3d5a2xrNnkzGgwKBXN0YWtlEgMxMDASBhIEEMCaDA== ``` More information about the `decode` command can be found running `simd tx decode --help`. ### gRPC A user can query the `x/auth/tx` module using gRPC endpoints. #### `TxDecode` The `TxDecode` endpoint allows users to decode a transaction. 
```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.tx.v1beta1.Service/TxDecode ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"tx_bytes":"Co8BCowBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEmwKLWNvc21vczFsNnZzcWhoN3Jud3N5cjJreXozampnM3FkdWF6OGd3Z3lsODI3NRItY29zbW9zMTU4c2FsZHlnOHBteHU3Znd2dDBkNng3amVzd3A0Z3d5a2xrNnkzGgwKBXN0YWtlEgMxMDASBhIEEMCaDA=="}' \ localhost:9090 \ cosmos.tx.v1beta1.Service/TxDecode ``` Example Output: ```json expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "tx": { "body": { "messages": [ { "@type": "/cosmos.bank.v1beta1.MsgSend", "amount": [ { "denom": "stake", "amount": "100" } ], "fromAddress": "cosmos1l6vsqhh7rnwsyr2kyz3jjg3qduaz8gwgyl8275", "toAddress": "cosmos158saldyg8pmxu7fwvt0d6x7jeswp4gwyklk6y3" } ] }, "authInfo": { "fee": { "gasLimit": "200000" } } } } ``` #### `TxEncode` The `TxEncode` endpoint allows users to encode a transaction. 
```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.tx.v1beta1.Service/TxEncode ``` Example: ```shell expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"tx": { "body": { "messages": [ {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"100"}],"fromAddress":"cosmos1l6vsqhh7rnwsyr2kyz3jjg3qduaz8gwgyl8275","toAddress":"cosmos158saldyg8pmxu7fwvt0d6x7jeswp4gwyklk6y3"} ] }, "authInfo": { "fee": { "gasLimit": "200000" } } }}' \ localhost:9090 \ cosmos.tx.v1beta1.Service/TxEncode ``` Example Output: ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "txBytes": "Co8BCowBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEmwKLWNvc21vczFsNnZzcWhoN3Jud3N5cjJreXozampnM3FkdWF6OGd3Z3lsODI3NRItY29zbW9zMTU4c2FsZHlnOHBteHU3Znd2dDBkNng3amVzd3A0Z3d5a2xrNnkzGgwKBXN0YWtlEgMxMDASBhIEEMCaDA==" } ``` #### `TxDecodeAmino` The `TxDecodeAmino` endpoint allows users to decode an Amino-encoded transaction. 
```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.tx.v1beta1.Service/TxDecodeAmino ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"amino_binary": "KCgWqQpvqKNhmgotY29zbW9zMXRzeno3cDJ6Z2Q3dnZrYWh5ZnJlNHduNXh5dTgwcnB0ZzZ2OWg1Ei1jb3Ntb3MxdHN6ejdwMnpnZDd2dmthaHlmcmU0d241eHl1ODBycHRnNnY5aDUaCwoFc3Rha2USAjEwEhEKCwoFc3Rha2USAjEwEMCaDCIGZm9vYmFy"}' \ localhost:9090 \ cosmos.tx.v1beta1.Service/TxDecodeAmino ``` Example Output: ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "aminoJson": "{\"type\":\"cosmos-sdk/StdTx\",\"value\":{\"msg\":[{\"type\":\"cosmos-sdk/MsgSend\",\"value\":{\"from_address\":\"cosmos1tszz7p2zgd7vvkahyfre4wn5xyu80rptg6v9h5\",\"to_address\":\"cosmos1tszz7p2zgd7vvkahyfre4wn5xyu80rptg6v9h5\",\"amount\":[{\"denom\":\"stake\",\"amount\":\"10\"}]}}],\"fee\":{\"amount\":[{\"denom\":\"stake\",\"amount\":\"10\"}],\"gas\":\"200000\"},\"signatures\":null,\"memo\":\"foobar\",\"timeout_height\":\"0\"}}" } ``` #### `TxEncodeAmino` The `TxEncodeAmino` endpoint allows users to encode an Amino-encoded transaction. 
```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.tx.v1beta1.Service/TxEncodeAmino ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"amino_json":"{\"type\":\"cosmos-sdk/StdTx\",\"value\":{\"msg\":[{\"type\":\"cosmos-sdk/MsgSend\",\"value\":{\"from_address\":\"cosmos1tszz7p2zgd7vvkahyfre4wn5xyu80rptg6v9h5\",\"to_address\":\"cosmos1tszz7p2zgd7vvkahyfre4wn5xyu80rptg6v9h5\",\"amount\":[{\"denom\":\"stake\",\"amount\":\"10\"}]}}],\"fee\":{\"amount\":[{\"denom\":\"stake\",\"amount\":\"10\"}],\"gas\":\"200000\"},\"signatures\":null,\"memo\":\"foobar\",\"timeout_height\":\"0\"}}"}' \ localhost:9090 \ cosmos.tx.v1beta1.Service/TxEncodeAmino ``` Example Output: ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "amino_binary": "KCgWqQpvqKNhmgotY29zbW9zMXRzeno3cDJ6Z2Q3dnZrYWh5ZnJlNHduNXh5dTgwcnB0ZzZ2OWg1Ei1jb3Ntb3MxdHN6ejdwMnpnZDd2dmthaHlmcmU0d241eHl1ODBycHRnNnY5aDUaCwoFc3Rha2USAjEwEhEKCwoFc3Rha2USAjEwEMCaDCIGZm9vYmFy" } ``` # x/auth/vesting Source: https://docs.cosmos.network/sdk/latest/modules/auth/vesting * [Intro and Requirements](#intro-and-requirements) * [Note](#note) * [Vesting Account Types](#vesting-account-types) * [BaseVestingAccount](#basevestingaccount) * [ContinuousVestingAccount](#continuousvestingaccount) * [DelayedVestingAccount](#delayedvestingaccount) * [Period](#period) * [PeriodicVestingAccount](#periodicvestingaccount) * [PermanentLockedAccount](#permanentlockedaccount) * [Vesting Account Specification](#vesting-account-specification) * [Determining Vesting & Vested Amounts](#determining-vesting-&-vested-amounts) * [Periodic Vesting Accounts](#periodic-vesting-accounts) * [Transferring/Sending](#transferringsending) * [Delegating](#delegating) * [Undelegating](#undelegating) * [Keepers & Handlers](#keepers-&-handlers) * [Genesis 
Initialization](#genesis-initialization) * [Examples](#examples) * [Simple](#simple) * [Slashing](#slashing) * [Periodic Vesting](#periodic-vesting) * [Glossary](#glossary) ## Intro and Requirements This specification defines the vesting account implementation that is used by the Cosmos Hub. The requirements for this vesting account are that it should be initialized during genesis with a starting balance `X` and a vesting end time `ET`. A vesting account may be initialized with a vesting start time `ST` and a number of vesting periods `P`. If a vesting start time is included, the vesting period does not begin until the start time is reached. If vesting periods are included, the vesting occurs over the specified number of periods. For all vesting accounts, the owner of the vesting account is able to delegate and undelegate from validators; however, they cannot transfer coins to another account until those coins are vested. This specification allows for four different kinds of vesting: * Delayed vesting, where all coins are vested once `ET` is reached. * Continuous vesting, where coins begin to vest at `ST` and vest linearly with respect to time until `ET` is reached. * Periodic vesting, where coins begin to vest at `ST` and vest periodically according to the number of periods and the vesting amount per period. The number of periods, length per period, and amount per period are configurable. A periodic vesting account is distinguished from a continuous vesting account in that coins can be released in staggered tranches. For example, a periodic vesting account could be used for vesting arrangements where coins are released quarterly, yearly, or on any other schedule. * Permanent locked vesting, where coins are locked forever. Coins in this account can still be used for delegating and for governance votes even while locked. ## Note Vesting accounts can be initialized with some vesting and non-vesting coins. 
The non-vesting coins would be immediately transferable. DelayedVesting, ContinuousVesting, PeriodicVesting, and PermanentLocked accounts can be created with normal messages after genesis. Other types of vesting accounts must be created at genesis, or as part of a manual network upgrade. The current specification only allows for *unconditional* vesting (i.e. there is no possibility of reaching `ET` and having coins fail to vest). ## Vesting Account Types ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // VestingAccount defines an interface that any vesting account type must // implement. type VestingAccount interface { Account GetVestedCoins(Time) Coins GetVestingCoins(Time) Coins // TrackDelegation performs internal vesting accounting necessary when // delegating from a vesting account. It accepts the current block time, the // delegation amount and balance of all coins whose denomination exists in // the account's original vesting balance. TrackDelegation(Time, Coins, Coins) // TrackUndelegation performs internal vesting accounting necessary when a // vesting account performs an undelegation. 
TrackUndelegation(Coins) GetStartTime() int64 GetEndTime() int64 } ``` ### BaseVestingAccount ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/vesting/v1beta1/vesting.proto#L11-L35 ``` ### ContinuousVestingAccount ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/vesting/v1beta1/vesting.proto#L37-L46 ``` ### DelayedVestingAccount ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/vesting/v1beta1/vesting.proto#L48-L57 ``` ### Period ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/vesting/v1beta1/vesting.proto#L59-L69 ``` ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Stores all vesting periods passed as part of a PeriodicVestingAccount type Periods []Period ``` ### PeriodicVestingAccount ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/vesting/v1beta1/vesting.proto#L71-L81 ``` In order to facilitate less ad-hoc type checking and assertions and to support flexibility in account balance usage, the existing `x/bank` `ViewKeeper` interface is updated to contain the following: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type ViewKeeper interface { // ... // Calculates the total locked account balance. 
LockedCoins(ctx sdk.Context, addr sdk.AccAddress) sdk.Coins // Calculates the total spendable balance that can be sent to other accounts. SpendableCoins(ctx sdk.Context, addr sdk.AccAddress) sdk.Coins } ``` ### PermanentLockedAccount ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/vesting/v1beta1/vesting.proto#L83-L94 ``` ## Vesting Account Specification Given a vesting account, we define the following for the operations that follow: * `OV`: The original vesting coin amount. It is a constant value. * `V`: The number of `OV` coins that are still *vesting*. It is derived from `OV`, `StartTime`, and `EndTime`. This value is computed on demand and not on a per-block basis. * `V'`: The number of `OV` coins that are *vested* (unlocked). This value is computed on demand and not on a per-block basis. * `DV`: The number of delegated *vesting* coins. It is a variable value. It is stored and modified directly in the vesting account. * `DF`: The number of delegated *vested* (unlocked) coins. It is a variable value. It is stored and modified directly in the vesting account. * `BC`: The number of `OV` coins less any coins that are transferred (which can be negative or delegated). It is considered to be the balance of the embedded base account. It is stored and modified directly in the vesting account. ### Determining Vesting & Vested Amounts It is important to note that these values are computed on demand and not on a mandatory per-block basis (e.g. `BeginBlocker` or `EndBlocker`). #### Continuously Vesting Accounts To determine the amount of coins that are vested for a given block time `T`, the following is performed: 1. Compute `X := T - StartTime` 2. Compute `Y := EndTime - StartTime` 3. Compute `V' := OV * (X / Y)` 4. Compute `V := OV - V'` Thus, the total amount of *vested* coins is `V'` and the remaining amount, `V`, is *vesting*. 
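The four steps above can be worked as a runnable arithmetic sketch (hypothetical single-denom integer amounts; the SDK operates on multi-denom `Coins` with decimal truncation):

```go
package main

import "fmt"

// vestedCoins applies the continuous-vesting formula V' = OV * (T-ST)/(ET-ST),
// clamped to [0, OV]. Integer division truncates.
func vestedCoins(ov, startTime, endTime, t int64) int64 {
	if t <= startTime {
		return 0
	}
	if t >= endTime {
		return ov
	}
	x := t - startTime       // X := T - StartTime
	y := endTime - startTime // Y := EndTime - StartTime
	return ov * x / y        // V' := OV * (X / Y)
}

func main() {
	// Hypothetical schedule: 1000 coins vesting linearly over 100 seconds.
	const ov, st, et = 1000, 0, 100

	vPrime := vestedCoins(ov, st, et, 25)
	fmt.Println(vPrime)      // 250 coins vested a quarter of the way through
	fmt.Println(ov - vPrime) // V := OV - V' = 750 still vesting
}
```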
```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (cva ContinuousVestingAccount) GetVestedCoins(t Time) Coins { if t <= cva.StartTime { // We must handle the case where the start time for a vesting account has // been set into the future or when the start of the chain is not exactly // known. return ZeroCoins } else if t >= cva.EndTime { return cva.OriginalVesting } x := t - cva.StartTime y := cva.EndTime - cva.StartTime return cva.OriginalVesting * (x / y) } func (cva ContinuousVestingAccount) GetVestingCoins(t Time) Coins { return cva.OriginalVesting - cva.GetVestedCoins(t) } ``` #### Periodic Vesting Accounts Periodic vesting accounts require calculating the coins released during each period for a given block time `T`. Note that multiple periods could have passed when calling `GetVestedCoins`, so we must iterate over each period until the end of that period is after `T`. 1. Set `CT := StartTime` 2. Set `V' := 0` For each Period P: 1. Compute `X := T - CT` 2. IF `X >= P.Length` 1. Compute `V' += P.Amount` 2. Compute `CT += P.Length` 3. ELSE break 3. Compute `V := OV - V'` ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (pva PeriodicVestingAccount) GetVestedCoins(t Time) Coins { if t < pva.StartTime { return ZeroCoins } ct := pva.StartTime // The start of the vesting schedule vested := 0 periods := pva.GetPeriods() for _, period := range periods { if t - ct < period.Length { break } vested += period.Amount ct += period.Length // increment ct to the start of the next vesting period } return vested } func (pva PeriodicVestingAccount) GetVestingCoins(t Time) Coins { return pva.OriginalVesting - pva.GetVestedCoins(t) } ``` #### Delayed/Discrete Vesting Accounts Delayed vesting accounts are easier to reason about as they only have the full amount vesting up until a certain time, then all the coins become vested (unlocked). 
This does not include any unlocked coins the account may have initially. ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (dva DelayedVestingAccount) GetVestedCoins(t Time) Coins { if t >= dva.EndTime { return dva.OriginalVesting } return ZeroCoins } func (dva DelayedVestingAccount) GetVestingCoins(t Time) Coins { return dva.OriginalVesting - dva.GetVestedCoins(t) } ``` ### Transferring/Sending At any given time, a vesting account may transfer: `min((BC + DV) - V, BC)`. In other words, a vesting account may transfer the minimum of the base account balance and the base account balance plus the number of currently delegated vesting coins less the number of coins still vesting. However, given that account balances are tracked via the `x/bank` module and that we want to avoid loading the entire account balance, we can instead determine the locked balance, which can be defined as `max(V - DV, 0)`, and infer the spendable balance from that. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (va VestingAccount) LockedCoins(t Time) Coins { return max(va.GetVestingCoins(t) - va.DelegatedVesting, 0) } ``` The `x/bank` `ViewKeeper` can then provide APIs to determine locked and spendable coins for any account: ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (k Keeper) LockedCoins(ctx Context, addr AccAddress) Coins { acc := k.GetAccount(ctx, addr) if acc != nil { if acc.IsVesting() { return acc.LockedCoins(ctx.BlockTime()) } } // non-vesting accounts do not have any locked coins return NewCoins() } ``` #### Keepers/Handlers The corresponding `x/bank` keeper should appropriately handle sending coins based on whether the account is a vesting account or not. 
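The locked-balance derivation `max(V - DV, 0)` can be sketched with plain integers (hypothetical single-denom amounts; the SDK works with multi-denom `Coins`):

```go
package main

import "fmt"

// lockedCoins computes max(V - DV, 0): the vesting coins not already covered
// by delegations, which is exactly the portion of the bank balance that may
// not be transferred.
func lockedCoins(vesting, delegatedVesting int64) int64 {
	if locked := vesting - delegatedVesting; locked > 0 {
		return locked
	}
	return 0
}

func main() {
	// Hypothetical account: 600 coins still vesting, 200 of them already
	// delegated, and a bank balance of 900.
	locked := lockedCoins(600, 200)
	fmt.Println(locked)       // 400 coins locked
	fmt.Println(900 - locked) // 500 coins spendable
}
```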
```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (k Keeper) SendCoins(ctx Context, from Account, to Account, amount Coins) {
    bc := k.GetBalances(ctx, from)
    v := k.LockedCoins(ctx, from)

    spendable := bc - v
    newCoins := spendable - amount
    assert(newCoins >= 0)

    from.SetBalance(newCoins)
    to.AddBalance(amount)

    // save balances...
}
```

### Delegating

For a vesting account attempting to delegate `D` coins, the following is performed:

1. Verify `BC >= D > 0`
2. Compute `X := min(max(V - DV, 0), D)` (portion of `D` that is vesting)
3. Compute `Y := D - X` (portion of `D` that is free)
4. Set `DV += X`
5. Set `DF += Y`

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (va VestingAccount) TrackDelegation(t Time, balance Coins, amount Coins) {
    assert(amount <= balance)
    x := min(max(va.GetVestingCoins(t) - va.DelegatedVesting, 0), amount)
    y := amount - x

    va.DelegatedVesting += x
    va.DelegatedFree += y
}
```

**Note** `TrackDelegation` only modifies the `DelegatedVesting` and `DelegatedFree` fields, so upstream callers MUST modify the `Coins` field by subtracting `amount`.

#### Keepers/Handlers

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func DelegateCoins(t Time, from Account, amount Coins) {
    if isVesting(from) {
        from.TrackDelegation(t, from.Balance, amount)
    } else {
        from.SetBalance(from.Balance - amount)
    }

    // save account...
}
```

### Undelegating

For a vesting account attempting to undelegate `D` coins, the following is performed:

> NOTE: `DV < D` and `(DV + DF) < D` may be possible due to quirks in the rounding of delegation/undelegation logic.

1. Verify `D > 0`
2. Compute `X := min(DF, D)` (portion of `D` that should become free, prioritizing free coins)
3. Compute `Y := min(DV, D - X)` (portion of `D` that should remain vesting)
4. Set `DF -= X`
5. Set `DV -= Y`

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (cva ContinuousVestingAccount) TrackUndelegation(amount Coins) {
    x := min(cva.DelegatedFree, amount)
    y := amount - x

    cva.DelegatedFree -= x
    cva.DelegatedVesting -= y
}
```

**Note** `TrackUndelegation` only modifies the `DelegatedVesting` and `DelegatedFree` fields, so upstream callers MUST modify the `Coins` field by adding `amount`.

**Note**: If a delegation is slashed, the continuous vesting account ends up with an excess `DV` amount, even after all its coins have vested. This is because undelegation of free coins is prioritized.

**Note**: The undelegation (bond refund) amount may exceed the delegated vesting (bond) amount due to the way undelegation truncates the bond refund, which can increase the validator's exchange rate (tokens/shares) slightly if the undelegated tokens are non-integral.

#### Keepers/Handlers

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func UndelegateCoins(to Account, amount Coins) {
    if isVesting(to) {
        if to.DelegatedFree + to.DelegatedVesting >= amount {
            to.TrackUndelegation(amount)
            // save account ...
        }
    } else {
        AddBalance(to, amount)
        // save account...
    }
}
```

## Keepers & Handlers

The `VestingAccount` implementations reside in `x/auth`. However, any keeper in a module (e.g. staking in `x/staking`) wishing to potentially utilize any vesting coins must call explicit methods on the `x/bank` keeper (e.g. `DelegateCoins`) as opposed to `SendCoins` and `SubtractCoins`.

In addition, a vesting account should also be able to spend any coins it receives from other users. Thus, the bank module's `MsgSend` handler should error if a vesting account is trying to send an amount that exceeds its unlocked coin amount.

See the above specification for full implementation details.
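The vesting arithmetic above can be exercised as a small, self-contained Go sketch. It uses a single `int64` amount in place of `sdk.Coins`, and the type and method names (`ContinuousVesting`, `VestedCoins`, `LockedCoins`) are illustrative, not the actual `x/auth` types:

```go
package main

import "fmt"

// Illustrative, SDK-free sketch of the vesting math above. A single int64
// stands in for sdk.Coins; all names here are made up for this example.
type ContinuousVesting struct {
	OriginalVesting int64
	StartTime       int64 // unix seconds
	EndTime         int64 // unix seconds
}

// VestedCoins mirrors GetVestedCoins: linear release between StartTime and EndTime.
func (c ContinuousVesting) VestedCoins(t int64) int64 {
	if t <= c.StartTime {
		return 0 // start time may have been set in the future
	}
	if t >= c.EndTime {
		return c.OriginalVesting
	}
	// Integer math truncates, matching on-chain behavior.
	return c.OriginalVesting * (t - c.StartTime) / (c.EndTime - c.StartTime)
}

// VestingCoins is the still-locked remainder (V = OV - V').
func (c ContinuousVesting) VestingCoins(t int64) int64 {
	return c.OriginalVesting - c.VestedCoins(t)
}

// LockedCoins mirrors max(V - DV, 0): vesting coins that are already
// delegated do not lock the bank balance a second time.
func (c ContinuousVesting) LockedCoins(t, delegatedVesting int64) int64 {
	if locked := c.VestingCoins(t) - delegatedVesting; locked > 0 {
		return locked
	}
	return 0
}

func main() {
	cva := ContinuousVesting{OriginalVesting: 10, StartTime: 0, EndTime: 100}
	fmt.Println(cva.VestedCoins(50))    // halfway through: 5 vested
	fmt.Println(cva.VestingCoins(50))   // 5 still vesting
	fmt.Println(cva.LockedCoins(50, 4)) // 5 vesting - 4 delegated = 1 locked
}
```

Note the integer division in `VestedCoins`: as with on-chain arithmetic, partial coins truncate toward zero until enough time has elapsed.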
## Genesis Initialization

To initialize both vesting and non-vesting accounts, the `GenesisAccount` struct includes new fields: `Vesting`, `StartTime`, and `EndTime`. Accounts meant to be of type `BaseAccount` or any non-vesting type have `Vesting = false`. The genesis initialization logic (e.g. `initFromGenesisState`) must parse and return the correct accounts accordingly based on these fields.

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type GenesisAccount struct {
    // ...

    // vesting account fields
    OriginalVesting  sdk.Coins `json:"original_vesting"`
    DelegatedFree    sdk.Coins `json:"delegated_free"`
    DelegatedVesting sdk.Coins `json:"delegated_vesting"`
    StartTime        int64     `json:"start_time"`
    EndTime          int64     `json:"end_time"`
}

func ToAccount(gacc GenesisAccount) Account {
    bacc := NewBaseAccount(gacc)

    if gacc.OriginalVesting > 0 {
        if gacc.StartTime != 0 && gacc.EndTime != 0 {
            // return a continuous vesting account
        } else if gacc.EndTime != 0 {
            // return a delayed vesting account
        } else {
            // invalid genesis vesting account provided
            panic()
        }
    }

    return bacc
}
```

## Examples

### Simple

Given a continuous vesting account with 10 vesting coins.

```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
OV = 10
DF = 0
DV = 0
BC = 10
V = 10
V' = 0
```

1. Immediately receives 1 coin

```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
BC = 11
```

2. Time passes, 2 coins vest

```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
V = 8
V' = 2
```

3. Delegates 4 coins to validator A

```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
DV = 4
BC = 7
```

4. Sends 3 coins

```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
BC = 4
```

5.
More time passes, 2 more coins vest

```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
V = 6
V' = 4
```

6. Sends 2 coins. At this point the account cannot send any more until further coins vest or it receives additional coins. It can, however, still delegate.

```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
BC = 2
```

### Slashing

Same initial starting conditions as the simple example.

1. Time passes, 5 coins vest

```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
V = 5
V' = 5
```

2. Delegate 5 coins to validator A

```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
DV = 5
BC = 5
```

3. Delegate 5 coins to validator B

```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
DF = 5
BC = 0
```

4. Validator A gets slashed by 50%, making the delegation to A now worth 2.5 coins

5. Undelegate from validator A (2.5 coins)

```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
DF = 5 - 2.5 = 2.5
BC = 0 + 2.5 = 2.5
```

6. Undelegate from validator B (5 coins). The account at this point can only send 2.5 coins unless it receives more coins or until more coins vest. It can, however, still delegate.

```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
DV = 5 - 2.5 = 2.5
DF = 2.5 - 2.5 = 0
BC = 2.5 + 5 = 7.5
```

Notice how we have an excess amount of `DV`.

### Periodic Vesting

A vesting account is created where 100 tokens will be released over 1 year, with 1/4 of the tokens vesting each quarter.
The vesting schedule would be as follows: ```yaml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} Periods: - amount: 25stake, length: 7884000 - amount: 25stake, length: 7884000 - amount: 25stake, length: 7884000 - amount: 25stake, length: 7884000 ``` ```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} OV = 100 DF = 0 DV = 0 BC = 100 V = 100 V' = 0 ``` 1. Immediately receives 1 coin ```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} BC = 101 ``` 2. Vesting period 1 passes, 25 coins vest ```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} V = 75 V' = 25 ``` 3. During vesting period 2, 5 coins are transferred and 5 coins are delegated ```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} DV = 5 BC = 91 ``` 4. Vesting period 2 passes, 25 coins vest ```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} V = 50 V' = 50 ``` ## Glossary * OriginalVesting: The amount of coins (per denomination) that are initially part of a vesting account. These coins are set at genesis. * StartTime: The BFT time at which a vesting account starts to vest. * EndTime: The BFT time at which a vesting account is fully vested. * DelegatedFree: The tracked amount of coins (per denomination) that are delegated from a vesting account that have been fully vested at time of delegation. * DelegatedVesting: The tracked amount of coins (per denomination) that are delegated from a vesting account that were vesting at time of delegation. * ContinuousVestingAccount: A vesting account implementation that vests coins linearly over time. * DelayedVestingAccount: A vesting account implementation that only fully vests all coins at a given time. 
* PeriodicVestingAccount: A vesting account implementation that vests coins according to a custom vesting schedule.
* PermanentLockedAccount: A vesting account implementation that never releases coins, locking them indefinitely. Coins in this account can still be used for delegating and for governance votes even while locked.

## CLI

A user can query and interact with the `vesting` module using the CLI.

### Transactions

The `tx` commands allow users to interact with the `vesting` module.

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx vesting --help
```

#### create-periodic-vesting-account

The `create-periodic-vesting-account` command creates a new vesting account funded with an allocation of tokens that vest according to a schedule defined as a sequence of coin amounts and period lengths in seconds. Periods are sequential, in that the duration of a period only starts at the end of the previous period. The duration of the first period starts upon account creation.

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx vesting create-periodic-vesting-account [to_address] [periods_json_file] [flags]
```

Example:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx vesting create-periodic-vesting-account cosmos1.. periods.json
```

#### create-vesting-account

The `create-vesting-account` command creates a new vesting account funded with an allocation of tokens. The account can either be a delayed or continuous vesting account, which is determined by the `--delayed` flag. All vesting accounts created will have their start time set to the committed block's time. The `end_time` must be provided as a UNIX epoch timestamp.
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx vesting create-vesting-account [to_address] [amount] [end_time] [flags]
```

Example:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx vesting create-vesting-account cosmos1.. 100stake 2592000
```

# x/authz
Source: https://docs.cosmos.network/sdk/latest/modules/authz/README

## Abstract

`x/authz` is an implementation of a Cosmos SDK module, per [ADR 30](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-030-authz-module.md), that allows granting arbitrary privileges from one account (the granter) to another account (the grantee). Authorizations must be granted for a particular Msg service method one by one using an implementation of the `Authorization` interface.

## Contents

* [Concepts](#concepts)
  * [Authorization and Grant](#authorization-and-grant)
  * [Built-in Authorizations](#built-in-authorizations)
  * [Gas](#gas)
* [State](#state)
  * [Grant](#grant)
  * [GrantQueue](#grantqueue)
* [Messages](#messages)
  * [MsgGrant](#msggrant)
  * [MsgRevoke](#msgrevoke)
  * [MsgExec](#msgexec)
* [Events](#events)
* [Client](#client)
  * [CLI](#cli)
  * [gRPC](#grpc)
  * [REST](#rest)

## Concepts

### Authorization and Grant

The `x/authz` module defines interfaces and messages to grant authorizations to perform actions on behalf of one account to other accounts. The design is defined in [ADR 030](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-030-authz-module.md).

A *grant* is an allowance to execute a Msg by the grantee on behalf of the granter. `Authorization` is an interface that must be implemented by concrete authorization logic to validate and execute grants. Authorizations are extensible and can be defined for any Msg service method, even outside of the module where the Msg method is defined. See the `SendAuthorization` example in the next section for more details.
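To make the grant lifecycle concrete, here is a toy, SDK-free sketch of an authorization that permits a fixed number of executions. `CountedAuthorization` and `AcceptResult` are made-up names that only mirror the shape of the real `Authorization` interface and `AcceptResponse`; they are not part of `x/authz`:

```go
package main

import (
	"errors"
	"fmt"
)

// AcceptResult mirrors the shape of authz's AcceptResponse: accept/reject the
// execution, and tell the caller how to update the stored grant.
type AcceptResult struct {
	Accept  bool                  // is this execution permitted?
	Delete  bool                  // grant exhausted: remove it from storage
	Updated *CountedAuthorization // replacement state to persist, if any
}

// CountedAuthorization is a made-up authorization permitting N executions.
type CountedAuthorization struct {
	MsgTypeURL string
	Remaining  int
}

// Accept consumes one execution and tells the caller whether to update or
// delete the stored grant, analogous to Authorization.Accept in x/authz.
func (a CountedAuthorization) Accept(msgURL string) (AcceptResult, error) {
	if msgURL != a.MsgTypeURL {
		return AcceptResult{}, errors.New("authorization does not cover this msg type")
	}
	if a.Remaining <= 1 {
		// Last (or no) execution left: the grant must be deleted either way.
		return AcceptResult{Accept: a.Remaining == 1, Delete: true}, nil
	}
	return AcceptResult{
		Accept:  true,
		Updated: &CountedAuthorization{MsgTypeURL: a.MsgTypeURL, Remaining: a.Remaining - 1},
	}, nil
}

func main() {
	grant := CountedAuthorization{MsgTypeURL: "/cosmos.bank.v1beta1.MsgSend", Remaining: 2}

	res, _ := grant.Accept("/cosmos.bank.v1beta1.MsgSend")
	fmt.Println(res.Accept, res.Delete, res.Updated.Remaining) // true false 1

	res, _ = res.Updated.Accept("/cosmos.bank.v1beta1.MsgSend")
	fmt.Println(res.Accept, res.Delete) // true true
}
```

The important pattern is the tri-state result: reject with an error, accept and replace the stored grant with `Updated`, or accept one last time and signal `Delete`.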
**Note:** The authz module is different from the [auth (authentication)](/sdk/latest/modules/auth/auth/) module that is responsible for specifying the base transaction and account types.

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package authz

import (
	"github.com/cosmos/gogoproto/proto"

	sdk "github.com/cosmos/cosmos-sdk/types"
)

// Authorization represents the interface of various Authorization types implemented
// by other modules.
type Authorization interface {
	proto.Message

	// MsgTypeURL returns the fully-qualified Msg service method URL (as described in ADR 031),
	// which will process and accept or reject a request.
	MsgTypeURL() string

	// Accept determines whether this grant permits the provided sdk.Msg to be performed,
	// and if so provides an upgraded authorization instance.
	Accept(ctx sdk.Context, msg sdk.Msg) (AcceptResponse, error)

	// ValidateBasic does a simple validation check that
	// doesn't require access to any other information.
	ValidateBasic() error
}

// AcceptResponse instruments the controller of an authz message if the request is accepted
// and if it should be updated or deleted.
type AcceptResponse struct {
	// If Accept=true, the controller can accept the authorization and handle the update.
	Accept bool
	// If Delete=true, the controller must delete the authorization object and release
	// storage resources.
	Delete bool
	// The controller calling Authorization.Accept must check if `Updated != nil`. If yes,
	// it must use the updated version and handle the update on the storage level.
	Updated Authorization
}
```

### Built-in Authorizations

The Cosmos SDK `x/authz` module comes with the following authorization types:

#### GenericAuthorization

`GenericAuthorization` implements the `Authorization` interface, giving unrestricted permission to execute the provided Msg on behalf of the granter's account.
```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/authz/v1beta1/authz.proto#L14-L22
```

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package authz

import (
	sdk "github.com/cosmos/cosmos-sdk/types"
)

var _ Authorization = &GenericAuthorization{}

// NewGenericAuthorization creates a new GenericAuthorization object.
func NewGenericAuthorization(msgTypeURL string) *GenericAuthorization {
	return &GenericAuthorization{
		Msg: msgTypeURL,
	}
}

// MsgTypeURL implements Authorization.MsgTypeURL.
func (a GenericAuthorization) MsgTypeURL() string {
	return a.Msg
}

// Accept implements Authorization.Accept.
func (a GenericAuthorization) Accept(ctx sdk.Context, msg sdk.Msg) (AcceptResponse, error) {
	return AcceptResponse{Accept: true}, nil
}

// ValidateBasic implements Authorization.ValidateBasic.
func (a GenericAuthorization) ValidateBasic() error {
	return nil
}
```

* `msg` stores the Msg type URL.

#### SendAuthorization

`SendAuthorization` implements the `Authorization` interface for the `cosmos.bank.v1beta1.MsgSend` Msg.

* It takes a (positive) `SpendLimit` that specifies the maximum amount of tokens the grantee can spend. The `SpendLimit` is updated as the tokens are spent.
* It takes an (optional) `AllowList` that specifies to which addresses a grantee can send tokens.
```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/bank/v1beta1/authz.proto#L11-L30 ``` ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} package types import ( sdk "github.com/cosmos/cosmos-sdk/types" sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" "github.com/cosmos/cosmos-sdk/x/authz" ) // TODO: Revisit this once we have proper gas fee framework. // Ref: https://github.com/cosmos/cosmos-sdk/issues/9054 // Ref: https://github.com/cosmos/cosmos-sdk/discussions/9072 const gasCostPerIteration = uint64(10) var _ authz.Authorization = &SendAuthorization{ } // NewSendAuthorization creates a new SendAuthorization object. func NewSendAuthorization(spendLimit sdk.Coins, allowed []sdk.AccAddress) *SendAuthorization { return &SendAuthorization{ AllowList: toBech32Addresses(allowed), SpendLimit: spendLimit, } } // MsgTypeURL implements Authorization.MsgTypeURL. func (a SendAuthorization) MsgTypeURL() string { return sdk.MsgTypeURL(&MsgSend{ }) } // Accept implements Authorization.Accept. func (a SendAuthorization) Accept(ctx sdk.Context, msg sdk.Msg) (authz.AcceptResponse, error) { mSend, ok := msg.(*MsgSend) if !ok { return authz.AcceptResponse{ }, sdkerrors.ErrInvalidType.Wrap("type mismatch") } toAddr := mSend.ToAddress limitLeft, isNegative := a.SpendLimit.SafeSub(mSend.Amount...) 
if isNegative { return authz.AcceptResponse{ }, sdkerrors.ErrInsufficientFunds.Wrapf("requested amount is more than spend limit") } if limitLeft.IsZero() { return authz.AcceptResponse{ Accept: true, Delete: true }, nil } isAddrExists := false allowedList := a.GetAllowList() for _, addr := range allowedList { ctx.GasMeter().ConsumeGas(gasCostPerIteration, "send authorization") if addr == toAddr { isAddrExists = true break } } if len(allowedList) > 0 && !isAddrExists { return authz.AcceptResponse{ }, sdkerrors.ErrUnauthorized.Wrapf("cannot send to %s address", toAddr) } return authz.AcceptResponse{ Accept: true, Delete: false, Updated: &SendAuthorization{ SpendLimit: limitLeft, AllowList: allowedList }}, nil } // ValidateBasic implements Authorization.ValidateBasic. func (a SendAuthorization) ValidateBasic() error { if a.SpendLimit == nil { return sdkerrors.ErrInvalidCoins.Wrap("spend limit cannot be nil") } if !a.SpendLimit.IsAllPositive() { return sdkerrors.ErrInvalidCoins.Wrapf("spend limit must be positive") } found := make(map[string]bool, 0) for i := 0; i < len(a.AllowList); i++ { if found[a.AllowList[i]] { return ErrDuplicateEntry } found[a.AllowList[i]] = true } return nil } func toBech32Addresses(allowed []sdk.AccAddress) []string { if len(allowed) == 0 { return nil } allowedAddrs := make([]string, len(allowed)) for i, addr := range allowed { allowedAddrs[i] = addr.String() } return allowedAddrs } ``` * `spend_limit` keeps track of how many coins are left in the authorization. * `allow_list` specifies an optional list of addresses to whom the grantee can send tokens on behalf of the granter. #### StakeAuthorization `StakeAuthorization` implements the `Authorization` interface for messages in the [staking module](/sdk/latest/modules/staking). It takes an `AuthorizationType` to specify whether you want to authorise delegating, undelegating or redelegating (i.e. these have to be authorised separately). 
It also takes an optional `MaxTokens` that keeps track of a limit to the amount of tokens that can be delegated/undelegated/redelegated. If left empty, the amount is unlimited. Additionally, this Msg takes an `AllowList` or a `DenyList`, which allows you to select which validators you allow or deny grantees to stake with. ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/authz.proto#L11-L35 ``` ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} package types import ( sdk "github.com/cosmos/cosmos-sdk/types" sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" "github.com/cosmos/cosmos-sdk/x/authz" ) // TODO: Revisit this once we have proper gas fee framework. // Tracking issues https://github.com/cosmos/cosmos-sdk/issues/9054, https://github.com/cosmos/cosmos-sdk/discussions/9072 const gasCostPerIteration = uint64(10) var _ authz.Authorization = &StakeAuthorization{ } // NewStakeAuthorization creates a new StakeAuthorization object. func NewStakeAuthorization(allowed []sdk.ValAddress, denied []sdk.ValAddress, authzType AuthorizationType, amount *sdk.Coin) (*StakeAuthorization, error) { allowedValidators, deniedValidators, err := validateAllowAndDenyValidators(allowed, denied) if err != nil { return nil, err } a := StakeAuthorization{ } if allowedValidators != nil { a.Validators = &StakeAuthorization_AllowList{ AllowList: &StakeAuthorization_Validators{ Address: allowedValidators }} } else { a.Validators = &StakeAuthorization_DenyList{ DenyList: &StakeAuthorization_Validators{ Address: deniedValidators }} } if amount != nil { a.MaxTokens = amount } a.AuthorizationType = authzType return &a, nil } // MsgTypeURL implements Authorization.MsgTypeURL. 
func (a StakeAuthorization) MsgTypeURL() string { authzType, err := normalizeAuthzType(a.AuthorizationType) if err != nil { panic(err) } return authzType } func (a StakeAuthorization) ValidateBasic() error { if a.MaxTokens != nil && a.MaxTokens.IsNegative() { return sdkerrors.Wrapf(authz.ErrNegativeMaxTokens, "negative coin amount: %v", a.MaxTokens) } if a.AuthorizationType == AuthorizationType_AUTHORIZATION_TYPE_UNSPECIFIED { return authz.ErrUnknownAuthorizationType } return nil } // Accept implements Authorization.Accept. func (a StakeAuthorization) Accept(ctx sdk.Context, msg sdk.Msg) (authz.AcceptResponse, error) { var validatorAddress string var amount sdk.Coin switch msg := msg.(type) { case *MsgDelegate: validatorAddress = msg.ValidatorAddress amount = msg.Amount case *MsgUndelegate: validatorAddress = msg.ValidatorAddress amount = msg.Amount case *MsgBeginRedelegate: validatorAddress = msg.ValidatorDstAddress amount = msg.Amount default: return authz.AcceptResponse{ }, sdkerrors.ErrInvalidRequest.Wrap("unknown msg type") } isValidatorExists := false allowedList := a.GetAllowList().GetAddress() for _, validator := range allowedList { ctx.GasMeter().ConsumeGas(gasCostPerIteration, "stake authorization") if validator == validatorAddress { isValidatorExists = true break } } denyList := a.GetDenyList().GetAddress() for _, validator := range denyList { ctx.GasMeter().ConsumeGas(gasCostPerIteration, "stake authorization") if validator == validatorAddress { return authz.AcceptResponse{ }, sdkerrors.ErrUnauthorized.Wrapf("cannot delegate/undelegate to %s validator", validator) } } if len(allowedList) > 0 && !isValidatorExists { return authz.AcceptResponse{ }, sdkerrors.ErrUnauthorized.Wrapf("cannot delegate/undelegate to %s validator", validatorAddress) } if a.MaxTokens == nil { return authz.AcceptResponse{ Accept: true, Delete: false, Updated: &StakeAuthorization{ Validators: a.GetValidators(), AuthorizationType: a.GetAuthorizationType() }, }, nil } limitLeft, err 
:= a.MaxTokens.SafeSub(amount) if err != nil { return authz.AcceptResponse{ }, err } if limitLeft.IsZero() { return authz.AcceptResponse{ Accept: true, Delete: true }, nil } return authz.AcceptResponse{ Accept: true, Delete: false, Updated: &StakeAuthorization{ Validators: a.GetValidators(), AuthorizationType: a.GetAuthorizationType(), MaxTokens: &limitLeft }, }, nil } func validateAllowAndDenyValidators(allowed []sdk.ValAddress, denied []sdk.ValAddress) ([]string, []string, error) { if len(allowed) == 0 && len(denied) == 0 { return nil, nil, sdkerrors.ErrInvalidRequest.Wrap("both allowed & deny list cannot be empty") } if len(allowed) > 0 && len(denied) > 0 { return nil, nil, sdkerrors.ErrInvalidRequest.Wrap("cannot set both allowed & deny list") } allowedValidators := make([]string, len(allowed)) if len(allowed) > 0 { for i, validator := range allowed { allowedValidators[i] = validator.String() } return allowedValidators, nil, nil } deniedValidators := make([]string, len(denied)) for i, validator := range denied { deniedValidators[i] = validator.String() } return nil, deniedValidators, nil } // Normalized Msg type URLs func normalizeAuthzType(authzType AuthorizationType) (string, error) { switch authzType { case AuthorizationType_AUTHORIZATION_TYPE_DELEGATE: return sdk.MsgTypeURL(&MsgDelegate{ }), nil case AuthorizationType_AUTHORIZATION_TYPE_UNDELEGATE: return sdk.MsgTypeURL(&MsgUndelegate{ }), nil case AuthorizationType_AUTHORIZATION_TYPE_REDELEGATE: return sdk.MsgTypeURL(&MsgBeginRedelegate{ }), nil default: return "", sdkerrors.Wrapf(authz.ErrUnknownAuthorizationType, "cannot normalize authz type with %T", authzType) } } ``` ### Gas In order to prevent DoS attacks, granting `StakeAuthorization`s with `x/authz` incurs gas. `StakeAuthorization` allows you to authorize another account to delegate, undelegate, or redelegate to validators. The authorizer can define a list of validators they allow or deny delegations to. 
The Cosmos SDK iterates over these lists and charges 10 gas for each validator in both of the lists.

Since the state maintains a list for each (granter, grantee) pair with the same expiration, we iterate over that list to remove the grant (in the case of a revoke of a particular `msgType`), charging 20 gas per iteration.

## State

### Grant

Grants are identified by combining granter address (the address bytes of the granter), grantee address (the address bytes of the grantee) and Authorization type (its type URL). Hence we only allow one grant for the (granter, grantee, Authorization) triple.

* Grant: `0x01 | granter_address_len (1 byte) | granter_address_bytes | grantee_address_len (1 byte) | grantee_address_bytes | msgType_bytes -> ProtocolBuffer(AuthorizationGrant)`

The grant object encapsulates an `Authorization` type and an expiration timestamp:

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/authz/v1beta1/authz.proto#L24-L32
```

### GrantQueue

We maintain a queue for authz pruning. Whenever a grant is created, an item is added to `GrantQueue` with a key of expiration, granter, grantee. In `EndBlock` (which runs for every block) we continuously check for and prune expired grants: using the current block time, we form a prefix key that covers every expiration stored in `GrantQueue` that has passed, iterate through all matching records, and delete them from both the `GrantQueue` and the `Grant`s store.
```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} package keeper import ( "fmt" "strconv" "time" "github.com/cosmos/gogoproto/proto" abci "github.com/tendermint/tendermint/abci/types" "github.com/tendermint/tendermint/libs/log" "github.com/cosmos/cosmos-sdk/baseapp" "github.com/cosmos/cosmos-sdk/codec" codectypes "github.com/cosmos/cosmos-sdk/codec/types" storetypes "github.com/cosmos/cosmos-sdk/store/types" sdk "github.com/cosmos/cosmos-sdk/types" sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" "github.com/cosmos/cosmos-sdk/x/authz" ) // TODO: Revisit this once we have proper gas fee framework. // Tracking issues https://github.com/cosmos/cosmos-sdk/issues/9054, // https://github.com/cosmos/cosmos-sdk/discussions/9072 const gasCostPerIteration = uint64(20) type Keeper struct { storeKey storetypes.StoreKey cdc codec.BinaryCodec router *baseapp.MsgServiceRouter authKeeper authz.AccountKeeper } // NewKeeper constructs a message authorization Keeper func NewKeeper(storeKey storetypes.StoreKey, cdc codec.BinaryCodec, router *baseapp.MsgServiceRouter, ak authz.AccountKeeper) Keeper { return Keeper{ storeKey: storeKey, cdc: cdc, router: router, authKeeper: ak, } } // Logger returns a module-specific logger. func (k Keeper) Logger(ctx sdk.Context) log.Logger { return ctx.Logger().With("module", fmt.Sprintf("x/%s", authz.ModuleName)) } // getGrant returns grant stored at skey. 
func (k Keeper) getGrant(ctx sdk.Context, skey []byte) (grant authz.Grant, found bool) { store := ctx.KVStore(k.storeKey) bz := store.Get(skey) if bz == nil { return grant, false } k.cdc.MustUnmarshal(bz, &grant) return grant, true } func (k Keeper) update(ctx sdk.Context, grantee sdk.AccAddress, granter sdk.AccAddress, updated authz.Authorization) error { skey := grantStoreKey(grantee, granter, updated.MsgTypeURL()) grant, found := k.getGrant(ctx, skey) if !found { return authz.ErrNoAuthorizationFound } msg, ok := updated.(proto.Message) if !ok { return sdkerrors.ErrPackAny.Wrapf("cannot proto marshal %T", updated) } any, err := codectypes.NewAnyWithValue(msg) if err != nil { return err } grant.Authorization = any store := ctx.KVStore(k.storeKey) store.Set(skey, k.cdc.MustMarshal(&grant)) return nil } // DispatchActions attempts to execute the provided messages via authorization // grants from the message signer to the grantee. func (k Keeper) DispatchActions(ctx sdk.Context, grantee sdk.AccAddress, msgs []sdk.Msg) ([][]byte, error) { results := make([][]byte, len(msgs)) now := ctx.BlockTime() for i, msg := range msgs { signers := msg.GetSigners() if len(signers) != 1 { return nil, authz.ErrAuthorizationNumOfSigners } granter := signers[0] // If granter != grantee then check authorization.Accept, otherwise we // implicitly accept. 
if !granter.Equals(grantee) { skey := grantStoreKey(grantee, granter, sdk.MsgTypeURL(msg)) grant, found := k.getGrant(ctx, skey) if !found { return nil, sdkerrors.Wrapf(authz.ErrNoAuthorizationFound, "failed to update grant with key %s", string(skey)) } if grant.Expiration != nil && grant.Expiration.Before(now) { return nil, authz.ErrAuthorizationExpired } authorization, err := grant.GetAuthorization() if err != nil { return nil, err } resp, err := authorization.Accept(ctx, msg) if err != nil { return nil, err } if resp.Delete { err = k.DeleteGrant(ctx, grantee, granter, sdk.MsgTypeURL(msg)) } else if resp.Updated != nil { err = k.update(ctx, grantee, granter, resp.Updated) } if err != nil { return nil, err } if !resp.Accept { return nil, sdkerrors.ErrUnauthorized } } handler := k.router.Handler(msg) if handler == nil { return nil, sdkerrors.ErrUnknownRequest.Wrapf("unrecognized message route: %s", sdk.MsgTypeURL(msg)) } msgResp, err := handler(ctx, msg) if err != nil { return nil, sdkerrors.Wrapf(err, "failed to execute message; message %v", msg) } results[i] = msgResp.Data // emit the events from the dispatched actions events := msgResp.Events sdkEvents := make([]sdk.Event, 0, len(events)) for _, event := range events { e := event e.Attributes = append(e.Attributes, abci.EventAttribute{ Key: "authz_msg_index", Value: strconv.Itoa(i) }) sdkEvents = append(sdkEvents, sdk.Event(e)) } ctx.EventManager().EmitEvents(sdkEvents) } return results, nil } // SaveGrant method grants the provided authorization to the grantee on the granter's account // with the provided expiration time and insert authorization key into the grants queue. If there is an existing authorization grant for the // same `sdk.Msg` type, this grant overwrites that. 
func (k Keeper) SaveGrant(ctx sdk.Context, grantee, granter sdk.AccAddress, authorization authz.Authorization, expiration *time.Time) error { store := ctx.KVStore(k.storeKey) msgType := authorization.MsgTypeURL() skey := grantStoreKey(grantee, granter, msgType) grant, err := authz.NewGrant(ctx.BlockTime(), authorization, expiration) if err != nil { return err } var oldExp *time.Time if oldGrant, found := k.getGrant(ctx, skey); found { oldExp = oldGrant.Expiration } if oldExp != nil && (expiration == nil || !oldExp.Equal(*expiration)) { if err = k.removeFromGrantQueue(ctx, skey, granter, grantee, *oldExp); err != nil { return err } } // If the expiration didn't change, then we don't remove it and we should not insert again if expiration != nil && (oldExp == nil || !oldExp.Equal(*expiration)) { if err = k.insertIntoGrantQueue(ctx, granter, grantee, msgType, *expiration); err != nil { return err } } bz := k.cdc.MustMarshal(&grant) store.Set(skey, bz) return ctx.EventManager().EmitTypedEvent(&authz.EventGrant{ MsgTypeUrl: authorization.MsgTypeURL(), Granter: granter.String(), Grantee: grantee.String(), }) } // DeleteGrant revokes any authorization for the provided message type granted to the grantee // by the granter. func (k Keeper) DeleteGrant(ctx sdk.Context, grantee sdk.AccAddress, granter sdk.AccAddress, msgType string) error { store := ctx.KVStore(k.storeKey) skey := grantStoreKey(grantee, granter, msgType) grant, found := k.getGrant(ctx, skey) if !found { return sdkerrors.Wrapf(authz.ErrNoAuthorizationFound, "failed to delete grant with key %s", string(skey)) } if grant.Expiration != nil { err := k.removeFromGrantQueue(ctx, skey, granter, grantee, *grant.Expiration) if err != nil { return err } } store.Delete(skey) return ctx.EventManager().EmitTypedEvent(&authz.EventRevoke{ MsgTypeUrl: msgType, Granter: granter.String(), Grantee: grantee.String(), }) } // GetAuthorizations Returns list of `Authorizations` granted to the grantee by the granter. 
func (k Keeper) GetAuthorizations(ctx sdk.Context, grantee sdk.AccAddress, granter sdk.AccAddress) ([]authz.Authorization, error) { store := ctx.KVStore(k.storeKey) key := grantStoreKey(grantee, granter, "") iter := sdk.KVStorePrefixIterator(store, key) defer iter.Close() var authorization authz.Grant var authorizations []authz.Authorization for ; iter.Valid(); iter.Next() { if err := k.cdc.Unmarshal(iter.Value(), &authorization); err != nil { return nil, err } a, err := authorization.GetAuthorization() if err != nil { return nil, err } authorizations = append(authorizations, a) } return authorizations, nil } // GetAuthorization returns an Authorization and its expiration time. // A nil Authorization is returned under the following circumstances: // - No grant is found. // - A grant is found, but it is expired. // - There was an error getting the authorization from the grant. func (k Keeper) GetAuthorization(ctx sdk.Context, grantee sdk.AccAddress, granter sdk.AccAddress, msgType string) (authz.Authorization, *time.Time) { grant, found := k.getGrant(ctx, grantStoreKey(grantee, granter, msgType)) if !found || (grant.Expiration != nil && grant.Expiration.Before(ctx.BlockHeader().Time)) { return nil, nil } auth, err := grant.GetAuthorization() if err != nil { return nil, nil } return auth, grant.Expiration } // IterateGrants iterates over all authorization grants // This function should be used with caution because it can involve significant IO operations. // It should not be used in query or msg services without charging additional gas. // The iteration stops when the handler function returns true or the iterator is exhausted. 
func (k Keeper) IterateGrants(ctx sdk.Context, handler func(granterAddr sdk.AccAddress, granteeAddr sdk.AccAddress, grant authz.Grant) bool, ) { store := ctx.KVStore(k.storeKey) iter := sdk.KVStorePrefixIterator(store, GrantKey) defer iter.Close() for ; iter.Valid(); iter.Next() { var grant authz.Grant granterAddr, granteeAddr, _ := parseGrantStoreKey(iter.Key()) k.cdc.MustUnmarshal(iter.Value(), &grant) if handler(granterAddr, granteeAddr, grant) { break } } } func (k Keeper) getGrantQueueItem(ctx sdk.Context, expiration time.Time, granter, grantee sdk.AccAddress) (*authz.GrantQueueItem, error) { store := ctx.KVStore(k.storeKey) bz := store.Get(GrantQueueKey(expiration, granter, grantee)) if bz == nil { return &authz.GrantQueueItem{ }, nil } var queueItems authz.GrantQueueItem if err := k.cdc.Unmarshal(bz, &queueItems); err != nil { return nil, err } return &queueItems, nil } func (k Keeper) setGrantQueueItem(ctx sdk.Context, expiration time.Time, granter sdk.AccAddress, grantee sdk.AccAddress, queueItems *authz.GrantQueueItem, ) error { store := ctx.KVStore(k.storeKey) bz, err := k.cdc.Marshal(queueItems) if err != nil { return err } store.Set(GrantQueueKey(expiration, granter, grantee), bz) return nil } // insertIntoGrantQueue inserts a grant key into the grant queue func (k Keeper) insertIntoGrantQueue(ctx sdk.Context, granter, grantee sdk.AccAddress, msgType string, expiration time.Time) error { queueItems, err := k.getGrantQueueItem(ctx, expiration, granter, grantee) if err != nil { return err } if len(queueItems.MsgTypeUrls) == 0 { k.setGrantQueueItem(ctx, expiration, granter, grantee, &authz.GrantQueueItem{ MsgTypeUrls: []string{ msgType }, }) } else { queueItems.MsgTypeUrls = append(queueItems.MsgTypeUrls, msgType) k.setGrantQueueItem(ctx, expiration, granter, grantee, queueItems) } return nil } // removeFromGrantQueue removes a grant key from the grant queue func (k Keeper) removeFromGrantQueue(ctx sdk.Context, grantKey []byte, granter, grantee 
sdk.AccAddress, expiration time.Time) error { store := ctx.KVStore(k.storeKey) key := GrantQueueKey(expiration, granter, grantee) bz := store.Get(key) if bz == nil { return sdkerrors.Wrap(authz.ErrNoGrantKeyFound, "can't remove grant from the expire queue, grant key not found") } var queueItem authz.GrantQueueItem if err := k.cdc.Unmarshal(bz, &queueItem); err != nil { return err } _, _, msgType := parseGrantStoreKey(grantKey) queueItems := queueItem.MsgTypeUrls for index, typeURL := range queueItems { ctx.GasMeter().ConsumeGas(gasCostPerIteration, "grant queue") if typeURL == msgType { end := len(queueItem.MsgTypeUrls) - 1 queueItems[index] = queueItems[end] queueItems = queueItems[:end] if err := k.setGrantQueueItem(ctx, expiration, granter, grantee, &authz.GrantQueueItem{ MsgTypeUrls: queueItems, }); err != nil { return err } break } } return nil } // DequeueAndDeleteExpiredGrants deletes expired grants from the state and grant queue. func (k Keeper) DequeueAndDeleteExpiredGrants(ctx sdk.Context) error { store := ctx.KVStore(k.storeKey) iterator := store.Iterator(GrantQueuePrefix, sdk.InclusiveEndBytes(GrantQueueTimePrefix(ctx.BlockTime()))) defer iterator.Close() for ; iterator.Valid(); iterator.Next() { var queueItem authz.GrantQueueItem if err := k.cdc.Unmarshal(iterator.Value(), &queueItem); err != nil { return err } _, granter, grantee, err := parseGrantQueueKey(iterator.Key()) if err != nil { return err } store.Delete(iterator.Key()) for _, typeURL := range queueItem.MsgTypeUrls { store.Delete(grantStoreKey(grantee, granter, typeURL)) } } return nil } ``` * GrantQueue: `0x02 | expiration_bytes | granter_address_len (1 byte) | granter_address_bytes | grantee_address_len (1 byte) | grantee_address_bytes -> ProtocolBuffer(GrantQueueItem)` The `expiration_bytes` are the expiration date in UTC with the format `"2006-01-02T15:04:05.000000000"`. 
```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} package keeper import ( "time" "github.com/cosmos/cosmos-sdk/internal/conv" sdk "github.com/cosmos/cosmos-sdk/types" "github.com/cosmos/cosmos-sdk/types/address" "github.com/cosmos/cosmos-sdk/types/kv" "github.com/cosmos/cosmos-sdk/x/authz" ) // Keys for store prefixes // Items are stored with the following key: values // // - 0x01: Grant // - 0x02: GrantQueueItem var ( GrantKey = []byte{0x01 } // prefix for each key GrantQueuePrefix = []byte{0x02 } ) var lenTime = len(sdk.FormatTimeBytes(time.Now())) // StoreKey is the store key string for authz const StoreKey = authz.ModuleName // grantStoreKey - return authorization store key // Items are stored with the following key: values // // - 0x01: Grant func grantStoreKey(grantee sdk.AccAddress, granter sdk.AccAddress, msgType string) []byte { m := conv.UnsafeStrToBytes(msgType) granter = address.MustLengthPrefix(granter) grantee = address.MustLengthPrefix(grantee) key := sdk.AppendLengthPrefixedBytes(GrantKey, granter, grantee, m) return key } // parseGrantStoreKey - split granter, grantee address and msg type from the authorization key func parseGrantStoreKey(key []byte) (granterAddr, granteeAddr sdk.AccAddress, msgType string) { // key is of format: // 0x01 granterAddrLen, granterAddrLenEndIndex := sdk.ParseLengthPrefixedBytes(key, 1, 1) // ignore key[0] since it is a prefix key granterAddr, granterAddrEndIndex := sdk.ParseLengthPrefixedBytes(key, granterAddrLenEndIndex+1, int(granterAddrLen[0])) granteeAddrLen, granteeAddrLenEndIndex := sdk.ParseLengthPrefixedBytes(key, granterAddrEndIndex+1, 1) granteeAddr, granteeAddrEndIndex := sdk.ParseLengthPrefixedBytes(key, granteeAddrLenEndIndex+1, int(granteeAddrLen[0])) kv.AssertKeyAtLeastLength(key, granteeAddrEndIndex+1) return granterAddr, granteeAddr, conv.UnsafeBytesToStr(key[(granteeAddrEndIndex + 1):]) } // parseGrantQueueKey split expiration time, granter and 
grantee from the grant queue key func parseGrantQueueKey(key []byte) (time.Time, sdk.AccAddress, sdk.AccAddress, error) { // key is of format: // 0x02 expBytes, expEndIndex := sdk.ParseLengthPrefixedBytes(key, 1, lenTime) exp, err := sdk.ParseTimeBytes(expBytes) if err != nil { return exp, nil, nil, err } granterAddrLen, granterAddrLenEndIndex := sdk.ParseLengthPrefixedBytes(key, expEndIndex+1, 1) granter, granterEndIndex := sdk.ParseLengthPrefixedBytes(key, granterAddrLenEndIndex+1, int(granterAddrLen[0])) granteeAddrLen, granteeAddrLenEndIndex := sdk.ParseLengthPrefixedBytes(key, granterEndIndex+1, 1) grantee, _ := sdk.ParseLengthPrefixedBytes(key, granteeAddrLenEndIndex+1, int(granteeAddrLen[0])) return exp, granter, grantee, nil } // GrantQueueKey - return grant queue store key. If a given grant doesn't have a defined // expiration, then it should not be used in the pruning queue. // Key format is: // // 0x02: GrantQueueItem func GrantQueueKey(expiration time.Time, granter sdk.AccAddress, grantee sdk.AccAddress) []byte { exp := sdk.FormatTimeBytes(expiration) granter = address.MustLengthPrefix(granter) grantee = address.MustLengthPrefix(grantee) return sdk.AppendLengthPrefixedBytes(GrantQueuePrefix, exp, granter, grantee) } // GrantQueueTimePrefix - return grant queue time prefix func GrantQueueTimePrefix(expiration time.Time) []byte { return append(GrantQueuePrefix, sdk.FormatTimeBytes(expiration)...) } // firstAddressFromGrantStoreKey parses the first address only func firstAddressFromGrantStoreKey(key []byte) sdk.AccAddress { addrLen := key[0] return sdk.AccAddress(key[1 : 1+addrLen]) } ``` The `GrantQueueItem` object contains the list of type urls between granter and grantee that expire at the time indicated in the key. ## Messages In this section we describe the processing of messages for the authz module. ### MsgGrant An authorization grant is created using the `MsgGrant` message. 
If there is already a grant for the `(granter, grantee, Authorization)` triple, then the new grant overwrites the previous one. To update or extend an existing grant, a new grant with the same `(granter, grantee, Authorization)` triple should be created. ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/authz/v1beta1/tx.proto#L35-L45 ``` The message handling should fail if: * both granter and grantee have the same address. * provided `Expiration` time is less than the current Unix timestamp (but a grant will be created if no `expiration` time is provided since `expiration` is optional). * provided `Grant.Authorization` is not implemented. * `Authorization.MsgTypeURL()` is not defined in the router (there is no defined handler in the app router to handle that Msg type). ### MsgRevoke A grant can be removed with the `MsgRevoke` message. ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/authz/v1beta1/tx.proto#L69-L78 ``` The message handling should fail if: * both granter and grantee have the same address. * provided `MsgTypeUrl` is empty. NOTE: The `MsgExec` message removes a grant if the grant has expired. ### MsgExec When a grantee wants to execute a transaction on behalf of a granter, they must send `MsgExec`. ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/authz/v1beta1/tx.proto#L52-L63 ``` The message handling should fail if: * provided `Authorization` is not implemented. * grantee doesn't have permission to run the transaction. * the granted authorization is expired. 
## Events The authz module emits proto events defined in [the Protobuf reference](https://buf.build/cosmos/cosmos-sdk/docs/main/cosmos.authz.v1beta1#cosmos.authz.v1beta1.EventGrant). ## Client ### CLI A user can query and interact with the `authz` module using the CLI. #### Query The `query` commands allow users to query `authz` state. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query authz --help ``` ##### grants The `grants` command allows users to query grants for a granter-grantee pair. If the message type URL is set, it selects grants only for that message type. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query authz grants [granter-addr] [grantee-addr] [msg-type-url]? [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query authz grants cosmos1.. cosmos1.. /cosmos.bank.v1beta1.MsgSend ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grants: - authorization: '@type': /cosmos.bank.v1beta1.SendAuthorization spend_limit: - amount: "100" denom: stake expiration: "2022-01-01T00:00:00Z" pagination: null ``` #### Transactions The `tx` commands allow users to interact with the `authz` module. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx authz --help ``` ##### exec The `exec` command allows a grantee to execute a transaction on behalf of granter. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx authz exec [tx-json-file] --from [grantee] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx authz exec tx.json --from=cosmos1.. ``` ##### grant The `grant` command allows a granter to grant an authorization to a grantee. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx authz grant [grantee] [authorization-type] --from [granter] [flags] ``` * The `send` authorization\_type refers to the built-in `SendAuthorization` type. The custom flags available are `spend-limit` (required) and `allow-list` (optional), documented [here](#SendAuthorization). Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx authz grant cosmos1.. send --spend-limit=100stake --allow-list=cosmos1...,cosmos2... --from=cosmos1.. ``` * The `generic` authorization\_type refers to the built-in `GenericAuthorization` type. The custom flag available is `msg-type` (required), documented [here](#GenericAuthorization). > Note: `msg-type` is any valid Cosmos SDK `Msg` type URL. Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx authz grant cosmos1.. generic --msg-type=/cosmos.bank.v1beta1.MsgSend --from=cosmos1.. ``` * The `delegate`, `unbond`, and `redelegate` authorization\_types refer to the built-in `StakeAuthorization` type. The custom flags available are `spend-limit` (optional), `allowed-validators` (optional), and `deny-validators` (optional), documented [here](#StakeAuthorization). > Note: `allowed-validators` and `deny-validators` cannot both be empty. `spend-limit` represents the `MaxTokens` field of the `StakeAuthorization`. Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx authz grant cosmos1.. delegate --spend-limit=100stake --allowed-validators=cosmos...,cosmos... --deny-validators=cosmos... --from=cosmos1.. ``` ##### revoke The `revoke` command allows a granter to revoke an authorization from a grantee. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx authz revoke [grantee] [msg-type-url] --from=[granter] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx authz revoke cosmos1.. /cosmos.bank.v1beta1.MsgSend --from=cosmos1.. ``` ### gRPC A user can query the `authz` module using gRPC endpoints. #### Grants The `Grants` endpoint allows users to query grants for a granter-grantee pair. If the message type URL is set, it selects grants only for that message type. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.authz.v1beta1.Query/Grants ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"granter":"cosmos1..","grantee":"cosmos1..","msg_type_url":"/cosmos.bank.v1beta1.MsgSend"}' \ localhost:9090 \ cosmos.authz.v1beta1.Query/Grants ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "grants": [ { "authorization": { "@type": "/cosmos.bank.v1beta1.SendAuthorization", "spendLimit": [ { "denom":"stake", "amount":"100" } ] }, "expiration": "2022-01-01T00:00:00Z" } ] } ``` ### REST A user can query the `authz` module using REST endpoints. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/authz/v1beta1/grants ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl "localhost:1317/cosmos/authz/v1beta1/grants?granter=cosmos1..&grantee=cosmos1..&msg_type_url=/cosmos.bank.v1beta1.MsgSend" ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "grants": [ { "authorization": { "@type": "/cosmos.bank.v1beta1.SendAuthorization", "spend_limit": [ { "denom": "stake", "amount": "100" } ] }, "expiration": "2022-01-01T00:00:00Z" } ], "pagination": null } ``` # x/bank Source: https://docs.cosmos.network/sdk/latest/modules/bank/README This document specifies the bank module of the Cosmos SDK. ## Abstract This document specifies the bank module of the Cosmos SDK. The bank module is responsible for handling multi-asset coin transfers between accounts and tracking special-case pseudo-transfers which must work differently with particular kinds of accounts (notably delegating/undelegating for vesting accounts). It exposes several interfaces with varying capabilities for secure interaction with other modules which must alter user balances. In addition, the bank module tracks and provides query support for the total supply of all assets used in the application. This module is used in the Cosmos Hub. 
## Contents * [Supply](#supply) * [Total Supply](#total-supply) * [Module Accounts](#module-accounts) * [Permissions](#permissions) * [State](#state) * [Params](#params) * [Keepers](#keepers) * [Messages](#messages) * [Events](#events) * [Message Events](#message-events) * [Keeper Events](#keeper-events) * [Parameters](#parameters) * [SendEnabled](#sendenabled) * [DefaultSendEnabled](#defaultsendenabled) * [Client](#client) * [CLI](#cli) * [Query](#query) * [Transactions](#transactions) * [gRPC](#grpc) ## Supply The `supply` functionality: * passively tracks the total supply of coins within a chain, * provides a pattern for modules to hold/interact with `Coins`, and * introduces the invariant check to verify a chain's total supply. ### Total Supply The total `Supply` of the network is equal to the sum of all coins held by all accounts. The total supply is updated every time a `Coin` is minted (e.g. as part of the inflation mechanism) or burned (e.g. due to slashing or if a governance proposal is vetoed). ## Module Accounts The supply functionality introduces a new type of `auth.Account` which can be used by modules to allocate tokens and in special cases mint or burn tokens. At a base level, these module accounts are capable of sending/receiving tokens to and from `auth.Account`s and other module accounts. This design replaces previous alternative designs where, to hold tokens, modules would burn the incoming tokens from the sender account, and then track those tokens internally. Later, in order to send tokens, the module would need to effectively mint tokens within a destination account. The new design removes duplicate logic between modules to perform this accounting. 
The `ModuleAccount` interface is defined as follows: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type ModuleAccount interface { auth.Account // same methods as the Account interface GetName() string // name of the module; used to obtain the address GetPermissions() []string // permissions of module account HasPermission(string) bool } ``` > **WARNING!** > Any module or message handler that allows either direct or indirect sending of funds must explicitly guarantee those funds cannot be sent to module accounts (unless allowed). The supply `Keeper` also introduces new wrapper functions for the auth `Keeper` and the bank `Keeper` that are related to `ModuleAccount`s in order to be able to: * Get and set `ModuleAccount`s by providing the `Name`. * Send coins from and to other `ModuleAccount`s or standard `Account`s (`BaseAccount` or `VestingAccount`) by passing only the `Name`. * `Mint` or `Burn` coins for a `ModuleAccount` (restricted to its permissions). ### Permissions Each `ModuleAccount` has a different set of permissions that provide different object capabilities to perform certain actions. Permissions need to be registered upon the creation of the supply `Keeper` so that every time a `ModuleAccount` calls the allowed functions, the `Keeper` can look up the permissions for that specific account and allow or deny the action. The available permissions are: * `Minter`: allows for a module to mint a specific amount of coins. * `Burner`: allows for a module to burn a specific amount of coins. * `Staking`: allows for a module to delegate and undelegate a specific amount of coins. ## State The `x/bank` module keeps state of the following primary objects: 1. Account balances 2. Denomination metadata 3. The total supply of all balances 4. Information on which denominations are allowed to be sent. 
In addition, the `x/bank` module keeps the following indexes to manage the aforementioned state: * Supply Index: `0x0 | byte(denom) -> byte(amount)` * Denom Metadata Index: `0x1 | byte(denom) -> ProtocolBuffer(Metadata)` * Balances Index: `0x2 | byte(address length) | []byte(address) | []byte(balance.Denom) -> ProtocolBuffer(balance)` * Reverse Denomination to Address Index: `0x03 | byte(denom) | 0x00 | []byte(address) -> 0` ## Params The bank module stores its params in state under the `0x05` prefix. They can be updated via governance or by the address with authority. * Params: `0x05 | ProtocolBuffer(Params)` ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/bank/v1beta1/bank.proto#L12-L23 ``` ## Keepers The bank module provides these exported keeper interfaces that can be passed to other modules that read or update account balances. Modules should use the least-permissive interface that provides the functionality they require. Best practices dictate careful review of `bank` module code to ensure that permissions are limited in the way that you expect. ### Denied Addresses The `x/bank` module accepts a map of addresses that are considered blocklisted from directly and explicitly receiving funds through means such as `MsgSend` and `MsgMultiSend` and direct API calls like `SendCoinsFromModuleToAccount`. Typically, these addresses are module accounts. If these addresses receive funds outside the expected rules of the state machine, invariants are likely to be broken and could result in a halted network. By providing the `x/bank` module with a blocklisted set of addresses, an error occurs for the operation if a user or client attempts to directly or indirectly send funds to a blocklisted account, for example, by using [IBC](/ibc/latest/intro). 
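To illustrate the blocklist behavior described above, the check amounts to a set lookup performed before funds are credited. This is a sketch, not the SDK's actual wiring: `blockedAddrs` and `checkRecipient` are hypothetical names standing in for the map passed to the bank keeper at construction time and its `BlockedAddr`-style check.

```go
package main

import (
	"errors"
	"fmt"
)

// blockedAddrs plays the role of the map handed to the bank keeper when
// it is constructed; module account addresses are the typical entries.
var blockedAddrs = map[string]bool{
	"cosmos1modacct": true, // hypothetical module account address
}

// checkRecipient rejects sends to a blocklisted address before any
// balance is changed, mirroring the behavior described above.
func checkRecipient(toAddr string) error {
	if blockedAddrs[toAddr] {
		return errors.New("address is not allowed to receive funds")
	}
	return nil
}

func main() {
	fmt.Println(checkRecipient("cosmos1modacct")) // blocked -> error
	fmt.Println(checkRecipient("cosmos1user"))    // allowed -> <nil>
}
```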
### Common Types #### Input An input of a multiparty transfer. ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Input models transaction input. message Input { string address = 1; repeated cosmos.base.v1beta1.Coin coins = 2; } ``` #### Output An output of a multiparty transfer. ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Output models transaction outputs. message Output { string address = 1; repeated cosmos.base.v1beta1.Coin coins = 2; } ``` ### BaseKeeper The base keeper provides full-permission access: the ability to arbitrarily modify any account's balance and mint or burn coins. Per-module minting can be restricted by wrapping the base keeper with `WithMintCoinsRestriction`, which applies specific restrictions to minting (e.g. only minting a certain denom). ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Keeper defines a module interface that facilitates the transfer of coins // between accounts. 
type Keeper interface { SendKeeper WithMintCoinsRestriction(MintingRestrictionFn) BaseKeeper InitGenesis(context.Context, *types.GenesisState) ExportGenesis(context.Context) *types.GenesisState GetSupply(ctx context.Context, denom string) sdk.Coin HasSupply(ctx context.Context, denom string) bool GetPaginatedTotalSupply(ctx context.Context, pagination *query.PageRequest) (sdk.Coins, *query.PageResponse, error) IterateTotalSupply(ctx context.Context, cb func(sdk.Coin) bool) GetDenomMetaData(ctx context.Context, denom string) (types.Metadata, bool) HasDenomMetaData(ctx context.Context, denom string) bool SetDenomMetaData(ctx context.Context, denomMetaData types.Metadata) IterateAllDenomMetaData(ctx context.Context, cb func(types.Metadata) bool) SendCoinsFromModuleToAccount(ctx context.Context, senderModule string, recipientAddr sdk.AccAddress, amt sdk.Coins) error SendCoinsFromModuleToModule(ctx context.Context, senderModule, recipientModule string, amt sdk.Coins) error SendCoinsFromAccountToModule(ctx context.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins) error DelegateCoinsFromAccountToModule(ctx context.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins) error UndelegateCoinsFromModuleToAccount(ctx context.Context, senderModule string, recipientAddr sdk.AccAddress, amt sdk.Coins) error MintCoins(ctx context.Context, moduleName string, amt sdk.Coins) error BurnCoins(ctx context.Context, moduleName string, amt sdk.Coins) error DelegateCoins(ctx context.Context, delegatorAddr, moduleAccAddr sdk.AccAddress, amt sdk.Coins) error UndelegateCoins(ctx context.Context, moduleAccAddr, delegatorAddr sdk.AccAddress, amt sdk.Coins) error // GetAuthority gets the address capable of executing governance proposal messages. Usually the gov module account. GetAuthority() string types.QueryServer } ``` ### SendKeeper The send keeper provides access to account balances and the ability to transfer coins between accounts. 
The send keeper does not alter the total supply (mint or burn coins). ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // SendKeeper defines a module interface that facilitates the transfer of coins // between accounts without the possibility of creating coins. type SendKeeper interface { ViewKeeper AppendSendRestriction(restriction SendRestrictionFn) PrependSendRestriction(restriction SendRestrictionFn) ClearSendRestriction() InputOutputCoins(ctx context.Context, input types.Input, outputs []types.Output) error SendCoins(ctx context.Context, fromAddr, toAddr sdk.AccAddress, amt sdk.Coins) error GetParams(ctx context.Context) types.Params SetParams(ctx context.Context, params types.Params) error IsSendEnabledDenom(ctx context.Context, denom string) bool SetSendEnabled(ctx context.Context, denom string, value bool) SetAllSendEnabled(ctx context.Context, sendEnableds []*types.SendEnabled) DeleteSendEnabled(ctx context.Context, denom string) IterateSendEnabledEntries(ctx context.Context, cb func(denom string, sendEnabled bool) (stop bool)) GetAllSendEnabledEntries(ctx context.Context) []types.SendEnabled IsSendEnabledCoin(ctx context.Context, coin sdk.Coin) bool IsSendEnabledCoins(ctx context.Context, coins ...sdk.Coin) error BlockedAddr(addr sdk.AccAddress) bool } ``` #### Send Restrictions The `SendKeeper` applies a `SendRestrictionFn` before each transfer of funds. ```golang theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // A SendRestrictionFn can restrict sends and/or provide a new receiver address. type SendRestrictionFn func(ctx context.Context, fromAddr, toAddr sdk.AccAddress, amt sdk.Coins) (newToAddr sdk.AccAddress, err error) ``` After the `SendKeeper` (or `BaseKeeper`) has been created, send restrictions can be added to it using the `AppendSendRestriction` or `PrependSendRestriction` functions. 
Both functions compose the provided restriction with any previously provided restrictions. `AppendSendRestriction` adds the provided restriction so that it runs after any previously provided send restrictions; `PrependSendRestriction` adds it so that it runs before them. The composition short-circuits when an error is encountered, i.e. if the first restriction returns an error, the second is not run. During `SendCoins`, the send restriction is applied before the coins are removed from the from address and added to the to address. During `InputOutputCoins`, the send restriction is applied after the input coins are removed and once for each output before the funds are added. A send restriction function should make use of a custom value in the context to allow bypassing that specific restriction. Send restrictions are not applied to `ModuleToAccount` or `ModuleToModule` transfers, for several reasons. First, modules need to move funds to user accounts and other module accounts, and the state machine should be able to do so without restrictions; this is a design decision that allows for more flexibility. Second, restricting these transfers would limit the state machine's own operation: users could be prevented from receiving rewards, and funds could not be moved between module accounts. If a user sends funds to the community pool and a governance proposal is later used to return those tokens to the user's account, how to handle this falls under the discretion of the app chain developer; we cannot make strong assumptions here. Third, restricting these transfers could lead to a chain halt if a token is disabled and then moved during BeginBlock/EndBlock. For these reasons, restricting module transfers would be more damaging than beneficial for users. 
For example, in your module's keeper package, you'd define the send restriction function: ```golang expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} var _ banktypes.SendRestrictionFn = Keeper{ }.SendRestrictionFn func (k Keeper) SendRestrictionFn(ctx context.Context, fromAddr, toAddr sdk.AccAddress, amt sdk.Coins) (sdk.AccAddress, error) { // Bypass if the context says to. if mymodule.HasBypass(ctx) { return toAddr, nil } // Your custom send restriction logic goes here. return nil, errors.New("not implemented") } ``` The bank keeper should be provided to your keeper's constructor so the send restriction can be added to it: ```golang theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func NewKeeper(cdc codec.BinaryCodec, storeKey storetypes.StoreKey, bankKeeper mymodule.BankKeeper) Keeper { rv := Keeper{/*...*/ } bankKeeper.AppendSendRestriction(rv.SendRestrictionFn) return rv } ``` Then, in the `mymodule` package, define the context helpers: ```golang expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} const bypassKey = "bypass-mymodule-restriction" // WithBypass returns a new context that will cause the mymodule bank send restriction to be skipped. func WithBypass(ctx context.Context) context.Context { return sdk.UnwrapSDKContext(ctx).WithValue(bypassKey, true) } // WithoutBypass returns a new context that will cause the mymodule bank send restriction to not be skipped. func WithoutBypass(ctx context.Context) context.Context { return sdk.UnwrapSDKContext(ctx).WithValue(bypassKey, false) } // HasBypass checks the context to see if the mymodule bank send restriction should be skipped. 
func HasBypass(ctx context.Context) bool { bypassValue := ctx.Value(bypassKey) if bypassValue == nil { return false } bypass, isBool := bypassValue.(bool) return isBool && bypass } ``` Now, anywhere where you want to use `SendCoins` or `InputOutputCoins`, but you don't want your send restriction applied: ```golang theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (k Keeper) DoThing(ctx context.Context, fromAddr, toAddr sdk.AccAddress, amt sdk.Coins) error { return k.bankKeeper.SendCoins(mymodule.WithBypass(ctx), fromAddr, toAddr, amt) } ``` ### ViewKeeper The view keeper provides read-only access to account balances. The view keeper does not have balance alteration functionality. All balance lookups are `O(1)`. ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // ViewKeeper defines a module interface that facilitates read only access to // account balances. type ViewKeeper interface { ValidateBalance(ctx context.Context, addr sdk.AccAddress) error HasBalance(ctx context.Context, addr sdk.AccAddress, amt sdk.Coin) bool GetAllBalances(ctx context.Context, addr sdk.AccAddress) sdk.Coins GetAccountsBalances(ctx context.Context) []types.Balance GetBalance(ctx context.Context, addr sdk.AccAddress, denom string) sdk.Coin LockedCoins(ctx context.Context, addr sdk.AccAddress) sdk.Coins SpendableCoins(ctx context.Context, addr sdk.AccAddress) sdk.Coins SpendableCoin(ctx context.Context, addr sdk.AccAddress, denom string) sdk.Coin IterateAccountBalances(ctx context.Context, addr sdk.AccAddress, cb func(coin sdk.Coin) (stop bool)) IterateAllBalances(ctx context.Context, cb func(address sdk.AccAddress, coin sdk.Coin) (stop bool)) } ``` ## Messages ### MsgSend Send coins from one address to another. 
```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/bank/v1beta1/tx.proto#L38-L53
```

The message will fail under the following conditions:

* The coins do not have sending enabled
* The `to` address is restricted

### MsgMultiSend

Send coins from one sender to a series of different addresses. If any of the receiving addresses do not correspond to an existing account, a new account is created.

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/bank/v1beta1/tx.proto#L58-L69
```

The message will fail under the following conditions:

* Any of the coins do not have sending enabled
* Any of the `to` addresses are restricted
* Any of the coins are locked
* The inputs and outputs do not correctly correspond to one another

### MsgUpdateParams

The `bank` module params can be updated through `MsgUpdateParams`, which can be done via a governance proposal. The signer will always be the `gov` module account address.

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/bank/v1beta1/tx.proto#L74-L88
```

The message handling can fail if:

* the signer is not the gov module account address.

### MsgSetSendEnabled

Used with the x/gov module to create or edit SendEnabled entries.

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/bank/v1beta1/tx.proto#L96-L117
```

The message will fail under the following conditions:

* The authority is not a bech32 address.
* The authority is not x/gov module's address.
* There are multiple SendEnabled entries with the same Denom.
* One or more SendEnabled entries has an invalid Denom. ## Events The bank module emits the following events: ### Message Events #### MsgSend | Type | Attribute Key | Attribute Value | | -------- | ------------- | -------------------- | | transfer | recipient | `{recipientAddress}` | | transfer | amount | `{amount}` | | message | module | bank | | message | action | send | | message | sender | `{senderAddress}` | #### MsgMultiSend | Type | Attribute Key | Attribute Value | | -------- | ------------- | -------------------- | | transfer | recipient | `{recipientAddress}` | | transfer | amount | `{amount}` | | message | module | bank | | message | action | multisend | | message | sender | `{senderAddress}` | ### Keeper Events In addition to message events, the bank keeper will produce events when the following methods are called (or any method which ends up calling them) #### MintCoins ```json expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "type": "coinbase", "attributes": [ { "key": "minter", "value": "{{sdk.AccAddress of the module minting coins}}", "index": true }, { "key": "amount", "value": "{{sdk.Coins being minted}}", "index": true } ] } ``` ```json expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "type": "coin_received", "attributes": [ { "key": "receiver", "value": "{{sdk.AccAddress of the module minting coins}}", "index": true }, { "key": "amount", "value": "{{sdk.Coins being received}}", "index": true } ] } ``` #### BurnCoins ```json expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "type": "burn", "attributes": [ { "key": "burner", "value": "{{sdk.AccAddress of the module burning coins}}", "index": true }, { "key": "amount", "value": "{{sdk.Coins being burned}}", "index": true } ] } ``` ```json expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { 
  "type": "coin_spent",
  "attributes": [
    {
      "key": "spender",
      "value": "{{sdk.AccAddress of the module burning coins}}",
      "index": true
    },
    {
      "key": "amount",
      "value": "{{sdk.Coins being burned}}",
      "index": true
    }
  ]
}
```

#### addCoins

```json expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "type": "coin_received",
  "attributes": [
    {
      "key": "receiver",
      "value": "{{sdk.AccAddress of the address beneficiary of the coins}}",
      "index": true
    },
    {
      "key": "amount",
      "value": "{{sdk.Coins being received}}",
      "index": true
    }
  ]
}
```

#### subUnlockedCoins/DelegateCoins

```json expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "type": "coin_spent",
  "attributes": [
    {
      "key": "spender",
      "value": "{{sdk.AccAddress of the address which is spending coins}}",
      "index": true
    },
    {
      "key": "amount",
      "value": "{{sdk.Coins being spent}}",
      "index": true
    }
  ]
}
```

## Parameters

The bank module contains the following parameters:

### SendEnabled

The SendEnabled parameter is deprecated and is not to be used. It has been replaced with state store records.

### DefaultSendEnabled

The default send enabled value controls send transfer capability for all coin denominations unless specifically included in the array of `SendEnabled` parameters.

## Client

### CLI

A user can query and interact with the `bank` module using the CLI.

#### Query

The `query` commands allow users to query `bank` state.

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd query bank --help
```

##### balances

The `balances` command allows users to query account balances by address.

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd query bank balances [address] [flags]
```

Example:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd query bank balances cosmos1..
``` Example Output: ```yml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} balances: - amount: "1000000000" denom: stake pagination: next_key: null total: "0" ``` ##### denom-metadata The `denom-metadata` command allows users to query metadata for coin denominations. A user can query metadata for a single denomination using the `--denom` flag or all denominations without it. ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query bank denom-metadata [flags] ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query bank denom-metadata --denom stake ``` Example Output: ```yml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} metadata: base: stake denom_units: - aliases: - STAKE denom: stake description: native staking token of simulation app display: stake name: SimApp Token symbol: STK ``` ##### total The `total` command allows users to query the total supply of coins. A user can query the total supply for a single coin using the `--denom` flag or all coins without it. ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query bank total [flags] ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query bank total --denom stake ``` Example Output: ```yml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} amount: "10000000000" denom: stake ``` ##### send-enabled The `send-enabled` command allows users to query for all or some SendEnabled entries. ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query bank send-enabled [denom1 ...] 
[flags] ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query bank send-enabled ``` Example output: ```yml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} send_enabled: - denom: foocoin enabled: true - denom: barcoin pagination: next-key: null total: 2 ``` #### Transactions The `tx` commands allow users to interact with the `bank` module. ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx bank --help ``` ##### send The `send` command allows users to send funds from one account to another. ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx bank send [from_key_or_address] [to_address] [amount] [flags] ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx bank send cosmos1.. cosmos1.. 100stake ``` ## gRPC A user can query the `bank` module using gRPC endpoints. ### Balance The `Balance` endpoint allows users to query account balance by address for a given denomination. ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.bank.v1beta1.Query/Balance ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"address":"cosmos1..","denom":"stake"}' \ localhost:9090 \ cosmos.bank.v1beta1.Query/Balance ``` Example Output: ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "balance": { "denom": "stake", "amount": "1000000000" } } ``` ### AllBalances The `AllBalances` endpoint allows users to query account balance by address for all denominations. 
```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.bank.v1beta1.Query/AllBalances ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"address":"cosmos1.."}' \ localhost:9090 \ cosmos.bank.v1beta1.Query/AllBalances ``` Example Output: ```json expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "balances": [ { "denom": "stake", "amount": "1000000000" } ], "pagination": { "total": "1" } } ``` ### DenomMetadata The `DenomMetadata` endpoint allows users to query metadata for a single coin denomination. ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.bank.v1beta1.Query/DenomMetadata ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"denom":"stake"}' \ localhost:9090 \ cosmos.bank.v1beta1.Query/DenomMetadata ``` Example Output: ```json expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "metadata": { "description": "native staking token of simulation app", "denomUnits": [ { "denom": "stake", "aliases": [ "STAKE" ] } ], "base": "stake", "display": "stake", "name": "SimApp Token", "symbol": "STK" } } ``` ### DenomsMetadata The `DenomsMetadata` endpoint allows users to query metadata for all coin denominations. 
```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
cosmos.bank.v1beta1.Query/DenomsMetadata
```

Example:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
grpcurl -plaintext \
    localhost:9090 \
    cosmos.bank.v1beta1.Query/DenomsMetadata
```

Example Output:

```json expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "metadatas": [
    {
      "description": "native staking token of simulation app",
      "denomUnits": [
        {
          "denom": "stake",
          "aliases": [
            "STAKE"
          ]
        }
      ],
      "base": "stake",
      "display": "stake",
      "name": "SimApp Token",
      "symbol": "STK"
    }
  ],
  "pagination": {
    "total": "1"
  }
}
```

### DenomOwners

The `DenomOwners` endpoint allows users to query all account addresses that hold a particular coin denomination, along with their balances.

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
cosmos.bank.v1beta1.Query/DenomOwners
```

Example:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
grpcurl -plaintext \
    -d '{"denom":"stake"}' \
    localhost:9090 \
    cosmos.bank.v1beta1.Query/DenomOwners
```

Example Output:

```json expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "denomOwners": [
    {
      "address": "cosmos1..",
      "balance": {
        "denom": "stake",
        "amount": "5000000000"
      }
    },
    {
      "address": "cosmos1..",
      "balance": {
        "denom": "stake",
        "amount": "5000000000"
      }
    }
  ],
  "pagination": {
    "total": "2"
  }
}
```

### TotalSupply

The `TotalSupply` endpoint allows users to query the total supply of all coins.
```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
cosmos.bank.v1beta1.Query/TotalSupply
```

Example:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
grpcurl -plaintext \
    localhost:9090 \
    cosmos.bank.v1beta1.Query/TotalSupply
```

Example Output:

```json expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "supply": [
    {
      "denom": "stake",
      "amount": "10000000000"
    }
  ],
  "pagination": {
    "total": "1"
  }
}
```

### SupplyOf

The `SupplyOf` endpoint allows users to query the total supply of a single coin.

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
cosmos.bank.v1beta1.Query/SupplyOf
```

Example:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
grpcurl -plaintext \
    -d '{"denom":"stake"}' \
    localhost:9090 \
    cosmos.bank.v1beta1.Query/SupplyOf
```

Example Output:

```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "amount": {
    "denom": "stake",
    "amount": "10000000000"
  }
}
```

### Params

The `Params` endpoint allows users to query the parameters of the `bank` module.

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
cosmos.bank.v1beta1.Query/Params
```

Example:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
grpcurl -plaintext \
    localhost:9090 \
    cosmos.bank.v1beta1.Query/Params
```

Example Output:

```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "params": {
    "defaultSendEnabled": true
  }
}
```

### SendEnabled

The `SendEnabled` endpoint allows users to query the SendEnabled entries of the `bank` module. Any denominations NOT returned use the `Params.DefaultSendEnabled` value.
```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
cosmos.bank.v1beta1.Query/SendEnabled
```

Example:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
grpcurl -plaintext \
    localhost:9090 \
    cosmos.bank.v1beta1.Query/SendEnabled
```

Example Output:

```json expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "send_enabled": [
    {
      "denom": "foocoin",
      "enabled": true
    },
    {
      "denom": "barcoin"
    }
  ],
  "pagination": {
    "next-key": null,
    "total": 2
  }
}
```

# x/circuit
Source: https://docs.cosmos.network/sdk/latest/modules/circuit/README

`x/circuit` has been moved to [`./contrib/x/circuit`](https://github.com/cosmos/cosmos-sdk/tree/main/contrib/x/circuit) and is no longer actively maintained as part of the core Cosmos SDK. It is still available for use but is not included in the SDK Bug Bounty program. It was moved because it was never widely adopted.

## Concepts

Circuit Breaker is a module that is meant to avoid a chain needing to halt/shut down in the presence of a vulnerability; instead, the module allows specific messages, or all messages, to be disabled. When operating a chain, a halt is less detrimental if the chain is app-specific, but if there are applications built on top of the chain then halting is expensive due to the disturbance to those applications. Circuit Breaker works with the idea that an address or set of addresses has the right to block messages from being executed and/or included in the mempool. Any address with a permission is able to reset the circuit breaker for the message.
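Conceptually, the module keeps a list of disabled message type URLs and checks every incoming message against it. That idea can be sketched with a minimal in-memory model (hypothetical names; the real keeper persists the disable list in state and gates Trip/Reset behind account permissions):

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package main

import "fmt"

// minimalBreaker is a toy model of the circuit breaker: a set of
// disabled Msg type URLs.
type minimalBreaker struct {
	disabled map[string]bool
}

func newMinimalBreaker() *minimalBreaker {
	return &minimalBreaker{disabled: map[string]bool{}}
}

// IsAllowed mirrors the check performed at the enforcement points:
// a message type is allowed unless it has been tripped.
func (b *minimalBreaker) IsAllowed(typeURL string) bool {
	return !b.disabled[typeURL]
}

// Trip disables a message type; Reset re-enables it.
func (b *minimalBreaker) Trip(typeURL string)  { b.disabled[typeURL] = true }
func (b *minimalBreaker) Reset(typeURL string) { delete(b.disabled, typeURL) }

func main() {
	b := newMinimalBreaker()
	url := "/cosmos.bank.v1beta1.MsgSend"

	fmt.Println(b.IsAllowed(url)) // true
	b.Trip(url)                   // bank sends now rejected
	fmt.Println(b.IsAllowed(url)) // false
	b.Reset(url)                  // bank sends allowed again
	fmt.Println(b.IsAllowed(url)) // true
}
```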
The transactions are checked and can be rejected at two points: * In `CircuitBreakerDecorator` [ante handler](/sdk/latest/learn/concepts/baseapp#antehandler): ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} package ante import ( "context" "github.com/cockroachdb/errors" sdk "github.com/cosmos/cosmos-sdk/types" ) // CircuitBreaker is an interface that defines the methods for a circuit breaker. type CircuitBreaker interface { IsAllowed(ctx context.Context, typeURL string) (bool, error) } // CircuitBreakerDecorator is an AnteDecorator that checks if the transaction type is allowed to enter the mempool or be executed type CircuitBreakerDecorator struct { circuitKeeper CircuitBreaker } func NewCircuitBreakerDecorator(ck CircuitBreaker) CircuitBreakerDecorator { return CircuitBreakerDecorator{ circuitKeeper: ck, } } func (cbd CircuitBreakerDecorator) AnteHandle(ctx sdk.Context, tx sdk.Tx, simulate bool, next sdk.AnteHandler) (sdk.Context, error) { // loop through all the messages and check if the message type is allowed for _, msg := range tx.GetMsgs() { isAllowed, err := cbd.circuitKeeper.IsAllowed(ctx, sdk.MsgTypeURL(msg)) if err != nil { return ctx, err } if !isAllowed { return ctx, errors.New("tx type not allowed") } } return next(ctx, tx, simulate) } ``` * With a [message router check](/sdk/latest/learn/concepts/baseapp#msg-service-router): ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} package baseapp import ( "context" "fmt" gogogrpc "github.com/cosmos/gogoproto/grpc" "github.com/cosmos/gogoproto/proto" "google.golang.org/grpc" "google.golang.org/protobuf/runtime/protoiface" errorsmod "cosmossdk.io/errors" "github.com/cosmos/cosmos-sdk/baseapp/internal/protocompat" "github.com/cosmos/cosmos-sdk/codec" codectypes "github.com/cosmos/cosmos-sdk/codec/types" sdk "github.com/cosmos/cosmos-sdk/types" sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" 
) // MessageRouter ADR 031 request type routing // https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-031-msg-service.md type MessageRouter interface { Handler(msg sdk.Msg) MsgServiceHandler HandlerByTypeURL(typeURL string) MsgServiceHandler } // MsgServiceRouter routes fully-qualified Msg service methods to their handler. type MsgServiceRouter struct { interfaceRegistry codectypes.InterfaceRegistry routes map[string]MsgServiceHandler hybridHandlers map[string]func(ctx context.Context, req, resp protoiface.MessageV1) error circuitBreaker CircuitBreaker } var _ gogogrpc.Server = &MsgServiceRouter{ } // NewMsgServiceRouter creates a new MsgServiceRouter. func NewMsgServiceRouter() *MsgServiceRouter { return &MsgServiceRouter{ routes: map[string]MsgServiceHandler{ }, hybridHandlers: map[string]func(ctx context.Context, req, resp protoiface.MessageV1) error{ }, } } func (msr *MsgServiceRouter) SetCircuit(cb CircuitBreaker) { msr.circuitBreaker = cb } // MsgServiceHandler defines a function type which handles Msg service message. type MsgServiceHandler = func(ctx sdk.Context, req sdk.Msg) (*sdk.Result, error) // Handler returns the MsgServiceHandler for a given msg or nil if not found. func (msr *MsgServiceRouter) Handler(msg sdk.Msg) MsgServiceHandler { return msr.routes[sdk.MsgTypeURL(msg)] } // HandlerByTypeURL returns the MsgServiceHandler for a given query route path or nil // if not found. func (msr *MsgServiceRouter) HandlerByTypeURL(typeURL string) MsgServiceHandler { return msr.routes[typeURL] } // RegisterService implements the gRPC Server.RegisterService method. sd is a gRPC // service description, handler is an object which implements that gRPC service. // // This function PANICs: // - if it is called before the service `Msg`s have been registered using // RegisterInterfaces, // - or if a service is being registered twice. 
func (msr *MsgServiceRouter) RegisterService(sd *grpc.ServiceDesc, handler interface{ }) { // Adds a top-level query handler based on the gRPC service name. for _, method := range sd.Methods { err := msr.registerMsgServiceHandler(sd, method, handler) if err != nil { panic(err) } err = msr.registerHybridHandler(sd, method, handler) if err != nil { panic(err) } } } func (msr *MsgServiceRouter) HybridHandlerByMsgName(msgName string) func(ctx context.Context, req, resp protoiface.MessageV1) error { return msr.hybridHandlers[msgName] } func (msr *MsgServiceRouter) registerHybridHandler(sd *grpc.ServiceDesc, method grpc.MethodDesc, handler interface{ }) error { inputName, err := protocompat.RequestFullNameFromMethodDesc(sd, method) if err != nil { return err } cdc := codec.NewProtoCodec(msr.interfaceRegistry) hybridHandler, err := protocompat.MakeHybridHandler(cdc, sd, method, handler) if err != nil { return err } // if circuit breaker is not nil, then we decorate the hybrid handler with the circuit breaker if msr.circuitBreaker == nil { msr.hybridHandlers[string(inputName)] = hybridHandler return nil } // decorate the hybrid handler with the circuit breaker circuitBreakerHybridHandler := func(ctx context.Context, req, resp protoiface.MessageV1) error { messageName := codectypes.MsgTypeURL(req) allowed, err := msr.circuitBreaker.IsAllowed(ctx, messageName) if err != nil { return err } if !allowed { return fmt.Errorf("circuit breaker disallows execution of message %s", messageName) } return hybridHandler(ctx, req, resp) } msr.hybridHandlers[string(inputName)] = circuitBreakerHybridHandler return nil } func (msr *MsgServiceRouter) registerMsgServiceHandler(sd *grpc.ServiceDesc, method grpc.MethodDesc, handler interface{ }) error { fqMethod := fmt.Sprintf("/%s/%s", sd.ServiceName, method.MethodName) methodHandler := method.Handler var requestTypeName string // NOTE: This is how we pull the concrete request type for each handler for registering in the InterfaceRegistry. 
// This approach is maybe a bit hacky, but less hacky than reflecting on the handler object itself. // We use a no-op interceptor to avoid actually calling into the handler itself. _, _ = methodHandler(nil, context.Background(), func(i interface{ }) error { msg, ok := i.(sdk.Msg) if !ok { // We panic here because there is no other alternative and the app cannot be initialized correctly // this should only happen if there is a problem with code generation in which case the app won't // work correctly anyway. panic(fmt.Errorf("unable to register service method %s: %T does not implement sdk.Msg", fqMethod, i)) } requestTypeName = sdk.MsgTypeURL(msg) return nil }, noopInterceptor) // Check that the service Msg fully-qualified method name has already // been registered (via RegisterInterfaces). If the user registers a // service without registering according service Msg type, there might be // some unexpected behavior down the road. Since we can't return an error // (`Server.RegisterService` interface restriction) we panic (at startup). reqType, err := msr.interfaceRegistry.Resolve(requestTypeName) if err != nil || reqType == nil { return fmt.Errorf( "type_url %s has not been registered yet. "+ "Before calling RegisterService, you must register all interfaces by calling the `RegisterInterfaces` "+ "method on module.BasicManager. Each module should call `msgservice.RegisterMsgServiceDesc` inside its "+ "`RegisterInterfaces` method with the `_Msg_serviceDesc` generated by proto-gen", requestTypeName, ) } // Check that each service is only registered once. If a service is // registered more than once, then we should error. Since we can't // return an error (`Server.RegisterService` interface restriction) we // panic (at startup). _, found := msr.routes[requestTypeName] if found { return fmt.Errorf( "msg service %s has already been registered. Please make sure to only register each service once. 
"+ "This usually means that there are conflicting modules registering the same msg service", fqMethod, ) } msr.routes[requestTypeName] = func(ctx sdk.Context, msg sdk.Msg) (*sdk.Result, error) { ctx = ctx.WithEventManager(sdk.NewEventManager()) interceptor := func(goCtx context.Context, _ interface{ }, _ *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{ }, error) { goCtx = context.WithValue(goCtx, sdk.SdkContextKey, ctx) return handler(goCtx, msg) } if m, ok := msg.(sdk.HasValidateBasic); ok { if err := m.ValidateBasic(); err != nil { return nil, err } } if msr.circuitBreaker != nil { msgURL := sdk.MsgTypeURL(msg) isAllowed, err := msr.circuitBreaker.IsAllowed(ctx, msgURL) if err != nil { return nil, err } if !isAllowed { return nil, fmt.Errorf("circuit breaker disables execution of this message: %s", msgURL) } } // Call the method handler from the service description with the handler object. // We don't do any decoding here because the decoding was already done. res, err := methodHandler(handler, ctx, noopDecoder, interceptor) if err != nil { return nil, err } resMsg, ok := res.(proto.Message) if !ok { return nil, errorsmod.Wrapf(sdkerrors.ErrInvalidType, "Expecting proto.Message, got %T", resMsg) } return sdk.WrapServiceResult(ctx, resMsg, err) } return nil } // SetInterfaceRegistry sets the interface registry for the router. func (msr *MsgServiceRouter) SetInterfaceRegistry(interfaceRegistry codectypes.InterfaceRegistry) { msr.interfaceRegistry = interfaceRegistry } func noopDecoder(_ interface{ }) error { return nil } func noopInterceptor(_ context.Context, _ interface{ }, _ *grpc.UnaryServerInfo, _ grpc.UnaryHandler) (interface{ }, error) { return nil, nil } ``` The `CircuitBreakerDecorator` works for most use cases, but [does not check the inner messages of a transaction](/sdk/latest/learn/concepts/lifecycle#antehandler). This means some transactions (such as `x/authz` transactions or some `x/gov` transactions) may pass the ante handler.
**This does not affect the circuit breaker** as the message router check will still fail the transaction. This tradeoff is to avoid introducing more dependencies in the `x/circuit` module. Chains can re-define the `CircuitBreakerDecorator` to check for inner messages if they wish to do so.

## State

### Accounts

* AccountPermissions `0x1 | account_address -> ProtocolBuffer(CircuitBreakerPermissions)`

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type level int32

const (
	// LEVEL_NONE_UNSPECIFIED indicates that the account will have no circuit
	// breaker permissions.
	LEVEL_NONE_UNSPECIFIED = iota
	// LEVEL_SOME_MSGS indicates that the account will have permission to
	// trip or reset the circuit breaker for some Msg type URLs. If this level
	// is chosen, a non-empty list of Msg type URLs must be provided in
	// limit_type_urls.
	LEVEL_SOME_MSGS
	// LEVEL_ALL_MSGS indicates that the account can trip or reset the circuit
	// breaker for Msg's of all type URLs.
	LEVEL_ALL_MSGS
	// LEVEL_SUPER_ADMIN indicates that the account can take all circuit breaker
	// actions and can grant permissions to other accounts.
	LEVEL_SUPER_ADMIN
)

type Access struct {
	level int32
	msgs  []string // if full permission, msgs can be empty
}
```

### Disable List

List of type URLs that are disabled.

* DisableList `0x2 | msg_type_url -> []byte{}`

## State Transitions

### Authorize

Authorize is called by the module authority (by default, the governance module account) or any account with `LEVEL_SUPER_ADMIN` to grant another account permission to disable/enable messages. There are three levels of permissions that can be granted: `LEVEL_SOME_MSGS` limits the permission to a specified set of message type URLs, `LEVEL_ALL_MSGS` permits all messages to be disabled, and `LEVEL_SUPER_ADMIN` allows an account to take all circuit breaker actions, including authorizing and deauthorizing other accounts.
```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// AuthorizeCircuitBreaker allows a super-admin to grant (or revoke) another
// account's circuit breaker permissions.
rpc AuthorizeCircuitBreaker(MsgAuthorizeCircuitBreaker) returns (MsgAuthorizeCircuitBreakerResponse);
```

### Trip

Trip is called by an authorized account to disable message execution for a specific msg type URL. If the list of msg URLs is empty, all messages will be disabled.

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// TripCircuitBreaker pauses processing of Msg's in the state machine.
rpc TripCircuitBreaker(MsgTripCircuitBreaker) returns (MsgTripCircuitBreakerResponse);
```

### Reset

Reset is called by an authorized account to re-enable execution of a previously disabled msg type URL. If the list of msg URLs is empty, all disabled messages will be enabled.

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// ResetCircuitBreaker resumes processing of Msg's in the state machine that
// have been paused using TripCircuitBreaker.
rpc ResetCircuitBreaker(MsgResetCircuitBreaker) returns (MsgResetCircuitBreakerResponse);
```

## Messages

### MsgAuthorizeCircuitBreaker

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/main/proto/cosmos/circuit/v1/tx.proto#L25-L75
```

This message is expected to fail if:

* the granter is not an account with permission level `LEVEL_SUPER_ADMIN` or the module authority

### MsgTripCircuitBreaker

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/main/proto/cosmos/circuit/v1/tx.proto#L77-L93
```

This message is expected to fail if:

* the signer does not have a permission level with the ability to disable the specified type URL message

### MsgResetCircuitBreaker

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/main/proto/cosmos/circuit/v1/tx.proto#L95-L109
```

This message is expected to fail if:

* the type URL is not disabled

## Events

The circuit module emits the following events:

### Message Events

#### MsgAuthorizeCircuitBreaker

| Type    | Attribute Key | Attribute Value             |
| ------- | ------------- | --------------------------- |
| string  | granter       | `{granterAddress}`          |
| string  | grantee       | `{granteeAddress}`          |
| string  | permission    | `{granteePermissions}`      |
| message | module        | circuit                     |
| message | action        | authorize\_circuit\_breaker |

#### MsgTripCircuitBreaker

| Type      | Attribute Key | Attribute Value        |
| --------- | ------------- | ---------------------- |
| string    | authority     | `{authorityAddress}`   |
| \[]string | msg\_urls     | \[]string`{msg\_urls}` |
| message   | module        | circuit                |
| message   | action        | trip\_circuit\_breaker |

#### MsgResetCircuitBreaker

| Type      | Attribute Key | Attribute Value         |
| --------- | ------------- | ----------------------- |
| string    | authority     | `{authorityAddress}`    |
| \[]string | msg\_urls     | \[]string`{msg\_urls}`  |
| message   | module        | circuit                 |
| message   | action        | reset\_circuit\_breaker |

## Keys

The circuit module uses the following key prefixes:

* `AccountPermissionPrefix` - `0x01`
* `DisableListPrefix` - `0x02`

## Examples: Using Circuit Breaker CLI Commands

This section provides practical examples for using the Circuit Breaker module through the command-line interface (CLI). These examples demonstrate how to authorize accounts, disable (trip) specific message types, and re-enable (reset) them when needed.

### Querying Circuit Breaker Permissions

Check an account's current circuit breaker permissions:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
# Query permissions for a specific account
simd query circuit account-permissions [address]

# Example:
simd query circuit account-permissions cosmos1...
```

Check which message types are currently disabled:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
# Query all disabled message types
query circuit disabled-list

# Example:
simd query circuit disabled-list
```

### Authorizing an Account as Circuit Breaker

Only a super-admin or the module authority (typically the governance module account) can grant circuit breaker permissions to other accounts:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
# Grant LEVEL_ALL_MSGS permission (can disable any message type)
tx circuit authorize <grantee-address> --level=ALL_MSGS --from=<key> --gas=auto --gas-adjustment=1.5

# Grant LEVEL_SOME_MSGS permission (can only disable specific message types)
tx circuit authorize <grantee-address> --level=SOME_MSGS --limit-type-urls="/cosmos.bank.v1beta1.MsgSend,/cosmos.staking.v1beta1.MsgDelegate" --from=<key> --gas=auto --gas-adjustment=1.5

# Grant LEVEL_SUPER_ADMIN permission (can disable messages and authorize other accounts)
tx circuit authorize <grantee-address> --level=SUPER_ADMIN --from=<key> --gas=auto --gas-adjustment=1.5
```

### Disabling Message Processing (Trip)

Disable specific message types to prevent their execution (requires authorization):

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
# Disable a single message type
tx circuit trip --type-urls="/cosmos.bank.v1beta1.MsgSend" --from=<key> --gas=auto --gas-adjustment=1.5

# Disable multiple message types
tx circuit trip --type-urls="/cosmos.bank.v1beta1.MsgSend,/cosmos.staking.v1beta1.MsgDelegate" --from=<key> --gas=auto --gas-adjustment=1.5

# Disable all message types (emergency measure)
tx circuit trip --from=<key> --gas=auto --gas-adjustment=1.5
```

### Re-enabling Message Processing (Reset)

Re-enable previously disabled message types (requires authorization):

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
# Re-enable a single message type
tx circuit reset --type-urls="/cosmos.bank.v1beta1.MsgSend" --from=<key> --gas=auto --gas-adjustment=1.5

# Re-enable multiple message types
tx circuit reset --type-urls="/cosmos.bank.v1beta1.MsgSend,/cosmos.staking.v1beta1.MsgDelegate" --from=<key> --gas=auto --gas-adjustment=1.5

# Re-enable all disabled message types
tx circuit reset --from=<key> --gas=auto --gas-adjustment=1.5
```

### Usage in Emergency Scenarios

In case of a critical vulnerability in a specific message type:

1. Quickly disable the vulnerable message type:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
tx circuit trip --type-urls="/cosmos.vulnerable.v1beta1.MsgVulnerable" --from=<key> --gas=auto --gas-adjustment=1.5
```

2. After a fix is deployed, re-enable the message type:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
tx circuit reset --type-urls="/cosmos.vulnerable.v1beta1.MsgVulnerable" --from=<key> --gas=auto --gas-adjustment=1.5
```

This allows chains to surgically disable problematic functionality without halting the entire chain, providing time for developers to implement and deploy fixes.

# x/consensus

Source: https://docs.cosmos.network/sdk/latest/modules/consensus/README

Functionality to modify CometBFT's ABCI consensus params.

The `x/consensus` module allows governance to update CometBFT's ABCI consensus parameters on a live chain without a software upgrade.
## Consensus Parameters The module manages the following CometBFT consensus parameters: ### Block Parameters | Parameter | Description | | ---------- | ------------------------------------------ | | `MaxBytes` | Maximum block size in bytes | | `MaxGas` | Maximum gas per block (`-1` for unlimited) | ### Evidence Parameters | Parameter | Description | | ----------------- | ---------------------------------------------- | | `MaxAgeNumBlocks` | Maximum age of evidence in blocks | | `MaxAgeDuration` | Maximum age of evidence as a duration | | `MaxBytes` | Maximum total evidence size per block in bytes | ### Validator Parameters | Parameter | Description | | ------------- | ------------------------------------------------------------------------------------ | | `PubKeyTypes` | Supported public key types for validators (e.g., `ed25519`, `secp256k1`, `bls12381`) | ### ABCI Parameters | Parameter | Description | | ---------------------------- | ------------------------------------------------------------------ | | `VoteExtensionsEnableHeight` | Block height at which vote extensions are enabled (`0` to disable) | ## Messages ### MsgUpdateParams Updates consensus parameters via governance. All of `block`, `evidence`, and `validator` must be provided. `abci` is optional. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} msg := &types.MsgUpdateParams{ Authority: authtypes.NewModuleAddress(govtypes.ModuleName).String(), Block: &cmtproto.BlockParams{ MaxBytes: 200000, MaxGas: 100000000, }, Evidence: &cmtproto.EvidenceParams{ MaxAgeNumBlocks: 302400, MaxAgeDuration: 504 * time.Hour, MaxBytes: 10000, }, Validator: &cmtproto.ValidatorParams{ PubKeyTypes: []string{"ed25519"}, }, Abci: &cmtproto.ABCIParams{ VoteExtensionsEnableHeight: 0, }, } ``` ## AuthorityParams Authority management can be centralized via the `x/consensus` module using `AuthorityParams`. 
The `AuthorityParams` field in `ConsensusParams` stores the authority address on-chain. When set, it takes precedence over the per-keeper authority parameter. Keeper constructors still accept the `authority` parameter. It is used as a fallback when no authority is configured in consensus params. ### How It Works When a module validates authority (e.g., in `UpdateParams`), it checks consensus params first. If no authority is set there, it falls back to the keeper's `authority` field: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} authority := sdkCtx.Authority() // from consensus params if authority == "" { authority = k.authority // fallback to keeper field } if authority != msg.Authority { return nil, errors.Wrapf(...) } ``` To enable centralized authority, set the `AuthorityParams` in consensus params via a governance proposal targeting the `x/consensus` module's `MsgUpdateParams`. ## CLI ### Query #### params Query the current consensus parameters: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query consensus params ``` Example Output: ```yml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} params: abci: vote_extensions_enable_height: "0" block: max_bytes: "200000" max_gas: "-1" evidence: max_age_duration: 1814400s max_age_num_blocks: "302400" max_bytes: "10000" validator: pub_key_types: - ed25519 ``` ### Transactions #### update-params-proposal Submit a governance proposal to update consensus parameters: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx consensus update-params-proposal [block] [evidence] [validator] [abci] [flags] ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx consensus update-params-proposal \ '{"max_bytes":"200000","max_gas":"100000000"}' \ 
'{"max_age_num_blocks":"302400","max_age_duration":"1814400s","max_bytes":"10000"}' \ '{"pub_key_types":["ed25519"]}' \ '{"vote_extensions_enable_height":"0"}' \ --from mykey ``` ## gRPC ### Params Query the current consensus parameters: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext localhost:9090 cosmos.consensus.v1.Query/Params ``` Example Output: ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "params": { "block": { "maxBytes": "200000", "maxGas": "-1" }, "evidence": { "maxAgeNumBlocks": "302400", "maxAgeDuration": "1814400s", "maxBytes": "10000" }, "validator": { "pubKeyTypes": ["ed25519"] }, "abci": { "voteExtensionsEnableHeight": "0" } } } ``` ## REST ``` GET /cosmos/consensus/v1/params ``` # x/crisis Source: https://docs.cosmos.network/sdk/latest/modules/crisis/README x/crisis has been moved to ./contrib/x/crisis and is no longer part of the core Cosmos SDK. `x/crisis` has been moved to [`./contrib/x/crisis`](https://github.com/cosmos/cosmos-sdk/tree/main/contrib/x/crisis) and is no longer actively maintained as part of the core Cosmos SDK. It is still available for use but is not included in the SDK Bug Bounty program. The module was moved because it never worked as intended. ## Overview The crisis module halts the blockchain under the circumstance that a blockchain invariant is broken. Invariants can be registered with the application during the application initialization process. ## Contents * [State](#state) * [Messages](#messages) * [Events](#events) * [Parameters](#parameters) * [Client](#client) * [CLI](#cli) ## State ### ConstantFee Due to the anticipated large gas cost requirement to verify an invariant (and potential to exceed the maximum allowable block gas limit) a constant fee is used instead of the standard gas consumption method. 
The constant fee is intended to be larger than the anticipated gas cost of running the invariant with the standard gas consumption method.

The ConstantFee param is stored in the module params state with the prefix `0x01`; it can be updated by governance or the address with authority.

* Params: `crisis/params -> legacy_amino(sdk.Coin)`

## Messages

In this section we describe the processing of the crisis messages and the corresponding updates to the state.

### MsgVerifyInvariant

Blockchain invariants can be checked using the `MsgVerifyInvariant` message.

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/crisis/v1beta1/tx.proto#L26-L42
```

This message is expected to fail if:

* the sender does not have enough coins for the constant fee
* the invariant route is not registered

This message checks the invariant provided, and if the invariant is broken it panics, halting the blockchain. If the invariant is broken, the constant fee is never deducted, as the transaction is never committed to a block (equivalent to being refunded). However, if the invariant is not broken, the constant fee will not be refunded.

## Events

The crisis module emits the following events:

### Handlers

#### MsgVerifyInvariant

| Type      | Attribute Key | Attribute Value    |
| --------- | ------------- | ------------------ |
| invariant | route         | `{invariantRoute}` |
| message   | module        | crisis             |
| message   | action        | verify\_invariant  |
| message   | sender        | `{senderAddress}`  |

## Parameters

The crisis module contains the following parameters:

| Key         | Type          | Example                             |
| ----------- | ------------- | ----------------------------------- |
| ConstantFee | object (coin) | `{"denom":"uatom","amount":"1000"}` |

## Client

### CLI

A user can query and interact with the `crisis` module using the CLI.
#### Transactions

The `tx` commands allow users to interact with the `crisis` module.

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx crisis --help
```

##### invariant-broken

The `invariant-broken` command submits proof that an invariant has been broken, in order to halt the chain.

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx crisis invariant-broken [module-name] [invariant-route] [flags]
```

Example:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx crisis invariant-broken bank total-supply --from=[keyname or address]
```

# x/distribution

Source: https://docs.cosmos.network/sdk/latest/modules/distribution/README

## Overview

This *simple* distribution mechanism describes a functional way to passively distribute rewards between validators and delegators. Note that this mechanism does not distribute funds as precisely as active reward distribution mechanisms and will therefore be upgraded in the future.

The mechanism operates as follows. Collected rewards are pooled globally and divided out passively to validators and delegators. Each validator has the opportunity to charge commission to the delegators on the rewards collected on behalf of the delegators. Fees are collected directly into a global reward pool and validator proposer-reward pool. Due to the nature of passive accounting, whenever a change occurs to a parameter which affects the rate of reward distribution, withdrawal of rewards must also occur.

* Whenever withdrawing, one must withdraw the maximum amount they are entitled to, leaving nothing in the pool.
* Whenever bonding, unbonding, or re-delegating tokens to an existing account, a full withdrawal of the rewards must occur (as the rules for lazy accounting change).
* Whenever a validator chooses to change the commission on rewards, all accumulated commission rewards must be simultaneously withdrawn.

The above scenarios are covered in `hooks.md`.

The distribution mechanism outlined herein is used to lazily distribute the following rewards between validators and associated delegators:

* multi-token fees to be socially distributed
* inflated staked asset provisions
* validator commission on all rewards earned by their delegators' stake

Fees are pooled within a global pool. The mechanisms used allow for validators and delegators to independently and lazily withdraw their rewards.

## Shortcomings

As a part of the lazy computations, each delegator holds an accumulation term specific to each validator, which is used to estimate their approximate fair portion of the tokens held in the global fee pool:

```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
entitlement = delegator-accumulation / all-delegators-accumulation
```

If there were a constant and equal flow of incoming reward tokens every block, this distribution mechanism would be equal to the active distribution (distribute individually to all delegators each block). However, this is unrealistic, so deviations from the active distribution will occur based on fluctuations of incoming reward tokens as well as the timing of reward withdrawals by other delegators.

If you happen to know that incoming rewards are about to significantly increase, you are incentivized to not withdraw until after this event, increasing the worth of your existing *accum*. See [#2764](https://github.com/cosmos/cosmos-sdk/issues/2764) for further details.

## Effect on Staking

Charging commission on Atom provisions while also allowing for Atom provisions to be auto-bonded (distributed directly to the validator's bonded stake) is problematic within BPoS. Fundamentally, these two mechanisms are mutually exclusive.
If both commission and auto-bonding mechanisms are simultaneously applied to the staking token, then the distribution of staking tokens between any validator and its delegators will change with each block. This then necessitates a calculation for every delegation record each block, which is considered computationally expensive.

In conclusion, we can only have Atom commission with unbonded Atom provisions, or bonded Atom provisions with no Atom commission, and we elect to implement the former. Stakeholders wishing to rebond their provisions may elect to set up a script to periodically withdraw and rebond rewards.

## Contents

* [Concepts](#concepts)
* [State](#state)
  * [FeePool](#feepool)
  * [Validator Distribution](#validator-distribution)
  * [Delegation Distribution](#delegation-distribution)
  * [Params](#params)
* [Begin Block](#begin-block)
* [Messages](#messages)
* [Hooks](#hooks)
* [Events](#events)
* [Parameters](#parameters)
* [Client](#client)
  * [CLI](#cli)
  * [gRPC](#grpc)

## Concepts

In Proof of Stake (PoS) blockchains, rewards gained from transaction fees are paid to validators. The fee distribution module fairly distributes the rewards to the validators' constituent delegators.

Rewards are calculated per period. The period is updated each time a validator's delegation changes, for example, when the validator receives a new delegation. The rewards for a single validator can then be calculated by taking the current total rewards and subtracting the total rewards for the period before the delegation started. To learn more, see the [F1 Fee Distribution paper](https://github.com/cosmos/cosmos-sdk/tree/main/docs/spec/fee_distribution/f1_fee_distr.pdf).

The commission to the validator is paid when the validator is removed or when the validator requests a withdrawal. The commission is calculated and incremented at every `BeginBlock` operation to update accumulated fee amounts.
The rewards to a delegator are distributed when the delegation is changed or removed, or a withdrawal is requested. Before rewards are distributed, all slashes to the validator that occurred during the current delegation are applied.

### Reference Counting in F1 Fee Distribution

In F1 fee distribution, the rewards a delegator receives are calculated when their delegation is withdrawn. This calculation must read the cumulative reward-to-token-share terms for the period that ended when they delegated, and for the final period created for the withdrawal.

Additionally, as slashes change the amount of tokens a delegation will have (but we calculate this lazily, only when a delegator un-delegates), we must calculate rewards in separate periods before / after any slashes which occurred in between when a delegator delegated and when they withdrew their rewards. Thus slashes, like delegations, reference the period which was ended by the slash event.

All stored historical rewards records for periods which are no longer referenced by any delegations or any slashes can thus be safely removed, as they will never be read (future delegations and future slashes will always reference future periods). This is implemented by tracking a `ReferenceCount` along with each historical reward storage entry. Each time a new object (delegation or slash) is created which might need to reference the historical record, the reference count is incremented. Each time one object which previously needed to reference the historical record is deleted, the reference count is decremented. If the reference count hits zero, the historical record is deleted.
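The reference-counting bookkeeping described above can be sketched in a few lines of Go. This is an illustrative model only: `historicalReward`, `store`, and the method names are hypothetical, not the SDK's actual storage types.

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package main

import "fmt"

// historicalReward models one historical-rewards record: the cumulative
// reward ratio at the end of a period, plus the number of delegations and
// slashes that still reference it.
type historicalReward struct {
	cumulativeRatio float64
	refCount        int
}

type store struct {
	records map[uint64]*historicalReward
}

// reference is called whenever a new delegation or slash is created that
// will need to read period p's record later.
func (s *store) reference(p uint64) {
	s.records[p].refCount++
}

// dereference is called when such an object is deleted. Once the count
// hits zero the record can never be read again, so it is pruned.
func (s *store) dereference(p uint64) {
	r := s.records[p]
	r.refCount--
	if r.refCount == 0 {
		delete(s.records, p)
	}
}

func main() {
	s := &store{records: map[uint64]*historicalReward{
		7: {cumulativeRatio: 1.25},
	}}
	s.reference(7)   // a delegation starts at period 7
	s.reference(7)   // a slash also ends period 7
	s.dereference(7) // the delegation is withdrawn
	fmt.Println(len(s.records)) // 1: still referenced by the slash
	s.dereference(7) // the slash record is deleted
	fmt.Println(len(s.records)) // 0: historical record pruned
}
```

In the SDK proper the counter lives in the `ValidatorHistoricalRewards` record's `ReferenceCount` field; the sketch only shows why a record can be pruned exactly when its count reaches zero.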
### External Community Pool Keepers

An external community pool keeper is defined as:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// ExternalCommunityPoolKeeper is the interface that an external community pool module keeper must fulfill
// for x/distribution to properly accept it as a community pool fund destination.
type ExternalCommunityPoolKeeper interface {
	// GetCommunityPoolModule gets the module name that funds should be sent to for the community pool.
	// This is the address that x/distribution will send funds to for external management.
	GetCommunityPoolModule() string
	// FundCommunityPool allows an account to directly fund the community fund pool.
	FundCommunityPool(ctx sdk.Context, amount sdk.Coins, senderAddr sdk.AccAddress) error
	// DistributeFromCommunityPool distributes funds from the community pool module account to
	// a receiver address.
	DistributeFromCommunityPool(ctx sdk.Context, amount sdk.Coins, receiveAddr sdk.AccAddress) error
}
```

By default, the distribution module will use a community pool implementation that is internal. An external community pool can be provided to the module, in which case funds are diverted to it instead of to the internal implementation. The reference external community pool maintained by the Cosmos SDK is [`x/protocolpool`](/sdk/latest/modules/protocolpool/README).

## State

### FeePool

All globally tracked parameters for distribution are stored within `FeePool`. Rewards are collected and added to the reward pool and distributed to validators/delegators from here.

Note that the reward pool holds decimal coins (`DecCoins`) to allow for fractions of coins to be received from operations like inflation. When coins are distributed from the pool they are truncated back to `sdk.Coins`, which are non-decimal.
* FeePool: `0x00 -> ProtocolBuffer(FeePool)`

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// coins with decimal
type DecCoins []DecCoin

type DecCoin struct {
	Amount math.LegacyDec
	Denom  string
}
```

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/distribution/v1beta1/distribution.proto#L116-L123
```

### Validator Distribution

Validator distribution information for the relevant validator is updated each time:

1. delegation amount to a validator is updated,
2. any delegator withdraws from a validator, or
3. the validator withdraws its commission.

* ValidatorDistInfo: `0x02 | ValOperatorAddrLen (1 byte) | ValOperatorAddr -> ProtocolBuffer(validatorDistribution)`

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type ValidatorDistInfo struct {
	OperatorAddress     sdk.AccAddress
	SelfBondRewards     sdkmath.DecCoins
	ValidatorCommission types.ValidatorAccumulatedCommission
}
```

### Delegation Distribution

Each delegation distribution only needs to record the height at which it last withdrew fees. Because a delegation must withdraw fees each time its properties change (e.g., its bonded tokens), its properties remain constant between withdrawals, and the delegator's *accumulation* factor can be calculated passively, knowing only the height of the last withdrawal and its current properties.
* DelegationDistInfo: `0x02 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValOperatorAddrLen (1 byte) | ValOperatorAddr -> ProtocolBuffer(delegatorDist)` ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type DelegationDistInfo struct { WithdrawalHeight int64 // last time this delegation withdrew rewards } ``` ### Params The distribution module stores its params in state with the prefix of `0x09`, it can be updated with governance or the address with authority. * Params: `0x09 | ProtocolBuffer(Params)` ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/distribution/v1beta1/distribution.proto#L12-L42 ``` ## Begin Block At each `BeginBlock`, all fees received in the previous block are transferred to the distribution `ModuleAccount` account. When a delegator or validator withdraws their rewards, they are taken out of the `ModuleAccount`. During begin block, the different claims on the fees collected are updated as follows: * The reserve community tax is charged. * The remainder is distributed proportionally by voting power to all bonded validators ### The Distribution Scheme See [params](#params) for description of parameters. Let `fees` be the total fees collected in the previous block, including inflationary rewards to the stake. All fees are collected in a specific module account during the block. During `BeginBlock`, they are sent to the `"distribution"` `ModuleAccount`. No other sending of tokens occurs. Instead, the rewards each account is entitled to are stored, and withdrawals can be triggered through the messages `FundCommunityPool`, `WithdrawValidatorCommission` and `WithdrawDelegatorReward`. 
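The begin-block split described above (community tax charged first, the remainder divided among bonded validators in proportion to voting power) can be sketched numerically. This is an illustrative sketch only: `splitFees` and `delegatorReward` are hypothetical helpers, and plain `float64` stands in for the SDK's decimal coin types.

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package main

import "fmt"

// splitFees charges the community tax first, then divides the remainder
// among bonded validators proportionally to consensus power (the
// proposer gets no extra reward).
func splitFees(fees, communityTax float64, powers map[string]float64) (pool float64, byValidator map[string]float64) {
	total := 0.0
	for _, p := range powers {
		total += p
	}
	pool = communityTax * fees
	voteMul := 1 - communityTax
	byValidator = make(map[string]float64)
	for val, p := range powers {
		byValidator[val] = fees * voteMul * (p / total) // fees * voteMul * powFrac
	}
	return pool, byValidator
}

// delegatorReward applies the delegator's share of the validator's power
// and the validator's commission rate to the validator's reward.
func delegatorReward(validatorReward, delegatorPowerFrac, commission float64) float64 {
	return validatorReward * delegatorPowerFrac * (1 - commission)
}

func main() {
	pool, rewards := splitFees(1000, 0.25, map[string]float64{"valA": 60, "valB": 40})
	fmt.Println(pool)            // 250
	fmt.Println(rewards["valA"]) // 1000 * 0.75 * 0.6 = 450
	// A delegator holding half of valA's power, with 25% commission:
	fmt.Println(delegatorReward(rewards["valA"], 0.5, 0.25)) // 168.75
}
```

With a 25% community tax and a 60/40 power split, the pool receives 250 of 1000 collected fees and `valA` receives `1000 * 0.75 * 0.6 = 450`, matching the `fees * voteMul * powFrac` formula given below.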
#### Reward to the Community Pool

The community pool gets `community_tax * fees`, plus any remaining dust, since validator rewards are always rounded down to the nearest integer value.

#### Using an External Community Pool

Starting with Cosmos SDK v0.53.0, an external community pool, such as `x/protocolpool`, can be used in place of the `x/distribution`-managed community pool. It must satisfy the `ExternalCommunityPoolKeeper` interface defined above. Please view the warning in the next section before deciding to use an external community pool.

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
app.DistrKeeper = distrkeeper.NewKeeper(
	appCodec,
	runtime.NewKVStoreService(keys[distrtypes.StoreKey]),
	app.AccountKeeper,
	app.BankKeeper,
	app.StakingKeeper,
	authtypes.FeeCollectorName,
	authtypes.NewModuleAddress(govtypes.ModuleName).String(),
	distrkeeper.WithExternalCommunityPool(app.ProtocolPoolKeeper), // New option.
) ``` #### External Community Pool Usage Warning When using an external community pool with `x/distribution`, the following handlers will return an error: **QueryService** * `CommunityPool` **MsgService** * `CommunityPoolSpend` * `FundCommunityPool` If you have services that rely on this functionality from `x/distribution`, please update them to use the `x/protocolpool` equivalents. #### Reward To the Validators The proposer receives no extra rewards. All fees are distributed among all the bonded validators, including the proposer, in proportion to their consensus power. ```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} powFrac = validator power / total bonded validator power voteMul = 1 - community_tax ``` All validators receive `fees * voteMul * powFrac`. #### Rewards to Delegators Each validator's rewards are distributed to its delegators. The validator also has a self-delegation that is treated like a regular delegation in distribution calculations. The validator sets a commission rate. The commission rate is flexible, but each validator sets a maximum rate and a maximum daily increase. These maximums cannot be exceeded and protect delegators from sudden increases of validator commission rates to prevent validators from taking all of the rewards. The outstanding rewards that the operator is entitled to are stored in `ValidatorAccumulatedCommission`, while the rewards the delegators are entitled to are stored in `ValidatorCurrentRewards`. The [F1 fee distribution scheme](#concepts) is used to calculate the rewards per delegator as they withdraw or update their delegation, and is thus not handled in `BeginBlock`. #### Example Distribution For this example distribution, the underlying consensus engine selects block proposers in proportion to their power relative to the entire bonded power. All validators are equally performant at including pre-commits in their proposed blocks. 
Holding `(pre_commits included) / (total bonded validator power)` constant, the amortized block reward for the validator is `(validator power / total bonded power) * (1 - community tax rate)` of the total rewards. Consequently, the reward for a single delegator is:

```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
(delegator proportion of the validator power / validator power) * (validator power / total bonded power)
  * (1 - community tax rate) * (1 - validator commission rate)
= (delegator proportion of the validator power / total bonded power)
  * (1 - community tax rate) * (1 - validator commission rate)
```

## Messages

### MsgSetWithdrawAddress

By default, the withdraw address is the delegator address. To change it, a delegator must send a `MsgSetWithdrawAddress` message. Changing the withdraw address is possible only if the parameter `WithdrawAddrEnabled` is set to `true`.

The withdraw address cannot be any of the module accounts. These accounts are blocked from being withdraw addresses by being added to the distribution keeper's `blockedAddrs` array at initialization.

Response:

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/distribution/v1beta1/tx.proto#L49-L60
```

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (k Keeper) SetWithdrawAddr(ctx context.Context, delegatorAddr sdk.AccAddress, withdrawAddr sdk.AccAddress) error {
	if k.blockedAddrs[withdrawAddr.String()] {
		fail with "`{withdrawAddr}` is not allowed to receive external funds"
	}
	if !k.GetWithdrawAddrEnabled(ctx) {
		fail with ErrSetWithdrawAddrDisabled
	}
	k.SetDelegatorWithdrawAddr(ctx, delegatorAddr, withdrawAddr)
}
```

### MsgWithdrawDelegatorReward

A delegator can withdraw its rewards.
Internally in the distribution module, this transaction simultaneously removes the previous delegation with associated rewards, the same as if the delegator simply started a new delegation of the same value. The rewards are sent immediately from the distribution `ModuleAccount` to the withdraw address. Any remainder (truncated decimals) is sent to the community pool. The starting height of the delegation is set to the current validator period, and the reference count for the previous period is decremented. The amount withdrawn is deducted from the `ValidatorOutstandingRewards` variable for the validator.

In the F1 distribution, the total rewards are calculated per validator period, and a delegator receives a piece of those rewards in proportion to their stake in the validator. In basic F1, the total rewards that all the delegators are entitled to between two periods is calculated the following way. Let `R(X)` be the total accumulated rewards up to period `X` divided by the tokens staked at that time. The delegator allocation is `R(X) * delegator_stake`. Then the rewards for all the delegators for staking between periods `A` and `B` are `(R(B) - R(A)) * total stake`. However, these calculated rewards don't account for slashing.

Taking the slashes into account requires iteration. Let `F(X)` be the fraction a validator is to be slashed for a slashing event that happened at period `X`.
If the validator was slashed at periods `P1, ..., PN`, where `A < P1` and `PN < B`, the distribution module calculates the individual delegator's rewards, `T(A, B)`, as follows:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
stake := initial stake
rewards := 0
previous := A
for P in P1, ..., PN:
    rewards = rewards + (R(P) - R(previous)) * stake
    stake = stake * (1 - F(P))
    previous = P
rewards = rewards + (R(B) - R(PN)) * stake
```

The historical rewards are calculated retroactively by playing back all the slashes and attenuating the delegator's stake at each step. The final calculated stake is equivalent to the actual staked coins in the delegation, with a margin of error due to rounding.

Response:

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/distribution/v1beta1/tx.proto#L66-L77
```

### WithdrawValidatorCommission

The validator can send the WithdrawValidatorCommission message to withdraw their accumulated commission. The commission is calculated in every block during `BeginBlock`, so no iteration is required to withdraw. The amount withdrawn is deducted from the `ValidatorOutstandingRewards` variable for the validator. Only integer amounts can be sent. If the accumulated rewards have decimals, the amount is truncated before the withdrawal is sent, and the remainder is left to be withdrawn later.

### FundCommunityPool

This handler will return an error if an `ExternalCommunityPool` is used.

This message sends coins directly from the sender to the community pool. The transaction fails if the amount cannot be transferred from the sender to the distribution module account.
```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (k Keeper) FundCommunityPool(ctx context.Context, amount sdk.Coins, sender sdk.AccAddress) error { if err := k.bankKeeper.SendCoinsFromAccountToModule(ctx, sender, types.ModuleName, amount); err != nil { return err } feePool, err := k.FeePool.Get(ctx) if err != nil { return err } feePool.CommunityPool = feePool.CommunityPool.Add(sdk.NewDecCoinsFromCoins(amount...)...) if err := k.FeePool.Set(ctx, feePool); err != nil { return err } return nil } ``` ### Common distribution operations These operations take place during many different messages. #### Initialize delegation Each time a delegation is changed, the rewards are withdrawn and the delegation is reinitialized. Initializing a delegation increments the validator period and keeps track of the starting period of the delegation. ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // initialize starting info for a new delegation func (k Keeper) initializeDelegation(ctx context.Context, val sdk.ValAddress, del sdk.AccAddress) { // period has already been incremented - we want to store the period ended by this delegation action previousPeriod := k.GetValidatorCurrentRewards(ctx, val).Period - 1 // increment reference count for the period we're going to track k.incrementReferenceCount(ctx, val, previousPeriod) validator := k.stakingKeeper.Validator(ctx, val) delegation := k.stakingKeeper.Delegation(ctx, del, val) // calculate delegation stake in tokens // we don't store directly, so multiply delegation shares * (tokens per share) // note: necessary to truncate so we don't allow withdrawing more rewards than owed stake := validator.TokensFromSharesTruncated(delegation.GetShares()) k.SetDelegatorStartingInfo(ctx, val, del, types.NewDelegatorStartingInfo(previousPeriod, stake, uint64(ctx.BlockHeight()))) } ``` ### MsgUpdateParams Distribution module params 
can be updated through `MsgUpdateParams` via a governance proposal; the signer is always the gov module account address.

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/distribution/v1beta1/tx.proto#L133-L147
```

The message handling can fail if:

* the signer is not the gov module account address.

## Hooks

Available hooks that can be called by and from this module.

### Create or modify delegation distribution

* triggered-by: `staking.MsgDelegate`, `staking.MsgBeginRedelegate`, `staking.MsgUndelegate`

#### Before

* The delegation rewards are withdrawn to the withdraw address of the delegator. The rewards include the current period and exclude the starting period.
* The validator period is incremented because the validator's power and share distribution might have changed.
* The reference count for the delegator's starting period is decremented.

#### After

The starting height of the delegation is set to the previous period. Because of the `Before`-hook, this period is the last period for which the delegator was rewarded.

### Validator created

* triggered-by: `staking.MsgCreateValidator`

When a validator is created, the following validator variables are initialized:

* Historical rewards
* Current accumulated rewards
* Accumulated commission
* Total outstanding rewards
* Period

By default, all values are set to `0`, except the period, which is set to `1`.

### Validator removed

* triggered-by: `staking.RemoveValidator`

Outstanding commission is sent to the validator's self-delegation withdrawal address. Remaining delegator rewards are sent to the community fee pool.

Note: The validator is removed only when it has no remaining delegations. At that time, all outstanding delegator rewards will have been withdrawn. Any remaining rewards are dust amounts.
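The reference counts that these hooks increment and decrement can be pictured with a small self-contained sketch. The types here are hypothetical stand-ins, not SDK code: a period's historical record is kept alive by the delegations and slash events that reference it, and pruned once the count reaches zero.

```go
package main

import "fmt"

// historicalRecord is a hypothetical stand-in for a validator's historical
// rewards entry: a cumulative reward ratio plus a reference count.
type historicalRecord struct {
	cumulativeRatio float64
	referenceCount  int
}

// refCounter tracks which period records are still referenced. Callers must
// only increment/decrement periods that exist in the map (sketch only).
type refCounter struct {
	records map[int64]*historicalRecord
}

func (rc *refCounter) increment(period int64) {
	rc.records[period].referenceCount++
}

// decrement lowers the count and deletes the record at zero, mirroring how
// unreferenced historical rewards are pruned from state.
func (rc *refCounter) decrement(period int64) {
	r := rc.records[period]
	r.referenceCount--
	if r.referenceCount == 0 {
		delete(rc.records, period)
	}
}

func main() {
	rc := &refCounter{records: map[int64]*historicalRecord{
		7: {cumulativeRatio: 1.5, referenceCount: 1}, // referenced by one delegation
	}}
	rc.increment(7)              // a slash event at period 7 adds a reference
	rc.decrement(7)              // the delegation withdraws and re-initializes
	fmt.Println(len(rc.records)) // still held by the slash event
	rc.decrement(7)
	fmt.Println(len(rc.records)) // now pruned
}
```

This is why the hooks above are careful to pair every increment with a later decrement: a leaked reference would keep stale historical records in state forever.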
### Validator is slashed * triggered-by: `staking.Slash` * The current validator period reference count is incremented. The reference count is incremented because the slash event has created a reference to it. * The validator period is incremented. * The slash event is stored for later use. The slash event will be referenced when calculating delegator rewards. ## Events The distribution module emits the following events: ### BeginBlocker | Type | Attribute Key | Attribute Value | | ---------------- | ------------- | -------------------- | | proposer\_reward | validator | `{validatorAddress}` | | proposer\_reward | reward | `{proposerReward}` | | commission | amount | `{commissionAmount}` | | commission | validator | `{validatorAddress}` | | rewards | amount | `{rewardAmount}` | | rewards | validator | `{validatorAddress}` | ### Handlers #### MsgSetWithdrawAddress | Type | Attribute Key | Attribute Value | | ---------------------- | ----------------- | ---------------------- | | set\_withdraw\_address | withdraw\_address | `{withdrawAddress}` | | message | module | distribution | | message | action | set\_withdraw\_address | | message | sender | `{senderAddress}` | #### MsgWithdrawDelegatorReward | Type | Attribute Key | Attribute Value | | ----------------- | ------------- | --------------------------- | | withdraw\_rewards | amount | `{rewardAmount}` | | withdraw\_rewards | validator | `{validatorAddress}` | | message | module | distribution | | message | action | withdraw\_delegator\_reward | | message | sender | `{senderAddress}` | #### MsgWithdrawValidatorCommission | Type | Attribute Key | Attribute Value | | -------------------- | ------------- | ------------------------------- | | withdraw\_commission | amount | `{commissionAmount}` | | message | module | distribution | | message | action | withdraw\_validator\_commission | | message | sender | `{senderAddress}` | ## Parameters The distribution module contains the following parameters: | Key | Type | Example 
|
| ------------------- | ------------ | --------------------------- |
| communitytax | string (dec) | "0.020000000000000000" \[0] |
| withdrawaddrenabled | bool | true |

* \[0] `communitytax` must be positive and cannot exceed 1.00.
* `baseproposerreward` and `bonusproposerreward` were deprecated in v0.47 and are no longer used.

The reserve pool is the pool of collected funds for use by governance, taken via the `CommunityTax`. Currently in the Cosmos SDK, tokens collected via the `CommunityTax` are accounted for but unspendable.

## Client

### CLI

A user can query and interact with the `distribution` module using the CLI.

#### Query

The `query` commands allow users to query `distribution` state.

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd query distribution --help
```

##### commission

The `commission` command allows users to query validator commission rewards by address.

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd query distribution commission [address] [flags]
```

Example:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd query distribution commission cosmosvaloper1...
```

Example Output:

```yml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
commission:
- amount: "1000000.000000000000000000"
  denom: stake
```

##### community-pool

The `community-pool` command allows users to query all coin balances within the community pool.
```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query distribution community-pool [flags] ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query distribution community-pool ``` Example Output: ```yml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} pool: - amount: "1000000.000000000000000000" denom: stake ``` ##### params The `params` command allows users to query the parameters of the `distribution` module. ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query distribution params [flags] ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query distribution params ``` Example Output: ```yml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} base_proposer_reward: "0.000000000000000000" bonus_proposer_reward: "0.000000000000000000" community_tax: "0.020000000000000000" withdraw_addr_enabled: true ``` ##### rewards The `rewards` command allows users to query delegator rewards. Users can optionally include the validator address to query rewards earned from a specific validator. ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query distribution rewards [delegator-addr] [validator-addr] [flags] ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query distribution rewards cosmos1... ``` Example Output: ```yml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} rewards: - reward: - amount: "1000000.000000000000000000" denom: stake validator_address: cosmosvaloper1.. 
total:
- amount: "1000000.000000000000000000"
  denom: stake
```

##### slashes

The `slashes` command allows users to query all slashes for a given block range.

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd query distribution slashes [validator] [start-height] [end-height] [flags]
```

Example:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd query distribution slashes cosmosvaloper1... 1 1000
```

Example Output:

```yml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
pagination:
  next_key: null
  total: "0"
slashes:
- validator_period: 20
  fraction: "0.009999999999999999"
```

##### validator-outstanding-rewards

The `validator-outstanding-rewards` command allows users to query all outstanding (un-withdrawn) rewards for a validator and all their delegations.

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd query distribution validator-outstanding-rewards [validator] [flags]
```

Example:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd query distribution validator-outstanding-rewards cosmosvaloper1...
```

Example Output:

```yml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
rewards:
- amount: "1000000.000000000000000000"
  denom: stake
```

##### validator-distribution-info

The `validator-distribution-info` command allows users to query validator commission and self-delegation rewards for a validator.

```shell expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd query distribution validator-distribution-info cosmosvaloper1...
``` Example Output: ```yml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} commission: - amount: "100000.000000000000000000" denom: stake operator_address: cosmosvaloper1... self_bond_rewards: - amount: "100000.000000000000000000" denom: stake ``` ##### validator-historical-rewards The `validator-historical-rewards` command allows users to query historical rewards for a validator at a specific period. ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query distribution validator-historical-rewards [validator] [period] [flags] ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query distribution validator-historical-rewards cosmosvaloper1... 5 ``` Example Output: ```yml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} rewards: cumulative_reward_ratio: - amount: "1000000.000000000000000000" denom: stake reference_count: 2 ``` ##### validator-current-rewards The `validator-current-rewards` command allows users to query current rewards for a validator. ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query distribution validator-current-rewards [validator] [flags] ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query distribution validator-current-rewards cosmosvaloper1... ``` Example Output: ```yml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} rewards: period: "3" rewards: - amount: "1000000.000000000000000000" denom: stake ``` ##### delegator-starting-info The `delegator-starting-info` command allows users to query the starting info for a delegator on a given validator. 
```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query distribution delegator-starting-info [delegator-address] [validator-address] [flags] ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query distribution delegator-starting-info cosmos1... cosmosvaloper1... ``` Example Output: ```yml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} starting_info: creation_height: "10" previous_period: "2" stake: "1000000.000000000000000000" ``` #### Transactions The `tx` commands allow users to interact with the `distribution` module. ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx distribution --help ``` ##### fund-community-pool The `fund-community-pool` command allows users to send funds to the community pool. ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx distribution fund-community-pool [amount] [flags] ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx distribution fund-community-pool 100stake --from cosmos1... ``` ##### set-withdraw-addr The `set-withdraw-addr` command allows users to set the withdraw address for rewards associated with a delegator address. ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx distribution set-withdraw-addr [withdraw-addr] [flags] ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx distribution set-withdraw-addr cosmos1... --from cosmos1... ``` ##### withdraw-all-rewards The `withdraw-all-rewards` command allows users to withdraw all rewards for a delegator. 
```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx distribution withdraw-all-rewards [flags]
```

Example:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx distribution withdraw-all-rewards --from cosmos1...
```

##### withdraw-rewards

The `withdraw-rewards` command allows users to withdraw all rewards from a given delegation address, and optionally withdraw validator commission if the delegation address given is a validator operator and the user provides the `--commission` flag.

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx distribution withdraw-rewards [validator-addr] [flags]
```

Example:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx distribution withdraw-rewards cosmosvaloper1... --from cosmos1... --commission
```

### gRPC

A user can query the `distribution` module using gRPC endpoints.

#### Params

The `Params` endpoint allows users to query parameters of the `distribution` module.

Example:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
grpcurl -plaintext \
  localhost:9090 \
  cosmos.distribution.v1beta1.Query/Params
```

Example Output:

```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "params": {
    "communityTax": "20000000000000000",
    "baseProposerReward": "00000000000000000",
    "bonusProposerReward": "00000000000000000",
    "withdrawAddrEnabled": true
  }
}
```

#### ValidatorDistributionInfo

The `ValidatorDistributionInfo` endpoint queries validator commission and self-delegation rewards for a validator.
Example:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
grpcurl -plaintext \
  -d '{"validator_address":"cosmosvaloper1..."}' \
  localhost:9090 \
  cosmos.distribution.v1beta1.Query/ValidatorDistributionInfo
```

Example Output:

```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "commission": {
    "commission": [
      { "denom": "stake", "amount": "1000000000000000" }
    ]
  },
  "self_bond_rewards": [
    { "denom": "stake", "amount": "1000000000000000" }
  ],
  "validator_address": "cosmosvaloper1..."
}
```

#### ValidatorOutstandingRewards

The `ValidatorOutstandingRewards` endpoint allows users to query rewards of a validator address.

Example:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
grpcurl -plaintext \
  -d '{"validator_address":"cosmosvaloper1..."}' \
  localhost:9090 \
  cosmos.distribution.v1beta1.Query/ValidatorOutstandingRewards
```

Example Output:

```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "rewards": {
    "rewards": [
      { "denom": "stake", "amount": "1000000000000000" }
    ]
  }
}
```

#### ValidatorCommission

The `ValidatorCommission` endpoint allows users to query accumulated commission for a validator.

Example:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
grpcurl -plaintext \
  -d '{"validator_address":"cosmosvaloper1..."}' \
  localhost:9090 \
  cosmos.distribution.v1beta1.Query/ValidatorCommission
```

Example Output:

```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "commission": {
    "commission": [
      { "denom": "stake", "amount": "1000000000000000" }
    ]
  }
}
```

#### ValidatorSlashes

The `ValidatorSlashes` endpoint allows users to query slash events of a validator.
Example:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
grpcurl -plaintext \
  -d '{"validator_address":"cosmosvaloper1..."}' \
  localhost:9090 \
  cosmos.distribution.v1beta1.Query/ValidatorSlashes
```

Example Output:

```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "slashes": [
    { "validator_period": "20", "fraction": "0.009999999999999999" }
  ],
  "pagination": { "total": "1" }
}
```

#### DelegationRewards

The `DelegationRewards` endpoint allows users to query the total rewards accrued by a delegation.

Example:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
grpcurl -plaintext \
  -d '{"delegator_address":"cosmos1...","validator_address":"cosmosvaloper1..."}' \
  localhost:9090 \
  cosmos.distribution.v1beta1.Query/DelegationRewards
```

Example Output:

```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "rewards": [
    { "denom": "stake", "amount": "1000000000000000" }
  ]
}
```

#### DelegationTotalRewards

The `DelegationTotalRewards` endpoint allows users to query the total rewards accrued by each validator.

Example:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
grpcurl -plaintext \
  -d '{"delegator_address":"cosmos1..."}' \
  localhost:9090 \
  cosmos.distribution.v1beta1.Query/DelegationTotalRewards
```

Example Output:

```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "rewards": [
    {
      "validatorAddress": "cosmosvaloper1...",
      "reward": [
        { "denom": "stake", "amount": "1000000000000000" }
      ]
    }
  ],
  "total": [
    { "denom": "stake", "amount": "1000000000000000" }
  ]
}
```

#### DelegatorValidators

The `DelegatorValidators` endpoint allows users to query all validators for a given delegator.
Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"delegator_address":"cosmos1..."}' \ localhost:9090 \ cosmos.distribution.v1beta1.Query/DelegatorValidators ``` Example Output: ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "validators": ["cosmosvaloper1..."] } ``` #### DelegatorWithdrawAddress The `DelegatorWithdrawAddress` endpoint allows users to query the withdraw address of a delegator. Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"delegator_address":"cosmos1..."}' \ localhost:9090 \ cosmos.distribution.v1beta1.Query/DelegatorWithdrawAddress ``` Example Output: ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "withdrawAddress": "cosmos1..." } ``` #### CommunityPool The `CommunityPool` endpoint allows users to query the community pool coins. Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ localhost:9090 \ cosmos.distribution.v1beta1.Query/CommunityPool ``` Example Output: ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "pool": [ { "denom": "stake", "amount": "1000000000000000000" } ] } ``` #### ValidatorHistoricalRewards The `ValidatorHistoricalRewards` endpoint allows users to query historical rewards for a validator at a specific period. This is useful for debugging reward calculations by inspecting internal distribution state. 
Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"validator_address":"cosmosvaloper1...","period":"5"}' \ localhost:9090 \ cosmos.distribution.v1beta1.Query/ValidatorHistoricalRewards ``` Example Output: ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "rewards": { "cumulativeRewardRatio": [ { "denom": "stake", "amount": "1000000000000000" } ], "referenceCount": 2 } } ``` #### ValidatorCurrentRewards The `ValidatorCurrentRewards` endpoint allows users to query current rewards for a validator. Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"validator_address":"cosmosvaloper1..."}' \ localhost:9090 \ cosmos.distribution.v1beta1.Query/ValidatorCurrentRewards ``` Example Output: ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "rewards": { "rewards": [ { "denom": "stake", "amount": "1000000000000000" } ], "period": "3" } } ``` #### DelegatorStartingInfo The `DelegatorStartingInfo` endpoint allows users to query the starting info for a delegator on a given validator. Combined with `ValidatorHistoricalRewards`, this enables verification of reward calculations by retrieving the previous period and stake, then looking up cumulative reward ratios for that period. 
Example:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
grpcurl -plaintext \
  -d '{"delegator_address":"cosmos1...","validator_address":"cosmosvaloper1..."}' \
  localhost:9090 \
  cosmos.distribution.v1beta1.Query/DelegatorStartingInfo
```

Example Output:

```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "startingInfo": {
    "previousPeriod": "2",
    "stake": "1000000000000000000",
    "creationHeight": "10"
  }
}
```

# x/epochs

Source: https://docs.cosmos.network/sdk/latest/modules/epochs/README

## Abstract

Often in the SDK, we would like to run certain code every so often. The purpose of the `epochs` module is to allow other modules to register to be signaled once every period. So another module can specify that it wants to execute code once a week, starting at UTC-time = x. `epochs` creates a generalized epoch interface for other modules so that they can easily be signaled upon such events.

## Contents

1. **[Concepts](#concepts)**
2. **[State](#state)**
3. **[Events](#events)**
4. **[Keeper](#keepers)**
5. **[Hooks](#hooks)**
6. **[Queries](#queries)**

## Concepts

The epochs module defines on-chain timers that execute at fixed time intervals. Other SDK modules can then register logic to be executed at the timer ticks. We refer to the period in between two timer ticks as an "epoch".

Every timer has a unique identifier. Every epoch will have a start time and an end time, where `end time = start time + timer interval`. On mainnet, we only utilize one identifier, with a time interval of `one day`.

The timer will tick at the first block whose block time is greater than the timer end time, and set the start as the prior timer end time. (Notably, it's not set to the block time!) This means that if the chain has been down for a while, you will get one timer tick per block until the timer has caught up.

## State

The Epochs module keeps a single `EpochInfo` per identifier.
This contains the current state of the timer with the corresponding identifier. Its fields are modified at every timer tick. EpochInfos are initialized as part of genesis initialization or upgrade logic, and are only modified on begin blockers.

## Events

The `epochs` module emits the following events:

### BeginBlocker

| Type | Attribute Key | Attribute Value |
| ------------ | ------------- | ----------------- |
| epoch\_start | epoch\_number | `{epoch\_number}` |
| epoch\_start | start\_time | `{start\_time}` |

### EndBlocker

| Type | Attribute Key | Attribute Value |
| ---------- | ------------- | ----------------- |
| epoch\_end | epoch\_number | `{epoch\_number}` |

## Keepers

### Keeper functions

The epochs keeper provides utility functions to manage epochs.

## Hooks

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// the first block whose timestamp is after the duration is counted as the end of the epoch
AfterEpochEnd(ctx sdk.Context, epochIdentifier string, epochNumber int64)

// new epoch is next block of epoch end block
BeforeEpochStart(ctx sdk.Context, epochIdentifier string, epochNumber int64)
```

### How modules receive hooks

In their hook receiver functions, other modules need to filter on `epochIdentifier` and execute their logic only for specific identifiers. The `epochIdentifier` to filter on can be stored in the other module's `Params`, so that it can be modified by governance. The standard pattern looks like this:

```golang theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (k MyModuleKeeper) AfterEpochEnd(ctx sdk.Context, epochIdentifier string, epochNumber int64) {
    params := k.GetParams(ctx)
    if epochIdentifier == params.DistrEpochIdentifier {
        // my logic
    }
}
```

### Panic isolation

If a given epoch hook panics, its state update is reverted, but we keep proceeding through the remaining hooks.
This allows more advanced epoch logic to be used, without concern over state machine halting, or halting subsequent modules. This does mean that if there is behavior you expect from a prior epoch hook, and that epoch hook reverted, your hook may also have an issue. So do keep in mind "what if a prior hook didn't get executed" in the safety checks you consider for a new epoch hook. ## Queries The Epochs module provides the following queries to check the module's state. ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} service Query { // EpochInfos provide running epochInfos rpc EpochInfos(QueryEpochsInfoRequest) returns (QueryEpochsInfoResponse) {} // CurrentEpoch provide current epoch of specified identifier rpc CurrentEpoch(QueryCurrentEpochRequest) returns (QueryCurrentEpochResponse) {} } ``` ### Epoch Infos Query the currently running epochInfos ```sh theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} query epochs epoch-infos ``` **Example** An example output: ```sh expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} epochs: - current_epoch: "183" current_epoch_start_height: "2438409" current_epoch_start_time: "2021-12-18T17:16:09.898160996Z" duration: 86400s epoch_counting_started: true identifier: day start_time: "2021-06-18T17:00:00Z" - current_epoch: "26" current_epoch_start_height: "2424854" current_epoch_start_time: "2021-12-17T17:02:07.229632445Z" duration: 604800s epoch_counting_started: true identifier: week start_time: "2021-06-18T17:00:00Z" ``` ### Current Epoch Query the current epoch by the specified identifier ```sh theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} query epochs current-epoch [identifier] ``` **Example** Query the current `day` epoch: ```sh theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} query epochs current-epoch 
day
```

Which in this example outputs:

```sh theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
current_epoch: "183"
```

# x/evidence

Source: https://docs.cosmos.network/sdk/latest/modules/evidence/README

* [Concepts](#concepts)
* [State](#state)
* [Messages](#messages)
* [Events](#events)
* [Parameters](#parameters)
* [BeginBlock](#beginblock)
* [Client](#client)
* [CLI](#cli)
* [REST](#rest)
* [gRPC](#grpc)

## Abstract

`x/evidence` is an implementation of a Cosmos SDK module, per [ADR 009](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-009-evidence-module.md), that allows for the submission and handling of arbitrary evidence of misbehavior such as equivocation and counterfactual signing.

The evidence module differs from standard evidence handling, which typically expects the underlying consensus engine (e.g. CometBFT) to submit evidence automatically as it is discovered, by allowing clients and foreign chains to submit more complex evidence directly.

All concrete evidence types must implement the `Evidence` interface contract. Submitted `Evidence` is first routed through the evidence module's `Router`, which attempts to find a corresponding registered `Handler` for that specific `Evidence` type. Each `Evidence` type must have a `Handler` registered with the evidence module's keeper in order for it to be successfully routed and executed. Each corresponding handler must also fulfill the `Handler` interface contract. The `Handler` for a given `Evidence` type can perform any arbitrary state transitions such as slashing, jailing, and tombstoning.

## Concepts

### Evidence

Any concrete type of evidence submitted to the `x/evidence` module must fulfill the `Evidence` contract outlined below.
Not all concrete types of evidence will fulfill this contract in the same way and some data may be entirely irrelevant to certain types of evidence. An additional `ValidatorEvidence`, which extends `Evidence`, has also been created to define a contract for evidence against malicious validators. ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Evidence defines the contract which concrete evidence types of misbehavior // must implement. type Evidence interface { proto.Message Route() string String() string Hash() []byte ValidateBasic() error // Height at which the infraction occurred GetHeight() int64 } // ValidatorEvidence extends Evidence interface to define contract // for evidence against malicious validators type ValidatorEvidence interface { Evidence // The consensus address of the malicious validator at time of infraction GetConsensusAddress() sdk.ConsAddress // The total power of the malicious validator at time of infraction GetValidatorPower() int64 // The total validator set power at time of infraction GetTotalPower() int64 } ``` ### Registration & Handling The `x/evidence` module must first know about all types of evidence it is expected to handle. This is accomplished by registering the `Route` method in the `Evidence` contract with what is known as a `Router` (defined below). The `Router` accepts `Evidence` and attempts to find the corresponding `Handler` for the `Evidence` via the `Route` method. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type Router interface { AddRoute(r string, h Handler) Router HasRoute(r string) bool GetRoute(path string) Handler Seal() Sealed() bool } ``` The `Handler` (defined below) is responsible for executing the entirety of the business logic for handling `Evidence`. 
This typically includes validating the evidence, with both stateless checks via `ValidateBasic` and stateful checks via any keepers provided to the `Handler`. In addition, the `Handler` may also take actions such as slashing and jailing a validator. All `Evidence` handled by the `Handler` should be persisted.

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Handler defines an agnostic Evidence handler. The handler is responsible
// for executing all corresponding business logic necessary for verifying the
// evidence as valid. In addition, the Handler may execute any necessary
// slashing and potential jailing.
type Handler func(context.Context, Evidence) error
```

## State

Currently the `x/evidence` module only stores valid submitted `Evidence` in state. The evidence state is also stored and exported in the `x/evidence` module's `GenesisState`.

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// GenesisState defines the evidence module's genesis state.
message GenesisState {
  // evidence defines all the evidence at genesis.
  repeated google.protobuf.Any evidence = 1;
}
```

All `Evidence` is retrieved and stored via a prefix `KVStore` using prefix `0x00` (`KeyPrefixEvidence`).

## Messages

### MsgSubmitEvidence

Evidence is submitted through a `MsgSubmitEvidence` message:

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// MsgSubmitEvidence represents a message that supports submitting arbitrary
// Evidence of misbehavior such as equivocation or counterfactual signing.
message MsgSubmitEvidence {
  string submitter = 1;
  google.protobuf.Any evidence = 2;
}
```

Note, the `Evidence` of a `MsgSubmitEvidence` message must have a corresponding `Handler` registered with the `x/evidence` module's `Router` in order to be processed and routed correctly.
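The registration-and-routing contract can be illustrated with a minimal, self-contained sketch. The local `Evidence`, `Handler`, and `router` types below are simplified stand-ins for the SDK's interfaces, not the actual `x/evidence` implementation:

```go
package main

import (
	"errors"
	"fmt"
)

// Evidence and Handler stand in for the SDK interfaces described above.
type Evidence interface {
	Route() string
	Hash() []byte
}

type Handler func(Evidence) error

// router is a simplified version of the x/evidence Router contract:
// routes can be added until the router is sealed, then only queried.
type router struct {
	routes map[string]Handler
	sealed bool
}

func NewRouter() *router {
	return &router{routes: make(map[string]Handler)}
}

func (r *router) AddRoute(name string, h Handler) *router {
	if r.sealed {
		panic("cannot add a route to a sealed router")
	}
	if _, ok := r.routes[name]; ok {
		panic(fmt.Sprintf("route %q already registered", name))
	}
	r.routes[name] = h
	return r
}

func (r *router) HasRoute(name string) bool    { _, ok := r.routes[name]; return ok }
func (r *router) GetRoute(name string) Handler { return r.routes[name] }
func (r *router) Seal()                        { r.sealed = true }
func (r *router) Sealed() bool                 { return r.sealed }

// submit mimics the dispatch step: look up the handler for the
// evidence's route and execute it, failing if no route exists.
func submit(r *router, e Evidence) error {
	if !r.HasRoute(e.Route()) {
		return errors.New("no handler exists for evidence route: " + e.Route())
	}
	return r.GetRoute(e.Route())(e)
}

// equivocation is a toy concrete evidence type.
type equivocation struct{}

func (equivocation) Route() string { return "equivocation" }
func (equivocation) Hash() []byte  { return []byte{0x01} }

func main() {
	r := NewRouter()
	r.AddRoute("equivocation", func(e Evidence) error {
		fmt.Println("handling evidence on route:", e.Route())
		return nil // a real handler would slash, jail, and tombstone here
	})
	r.Seal()

	if err := submit(r, equivocation{}); err != nil {
		fmt.Println("error:", err)
	}
}
```

Unregistered routes fail at dispatch time, which is why every concrete `Evidence` type needs its `Handler` registered before submission.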
Given the `Evidence` is registered with a corresponding `Handler`, it is processed as follows:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func SubmitEvidence(ctx Context, evidence Evidence) error {
	if _, err := GetEvidence(ctx, evidence.Hash()); err == nil {
		return errorsmod.Wrap(types.ErrEvidenceExists, strings.ToUpper(hex.EncodeToString(evidence.Hash())))
	}
	if !router.HasRoute(evidence.Route()) {
		return errorsmod.Wrap(types.ErrNoEvidenceHandlerExists, evidence.Route())
	}

	handler := router.GetRoute(evidence.Route())
	if err := handler(ctx, evidence); err != nil {
		return errorsmod.Wrap(types.ErrInvalidEvidence, err.Error())
	}

	ctx.EventManager().EmitEvent(
		sdk.NewEvent(
			types.EventTypeSubmitEvidence,
			sdk.NewAttribute(types.AttributeKeyEvidenceHash, strings.ToUpper(hex.EncodeToString(evidence.Hash()))),
		),
	)

	SetEvidence(ctx, evidence)
	return nil
}
```

First, there must not already exist valid submitted `Evidence` with the exact same hash. Second, the `Evidence` is routed to its `Handler` and executed. Finally, if there is no error in handling the `Evidence`, an event is emitted and it is persisted to state.

## Events

The `x/evidence` module emits the following events:

### Handlers

#### MsgSubmitEvidence

| Type             | Attribute Key  | Attribute Value   |
| ---------------- | -------------- | ----------------- |
| submit\_evidence | evidence\_hash | `{evidenceHash}`  |
| message          | module         | evidence          |
| message          | sender         | `{senderAddress}` |
| message          | action         | submit\_evidence  |

## Parameters

The evidence module does not contain any parameters.

## BeginBlock

### Evidence Handling

CometBFT blocks can include [Evidence](https://github.com/cometbft/cometbft/blob/main/spec/abci/abci%2B%2B_basic_concepts.md#evidence) that indicates if a validator committed malicious behavior.
The relevant information is forwarded to the application as ABCI Evidence in `abci.RequestBeginBlock` so that the validator can be punished accordingly.

#### Equivocation

The Cosmos SDK handles two types of evidence inside the ABCI `BeginBlock`:

* `DuplicateVoteEvidence`
* `LightClientAttackEvidence`

The evidence module handles these two evidence types the same way. First, the Cosmos SDK converts the CometBFT concrete evidence type to an SDK `Evidence` interface using `Equivocation` as the concrete type.

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/evidence/v1beta1/evidence.proto#L12-L32
```

For some `Equivocation` submitted in `block` to be valid, it must satisfy:

`Evidence.Timestamp >= block.Timestamp - MaxEvidenceAge`

Where:

* `Evidence.Timestamp` is the timestamp in the block at height `Evidence.Height`
* `block.Timestamp` is the current block timestamp.

If valid `Equivocation` evidence is included in a block, the validator's stake is reduced (slashed) by `SlashFractionDoubleSign` (as defined by the `x/slashing` module) of what their stake was when the infraction occurred, rather than when the evidence was discovered. We want to "follow the stake", i.e., the stake that contributed to the infraction should be slashed, even if it has since been redelegated or started unbonding. In addition, the validator is permanently jailed and tombstoned to make it impossible for that validator to ever re-enter the validator set.
The `Equivocation` evidence is handled as follows:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/x/evidence/keeper/infraction.go#L26-L140
```

**Note:** The slashing, jailing, and tombstoning calls are delegated through the `x/slashing` module, which emits informative events and finally delegates calls to the `x/staking` module. See documentation on slashing and jailing in [State Transitions](/sdk/latest/modules/staking/README#state-transitions).

## Client

### CLI

A user can query and interact with the `evidence` module using the CLI.

#### Query

The `query` command allows users to query `evidence` state.

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd query evidence --help
```

#### evidence

The `evidence` command allows users to list all evidence or query evidence by hash.

Usage:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd query evidence evidence [flags]
```

To query evidence by hash:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd query evidence evidence "DF0C23E8634E480F84B9D5674A7CDC9816466DEC28A3358F73260F68D28D7660"
```

Example Output:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
evidence:
  consensus_address: cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h
  height: 11
  power: 100
  time: "2021-10-20T16:08:38.194017624Z"
```

To get all evidence:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd query evidence list
```

Example Output:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
evidence:
  consensus_address: cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h
  height: 11
  power: 100
  time: "2021-10-20T16:08:38.194017624Z"
pagination:
  next_key: null
  total: "1"
```

### REST

A user can query the `evidence` module using REST endpoints.

#### Evidence

Get evidence by hash:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
/cosmos/evidence/v1beta1/evidence/{hash}
```

Example:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
curl -X GET "http://localhost:1317/cosmos/evidence/v1beta1/evidence/DF0C23E8634E480F84B9D5674A7CDC9816466DEC28A3358F73260F68D28D7660"
```

Example Output:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "evidence": {
    "consensus_address": "cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h",
    "height": "11",
    "power": "100",
    "time": "2021-10-20T16:08:38.194017624Z"
  }
}
```

#### All evidence

Get all evidence:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
/cosmos/evidence/v1beta1/evidence
```

Example:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
curl -X GET "http://localhost:1317/cosmos/evidence/v1beta1/evidence"
```

Example Output:

```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "evidence": [
    {
      "consensus_address": "cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h",
      "height": "11",
      "power": "100",
      "time": "2021-10-20T16:08:38.194017624Z"
    }
  ],
  "pagination": {
    "total": "1"
  }
}
```

### gRPC

A user can query the `evidence` module using gRPC endpoints.
#### Evidence

Get evidence by hash:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
cosmos.evidence.v1beta1.Query/Evidence
```

Example:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
grpcurl -plaintext -d '{"evidence_hash":"DF0C23E8634E480F84B9D5674A7CDC9816466DEC28A3358F73260F68D28D7660"}' localhost:9090 cosmos.evidence.v1beta1.Query/Evidence
```

Example Output:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "evidence": {
    "consensus_address": "cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h",
    "height": "11",
    "power": "100",
    "time": "2021-10-20T16:08:38.194017624Z"
  }
}
```

#### All evidence

Get all evidence:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
cosmos.evidence.v1beta1.Query/AllEvidence
```

Example:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
grpcurl -plaintext localhost:9090 cosmos.evidence.v1beta1.Query/AllEvidence
```

Example Output:

```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "evidence": [
    {
      "consensus_address": "cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h",
      "height": "11",
      "power": "100",
      "time": "2021-10-20T16:08:38.194017624Z"
    }
  ],
  "pagination": {
    "total": "1"
  }
}
```

# x/feegrant

Source: https://docs.cosmos.network/sdk/latest/modules/feegrant/README

## Abstract

This document specifies the fee grant module. For the full ADR, please see [Fee Grant ADR-029](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-029-fee-grant-module.md).

This module allows accounts to grant fee allowances and to use fees from their accounts. Grantees can execute any transaction without the need to maintain sufficient fees.
## Contents

* [Concepts](#concepts)
* [State](#state)
  * [FeeAllowance](#feeallowance)
  * [FeeAllowanceQueue](#feeallowancequeue)
* [Messages](#messages)
  * [Msg/GrantAllowance](#msggrantallowance)
  * [Msg/RevokeAllowance](#msgrevokeallowance)
* [Events](#events)
* [Msg Server](#msg-server)
  * [MsgGrantAllowance](#msggrantallowance-1)
  * [MsgRevokeAllowance](#msgrevokeallowance-1)
  * [Exec fee allowance](#exec-fee-allowance)
* [Client](#client)
  * [CLI](#cli)
  * [gRPC](#grpc)

## Concepts

### Grant

`Grant` is stored in the KVStore to record a grant with full context. Every grant contains `granter`, `grantee`, and the kind of `allowance` granted. `granter` is the account address giving permission to `grantee` (the beneficiary account address) to pay for some or all of `grantee`'s transaction fees. `allowance` defines what kind of fee allowance (`BasicAllowance` or `PeriodicAllowance`, see below) is granted to `grantee`. `allowance` accepts an interface which implements `FeeAllowanceI`, encoded as an `Any` type. There can be only one existing fee grant for a given `grantee` and `granter` pair; self-grants are not allowed.

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/feegrant/v1beta1/feegrant.proto#L83-L93
```

`FeeAllowanceI` looks like:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package feegrant

import (
	"time"

	sdk "github.com/cosmos/cosmos-sdk/types"
)

// FeeAllowance implementations are tied to a given fee delegator and delegatee,
// and are used to enforce fee grant limits.
type FeeAllowanceI interface {
	// Accept can use fee payment requested as well as timestamp of the current block
	// to determine whether or not to process this. This is checked in
	// Keeper.UseGrantedFees and the return values should match how it is handled there.
	//
	// If it returns an error, the fee payment is rejected, otherwise it is accepted.
	// The FeeAllowance implementation is expected to update its internal state
	// and will be saved again after an acceptance.
	//
	// If remove is true (regardless of the error), the FeeAllowance will be deleted from storage
	// (eg. when it is used up). (See call to RevokeAllowance in Keeper.UseGrantedFees)
	Accept(ctx sdk.Context, fee sdk.Coins, msgs []sdk.Msg) (remove bool, err error)

	// ValidateBasic should evaluate this FeeAllowance for internal consistency.
	// Don't allow negative amounts, or negative periods for example.
	ValidateBasic() error

	// ExpiresAt returns the expiry time of the allowance.
	ExpiresAt() (*time.Time, error)
}
```

### Fee Allowance types

There are three types of fee allowances present at the moment:

* `BasicAllowance`
* `PeriodicAllowance`
* `AllowedMsgAllowance`

### BasicAllowance

`BasicAllowance` is permission for `grantee` to use fees from a `granter`'s account. If either the `spend_limit` or the `expiration` is reached, the grant is removed from state.

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/feegrant/v1beta1/feegrant.proto#L15-L28
```

* `spend_limit` is the limit of coins that are allowed to be used from the `granter` account. If it is empty, there is no spend limit and `grantee` can use any number of available coins from the `granter` account address before the expiration.
* `expiration` specifies an optional time when this allowance expires. If the value is left empty, there is no expiry for the grant.
* When a grant is created with empty values for `spend_limit` and `expiration`, it is still a valid grant. It won't restrict the `grantee` from using any number of coins from `granter`, and it won't have any expiration. The only way to restrict the `grantee` is by revoking the grant.
### PeriodicAllowance

`PeriodicAllowance` is a repeating fee allowance for a specified period. It can define when the grant expires, when each period resets, and the maximum number of coins that can be spent within a given period.

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/feegrant/v1beta1/feegrant.proto#L34-L68
```

* `basic` is an instance of `BasicAllowance`, which is optional for a periodic fee allowance. If empty, the grant will have no `expiration` and no `spend_limit`.
* `period` is the specific period of time; after each period passes, `period_can_spend` will be reset.
* `period_spend_limit` specifies the maximum number of coins that can be spent in the period.
* `period_can_spend` is the number of coins left to be spent before the `period_reset` time.
* `period_reset` keeps track of when the next period reset should happen.

### AllowedMsgAllowance

`AllowedMsgAllowance` is a fee allowance that wraps either a `BasicAllowance` or a `PeriodicAllowance`, restricted to the message types allowed by the granter.

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/feegrant/v1beta1/feegrant.proto#L70-L81
```

* `allowance` is either `BasicAllowance` or `PeriodicAllowance`.
* `allowed_messages` is an array of message types allowed to execute the given allowance.

### FeeGranter flag

The `feegrant` module introduces a `FeeGranter` flag to the CLI for executing transactions with a fee granter. When this flag is set, `clientCtx` appends the granter account address to transactions generated through the CLI.
```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} package client import ( "crypto/tls" "fmt" "strings" "github.com/pkg/errors" "github.com/spf13/cobra" "github.com/spf13/pflag" "github.com/tendermint/tendermint/libs/cli" "google.golang.org/grpc" "google.golang.org/grpc/credentials" "google.golang.org/grpc/credentials/insecure" "github.com/cosmos/cosmos-sdk/client/flags" "github.com/cosmos/cosmos-sdk/crypto/keyring" sdk "github.com/cosmos/cosmos-sdk/types" ) // ClientContextKey defines the context key used to retrieve a client.Context from // a command's Context. const ClientContextKey = sdk.ContextKey("client.context") // SetCmdClientContextHandler is to be used in a command pre-hook execution to // read flags that populate a Context and sets that to the command's Context. func SetCmdClientContextHandler(clientCtx Context, cmd *cobra.Command) (err error) { clientCtx, err = ReadPersistentCommandFlags(clientCtx, cmd.Flags()) if err != nil { return err } return SetCmdClientContext(cmd, clientCtx) } // ValidateCmd returns unknown command error or Help display if help flag set func ValidateCmd(cmd *cobra.Command, args []string) error { var unknownCmd string var skipNext bool for _, arg := range args { // search for help flag if arg == "--help" || arg == "-h" { return cmd.Help() } // check if the current arg is a flag switch { case len(arg) > 0 && (arg[0] == '-'): // the next arg should be skipped if the current arg is a // flag and does not use "=" to assign the flag's value if !strings.Contains(arg, "=") { skipNext = true } else { skipNext = false } case skipNext: // skip current arg skipNext = false case unknownCmd == "": // unknown command found // continue searching for help flag unknownCmd = arg } } // return the help screen if no unknown command is found if unknownCmd != "" { err := fmt.Sprintf("unknown command \"%s\" for \"%s\"", unknownCmd, cmd.CalledAs()) // build suggestions for unknown argument if 
suggestions := cmd.SuggestionsFor(unknownCmd); len(suggestions) > 0 { err += "\n\nDid you mean this?\n" for _, s := range suggestions { err += fmt.Sprintf("\t%v\n", s) } } return errors.New(err) } return cmd.Help() } // ReadPersistentCommandFlags returns a Context with fields set for "persistent" // or common flags that do not necessarily change with context. // // Note, the provided clientCtx may have field pre-populated. The following order // of precedence occurs: // // - client.Context field not pre-populated & flag not set: uses default flag value // - client.Context field not pre-populated & flag set: uses set flag value // - client.Context field pre-populated & flag not set: uses pre-populated value // - client.Context field pre-populated & flag set: uses set flag value func ReadPersistentCommandFlags(clientCtx Context, flagSet *pflag.FlagSet) (Context, error) { if clientCtx.OutputFormat == "" || flagSet.Changed(cli.OutputFlag) { output, _ := flagSet.GetString(cli.OutputFlag) clientCtx = clientCtx.WithOutputFormat(output) } if clientCtx.HomeDir == "" || flagSet.Changed(flags.FlagHome) { homeDir, _ := flagSet.GetString(flags.FlagHome) clientCtx = clientCtx.WithHomeDir(homeDir) } if !clientCtx.Simulate || flagSet.Changed(flags.FlagDryRun) { dryRun, _ := flagSet.GetBool(flags.FlagDryRun) clientCtx = clientCtx.WithSimulation(dryRun) } if clientCtx.KeyringDir == "" || flagSet.Changed(flags.FlagKeyringDir) { keyringDir, _ := flagSet.GetString(flags.FlagKeyringDir) // The keyring directory is optional and falls back to the home directory // if omitted. 
if keyringDir == "" { keyringDir = clientCtx.HomeDir } clientCtx = clientCtx.WithKeyringDir(keyringDir) } if clientCtx.ChainID == "" || flagSet.Changed(flags.FlagChainID) { chainID, _ := flagSet.GetString(flags.FlagChainID) clientCtx = clientCtx.WithChainID(chainID) } if clientCtx.Keyring == nil || flagSet.Changed(flags.FlagKeyringBackend) { keyringBackend, _ := flagSet.GetString(flags.FlagKeyringBackend) if keyringBackend != "" { kr, err := NewKeyringFromBackend(clientCtx, keyringBackend) if err != nil { return clientCtx, err } clientCtx = clientCtx.WithKeyring(kr) } } if clientCtx.Client == nil || flagSet.Changed(flags.FlagNode) { rpcURI, _ := flagSet.GetString(flags.FlagNode) if rpcURI != "" { clientCtx = clientCtx.WithNodeURI(rpcURI) client, err := NewClientFromNode(rpcURI) if err != nil { return clientCtx, err } clientCtx = clientCtx.WithClient(client) } } if clientCtx.GRPCClient == nil || flagSet.Changed(flags.FlagGRPC) { grpcURI, _ := flagSet.GetString(flags.FlagGRPC) if grpcURI != "" { var dialOpts []grpc.DialOption useInsecure, _ := flagSet.GetBool(flags.FlagGRPCInsecure) if useInsecure { dialOpts = append(dialOpts, grpc.WithTransportCredentials(insecure.NewCredentials())) } else { dialOpts = append(dialOpts, grpc.WithTransportCredentials(credentials.NewTLS(&tls.Config{ MinVersion: tls.VersionTLS12, }))) } grpcClient, err := grpc.Dial(grpcURI, dialOpts...) if err != nil { return Context{ }, err } clientCtx = clientCtx.WithGRPCClient(grpcClient) } } return clientCtx, nil } // readQueryCommandFlags returns an updated Context with fields set based on flags // defined in AddQueryFlagsToCmd. An error is returned if any flag query fails. // // Note, the provided clientCtx may have field pre-populated. 
The following order // of precedence occurs: // // - client.Context field not pre-populated & flag not set: uses default flag value // - client.Context field not pre-populated & flag set: uses set flag value // - client.Context field pre-populated & flag not set: uses pre-populated value // - client.Context field pre-populated & flag set: uses set flag value func readQueryCommandFlags(clientCtx Context, flagSet *pflag.FlagSet) (Context, error) { if clientCtx.Height == 0 || flagSet.Changed(flags.FlagHeight) { height, _ := flagSet.GetInt64(flags.FlagHeight) clientCtx = clientCtx.WithHeight(height) } if !clientCtx.UseLedger || flagSet.Changed(flags.FlagUseLedger) { useLedger, _ := flagSet.GetBool(flags.FlagUseLedger) clientCtx = clientCtx.WithUseLedger(useLedger) } return ReadPersistentCommandFlags(clientCtx, flagSet) } // readTxCommandFlags returns an updated Context with fields set based on flags // defined in AddTxFlagsToCmd. An error is returned if any flag query fails. // // Note, the provided clientCtx may have field pre-populated. 
The following order // of precedence occurs: // // - client.Context field not pre-populated & flag not set: uses default flag value // - client.Context field not pre-populated & flag set: uses set flag value // - client.Context field pre-populated & flag not set: uses pre-populated value // - client.Context field pre-populated & flag set: uses set flag value func readTxCommandFlags(clientCtx Context, flagSet *pflag.FlagSet) (Context, error) { clientCtx, err := ReadPersistentCommandFlags(clientCtx, flagSet) if err != nil { return clientCtx, err } if !clientCtx.GenerateOnly || flagSet.Changed(flags.FlagGenerateOnly) { genOnly, _ := flagSet.GetBool(flags.FlagGenerateOnly) clientCtx = clientCtx.WithGenerateOnly(genOnly) } if !clientCtx.Offline || flagSet.Changed(flags.FlagOffline) { offline, _ := flagSet.GetBool(flags.FlagOffline) clientCtx = clientCtx.WithOffline(offline) } if !clientCtx.UseLedger || flagSet.Changed(flags.FlagUseLedger) { useLedger, _ := flagSet.GetBool(flags.FlagUseLedger) clientCtx = clientCtx.WithUseLedger(useLedger) } if clientCtx.BroadcastMode == "" || flagSet.Changed(flags.FlagBroadcastMode) { bMode, _ := flagSet.GetString(flags.FlagBroadcastMode) clientCtx = clientCtx.WithBroadcastMode(bMode) } if !clientCtx.SkipConfirm || flagSet.Changed(flags.FlagSkipConfirmation) { skipConfirm, _ := flagSet.GetBool(flags.FlagSkipConfirmation) clientCtx = clientCtx.WithSkipConfirmation(skipConfirm) } if clientCtx.SignModeStr == "" || flagSet.Changed(flags.FlagSignMode) { signModeStr, _ := flagSet.GetString(flags.FlagSignMode) clientCtx = clientCtx.WithSignModeStr(signModeStr) } if clientCtx.FeePayer == nil || flagSet.Changed(flags.FlagFeePayer) { payer, _ := flagSet.GetString(flags.FlagFeePayer) if payer != "" { payerAcc, err := sdk.AccAddressFromBech32(payer) if err != nil { return clientCtx, err } clientCtx = clientCtx.WithFeePayerAddress(payerAcc) } } if clientCtx.FeeGranter == nil || flagSet.Changed(flags.FlagFeeGranter) { granter, _ := 
flagSet.GetString(flags.FlagFeeGranter) if granter != "" { granterAcc, err := sdk.AccAddressFromBech32(granter) if err != nil { return clientCtx, err } clientCtx = clientCtx.WithFeeGranterAddress(granterAcc) } } if clientCtx.From == "" || flagSet.Changed(flags.FlagFrom) { from, _ := flagSet.GetString(flags.FlagFrom) fromAddr, fromName, keyType, err := GetFromFields(clientCtx, clientCtx.Keyring, from) if err != nil { return clientCtx, err } clientCtx = clientCtx.WithFrom(from).WithFromAddress(fromAddr).WithFromName(fromName) // If the `from` signer account is a ledger key, we need to use // SIGN_MODE_AMINO_JSON, because ledger doesn't support proto yet. // ref: https://github.com/cosmos/cosmos-sdk/issues/8109 if keyType == keyring.TypeLedger && clientCtx.SignModeStr != flags.SignModeLegacyAminoJSON && !clientCtx.LedgerHasProtobuf { fmt.Println("Default sign-mode 'direct' not supported by Ledger, using sign-mode 'amino-json'.") clientCtx = clientCtx.WithSignModeStr(flags.SignModeLegacyAminoJSON) } } if !clientCtx.IsAux || flagSet.Changed(flags.FlagAux) { isAux, _ := flagSet.GetBool(flags.FlagAux) clientCtx = clientCtx.WithAux(isAux) if isAux { // If the user didn't explicitly set an --output flag, use JSON by // default. if clientCtx.OutputFormat == "" || !flagSet.Changed(cli.OutputFlag) { clientCtx = clientCtx.WithOutputFormat("json") } // If the user didn't explicitly set a --sign-mode flag, use // DIRECT_AUX by default. if clientCtx.SignModeStr == "" || !flagSet.Changed(flags.FlagSignMode) { clientCtx = clientCtx.WithSignModeStr(flags.SignModeDirectAux) } } } return clientCtx, nil } // GetClientQueryContext returns a Context from a command with fields set based on flags // defined in AddQueryFlagsToCmd. An error is returned if any flag query fails. 
// // - client.Context field not pre-populated & flag not set: uses default flag value // - client.Context field not pre-populated & flag set: uses set flag value // - client.Context field pre-populated & flag not set: uses pre-populated value // - client.Context field pre-populated & flag set: uses set flag value func GetClientQueryContext(cmd *cobra.Command) (Context, error) { ctx := GetClientContextFromCmd(cmd) return readQueryCommandFlags(ctx, cmd.Flags()) } // GetClientTxContext returns a Context from a command with fields set based on flags // defined in AddTxFlagsToCmd. An error is returned if any flag query fails. // // - client.Context field not pre-populated & flag not set: uses default flag value // - client.Context field not pre-populated & flag set: uses set flag value // - client.Context field pre-populated & flag not set: uses pre-populated value // - client.Context field pre-populated & flag set: uses set flag value func GetClientTxContext(cmd *cobra.Command) (Context, error) { ctx := GetClientContextFromCmd(cmd) return readTxCommandFlags(ctx, cmd.Flags()) } // GetClientContextFromCmd returns a Context from a command or an empty Context // if it has not been set. func GetClientContextFromCmd(cmd *cobra.Command) Context { if v := cmd.Context().Value(ClientContextKey); v != nil { clientCtxPtr := v.(*Context) return *clientCtxPtr } return Context{ } } // SetCmdClientContext sets a command's Context value to the provided argument. 
func SetCmdClientContext(cmd *cobra.Command, clientCtx Context) error {
	v := cmd.Context().Value(ClientContextKey)
	if v == nil {
		return errors.New("client context not set")
	}

	clientCtxPtr := v.(*Context)
	*clientCtxPtr = clientCtx

	return nil
}
```

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package tx

import (
	"bufio"
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"os"

	gogogrpc "github.com/cosmos/gogoproto/grpc"
	"github.com/spf13/pflag"

	"github.com/cosmos/cosmos-sdk/client"
	"github.com/cosmos/cosmos-sdk/client/input"
	cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types"
	sdk "github.com/cosmos/cosmos-sdk/types"
	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
	"github.com/cosmos/cosmos-sdk/types/tx"
	"github.com/cosmos/cosmos-sdk/types/tx/signing"
	authsigning "github.com/cosmos/cosmos-sdk/x/auth/signing"
)

// GenerateOrBroadcastTxCLI will either generate and print an unsigned transaction
// or sign it and broadcast it, returning an error upon failure.
func GenerateOrBroadcastTxCLI(clientCtx client.Context, flagSet *pflag.FlagSet, msgs ...sdk.Msg) error {
	txf := NewFactoryCLI(clientCtx, flagSet)
	return GenerateOrBroadcastTxWithFactory(clientCtx, txf, msgs...)
}

// GenerateOrBroadcastTxWithFactory will either generate and print an unsigned transaction
// or sign it and broadcast it, returning an error upon failure.
func GenerateOrBroadcastTxWithFactory(clientCtx client.Context, txf Factory, msgs ...sdk.Msg) error {
	// Validate all msgs before generating or broadcasting the tx.
	// We were calling ValidateBasic separately in each CLI handler before.
	// Right now, we're factorizing that call inside this function.
	// ref: https://github.com/cosmos/cosmos-sdk/pull/9236#discussion_r623803504
	for _, msg := range msgs {
		if err := msg.ValidateBasic(); err != nil {
			return err
		}
	}

	// If the --aux flag is set, we simply generate and print the AuxSignerData.
if clientCtx.IsAux { auxSignerData, err := makeAuxSignerData(clientCtx, txf, msgs...) if err != nil { return err } return clientCtx.PrintProto(&auxSignerData) } if clientCtx.GenerateOnly { return txf.PrintUnsignedTx(clientCtx, msgs...) } return BroadcastTx(clientCtx, txf, msgs...) } // BroadcastTx attempts to generate, sign and broadcast a transaction with the // given set of messages. It will also simulate gas requirements if necessary. // It will return an error upon failure. func BroadcastTx(clientCtx client.Context, txf Factory, msgs ...sdk.Msg) error { txf, err := txf.Prepare(clientCtx) if err != nil { return err } if txf.SimulateAndExecute() || clientCtx.Simulate { _, adjusted, err := CalculateGas(clientCtx, txf, msgs...) if err != nil { return err } txf = txf.WithGas(adjusted) _, _ = fmt.Fprintf(os.Stderr, "%s\n", GasEstimateResponse{ GasEstimate: txf.Gas() }) } if clientCtx.Simulate { return nil } tx, err := txf.BuildUnsignedTx(msgs...) if err != nil { return err } if !clientCtx.SkipConfirm { txBytes, err := clientCtx.TxConfig.TxJSONEncoder()(tx.GetTx()) if err != nil { return err } if err := clientCtx.PrintRaw(json.RawMessage(txBytes)); err != nil { _, _ = fmt.Fprintf(os.Stderr, "%s\n", txBytes) } buf := bufio.NewReader(os.Stdin) ok, err := input.GetConfirmation("confirm transaction before signing and broadcasting", buf, os.Stderr) if err != nil || !ok { _, _ = fmt.Fprintf(os.Stderr, "%s\n", "cancelled transaction") return err } } err = Sign(txf, clientCtx.GetFromName(), tx, true) if err != nil { return err } txBytes, err := clientCtx.TxConfig.TxEncoder()(tx.GetTx()) if err != nil { return err } // broadcast to a Tendermint node res, err := clientCtx.BroadcastTx(txBytes) if err != nil { return err } return clientCtx.PrintProto(res) } // CalculateGas simulates the execution of a transaction and returns the // simulation response obtained by the query and the adjusted gas amount. 
func CalculateGas( clientCtx gogogrpc.ClientConn, txf Factory, msgs ...sdk.Msg, ) (*tx.SimulateResponse, uint64, error) { txBytes, err := txf.BuildSimTx(msgs...) if err != nil { return nil, 0, err } txSvcClient := tx.NewServiceClient(clientCtx) simRes, err := txSvcClient.Simulate(context.Background(), &tx.SimulateRequest{ TxBytes: txBytes, }) if err != nil { return nil, 0, err } return simRes, uint64(txf.GasAdjustment() * float64(simRes.GasInfo.GasUsed)), nil } // SignWithPrivKey signs a given tx with the given private key, and returns the // corresponding SignatureV2 if the signing is successful. func SignWithPrivKey( signMode signing.SignMode, signerData authsigning.SignerData, txBuilder client.TxBuilder, priv cryptotypes.PrivKey, txConfig client.TxConfig, accSeq uint64, ) (signing.SignatureV2, error) { var sigV2 signing.SignatureV2 // Generate the bytes to be signed. signBytes, err := txConfig.SignModeHandler().GetSignBytes(signMode, signerData, txBuilder.GetTx()) if err != nil { return sigV2, err } // Sign those bytes signature, err := priv.Sign(signBytes) if err != nil { return sigV2, err } // Construct the SignatureV2 struct sigData := signing.SingleSignatureData{ SignMode: signMode, Signature: signature, } sigV2 = signing.SignatureV2{ PubKey: priv.PubKey(), Data: &sigData, Sequence: accSeq, } return sigV2, nil } // countDirectSigners counts the number of DIRECT signers in a signature data. func countDirectSigners(data signing.SignatureData) int { switch data := data.(type) { case *signing.SingleSignatureData: if data.SignMode == signing.SignMode_SIGN_MODE_DIRECT { return 1 } return 0 case *signing.MultiSignatureData: directSigners := 0 for _, d := range data.Signatures { directSigners += countDirectSigners(d) } return directSigners default: panic("unreachable case") } } // checkMultipleSigners checks that there can be maximum one DIRECT signer in // a tx. 
func checkMultipleSigners(tx authsigning.Tx) error {
	directSigners := 0
	sigsV2, err := tx.GetSignaturesV2()
	if err != nil {
		return err
	}
	for _, sig := range sigsV2 {
		directSigners += countDirectSigners(sig.Data)
		if directSigners > 1 {
			return sdkerrors.ErrNotSupported.Wrap("txs signed with CLI can have maximum 1 DIRECT signer")
		}
	}

	return nil
}

// Sign signs a given tx with a named key. The bytes signed over are canonical.
// The resulting signature will be added to the transaction builder overwriting the previous
// ones if overwrite=true (otherwise, the signature will be appended).
// Signing a transaction with multiple signers in the DIRECT mode is not supported and will
// return an error.
// An error is returned upon failure.
func Sign(txf Factory, name string, txBuilder client.TxBuilder, overwriteSig bool) error {
	if txf.keybase == nil {
		return errors.New("keybase must be set prior to signing a transaction")
	}

	signMode := txf.signMode
	if signMode == signing.SignMode_SIGN_MODE_UNSPECIFIED {
		// use the SignModeHandler's default mode if unspecified
		signMode = txf.txConfig.SignModeHandler().DefaultMode()
	}

	k, err := txf.keybase.Key(name)
	if err != nil {
		return err
	}

	pubKey, err := k.GetPubKey()
	if err != nil {
		return err
	}

	signerData := authsigning.SignerData{
		ChainID:       txf.chainID,
		AccountNumber: txf.accountNumber,
		Sequence:      txf.sequence,
		PubKey:        pubKey,
		Address:       sdk.AccAddress(pubKey.Address()).String(),
	}

	// For SIGN_MODE_DIRECT, calling SetSignatures calls setSignerInfos on
	// TxBuilder under the hood, and SignerInfos is needed to generate the
	// sign bytes. This is the reason for setting SetSignatures here, with a
	// nil signature.
	//
	// Note: this line is not needed for SIGN_MODE_LEGACY_AMINO, but putting it
	// also doesn't affect its generated sign bytes, so for the code's simplicity
	// sake, we put it here.
sigData := signing.SingleSignatureData{ SignMode: signMode, Signature: nil, } sig := signing.SignatureV2{ PubKey: pubKey, Data: &sigData, Sequence: txf.Sequence(), } var prevSignatures []signing.SignatureV2 if !overwriteSig { prevSignatures, err = txBuilder.GetTx().GetSignaturesV2() if err != nil { return err } } // Overwrite or append signer infos. var sigs []signing.SignatureV2 if overwriteSig { sigs = []signing.SignatureV2{ sig } } else { sigs = append(sigs, prevSignatures...) sigs = append(sigs, sig) } if err := txBuilder.SetSignatures(sigs...); err != nil { return err } if err := checkMultipleSigners(txBuilder.GetTx()); err != nil { return err } // Generate the bytes to be signed. bytesToSign, err := txf.txConfig.SignModeHandler().GetSignBytes(signMode, signerData, txBuilder.GetTx()) if err != nil { return err } // Sign those bytes sigBytes, _, err := txf.keybase.Sign(name, bytesToSign) if err != nil { return err } // Construct the SignatureV2 struct sigData = signing.SingleSignatureData{ SignMode: signMode, Signature: sigBytes, } sig = signing.SignatureV2{ PubKey: pubKey, Data: &sigData, Sequence: txf.Sequence(), } if overwriteSig { err = txBuilder.SetSignatures(sig) } else { prevSignatures = append(prevSignatures, sig) err = txBuilder.SetSignatures(prevSignatures...) } if err != nil { return fmt.Errorf("unable to set signatures on payload: %w", err) } // Run optional preprocessing if specified. By default, this is unset // and will return nil. return txf.PreprocessTx(name, txBuilder) } // GasEstimateResponse defines a response definition for tx gas estimation. type GasEstimateResponse struct { GasEstimate uint64 `json:"gas_estimate" yaml:"gas_estimate"` } func (gr GasEstimateResponse) String() string { return fmt.Sprintf("gas estimate: %d", gr.GasEstimate) } // makeAuxSignerData generates an AuxSignerData from the client inputs. 
func makeAuxSignerData(clientCtx client.Context, f Factory, msgs ...sdk.Msg) (tx.AuxSignerData, error) { b := NewAuxTxBuilder() fromAddress, name, _, err := client.GetFromFields(clientCtx, clientCtx.Keyring, clientCtx.From) if err != nil { return tx.AuxSignerData{ }, err } b.SetAddress(fromAddress.String()) if clientCtx.Offline { b.SetAccountNumber(f.accountNumber) b.SetSequence(f.sequence) } else { accNum, seq, err := clientCtx.AccountRetriever.GetAccountNumberSequence(clientCtx, fromAddress) if err != nil { return tx.AuxSignerData{ }, err } b.SetAccountNumber(accNum) b.SetSequence(seq) } err = b.SetMsgs(msgs...) if err != nil { return tx.AuxSignerData{ }, err } if f.tip != nil { if _, err := sdk.AccAddressFromBech32(f.tip.Tipper); err != nil { return tx.AuxSignerData{ }, sdkerrors.ErrInvalidAddress.Wrap("tipper must be a bech32 address") } b.SetTip(f.tip) } err = b.SetSignMode(f.SignMode()) if err != nil { return tx.AuxSignerData{ }, err } key, err := clientCtx.Keyring.Key(name) if err != nil { return tx.AuxSignerData{ }, err } pub, err := key.GetPubKey() if err != nil { return tx.AuxSignerData{ }, err } err = b.SetPubKey(pub) if err != nil { return tx.AuxSignerData{ }, err } b.SetChainID(clientCtx.ChainID) signBz, err := b.GetSignBytes() if err != nil { return tx.AuxSignerData{ }, err } sig, _, err := clientCtx.Keyring.Sign(name, signBz) if err != nil { return tx.AuxSignerData{ }, err } b.SetSignature(sig) return b.GetAuxSignerData() } ``` ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} package tx import ( "github.com/cosmos/gogoproto/proto" "github.com/cosmos/cosmos-sdk/client" "github.com/cosmos/cosmos-sdk/codec" codectypes "github.com/cosmos/cosmos-sdk/codec/types" cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" sdk "github.com/cosmos/cosmos-sdk/types" sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" "github.com/cosmos/cosmos-sdk/types/tx" "github.com/cosmos/cosmos-sdk/types/tx/signing" 
	"github.com/cosmos/cosmos-sdk/x/auth/ante"
	authsigning "github.com/cosmos/cosmos-sdk/x/auth/signing"
)

// wrapper is a wrapper around the tx.Tx proto.Message which retains the raw
// body and auth_info bytes.
type wrapper struct {
	cdc codec.Codec

	tx *tx.Tx

	// bodyBz represents the protobuf encoding of TxBody. This should be encoded
	// from the client using TxRaw if the tx was decoded from the wire
	bodyBz []byte

	// authInfoBz represents the protobuf encoding of AuthInfo. This should be encoded
	// from the client using TxRaw if the tx was decoded from the wire
	authInfoBz []byte

	txBodyHasUnknownNonCriticals bool
}

var (
	_ authsigning.Tx             = &wrapper{}
	_ client.TxBuilder           = &wrapper{}
	_ tx.TipTx                   = &wrapper{}
	_ ante.HasExtensionOptionsTx = &wrapper{}
	_ ExtensionOptionsTxBuilder  = &wrapper{}
)

// ExtensionOptionsTxBuilder defines a TxBuilder that can also set extensions.
type ExtensionOptionsTxBuilder interface {
	client.TxBuilder

	SetExtensionOptions(...*codectypes.Any)
	SetNonCriticalExtensionOptions(...*codectypes.Any)
}

func newBuilder(cdc codec.Codec) *wrapper {
	return &wrapper{
		cdc: cdc,
		tx: &tx.Tx{
			Body: &tx.TxBody{},
			AuthInfo: &tx.AuthInfo{
				Fee: &tx.Fee{},
			},
		},
	}
}

func (w *wrapper) GetMsgs() []sdk.Msg {
	return w.tx.GetMsgs()
}

func (w *wrapper) ValidateBasic() error {
	return w.tx.ValidateBasic()
}

func (w *wrapper) getBodyBytes() []byte {
	if len(w.bodyBz) == 0 {
		// if bodyBz is empty, then marshal the body. bodyBz will generally
		// be set to nil whenever SetBody is called so the result of calling
		// this method should always return the correct bytes. Note that after
		// decoding bodyBz is derived from TxRaw so that it matches what was
		// transmitted over the wire
		var err error
		w.bodyBz, err = proto.Marshal(w.tx.Body)
		if err != nil {
			panic(err)
		}
	}

	return w.bodyBz
}

func (w *wrapper) getAuthInfoBytes() []byte {
	if len(w.authInfoBz) == 0 {
		// if authInfoBz is empty, then marshal the auth info.
authInfoBz will generally // be set to nil whenever SetAuthInfo is called so the result of calling // this method should always return the correct bytes. Note that after // decoding authInfoBz is derived from TxRaw so that it matches what was // transmitted over the wire var err error w.authInfoBz, err = proto.Marshal(w.tx.AuthInfo) if err != nil { panic(err) } } return w.authInfoBz } func (w *wrapper) GetSigners() []sdk.AccAddress { return w.tx.GetSigners() } func (w *wrapper) GetPubKeys() ([]cryptotypes.PubKey, error) { signerInfos := w.tx.AuthInfo.SignerInfos pks := make([]cryptotypes.PubKey, len(signerInfos)) for i, si := range signerInfos { // NOTE: it is okay to leave this nil if there is no PubKey in the SignerInfo. // PubKey's can be left unset in SignerInfo. if si.PublicKey == nil { continue } pkAny := si.PublicKey.GetCachedValue() pk, ok := pkAny.(cryptotypes.PubKey) if ok { pks[i] = pk } else { return nil, sdkerrors.Wrapf(sdkerrors.ErrLogic, "Expecting PubKey, got: %T", pkAny) } } return pks, nil } func (w *wrapper) GetGas() uint64 { return w.tx.AuthInfo.Fee.GasLimit } func (w *wrapper) GetFee() sdk.Coins { return w.tx.AuthInfo.Fee.Amount } func (w *wrapper) FeePayer() sdk.AccAddress { feePayer := w.tx.AuthInfo.Fee.Payer if feePayer != "" { return sdk.MustAccAddressFromBech32(feePayer) } // use first signer as default if no payer specified return w.GetSigners()[0] } func (w *wrapper) FeeGranter() sdk.AccAddress { feePayer := w.tx.AuthInfo.Fee.Granter if feePayer != "" { return sdk.MustAccAddressFromBech32(feePayer) } return nil } func (w *wrapper) GetTip() *tx.Tip { return w.tx.AuthInfo.Tip } func (w *wrapper) GetMemo() string { return w.tx.Body.Memo } // GetTimeoutHeight returns the transaction's timeout height (if set). 
func (w *wrapper) GetTimeoutHeight() uint64 { return w.tx.Body.TimeoutHeight } func (w *wrapper) GetSignaturesV2() ([]signing.SignatureV2, error) { signerInfos := w.tx.AuthInfo.SignerInfos sigs := w.tx.Signatures pubKeys, err := w.GetPubKeys() if err != nil { return nil, err } n := len(signerInfos) res := make([]signing.SignatureV2, n) for i, si := range signerInfos { // handle nil signatures (in case of simulation) if si.ModeInfo == nil { res[i] = signing.SignatureV2{ PubKey: pubKeys[i], } } else { var err error sigData, err := ModeInfoAndSigToSignatureData(si.ModeInfo, sigs[i]) if err != nil { return nil, err } // sequence number is functionally a transaction nonce and referred to as such in the SDK nonce := si.GetSequence() res[i] = signing.SignatureV2{ PubKey: pubKeys[i], Data: sigData, Sequence: nonce, } } } return res, nil } func (w *wrapper) SetMsgs(msgs ...sdk.Msg) error { anys, err := tx.SetMsgs(msgs) if err != nil { return err } w.tx.Body.Messages = anys // set bodyBz to nil because the cached bodyBz no longer matches tx.Body w.bodyBz = nil return nil } // SetTimeoutHeight sets the transaction's height timeout. 
func (w *wrapper) SetTimeoutHeight(height uint64) { w.tx.Body.TimeoutHeight = height // set bodyBz to nil because the cached bodyBz no longer matches tx.Body w.bodyBz = nil } func (w *wrapper) SetMemo(memo string) { w.tx.Body.Memo = memo // set bodyBz to nil because the cached bodyBz no longer matches tx.Body w.bodyBz = nil } func (w *wrapper) SetGasLimit(limit uint64) { if w.tx.AuthInfo.Fee == nil { w.tx.AuthInfo.Fee = &tx.Fee{ } } w.tx.AuthInfo.Fee.GasLimit = limit // set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo w.authInfoBz = nil } func (w *wrapper) SetFeeAmount(coins sdk.Coins) { if w.tx.AuthInfo.Fee == nil { w.tx.AuthInfo.Fee = &tx.Fee{ } } w.tx.AuthInfo.Fee.Amount = coins // set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo w.authInfoBz = nil } func (w *wrapper) SetTip(tip *tx.Tip) { w.tx.AuthInfo.Tip = tip // set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo w.authInfoBz = nil } func (w *wrapper) SetFeePayer(feePayer sdk.AccAddress) { if w.tx.AuthInfo.Fee == nil { w.tx.AuthInfo.Fee = &tx.Fee{ } } w.tx.AuthInfo.Fee.Payer = feePayer.String() // set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo w.authInfoBz = nil } func (w *wrapper) SetFeeGranter(feeGranter sdk.AccAddress) { if w.tx.AuthInfo.Fee == nil { w.tx.AuthInfo.Fee = &tx.Fee{ } } w.tx.AuthInfo.Fee.Granter = feeGranter.String() // set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo w.authInfoBz = nil } func (w *wrapper) SetSignatures(signatures ...signing.SignatureV2) error { n := len(signatures) signerInfos := make([]*tx.SignerInfo, n) rawSigs := make([][]byte, n) for i, sig := range signatures { var modeInfo *tx.ModeInfo modeInfo, rawSigs[i] = SignatureDataToModeInfoAndSig(sig.Data) any, err := codectypes.NewAnyWithValue(sig.PubKey) if err != nil { return err } signerInfos[i] = &tx.SignerInfo{ PublicKey: any, ModeInfo: modeInfo, Sequence: 
sig.Sequence, } } w.setSignerInfos(signerInfos) w.setSignatures(rawSigs) return nil } func (w *wrapper) setSignerInfos(infos []*tx.SignerInfo) { w.tx.AuthInfo.SignerInfos = infos // set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo w.authInfoBz = nil } func (w *wrapper) setSignerInfoAtIndex(index int, info *tx.SignerInfo) { if w.tx.AuthInfo.SignerInfos == nil { w.tx.AuthInfo.SignerInfos = make([]*tx.SignerInfo, len(w.GetSigners())) } w.tx.AuthInfo.SignerInfos[index] = info // set authInfoBz to nil because the cached authInfoBz no longer matches tx.AuthInfo w.authInfoBz = nil } func (w *wrapper) setSignatures(sigs [][]byte) { w.tx.Signatures = sigs } func (w *wrapper) setSignatureAtIndex(index int, sig []byte) { if w.tx.Signatures == nil { w.tx.Signatures = make([][]byte, len(w.GetSigners())) } w.tx.Signatures[index] = sig } func (w *wrapper) GetTx() authsigning.Tx { return w } func (w *wrapper) GetProtoTx() *tx.Tx { return w.tx } // Deprecated: AsAny extracts proto Tx and wraps it into Any. // NOTE: You should probably use `GetProtoTx` if you want to serialize the transaction. func (w *wrapper) AsAny() *codectypes.Any { return codectypes.UnsafePackAny(w.tx) } // WrapTx creates a TxBuilder wrapper around a tx.Tx proto message. 
func WrapTx(protoTx *tx.Tx) client.TxBuilder { return &wrapper{ tx: protoTx, } } func (w *wrapper) GetExtensionOptions() []*codectypes.Any { return w.tx.Body.ExtensionOptions } func (w *wrapper) GetNonCriticalExtensionOptions() []*codectypes.Any { return w.tx.Body.NonCriticalExtensionOptions } func (w *wrapper) SetExtensionOptions(extOpts ...*codectypes.Any) { w.tx.Body.ExtensionOptions = extOpts w.bodyBz = nil } func (w *wrapper) SetNonCriticalExtensionOptions(extOpts ...*codectypes.Any) { w.tx.Body.NonCriticalExtensionOptions = extOpts w.bodyBz = nil } func (w *wrapper) AddAuxSignerData(data tx.AuxSignerData) error { err := data.ValidateBasic() if err != nil { return err } w.bodyBz = data.SignDoc.BodyBytes var body tx.TxBody err = w.cdc.Unmarshal(w.bodyBz, &body) if err != nil { return err } if w.tx.Body.Memo != "" && w.tx.Body.Memo != body.Memo { return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has memo %s, got %s in AuxSignerData", w.tx.Body.Memo, body.Memo) } if w.tx.Body.TimeoutHeight != 0 && w.tx.Body.TimeoutHeight != body.TimeoutHeight { return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has timeout height %d, got %d in AuxSignerData", w.tx.Body.TimeoutHeight, body.TimeoutHeight) } if len(w.tx.Body.ExtensionOptions) != 0 { if len(w.tx.Body.ExtensionOptions) != len(body.ExtensionOptions) { return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has %d extension options, got %d in AuxSignerData", len(w.tx.Body.ExtensionOptions), len(body.ExtensionOptions)) } for i, o := range w.tx.Body.ExtensionOptions { if !o.Equal(body.ExtensionOptions[i]) { return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has extension option %+v at index %d, got %+v in AuxSignerData", o, i, body.ExtensionOptions[i]) } } } if len(w.tx.Body.NonCriticalExtensionOptions) != 0 { if len(w.tx.Body.NonCriticalExtensionOptions) != len(body.NonCriticalExtensionOptions) { return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has %d non-critical extension options, got %d in AuxSignerData", 
len(w.tx.Body.NonCriticalExtensionOptions), len(body.NonCriticalExtensionOptions)) } for i, o := range w.tx.Body.NonCriticalExtensionOptions { if !o.Equal(body.NonCriticalExtensionOptions[i]) { return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has non-critical extension option %+v at index %d, got %+v in AuxSignerData", o, i, body.NonCriticalExtensionOptions[i]) } } } if len(w.tx.Body.Messages) != 0 { if len(w.tx.Body.Messages) != len(body.Messages) { return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has %d Msgs, got %d in AuxSignerData", len(w.tx.Body.Messages), len(body.Messages)) } for i, o := range w.tx.Body.Messages { if !o.Equal(body.Messages[i]) { return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has Msg %+v at index %d, got %+v in AuxSignerData", o, i, body.Messages[i]) } } } if w.tx.AuthInfo.Tip != nil && data.SignDoc.Tip != nil { if !w.tx.AuthInfo.Tip.Amount.IsEqual(data.SignDoc.Tip.Amount) { return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has tip %+v, got %+v in AuxSignerData", w.tx.AuthInfo.Tip.Amount, data.SignDoc.Tip.Amount) } if w.tx.AuthInfo.Tip.Tipper != data.SignDoc.Tip.Tipper { return sdkerrors.ErrInvalidRequest.Wrapf("TxBuilder has tipper %s, got %s in AuxSignerData", w.tx.AuthInfo.Tip.Tipper, data.SignDoc.Tip.Tipper) } } w.SetMemo(body.Memo) w.SetTimeoutHeight(body.TimeoutHeight) w.SetExtensionOptions(body.ExtensionOptions...) w.SetNonCriticalExtensionOptions(body.NonCriticalExtensionOptions...) msgs := make([]sdk.Msg, len(body.Messages)) for i, msgAny := range body.Messages { msgs[i] = msgAny.GetCachedValue().(sdk.Msg) } w.SetMsgs(msgs...) w.SetTip(data.GetSignDoc().GetTip()) // Get the aux signer's index in GetSigners. 
	signerIndex := -1
	for i, signer := range w.GetSigners() {
		if signer.String() == data.Address {
			signerIndex = i
		}
	}
	if signerIndex < 0 {
		return sdkerrors.ErrLogic.Wrapf("address %s is not a signer", data.Address)
	}

	w.setSignerInfoAtIndex(signerIndex, &tx.SignerInfo{
		PublicKey: data.SignDoc.PublicKey,
		ModeInfo:  &tx.ModeInfo{Sum: &tx.ModeInfo_Single_{Single: &tx.ModeInfo_Single{Mode: data.Mode}}},
		Sequence:  data.SignDoc.Sequence,
	})
	w.setSignatureAtIndex(signerIndex, data.Sig)

	return nil
}
```

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/tx/v1beta1/tx.proto#L203-L224
```

Example cmd:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
./simd tx gov submit-proposal --title="Test Proposal" --description="My awesome proposal" --type="Text" --from validator-key --fee-granter=cosmos1xh44hxt7spr67hqaa7nyx5gnutrz5fraw6grxn --chain-id=testnet --fees="10stake"
```

### Granted Fee Deductions

Fees are deducted from grants in the `x/auth` ante handler. To learn more about how ante handlers work, read the [Auth Module AnteHandlers Guide](/sdk/latest/modules/auth/auth#antehandlers).

### Gas

To prevent DoS attacks, using a filtered `x/feegrant` allowance incurs gas. The SDK must ensure that all of the `grantee`'s transactions conform to the filter set by the `granter`. It does this by iterating over the allowed messages in the filter and charging 10 gas per filtered message. The SDK then iterates over the messages being sent by the `grantee` to ensure they adhere to the filter, also charging 10 gas per message. The SDK stops iterating and fails the transaction if it finds a message that does not conform to the filter.

**WARNING**: The gas is charged against the granted allowance. Ensure your messages conform to the filter, if any, before sending transactions using your allowance.
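The filtered-allowance gas accounting described above can be sketched as follows. This is a simplified, self-contained illustration, not the SDK's actual implementation: the function name `checkFilteredMessages`, the `gasPerFilteredMsg` constant name, and the use of plain type-URL strings in place of `sdk.Msg` values are all assumptions made for clarity.

```go
package main

import (
	"errors"
	"fmt"
)

// gasPerFilteredMsg mirrors the flat 10-gas charge described above.
// The constant name is illustrative, not taken from the SDK.
const gasPerFilteredMsg = 10

// checkFilteredMessages sketches how a filtered allowance is enforced:
// gas is consumed for every message type in the filter and for every
// message in the tx, and the check fails on the first message whose
// type is not covered by the filter.
func checkFilteredMessages(allowed, txMsgs []string) (gasUsed uint64, err error) {
	allowedSet := make(map[string]bool, len(allowed))
	for _, m := range allowed {
		gasUsed += gasPerFilteredMsg // charged per message type in the filter
		allowedSet[m] = true
	}
	for _, m := range txMsgs {
		gasUsed += gasPerFilteredMsg // charged per message in the tx
		if !allowedSet[m] {
			return gasUsed, errors.New("message " + m + " not allowed by the fee grant filter")
		}
	}
	return gasUsed, nil
}

func main() {
	gas, err := checkFilteredMessages(
		[]string{"/cosmos.bank.v1beta1.MsgSend"},
		[]string{"/cosmos.bank.v1beta1.MsgSend"},
	)
	fmt.Println(gas, err)
}
```

Note that the gas consumed here is charged against the grantee's allowance, which is why the warning above recommends checking the filter before broadcasting.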
### Pruning

A queue keyed by grant expiration time is maintained in state. On every `EndBlock`, the queue is checked against the current block time and expired grants are pruned.

## State

### FeeAllowance

Fee allowances are identified by combining the `Grantee` (the account address of the fee allowance grantee) with the `Granter` (the account address of the fee allowance granter).

Fee allowance grants are stored in the state as follows:

* Grant: `0x00 | grantee_addr_len (1 byte) | grantee_addr_bytes | granter_addr_len (1 byte) | granter_addr_bytes -> ProtocolBuffer(Grant)`

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: cosmos/feegrant/v1beta1/feegrant.proto

package feegrant

import (
	fmt "fmt"

	_ "github.com/cosmos/cosmos-proto"
	types1 "github.com/cosmos/cosmos-sdk/codec/types"
	github_com_cosmos_cosmos_sdk_types "github.com/cosmos/cosmos-sdk/types"
	types "github.com/cosmos/cosmos-sdk/types"
	_ "github.com/cosmos/cosmos-sdk/types/tx/amino"
	_ "github.com/cosmos/gogoproto/gogoproto"
	proto "github.com/cosmos/gogoproto/proto"
	github_com_cosmos_gogoproto_types "github.com/cosmos/gogoproto/types"
	_ "google.golang.org/protobuf/types/known/durationpb"
	_ "google.golang.org/protobuf/types/known/timestamppb"

	io "io"
	math "math"
	math_bits "math/bits"
	time "time"
)

// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
var _ = time.Kitchen

// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package

// BasicAllowance implements Allowance with a one-time grant of coins
// that optionally expires. The grantee can use up to SpendLimit to cover fees.
type BasicAllowance struct { // spend_limit specifies the maximum amount of coins that can be spent // by this allowance and will be updated as coins are spent. If it is // empty, there is no spend limit and any amount of coins can be spent. SpendLimit github_com_cosmos_cosmos_sdk_types.Coins `protobuf:"bytes,1,rep,name=spend_limit,json=spendLimit,proto3,castrepeated=github.com/cosmos/cosmos-sdk/types.Coins" json:"spend_limit"` // expiration specifies an optional time when this allowance expires Expiration *time.Time `protobuf:"bytes,2,opt,name=expiration,proto3,stdtime" json:"expiration,omitempty"` } func (m *BasicAllowance) Reset() { *m = BasicAllowance{ } } func (m *BasicAllowance) String() string { return proto.CompactTextString(m) } func (*BasicAllowance) ProtoMessage() { } func (*BasicAllowance) Descriptor() ([]byte, []int) { return fileDescriptor_7279582900c30aea, []int{0 } } func (m *BasicAllowance) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) } func (m *BasicAllowance) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { if deterministic { return xxx_messageInfo_BasicAllowance.Marshal(b, m, deterministic) } else { b = b[:cap(b)] n, err := m.MarshalToSizedBuffer(b) if err != nil { return nil, err } return b[:n], nil } } func (m *BasicAllowance) XXX_Merge(src proto.Message) { xxx_messageInfo_BasicAllowance.Merge(m, src) } func (m *BasicAllowance) XXX_Size() int { return m.Size() } func (m *BasicAllowance) XXX_DiscardUnknown() { xxx_messageInfo_BasicAllowance.DiscardUnknown(m) } var xxx_messageInfo_BasicAllowance proto.InternalMessageInfo func (m *BasicAllowance) GetSpendLimit() github_com_cosmos_cosmos_sdk_types.Coins { if m != nil { return m.SpendLimit } return nil } func (m *BasicAllowance) GetExpiration() *time.Time { if m != nil { return m.Expiration } return nil } // PeriodicAllowance extends Allowance to allow for both a maximum cap, // as well as a limit per time period. 
type PeriodicAllowance struct { // basic specifies a struct of `BasicAllowance` Basic BasicAllowance `protobuf:"bytes,1,opt,name=basic,proto3" json:"basic"` // period specifies the time duration in which period_spend_limit coins can // be spent before that allowance is reset Period time.Duration `protobuf:"bytes,2,opt,name=period,proto3,stdduration" json:"period"` // period_spend_limit specifies the maximum number of coins that can be spent // in the period PeriodSpendLimit github_com_cosmos_cosmos_sdk_types.Coins `protobuf:"bytes,3,rep,name=period_spend_limit,json=periodSpendLimit,proto3,castrepeated=github.com/cosmos/cosmos-sdk/types.Coins" json:"period_spend_limit"` // period_can_spend is the number of coins left to be spent before the period_reset time PeriodCanSpend github_com_cosmos_cosmos_sdk_types.Coins `protobuf:"bytes,4,rep,name=period_can_spend,json=periodCanSpend,proto3,castrepeated=github.com/cosmos/cosmos-sdk/types.Coins" json:"period_can_spend"` // period_reset is the time at which this period resets and a new one begins, // it is calculated from the start time of the first transaction after the // last period ended PeriodReset time.Time `protobuf:"bytes,5,opt,name=period_reset,json=periodReset,proto3,stdtime" json:"period_reset"` } func (m *PeriodicAllowance) Reset() { *m = PeriodicAllowance{ } } func (m *PeriodicAllowance) String() string { return proto.CompactTextString(m) } func (*PeriodicAllowance) ProtoMessage() { } func (*PeriodicAllowance) Descriptor() ([]byte, []int) { return fileDescriptor_7279582900c30aea, []int{1 } } func (m *PeriodicAllowance) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) } func (m *PeriodicAllowance) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { if deterministic { return xxx_messageInfo_PeriodicAllowance.Marshal(b, m, deterministic) } else { b = b[:cap(b)] n, err := m.MarshalToSizedBuffer(b) if err != nil { return nil, err } return b[:n], nil } } func (m *PeriodicAllowance) XXX_Merge(src 
proto.Message) { xxx_messageInfo_PeriodicAllowance.Merge(m, src) } func (m *PeriodicAllowance) XXX_Size() int { return m.Size() } func (m *PeriodicAllowance) XXX_DiscardUnknown() { xxx_messageInfo_PeriodicAllowance.DiscardUnknown(m) } var xxx_messageInfo_PeriodicAllowance proto.InternalMessageInfo func (m *PeriodicAllowance) GetBasic() BasicAllowance { if m != nil { return m.Basic } return BasicAllowance{ } } func (m *PeriodicAllowance) GetPeriod() time.Duration { if m != nil { return m.Period } return 0 } func (m *PeriodicAllowance) GetPeriodSpendLimit() github_com_cosmos_cosmos_sdk_types.Coins { if m != nil { return m.PeriodSpendLimit } return nil } func (m *PeriodicAllowance) GetPeriodCanSpend() github_com_cosmos_cosmos_sdk_types.Coins { if m != nil { return m.PeriodCanSpend } return nil } func (m *PeriodicAllowance) GetPeriodReset() time.Time { if m != nil { return m.PeriodReset } return time.Time{ } } // AllowedMsgAllowance creates allowance only for specified message types. type AllowedMsgAllowance struct { // allowance can be any of basic and periodic fee allowance. Allowance *types1.Any `protobuf:"bytes,1,opt,name=allowance,proto3" json:"allowance,omitempty"` // allowed_messages are the messages for which the grantee has the access. 
AllowedMessages []string `protobuf:"bytes,2,rep,name=allowed_messages,json=allowedMessages,proto3" json:"allowed_messages,omitempty"` } func (m *AllowedMsgAllowance) Reset() { *m = AllowedMsgAllowance{ } } func (m *AllowedMsgAllowance) String() string { return proto.CompactTextString(m) } func (*AllowedMsgAllowance) ProtoMessage() { } func (*AllowedMsgAllowance) Descriptor() ([]byte, []int) { return fileDescriptor_7279582900c30aea, []int{2 } } func (m *AllowedMsgAllowance) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) } func (m *AllowedMsgAllowance) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { if deterministic { return xxx_messageInfo_AllowedMsgAllowance.Marshal(b, m, deterministic) } else { b = b[:cap(b)] n, err := m.MarshalToSizedBuffer(b) if err != nil { return nil, err } return b[:n], nil } } func (m *AllowedMsgAllowance) XXX_Merge(src proto.Message) { xxx_messageInfo_AllowedMsgAllowance.Merge(m, src) } func (m *AllowedMsgAllowance) XXX_Size() int { return m.Size() } func (m *AllowedMsgAllowance) XXX_DiscardUnknown() { xxx_messageInfo_AllowedMsgAllowance.DiscardUnknown(m) } var xxx_messageInfo_AllowedMsgAllowance proto.InternalMessageInfo // Grant is stored in the KVStore to record a grant with full context type Grant struct { // granter is the address of the user granting an allowance of their funds. Granter string `protobuf:"bytes,1,opt,name=granter,proto3" json:"granter,omitempty"` // grantee is the address of the user being granted an allowance of another user's funds. Grantee string `protobuf:"bytes,2,opt,name=grantee,proto3" json:"grantee,omitempty"` // allowance can be any of basic, periodic, allowed fee allowance. 
Allowance *types1.Any `protobuf:"bytes,3,opt,name=allowance,proto3" json:"allowance,omitempty"` } func (m *Grant) Reset() { *m = Grant{ } } func (m *Grant) String() string { return proto.CompactTextString(m) } func (*Grant) ProtoMessage() { } func (*Grant) Descriptor() ([]byte, []int) { return fileDescriptor_7279582900c30aea, []int{3 } } func (m *Grant) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) } func (m *Grant) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { if deterministic { return xxx_messageInfo_Grant.Marshal(b, m, deterministic) } else { b = b[:cap(b)] n, err := m.MarshalToSizedBuffer(b) if err != nil { return nil, err } return b[:n], nil } } func (m *Grant) XXX_Merge(src proto.Message) { xxx_messageInfo_Grant.Merge(m, src) } func (m *Grant) XXX_Size() int { return m.Size() } func (m *Grant) XXX_DiscardUnknown() { xxx_messageInfo_Grant.DiscardUnknown(m) } var xxx_messageInfo_Grant proto.InternalMessageInfo func (m *Grant) GetGranter() string { if m != nil { return m.Granter } return "" } func (m *Grant) GetGrantee() string { if m != nil { return m.Grantee } return "" } func (m *Grant) GetAllowance() *types1.Any { if m != nil { return m.Allowance } return nil } func init() { proto.RegisterType((*BasicAllowance)(nil), "cosmos.feegrant.v1beta1.BasicAllowance") proto.RegisterType((*PeriodicAllowance)(nil), "cosmos.feegrant.v1beta1.PeriodicAllowance") proto.RegisterType((*AllowedMsgAllowance)(nil), "cosmos.feegrant.v1beta1.AllowedMsgAllowance") proto.RegisterType((*Grant)(nil), "cosmos.feegrant.v1beta1.Grant") } func init() { proto.RegisterFile("cosmos/feegrant/v1beta1/feegrant.proto", fileDescriptor_7279582900c30aea) } var fileDescriptor_7279582900c30aea = []byte{ // 639 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xb4, 0x55, 0x3f, 0x6f, 0xd3, 0x40, 0x14, 0x8f, 0x9b, 0xb6, 0x28, 0x17, 0x28, 0xad, 0xa9, 0x84, 0x53, 0x21, 0xbb, 0x8a, 0x04, 0x4d, 0x2b, 0xd5, 0x56, 0x8b, 0x58, 0x3a, 0x35, 
0x2e, 0xa2, 0x80, 0x5a, 0xa9, 0x72, 0x99, 0x90, 0x50, 0x74, 0xb6, 0xaf, 0xe6, 0x44, 0xec, 0x33, 0x3e, 0x17, 0x1a, 0x06, 0x66, 0xc4, 0x80, 0x32, 0x32, 0x32, 0x22, 0xa6, 0x0e, 0xe5, 0x3b, 0x54, 0x0c, 0xa8, 0x62, 0x62, 0x22, 0x28, 0x19, 0x3a, 0xf3, 0x0d, 0x90, 0xef, 0xce, 0x8e, 0x9b, 0x50, 0x68, 0x25, 0xba, 0x24, 0x77, 0xef, 0xde, 0xfb, 0xfd, 0x79, 0xef, 0x45, 0x01, 0xb7, 0x1c, 0x42, 0x7d, 0x42, 0x8d, 0x1d, 0x84, 0xbc, 0x08, 0x06, 0xb1, 0xf1, 0x62, 0xc9, 0x46, 0x31, 0x5c, 0xca, 0x02, 0x7a, 0x18, 0x91, 0x98, 0xc8, 0xd7, 0x79, 0x9e, 0x9e, 0x85, 0x45, 0xde, 0xcc, 0xb4, 0x47, 0x3c, 0xc2, 0x72, 0x8c, 0xe4, 0xc4, 0xd3, 0x67, 0x2a, 0x1e, 0x21, 0x5e, 0x13, 0x19, 0xec, 0x66, 0xef, 0xee, 0x18, 0x30, 0x68, 0xa5, 0x4f, 0x1c, 0xa9, 0xc1, 0x6b, 0x04, 0x2c, 0x7f, 0x52, 0x85, 0x18, 0x1b, 0x52, 0x94, 0x09, 0x71, 0x08, 0x0e, 0xc4, 0xfb, 0x14, 0xf4, 0x71, 0x40, 0x0c, 0xf6, 0x29, 0x42, 0xda, 0x20, 0x51, 0x8c, 0x7d, 0x44, 0x63, 0xe8, 0x87, 0x29, 0xe6, 0x60, 0x82, 0xbb, 0x1b, 0xc1, 0x18, 0x13, 0x81, 0x59, 0x7d, 0x37, 0x02, 0x26, 0x4c, 0x48, 0xb1, 0x53, 0x6f, 0x36, 0xc9, 0x4b, 0x18, 0x38, 0x48, 0x7e, 0x0e, 0xca, 0x34, 0x44, 0x81, 0xdb, 0x68, 0x62, 0x1f, 0xc7, 0x8a, 0x34, 0x5b, 0xac, 0x95, 0x97, 0x2b, 0xba, 0x90, 0x9a, 0x88, 0x4b, 0xdd, 0xeb, 0x6b, 0x04, 0x07, 0xe6, 0x9d, 0xc3, 0x1f, 0x5a, 0xe1, 0x53, 0x47, 0xab, 0x79, 0x38, 0x7e, 0xba, 0x6b, 0xeb, 0x0e, 0xf1, 0x85, 0x2f, 0xf1, 0xb5, 0x48, 0xdd, 0x67, 0x46, 0xdc, 0x0a, 0x11, 0x65, 0x05, 0xf4, 0xe3, 0xf1, 0xfe, 0x82, 0x64, 0x01, 0x46, 0xb2, 0x91, 0x70, 0xc8, 0xab, 0x00, 0xa0, 0xbd, 0x10, 0x73, 0x65, 0xca, 0xc8, 0xac, 0x54, 0x2b, 0x2f, 0xcf, 0xe8, 0x5c, 0xba, 0x9e, 0x4a, 0xd7, 0x1f, 0xa5, 0xde, 0xcc, 0xd1, 0x76, 0x47, 0x93, 0xac, 0x5c, 0xcd, 0xca, 0xfa, 0x97, 0x83, 0xc5, 0x9b, 0xa7, 0x0c, 0x49, 0xbf, 0x87, 0x50, 0x66, 0xef, 0xc1, 0xdb, 0xe3, 0xfd, 0x85, 0x4a, 0x4e, 0xd8, 0x49, 0xf7, 0xd5, 0xcf, 0xa3, 0x60, 0x6a, 0x0b, 0x45, 0x98, 0xb8, 0xf9, 0x9e, 0xdc, 0x07, 0x63, 0x76, 0x92, 0xa7, 0x48, 0x4c, 0xdb, 0x9c, 0x7e, 0x1a, 0xd5, 0x49, 0x34, 0xb3, 
0x94, 0xf4, 0x86, 0xfb, 0xe5, 0x00, 0xf2, 0x2a, 0x18, 0x0f, 0x19, 0xbc, 0xb0, 0x59, 0x19, 0xb2, 0x79, 0x57, 0x4c, 0xc8, 0xbc, 0x92, 0x14, 0xbf, 0xef, 0x68, 0x12, 0x07, 0x10, 0x75, 0xf2, 0x6b, 0x20, 0xf3, 0x53, 0x23, 0x3f, 0xa6, 0xe2, 0x05, 0x8d, 0x69, 0x92, 0x73, 0x6d, 0xf7, 0x87, 0xf5, 0x0a, 0x88, 0x58, 0xc3, 0x81, 0x01, 0xd7, 0xa0, 0x8c, 0x5e, 0x10, 0xfb, 0x04, 0x67, 0x5a, 0x83, 0x01, 0x13, 0x20, 0x6f, 0x80, 0xcb, 0x82, 0x3b, 0x42, 0x14, 0xc5, 0xca, 0xd8, 0x3f, 0x57, 0x85, 0x35, 0xb1, 0x9d, 0x35, 0xb1, 0xcc, 0xcb, 0xad, 0xa4, 0x7a, 0xe5, 0xe1, 0xb9, 0x96, 0xe6, 0x46, 0x4e, 0xe8, 0xd0, 0x86, 0x54, 0x7f, 0x49, 0xe0, 0x1a, 0xbb, 0x21, 0x77, 0x93, 0x7a, 0xfd, 0xcd, 0x79, 0x02, 0x4a, 0x30, 0xbd, 0x88, 0xed, 0x99, 0x1e, 0x92, 0x5b, 0x0f, 0x5a, 0xe6, 0xfc, 0x99, 0xc5, 0x58, 0x7d, 0x44, 0x79, 0x1e, 0x4c, 0x42, 0xce, 0xda, 0xf0, 0x11, 0xa5, 0xd0, 0x43, 0x54, 0x19, 0x99, 0x2d, 0xd6, 0x4a, 0xd6, 0x55, 0x11, 0xdf, 0x14, 0xe1, 0x95, 0xad, 0x37, 0x1f, 0xb4, 0xc2, 0xb9, 0x1c, 0xab, 0x39, 0xc7, 0x7f, 0xf0, 0x56, 0xfd, 0x2a, 0x81, 0xb1, 0xf5, 0x04, 0x42, 0x5e, 0x06, 0x97, 0x18, 0x16, 0x8a, 0x98, 0xc7, 0x92, 0xa9, 0x7c, 0x3b, 0x58, 0x9c, 0x16, 0x44, 0x75, 0xd7, 0x8d, 0x10, 0xa5, 0xdb, 0x71, 0x84, 0x03, 0xcf, 0x4a, 0x13, 0xfb, 0x35, 0x88, 0xfd, 0x14, 0xce, 0x50, 0x33, 0xd0, 0xcd, 0xe2, 0xff, 0xee, 0xa6, 0x59, 0x3f, 0xec, 0xaa, 0xd2, 0x51, 0x57, 0x95, 0x7e, 0x76, 0x55, 0xa9, 0xdd, 0x53, 0x0b, 0x47, 0x3d, 0xb5, 0xf0, 0xbd, 0xa7, 0x16, 0x1e, 0xcf, 0xfd, 0x75, 0x6f, 0xf7, 0xb2, 0xff, 0x0b, 0x7b, 0x9c, 0xc9, 0xb8, 0xfd, 0x3b, 0x00, 0x00, 0xff, 0xff, 0xe4, 0x3d, 0x09, 0x1d, 0x5a, 0x06, 0x00, 0x00, } func (m *BasicAllowance) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) if err != nil { return nil, err } return dAtA[:n], nil } func (m *BasicAllowance) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } func (m *BasicAllowance) MarshalToSizedBuffer(dAtA []byte) 
(int, error) { i := len(dAtA) _ = i var l int _ = l if m.Expiration != nil { n1, err1 := github_com_cosmos_gogoproto_types.StdTimeMarshalTo(*m.Expiration, dAtA[i-github_com_cosmos_gogoproto_types.SizeOfStdTime(*m.Expiration):]) if err1 != nil { return 0, err1 } i -= n1 i = encodeVarintFeegrant(dAtA, i, uint64(n1)) i-- dAtA[i] = 0x12 } if len(m.SpendLimit) > 0 { for iNdEx := len(m.SpendLimit) - 1; iNdEx >= 0; iNdEx-- { { size, err := m.SpendLimit[iNdEx].MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } i -= size i = encodeVarintFeegrant(dAtA, i, uint64(size)) } i-- dAtA[i] = 0xa } } return len(dAtA) - i, nil } func (m *PeriodicAllowance) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) if err != nil { return nil, err } return dAtA[:n], nil } func (m *PeriodicAllowance) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } func (m *PeriodicAllowance) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int _ = l n2, err2 := github_com_cosmos_gogoproto_types.StdTimeMarshalTo(m.PeriodReset, dAtA[i-github_com_cosmos_gogoproto_types.SizeOfStdTime(m.PeriodReset):]) if err2 != nil { return 0, err2 } i -= n2 i = encodeVarintFeegrant(dAtA, i, uint64(n2)) i-- dAtA[i] = 0x2a if len(m.PeriodCanSpend) > 0 { for iNdEx := len(m.PeriodCanSpend) - 1; iNdEx >= 0; iNdEx-- { { size, err := m.PeriodCanSpend[iNdEx].MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } i -= size i = encodeVarintFeegrant(dAtA, i, uint64(size)) } i-- dAtA[i] = 0x22 } } if len(m.PeriodSpendLimit) > 0 { for iNdEx := len(m.PeriodSpendLimit) - 1; iNdEx >= 0; iNdEx-- { { size, err := m.PeriodSpendLimit[iNdEx].MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } i -= size i = encodeVarintFeegrant(dAtA, i, uint64(size)) } i-- dAtA[i] = 0x1a } } n3, err3 := github_com_cosmos_gogoproto_types.StdDurationMarshalTo(m.Period, 
dAtA[i-github_com_cosmos_gogoproto_types.SizeOfStdDuration(m.Period):]) if err3 != nil { return 0, err3 } i -= n3 i = encodeVarintFeegrant(dAtA, i, uint64(n3)) i-- dAtA[i] = 0x12 { size, err := m.Basic.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } i -= size i = encodeVarintFeegrant(dAtA, i, uint64(size)) } i-- dAtA[i] = 0xa return len(dAtA) - i, nil } func (m *AllowedMsgAllowance) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) if err != nil { return nil, err } return dAtA[:n], nil } func (m *AllowedMsgAllowance) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } func (m *AllowedMsgAllowance) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int _ = l if len(m.AllowedMessages) > 0 { for iNdEx := len(m.AllowedMessages) - 1; iNdEx >= 0; iNdEx-- { i -= len(m.AllowedMessages[iNdEx]) copy(dAtA[i:], m.AllowedMessages[iNdEx]) i = encodeVarintFeegrant(dAtA, i, uint64(len(m.AllowedMessages[iNdEx]))) i-- dAtA[i] = 0x12 } } if m.Allowance != nil { { size, err := m.Allowance.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } i -= size i = encodeVarintFeegrant(dAtA, i, uint64(size)) } i-- dAtA[i] = 0xa } return len(dAtA) - i, nil } func (m *Grant) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) if err != nil { return nil, err } return dAtA[:n], nil } func (m *Grant) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } func (m *Grant) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int _ = l if m.Allowance != nil { { size, err := m.Allowance.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } i -= size i = encodeVarintFeegrant(dAtA, i, uint64(size)) } i-- dAtA[i] = 0x1a } if len(m.Grantee) > 0 { i -= len(m.Grantee) copy(dAtA[i:], 
m.Grantee) i = encodeVarintFeegrant(dAtA, i, uint64(len(m.Grantee))) i-- dAtA[i] = 0x12 } if len(m.Granter) > 0 { i -= len(m.Granter) copy(dAtA[i:], m.Granter) i = encodeVarintFeegrant(dAtA, i, uint64(len(m.Granter))) i-- dAtA[i] = 0xa } return len(dAtA) - i, nil } func encodeVarintFeegrant(dAtA []byte, offset int, v uint64) int { offset -= sovFeegrant(v) base := offset for v >= 1<<7 { dAtA[offset] = uint8(v&0x7f | 0x80) v >>= 7 offset++ } dAtA[offset] = uint8(v) return base } func (m *BasicAllowance) Size() (n int) { if m == nil { return 0 } var l int _ = l if len(m.SpendLimit) > 0 { for _, e := range m.SpendLimit { l = e.Size() n += 1 + l + sovFeegrant(uint64(l)) } } if m.Expiration != nil { l = github_com_cosmos_gogoproto_types.SizeOfStdTime(*m.Expiration) n += 1 + l + sovFeegrant(uint64(l)) } return n } func (m *PeriodicAllowance) Size() (n int) { if m == nil { return 0 } var l int _ = l l = m.Basic.Size() n += 1 + l + sovFeegrant(uint64(l)) l = github_com_cosmos_gogoproto_types.SizeOfStdDuration(m.Period) n += 1 + l + sovFeegrant(uint64(l)) if len(m.PeriodSpendLimit) > 0 { for _, e := range m.PeriodSpendLimit { l = e.Size() n += 1 + l + sovFeegrant(uint64(l)) } } if len(m.PeriodCanSpend) > 0 { for _, e := range m.PeriodCanSpend { l = e.Size() n += 1 + l + sovFeegrant(uint64(l)) } } l = github_com_cosmos_gogoproto_types.SizeOfStdTime(m.PeriodReset) n += 1 + l + sovFeegrant(uint64(l)) return n } func (m *AllowedMsgAllowance) Size() (n int) { if m == nil { return 0 } var l int _ = l if m.Allowance != nil { l = m.Allowance.Size() n += 1 + l + sovFeegrant(uint64(l)) } if len(m.AllowedMessages) > 0 { for _, s := range m.AllowedMessages { l = len(s) n += 1 + l + sovFeegrant(uint64(l)) } } return n } func (m *Grant) Size() (n int) { if m == nil { return 0 } var l int _ = l l = len(m.Granter) if l > 0 { n += 1 + l + sovFeegrant(uint64(l)) } l = len(m.Grantee) if l > 0 { n += 1 + l + sovFeegrant(uint64(l)) } if m.Allowance != nil { l = m.Allowance.Size() n += 1 + l + 
sovFeegrant(uint64(l)) } return n } func sovFeegrant(x uint64) (n int) { return (math_bits.Len64(x|1) + 6) / 7 } func sozFeegrant(x uint64) (n int) { return sovFeegrant(uint64((x << 1) ^ uint64((int64(x) >> 63)))) } func (m *BasicAllowance) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { preIndex := iNdEx var wire uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowFeegrant } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ wire |= uint64(b&0x7F) << shift if b < 0x80 { break } } fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { return fmt.Errorf("proto: BasicAllowance: wiretype end group for non-group") } if fieldNum <= 0 { return fmt.Errorf("proto: BasicAllowance: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field SpendLimit", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowFeegrant } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthFeegrant } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthFeegrant } if postIndex > l { return io.ErrUnexpectedEOF } m.SpendLimit = append(m.SpendLimit, types.Coin{ }) if err := m.SpendLimit[len(m.SpendLimit)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 2: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Expiration", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowFeegrant } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthFeegrant } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthFeegrant } if 
postIndex > l { return io.ErrUnexpectedEOF } if m.Expiration == nil { m.Expiration = new(time.Time) } if err := github_com_cosmos_gogoproto_types.StdTimeUnmarshal(m.Expiration, dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipFeegrant(dAtA[iNdEx:]) if err != nil { return err } if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthFeegrant } if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } iNdEx += skippy } } if iNdEx > l { return io.ErrUnexpectedEOF } return nil } func (m *PeriodicAllowance) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { preIndex := iNdEx var wire uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowFeegrant } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ wire |= uint64(b&0x7F) << shift if b < 0x80 { break } } fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { return fmt.Errorf("proto: PeriodicAllowance: wiretype end group for non-group") } if fieldNum <= 0 { return fmt.Errorf("proto: PeriodicAllowance: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Basic", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowFeegrant } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthFeegrant } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthFeegrant } if postIndex > l { return io.ErrUnexpectedEOF } if err := m.Basic.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 2: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Period", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowFeegrant } if 
iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthFeegrant } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthFeegrant } if postIndex > l { return io.ErrUnexpectedEOF } if err := github_com_cosmos_gogoproto_types.StdDurationUnmarshal(&m.Period, dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field PeriodSpendLimit", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowFeegrant } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthFeegrant } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthFeegrant } if postIndex > l { return io.ErrUnexpectedEOF } m.PeriodSpendLimit = append(m.PeriodSpendLimit, types.Coin{ }) if err := m.PeriodSpendLimit[len(m.PeriodSpendLimit)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field PeriodCanSpend", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowFeegrant } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthFeegrant } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthFeegrant } if postIndex > l { return io.ErrUnexpectedEOF } m.PeriodCanSpend = append(m.PeriodCanSpend, types.Coin{ }) if err := m.PeriodCanSpend[len(m.PeriodCanSpend)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 5: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field PeriodReset", wireType) } var msglen int for 
shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowFeegrant } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthFeegrant } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthFeegrant } if postIndex > l { return io.ErrUnexpectedEOF } if err := github_com_cosmos_gogoproto_types.StdTimeUnmarshal(&m.PeriodReset, dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipFeegrant(dAtA[iNdEx:]) if err != nil { return err } if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthFeegrant } if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } iNdEx += skippy } } if iNdEx > l { return io.ErrUnexpectedEOF } return nil } func (m *AllowedMsgAllowance) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { preIndex := iNdEx var wire uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowFeegrant } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ wire |= uint64(b&0x7F) << shift if b < 0x80 { break } } fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { return fmt.Errorf("proto: AllowedMsgAllowance: wiretype end group for non-group") } if fieldNum <= 0 { return fmt.Errorf("proto: AllowedMsgAllowance: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Allowance", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowFeegrant } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthFeegrant } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthFeegrant } if postIndex > l { return io.ErrUnexpectedEOF } if 
m.Allowance == nil { m.Allowance = &types1.Any{ } } if err := m.Allowance.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 2: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field AllowedMessages", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowFeegrant } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } intStringLen := int(stringLen) if intStringLen < 0 { return ErrInvalidLengthFeegrant } postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthFeegrant } if postIndex > l { return io.ErrUnexpectedEOF } m.AllowedMessages = append(m.AllowedMessages, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipFeegrant(dAtA[iNdEx:]) if err != nil { return err } if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthFeegrant } if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } iNdEx += skippy } } if iNdEx > l { return io.ErrUnexpectedEOF } return nil } func (m *Grant) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { preIndex := iNdEx var wire uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowFeegrant } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ wire |= uint64(b&0x7F) << shift if b < 0x80 { break } } fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { return fmt.Errorf("proto: Grant: wiretype end group for non-group") } if fieldNum <= 0 { return fmt.Errorf("proto: Grant: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Granter", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowFeegrant } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] 
iNdEx++ stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } intStringLen := int(stringLen) if intStringLen < 0 { return ErrInvalidLengthFeegrant } postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthFeegrant } if postIndex > l { return io.ErrUnexpectedEOF } m.Granter = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Grantee", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowFeegrant } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } intStringLen := int(stringLen) if intStringLen < 0 { return ErrInvalidLengthFeegrant } postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthFeegrant } if postIndex > l { return io.ErrUnexpectedEOF } m.Grantee = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 3: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Allowance", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowFeegrant } if iNdEx >= l { return io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ msglen |= int(b&0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthFeegrant } postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthFeegrant } if postIndex > l { return io.ErrUnexpectedEOF } if m.Allowance == nil { m.Allowance = &types1.Any{ } } if err := m.Allowance.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipFeegrant(dAtA[iNdEx:]) if err != nil { return err } if (skippy < 0) || (iNdEx+skippy) < 0 { return ErrInvalidLengthFeegrant } if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } iNdEx += skippy } } if iNdEx > l { return io.ErrUnexpectedEOF } return nil } func skipFeegrant(dAtA []byte) (n int, err error) { l 
:= len(dAtA) iNdEx := 0 depth := 0 for iNdEx < l { var wire uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return 0, ErrIntOverflowFeegrant } if iNdEx >= l { return 0, io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ wire |= (uint64(b) & 0x7F) << shift if b < 0x80 { break } } wireType := int(wire & 0x7) switch wireType { case 0: for shift := uint(0); ; shift += 7 { if shift >= 64 { return 0, ErrIntOverflowFeegrant } if iNdEx >= l { return 0, io.ErrUnexpectedEOF } iNdEx++ if dAtA[iNdEx-1] < 0x80 { break } } case 1: iNdEx += 8 case 2: var length int for shift := uint(0); ; shift += 7 { if shift >= 64 { return 0, ErrIntOverflowFeegrant } if iNdEx >= l { return 0, io.ErrUnexpectedEOF } b := dAtA[iNdEx] iNdEx++ length |= (int(b) & 0x7F) << shift if b < 0x80 { break } } if length < 0 { return 0, ErrInvalidLengthFeegrant } iNdEx += length case 3: depth++ case 4: if depth == 0 { return 0, ErrUnexpectedEndOfGroupFeegrant } depth-- case 5: iNdEx += 4 default: return 0, fmt.Errorf("proto: illegal wireType %d", wireType) } if iNdEx < 0 { return 0, ErrInvalidLengthFeegrant } if depth == 0 { return iNdEx, nil } } return 0, io.ErrUnexpectedEOF } var ( ErrInvalidLengthFeegrant = fmt.Errorf("proto: negative length found during unmarshaling") ErrIntOverflowFeegrant = fmt.Errorf("proto: integer overflow") ErrUnexpectedEndOfGroupFeegrant = fmt.Errorf("proto: unexpected end of group") ) ``` ### FeeAllowanceQueue Fee allowance queue items are identified by combining the `FeeAllowancePrefixQueue` (i.e., `0x01`), the `expiration`, the `grantee` (the account address of the fee allowance grantee), and the `granter` (the account address of the fee allowance granter). The `EndBlocker` checks the `FeeAllowanceQueue` state for expired grants and prunes any it finds from `FeeAllowance`. 
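The queue key construction can be sketched with plain byte concatenation. This is an illustrative, stdlib-only sketch — the helper name `feeAllowanceQueueKey` and the string expiration encoding are assumptions for illustration; the SDK uses its own sortable time format and raw address bytes:

```go
package main

import (
	"bytes"
	"fmt"
)

// feeAllowanceQueueKey sketches the queue key layout:
// 0x01 | expiration_bytes | grantee_addr_len (1 byte) | grantee_addr_bytes |
// granter_addr_len (1 byte) | granter_addr_bytes
// The value stored under this key is empty; the key itself carries all the data.
func feeAllowanceQueueKey(expiration, grantee, granter []byte) []byte {
	key := []byte{0x01} // FeeAllowancePrefixQueue
	key = append(key, expiration...)
	key = append(key, byte(len(grantee)))
	key = append(key, grantee...)
	key = append(key, byte(len(granter)))
	key = append(key, granter...)
	return key
}

func main() {
	exp := []byte("2025-01-01T00:00:00Z") // placeholder time encoding, illustration only
	grantee := []byte("grantee-addr")
	granter := []byte("granter-addr")
	key := feeAllowanceQueueKey(exp, grantee, granter)
	// Because the expiration sorts first (right after the prefix), iterating the
	// store in key order visits grants in expiration order, which is what lets
	// the EndBlocker prune expired grants with a bounded prefix scan.
	fmt.Println(bytes.HasPrefix(key, []byte{0x01}), len(key)) // true 47
}
```

Placing the expiration before the addresses is the design choice that matters here: it turns "find all expired grants" into a cheap range scan from the prefix up to the current block time.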
Fee allowance queue keys are stored in the state as follows: * Grant: `0x01 | expiration_bytes | grantee_addr_len (1 byte) | grantee_addr_bytes | granter_addr_len (1 byte) | granter_addr_bytes -> EmptyBytes` ## Messages ### Msg/GrantAllowance A fee allowance grant is created with the `MsgGrantAllowance` message. ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/feegrant/v1beta1/tx.proto#L25-L39 ``` ### Msg/RevokeAllowance An existing fee allowance can be removed with the `MsgRevokeAllowance` message. ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/feegrant/v1beta1/tx.proto#L41-L54 ``` ## Events The feegrant module emits the following events: ## Msg Server ### MsgGrantAllowance | Type | Attribute Key | Attribute Value | | ------- | ------------- | ------------------ | | message | action | set\_feegrant | | message | granter | `{granterAddress}` | | message | grantee | `{granteeAddress}` | ### MsgRevokeAllowance | Type | Attribute Key | Attribute Value | | ------- | ------------- | ------------------ | | message | action | revoke\_feegrant | | message | granter | `{granterAddress}` | | message | grantee | `{granteeAddress}` | ### Exec fee allowance | Type | Attribute Key | Attribute Value | | ------- | ------------- | ------------------ | | message | action | use\_feegrant | | message | granter | `{granterAddress}` | | message | grantee | `{granteeAddress}` | ### Prune fee allowances | Type | Attribute Key | Attribute Value | | ------- | ------------- | ----------------- | | message | action | prune\_feegrant | | message | pruner | `{prunerAddress}` | ## Client ### CLI A user can query and interact with the `feegrant` module using the CLI. 
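Among the types in the generated code above, `AllowedMsgAllowance` wraps another allowance and restricts it to an allow-list of message type URLs. A simplified, stdlib-only sketch of that check (illustrative only — the SDK's real `Accept` method additionally delegates spend accounting to the wrapped allowance):

```go
package main

import "fmt"

// allowedMsgsFilter sketches the AllowedMsgAllowance check: every message
// type URL in the transaction must appear in the allow-list, otherwise the
// grant cannot be used to pay the fee.
func allowedMsgsFilter(allowedMessages, txMsgTypeURLs []string) bool {
	allowed := make(map[string]bool, len(allowedMessages))
	for _, url := range allowedMessages {
		allowed[url] = true
	}
	for _, url := range txMsgTypeURLs {
		if !allowed[url] {
			return false
		}
	}
	return true
}

func main() {
	allowList := []string{"/cosmos.bank.v1beta1.MsgSend"}
	fmt.Println(allowedMsgsFilter(allowList, []string{"/cosmos.bank.v1beta1.MsgSend"}))        // true
	fmt.Println(allowedMsgsFilter(allowList, []string{"/cosmos.staking.v1beta1.MsgDelegate"})) // false
}
```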
#### Query The `query` commands allow users to query `feegrant` state. ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query feegrant --help ``` ##### grant The `grant` command allows users to query a grant for a given granter-grantee pair. ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query feegrant grant [granter] [grantee] [flags] ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query feegrant grant cosmos1.. cosmos1.. ``` Example Output: ```yml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} allowance: '@type': /cosmos.feegrant.v1beta1.BasicAllowance expiration: null spend_limit: - amount: "100" denom: stake grantee: cosmos1.. granter: cosmos1.. ``` ##### grants The `grants` command allows users to query all grants for a given grantee. ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query feegrant grants [grantee] [flags] ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query feegrant grants cosmos1.. ``` Example Output: ```yml expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} allowances: - allowance: '@type': /cosmos.feegrant.v1beta1.BasicAllowance expiration: null spend_limit: - amount: "100" denom: stake grantee: cosmos1.. granter: cosmos1.. pagination: next_key: null total: "0" ``` #### Transactions The `tx` commands allow users to interact with the `feegrant` module. ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx feegrant --help ``` ##### grant The `grant` command allows users to grant fee allowances to another account. 
The fee allowance can have an expiration date, a total spend limit, and/or a periodic spend limit. ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx feegrant grant [granter] [grantee] [flags] ``` Example (one-time spend limit): ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx feegrant grant cosmos1.. cosmos1.. --spend-limit 100stake ``` Example (periodic spend limit): ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx feegrant grant cosmos1.. cosmos1.. --period 3600 --period-limit 10stake ``` ##### revoke The `revoke` command allows users to revoke a granted fee allowance. ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx feegrant revoke [granter] [grantee] [flags] ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx feegrant revoke cosmos1.. cosmos1.. ``` ### gRPC A user can query the `feegrant` module using gRPC endpoints. #### Allowance The `Allowance` endpoint allows users to query a granted fee allowance. 
```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.feegrant.v1beta1.Query/Allowance ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"grantee":"cosmos1..","granter":"cosmos1.."}' \ localhost:9090 \ cosmos.feegrant.v1beta1.Query/Allowance ``` Example Output: ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "allowance": { "granter": "cosmos1..", "grantee": "cosmos1..", "allowance": { "@type": "/cosmos.feegrant.v1beta1.BasicAllowance", "spendLimit": [ { "denom": "stake", "amount": "100" } ] } } } ``` #### Allowances The `Allowances` endpoint allows users to query all granted fee allowances for a given grantee. ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.feegrant.v1beta1.Query/Allowances ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"address":"cosmos1.."}' \ localhost:9090 \ cosmos.feegrant.v1beta1.Query/Allowances ``` Example Output: ```json expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "allowances": [ { "granter": "cosmos1..", "grantee": "cosmos1..", "allowance": { "@type": "/cosmos.feegrant.v1beta1.BasicAllowance", "spendLimit": [ { "denom": "stake", "amount": "100" } ] } } ], "pagination": { "total": "1" } } ``` # x/genutil Source: https://docs.cosmos.network/sdk/latest/modules/genutil/README The genutil package contains a variety of genesis utility functionalities for use within a blockchain application. ## Concepts The `genutil` package contains a variety of genesis utility functionalities for use within a blockchain application. 
Namely: * Genesis transactions related (gentx) * Commands for collection and creation of gentxs * `InitChain` processing of gentxs * Genesis file creation * Genesis file validation * Genesis file migration * CometBFT-related initialization * Translation of an app genesis to a CometBFT genesis ## Genesis Genutil contains the data structure that defines an application genesis. An application genesis consists of a consensus genesis (e.g., a CometBFT genesis) and application-related genesis data. ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} package types import ( "bytes" "encoding/json" "errors" "fmt" "os" "time" cmtjson "github.com/cometbft/cometbft/libs/json" cmtproto "github.com/cometbft/cometbft/proto/tendermint/types" cmttypes "github.com/cometbft/cometbft/types" cmttime "github.com/cometbft/cometbft/types/time" "github.com/cosmos/cosmos-sdk/version" ) const ( // MaxChainIDLen is the maximum length of a chain ID. MaxChainIDLen = cmttypes.MaxChainIDLen ) // AppGenesis defines the app's genesis. type AppGenesis struct { AppName string `json:"app_name"` AppVersion string `json:"app_version"` GenesisTime time.Time `json:"genesis_time"` ChainID string `json:"chain_id"` InitialHeight int64 `json:"initial_height"` AppHash []byte `json:"app_hash"` AppState json.RawMessage `json:"app_state,omitempty"` Consensus *ConsensusGenesis `json:"consensus,omitempty"` } // NewAppGenesisWithVersion returns a new AppGenesis with the app name and app version already. func NewAppGenesisWithVersion(chainID string, appState json.RawMessage) *AppGenesis { return &AppGenesis{ AppName: version.AppName, AppVersion: version.Version, ChainID: chainID, AppState: appState, Consensus: &ConsensusGenesis{ Validators: nil, }, } } // ValidateAndComplete performs validation and completes the AppGenesis. 
func (ag *AppGenesis) ValidateAndComplete() error { if ag.ChainID == "" { return errors.New("genesis doc must include non-empty chain_id") } if len(ag.ChainID) > MaxChainIDLen { return fmt.Errorf("chain_id in genesis doc is too long (max: %d)", MaxChainIDLen) } if ag.InitialHeight < 0 { return fmt.Errorf("initial_height cannot be negative (got %v)", ag.InitialHeight) } if ag.InitialHeight == 0 { ag.InitialHeight = 1 } if ag.GenesisTime.IsZero() { ag.GenesisTime = cmttime.Now() } if err := ag.Consensus.ValidateAndComplete(); err != nil { return err } return nil } // SaveAs is a utility method for saving AppGenesis as a JSON file. func (ag *AppGenesis) SaveAs(file string) error { appGenesisBytes, err := json.MarshalIndent(ag, "", " ") if err != nil { return err } return os.WriteFile(file, appGenesisBytes, 0o600) } // AppGenesisFromFile reads the AppGenesis from the provided file. func AppGenesisFromFile(genFile string) (*AppGenesis, error) { jsonBlob, err := os.ReadFile(genFile) if err != nil { return nil, fmt.Errorf("couldn't read AppGenesis file (%s): %w", genFile, err) } var appGenesis AppGenesis if err := json.Unmarshal(jsonBlob, &appGenesis); err != nil { // fallback to CometBFT genesis var ctmGenesis cmttypes.GenesisDoc if err2 := cmtjson.Unmarshal(jsonBlob, &ctmGenesis); err2 != nil { return nil, fmt.Errorf("error unmarshalling AppGenesis at %s: %w\n failed fallback to CometBFT GenDoc: %w", genFile, err, err2) } appGenesis = AppGenesis{ AppName: version.AppName, // AppVersion is not filled as we do not know it from a CometBFT genesis GenesisTime: ctmGenesis.GenesisTime, ChainID: ctmGenesis.ChainID, InitialHeight: ctmGenesis.InitialHeight, AppHash: ctmGenesis.AppHash, AppState: ctmGenesis.AppState, Consensus: &ConsensusGenesis{ Validators: ctmGenesis.Validators, Params: ctmGenesis.ConsensusParams, }, } } return &appGenesis, nil } // -------------------------- // CometBFT Genesis Handling // -------------------------- // ToGenesisDoc converts the AppGenesis to a 
CometBFT GenesisDoc. func (ag *AppGenesis) ToGenesisDoc() (*cmttypes.GenesisDoc, error) { return &cmttypes.GenesisDoc{ GenesisTime: ag.GenesisTime, ChainID: ag.ChainID, InitialHeight: ag.InitialHeight, AppHash: ag.AppHash, AppState: ag.AppState, Validators: ag.Consensus.Validators, ConsensusParams: ag.Consensus.Params, }, nil } // ConsensusGenesis defines the consensus layer's genesis. // TODO(@julienrbrt) eventually abstract from CometBFT types type ConsensusGenesis struct { Validators []cmttypes.GenesisValidator `json:"validators,omitempty"` Params *cmttypes.ConsensusParams `json:"params,omitempty"` } // NewConsensusGenesis returns a ConsensusGenesis with given values. // It takes a proto consensus params so it can called from server export command. func NewConsensusGenesis(params cmtproto.ConsensusParams, validators []cmttypes.GenesisValidator) *ConsensusGenesis { return &ConsensusGenesis{ Params: &cmttypes.ConsensusParams{ Block: cmttypes.BlockParams{ MaxBytes: params.Block.MaxBytes, MaxGas: params.Block.MaxGas, }, Evidence: cmttypes.EvidenceParams{ MaxAgeNumBlocks: params.Evidence.MaxAgeNumBlocks, MaxAgeDuration: params.Evidence.MaxAgeDuration, MaxBytes: params.Evidence.MaxBytes, }, Validator: cmttypes.ValidatorParams{ PubKeyTypes: params.Validator.PubKeyTypes, }, }, Validators: validators, } } func (cs *ConsensusGenesis) MarshalJSON() ([]byte, error) { type Alias ConsensusGenesis return cmtjson.Marshal(&Alias{ Validators: cs.Validators, Params: cs.Params, }) } func (cs *ConsensusGenesis) UnmarshalJSON(b []byte) error { type Alias ConsensusGenesis result := Alias{ } if err := cmtjson.Unmarshal(b, &result); err != nil { return err } cs.Params = result.Params cs.Validators = result.Validators return nil } func (cs *ConsensusGenesis) ValidateAndComplete() error { if cs == nil { return fmt.Errorf("consensus genesis cannot be nil") } if cs.Params == nil { cs.Params = cmttypes.DefaultConsensusParams() } else if err := cs.Params.ValidateBasic(); err != nil { return 
err } for i, v := range cs.Validators { if v.Power == 0 { return fmt.Errorf("the genesis file cannot contain validators with no voting power: %v", v) } if len(v.Address) > 0 && !bytes.Equal(v.PubKey.Address(), v.Address) { return fmt.Errorf("incorrect address for validator %v in the genesis file, should be %v", v, v.PubKey.Address()) } if len(v.Address) == 0 { cs.Validators[i].Address = v.PubKey.Address() } } return nil }
```

The application genesis can then be translated into the format expected by the consensus engine. The server's start command does this via `getGenDocProvider`, which reads the `AppGenesis` from the genesis file and converts it with `ToGenesisDoc`:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package server import ( "context" "errors" "fmt" "io" "net" "os" "runtime/pprof" "github.com/cometbft/cometbft/abci/server" cmtcmd "github.com/cometbft/cometbft/cmd/cometbft/commands" cmtcfg "github.com/cometbft/cometbft/config" "github.com/cometbft/cometbft/node" "github.com/cometbft/cometbft/p2p" pvm "github.com/cometbft/cometbft/privval" "github.com/cometbft/cometbft/proxy" "github.com/cometbft/cometbft/rpc/client/local" cmttypes "github.com/cometbft/cometbft/types" dbm "github.com/cosmos/cosmos-db" "github.com/hashicorp/go-metrics" "github.com/spf13/cobra" "github.com/spf13/pflag" "golang.org/x/sync/errgroup" "google.golang.org/grpc" "google.golang.org/grpc/credentials/insecure" pruningtypes "cosmossdk.io/store/pruning/types" "github.com/cosmos/cosmos-sdk/client" "github.com/cosmos/cosmos-sdk/client/flags" "github.com/cosmos/cosmos-sdk/codec" "github.com/cosmos/cosmos-sdk/server/api" serverconfig "github.com/cosmos/cosmos-sdk/server/config" servergrpc "github.com/cosmos/cosmos-sdk/server/grpc" servercmtlog "github.com/cosmos/cosmos-sdk/server/log" "github.com/cosmos/cosmos-sdk/server/types" "github.com/cosmos/cosmos-sdk/telemetry" "github.com/cosmos/cosmos-sdk/types/mempool" "github.com/cosmos/cosmos-sdk/version" genutiltypes "github.com/cosmos/cosmos-sdk/x/genutil/types" ) const ( // CometBFT full-node start flags flagWithComet = "with-comet" flagAddress = "address" flagTransport = "transport" flagTraceStore = "trace-store"
flagCPUProfile = "cpu-profile" FlagMinGasPrices = "minimum-gas-prices" FlagQueryGasLimit = "query-gas-limit" FlagHaltHeight = "halt-height" FlagHaltTime = "halt-time" FlagInterBlockCache = "inter-block-cache" FlagUnsafeSkipUpgrades = "unsafe-skip-upgrades" FlagTrace = "trace" FlagInvCheckPeriod = "inv-check-period" FlagPruning = "pruning" FlagPruningKeepRecent = "pruning-keep-recent" FlagPruningInterval = "pruning-interval" FlagIndexEvents = "index-events" FlagMinRetainBlocks = "min-retain-blocks" FlagIAVLCacheSize = "iavl-cache-size" FlagDisableIAVLFastNode = "iavl-disable-fastnode" // state sync-related flags FlagStateSyncSnapshotInterval = "state-sync.snapshot-interval" FlagStateSyncSnapshotKeepRecent = "state-sync.snapshot-keep-recent" // api-related flags FlagAPIEnable = "api.enable" FlagAPISwagger = "api.swagger" FlagAPIAddress = "api.address" FlagAPIMaxOpenConnections = "api.max-open-connections" FlagRPCReadTimeout = "api.rpc-read-timeout" FlagRPCWriteTimeout = "api.rpc-write-timeout" FlagRPCMaxBodyBytes = "api.rpc-max-body-bytes" FlagAPIEnableUnsafeCORS = "api.enabled-unsafe-cors" // gRPC-related flags flagGRPCOnly = "grpc-only" flagGRPCEnable = "grpc.enable" flagGRPCAddress = "grpc.address" flagGRPCWebEnable = "grpc-web.enable" // mempool flags FlagMempoolMaxTxs = "mempool.max-txs" ) // StartCmdOptions defines options that can be customized in `StartCmdWithOptions`, type StartCmdOptions struct { // DBOpener can be used to customize db opening, for example customize db options or support different db backends, // default to the builtin db opener. DBOpener func(rootDir string, backendType dbm.BackendType) (dbm.DB, error) // PostSetup can be used to setup extra services under the same cancellable context, // it's not called in stand-alone mode, only for in-process mode. 
PostSetup func(svrCtx *Context, clientCtx client.Context, ctx context.Context, g *errgroup.Group) error // AddFlags add custom flags to start cmd AddFlags func(cmd *cobra.Command) } // StartCmd runs the service passed in, either stand-alone or in-process with // CometBFT. func StartCmd(appCreator types.AppCreator, defaultNodeHome string) *cobra.Command { return StartCmdWithOptions(appCreator, defaultNodeHome, StartCmdOptions{ }) } // StartCmdWithOptions runs the service passed in, either stand-alone or in-process with // CometBFT. func StartCmdWithOptions(appCreator types.AppCreator, defaultNodeHome string, opts StartCmdOptions) *cobra.Command { if opts.DBOpener == nil { opts.DBOpener = openDB } cmd := &cobra.Command{ Use: "start", Short: "Run the full node", Long: `Run the full node application with CometBFT in or out of process. By default, the application will run with CometBFT in process. Pruning options can be provided via the '--pruning' flag or alternatively with '--pruning-keep-recent', and 'pruning-interval' together. For '--pruning' the options are as follows: default: the last 362880 states are kept, pruning at 10 block intervals nothing: all historic states will be saved, nothing will be deleted (i.e. archiving node) everything: 2 latest states will be kept; pruning at 10 block intervals. custom: allow pruning options to be manually specified through 'pruning-keep-recent', and 'pruning-interval' Node halting configurations exist in the form of two flags: '--halt-height' and '--halt-time'. During the ABCI Commit phase, the node will check if the current block height is greater than or equal to the halt-height or if the current block time is greater than or equal to the halt-time. If so, the node will attempt to gracefully shutdown and the block will not be committed. In addition, the node will not be able to commit subsequent blocks. 
For profiling and benchmarking purposes, CPU profiling can be enabled via the '--cpu-profile' flag which accepts a path for the resulting pprof file. The node may be started in a 'query only' mode where only the gRPC and JSON HTTP API services are enabled via the 'grpc-only' flag. In this mode, CometBFT is bypassed and can be used when legacy queries are needed after an on-chain upgrade is performed. Note, when enabled, gRPC will also be automatically enabled. `, PreRunE: func(cmd *cobra.Command, _ []string) error { serverCtx := GetServerContextFromCmd(cmd) // Bind flags to the Context's Viper so the app construction can set // options accordingly. if err := serverCtx.Viper.BindPFlags(cmd.Flags()); err != nil { return err } _, err := GetPruningOptionsFromFlags(serverCtx.Viper) return err }, RunE: func(cmd *cobra.Command, _ []string) error { serverCtx := GetServerContextFromCmd(cmd) clientCtx, err := client.GetClientQueryContext(cmd) if err != nil { return err } withCMT, _ := cmd.Flags().GetBool(flagWithComet) if !withCMT { serverCtx.Logger.Info("starting ABCI without CometBFT") } return wrapCPUProfile(serverCtx, func() error { return start(serverCtx, clientCtx, appCreator, withCMT, opts) }) }, } cmd.Flags().String(flags.FlagHome, defaultNodeHome, "The application home directory") cmd.Flags().Bool(flagWithComet, true, "Run abci app embedded in-process with CometBFT") cmd.Flags().String(flagAddress, "tcp://0.0.0.0:26658", "Listen address") cmd.Flags().String(flagTransport, "socket", "Transport protocol: socket, grpc") cmd.Flags().String(flagTraceStore, "", "Enable KVStore tracing to an output file") cmd.Flags().String(FlagMinGasPrices, "", "Minimum gas prices to accept for transactions; Any fee in a tx must meet this minimum (e.g. 0.01photino;0.0001stake)") cmd.Flags().Uint64(FlagQueryGasLimit, 0, "Maximum gas a Rest/Grpc query can consume. 
Blank and 0 imply unbounded.") cmd.Flags().IntSlice(FlagUnsafeSkipUpgrades, []int{ }, "Skip a set of upgrade heights to continue the old binary") cmd.Flags().Uint64(FlagHaltHeight, 0, "Block height at which to gracefully halt the chain and shutdown the node") cmd.Flags().Uint64(FlagHaltTime, 0, "Minimum block time (in Unix seconds) at which to gracefully halt the chain and shutdown the node") cmd.Flags().Bool(FlagInterBlockCache, true, "Enable inter-block caching") cmd.Flags().String(flagCPUProfile, "", "Enable CPU profiling and write to the provided file") cmd.Flags().Bool(FlagTrace, false, "Provide full stack traces for errors in ABCI Log") cmd.Flags().String(FlagPruning, pruningtypes.PruningOptionDefault, "Pruning strategy (default|nothing|everything|custom)") cmd.Flags().Uint64(FlagPruningKeepRecent, 0, "Number of recent heights to keep on disk (ignored if pruning is not 'custom')") cmd.Flags().Uint64(FlagPruningInterval, 0, "Height interval at which pruned heights are removed from disk (ignored if pruning is not 'custom')") cmd.Flags().Uint(FlagInvCheckPeriod, 0, "Assert registered invariants every N blocks") cmd.Flags().Uint64(FlagMinRetainBlocks, 0, "Minimum block height offset during ABCI commit to prune CometBFT blocks") cmd.Flags().Bool(FlagAPIEnable, false, "Define if the API server should be enabled") cmd.Flags().Bool(FlagAPISwagger, false, "Define if swagger documentation should automatically be registered (Note: the API must also be enabled)") cmd.Flags().String(FlagAPIAddress, serverconfig.DefaultAPIAddress, "the API server address to listen on") cmd.Flags().Uint(FlagAPIMaxOpenConnections, 1000, "Define the number of maximum open connections") cmd.Flags().Uint(FlagRPCReadTimeout, 10, "Define the CometBFT RPC read timeout (in seconds)") cmd.Flags().Uint(FlagRPCWriteTimeout, 0, "Define the CometBFT RPC write timeout (in seconds)") cmd.Flags().Uint(FlagRPCMaxBodyBytes, 1000000, "Define the CometBFT maximum request body (in bytes)") 
cmd.Flags().Bool(FlagAPIEnableUnsafeCORS, false, "Define if CORS should be enabled (unsafe - use it at your own risk)") cmd.Flags().Bool(flagGRPCOnly, false, "Start the node in gRPC query only mode (no CometBFT process is started)") cmd.Flags().Bool(flagGRPCEnable, true, "Define if the gRPC server should be enabled") cmd.Flags().String(flagGRPCAddress, serverconfig.DefaultGRPCAddress, "the gRPC server address to listen on") cmd.Flags().Bool(flagGRPCWebEnable, true, "Define if the gRPC-Web server should be enabled. (Note: gRPC must also be enabled)") cmd.Flags().Uint64(FlagStateSyncSnapshotInterval, 0, "State sync snapshot interval") cmd.Flags().Uint32(FlagStateSyncSnapshotKeepRecent, 2, "State sync snapshot to keep") cmd.Flags().Bool(FlagDisableIAVLFastNode, false, "Disable fast node for IAVL tree") cmd.Flags().Int(FlagMempoolMaxTxs, mempool.DefaultMaxTx, "Sets MaxTx value for the app-side mempool") // support old flags name for backwards compatibility cmd.Flags().SetNormalizeFunc(func(f *pflag.FlagSet, name string) pflag.NormalizedName { if name == "with-tendermint" { name = flagWithComet } return pflag.NormalizedName(name) }) // add support for all CometBFT-specific command line options cmtcmd.AddNodeFlags(cmd) if opts.AddFlags != nil { opts.AddFlags(cmd) } return cmd } func start(svrCtx *Context, clientCtx client.Context, appCreator types.AppCreator, withCmt bool, opts StartCmdOptions) error { svrCfg, err := getAndValidateConfig(svrCtx) if err != nil { return err } app, appCleanupFn, err := startApp(svrCtx, appCreator, opts) if err != nil { return err } defer appCleanupFn() metrics, err := startTelemetry(svrCfg) if err != nil { return err } emitServerInfoMetrics() if !withCmt { return startStandAlone(svrCtx, app, opts) } return startInProcess(svrCtx, svrCfg, clientCtx, app, metrics, opts) } func startStandAlone(svrCtx *Context, app types.Application, opts StartCmdOptions) error { addr := svrCtx.Viper.GetString(flagAddress) transport := 
svrCtx.Viper.GetString(flagTransport) cmtApp := NewCometABCIWrapper(app) svr, err := server.NewServer(addr, transport, cmtApp) if err != nil { return fmt.Errorf("error creating listener: %v", err) } svr.SetLogger(servercmtlog.CometLoggerWrapper{ Logger: svrCtx.Logger.With("module", "abci-server") }) g, ctx := getCtx(svrCtx, false) g.Go(func() error { if err := svr.Start(); err != nil { svrCtx.Logger.Error("failed to start out-of-process ABCI server", "err", err) return err } // Wait for the calling process to be canceled or close the provided context, // so we can gracefully stop the ABCI server. <-ctx.Done() svrCtx.Logger.Info("stopping the ABCI server...") return errors.Join(svr.Stop(), app.Close()) }) return g.Wait() } func startInProcess(svrCtx *Context, svrCfg serverconfig.Config, clientCtx client.Context, app types.Application, metrics *telemetry.Metrics, opts StartCmdOptions, ) error { cmtCfg := svrCtx.Config home := cmtCfg.RootDir gRPCOnly := svrCtx.Viper.GetBool(flagGRPCOnly) g, ctx := getCtx(svrCtx, true) if gRPCOnly { // TODO: Generalize logic so that gRPC only is really in startStandAlone svrCtx.Logger.Info("starting node in gRPC only mode; CometBFT is disabled") svrCfg.GRPC.Enable = true } else { svrCtx.Logger.Info("starting node with ABCI CometBFT in-process") tmNode, cleanupFn, err := startCmtNode(ctx, cmtCfg, app, svrCtx) if err != nil { return err } defer cleanupFn() // Add the tx service to the gRPC router. We only need to register this // service if API or gRPC is enabled, and avoid doing so in the general // case, because it spawns a new local CometBFT RPC client. if svrCfg.API.Enable || svrCfg.GRPC.Enable { // Re-assign for making the client available below do not use := to avoid // shadowing the clientCtx variable. 
clientCtx = clientCtx.WithClient(local.New(tmNode)) app.RegisterTxService(clientCtx) app.RegisterTendermintService(clientCtx) app.RegisterNodeService(clientCtx, svrCfg) } } grpcSrv, clientCtx, err := startGrpcServer(ctx, g, svrCfg.GRPC, clientCtx, svrCtx, app) if err != nil { return err } err = startAPIServer(ctx, g, cmtCfg, svrCfg, clientCtx, svrCtx, app, home, grpcSrv, metrics) if err != nil { return err } if opts.PostSetup != nil { if err := opts.PostSetup(svrCtx, clientCtx, ctx, g); err != nil { return err } } // wait for signal capture and gracefully return // we are guaranteed to be waiting for the "ListenForQuitSignals" goroutine. return g.Wait() } // TODO: Move nodeKey into being created within the function. func startCmtNode( ctx context.Context, cfg *cmtcfg.Config, app types.Application, svrCtx *Context, ) (tmNode *node.Node, cleanupFn func(), err error) { nodeKey, err := p2p.LoadOrGenNodeKey(cfg.NodeKeyFile()) if err != nil { return nil, cleanupFn, err } cmtApp := NewCometABCIWrapper(app) tmNode, err = node.NewNodeWithContext( ctx, cfg, pvm.LoadOrGenFilePV(cfg.PrivValidatorKeyFile(), cfg.PrivValidatorStateFile()), nodeKey, proxy.NewLocalClientCreator(cmtApp), getGenDocProvider(cfg), cmtcfg.DefaultDBProvider, node.DefaultMetricsProvider(cfg.Instrumentation), servercmtlog.CometLoggerWrapper{ Logger: svrCtx.Logger }, ) if err != nil { return tmNode, cleanupFn, err } if err := tmNode.Start(); err != nil { return tmNode, cleanupFn, err } cleanupFn = func() { if tmNode != nil && tmNode.IsRunning() { _ = tmNode.Stop() _ = app.Close() } } return tmNode, cleanupFn, nil } func getAndValidateConfig(svrCtx *Context) (serverconfig.Config, error) { config, err := serverconfig.GetConfig(svrCtx.Viper) if err != nil { return config, err } if err := config.ValidateBasic(); err != nil { return config, err } return config, nil } // returns a function which returns the genesis doc from the genesis file. 
func getGenDocProvider(cfg *cmtcfg.Config) func() (*cmttypes.GenesisDoc, error) { return func() (*cmttypes.GenesisDoc, error) { appGenesis, err := genutiltypes.AppGenesisFromFile(cfg.GenesisFile()) if err != nil { return nil, err } return appGenesis.ToGenesisDoc() } } func setupTraceWriter(svrCtx *Context) (traceWriter io.WriteCloser, cleanup func(), err error) { // clean up the traceWriter when the server is shutting down cleanup = func() { } traceWriterFile := svrCtx.Viper.GetString(flagTraceStore) traceWriter, err = openTraceWriter(traceWriterFile) if err != nil { return traceWriter, cleanup, err } // if flagTraceStore is not used then traceWriter is nil if traceWriter != nil { cleanup = func() { if err = traceWriter.Close(); err != nil { svrCtx.Logger.Error("failed to close trace writer", "err", err) } } } return traceWriter, cleanup, nil } func startGrpcServer( ctx context.Context, g *errgroup.Group, config serverconfig.GRPCConfig, clientCtx client.Context, svrCtx *Context, app types.Application, ) (*grpc.Server, client.Context, error) { if !config.Enable { // return grpcServer as nil if gRPC is disabled return nil, clientCtx, nil } _, port, err := net.SplitHostPort(config.Address) if err != nil { return nil, clientCtx, err } maxSendMsgSize := config.MaxSendMsgSize if maxSendMsgSize == 0 { maxSendMsgSize = serverconfig.DefaultGRPCMaxSendMsgSize } maxRecvMsgSize := config.MaxRecvMsgSize if maxRecvMsgSize == 0 { maxRecvMsgSize = serverconfig.DefaultGRPCMaxRecvMsgSize } grpcAddress := fmt.Sprintf("127.0.0.1:%s", port) // if gRPC is enabled, configure gRPC client for gRPC gateway grpcClient, err := grpc.Dial( grpcAddress, grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithDefaultCallOptions( grpc.ForceCodec(codec.NewProtoCodec(clientCtx.InterfaceRegistry).GRPCCodec()), grpc.MaxCallRecvMsgSize(maxRecvMsgSize), grpc.MaxCallSendMsgSize(maxSendMsgSize), ), ) if err != nil { return nil, clientCtx, err } clientCtx = clientCtx.WithGRPCClient(grpcClient) 
svrCtx.Logger.Debug("gRPC client assigned to client context", "target", grpcAddress) grpcSrv, err := servergrpc.NewGRPCServer(clientCtx, app, config) if err != nil { return nil, clientCtx, err } // Start the gRPC server in a goroutine. Note, the provided ctx will ensure // that the server is gracefully shut down. g.Go(func() error { return servergrpc.StartGRPCServer(ctx, svrCtx.Logger.With("module", "grpc-server"), config, grpcSrv) }) return grpcSrv, clientCtx, nil } func startAPIServer( ctx context.Context, g *errgroup.Group, cmtCfg *cmtcfg.Config, svrCfg serverconfig.Config, clientCtx client.Context, svrCtx *Context, app types.Application, home string, grpcSrv *grpc.Server, metrics *telemetry.Metrics, ) error { if !svrCfg.API.Enable { return nil } clientCtx = clientCtx.WithHomeDir(home) apiSrv := api.New(clientCtx, svrCtx.Logger.With("module", "api-server"), grpcSrv) app.RegisterAPIRoutes(apiSrv, svrCfg.API) if svrCfg.Telemetry.Enabled { apiSrv.SetTelemetry(metrics) } g.Go(func() error { return apiSrv.Start(ctx, svrCfg) }) return nil } func startTelemetry(cfg serverconfig.Config) (*telemetry.Metrics, error) { if !cfg.Telemetry.Enabled { return nil, nil } return telemetry.New(cfg.Telemetry) } // wrapCPUProfile starts CPU profiling, if enabled, and executes the provided // callbackFn in a separate goroutine, then will wait for that callback to // return. // // NOTE: We expect the caller to handle graceful shutdown and signal handling. 
func wrapCPUProfile(svrCtx *Context, callbackFn func() error) error { if cpuProfile := svrCtx.Viper.GetString(flagCPUProfile); cpuProfile != "" { f, err := os.Create(cpuProfile) if err != nil { return err } svrCtx.Logger.Info("starting CPU profiler", "profile", cpuProfile) if err := pprof.StartCPUProfile(f); err != nil { return err } defer func() { svrCtx.Logger.Info("stopping CPU profiler", "profile", cpuProfile) pprof.StopCPUProfile() if err := f.Close(); err != nil { svrCtx.Logger.Info("failed to close cpu-profile file", "profile", cpuProfile, "err", err.Error()) } }() } return callbackFn() } // emitServerInfoMetrics emits server info related metrics using application telemetry. func emitServerInfoMetrics() { var ls []metrics.Label versionInfo := version.NewInfo() if len(versionInfo.GoVersion) > 0 { ls = append(ls, telemetry.NewLabel("go", versionInfo.GoVersion)) } if len(versionInfo.CosmosSdkVersion) > 0 { ls = append(ls, telemetry.NewLabel("version", versionInfo.CosmosSdkVersion)) } if len(ls) == 0 { return } telemetry.SetGaugeWithLabels([]string{"server", "info" }, 1, ls) } func getCtx(svrCtx *Context, block bool) (*errgroup.Group, context.Context) { ctx, cancelFn := context.WithCancel(context.Background()) g, ctx := errgroup.WithContext(ctx) // listen for quit signals so the calling parent process can gracefully exit ListenForQuitSignals(g, block, cancelFn, svrCtx.Logger) return g, ctx } func startApp(svrCtx *Context, appCreator types.AppCreator, opts StartCmdOptions) (app types.Application, cleanupFn func(), err error) { traceWriter, traceCleanupFn, err := setupTraceWriter(svrCtx) if err != nil { return app, traceCleanupFn, err } home := svrCtx.Config.RootDir db, err := opts.DBOpener(home, GetAppDBBackend(svrCtx.Viper)) if err != nil { return app, traceCleanupFn, err } app = appCreator(svrCtx.Logger, db, traceWriter, svrCtx.Viper) cleanupFn = func() { traceCleanupFn() if localErr := app.Close(); localErr != nil { svrCtx.Logger.Error(localErr.Error()) } } 
return app, cleanupFn, nil }
```

## Client

### CLI

The genutil commands are available under the `genesis` subcommand.

#### add-genesis-account

Add a genesis account to `genesis.json`. Learn more [here](/sdk/latest/node/run-node#adding-genesis-accounts).

#### collect-gentxs

Collect genesis txs and output a `genesis.json` file.

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd genesis collect-gentxs
```

This will create a new `genesis.json` file that includes data from all the validators (we sometimes call it the "super genesis file" to distinguish it from single-validator genesis files).

#### gentx

Generate a genesis tx carrying a self delegation.

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd genesis gentx [key_name] [amount] --chain-id [chain-id]
```

This will create the genesis transaction for your new chain. Here `amount` should be at least `1000000000stake`. If you provide too much or too little, you will encounter an error when starting a node.

#### migrate

Migrate genesis to a specified target (SDK) version.

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd genesis migrate [target-version]
```

The `migrate` command is extensible and takes a `MigrationMap`. This map is a mapping of target versions to genesis migration functions. When not using the default `MigrationMap`, it is recommended to still call the default `MigrationMap` corresponding to the SDK version of the chain and prepend/append your own genesis migrations.

#### validate-genesis

Validates the genesis file at the default location or at the location passed as an argument.

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd genesis validate-genesis
```

`validate-genesis` only checks that the genesis file is valid for the **current application binary**.
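To illustrate the kind of checks involved, the structural part of genesis validation can be sketched with only the standard library. The snippet below mirrors the `chain_id` and `initial_height` rules from `ValidateAndComplete` shown earlier; `validateAppGenesis` and the trimmed-down `appGenesis` struct are hypothetical stand-ins for illustration, not SDK types:

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// appGenesis mirrors a small subset of the AppGenesis JSON fields.
type appGenesis struct {
	ChainID       string `json:"chain_id"`
	InitialHeight int64  `json:"initial_height"`
}

// validateAppGenesis is a hypothetical helper applying two of the
// ValidateAndComplete rules: a non-empty chain_id and a
// non-negative initial height.
func validateAppGenesis(data []byte) error {
	var ag appGenesis
	if err := json.Unmarshal(data, &ag); err != nil {
		return err
	}
	if ag.ChainID == "" {
		return errors.New("genesis doc must include non-empty chain_id")
	}
	if ag.InitialHeight < 0 {
		return fmt.Errorf("initial_height cannot be negative (got %d)", ag.InitialHeight)
	}
	return nil
}

func main() {
	ok := []byte(`{"chain_id":"test-chain-1","initial_height":1}`)
	bad := []byte(`{"chain_id":"","initial_height":1}`)
	fmt.Println(validateAppGenesis(ok))  // <nil>
	fmt.Println(validateAppGenesis(bad)) // genesis doc must include non-empty chain_id
}
```

The real command additionally runs every module's `ValidateGenesis` over the app state, so passing a structural check like this is necessary but not sufficient.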
For validating a genesis from a previous version of the application, use the `migrate` command to migrate the genesis to the current version.

# x/gov

Source: https://docs.cosmos.network/sdk/latest/modules/gov/README

This paper specifies the Governance module of the Cosmos SDK, which was first described in the Cosmos Whitepaper in June 2016.

## Abstract

This paper specifies the Governance module of the Cosmos SDK, which was first described in the [Cosmos Whitepaper](https://github.com/cosmos/cosmos/blob/master/WHITEPAPER.md) in June 2016.

The module enables Cosmos SDK based blockchains to support an on-chain governance system. In this system, holders of the native staking token of the chain can vote on proposals on a 1 token, 1 vote basis. The module currently supports the following features:

* **Proposal submission:** Users can submit proposals with a deposit. Once the minimum deposit is reached, the proposal enters the voting period. The minimum deposit can be reached by collecting deposits from different users (including the proposer) within the deposit period.
* **Vote:** Participants can vote on proposals that reached `MinDeposit` and entered the voting period.
* **Inheritance and penalties:** Delegators inherit their validator's vote if they don't vote themselves.
* **Claiming deposit:** Users that deposited on proposals can recover their deposits if the proposal was accepted or rejected. If the proposal was vetoed, or never entered the voting period (minimum deposit not reached within the deposit period), the deposit is burned.

This module is in use on the Cosmos Hub (a.k.a. [gaia](https://github.com/cosmos/gaia)). Features that may be added in the future are described in [Future Improvements](#future-improvements).

## Contents

The following specification uses *ATOM* as the native staking token. The module can be adapted to any Proof-of-Stake blockchain by replacing *ATOM* with the native staking token of the chain.
* [Concepts](#concepts)
  * [Proposal submission](#proposal-submission)
  * [Deposit](#deposit)
  * [Vote](#vote)
  * [Software Upgrade](#software-upgrade)
* [State](#state)
  * [Proposals](#proposals)
  * [Parameters and base types](#parameters-and-base-types)
  * [Deposit](#deposit-1)
  * [ValidatorGovInfo](#validatorgovinfo)
  * [Stores](#stores)
  * [Proposal Processing Queue](#proposal-processing-queue)
  * [Legacy Proposal](#legacy-proposal)
* [Messages](#messages)
  * [Proposal Submission](#proposal-submission-1)
  * [Deposit](#deposit-2)
  * [Vote](#vote-1)
* [Events](#events)
  * [EndBlocker](#endblocker)
  * [Handlers](#handlers)
* [Hooks](#hooks)
  * [AfterProposalSubmission](#afterproposalsubmission)
  * [AfterProposalDeposit](#afterproposaldeposit)
  * [AfterProposalVote](#afterproposalvote)
  * [AfterProposalFailedMinDeposit](#afterproposalfailedmindeposit)
  * [AfterProposalVotingPeriodEnded](#afterproposalvotingperiodended)
* [Parameters](#parameters)
* [Client](#client)
  * [CLI](#cli)
  * [gRPC](#grpc)
  * [REST](#rest)
* [Metadata](#metadata)
  * [Proposal](#proposal-3)
  * [Vote](#vote-5)
* [Future Improvements](#future-improvements)

## Concepts

The governance process is divided into a few steps that are outlined below:

* **Proposal submission:** Proposal is submitted to the blockchain with a deposit.
* **Vote:** Once the deposit reaches a certain value (`MinDeposit`), the proposal is confirmed and the vote opens. Bonded Atom holders can then send `TxGovVote` transactions to vote on the proposal.
* **Execution:** After a period of time, the votes are tallied and, depending on the result, the messages in the proposal will be executed.

### Proposal submission

#### Right to submit a proposal

Every account can submit proposals by sending a `MsgSubmitProposal` transaction. Once a proposal is submitted, it is identified by its unique `proposalID`.

#### Proposal Messages

A proposal includes an array of `sdk.Msg`s which are executed automatically if the proposal passes.
The messages are executed by the governance `ModuleAccount` itself. Modules such as `x/upgrade` that want certain messages to be executable by governance only should add a whitelist within the respective msg server, granting the governance module the right to execute the message once a quorum has been reached. The governance module uses the `MsgServiceRouter` to check that these messages are correctly constructed and have a respective path to execute on, but it does not perform a full validity check.

### Deposit

To prevent spam, proposals must be submitted with a deposit in the coins defined by the `MinDeposit` param.

When a proposal is submitted, it has to be accompanied by a deposit that must be strictly positive, but can be less than `MinDeposit`. The submitter doesn't need to pay for the entire deposit on their own. The newly created proposal is stored in an *inactive proposal queue* and stays there until its deposit passes `MinDeposit`. Other token holders can increase the proposal's deposit by sending a `Deposit` transaction. If a proposal doesn't pass `MinDeposit` before the deposit end time (the time when deposits are no longer accepted), the proposal will be destroyed: the proposal will be removed from state and the deposit will be burned (see x/gov `EndBlocker`). When a proposal deposit passes the `MinDeposit` threshold (even during the proposal submission) before the deposit end time, the proposal will be moved into the *active proposal queue* and the voting period will begin.

The deposit is kept in escrow and held by the governance `ModuleAccount` until the proposal is finalized (passed or rejected).

#### Deposit refund and burn

When a proposal is finalized, the coins from the deposit are either refunded or burned according to the final tally of the proposal:

* If the proposal is approved or rejected but *not* vetoed, each deposit will be automatically refunded to its respective depositor (transferred from the governance `ModuleAccount`).
* When the proposal is vetoed with greater than 1/3 of the voting power, deposits will be burned from the governance `ModuleAccount` and the proposal information along with its deposit information will be removed from state.
* All refunded or burned deposits are removed from the state. Events are issued when burning or refunding a deposit.

### Vote

#### Participants

*Participants* are users that have the right to vote on proposals. On the Cosmos Hub, participants are bonded Atom holders. Unbonded Atom holders and other users do not get the right to participate in governance. However, they can submit and deposit on proposals.

Note that when *participants* have bonded and unbonded Atoms, their voting power is calculated from their bonded Atom holdings only.

#### Voting period

Once a proposal reaches `MinDeposit`, it immediately enters the `Voting period`. We define the `Voting period` as the interval between the moment the vote opens and the moment the vote closes. The initial value of `Voting period` is 2 weeks.

#### Option set

The option set of a proposal refers to the set of choices a participant can choose from when casting their vote. The initial option set includes the following options:

* `Yes`
* `No`
* `NoWithVeto`
* `Abstain`

`NoWithVeto` counts as `No` but also adds a `Veto` vote. The `Abstain` option allows voters to signal that they do not intend to vote in favor of or against the proposal but accept the result of the vote.

*Note: from the UI, for urgent proposals we should maybe add a 'Not Urgent' option that casts a `NoWithVeto` vote.*

#### Weighted Votes

[ADR-037](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-037-gov-split-vote.md) introduces the weighted vote feature, which allows a staker to split their vote into several voting options. For example, a staker could use 70% of their voting power to vote Yes and 30% of their voting power to vote No.

Oftentimes the entity owning that address might not be a single individual.
For example, a company might have different stakeholders who want to vote differently, so it makes sense to allow them to split their voting power. Currently, it is not possible for them to do "passthrough voting" and give their users voting rights over their tokens. However, with this system, exchanges can poll their users for voting preferences, and then vote on-chain proportionally to the results of the poll.

To represent a weighted vote on chain, we use the following Protobuf messages:

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/gov/v1beta1/gov.proto#L34-L47
```

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/gov/v1beta1/gov.proto#L181-L201
```

For a weighted vote to be valid, the `options` field must not contain duplicate vote options, and the sum of weights of all options must be equal to 1.

#### Custom Vote Calculation

Cosmos SDK v0.53.0 introduced an option for developers to define a custom vote result and voting power calculation function. As of v0.54, `x/gov` has been decoupled from `x/staking`: the `keeper.NewKeeper` constructor now requires a `CalculateVoteResultsAndVotingPowerFn` as a required parameter instead of a `StakingKeeper`. To use the default staking-based tally logic, wrap your staking keeper with `keeper.NewDefaultCalculateVoteResultsAndVotingPower(stakingKeeper)`.
```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package keeper

import (
	"context"
	"fmt"

	"cosmossdk.io/collections"
	"cosmossdk.io/math"

	sdk "github.com/cosmos/cosmos-sdk/types"
	v1 "github.com/cosmos/cosmos-sdk/x/gov/types/v1"
	stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types"
)

// CalculateVoteResultsAndVotingPowerFn is a function signature for calculating vote results and voting power.
// It can be overridden to customize the voting power calculation for proposals.
// It gets the proposal being tallied and the validators' governance infos (validator power, voting power, etc.).
// It must return the total voting power and the results of the vote.
type CalculateVoteResultsAndVotingPowerFn func(
	ctx context.Context,
	k Keeper,
	proposal v1.Proposal,
	validators map[string]v1.ValidatorGovInfo,
) (totalVoterPower math.LegacyDec, results map[v1.VoteOption]math.LegacyDec, err error)

func defaultCalculateVoteResultsAndVotingPower(
	ctx context.Context,
	k Keeper,
	proposal v1.Proposal,
	validators map[string]v1.ValidatorGovInfo,
) (totalVoterPower math.LegacyDec, results map[v1.VoteOption]math.LegacyDec, err error) {
	totalVotingPower := math.LegacyZeroDec()
	results = make(map[v1.VoteOption]math.LegacyDec)
	results[v1.OptionYes] = math.LegacyZeroDec()
	results[v1.OptionAbstain] = math.LegacyZeroDec()
	results[v1.OptionNo] = math.LegacyZeroDec()
	results[v1.OptionNoWithVeto] = math.LegacyZeroDec()

	rng := collections.NewPrefixedPairRange[uint64, sdk.AccAddress](proposal.Id)
	votesToRemove := []collections.Pair[uint64, sdk.AccAddress]{}
	err = k.Votes.Walk(ctx, rng, func(key collections.Pair[uint64, sdk.AccAddress], vote v1.Vote) (bool, error) {
		// if validator, just record it in the map
		voter, err := k.authKeeper.AddressCodec().StringToBytes(vote.Voter)
		if err != nil {
			return false, err
		}

		valAddrStr, err := k.sk.ValidatorAddressCodec().BytesToString(voter)
		if err != nil {
			return false, err
		}

		if val, ok := validators[valAddrStr]; ok
		{
			val.Vote = vote.Options
			validators[valAddrStr] = val
		}

		// iterate over all delegations from voter, deduct from any delegated-to validators
		err = k.sk.IterateDelegations(ctx, voter, func(index int64, delegation stakingtypes.DelegationI) (stop bool) {
			valAddrStr := delegation.GetValidatorAddr()

			if val, ok := validators[valAddrStr]; ok {
				// There is no need to handle the special case that validator address equals voter address,
				// because the voter's voting power will tally again even if there is a deduction of the voter's voting power from the validator.
				val.DelegatorDeductions = val.DelegatorDeductions.Add(delegation.GetShares())
				validators[valAddrStr] = val

				// delegation shares * bonded / total shares
				votingPower := delegation.GetShares().MulInt(val.ValidatorPower).Quo(val.DelegatorShares)

				for _, option := range vote.Options {
					weight, _ := math.LegacyNewDecFromStr(option.Weight)
					subPower := votingPower.Mul(weight)
					results[option.Option] = results[option.Option].Add(subPower)
				}
				totalVotingPower = totalVotingPower.Add(votingPower)
			}

			return false
		})
		if err != nil {
			return false, err
		}

		votesToRemove = append(votesToRemove, key)
		return false, nil
	})
	if err != nil {
		return math.LegacyZeroDec(), nil, fmt.Errorf("error while iterating delegations: %w", err)
	}

	// remove all votes from store
	for _, key := range votesToRemove {
		if err := k.Votes.Remove(ctx, key); err != nil {
			return math.LegacyDec{}, nil, fmt.Errorf("error while removing vote (%d/%s): %w", key.K1(), key.K2(), err)
		}
	}

	// iterate over the validators again to tally their voting power
	for _, val := range validators {
		if len(val.Vote) == 0 {
			continue
		}

		sharesAfterDeductions := val.DelegatorShares.Sub(val.DelegatorDeductions)
		votingPower := sharesAfterDeductions.MulInt(val.ValidatorPower).Quo(val.DelegatorShares)

		for _, option := range val.Vote {
			weight, _ := math.LegacyNewDecFromStr(option.Weight)
			subPower := votingPower.Mul(weight)
			results[option.Option] = results[option.Option].Add(subPower)
		}
		totalVotingPower =
			totalVotingPower.Add(votingPower)
	}

	return totalVotingPower, results, nil
}

// getCurrentValidators fetches all the bonded validators and inserts them into currValidators
func (k Keeper) getCurrentValidators(ctx context.Context) (map[string]v1.ValidatorGovInfo, error) {
	currValidators := make(map[string]v1.ValidatorGovInfo)
	if err := k.sk.IterateBondedValidatorsByPower(ctx, func(index int64, validator stakingtypes.ValidatorI) (stop bool) {
		valBz, err := k.sk.ValidatorAddressCodec().StringToBytes(validator.GetOperator())
		if err != nil {
			return false
		}
		currValidators[validator.GetOperator()] = v1.NewValidatorGovInfo(
			valBz,
			validator.GetValidatorPower(),
			validator.GetDelegatorShares(),
			math.LegacyZeroDec(),
			v1.WeightedVoteOptions{},
		)

		return false
	}); err != nil {
		return nil, err
	}

	return currValidators, nil
}

// Tally iterates over the votes and updates the tally of a proposal based on the voting power of the voters
func (k Keeper) Tally(ctx context.Context, proposal v1.Proposal) (passes, burnDeposits bool, tallyResults v1.TallyResult, err error) {
	currValidators, err := k.getCurrentValidators(ctx)
	if err != nil {
		return false, false, tallyResults, fmt.Errorf("error while getting current validators: %w", err)
	}

	tallyFn := k.calculateVoteResultsAndVotingPowerFn
	totalVotingPower, results, err := tallyFn(ctx, k, proposal, currValidators)
	if err != nil {
		return false, false, tallyResults, fmt.Errorf("error while calculating tally results: %w", err)
	}

	tallyResults = v1.NewTallyResultFromMap(results)

	// TODO: Upgrade the spec to cover all of these cases & remove pseudocode.
	// If there are no staked coins, the proposal fails
	totalBonded, err := k.sk.TotalValidatorPower(ctx)
	if err != nil {
		return false, false, tallyResults, err
	}

	if totalBonded.IsZero() {
		return false, false, tallyResults, nil
	}

	params, err := k.Params.Get(ctx)
	if err != nil {
		return false, false, tallyResults, fmt.Errorf("error while getting params: %w", err)
	}

	// If there is not enough quorum of votes, the proposal fails
	percentVoting := totalVotingPower.Quo(math.LegacyNewDecFromInt(totalBonded))
	quorum, _ := math.LegacyNewDecFromStr(params.Quorum)
	if percentVoting.LT(quorum) {
		return false, params.BurnVoteQuorum, tallyResults, nil
	}

	// If no one votes (everyone abstains), proposal fails
	if totalVotingPower.Sub(results[v1.OptionAbstain]).Equal(math.LegacyZeroDec()) {
		return false, false, tallyResults, nil
	}

	// If more than 1/3 of voters veto, proposal fails
	vetoThreshold, _ := math.LegacyNewDecFromStr(params.VetoThreshold)
	if results[v1.OptionNoWithVeto].Quo(totalVotingPower).GT(vetoThreshold) {
		return false, params.BurnVoteVeto, tallyResults, nil
	}

	// If more than 1/2 of non-abstaining voters vote Yes, proposal passes
	// (for expedited proposals, 2/3)
	var thresholdStr string
	if proposal.Expedited {
		thresholdStr = params.GetExpeditedThreshold()
	} else {
		thresholdStr = params.GetThreshold()
	}

	threshold, _ := math.LegacyNewDecFromStr(thresholdStr)

	if results[v1.OptionYes].Quo(totalVotingPower.Sub(results[v1.OptionAbstain])).GT(threshold) {
		return true, false, tallyResults, nil
	}

	// If more than 1/2 of non-abstaining voters vote No, proposal fails
	return false, false, tallyResults, nil
}
```

This gives developers a more expressive way to handle governance on their appchains.
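The decision sequence in `Tally` above can be distilled into a minimal, self-contained sketch. This is not the SDK implementation: it uses plain `float64` arithmetic instead of `LegacyDec`, and the parameter values in the usage example are illustrative defaults:

```go
package main

import "fmt"

// tally mirrors the decision order of the Tally function above:
// no-bonded-tokens check, quorum check, all-abstain check, veto check,
// then the Yes threshold over non-abstaining votes.
func tally(totalBonded, yes, abstain, no, noWithVeto, quorum, vetoThreshold, threshold float64) (passes bool) {
	totalVotingPower := yes + abstain + no + noWithVeto
	if totalBonded == 0 {
		return false // no staked coins: proposal fails
	}
	if totalVotingPower/totalBonded < quorum {
		return false // quorum not reached
	}
	if totalVotingPower-abstain == 0 {
		return false // everyone abstained
	}
	if noWithVeto/totalVotingPower > vetoThreshold {
		return false // vetoed
	}
	return yes/(totalVotingPower-abstain) > threshold
}

func main() {
	// 100 bonded; 60 voted: 40 Yes, 10 Abstain, 8 No, 2 NoWithVeto
	// quorum 33.4%, veto threshold 33.4%, pass threshold 50%
	fmt.Println(tally(100, 40, 10, 8, 2, 0.334, 0.334, 0.5)) // true: 40/50 > 0.5
}
```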
Developers can now build systems with:

* Quadratic voting
* Time-weighted voting
* Reputation-based voting

##### Example

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func myCustomVotingFunction(
	ctx context.Context,
	k Keeper,
	proposal v1.Proposal,
	validators map[string]v1.ValidatorGovInfo,
) (totalVoterPower math.LegacyDec, results map[v1.VoteOption]math.LegacyDec, err error) {
	// ... tally logic
}

govKeeper := govkeeper.NewKeeper(
	appCodec,
	runtime.NewKVStoreService(keys[govtypes.StoreKey]),
	app.AccountKeeper,
	app.BankKeeper,
	app.DistrKeeper, // optional: can be nil if the module address is not used as a cancellation fee destination
	app.MsgServiceRouter(),
	govConfig,
	authtypes.NewModuleAddress(govtypes.ModuleName).String(),
	myCustomVotingFunction, // required: CalculateVoteResultsAndVotingPowerFn
)
```

### Quorum

Quorum is defined as the minimum percentage of voting power that needs to be cast on a proposal for the result to be valid.

### Expedited Proposals

A proposal can be expedited, making it use a shorter voting duration and a higher tally threshold by default. If an expedited proposal fails to meet the threshold within the shorter voting duration, it is converted to a regular proposal and restarts voting under regular voting conditions.

#### Threshold

Threshold is defined as the minimum proportion of `Yes` votes (excluding `Abstain` votes) for the proposal to be accepted.

Initially, the threshold is set at 50% of `Yes` votes, excluding `Abstain` votes. A possibility to veto exists if more than 1/3rd of all votes are `NoWithVeto` votes. Note, both of these values are derived from the `TallyParams` on-chain parameter, which is modifiable by governance. This means that proposals are accepted iff:

* There exist bonded tokens.
* Quorum has been achieved.
* The proportion of `Abstain` votes is inferior to 1/1 (i.e., not all votes are `Abstain`).
* The proportion of `NoWithVeto` votes is inferior to 1/3, including `Abstain` votes.
* The proportion of `Yes` votes, excluding `Abstain` votes, at the end of the voting period is superior to 1/2.

For expedited proposals, by default, the threshold is higher than with a *normal proposal*, namely, 66.7%.

#### Inheritance

If a delegator does not vote, it will inherit its validator's vote.

* If the delegator votes before its validator, it will not inherit the validator's vote.
* If the delegator votes after its validator, it will override its validator's vote with its own.

If the proposal is urgent, it is possible that the vote will close before delegators have a chance to react and override their validator's vote. This is not a problem, as proposals require more than 2/3 of the total voting power to pass when tallied at the end of the voting period. Because as little as 1/3 + 1 validation power could collude to censor transactions, non-collusion is already assumed for ranges exceeding this threshold.

#### Validator's punishment for non-voting

At present, validators are not punished for failing to vote.

#### Governance address

Later, we may add permissioned keys that could only sign txs from certain modules. For the MVP, the `Governance address` will be the main validator address generated at account creation. This address corresponds to a different PrivKey than the CometBFT PrivKey, which is responsible for signing consensus messages. Validators thus do not have to sign governance transactions with the sensitive CometBFT PrivKey.

#### Burnable Params

There are three parameters that define whether the deposit of a proposal should be burned or returned to the depositors:

* `BurnVoteVeto` burns the proposal deposit if the proposal gets vetoed.
* `BurnVoteQuorum` burns the proposal deposit if the vote does not reach quorum.
* `BurnProposalDepositPrevote` burns the proposal deposit if it does not enter the voting phase.
> Note: These parameters are modifiable via governance.

## State

### Constitution

`Constitution` is found in the genesis state. It is a string field intended to be used to describe the purpose of a particular blockchain and its expected norms. A few examples of how the constitution field can be used:

* define the purpose of the chain, laying a foundation for its future development
* set expectations for delegators
* set expectations for validators
* define the chain's relationship to "meatspace" entities, like a foundation or corporation

Since this is more of a social feature than a technical feature, we'll now get into some items that may have been useful to have in a genesis constitution:

* What limitations on governance exist, if any?
  * Is it okay for the community to slash the wallet of a whale that they no longer feel they want around? (viz: Juno Proposals 4 and 16)
  * Can governance "socially slash" a validator who is using unapproved MEV? (viz: commonwealth.im/osmosis)
* In the event of an economic emergency, what should validators do?
  * The Terra crash of May 2022 saw validators choose to run a new binary with code that had not been approved by governance, because the governance token had been inflated to nothing.
* What is the purpose of the chain, specifically?
  * The best example of this is the Cosmos Hub, where different founding groups have different interpretations of the purpose of the network.

This genesis entry, "constitution", hasn't been designed for existing chains, which should likely just ratify a constitution using their governance system. Instead, this is for new chains. It will allow validators to have a much clearer idea of the chain's purpose and the expectations placed on them while operating their nodes. Likewise, for community members, the constitution will give them some idea of what to expect from both the "chain team" and the validators, respectively.
This constitution is designed to be immutable and placed only in genesis, though that could change over time via a pull request to the cosmos-sdk that allows the constitution to be changed by governance. Communities wishing to make amendments to their original constitution should use the governance mechanism and a "signaling proposal" to do exactly that.

**Ideal use scenario for a cosmos chain constitution**

As a chain developer, you decide that you'd like to provide clarity to your key user groups:

* validators
* token holders
* developers (yourself)

You use the constitution to immutably store some Markdown in genesis, so that when difficult questions come up, the constitution can provide guidance to the community.

### Proposals

`Proposal` objects are used to tally votes and generally track the proposal's state. They contain an array of arbitrary `sdk.Msg`s which the governance module will attempt to resolve and then execute if the proposal passes. `Proposal`s are identified by a unique id and contain a series of timestamps: `submit_time`, `deposit_end_time`, `voting_start_time`, and `voting_end_time`, which track the lifecycle of a proposal.

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/gov/v1/gov.proto#L51-L99
```

A proposal will generally require more than just a set of messages to explain its purpose; it needs some greater justification and a means for interested participants to discuss and debate the proposal. In most cases, **it is encouraged to have an off-chain system that supports the on-chain governance process**. To accommodate this, a proposal contains a special **`metadata`** field, a string, which can be used to add context to the proposal.
The `metadata` field allows custom use for networks; however, it is expected that the field contains a URL or some form of CID using a system such as [IPFS](https://docs.ipfs.io/concepts/content-addressing/). To support interoperability across networks, the SDK recommends that the `metadata` represents the following `JSON` template:

```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "title": "...",
  "description": "...",
  "forum": "...", // a link to the discussion platform (i.e. Discord)
  "other": "..." // any extra data that doesn't correspond to the other fields
}
```

This makes it far easier for clients to support multiple networks.

The metadata has a maximum length that is chosen by the app developer and passed into the gov keeper as a config. The default maximum length in the SDK is 255 characters.

#### Writing a module that uses governance

There are many aspects of a chain, or of individual modules, that you may want governance to control, such as changing various parameters. This is very simple to do. First, write out your message types and `MsgServer` implementation. Add an `authority` field to the keeper, which will be populated in the constructor with the governance module account: `govKeeper.GetGovernanceAccount().GetAddress()`. Then, for the methods in `msg_server.go`, check that the message's signer matches the `authority`. This prevents any account other than the governance module account from executing those messages.

### Parameters and base types

`Parameters` define the rules according to which votes are run. There can only be one active parameter set at any given time. If governance wants to change a parameter set, either to modify a value or add/remove a parameter field, a new parameter set has to be created and the previous one rendered inactive.
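The authority check described under "Writing a module that uses governance" above can be sketched as follows. This is a minimal, self-contained example: `MsgUpdateParams` and `Keeper` here are hypothetical simplifications, whereas a real module would use its generated message type and set the authority in the keeper constructor:

```go
package main

import (
	"errors"
	"fmt"
)

// MsgUpdateParams is a hypothetical governance-gated message;
// Authority is expected to be the governance module account address.
type MsgUpdateParams struct {
	Authority string
	NewValue  uint64
}

// Keeper holds the authority set in the constructor, e.g.
// authtypes.NewModuleAddress(govtypes.ModuleName).String() in a real app.
type Keeper struct {
	authority string
	value     uint64
}

// UpdateParams rejects any signer other than the configured authority,
// so only a passed governance proposal can execute it.
func (k *Keeper) UpdateParams(msg MsgUpdateParams) error {
	if msg.Authority != k.authority {
		return errors.New("invalid authority: expected " + k.authority)
	}
	k.value = msg.NewValue // apply the new parameter
	return nil
}

func main() {
	k := &Keeper{authority: "cosmos10d07y265gmmuvt4z0w9aw880jnsr700j6zn9kn"}
	fmt.Println(k.UpdateParams(MsgUpdateParams{Authority: k.authority, NewValue: 42}))
	fmt.Println(k.UpdateParams(MsgUpdateParams{Authority: "cosmos1someotheraccount", NewValue: 7}) != nil)
}
```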
#### DepositParams

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/gov/v1/gov.proto#L152-L162
```

#### VotingParams

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/gov/v1/gov.proto#L164-L168
```

#### TallyParams

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/gov/v1/gov.proto#L170-L182
```

Parameters are stored in a global `GlobalParams` KVStore.

Additionally, we introduce some basic types:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type Vote byte

const (
	VoteYes        = 0x1
	VoteNo         = 0x2
	VoteNoWithVeto = 0x3
	VoteAbstain    = 0x4
)

type ProposalType string

const (
	ProposalTypePlainText       = "Text"
	ProposalTypeSoftwareUpgrade = "SoftwareUpgrade"
)

type ProposalStatus byte

const (
	StatusNil           ProposalStatus = 0x00
	StatusDepositPeriod ProposalStatus = 0x01 // Proposal is submitted.
	// Participants can deposit on it but not vote
	StatusVotingPeriod ProposalStatus = 0x02 // MinDeposit is reached, participants can vote
	StatusPassed       ProposalStatus = 0x03 // Proposal passed and successfully executed
	StatusRejected     ProposalStatus = 0x04 // Proposal has been rejected
	StatusFailed       ProposalStatus = 0x05 // Proposal passed but failed execution
)
```

### Deposit

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/gov/v1/gov.proto#L38-L49
```

### ValidatorGovInfo

This type is used in a temporary map when tallying:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type ValidatorGovInfo struct {
	Minus sdk.Dec
	Vote  Vote
}
```

## Stores

Stores are KVStores in the multi-store. The key to find the store is the first parameter in the list.

We will use one KVStore `Governance` to store four mappings:

* A mapping from `proposalID|'proposal'` to `Proposal`.
* A mapping from `proposalID|'addresses'|address` to `Vote`. This mapping allows us to query all addresses that voted on the proposal, along with their vote, by doing a range query on `proposalID:addresses`.
* A mapping from `ParamsKey|'Params'` to `Params`. This map allows querying all x/gov params.
* A mapping from `VotingPeriodProposalKeyPrefix|proposalID` to a single byte. This allows us to know whether a proposal is in the voting period or not with very low gas cost.

For pseudocode purposes, here are the two functions we will use to read or write in stores:

* `load(StoreKey, Key)`: Retrieve the item stored at key `Key` in the store found at key `StoreKey` in the multistore
* `store(StoreKey, Key, value)`: Write value `Value` at key `Key` in the store found at key `StoreKey` in the multistore

### Proposal Processing Queue

**Store:**

* `ProposalProcessingQueue`: A queue `queue[proposalID]` containing all the `ProposalIDs` of proposals that reached `MinDeposit`.
During each `EndBlock`, all the proposals that have reached the end of their voting period are processed. To process a finished proposal, the application tallies the votes, computes the votes of each validator and checks if every validator in the validator set has voted. If the proposal is accepted, deposits are refunded. Finally, the proposal content `Handler` is executed.

And the pseudocode for the `ProposalProcessingQueue`:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
in EndBlock do

	for finishedProposalID in GetAllFinishedProposalIDs(block.Time)
		proposal = load(Governance, <proposalID|'proposal'>) // proposal is a const key

		validators = Keeper.getAllValidators()
		tmpValMap := map(sdk.AccAddress)ValidatorGovInfo

		// Initiate mapping at 0. This is the amount of shares of the validator's vote that will be overridden by their delegator's votes
		for each validator in validators
			tmpValMap(validator.OperatorAddr).Minus = 0

		// Tally
		voterIterator = rangeQuery(Governance, <proposalID|'addresses'>) // return all the addresses that voted on the proposal
		for each (voterAddress, vote) in voterIterator
			delegations = stakingKeeper.getDelegations(voterAddress) // get all delegations for current voter

			for each delegation in delegations
				// make sure delegation.Shares does NOT include shares being unbonded
				tmpValMap(delegation.ValidatorAddr).Minus += delegation.Shares
				proposal.updateTally(vote, delegation.Shares)

			_, isVal = stakingKeeper.getValidator(voterAddress)
			if (isVal)
				tmpValMap(voterAddress).Vote = vote

		tallyingParam = load(GlobalParams, 'TallyingParam')

		// Update tally if validator voted
		for each validator in validators
			if tmpValMap(validator).HasVoted
				proposal.updateTally(tmpValMap(validator).Vote, (validator.TotalShares - tmpValMap(validator).Minus))

		// Check if proposal is accepted or rejected
		totalNonAbstain := proposal.YesVotes + proposal.NoVotes + proposal.NoWithVetoVotes
		if (proposal.Votes.YesVotes/totalNonAbstain > tallyingParam.Threshold AND
			proposal.Votes.NoWithVetoVotes/totalNonAbstain < tallyingParam.Veto)
			// proposal was accepted at the end of the voting period
			// refund deposits (non-voters already punished)
			for each (amount, depositor) in proposal.Deposits
				depositor.AtomBalance += amount

			stateWriter, err := proposal.Handler()
			if err != nil
				// proposal passed but failed during state execution
				proposal.CurrentStatus = ProposalStatusFailed
			else
				// proposal passed and state is persisted
				proposal.CurrentStatus = ProposalStatusAccepted
				stateWriter.save()
		else
			// proposal was rejected
			proposal.CurrentStatus = ProposalStatusRejected

		store(Governance, <proposalID|'proposal'>, proposal)
```

### Legacy Proposal

Legacy proposals are deprecated. Use the new proposal flow by granting the governance module the right to execute the message.

A legacy proposal is the old implementation of a governance proposal. Contrary to proposals, which can contain any messages, a legacy proposal allows submitting only a set of pre-defined proposals. These proposals are defined by their types and handled by handlers that are registered in the gov v1beta1 router.

More information on how to submit proposals is in the [client section](#client).

## Messages

### Proposal Submission

Proposals can be submitted by any account via a `MsgSubmitProposal` transaction.

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/gov/v1/tx.proto#L42-L69
```

All `sdk.Msg`s passed into the `messages` field of a `MsgSubmitProposal` message must be registered in the app's `MsgServiceRouter`. Each of these messages must have one signer, namely the gov module account. Finally, the metadata length must not be larger than the `maxMetadataLen` config passed into the gov keeper. The `initialDeposit` must be strictly positive and conform to the accepted denom of the `MinDeposit` param.
**State modifications:**

* Generate new `proposalID`
* Create new `Proposal`
* Initialize `Proposal`'s attributes
* Decrease balance of sender by `InitialDeposit`
* If `MinDeposit` is reached:
  * Push `proposalID` in `ProposalProcessingQueue`
* Transfer `InitialDeposit` from the `Proposer` to the governance `ModuleAccount`

### Deposit

Once a proposal is submitted, if `Proposal.TotalDeposit < ActiveParam.MinDeposit`, Atom holders can send `MsgDeposit` transactions to increase the proposal's deposit.

A deposit is accepted iff:

* The proposal exists
* The proposal is not in the voting period
* The deposited coins conform to the accepted denom from the `MinDeposit` param

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/gov/v1/tx.proto#L134-L147
```

**State modifications:**

* Decrease balance of sender by `deposit`
* Add `deposit` of sender in `proposal.Deposits`
* Increase `proposal.TotalDeposit` by sender's `deposit`
* If `MinDeposit` is reached:
  * Push `proposalID` in `ProposalProcessingQueue`
* Transfer `Deposit` from the depositor to the governance `ModuleAccount`

### Vote

Once `ActiveParam.MinDeposit` is reached, the voting period starts. From there, bonded Atom holders are able to send `MsgVote` transactions to cast their vote on the proposal.

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/gov/v1/tx.proto#L92-L108
```

**State modifications:**

* Record `Vote` of sender

The gas cost for this message has to take into account the future tallying of the vote in `EndBlocker`.
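Votes recorded here are only tallied in `EndBlocker` once the voting period ends. As a rough illustration (not the SDK's actual implementation, which uses arbitrary-precision decimals rather than `float64`), the accept/reject rule from the tally pseudocode above can be sketched in Go:

```go
package main

import "fmt"

// tallyOutcome applies the acceptance rule used at the end of the voting
// period: a proposal is accepted when Yes/(Yes+No+NoWithVeto) exceeds the
// threshold AND NoWithVeto/(Yes+No+NoWithVeto) stays below the veto
// threshold; otherwise it is rejected. Abstain votes count toward quorum
// but are excluded from this ratio.
func tallyOutcome(yes, no, noWithVeto, threshold, veto float64) string {
	totalNonAbstain := yes + no + noWithVeto
	if totalNonAbstain == 0 {
		return "rejected"
	}
	if yes/totalNonAbstain > threshold && noWithVeto/totalNonAbstain < veto {
		return "accepted"
	}
	return "rejected"
}

func main() {
	// 60% yes, 10% veto: passes the 0.5 threshold, stays under the 0.334 veto.
	fmt.Println(tallyOutcome(60, 30, 10, 0.5, 0.334))
	// 40% yes: fails the threshold regardless of the veto ratio.
	fmt.Println(tallyOutcome(40, 20, 40, 0.5, 0.334))
}
```

This mirrors the `totalNonAbstain` check in the `ProposalProcessingQueue` pseudocode; the production code additionally enforces the quorum parameter before applying this rule.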
## Events

The governance module emits the following events:

### EndBlocker

| Type               | Attribute Key    | Attribute Value    |
| ------------------ | ---------------- | ------------------ |
| inactive\_proposal | proposal\_id     | `{proposalID}`     |
| inactive\_proposal | proposal\_result | `{proposalResult}` |
| active\_proposal   | proposal\_id     | `{proposalID}`     |
| active\_proposal   | proposal\_result | `{proposalResult}` |

### Handlers

#### MsgSubmitProposal

| Type                  | Attribute Key         | Attribute Value   |
| --------------------- | --------------------- | ----------------- |
| submit\_proposal      | proposal\_id          | `{proposalID}`    |
| submit\_proposal \[0] | voting\_period\_start | `{proposalID}`    |
| proposal\_deposit     | amount                | `{depositAmount}` |
| proposal\_deposit     | proposal\_id          | `{proposalID}`    |
| message               | module                | governance        |
| message               | action                | submit\_proposal  |
| message               | sender                | `{senderAddress}` |

* \[0] Event only emitted if the voting period starts during the submission.

#### MsgVote

| Type           | Attribute Key | Attribute Value   |
| -------------- | ------------- | ----------------- |
| proposal\_vote | option        | `{voteOption}`    |
| proposal\_vote | proposal\_id  | `{proposalID}`    |
| message        | module        | governance        |
| message        | action        | vote              |
| message        | sender        | `{senderAddress}` |

#### MsgVoteWeighted

| Type           | Attribute Key | Attribute Value         |
| -------------- | ------------- | ----------------------- |
| proposal\_vote | option        | `{weightedVoteOptions}` |
| proposal\_vote | proposal\_id  | `{proposalID}`          |
| message        | module        | governance              |
| message        | action        | vote                    |
| message        | sender        | `{senderAddress}`       |

#### MsgDeposit

| Type                   | Attribute Key         | Attribute Value   |
| ---------------------- | --------------------- | ----------------- |
| proposal\_deposit      | amount                | `{depositAmount}` |
| proposal\_deposit      | proposal\_id          | `{proposalID}`    |
| proposal\_deposit \[0] | voting\_period\_start | `{proposalID}`    |
| message                | module                | governance        |
| message                | action                | deposit           |
| message                | sender                | `{senderAddress}` |

* \[0] Event only emitted if the voting period starts during the submission.

## Hooks

The governance module exposes a `GovHooks` interface that allows other modules to react to governance events.

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type GovHooks interface {
    AfterProposalSubmission(ctx context.Context, proposalID uint64, proposerAddr sdk.AccAddress) error
    AfterProposalDeposit(ctx context.Context, proposalID uint64, depositorAddr sdk.AccAddress) error
    AfterProposalVote(ctx context.Context, proposalID uint64, voterAddr sdk.AccAddress) error
    AfterProposalFailedMinDeposit(ctx context.Context, proposalID uint64) error
    AfterProposalVotingPeriodEnded(ctx context.Context, proposalID uint64) error
}
```

### AfterProposalSubmission

Called after a proposal is submitted. The hook receives the proposal ID and the proposer's address.

**Note:** The `proposerAddr` parameter was added in a recent release. If you are implementing `GovHooks`, you must update your `AfterProposalSubmission` method signature to include `proposerAddr sdk.AccAddress` as a third parameter.

**Before:**

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (h MyGovHooks) AfterProposalSubmission(ctx context.Context, proposalID uint64) error {
    // implementation
}
```

**After:**

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (h MyGovHooks) AfterProposalSubmission(ctx context.Context, proposalID uint64, proposerAddr sdk.AccAddress) error {
    // implementation
}
```

### AfterProposalDeposit

Called after a deposit is made on a proposal.

### AfterProposalVote

Called after a vote is cast on a proposal.

### AfterProposalFailedMinDeposit

Called when a proposal fails to reach the minimum deposit within the deposit period.

### AfterProposalVotingPeriodEnded

Called when a proposal's voting period ends.
## Parameters

The governance module contains the following parameters:

| Key                              | Type             | Example                                    |
| -------------------------------- | ---------------- | ------------------------------------------ |
| min\_deposit                     | array (coins)    | \[`{"denom":"uatom","amount":"10000000"}`] |
| max\_deposit\_period             | string (time ns) | "172800000000000" (172800s)                |
| voting\_period                   | string (time ns) | "172800000000000" (172800s)                |
| quorum                           | string (dec)     | "0.334000000000000000"                     |
| threshold                        | string (dec)     | "0.500000000000000000"                     |
| veto                             | string (dec)     | "0.334000000000000000"                     |
| expedited\_threshold             | string (dec)     | "0.667000000000000000"                     |
| expedited\_voting\_period        | string (time ns) | "86400000000000" (86400s)                  |
| expedited\_min\_deposit          | array (coins)    | \[`{"denom":"uatom","amount":"50000000"}`] |
| burn\_proposal\_deposit\_prevote | bool             | false                                      |
| burn\_vote\_quorum               | bool             | false                                      |
| burn\_vote\_veto                 | bool             | true                                       |
| min\_initial\_deposit\_ratio     | string           | "0.1"                                      |

**NOTE**: Unlike other modules, the governance module's parameters are structured objects. If only a subset of parameters needs to be changed, only those fields need to be included, not the entire parameter object structure.

## Client

### CLI

A user can query and interact with the `gov` module using the CLI.

#### Query

The `query` commands allow users to query `gov` state.

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd query gov --help
```

##### deposit

The `deposit` command allows users to query a deposit for a given proposal from a given depositor.

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd query gov deposit [proposal-id] [depositer-addr] [flags]
```

Example:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd query gov deposit 1 cosmos1..
``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} amount: - amount: "100" denom: stake depositor: cosmos1.. proposal_id: "1" ``` ##### deposits The `deposits` command allows users to query all deposits for a given proposal. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query gov deposits [proposal-id] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query gov deposits 1 ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} deposits: - amount: - amount: "100" denom: stake depositor: cosmos1.. proposal_id: "1" pagination: next_key: null total: "0" ``` ##### param The `param` command allows users to query a given parameter for the `gov` module. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query gov param [param-type] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query gov param voting ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} voting_period: "172800000000000" ``` ##### params The `params` command allows users to query all parameters for the `gov` module. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query gov params [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query gov params ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} deposit_params: max_deposit_period: 172800s min_deposit: - amount: "10000000" denom: stake params: expedited_min_deposit: - amount: "50000000" denom: stake expedited_threshold: "0.670000000000000000" expedited_voting_period: 86400s max_deposit_period: 172800s min_deposit: - amount: "10000000" denom: stake min_initial_deposit_ratio: "0.000000000000000000" proposal_cancel_burn_rate: "0.500000000000000000" quorum: "0.334000000000000000" threshold: "0.500000000000000000" veto_threshold: "0.334000000000000000" voting_period: 172800s tally_params: quorum: "0.334000000000000000" threshold: "0.500000000000000000" veto_threshold: "0.334000000000000000" voting_params: voting_period: 172800s ``` ##### proposal The `proposal` command allows users to query a given proposal. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query gov proposal [proposal-id] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query gov proposal 1 ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} deposit_end_time: "2022-03-30T11:50:20.819676256Z" final_tally_result: abstain_count: "0" no_count: "0" no_with_veto_count: "0" yes_count: "0" id: "1" messages: - '@type': /cosmos.bank.v1beta1.MsgSend amount: - amount: "10" denom: stake from_address: cosmos1.. to_address: cosmos1.. 
metadata: AQ== status: PROPOSAL_STATUS_DEPOSIT_PERIOD submit_time: "2022-03-28T11:50:20.819676256Z" total_deposit: - amount: "10" denom: stake voting_end_time: null voting_start_time: null ``` ##### proposals The `proposals` command allows users to query all proposals with optional filters. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query gov proposals [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query gov proposals ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} pagination: next_key: null total: "0" proposals: - deposit_end_time: "2022-03-30T11:50:20.819676256Z" final_tally_result: abstain_count: "0" no_count: "0" no_with_veto_count: "0" yes_count: "0" id: "1" messages: - '@type': /cosmos.bank.v1beta1.MsgSend amount: - amount: "10" denom: stake from_address: cosmos1.. to_address: cosmos1.. metadata: AQ== status: PROPOSAL_STATUS_DEPOSIT_PERIOD submit_time: "2022-03-28T11:50:20.819676256Z" total_deposit: - amount: "10" denom: stake voting_end_time: null voting_start_time: null - deposit_end_time: "2022-03-30T14:02:41.165025015Z" final_tally_result: abstain_count: "0" no_count: "0" no_with_veto_count: "0" yes_count: "0" id: "2" messages: - '@type': /cosmos.bank.v1beta1.MsgSend amount: - amount: "10" denom: stake from_address: cosmos1.. to_address: cosmos1.. metadata: AQ== status: PROPOSAL_STATUS_DEPOSIT_PERIOD submit_time: "2022-03-28T14:02:41.165025015Z" total_deposit: - amount: "10" denom: stake voting_end_time: null voting_start_time: null ``` ##### proposer The `proposer` command allows users to query the proposer for a given proposal. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query gov proposer [proposal-id] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query gov proposer 1 ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} proposal_id: "1" proposer: cosmos1.. ``` ##### tally The `tally` command allows users to query the tally of a given proposal vote. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query gov tally [proposal-id] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query gov tally 1 ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} abstain: "0" "no": "0" no_with_veto: "0" "yes": "1" ``` ##### vote The `vote` command allows users to query a vote for a given proposal. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query gov vote [proposal-id] [voter-addr] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query gov vote 1 cosmos1.. ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} option: VOTE_OPTION_YES options: - option: VOTE_OPTION_YES weight: "1.000000000000000000" proposal_id: "1" voter: cosmos1.. ``` ##### votes The `votes` command allows users to query all votes for a given proposal. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query gov votes [proposal-id] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query gov votes 1 ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} pagination: next_key: null total: "0" votes: - option: VOTE_OPTION_YES options: - option: VOTE_OPTION_YES weight: "1.000000000000000000" proposal_id: "1" voter: cosmos1.. ``` #### Transactions The `tx` commands allow users to interact with the `gov` module. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx gov --help ``` ##### deposit The `deposit` command allows users to deposit tokens for a given proposal. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx gov deposit [proposal-id] [deposit] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx gov deposit 1 10000000stake --from cosmos1.. ``` ##### draft-proposal The `draft-proposal` command allows users to draft any type of proposal. The command returns a `draft_proposal.json`, to be used by `submit-proposal` after being completed. The `draft_metadata.json` is meant to be uploaded to [IPFS](#metadata). ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx gov draft-proposal ``` ##### submit-proposal The `submit-proposal` command allows users to submit a governance proposal along with some messages and metadata. Messages, metadata and deposit are defined in a JSON file. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx gov submit-proposal [path-to-proposal-json] [flags]
```

Example:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx gov submit-proposal /path/to/proposal.json --from cosmos1..
```

where `proposal.json` contains:

```json expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "messages": [
    {
      "@type": "/cosmos.bank.v1beta1.MsgSend",
      "from_address": "cosmos1...", // The gov module address
      "to_address": "cosmos1...",
      "amount": [{ "denom": "stake", "amount": "10" }]
    }
  ],
  "metadata": "AQ==",
  "deposit": "10stake",
  "title": "Proposal Title",
  "summary": "Proposal Summary"
}
```

By default the metadata, summary and title are each limited to 255 characters; this can be overridden by the application developer. When metadata is not specified, the title is limited to 255 characters and the summary to 40 times the title length.

##### submit-legacy-proposal

The `submit-legacy-proposal` command allows users to submit a legacy governance proposal along with an initial deposit.

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx gov submit-legacy-proposal [command] [flags]
```

Example:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx gov submit-legacy-proposal --title="Test Proposal" --description="testing" --type="Text" --deposit="100000000stake" --from cosmos1..
```

Example (`param-change`):

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx gov submit-legacy-proposal param-change proposal.json --from cosmos1..
```

```json expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "title": "Test Proposal",
  "description": "testing, testing, 1, 2, 3",
  "changes": [
    {
      "subspace": "staking",
      "key": "MaxValidators",
      "value": 100
    }
  ],
  "deposit": "10000000stake"
}
```

##### cancel-proposal

When a proposal is canceled, `deposits * proposal_cancel_ratio` of the proposal's deposits is burned or sent to the `ProposalCancelDest` address; if `ProposalCancelDest` is empty, that portion is burned. The remaining deposits are returned to the depositors.

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx gov cancel-proposal [proposal-id] [flags]
```

Example:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx gov cancel-proposal 1 --from cosmos1...
```

##### vote

The `vote` command allows users to submit a vote for a given governance proposal.

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx gov vote [command] [flags]
```

Example:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx gov vote 1 yes --from cosmos1..
```

##### weighted-vote

The `weighted-vote` command allows users to submit a weighted vote for a given governance proposal.

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx gov weighted-vote [proposal-id] [weighted-options] [flags]
```

Example:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx gov weighted-vote 1 yes=0.5,no=0.5 --from cosmos1..
```

### gRPC

A user can query the `gov` module using gRPC endpoints.

#### Proposal

The `Proposal` endpoint allows users to query a given proposal.
Using legacy v1beta1: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.gov.v1beta1.Query/Proposal ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"proposal_id":"1"}' \ localhost:9090 \ cosmos.gov.v1beta1.Query/Proposal ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "proposal": { "proposalId": "1", "content": {"@type":"/cosmos.gov.v1beta1.TextProposal","description":"testing, testing, 1, 2, 3","title":"Test Proposal"}, "status": "PROPOSAL_STATUS_VOTING_PERIOD", "finalTallyResult": { "yes": "0", "abstain": "0", "no": "0", "noWithVeto": "0" }, "submitTime": "2021-09-16T19:40:08.712440474Z", "depositEndTime": "2021-09-18T19:40:08.712440474Z", "totalDeposit": [ { "denom": "stake", "amount": "10000000" } ], "votingStartTime": "2021-09-16T19:40:08.712440474Z", "votingEndTime": "2021-09-18T19:40:08.712440474Z", "title": "Test Proposal", "summary": "testing, testing, 1, 2, 3" } } ``` Using v1: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.gov.v1.Query/Proposal ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"proposal_id":"1"}' \ localhost:9090 \ cosmos.gov.v1.Query/Proposal ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "proposal": { "id": "1", "messages": [ {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"10"}],"fromAddress":"cosmos1..","toAddress":"cosmos1.."} ], "status": "PROPOSAL_STATUS_VOTING_PERIOD", "finalTallyResult": { "yesCount": "0", "abstainCount": "0", "noCount": "0", "noWithVetoCount": "0" }, "submitTime": "2022-03-28T11:50:20.819676256Z", "depositEndTime": 
"2022-03-30T11:50:20.819676256Z", "totalDeposit": [ { "denom": "stake", "amount": "10000000" } ], "votingStartTime": "2022-03-28T14:25:26.644857113Z", "votingEndTime": "2022-03-30T14:25:26.644857113Z", "metadata": "AQ==", "title": "Test Proposal", "summary": "testing, testing, 1, 2, 3" } } ``` #### Proposals The `Proposals` endpoint allows users to query all proposals with optional filters. Using legacy v1beta1: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.gov.v1beta1.Query/Proposals ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ localhost:9090 \ cosmos.gov.v1beta1.Query/Proposals ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "proposals": [ { "proposalId": "1", "status": "PROPOSAL_STATUS_VOTING_PERIOD", "finalTallyResult": { "yes": "0", "abstain": "0", "no": "0", "noWithVeto": "0" }, "submitTime": "2022-03-28T11:50:20.819676256Z", "depositEndTime": "2022-03-30T11:50:20.819676256Z", "totalDeposit": [ { "denom": "stake", "amount": "10000000010" } ], "votingStartTime": "2022-03-28T14:25:26.644857113Z", "votingEndTime": "2022-03-30T14:25:26.644857113Z" }, { "proposalId": "2", "status": "PROPOSAL_STATUS_DEPOSIT_PERIOD", "finalTallyResult": { "yes": "0", "abstain": "0", "no": "0", "noWithVeto": "0" }, "submitTime": "2022-03-28T14:02:41.165025015Z", "depositEndTime": "2022-03-30T14:02:41.165025015Z", "totalDeposit": [ { "denom": "stake", "amount": "10" } ], "votingStartTime": "0001-01-01T00:00:00Z", "votingEndTime": "0001-01-01T00:00:00Z" } ], "pagination": { "total": "2" } } ``` Using v1: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.gov.v1.Query/Proposals ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ 
localhost:9090 \ cosmos.gov.v1.Query/Proposals ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "proposals": [ { "id": "1", "messages": [ {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"10"}],"fromAddress":"cosmos1..","toAddress":"cosmos1.."} ], "status": "PROPOSAL_STATUS_VOTING_PERIOD", "finalTallyResult": { "yesCount": "0", "abstainCount": "0", "noCount": "0", "noWithVetoCount": "0" }, "submitTime": "2022-03-28T11:50:20.819676256Z", "depositEndTime": "2022-03-30T11:50:20.819676256Z", "totalDeposit": [ { "denom": "stake", "amount": "10000000010" } ], "votingStartTime": "2022-03-28T14:25:26.644857113Z", "votingEndTime": "2022-03-30T14:25:26.644857113Z", "metadata": "AQ==", "title": "Proposal Title", "summary": "Proposal Summary" }, { "id": "2", "messages": [ {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"10"}],"fromAddress":"cosmos1..","toAddress":"cosmos1.."} ], "status": "PROPOSAL_STATUS_DEPOSIT_PERIOD", "finalTallyResult": { "yesCount": "0", "abstainCount": "0", "noCount": "0", "noWithVetoCount": "0" }, "submitTime": "2022-03-28T14:02:41.165025015Z", "depositEndTime": "2022-03-30T14:02:41.165025015Z", "totalDeposit": [ { "denom": "stake", "amount": "10" } ], "metadata": "AQ==", "title": "Proposal Title", "summary": "Proposal Summary" } ], "pagination": { "total": "2" } } ``` #### Vote The `Vote` endpoint allows users to query a vote for a given proposal. 
Using legacy v1beta1: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.gov.v1beta1.Query/Vote ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"proposal_id":"1","voter":"cosmos1.."}' \ localhost:9090 \ cosmos.gov.v1beta1.Query/Vote ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "vote": { "proposalId": "1", "voter": "cosmos1..", "option": "VOTE_OPTION_YES", "options": [ { "option": "VOTE_OPTION_YES", "weight": "1000000000000000000" } ] } } ``` Using v1: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.gov.v1.Query/Vote ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"proposal_id":"1","voter":"cosmos1.."}' \ localhost:9090 \ cosmos.gov.v1.Query/Vote ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "vote": { "proposalId": "1", "voter": "cosmos1..", "option": "VOTE_OPTION_YES", "options": [ { "option": "VOTE_OPTION_YES", "weight": "1.000000000000000000" } ] } } ``` #### Votes The `Votes` endpoint allows users to query all votes for a given proposal. 
Using legacy v1beta1: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.gov.v1beta1.Query/Votes ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"proposal_id":"1"}' \ localhost:9090 \ cosmos.gov.v1beta1.Query/Votes ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "votes": [ { "proposalId": "1", "voter": "cosmos1..", "options": [ { "option": "VOTE_OPTION_YES", "weight": "1000000000000000000" } ] } ], "pagination": { "total": "1" } } ``` Using v1: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.gov.v1.Query/Votes ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"proposal_id":"1"}' \ localhost:9090 \ cosmos.gov.v1.Query/Votes ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "votes": [ { "proposalId": "1", "voter": "cosmos1..", "options": [ { "option": "VOTE_OPTION_YES", "weight": "1.000000000000000000" } ] } ], "pagination": { "total": "1" } } ``` #### Params The `Params` endpoint allows users to query all parameters for the `gov` module. 
Using legacy v1beta1: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.gov.v1beta1.Query/Params ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"params_type":"voting"}' \ localhost:9090 \ cosmos.gov.v1beta1.Query/Params ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "votingParams": { "votingPeriod": "172800s" }, "depositParams": { "maxDepositPeriod": "0s" }, "tallyParams": { "quorum": "MA==", "threshold": "MA==", "vetoThreshold": "MA==" } } ``` Using v1: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.gov.v1.Query/Params ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"params_type":"voting"}' \ localhost:9090 \ cosmos.gov.v1.Query/Params ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "votingParams": { "votingPeriod": "172800s" } } ``` #### Deposit The `Deposit` endpoint allows users to query a deposit for a given proposal from a given depositor. 
Using legacy v1beta1:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
cosmos.gov.v1beta1.Query/Deposit
```

Example:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
grpcurl -plaintext \
    -d '{"proposal_id":"1","depositor":"cosmos1.."}' \
    localhost:9090 \
    cosmos.gov.v1beta1.Query/Deposit
```

Example Output:

```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "deposit": {
    "proposalId": "1",
    "depositor": "cosmos1..",
    "amount": [
      {
        "denom": "stake",
        "amount": "10000000"
      }
    ]
  }
}
```

Using v1:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
cosmos.gov.v1.Query/Deposit
```

Example:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
grpcurl -plaintext \
    -d '{"proposal_id":"1","depositor":"cosmos1.."}' \
    localhost:9090 \
    cosmos.gov.v1.Query/Deposit
```

Example Output:

```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "deposit": {
    "proposalId": "1",
    "depositor": "cosmos1..",
    "amount": [
      {
        "denom": "stake",
        "amount": "10000000"
      }
    ]
  }
}
```

#### Deposits

The `Deposits` endpoint allows users to query all deposits for a given proposal.
Using legacy v1beta1: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.gov.v1beta1.Query/Deposits ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"proposal_id":"1"}' \ localhost:9090 \ cosmos.gov.v1beta1.Query/Deposits ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "deposits": [ { "proposalId": "1", "depositor": "cosmos1..", "amount": [ { "denom": "stake", "amount": "10000000" } ] } ], "pagination": { "total": "1" } } ``` Using v1: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.gov.v1.Query/Deposits ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"proposal_id":"1"}' \ localhost:9090 \ cosmos.gov.v1.Query/Deposits ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "deposits": [ { "proposalId": "1", "depositor": "cosmos1..", "amount": [ { "denom": "stake", "amount": "10000000" } ] } ], "pagination": { "total": "1" } } ``` #### TallyResult The `TallyResult` endpoint allows users to query the tally of a given proposal. 
Using legacy v1beta1: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.gov.v1beta1.Query/TallyResult ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"proposal_id":"1"}' \ localhost:9090 \ cosmos.gov.v1beta1.Query/TallyResult ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "tally": { "yes": "1000000", "abstain": "0", "no": "0", "noWithVeto": "0" } } ``` Using v1: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.gov.v1.Query/TallyResult ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"proposal_id":"1"}' \ localhost:9090 \ cosmos.gov.v1.Query/TallyResult ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "tally": { "yes": "1000000", "abstain": "0", "no": "0", "noWithVeto": "0" } } ``` ### REST A user can query the `gov` module using REST endpoints. #### proposal The `proposals` endpoint allows users to query a given proposal. 
Using legacy v1beta1: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/gov/v1beta1/proposals/{proposal_id} ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl localhost:1317/cosmos/gov/v1beta1/proposals/1 ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "proposal": { "proposal_id": "1", "content": null, "status": "PROPOSAL_STATUS_VOTING_PERIOD", "final_tally_result": { "yes": "0", "abstain": "0", "no": "0", "no_with_veto": "0" }, "submit_time": "2022-03-28T11:50:20.819676256Z", "deposit_end_time": "2022-03-30T11:50:20.819676256Z", "total_deposit": [ { "denom": "stake", "amount": "10000000010" } ], "voting_start_time": "2022-03-28T14:25:26.644857113Z", "voting_end_time": "2022-03-30T14:25:26.644857113Z" } } ``` Using v1: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/gov/v1/proposals/{proposal_id} ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl localhost:1317/cosmos/gov/v1/proposals/1 ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "proposal": { "id": "1", "messages": [ { "@type": "/cosmos.bank.v1beta1.MsgSend", "from_address": "cosmos1..", "to_address": "cosmos1..", "amount": [ { "denom": "stake", "amount": "10" } ] } ], "status": "PROPOSAL_STATUS_VOTING_PERIOD", "final_tally_result": { "yes_count": "0", "abstain_count": "0", "no_count": "0", "no_with_veto_count": "0" }, "submit_time": "2022-03-28T11:50:20.819676256Z", "deposit_end_time": "2022-03-30T11:50:20.819676256Z", "total_deposit": [ { "denom": "stake", "amount": "10000000" } ], "voting_start_time": "2022-03-28T14:25:26.644857113Z", "voting_end_time": "2022-03-30T14:25:26.644857113Z", "metadata": 
"AQ==", "title": "Proposal Title", "summary": "Proposal Summary" } } ``` #### proposals The `proposals` endpoint also allows users to query all proposals with optional filters. Using legacy v1beta1: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/gov/v1beta1/proposals ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl localhost:1317/cosmos/gov/v1beta1/proposals ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "proposals": [ { "proposal_id": "1", "content": null, "status": "PROPOSAL_STATUS_VOTING_PERIOD", "final_tally_result": { "yes": "0", "abstain": "0", "no": "0", "no_with_veto": "0" }, "submit_time": "2022-03-28T11:50:20.819676256Z", "deposit_end_time": "2022-03-30T11:50:20.819676256Z", "total_deposit": [ { "denom": "stake", "amount": "10000000" } ], "voting_start_time": "2022-03-28T14:25:26.644857113Z", "voting_end_time": "2022-03-30T14:25:26.644857113Z" }, { "proposal_id": "2", "content": null, "status": "PROPOSAL_STATUS_DEPOSIT_PERIOD", "final_tally_result": { "yes": "0", "abstain": "0", "no": "0", "no_with_veto": "0" }, "submit_time": "2022-03-28T14:02:41.165025015Z", "deposit_end_time": "2022-03-30T14:02:41.165025015Z", "total_deposit": [ { "denom": "stake", "amount": "10" } ], "voting_start_time": "0001-01-01T00:00:00Z", "voting_end_time": "0001-01-01T00:00:00Z" } ], "pagination": { "next_key": null, "total": "2" } } ``` Using v1: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/gov/v1/proposals ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl localhost:1317/cosmos/gov/v1/proposals ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "proposals": [ { "id": 
"1", "messages": [ { "@type": "/cosmos.bank.v1beta1.MsgSend", "from_address": "cosmos1..", "to_address": "cosmos1..", "amount": [ { "denom": "stake", "amount": "10" } ] } ], "status": "PROPOSAL_STATUS_VOTING_PERIOD", "final_tally_result": { "yes_count": "0", "abstain_count": "0", "no_count": "0", "no_with_veto_count": "0" }, "submit_time": "2022-03-28T11:50:20.819676256Z", "deposit_end_time": "2022-03-30T11:50:20.819676256Z", "total_deposit": [ { "denom": "stake", "amount": "10000000010" } ], "voting_start_time": "2022-03-28T14:25:26.644857113Z", "voting_end_time": "2022-03-30T14:25:26.644857113Z", "metadata": "AQ==", "title": "Proposal Title", "summary": "Proposal Summary" }, { "id": "2", "messages": [ { "@type": "/cosmos.bank.v1beta1.MsgSend", "from_address": "cosmos1..", "to_address": "cosmos1..", "amount": [ { "denom": "stake", "amount": "10" } ] } ], "status": "PROPOSAL_STATUS_DEPOSIT_PERIOD", "final_tally_result": { "yes_count": "0", "abstain_count": "0", "no_count": "0", "no_with_veto_count": "0" }, "submit_time": "2022-03-28T14:02:41.165025015Z", "deposit_end_time": "2022-03-30T14:02:41.165025015Z", "total_deposit": [ { "denom": "stake", "amount": "10" } ], "voting_start_time": null, "voting_end_time": null, "metadata": "AQ==", "title": "Proposal Title", "summary": "Proposal Summary" } ], "pagination": { "next_key": null, "total": "2" } } ``` #### voter vote The `votes` endpoint allows users to query a vote for a given proposal. Using legacy v1beta1: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/gov/v1beta1/proposals/{proposal_id}/votes/{voter} ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl localhost:1317/cosmos/gov/v1beta1/proposals/1/votes/cosmos1.. 
``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "vote": { "proposal_id": "1", "voter": "cosmos1..", "option": "VOTE_OPTION_YES", "options": [ { "option": "VOTE_OPTION_YES", "weight": "1.000000000000000000" } ] } } ``` Using v1: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/gov/v1/proposals/{proposal_id}/votes/{voter} ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl localhost:1317/cosmos/gov/v1/proposals/1/votes/cosmos1.. ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "vote": { "proposal_id": "1", "voter": "cosmos1..", "options": [ { "option": "VOTE_OPTION_YES", "weight": "1.000000000000000000" } ], "metadata": "" } } ``` #### votes The `votes` endpoint allows users to query all votes for a given proposal. 
Using legacy v1beta1: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/gov/v1beta1/proposals/{proposal_id}/votes ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl localhost:1317/cosmos/gov/v1beta1/proposals/1/votes ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "votes": [ { "proposal_id": "1", "voter": "cosmos1..", "option": "VOTE_OPTION_YES", "options": [ { "option": "VOTE_OPTION_YES", "weight": "1.000000000000000000" } ] } ], "pagination": { "next_key": null, "total": "1" } } ``` Using v1: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/gov/v1/proposals/{proposal_id}/votes ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl localhost:1317/cosmos/gov/v1/proposals/1/votes ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "votes": [ { "proposal_id": "1", "voter": "cosmos1..", "options": [ { "option": "VOTE_OPTION_YES", "weight": "1.000000000000000000" } ], "metadata": "" } ], "pagination": { "next_key": null, "total": "1" } } ``` #### params The `params` endpoint allows users to query all parameters for the `gov` module. 
Using legacy v1beta1: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/gov/v1beta1/params/{params_type} ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl localhost:1317/cosmos/gov/v1beta1/params/voting ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "voting_params": { "voting_period": "172800s" }, "deposit_params": { "min_deposit": [ ], "max_deposit_period": "0s" }, "tally_params": { "quorum": "0.000000000000000000", "threshold": "0.000000000000000000", "veto_threshold": "0.000000000000000000" } } ``` Using v1: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/gov/v1/params/{params_type} ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl localhost:1317/cosmos/gov/v1/params/voting ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "voting_params": { "voting_period": "172800s" }, "deposit_params": { "min_deposit": [ ], "max_deposit_period": "0s" }, "tally_params": { "quorum": "0.000000000000000000", "threshold": "0.000000000000000000", "veto_threshold": "0.000000000000000000" } } ``` #### deposits The `deposits` endpoint allows users to query a deposit for a given proposal from a given depositor. Using legacy v1beta1: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/gov/v1beta1/proposals/{proposal_id}/deposits/{depositor} ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl localhost:1317/cosmos/gov/v1beta1/proposals/1/deposits/cosmos1.. 
``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "deposit": { "proposal_id": "1", "depositor": "cosmos1..", "amount": [ { "denom": "stake", "amount": "10000000" } ] } } ``` Using v1: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/gov/v1/proposals/{proposal_id}/deposits/{depositor} ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl localhost:1317/cosmos/gov/v1/proposals/1/deposits/cosmos1.. ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "deposit": { "proposal_id": "1", "depositor": "cosmos1..", "amount": [ { "denom": "stake", "amount": "10000000" } ] } } ``` #### proposal deposits The `deposits` endpoint allows users to query all deposits for a given proposal. Using legacy v1beta1: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/gov/v1beta1/proposals/{proposal_id}/deposits ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl localhost:1317/cosmos/gov/v1beta1/proposals/1/deposits ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "deposits": [ { "proposal_id": "1", "depositor": "cosmos1..", "amount": [ { "denom": "stake", "amount": "10000000" } ] } ], "pagination": { "next_key": null, "total": "1" } } ``` Using v1: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/gov/v1/proposals/{proposal_id}/deposits ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl localhost:1317/cosmos/gov/v1/proposals/1/deposits ``` Example Output: ```bash expandable 
theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "deposits": [ { "proposal_id": "1", "depositor": "cosmos1..", "amount": [ { "denom": "stake", "amount": "10000000" } ] } ], "pagination": { "next_key": null, "total": "1" } } ``` #### tally The `tally` endpoint allows users to query the tally of a given proposal. Using legacy v1beta1: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/gov/v1beta1/proposals/{proposal_id}/tally ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl localhost:1317/cosmos/gov/v1beta1/proposals/1/tally ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "tally": { "yes": "1000000", "abstain": "0", "no": "0", "no_with_veto": "0" } } ``` Using v1: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/gov/v1/proposals/{proposal_id}/tally ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl localhost:1317/cosmos/gov/v1/proposals/1/tally ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "tally": { "yes": "1000000", "abstain": "0", "no": "0", "no_with_veto": "0" } } ``` ## Metadata The gov module has two locations for metadata where users can provide further context about the on-chain actions they are taking. By default all metadata fields have a 255 character length field where metadata can be stored in json format, either on-chain or off-chain depending on the amount of data required. Here we provide a recommendation for the json structure and where the data should be stored. There are two important factors in making these recommendations. 
First, that the gov and group modules are consistent with one another; note that the number of proposals made by all groups may be quite large. Second, that client applications such as block explorers and governance interfaces have confidence in the consistency of metadata structure across chains. ### Proposal Location: off-chain as json object stored on IPFS (mirrors [group proposal](/sdk/latest/modules/group/README#metadata)) ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "title": "", "authors": [""], "summary": "", "details": "", "proposal_forum_url": "", "vote_option_context": "" } ``` The `authors` field is an array of strings; this allows multiple authors to be listed in the metadata. In v0.46, the `authors` field is a comma-separated string. Frontends are encouraged to support both formats for backwards compatibility. ### Vote Location: on-chain as json within 255 character limit (mirrors [group vote](/sdk/latest/modules/group/README#metadata)) ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "justification": "" } ``` ## Future Improvements The current documentation only describes the minimum viable product for the governance module. Future improvements may include: * **`BountyProposals`:** If accepted, a `BountyProposal` creates an open bounty. The `BountyProposal` specifies how many Atoms will be given upon completion. These Atoms will be taken from the `reserve pool`. After a `BountyProposal` is accepted by governance, anybody can submit a `SoftwareUpgradeProposal` with the code to claim the bounty. Note that once a `BountyProposal` is accepted, the corresponding funds in the `reserve pool` are locked so that payment can always be honored. In order to link a `SoftwareUpgradeProposal` to an open bounty, the submitter of the `SoftwareUpgradeProposal` will use the `Proposal.LinkedProposal` attribute. 
If a `SoftwareUpgradeProposal` linked to an open bounty is accepted by governance, the funds that were reserved are automatically transferred to the submitter. * **Complex delegation:** Delegators could choose representatives other than their validators. Ultimately, the chain of representatives would always end up at a validator, but delegators could inherit the vote of their chosen representative before they inherit the vote of their validator. In other words, they would only inherit the vote of their validator if their other appointed representative did not vote. * **Better process for proposal review:** There would be two parts to `proposal.Deposit`, one for anti-spam (same as in MVP) and another to reward third-party auditors. # x/group Source: https://docs.cosmos.network/sdk/latest/modules/group/README The following documents specify the group module. The `x/group` module is now maintained under the Cosmos Enterprise offering. If your application uses `x/group`, you will need to migrate your code to the Enterprise-distributed package and obtain a Cosmos Enterprise license to continue using it. Please see [Cosmos Enterprise](/enterprise/overview) to learn more. ## Abstract The following documents specify the group module. This module allows the creation and management of on-chain multisig accounts and enables voting for message execution based on configurable decision policies. 
## Contents * [Concepts](#concepts) * [Group](#group) * [Group Policy](#group-policy) * [Decision Policy](#decision-policy) * [Proposal](#proposal) * [Pruning](#pruning) * [State](#state) * [Group Table](#group-table) * [Group Member Table](#group-member-table) * [Group Policy Table](#group-policy-table) * [Proposal Table](#proposal-table) * [Vote Table](#vote-table) * [Msg Service](#msg-service) * [Msg/CreateGroup](#msgcreategroup) * [Msg/UpdateGroupMembers](#msgupdategroupmembers) * [Msg/UpdateGroupAdmin](#msgupdategroupadmin) * [Msg/UpdateGroupMetadata](#msgupdategroupmetadata) * [Msg/CreateGroupPolicy](#msgcreategrouppolicy) * [Msg/CreateGroupWithPolicy](#msgcreategroupwithpolicy) * [Msg/UpdateGroupPolicyAdmin](#msgupdategrouppolicyadmin) * [Msg/UpdateGroupPolicyDecisionPolicy](#msgupdategrouppolicydecisionpolicy) * [Msg/UpdateGroupPolicyMetadata](#msgupdategrouppolicymetadata) * [Msg/SubmitProposal](#msgsubmitproposal) * [Msg/WithdrawProposal](#msgwithdrawproposal) * [Msg/Vote](#msgvote) * [Msg/Exec](#msgexec) * [Msg/LeaveGroup](#msgleavegroup) * [Events](#events) * [EventCreateGroup](#eventcreategroup) * [EventUpdateGroup](#eventupdategroup) * [EventCreateGroupPolicy](#eventcreategrouppolicy) * [EventUpdateGroupPolicy](#eventupdategrouppolicy) * [EventCreateProposal](#eventcreateproposal) * [EventWithdrawProposal](#eventwithdrawproposal) * [EventVote](#eventvote) * [EventExec](#eventexec) * [EventLeaveGroup](#eventleavegroup) * [EventProposalPruned](#eventproposalpruned) * [Client](#client) * [CLI](#cli) * [gRPC](#grpc) * [REST](#rest) * [Metadata](#metadata) ## Concepts ### Group A group is simply an aggregation of accounts with associated weights. It is not an account and doesn't have a balance. It doesn't in and of itself have any sort of voting or decision weight. It does have an "administrator" which has the ability to add, remove and update members in the group. 
Note that a group policy account could be an administrator of a group, and that the administrator doesn't necessarily have to be a member of the group. ### Group Policy A group policy is an account associated with a group and a decision policy. Group policies are abstracted from groups because a single group may have multiple decision policies for different types of actions. Managing group membership separately from decision policies results in the least overhead and keeps membership consistent across different policies. The recommended pattern is to have a single master group policy for a given group, and then to create separate group policies with different decision policies and delegate the desired permissions from the master account to those "sub-accounts" using the `x/authz` module. ### Decision Policy A decision policy is the mechanism by which members of a group can vote on proposals, as well as the rules that dictate whether a proposal should pass or not based on its tally outcome. All decision policies generally have a minimum execution period and a maximum voting window. The minimum execution period is the minimum amount of time that must pass after submission in order for a proposal to potentially be executed, and it may be set to 0. The maximum voting window is the maximum time after submission that a proposal may be voted on before it is tallied. The chain developer also defines an app-wide maximum execution period, which is the maximum amount of time after a proposal's voting period end where users are allowed to execute a proposal. The current group module comes shipped with two decision policies: threshold and percentage. 
Any chain developer can extend upon these two by creating custom decision policies, as long as they adhere to the `DecisionPolicy` interface: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/x/group/types.go#L27-L45 ``` #### Threshold decision policy A threshold decision policy defines a threshold of yes votes (based on a tally of voter weights) that must be achieved in order for a proposal to pass. For this decision policy, abstain and veto are simply treated as no votes. This decision policy also has a VotingPeriod window and a MinExecutionPeriod window. The former defines the duration after proposal submission where members are allowed to vote, after which tallying is performed. The latter specifies the minimum duration after proposal submission where the proposal can be executed. If set to 0, then the proposal is allowed to be executed immediately on submission (using the `TRY_EXEC` option). Note that MinExecutionPeriod cannot be greater than VotingPeriod+MaxExecutionPeriod (where MaxExecutionPeriod is the app-defined duration that specifies the window after voting has ended during which a proposal can be executed). #### Percentage decision policy A percentage decision policy is similar to a threshold decision policy, except that the threshold is not defined as a constant weight, but as a percentage. It's more suited for groups where the group members' weights can be updated, as the percentage threshold stays the same, and doesn't depend on how those member weights get updated. Like the threshold decision policy, the percentage decision policy has the two VotingPeriod and MinExecutionPeriod parameters. ### Proposal Any member(s) of a group can submit a proposal for a group policy account to decide upon. A proposal consists of a set of messages that will be executed if the proposal passes, as well as any metadata associated with the proposal. 
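The threshold and percentage rules described above can be sketched in a few lines. This is a simplified illustration with hypothetical helpers, not the SDK's actual `DecisionPolicy` implementation (which operates on decimal strings and also enforces the VotingPeriod and MinExecutionPeriod windows):

```go
package main

import "fmt"

// TallyResult holds summed voter weights per option (illustrative only;
// the SDK represents weights as decimal strings rather than integers).
type TallyResult struct {
	Yes, No, Abstain, Veto int64
}

// thresholdPasses mimics a threshold decision policy: the proposal passes
// once the yes weight reaches a fixed threshold. Abstain and veto simply
// do not contribute to yes, i.e. they are treated as no votes.
func thresholdPasses(t TallyResult, threshold int64) bool {
	return t.Yes >= threshold
}

// percentagePasses mimics a percentage decision policy: the proposal passes
// once the yes weight reaches the given fraction of the total group weight,
// so the rule survives member-weight updates.
func percentagePasses(t TallyResult, totalWeight int64, pct float64) bool {
	return float64(t.Yes) >= pct*float64(totalWeight)
}

func main() {
	tally := TallyResult{Yes: 6, No: 2, Abstain: 1, Veto: 1}
	fmt.Println(thresholdPasses(tally, 5))        // true: 6 >= 5
	fmt.Println(percentagePasses(tally, 10, 0.7)) // false: 6 < 0.7*10
}
```

The percentage variant is usually the better fit when membership weights change over time, since the acceptance bar scales with the group's total weight.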
#### Voting There are four voting options: yes, no, abstain, and veto. Not all decision policies will take the four choices into account. Votes can contain some optional metadata. In the current implementation, the voting window begins as soon as a proposal is submitted, and the end is defined by the group policy's decision policy. #### Withdrawing Proposals Proposals can be withdrawn any time before the voting period end, either by the admin of the group policy or by one of the proposers. Once withdrawn, a proposal is marked as `PROPOSAL_STATUS_WITHDRAWN`, and no more voting or execution is allowed on it. #### Aborted Proposals If the group policy is updated during the voting period of the proposal, then the proposal is marked as `PROPOSAL_STATUS_ABORTED`, and no more voting or execution is allowed on it. This is because the group policy defines the rules of proposal voting and execution, so if those rules change during the lifecycle of a proposal, then the proposal should be marked as stale. #### Tallying Tallying is the counting of all votes on a proposal. It happens only once in the lifecycle of a proposal, but can be triggered by two factors, whichever happens first: * either someone tries to execute the proposal (see next section), which can happen on a `Msg/Exec` transaction, or a `Msg/{SubmitProposal,Vote}` transaction with the `Exec` field set. When a proposal execution is attempted, a tally is done first to make sure the proposal passes. * or on `EndBlock`, when the proposal's voting period end has just passed. If the tally result passes the decision policy's rules, then the proposal is marked as `PROPOSAL_STATUS_ACCEPTED`, or else it is marked as `PROPOSAL_STATUS_REJECTED`. In either case, no further voting is allowed, and the tally result is persisted to state in the proposal's `FinalTallyResult`. 
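The effect of a tally on proposal state can be summarized with a small sketch. This is a hypothetical helper, not x/group code; it only illustrates the state changes described above (status flips, tally persisted, voting closed):

```go
package main

import "fmt"

// Proposal is a pared-down illustration of the fields touched by tallying;
// the real type lives in the x/group module.
type Proposal struct {
	Status           string
	FinalTallyResult string
	VotingClosed     bool
}

// finalizeTally shows what happens once a tally runs, whether triggered by
// an execution attempt or by EndBlock after the voting period end: the
// result is persisted, the status becomes accepted or rejected, and no
// further voting is allowed.
func finalizeTally(p *Proposal, passes bool, tally string) {
	p.FinalTallyResult = tally
	p.VotingClosed = true
	if passes {
		p.Status = "PROPOSAL_STATUS_ACCEPTED"
	} else {
		p.Status = "PROPOSAL_STATUS_REJECTED"
	}
}

func main() {
	p := &Proposal{Status: "PROPOSAL_STATUS_SUBMITTED"}
	finalizeTally(p, true, "yes=6 no=2 abstain=1 veto=1")
	fmt.Println(p.Status, p.VotingClosed) // PROPOSAL_STATUS_ACCEPTED true
}
```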
#### Executing Proposals Proposals are executed only when the tallying is done, and the group account's decision policy allows the proposal to pass based on the tally outcome. They are marked by the status `PROPOSAL_STATUS_ACCEPTED`. Execution must happen before a duration of `MaxExecutionPeriod` (set by the chain developer) after each proposal's voting period end. Proposals will not be automatically executed by the chain in this current design, but rather a user must submit a `Msg/Exec` transaction to attempt to execute the proposal based on the current votes and decision policy. Any user (not only the group members) can execute proposals that have been accepted, and execution fees are paid by the proposal executor. It's also possible to try to execute a proposal immediately on creation or on new votes using the `Exec` field of `Msg/SubmitProposal` and `Msg/Vote` requests. In the former case, proposers' signatures are considered yes votes. In these cases, if the proposal can't be executed (i.e. it didn't pass the decision policy's rules), it will remain open for new votes and could be tallied and executed later on. A successful proposal execution will have its `ExecutorResult` marked as `PROPOSAL_EXECUTOR_RESULT_SUCCESS`. The proposal will be automatically pruned after execution. On the other hand, a failed proposal execution will be marked as `PROPOSAL_EXECUTOR_RESULT_FAILURE`. Such a proposal can be re-executed multiple times, until it expires `MaxExecutionPeriod` after the voting period end. ### Pruning Proposals and votes are automatically pruned to avoid state bloat. Votes are pruned: * either after a successful tally, i.e. a tally whose result passes the decision policy's rules, which can be triggered by a `Msg/Exec` or a `Msg/{SubmitProposal,Vote}` with the `Exec` field set, * or on `EndBlock` right after the proposal's voting period end. This applies to proposals with status `aborted` or `withdrawn` too, whichever happens first. 
Proposals are pruned: * on `EndBlock`, if the proposal status is `withdrawn` or `aborted` at the proposal's voting period end, before tallying, * and either after a successful proposal execution, * or on `EndBlock` right after the proposal's `voting_period_end` + `max_execution_period` (defined as an app-wide configuration) is passed, whichever happens first. ## State The `group` module uses the `orm` package, which provides table storage with support for primary keys and secondary indexes. `orm` also defines `Sequence`, which is a persistent unique key generator based on a counter that can be used along with `Table`s. Here's the list of tables and associated sequences and indexes stored as part of the `group` module. ### Group Table The `groupTable` stores `GroupInfo`: `0x0 | BigEndian(GroupId) -> ProtocolBuffer(GroupInfo)`. #### groupSeq The value of `groupSeq` is incremented when creating a new group and corresponds to the new `GroupId`: `0x1 | 0x1 -> BigEndian`. The second `0x1` corresponds to the ORM `sequenceStorageKey`. #### groupByAdminIndex `groupByAdminIndex` allows retrieving groups by admin address: `0x2 | len([]byte(group.Admin)) | []byte(group.Admin) | BigEndian(GroupId) -> []byte()`. ### Group Member Table The `groupMemberTable` stores `GroupMember`s: `0x10 | BigEndian(GroupId) | []byte(member.Address) -> ProtocolBuffer(GroupMember)`. The `groupMemberTable` is a primary key table and its `PrimaryKey` is given by `BigEndian(GroupId) | []byte(member.Address)` which is used by the following indexes. #### groupMemberByGroupIndex `groupMemberByGroupIndex` allows retrieving group members by group id: `0x11 | BigEndian(GroupId) | PrimaryKey -> []byte()`. #### groupMemberByMemberIndex `groupMemberByMemberIndex` allows retrieving group members by member address: `0x12 | len([]byte(member.Address)) | []byte(member.Address) | PrimaryKey -> []byte()`. 
### Group Policy Table The `groupPolicyTable` stores `GroupPolicyInfo`: `0x20 | len([]byte(Address)) | []byte(Address) -> ProtocolBuffer(GroupPolicyInfo)`. The `groupPolicyTable` is a primary key table and its `PrimaryKey` is given by `len([]byte(Address)) | []byte(Address)` which is used by the following indexes. #### groupPolicySeq The value of `groupPolicySeq` is incremented when creating a new group policy and is used to generate the new group policy account `Address`: `0x21 | 0x1 -> BigEndian`. The second `0x1` corresponds to the ORM `sequenceStorageKey`. #### groupPolicyByGroupIndex `groupPolicyByGroupIndex` allows retrieving group policies by group id: `0x22 | BigEndian(GroupId) | PrimaryKey -> []byte()`. #### groupPolicyByAdminIndex `groupPolicyByAdminIndex` allows retrieving group policies by admin address: `0x23 | len([]byte(Address)) | []byte(Address) | PrimaryKey -> []byte()`. ### Proposal Table The `proposalTable` stores `Proposal`s: `0x30 | BigEndian(ProposalId) -> ProtocolBuffer(Proposal)`. #### proposalSeq The value of `proposalSeq` is incremented when creating a new proposal and corresponds to the new `ProposalId`: `0x31 | 0x1 -> BigEndian`. The second `0x1` corresponds to the ORM `sequenceStorageKey`. #### proposalByGroupPolicyIndex `proposalByGroupPolicyIndex` allows retrieving proposals by group policy account address: `0x32 | len([]byte(account.Address)) | []byte(account.Address) | BigEndian(ProposalId) -> []byte()`. #### ProposalsByVotingPeriodEndIndex `proposalsByVotingPeriodEndIndex` allows retrieving proposals sorted chronologically by `voting_period_end`: `0x33 | sdk.FormatTimeBytes(proposal.VotingPeriodEnd) | BigEndian(ProposalId) -> []byte()`. This index is used when tallying the proposal votes at the end of the voting period, and for pruning proposals at `VotingPeriodEnd + MaxExecutionPeriod`. ### Vote Table The `voteTable` stores `Vote`s: `0x40 | BigEndian(ProposalId) | []byte(voter.Address) -> ProtocolBuffer(Vote)`. 
The `voteTable` is a primary key table and its `PrimaryKey` is given by `BigEndian(ProposalId) | []byte(voter.Address)` which is used by the following indexes. #### voteByProposalIndex `voteByProposalIndex` allows retrieving votes by proposal id: `0x41 | BigEndian(ProposalId) | PrimaryKey -> []byte()`. #### voteByVoterIndex `voteByVoterIndex` allows retrieving votes by voter address: `0x42 | len([]byte(voter.Address)) | []byte(voter.Address) | PrimaryKey -> []byte()`. ## Msg Service ### Msg/CreateGroup A new group can be created with the `MsgCreateGroup`, which has an admin address, a list of members and some optional metadata. The metadata has a maximum length that is chosen by the app developer, and passed into the group keeper as a config. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/group/v1/tx.proto#L67-L80 ``` It's expected to fail if: * metadata length is greater than `MaxMetadataLen` config * members are not correctly set (e.g. wrong address format, duplicates, or with 0 weight). ### Msg/UpdateGroupMembers Group members can be updated with the `UpdateGroupMembers`. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/group/v1/tx.proto#L88-L102 ``` In the list of `MemberUpdates`, an existing member can be removed by setting its weight to 0. It's expected to fail if: * the signer is not the admin of the group. * for any one of the associated group policies, if its decision policy's `Validate()` method fails against the updated group. ### Msg/UpdateGroupAdmin The `UpdateGroupAdmin` can be used to update a group admin. 
```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/group/v1/tx.proto#L107-L120 ``` It's expected to fail if the signer is not the admin of the group. ### Msg/UpdateGroupMetadata `MsgUpdateGroupMetadata` can be used to update a group's metadata. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/group/v1/tx.proto#L125-L138 ``` It's expected to fail if: * new metadata length is greater than `MaxMetadataLen` config. * the signer is not the admin of the group. ### Msg/CreateGroupPolicy A new group policy can be created with `MsgCreateGroupPolicy`, which has an admin address, a group id, a decision policy, and some optional metadata. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/group/v1/tx.proto#L147-L165 ``` It's expected to fail if: * the signer is not the admin of the group. * metadata length is greater than `MaxMetadataLen` config. * the decision policy's `Validate()` method doesn't pass against the group. ### Msg/CreateGroupWithPolicy A new group with a group policy can be created with `MsgCreateGroupWithPolicy`, which has an admin address, a list of members, a decision policy, a `group_policy_as_admin` field that optionally sets the group policy address as the admin of both the group and the group policy, and some optional metadata for the group and the group policy. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/group/v1/tx.proto#L191-L215 ``` It's expected to fail for the same reasons as `Msg/CreateGroup` and `Msg/CreateGroupPolicy`.
### Msg/UpdateGroupPolicyAdmin `MsgUpdateGroupPolicyAdmin` can be used to update a group policy's admin. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/group/v1/tx.proto#L173-L186 ``` It's expected to fail if the signer is not the admin of the group policy. ### Msg/UpdateGroupPolicyDecisionPolicy `MsgUpdateGroupPolicyDecisionPolicy` can be used to update a decision policy. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/group/v1/tx.proto#L226-L241 ``` It's expected to fail if: * the signer is not the admin of the group policy. * the new decision policy's `Validate()` method doesn't pass against the group. ### Msg/UpdateGroupPolicyMetadata `MsgUpdateGroupPolicyMetadata` can be used to update a group policy's metadata. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/group/v1/tx.proto#L246-L259 ``` It's expected to fail if: * new metadata length is greater than `MaxMetadataLen` config. * the signer is not the admin of the group policy. ### Msg/SubmitProposal A new proposal can be created with `MsgSubmitProposal`, which has a group policy account address, a list of proposer addresses, a list of messages to execute if the proposal is accepted, and some optional metadata. An optional `Exec` value can be provided to try to execute the proposal immediately after proposal creation. Proposers' signatures are counted as yes votes in this case.
```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/group/v1/tx.proto#L281-L315 ``` It's expected to fail if: * metadata, title, or summary length is greater than `MaxMetadataLen` config. * any of the proposers is not a group member. ### Msg/WithdrawProposal A proposal can be withdrawn using `MsgWithdrawProposal`, which has an `address` (either a proposer or the group policy admin) and the `proposal_id` of the proposal to withdraw. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/group/v1/tx.proto#L323-L333 ``` It's expected to fail if: * the signer is neither the group policy admin nor a proposer of the proposal. * the proposal is already closed or aborted. ### Msg/Vote A new vote can be created with `MsgVote`, given a proposal id, a voter address, a choice (yes, no, veto, or abstain), and some optional metadata. An optional `Exec` value can be provided to try to execute the proposal immediately after voting. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/group/v1/tx.proto#L338-L358 ``` It's expected to fail if: * metadata length is greater than `MaxMetadataLen` config. * the proposal is no longer in its voting period. ### Msg/Exec A proposal can be executed with `MsgExec`. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/group/v1/tx.proto#L363-L373 ``` The messages that are part of this proposal won't be executed if: * the proposal has not been accepted by the group policy. * the proposal has already been successfully executed.
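The two execution conditions above can be sketched as a simple guard. The shortened enum names below are illustrative stand-ins for the proto's `ProposalStatus` and `ProposalExecutorResult` values:

```go
package main

import "fmt"

type ProposalStatus int
type ExecutorResult int

const (
	StatusSubmitted ProposalStatus = iota
	StatusAccepted
	StatusRejected
)

const (
	ResultNotRun ExecutorResult = iota
	ResultSuccess
	ResultFailure
)

// canExec mirrors the conditions above: messages run only if the proposal
// was accepted by the group policy and has not already been executed
// successfully. A failed execution does not block a retry.
func canExec(status ProposalStatus, result ExecutorResult) bool {
	return status == StatusAccepted && result != ResultSuccess
}

func main() {
	fmt.Println(canExec(StatusAccepted, ResultNotRun))  // true
	fmt.Println(canExec(StatusAccepted, ResultSuccess)) // false
	fmt.Println(canExec(StatusSubmitted, ResultNotRun)) // false
}
```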
### Msg/LeaveGroup The `MsgLeaveGroup` allows a group member to leave a group. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/group/v1/tx.proto#L381-L391 ``` It's expected to fail if: * the group member is not part of the group. * for any one of the associated group policies, if its decision policy's `Validate()` method fails against the updated group. ## Events The group module emits the following events: ### EventCreateGroup | Type | Attribute Key | Attribute Value | | -------------------------------- | ------------- | -------------------------------- | | message | action | /cosmos.group.v1.Msg/CreateGroup | | cosmos.group.v1.EventCreateGroup | group\_id | `{groupId}` | ### EventUpdateGroup | Type | Attribute Key | Attribute Value | | -------------------------------- | ------------- | ------------------------------------------------------------ | | message | action | `/cosmos.group.v1.Msg/UpdateGroup{Admin\|Metadata\|Members}` | | cosmos.group.v1.EventUpdateGroup | group\_id | `{groupId}` | ### EventCreateGroupPolicy | Type | Attribute Key | Attribute Value | | -------------------------------------- | ------------- | -------------------------------------- | | message | action | /cosmos.group.v1.Msg/CreateGroupPolicy | | cosmos.group.v1.EventCreateGroupPolicy | address | `{groupPolicyAddress}` | ### EventUpdateGroupPolicy | Type | Attribute Key | Attribute Value | | -------------------------------------- | ------------- | ------------------------------------------------------------------------- | | message | action | `/cosmos.group.v1.Msg/UpdateGroupPolicy{Admin\|Metadata\|DecisionPolicy}` | | cosmos.group.v1.EventUpdateGroupPolicy | address | `{groupPolicyAddress}` | ### EventCreateProposal | Type | Attribute Key | Attribute Value | | ----------------------------------- | ------------- | ----------------------------------- | |
message | action | /cosmos.group.v1.Msg/CreateProposal | | cosmos.group.v1.EventCreateProposal | proposal\_id | `{proposalId}` | ### EventWithdrawProposal | Type | Attribute Key | Attribute Value | | ------------------------------------- | ------------- | ------------------------------------- | | message | action | /cosmos.group.v1.Msg/WithdrawProposal | | cosmos.group.v1.EventWithdrawProposal | proposal\_id | `{proposalId}` | ### EventVote | Type | Attribute Key | Attribute Value | | ------------------------- | ------------- | ------------------------- | | message | action | /cosmos.group.v1.Msg/Vote | | cosmos.group.v1.EventVote | proposal\_id | `{proposalId}` | ### EventExec | Type | Attribute Key | Attribute Value | | ------------------------- | ------------- | ------------------------- | | message | action | /cosmos.group.v1.Msg/Exec | | cosmos.group.v1.EventExec | proposal\_id | `{proposalId}` | | cosmos.group.v1.EventExec | logs | `{logs\_string}` | ### EventLeaveGroup | Type | Attribute Key | Attribute Value | | ------------------------------- | ------------- | ------------------------------- | | message | action | /cosmos.group.v1.Msg/LeaveGroup | | cosmos.group.v1.EventLeaveGroup | proposal\_id | `{proposalId}` | | cosmos.group.v1.EventLeaveGroup | address | `{address}` | ### EventProposalPruned | Type | Attribute Key | Attribute Value | | ----------------------------------- | ------------- | ------------------------------- | | message | action | /cosmos.group.v1.Msg/LeaveGroup | | cosmos.group.v1.EventProposalPruned | proposal\_id | `{proposalId}` | | cosmos.group.v1.EventProposalPruned | status | `{ProposalStatus}` | | cosmos.group.v1.EventProposalPruned | tally\_result | `{TallyResult}` | ## Client ### CLI A user can query and interact with the `group` module using the CLI. #### Query The `query` commands allow users to query `group` state.
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query group --help ``` ##### group-info The `group-info` command allows users to query for group info by a given group id. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query group group-info [id] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query group group-info 1 ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} admin: cosmos1.. group_id: "1" metadata: AQ== total_weight: "3" version: "1" ``` ##### group-policy-info The `group-policy-info` command allows users to query for group policy info by the account address of the group policy. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query group group-policy-info [group-policy-account] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query group group-policy-info cosmos1.. ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} address: cosmos1.. admin: cosmos1.. decision_policy: '@type': /cosmos.group.v1.ThresholdDecisionPolicy threshold: "1" windows: min_execution_period: 0s voting_period: 432000s group_id: "1" metadata: AQ== version: "1" ``` ##### group-members The `group-members` command allows users to query for group members by group id with pagination flags.
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query group group-members [id] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query group group-members 1 ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} members: - group_id: "1" member: address: cosmos1.. metadata: AQ== weight: "2" - group_id: "1" member: address: cosmos1.. metadata: AQ== weight: "1" pagination: next_key: null total: "2" ``` ##### groups-by-admin The `groups-by-admin` command allows users to query for groups by admin account address with pagination flags. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query group groups-by-admin [admin] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query group groups-by-admin cosmos1.. ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} groups: - admin: cosmos1.. group_id: "1" metadata: AQ== total_weight: "3" version: "1" - admin: cosmos1.. group_id: "2" metadata: AQ== total_weight: "3" version: "1" pagination: next_key: null total: "2" ``` ##### group-policies-by-group The `group-policies-by-group` command allows users to query for group policies by group id with pagination flags. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query group group-policies-by-group [group-id] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query group group-policies-by-group 1 ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} group_policies: - address: cosmos1.. 
admin: cosmos1.. decision_policy: '@type': /cosmos.group.v1.ThresholdDecisionPolicy threshold: "1" windows: min_execution_period: 0s voting_period: 432000s group_id: "1" metadata: AQ== version: "1" - address: cosmos1.. admin: cosmos1.. decision_policy: '@type': /cosmos.group.v1.ThresholdDecisionPolicy threshold: "1" windows: min_execution_period: 0s voting_period: 432000s group_id: "1" metadata: AQ== version: "1" pagination: next_key: null total: "2" ``` ##### group-policies-by-admin The `group-policies-by-admin` command allows users to query for group policies by admin account address with pagination flags. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query group group-policies-by-admin [admin] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query group group-policies-by-admin cosmos1.. ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} group_policies: - address: cosmos1.. admin: cosmos1.. decision_policy: '@type': /cosmos.group.v1.ThresholdDecisionPolicy threshold: "1" windows: min_execution_period: 0s voting_period: 432000s group_id: "1" metadata: AQ== version: "1" - address: cosmos1.. admin: cosmos1.. decision_policy: '@type': /cosmos.group.v1.ThresholdDecisionPolicy threshold: "1" windows: min_execution_period: 0s voting_period: 432000s group_id: "1" metadata: AQ== version: "1" pagination: next_key: null total: "2" ``` ##### proposal The `proposal` command allows users to query for proposal by id. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query group proposal [id] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query group proposal 1 ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} proposal: address: cosmos1.. executor_result: EXECUTOR_RESULT_NOT_RUN group_policy_version: "1" group_version: "1" metadata: AQ== msgs: - '@type': /cosmos.bank.v1beta1.MsgSend amount: - amount: "100000000" denom: stake from_address: cosmos1.. to_address: cosmos1.. proposal_id: "1" proposers: - cosmos1.. result: RESULT_UNFINALIZED status: STATUS_SUBMITTED submitted_at: "2021-12-17T07:06:26.310638964Z" windows: min_execution_period: 0s voting_period: 432000s vote_state: abstain_count: "0" no_count: "0" veto_count: "0" yes_count: "0" summary: "Summary" title: "Title" ``` ##### proposals-by-group-policy The `proposals-by-group-policy` command allows users to query for proposals by account address of group policy with pagination flags. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query group proposals-by-group-policy [group-policy-account] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query group proposals-by-group-policy cosmos1.. ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} pagination: next_key: null total: "1" proposals: - address: cosmos1.. executor_result: EXECUTOR_RESULT_NOT_RUN group_policy_version: "1" group_version: "1" metadata: AQ== msgs: - '@type': /cosmos.bank.v1beta1.MsgSend amount: - amount: "100000000" denom: stake from_address: cosmos1.. to_address: cosmos1.. proposal_id: "1" proposers: - cosmos1.. 
result: RESULT_UNFINALIZED status: STATUS_SUBMITTED submitted_at: "2021-12-17T07:06:26.310638964Z" windows: min_execution_period: 0s voting_period: 432000s vote_state: abstain_count: "0" no_count: "0" veto_count: "0" yes_count: "0" summary: "Summary" title: "Title" ``` ##### vote The `vote` command allows users to query for vote by proposal id and voter account address. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query group vote [proposal-id] [voter] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query group vote 1 cosmos1.. ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} vote: choice: CHOICE_YES metadata: AQ== proposal_id: "1" submitted_at: "2021-12-17T08:05:02.490164009Z" voter: cosmos1.. ``` ##### votes-by-proposal The `votes-by-proposal` command allows users to query for votes by proposal id with pagination flags. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query group votes-by-proposal [proposal-id] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query group votes-by-proposal 1 ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} pagination: next_key: null total: "1" votes: - choice: CHOICE_YES metadata: AQ== proposal_id: "1" submitted_at: "2021-12-17T08:05:02.490164009Z" voter: cosmos1.. ``` ##### votes-by-voter The `votes-by-voter` command allows users to query for votes by voter account address with pagination flags. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query group votes-by-voter [voter] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query group votes-by-voter cosmos1.. ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} pagination: next_key: null total: "1" votes: - choice: CHOICE_YES metadata: AQ== proposal_id: "1" submitted_at: "2021-12-17T08:05:02.490164009Z" voter: cosmos1.. ``` ### Transactions The `tx` commands allow users to interact with the `group` module. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group --help ``` #### create-group The `create-group` command allows users to create a group which is an aggregation of member accounts with associated weights and an administrator account. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group create-group [admin] [metadata] [members-json-file] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group create-group cosmos1.. "AQ==" members.json ``` #### update-group-admin The `update-group-admin` command allows users to update a group's admin. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group update-group-admin [admin] [group-id] [new-admin] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group update-group-admin cosmos1.. 1 cosmos1.. ``` #### update-group-members The `update-group-members` command allows users to update a group's members. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group update-group-members [admin] [group-id] [members-json-file] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group update-group-members cosmos1.. 1 members.json ``` #### update-group-metadata The `update-group-metadata` command allows users to update a group's metadata. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group update-group-metadata [admin] [group-id] [metadata] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group update-group-metadata cosmos1.. 1 "AQ==" ``` #### create-group-policy The `create-group-policy` command allows users to create a group policy which is an account associated with a group and a decision policy. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group create-group-policy [admin] [group-id] [metadata] [decision-policy] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group create-group-policy cosmos1.. 1 "AQ==" '{"@type":"/cosmos.group.v1.ThresholdDecisionPolicy", "threshold":"1", "windows": {"voting_period": "120h", "min_execution_period": "0s"}}' ``` #### create-group-with-policy The `create-group-with-policy` command allows users to create a group which is an aggregation of member accounts with associated weights and an administrator account with decision policy. If the `--group-policy-as-admin` flag is set to `true`, the group policy address becomes the group and group policy admin. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group create-group-with-policy [admin] [group-metadata] [group-policy-metadata] [members-json-file] [decision-policy] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group create-group-with-policy cosmos1.. "AQ==" "AQ==" members.json '{"@type":"/cosmos.group.v1.ThresholdDecisionPolicy", "threshold":"1", "windows": {"voting_period": "120h", "min_execution_period": "0s"}}' ``` #### update-group-policy-admin The `update-group-policy-admin` command allows users to update a group policy admin. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group update-group-policy-admin [admin] [group-policy-account] [new-admin] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group update-group-policy-admin cosmos1.. cosmos1.. cosmos1.. ``` #### update-group-policy-metadata The `update-group-policy-metadata` command allows users to update a group policy metadata. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group update-group-policy-metadata [admin] [group-policy-account] [new-metadata] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group update-group-policy-metadata cosmos1.. cosmos1.. "AQ==" ``` #### update-group-policy-decision-policy The `update-group-policy-decision-policy` command allows users to update a group policy's decision policy. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group update-group-policy-decision-policy [admin] [group-policy-account] [decision-policy] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group update-group-policy-decision-policy cosmos1.. cosmos1.. '{"@type":"/cosmos.group.v1.ThresholdDecisionPolicy", "threshold":"2", "windows": {"voting_period": "120h", "min_execution_period": "0s"}}' ``` #### submit-proposal The `submit-proposal` command allows users to submit a new proposal. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group submit-proposal [group-policy-account] [proposer[,proposer]*] [msg_tx_json_file] [metadata] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group submit-proposal cosmos1.. cosmos1.. msg_tx.json "AQ==" ``` #### withdraw-proposal The `withdraw-proposal` command allows users to withdraw a proposal. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group withdraw-proposal [proposal-id] [group-policy-admin-or-proposer] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group withdraw-proposal 1 cosmos1.. ``` #### vote The `vote` command allows users to vote on a proposal. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group vote [proposal-id] [voter] [choice] [metadata] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group vote 1 cosmos1.. CHOICE_YES "AQ==" ``` #### exec The `exec` command allows users to execute a proposal.
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group exec [proposal-id] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group exec 1 ``` #### leave-group The `leave-group` command allows a group member to leave the group. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group leave-group [member-address] [group-id] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group leave-group cosmos1.. 1 ``` ### gRPC A user can query the `group` module using gRPC endpoints. #### GroupInfo The `GroupInfo` endpoint allows users to query for group info by a given group id. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.group.v1.Query/GroupInfo ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"group_id":1}' localhost:9090 cosmos.group.v1.Query/GroupInfo ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "info": { "groupId": "1", "admin": "cosmos1..", "metadata": "AQ==", "version": "1", "totalWeight": "3" } } ``` #### GroupPolicyInfo The `GroupPolicyInfo` endpoint allows users to query for group policy info by the account address of the group policy.
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.group.v1.Query/GroupPolicyInfo ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"address":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/GroupPolicyInfo ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "info": { "address": "cosmos1..", "groupId": "1", "admin": "cosmos1..", "version": "1", "decisionPolicy": {"@type":"/cosmos.group.v1.ThresholdDecisionPolicy","threshold":"1","windows": {"voting_period": "120h", "min_execution_period": "0s"}}, } } ``` #### GroupMembers The `GroupMembers` endpoint allows users to query for group members by group id with pagination flags. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.group.v1.Query/GroupMembers ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"group_id":"1"}' localhost:9090 cosmos.group.v1.Query/GroupMembers ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "members": [ { "groupId": "1", "member": { "address": "cosmos1..", "weight": "1" } }, { "groupId": "1", "member": { "address": "cosmos1..", "weight": "2" } } ], "pagination": { "total": "2" } } ``` #### GroupsByAdmin The `GroupsByAdmin` endpoint allows users to query for groups by admin account address with pagination flags. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.group.v1.Query/GroupsByAdmin ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"admin":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/GroupsByAdmin ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "groups": [ { "groupId": "1", "admin": "cosmos1..", "metadata": "AQ==", "version": "1", "totalWeight": "3" }, { "groupId": "2", "admin": "cosmos1..", "metadata": "AQ==", "version": "1", "totalWeight": "3" } ], "pagination": { "total": "2" } } ``` #### GroupPoliciesByGroup The `GroupPoliciesByGroup` endpoint allows users to query for group policies by group id with pagination flags. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.group.v1.Query/GroupPoliciesByGroup ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"group_id":"1"}' localhost:9090 cosmos.group.v1.Query/GroupPoliciesByGroup ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "GroupPolicies": [ { "address": "cosmos1..", "groupId": "1", "admin": "cosmos1..", "version": "1", "decisionPolicy": {"@type":"/cosmos.group.v1.ThresholdDecisionPolicy","threshold":"1","windows":{"voting_period": "120h", "min_execution_period": "0s"}}, }, { "address": "cosmos1..", "groupId": "1", "admin": "cosmos1..", "version": "1", "decisionPolicy": {"@type":"/cosmos.group.v1.ThresholdDecisionPolicy","threshold":"1","windows":{"voting_period": "120h", "min_execution_period": "0s"}}, } ], "pagination": { "total": "2" } } ``` #### GroupPoliciesByAdmin The `GroupPoliciesByAdmin` endpoint allows users to query for group policies by admin account 
address with pagination flags. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.group.v1.Query/GroupPoliciesByAdmin ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"admin":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/GroupPoliciesByAdmin ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "GroupPolicies": [ { "address": "cosmos1..", "groupId": "1", "admin": "cosmos1..", "version": "1", "decisionPolicy": {"@type":"/cosmos.group.v1.ThresholdDecisionPolicy","threshold":"1","windows":{"voting_period": "120h", "min_execution_period": "0s"}}, }, { "address": "cosmos1..", "groupId": "1", "admin": "cosmos1..", "version": "1", "decisionPolicy": {"@type":"/cosmos.group.v1.ThresholdDecisionPolicy","threshold":"1","windows":{"voting_period": "120h", "min_execution_period": "0s"}}, } ], "pagination": { "total": "2" } } ``` #### Proposal The `Proposal` endpoint allows users to query for proposal by id. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.group.v1.Query/Proposal ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"proposal_id":"1"}' localhost:9090 cosmos.group.v1.Query/Proposal ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "proposal": { "proposalId": "1", "address": "cosmos1..", "proposers": [ "cosmos1.." 
], "submittedAt": "2021-12-17T07:06:26.310638964Z", "groupVersion": "1", "groupPolicyVersion": "1", "status": "STATUS_SUBMITTED", "result": "RESULT_UNFINALIZED", "voteState": { "yesCount": "0", "noCount": "0", "abstainCount": "0", "vetoCount": "0" }, "windows": { "min_execution_period": "0s", "voting_period": "432000s" }, "executorResult": "EXECUTOR_RESULT_NOT_RUN", "messages": [ {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"100000000"}],"fromAddress":"cosmos1..","toAddress":"cosmos1.."} ], "title": "Title", "summary": "Summary" } } ``` #### ProposalsByGroupPolicy The `ProposalsByGroupPolicy` endpoint allows users to query for proposals by account address of group policy with pagination flags. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.group.v1.Query/ProposalsByGroupPolicy ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"address":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/ProposalsByGroupPolicy ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "proposals": [ { "proposalId": "1", "address": "cosmos1..", "proposers": [ "cosmos1..
], "submittedAt": "2021-12-17T08:03:27.099649352Z", "groupVersion": "1", "groupPolicyVersion": "1", "status": "STATUS_CLOSED", "result": "RESULT_ACCEPTED", "voteState": { "yesCount": "1", "noCount": "0", "abstainCount": "0", "vetoCount": "0" }, "windows": { "min_execution_period": "0s", "voting_period": "432000s" }, "executorResult": "EXECUTOR_RESULT_NOT_RUN", "messages": [ {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"100000000"}],"fromAddress":"cosmos1..","toAddress":"cosmos1.."} ], "title": "Title", "summary": "Summary" } ], "pagination": { "total": "1" } } ``` #### VoteByProposalVoter The `VoteByProposalVoter` endpoint allows users to query for vote by proposal id and voter account address. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.group.v1.Query/VoteByProposalVoter ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"proposal_id":"1","voter":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/VoteByProposalVoter ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "vote": { "proposalId": "1", "voter": "cosmos1..", "choice": "CHOICE_YES", "submittedAt": "2021-12-17T08:05:02.490164009Z" } } ``` #### VotesByProposal The `VotesByProposal` endpoint allows users to query for votes by proposal id with pagination flags.
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.group.v1.Query/VotesByProposal ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"proposal_id":"1"}' localhost:9090 cosmos.group.v1.Query/VotesByProposal ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "votes": [ { "proposalId": "1", "voter": "cosmos1..", "choice": "CHOICE_YES", "submittedAt": "2021-12-17T08:05:02.490164009Z" } ], "pagination": { "total": "1" } } ``` #### VotesByVoter The `VotesByVoter` endpoint allows users to query for votes by voter account address with pagination flags. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.group.v1.Query/VotesByVoter ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"voter":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/VotesByVoter ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "votes": [ { "proposalId": "1", "voter": "cosmos1..", "choice": "CHOICE_YES", "submittedAt": "2021-12-17T08:05:02.490164009Z" } ], "pagination": { "total": "1" } } ``` ### REST A user can query the `group` module using REST endpoints. #### GroupInfo The `GroupInfo` endpoint allows users to query for group info by given group id. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/group/v1/group_info/{group_id} ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl localhost:1317/cosmos/group/v1/group_info/1 ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "info": { "id": "1", "admin": "cosmos1..", "metadata": "AQ==", "version": "1", "total_weight": "3" } } ``` #### GroupPolicyInfo The `GroupPolicyInfo` endpoint allows users to query for group policy info by account address of group policy. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/group/v1/group_policy_info/{address} ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl localhost:1317/cosmos/group/v1/group_policy_info/cosmos1.. ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "info": { "address": "cosmos1..", "group_id": "1", "admin": "cosmos1..", "metadata": "AQ==", "version": "1", "decision_policy": { "@type": "/cosmos.group.v1.ThresholdDecisionPolicy", "threshold": "1", "windows": { "voting_period": "120h", "min_execution_period": "0s" } } } } ``` #### GroupMembers The `GroupMembers` endpoint allows users to query for group members by group id with pagination flags.
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/group/v1/group_members/{group_id} ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl localhost:1317/cosmos/group/v1/group_members/1 ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "members": [ { "group_id": "1", "member": { "address": "cosmos1..", "weight": "1", "metadata": "AQ==" } }, { "group_id": "1", "member": { "address": "cosmos1..", "weight": "2", "metadata": "AQ==" } } ], "pagination": { "next_key": null, "total": "2" } } ``` #### GroupsByAdmin The `GroupsByAdmin` endpoint allows users to query for groups by admin account address with pagination flags. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/group/v1/groups_by_admin/{admin} ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl localhost:1317/cosmos/group/v1/groups_by_admin/cosmos1.. ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "groups": [ { "id": "1", "admin": "cosmos1..", "metadata": "AQ==", "version": "1", "total_weight": "3" }, { "id": "2", "admin": "cosmos1..", "metadata": "AQ==", "version": "1", "total_weight": "3" } ], "pagination": { "next_key": null, "total": "2" } } ``` #### GroupPoliciesByGroup The `GroupPoliciesByGroup` endpoint allows users to query for group policies by group id with pagination flags.
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/group/v1/group_policies_by_group/{group_id} ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl localhost:1317/cosmos/group/v1/group_policies_by_group/1 ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "group_policies": [ { "address": "cosmos1..", "group_id": "1", "admin": "cosmos1..", "metadata": "AQ==", "version": "1", "decision_policy": { "@type": "/cosmos.group.v1.ThresholdDecisionPolicy", "threshold": "1", "windows": { "voting_period": "120h", "min_execution_period": "0s" } } }, { "address": "cosmos1..", "group_id": "1", "admin": "cosmos1..", "metadata": "AQ==", "version": "1", "decision_policy": { "@type": "/cosmos.group.v1.ThresholdDecisionPolicy", "threshold": "1", "windows": { "voting_period": "120h", "min_execution_period": "0s" } } } ], "pagination": { "next_key": null, "total": "2" } } ``` #### GroupPoliciesByAdmin The `GroupPoliciesByAdmin` endpoint allows users to query for group policies by admin account address with pagination flags. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/group/v1/group_policies_by_admin/{admin} ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl localhost:1317/cosmos/group/v1/group_policies_by_admin/cosmos1..
``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "group_policies": [ { "address": "cosmos1..", "group_id": "1", "admin": "cosmos1..", "metadata": "AQ==", "version": "1", "decision_policy": { "@type": "/cosmos.group.v1.ThresholdDecisionPolicy", "threshold": "1", "windows": { "voting_period": "120h", "min_execution_period": "0s" } } }, { "address": "cosmos1..", "group_id": "1", "admin": "cosmos1..", "metadata": "AQ==", "version": "1", "decision_policy": { "@type": "/cosmos.group.v1.ThresholdDecisionPolicy", "threshold": "1", "windows": { "voting_period": "120h", "min_execution_period": "0s" } } } ], "pagination": { "next_key": null, "total": "2" } } ``` #### Proposal The `Proposal` endpoint allows users to query for proposal by id. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/group/v1/proposal/{proposal_id} ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl localhost:1317/cosmos/group/v1/proposal/1 ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "proposal": { "proposal_id": "1", "address": "cosmos1..", "metadata": "AQ==", "proposers": [ "cosmos1..
], "submitted_at": "2021-12-17T07:06:26.310638964Z", "group_version": "1", "group_policy_version": "1", "status": "STATUS_SUBMITTED", "result": "RESULT_UNFINALIZED", "vote_state": { "yes_count": "0", "no_count": "0", "abstain_count": "0", "veto_count": "0" }, "windows": { "min_execution_period": "0s", "voting_period": "432000s" }, "executor_result": "EXECUTOR_RESULT_NOT_RUN", "messages": [ { "@type": "/cosmos.bank.v1beta1.MsgSend", "from_address": "cosmos1..", "to_address": "cosmos1..", "amount": [ { "denom": "stake", "amount": "100000000" } ] } ], "title": "Title", "summary": "Summary" } } ``` #### ProposalsByGroupPolicy The `ProposalsByGroupPolicy` endpoint allows users to query for proposals by account address of group policy with pagination flags. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/group/v1/proposals_by_group_policy/{address} ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl localhost:1317/cosmos/group/v1/proposals_by_group_policy/cosmos1.. ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "proposals": [ { "id": "1", "group_policy_address": "cosmos1..", "metadata": "AQ==", "proposers": [ "cosmos1..
], "submit_time": "2021-12-17T08:03:27.099649352Z", "group_version": "1", "group_policy_version": "1", "status": "STATUS_CLOSED", "result": "RESULT_ACCEPTED", "vote_state": { "yes_count": "1", "no_count": "0", "abstain_count": "0", "veto_count": "0" }, "windows": { "min_execution_period": "0s", "voting_period": "432000s" }, "executor_result": "EXECUTOR_RESULT_NOT_RUN", "messages": [ { "@type": "/cosmos.bank.v1beta1.MsgSend", "from_address": "cosmos1..", "to_address": "cosmos1..", "amount": [ { "denom": "stake", "amount": "100000000" } ] } ] } ], "pagination": { "next_key": null, "total": "1" } } ``` #### VoteByProposalVoter The `VoteByProposalVoter` endpoint allows users to query for vote by proposal id and voter account address. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/group/v1/vote_by_proposal_voter/{proposal_id}/{voter} ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl localhost:1317/cosmos/group/v1/vote_by_proposal_voter/1/cosmos1.. ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "vote": { "proposal_id": "1", "voter": "cosmos1..", "choice": "CHOICE_YES", "metadata": "AQ==", "submitted_at": "2021-12-17T08:05:02.490164009Z" } } ``` #### VotesByProposal The `VotesByProposal` endpoint allows users to query for votes by proposal id with pagination flags.
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/group/v1/votes_by_proposal/{proposal_id} ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl localhost:1317/cosmos/group/v1/votes_by_proposal/1 ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "votes": [ { "proposal_id": "1", "voter": "cosmos1..", "option": "CHOICE_YES", "metadata": "AQ==", "submit_time": "2021-12-17T08:05:02.490164009Z" } ], "pagination": { "next_key": null, "total": "1" } } ``` #### VotesByVoter The `VotesByVoter` endpoint allows users to query for votes by voter account address with pagination flags. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/group/v1/votes_by_voter/{voter} ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl localhost:1317/cosmos/group/v1/votes_by_voter/cosmos1.. ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "votes": [ { "proposal_id": "1", "voter": "cosmos1..", "choice": "CHOICE_YES", "metadata": "AQ==", "submitted_at": "2021-12-17T08:05:02.490164009Z" } ], "pagination": { "next_key": null, "total": "1" } } ``` ## Metadata The group module has four locations for metadata where users can provide further context about the on-chain actions they are taking. By default, each metadata field is limited to 255 characters and can hold JSON, stored either on-chain or off-chain depending on the amount of data required. Here we provide a recommendation for the JSON structure and where the data should be stored. There are two important factors in making these recommendations.
First, the group and gov modules should be consistent with one another; note that the number of proposals made by all groups may be quite large. Second, client applications such as block explorers and governance interfaces should be able to rely on a consistent metadata structure across chains. ### Proposal Location: off-chain as json object stored on IPFS (mirrors [gov proposal](/sdk/latest/modules/gov/README#metadata)) ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "title": "", "authors": [""], "summary": "", "details": "", "proposal_forum_url": "", "vote_option_context": "" } ``` The `authors` field is an array of strings, allowing multiple authors to be listed in the metadata. In v0.46, the `authors` field is a comma-separated string. Frontends are encouraged to support both formats for backwards compatibility. ### Vote Location: on-chain as json within 255 character limit (mirrors [gov vote](/sdk/latest/modules/gov/README#metadata)) ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "justification": "" } ``` ### Group Location: off-chain as json object stored on IPFS ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "name": "", "description": "", "group_website_url": "", "group_forum_url": "" } ``` ### Decision policy Location: on-chain as json within 255 character limit ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "name": "", "description": "" } ``` # x/mint Source: https://docs.cosmos.network/sdk/latest/modules/mint/README The x/mint module handles the regular minting of new tokens in a configurable manner. The `x/mint` module handles the regular minting of new tokens in a configurable manner.
## Contents * [State](#state) * [Minter](#minter) * [Params](#params) * [Begin-Block](#begin-block) * [NextInflationRate](#nextinflationrate) * [NextAnnualProvisions](#nextannualprovisions) * [BlockProvision](#blockprovision) * [Parameters](#parameters) * [Events](#events) * [BeginBlocker](#beginblocker) * [Client](#client) * [CLI](#cli) * [gRPC](#grpc) * [REST](#rest) ## Concepts ### The Minting Mechanism The default minting mechanism was designed to: * allow for a flexible inflation rate determined by market demand targeting a particular bonded-stake ratio * effect a balance between market liquidity and staked supply In order to best determine the appropriate market rate for inflation rewards, a moving change rate is used. The moving change rate mechanism ensures that if the % bonded is either over or under the goal %-bonded, the inflation rate will adjust to further incentivize or disincentivize being bonded, respectively. Setting the goal %-bonded at less than 100% encourages the network to maintain some non-staked tokens which should help provide some liquidity. It can be broken down in the following way: * If the actual percentage of bonded tokens is below the goal %-bonded the inflation rate will increase until a maximum value is reached * If the goal % bonded (67% in Cosmos-Hub) is maintained, then the inflation rate will stay constant * If the actual percentage of bonded tokens is above the goal %-bonded the inflation rate will decrease until a minimum value is reached ### Custom Minters As of Cosmos SDK v0.53.0, developers can set a custom `MintFn` for the module for specialized token minting logic. The function signature that a `MintFn` must implement is as follows: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // MintFn defines the function that needs to be implemented in order to customize the minting process. 
type MintFn func(ctx sdk.Context, k *Keeper) error ``` This can be passed to the `Keeper` upon creation with an additional `Option`: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} app.MintKeeper = mintkeeper.NewKeeper( appCodec, runtime.NewKVStoreService(keys[minttypes.StoreKey]), app.StakingKeeper, app.AccountKeeper, app.BankKeeper, authtypes.FeeCollectorName, authtypes.NewModuleAddress(govtypes.ModuleName).String(), // mintkeeper.WithMintFn(CUSTOM_MINT_FN), // custom mintFn can be added here ) ``` #### Custom Minter DI Example Below is a simple approach to creating a custom mint function with extra dependencies in DI configurations. For this basic example, we will make the minter simply double the supply of `foo` coin. First, we will define a function that takes our required dependencies and returns a `MintFn`. ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // MyCustomMintFunction is a custom mint function that doubles the supply of `foo` coin. func MyCustomMintFunction(bank bankkeeper.BaseKeeper) mintkeeper.MintFn { return func(ctx sdk.Context, k *mintkeeper.Keeper) error { supply := bank.GetSupply(ctx, "foo") // minting an amount equal to the current supply doubles it err := k.MintCoins(ctx, sdk.NewCoins(supply)) if err != nil { return err } return nil } } ``` Then, pass the function defined above into the `depinject.Supply` function with the required dependencies. ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // NewSimApp returns a reference to an initialized SimApp. func NewSimApp( logger log.Logger, db dbm.DB, traceStore io.Writer, loadLatest bool, appOpts servertypes.AppOptions, baseAppOptions ...func(*baseapp.BaseApp), ) *SimApp { var ( app = &SimApp{ } appBuilder *runtime.AppBuilder appConfig = depinject.Configs( AppConfig, depinject.Supply( appOpts, logger, // our custom mint function with the necessary dependency passed in.
MyCustomMintFunction(app.BankKeeper), ), ) ) // ... } ``` ## State ### Minter The minter is a space for holding current inflation information. * Minter: `0x00 -> ProtocolBuffer(minter)` ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/mint/v1beta1/mint.proto#L10-L24 ``` ### Params The mint module stores its params in state with the prefix of `0x01`; they can be updated via governance or by the address with authority. **Note:** The `MaxSupply` parameter controls the maximum supply of tokens the module can mint. A value of `0` indicates an unlimited supply. * Params: `mint/params -> legacy_amino(params)` ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/mint/v1beta1/mint.proto#L26-L59 ``` ## Begin-Block Minting parameters are recalculated and inflation paid at the beginning of each block. ### Inflation rate calculation Inflation rate is calculated using an "inflation calculation function" that's passed to the `NewAppModule` function. If no function is passed, then the SDK's default inflation function will be used (`NextInflationRate`). If custom inflation calculation logic is needed, this can be achieved by defining and passing a function that matches `InflationCalculationFn`'s signature. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type InflationCalculationFn func(ctx sdk.Context, minter Minter, params Params, bondedRatio math.LegacyDec) math.LegacyDec ``` #### NextInflationRate The target annual inflation rate is recalculated each block. The inflation is also subject to a rate change (positive or negative) depending on the distance from the desired ratio (67%).
The maximum rate change possible is defined to be 13% per year; however, the annual inflation is capped between 7% and 20%. ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} NextInflationRate(params Params, bondedRatio math.LegacyDec) (inflation math.LegacyDec) { inflationRateChangePerYear = (1 - bondedRatio/params.GoalBonded) * params.InflationRateChange inflationRateChange = inflationRateChangePerYear/blocksPerYr // increase the new annual inflation for this next block inflation += inflationRateChange if inflation > params.InflationMax { inflation = params.InflationMax } if inflation < params.InflationMin { inflation = params.InflationMin } return inflation } ``` ### NextAnnualProvisions Calculate the annual provisions based on current total supply and inflation rate. This parameter is calculated once per block. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} NextAnnualProvisions(params Params, totalSupply math.LegacyDec) (provisions math.LegacyDec) { return Inflation * totalSupply } ``` ### BlockProvision Calculate the provisions generated for each block based on current annual provisions. The provisions are then minted by the `mint` module's `ModuleMinterAccount` and then transferred to the `auth`'s `FeeCollector` `ModuleAccount`.
```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} BlockProvision(params Params) sdk.Coin { provisionAmt = AnnualProvisions / params.BlocksPerYear return sdk.NewCoin(params.MintDenom, provisionAmt.Truncate()) } ``` ## Parameters The minting module contains the following parameters: | Key | Type | Example | | ------------------- | ----------------- | ---------------------- | | MintDenom | string | "uatom" | | InflationRateChange | string (dec) | "0.130000000000000000" | | InflationMax | string (dec) | "0.200000000000000000" | | InflationMin | string (dec) | "0.070000000000000000" | | GoalBonded | string (dec) | "0.670000000000000000" | | BlocksPerYear | string (uint64) | "6311520" | | MaxSupply | string (math.Int) | "0" | A `MaxSupply` value of `0` means no maximum supply is enforced. Minting stops automatically once the total supply reaches the configured `MaxSupply`. For legacy Amino JSON compatibility, `max_supply` is encoded even when set to `"0"`. ## Events The minting module emits the following events: ### BeginBlocker | Type | Attribute Key | Attribute Value | | ---- | ------------------ | -------------------- | | mint | bonded\_ratio | `{bondedRatio}` | | mint | inflation | `{inflation}` | | mint | annual\_provisions | `{annualProvisions}` | | mint | amount | `{amount}` | ## Client ### CLI A user can query and interact with the `mint` module using the CLI. #### Query The `query` commands allow users to query `mint` state.
```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query mint --help ``` ##### annual-provisions The `annual-provisions` command allows users to query the current minting annual provisions value ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query mint annual-provisions [flags] ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query mint annual-provisions ``` Example Output: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} 22268504368893.612100895088410693 ``` ##### inflation The `inflation` command allows users to query the current minting inflation value ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query mint inflation [flags] ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query mint inflation ``` Example Output: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} 0.199200302563256955 ``` ##### params The `params` command allows users to query the current minting parameters ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query mint params [flags] ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query mint params ``` Example Output: ```yml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} blocks_per_year: "4360000" goal_bonded: "0.670000000000000000" inflation_max: "0.200000000000000000" inflation_min: "0.070000000000000000" inflation_rate_change: "0.130000000000000000" max_supply: "0" mint_denom: stake ``` ### gRPC A user can query the `mint` module using gRPC endpoints.
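Note that, as the example outputs below show, gRPC and REST render `Dec` fields as integer strings scaled by 10^18 (e.g. `"inflation": "130197115720711261"` denotes 0.130197115720711261), while the CLI prints the human-readable decimal form. As a hedged sketch of how a client might convert such a string (assuming the standard 18-decimal `LegacyDec` encoding; `decFromProtoJSON` is an illustrative helper, not an SDK function):

```go
package main

import (
	"fmt"
	"math/big"
)

// decFromProtoJSON converts a Dec rendered as an integer string scaled by
// 10^18 (as in the gRPC example outputs) into an exact decimal string.
// Illustrative helper only; not part of the Cosmos SDK API.
func decFromProtoJSON(s string) (string, error) {
	i, ok := new(big.Int).SetString(s, 10)
	if !ok {
		return "", fmt.Errorf("invalid dec string: %q", s)
	}
	exp := new(big.Int).Exp(big.NewInt(10), big.NewInt(18), nil) // 10^18
	r := new(big.Rat).SetFrac(i, exp)
	// FloatString(18) is exact here because the denominator is exactly 10^18.
	return r.FloatString(18), nil
}

func main() {
	v, err := decFromProtoJSON("130197115720711261")
	if err != nil {
		panic(err)
	}
	fmt.Println(v) // prints 0.130197115720711261, i.e. ~13% inflation
}
```

This matches the CLI output style shown above (`0.199200302563256955`), so the two transports report the same values in different encodings.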
#### AnnualProvisions The `AnnualProvisions` endpoint allows users to query the current minting annual provisions value ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos.mint.v1beta1.Query/AnnualProvisions ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext localhost:9090 cosmos.mint.v1beta1.Query/AnnualProvisions ``` Example Output: ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "annualProvisions": "1432452520532626265712995618" } ``` #### Inflation The `Inflation` endpoint allows users to query the current minting inflation value ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos.mint.v1beta1.Query/Inflation ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext localhost:9090 cosmos.mint.v1beta1.Query/Inflation ``` Example Output: ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "inflation": "130197115720711261" } ``` #### Params The `Params` endpoint allows users to query the current minting parameters ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos.mint.v1beta1.Query/Params ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext localhost:9090 cosmos.mint.v1beta1.Query/Params ``` Example Output: ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "params": { "mintDenom": "stake", "inflationRateChange": "130000000000000000", "inflationMax": "200000000000000000", "inflationMin": "70000000000000000", "goalBonded": "670000000000000000", "blocksPerYear": "6311520", "maxSupply": "0" } } ``` ### REST A user can query the 
`mint` module using REST endpoints. #### annual-provisions ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/mint/v1beta1/annual_provisions ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl "localhost:1317/cosmos/mint/v1beta1/annual_provisions" ``` Example Output: ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "annualProvisions": "1432452520532626265712995618" } ``` #### inflation ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/mint/v1beta1/inflation ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl "localhost:1317/cosmos/mint/v1beta1/inflation" ``` Example Output: ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "inflation": "130197115720711261" } ``` #### params ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/mint/v1beta1/params ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl "localhost:1317/cosmos/mint/v1beta1/params" ``` Example Output: ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "params": { "mintDenom": "stake", "inflationRateChange": "130000000000000000", "inflationMax": "200000000000000000", "inflationMin": "70000000000000000", "goalBonded": "670000000000000000", "blocksPerYear": "6311520", "maxSupply": "0" } } ``` # Module Directory Source: https://docs.cosmos.network/sdk/latest/modules/modules Here are some production-grade modules that can be used in Cosmos SDK applications, along with their respective documentation. 
Here are some production-grade modules that can be used in Cosmos SDK applications, along with their respective documentation: ## Essential Modules Essential modules include functionality that *must* be included in your Cosmos SDK blockchain. These modules provide the core behaviors that are needed for users and operators such as balance tracking, proof-of-stake capabilities and governance. * [Auth](/sdk/latest/modules/auth/auth) - Authentication of accounts and transactions for Cosmos SDK applications. * [Bank](/sdk/latest/modules/bank/README) - Token transfer functionalities. * [Circuit](/sdk/latest/modules/circuit/README) - Circuit breaker module for pausing messages. * [Consensus](/sdk/latest/modules/consensus/README) - Consensus module for modifying CometBFT's ABCI consensus params. * [Distribution](/sdk/latest/modules/distribution/README) - Fee distribution, and staking token provision distribution. * [Evidence](/sdk/latest/modules/evidence/README) - Evidence handling for double signing, misbehaviour, etc. * [Governance](/sdk/latest/modules/gov/README) - On-chain proposals and voting. * [Genutil](/sdk/latest/modules/genutil/README) - Genesis utilities for the Cosmos SDK. * [Mint](/sdk/latest/modules/mint/README) - Creation of new units of staking token. * [Slashing](/sdk/latest/modules/slashing/README) - Validator punishment mechanisms. * [Staking](/sdk/latest/modules/staking/README) - Proof-of-Stake layer for public blockchains. * [Upgrade](/sdk/latest/modules/upgrade/README) - Software upgrades handling and coordination. ## Supplementary Modules Supplementary modules are modules that are maintained in the Cosmos SDK but are not necessary for the core functionality of your blockchain. They can be thought of as ways to extend the capabilities of your blockchain or further specialize it. * [Authz](/sdk/latest/modules/authz/README) - Authorization for accounts to perform actions on behalf of other accounts. 
* [Epochs](/sdk/latest/modules/epochs/README) - Lets SDK modules register logic that runs on timed tickers (epochs).
* [Feegrant](/sdk/latest/modules/feegrant/README) - Grant fee allowances for executing transactions.
* [Group](/sdk/latest/modules/group/README) - Allows for the creation and management of on-chain multisig accounts.
* [NFT](/sdk/latest/modules/nft/README) - NFT module implemented based on [ADR43](/sdk/latest/reference/architecture/adr-043-nft-module).
* [ProtocolPool](/sdk/latest/modules/protocolpool/README) - Extended management of community pool functionality.

## Deprecated Modules

The following modules are deprecated. They are no longer maintained and will eventually be removed in an upcoming release of the Cosmos SDK per our [release process](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/RELEASE_PROCESS.md).

* [Crisis](/sdk/latest/modules/crisis/README) - *Deprecated* - Halting the blockchain under certain circumstances (e.g., if an invariant is broken).
* [Params](/sdk/latest/modules/params/README) - *Deprecated* - Globally available parameter store.

To learn more about the process of building modules, visit the [building modules reference documentation](/sdk/latest/guides/module-design/module-design-considerations).

## IBC

The IBC module for the SDK is maintained by the IBC Go team in its [own repository](https://github.com/cosmos/ibc-go). Additionally, as of v0.50 the [capability module](https://github.com/cosmos/ibc-go/tree/fdd664698d79864f1e00e147f9879e58497b5ef1/modules/capability) is likewise maintained by the IBC Go team in the same repository.

## CosmWasm

The CosmWasm module enables smart contracts. Learn more at the [CosmWasm documentation site](https://book.cosmwasm.com/), or visit [the repository](https://github.com/CosmWasm/cosmwasm).
## EVM

Read more about writing smart contracts with Solidity at the official [`evm` documentation page](https://evm.cosmos.network/).

# x/nft

Source: https://docs.cosmos.network/sdk/latest/modules/nft/README

## Abstract

`x/nft` has been moved to [`./contrib/x/nft`](https://github.com/cosmos/cosmos-sdk/tree/main/contrib/x/nft) and is no longer actively maintained as part of the core Cosmos SDK. It is still available for use but is not included in the SDK Bug Bounty program. It was moved because it was never widely adopted.

`x/nft` is an implementation of a Cosmos SDK module, per [ADR 43](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-043-nft-module.md), that lets you create NFT classes, mint, transfer, and update NFTs, and run various queries by integrating the module. It is fully compatible with the ERC721 specification.

## Contents

* [Concepts](#concepts)
  * [Class](#class)
  * [NFT](#nft)
* [State](#state)
  * [Class](#class-1)
  * [NFT](#nft-1)
  * [NFTOfClassByOwner](#nftofclassbyowner)
  * [Owner](#owner)
  * [TotalSupply](#totalsupply)
* [Messages](#messages)
  * [MsgSend](#msgsend)
* [Events](#events)

## Concepts

### Class

The `x/nft` module defines a `Class` struct to describe the common characteristics of a class of NFTs. Under a class, you can create a variety of NFTs; a class is equivalent to an ERC721 contract on Ethereum. The design is defined in [ADR 043](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-043-nft-module.md).

### NFT

NFT stands for Non-Fungible Token. Because NFTs are non-fungible, they can be used to represent unique things. The NFTs implemented by this module are fully compatible with the Ethereum ERC721 standard.

## State

### Class

A `Class` is mainly composed of `id`, `name`, `symbol`, `description`, `uri`, `uri_hash`, and `data`, where `id` is the unique identifier of the class (similar to an Ethereum ERC721 contract address); the other fields are optional.
* Class: `0x01 | classID | -> ProtocolBuffer(Class)`

### NFT

An NFT is mainly composed of `class_id`, `id`, `uri`, `uri_hash`, and `data`. Together, `class_id` and `id` form a two-tuple that uniquely identifies an NFT. `uri` and `uri_hash` are optional and identify the off-chain storage location of the NFT. `data` is an `Any` type; chains using the `x/nft` module can customize NFTs by extending this field.

* NFT: `0x02 | classID | 0x00 | nftID |-> ProtocolBuffer(NFT)`

### NFTOfClassByOwner

NFTOfClassByOwner implements querying all NFTs of a class held by a given owner, without any other redundant functionality.

* NFTOfClassByOwner: `0x03 | owner | 0x00 | classID | 0x00 | nftID |-> 0x01`

### Owner

Since the NFT itself has no field indicating its owner, an additional key-value pair is used to store ownership. The pair is updated synchronously whenever an NFT is transferred.

* OwnerKey: `0x04 | classID | 0x00 | nftID |-> owner`

### TotalSupply

TotalSupply tracks the number of NFTs under a given class: a mint operation under the class increases the supply by one, and a burn operation decreases it by one.

* TotalSupplyKey: `0x05 | classID |-> totalSupply`

## Messages

In this section we describe the processing of messages for the NFT module. The validation of `ClassID` and `NftID` is left to the app developer; the SDK does not provide any validation for these fields.

### MsgSend

You can use the `MsgSend` message to transfer the ownership of an NFT. This is a function provided by the `x/nft` module. You can also use the `Transfer` method to implement your own transfer logic, but you then need to pay extra attention to transfer permissions.

The message handling should fail if:

* the provided `ClassID` does not exist.
* the provided `Id` does not exist.
* the provided `Sender` is not the owner of the NFT.
## Events

The nft module emits proto events defined in [the Protobuf reference](https://buf.build/cosmos/cosmos-sdk/docs/main:cosmos.nft.v1beta1).

# x/params

Source: https://docs.cosmos.network/sdk/latest/modules/params/README

NOTE: `x/params` is deprecated as of Cosmos SDK v0.53 and will be removed in the next release.

## Abstract

Package params provides a globally available parameter store.

There are two main types, Keeper and Subspace. Subspace is an isolated namespace for a paramstore, where keys are prefixed by a preconfigured subspace name. Keeper has permission to access all existing subspaces.

Subspace can be used by individual keepers, which need a private parameter store that other keepers cannot modify. The params Keeper can be used to add a route to the `x/gov` router in order to modify any parameter in case a proposal passes.

The following sections explain how to use the params module for master and user modules.

## Contents

* [Keeper](#keeper)
* [Subspace](#subspace)
  * [Key](#key)
  * [KeyTable](#keytable)
  * [ParamSet](#paramset)

## Keeper

In the app initialization stage, [subspaces](#subspace) can be allocated for other modules' keepers using `Keeper.Subspace` and are stored in `Keeper.spaces`. Those modules can then obtain a reference to their specific parameter store through `Keeper.GetSubspace`.

Example:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type ExampleKeeper struct {
	paramSpace paramtypes.Subspace
}

func (k ExampleKeeper) SetParams(ctx sdk.Context, params types.Params) {
	k.paramSpace.SetParamSet(ctx, &params)
}
```

## Subspace

`Subspace` is a prefixed subspace of the parameter store. Each module that uses the parameter store takes a `Subspace`, isolating its access permissions.

### Key

Parameter keys are human-readable alphanumeric strings.
A parameter for the key `"ExampleParameter"` is stored under `[]byte("SubspaceName" + "/" + "ExampleParameter")`, where `"SubspaceName"` is the name of the subspace.

Subkeys are secondary parameter keys that are used along with a primary parameter key. Subkeys can be used for grouping or for dynamic parameter key generation at runtime.

### KeyTable

All of the parameter keys that will be used should be registered at compile time. `KeyTable` is essentially a `map[string]attribute`, where the `string` is a parameter key.

Currently, `attribute` consists of a `reflect.Type`, which indicates the parameter type and is used to check that a provided key and value are compatible and registered, as well as a function `ValueValidatorFn` to validate values.

Only primary keys have to be registered on the `KeyTable`. Subkeys inherit the attribute of the primary key.

### ParamSet

Modules often define parameters as a proto message. The generated struct can implement the `ParamSet` interface to be used with the following methods:

* `KeyTable.RegisterParamSet()`: registers all parameters in the struct
* `Subspace.{Get, Set}ParamSet()`: gets parameters into and sets parameters from the struct

The implementor should be a pointer in order to use `GetParamSet()`.

# x/protocolpool

Source: https://docs.cosmos.network/sdk/latest/modules/protocolpool/README

## Concepts

`x/protocolpool` is a supplemental Cosmos SDK module that handles functionality for community pool funds. The module provides a separate module account for the community pool, making it easier to track the pool assets. Starting with v0.53 of the Cosmos SDK, community funds can be tracked using this module instead of the `x/distribution` module. Funds are migrated automatically from the `x/distribution` module's community pool to `x/protocolpool`'s module account. This module is `supplemental`; it is not required to run a Cosmos SDK chain.
`x/protocolpool` enhances the community pool functionality provided by `x/distribution` and enables custom modules to further extend the community pool.

Note: *as long as an external community pool keeper (here, `x/protocolpool`) is wired in DI configs, `x/distribution` will automatically use it for its external pool.*

## Usage Limitations

The following `x/distribution` handlers will now return an error when the `protocolpool` module is used with `x/distribution`:

**QueryService**

* `CommunityPool`

**MsgService**

* `CommunityPoolSpend`
* `FundCommunityPool`

If you have services that rely on this functionality from `x/distribution`, please update them to use the `x/protocolpool` equivalents.

## State Transitions

### FundCommunityPool

FundCommunityPool can be called by any valid account to send funds to the `x/protocolpool` module account.

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// FundCommunityPool defines a method to allow an account to directly
// fund the community pool.
rpc FundCommunityPool(MsgFundCommunityPool) returns (MsgFundCommunityPoolResponse);
```

### CommunityPoolSpend

CommunityPoolSpend can be called by the module authority (default governance module account) or any account with authorization to spend funds from the `x/protocolpool` module account to a receiver address.

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// CommunityPoolSpend defines a governance operation for sending tokens from
// the community pool in the x/protocolpool module to another account, which
// could be the governance module itself. The authority is defined in the
// keeper.
rpc CommunityPoolSpend(MsgCommunityPoolSpend) returns (MsgCommunityPoolSpendResponse);
```

### CreateContinuousFund

CreateContinuousFund is a message used to initiate a continuous fund for a specific recipient.
The proposed percentage of funds will be distributed only upon a withdraw request from the recipient. The fund distribution continues until the expiry time is reached or the continuous fund is canceled. NOTE: This feature is designed to work with the SDK's default bond denom.

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// CreateContinuousFund defines a method to distribute a percentage of funds to an address continuously.
// This ContinuousFund can be indefinite or run until a given expiry time.
// Funds come from validator block rewards from x/distribution, but may also come from
// any user who funds the ProtocolPoolEscrow module account directly through x/bank.
rpc CreateContinuousFund(MsgCreateContinuousFund) returns (MsgCreateContinuousFundResponse);
```

### CancelContinuousFund

CancelContinuousFund is a message used to cancel an existing continuous fund proposal for a specific recipient. Canceling a continuous fund stops further distribution of funds, and the state object is removed from storage.

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// CancelContinuousFund defines a method for cancelling continuous fund.
rpc CancelContinuousFund(MsgCancelContinuousFund) returns (MsgCancelContinuousFundResponse);
```

## Messages

### MsgFundCommunityPool

This message sends coins directly from the sender to the community pool. If you know the `x/protocolpool` module account address, you can use a bank `send` transaction directly instead.

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/proto/cosmos/protocolpool/v1/tx.proto#L43-L53
```

* The message will fail if the amount cannot be transferred from the sender to the `x/protocolpool` module account.
```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (k Keeper) FundCommunityPool(ctx context.Context, amount sdk.Coins, sender sdk.AccAddress) error {
	return k.bankKeeper.SendCoinsFromAccountToModule(ctx, sender, types.ModuleName, amount)
}
```

### MsgCommunityPoolSpend

This message distributes funds from the `x/protocolpool` module account to the recipient using the `DistributeFromCommunityPool` keeper method.

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/proto/cosmos/protocolpool/v1/tx.proto#L58-L69
```

The message will fail under the following conditions:

* The amount cannot be transferred to the recipient from the `x/protocolpool` module account.
* The `recipient` address is restricted.

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (k Keeper) DistributeFromCommunityPool(ctx context.Context, amount sdk.Coins, receiveAddr sdk.AccAddress) error {
	return k.bankKeeper.SendCoinsFromModuleToAccount(ctx, types.ModuleName, receiveAddr, amount)
}
```

### MsgCreateContinuousFund

This message is used to create a continuous fund for a specific recipient. The proposed percentage of funds will be distributed only upon a withdraw request from the recipient. This fund distribution continues until the expiry time is reached or the continuous fund is canceled.

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/proto/cosmos/protocolpool/v1/tx.proto#L114-L130
```

The message will fail under the following conditions:

* The recipient address is empty or restricted.
* The percentage is zero, negative, or greater than one.
* The expiry time is earlier than the current block time.
If two continuous fund proposals to the same address are created, the previous ContinuousFund will be updated with the new ContinuousFund. ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} package keeper import ( "context" "fmt" "cosmossdk.io/math" sdk "github.com/cosmos/cosmos-sdk/types" sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" "github.com/cosmos/cosmos-sdk/x/protocolpool/types" ) type MsgServer struct { Keeper } var _ types.MsgServer = MsgServer{ } // NewMsgServerImpl returns an implementation of the protocolpool MsgServer interface // for the provided Keeper. func NewMsgServerImpl(keeper Keeper) types.MsgServer { return &MsgServer{ Keeper: keeper } } func (k MsgServer) FundCommunityPool(ctx context.Context, msg *types.MsgFundCommunityPool) (*types.MsgFundCommunityPoolResponse, error) { sdkCtx := sdk.UnwrapSDKContext(ctx) depositor, err := k.authKeeper.AddressCodec().StringToBytes(msg.Depositor) if err != nil { return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid depositor address: %s", err) } if err := validateAmount(msg.Amount); err != nil { return nil, err } // send funds to community pool module account if err := k.Keeper.FundCommunityPool(sdkCtx, msg.Amount, depositor); err != nil { return nil, err } return &types.MsgFundCommunityPoolResponse{ }, nil } func (k MsgServer) CommunityPoolSpend(ctx context.Context, msg *types.MsgCommunityPoolSpend) (*types.MsgCommunityPoolSpendResponse, error) { sdkCtx := sdk.UnwrapSDKContext(ctx) if err := k.validateAuthority(msg.Authority); err != nil { return nil, err } if err := validateAmount(msg.Amount); err != nil { return nil, err } recipient, err := k.authKeeper.AddressCodec().StringToBytes(msg.Recipient) if err != nil { return nil, err } // distribute funds from community pool module account if err := k.DistributeFromCommunityPool(sdkCtx, msg.Amount, recipient); err != nil { return nil, err } sdkCtx.Logger().Debug("transferred from the community 
pool", "amount", msg.Amount.String(), "recipient", msg.Recipient) return &types.MsgCommunityPoolSpendResponse{ }, nil } func (k MsgServer) CreateContinuousFund(ctx context.Context, msg *types.MsgCreateContinuousFund) (*types.MsgCreateContinuousFundResponse, error) { sdkCtx := sdk.UnwrapSDKContext(ctx) if err := k.validateAuthority(msg.Authority); err != nil { return nil, err } recipient, err := k.Keeper.authKeeper.AddressCodec().StringToBytes(msg.Recipient) if err != nil { return nil, err } // deny creation if we know this address is blocked from receiving funds if k.bankKeeper.BlockedAddr(recipient) { return nil, fmt.Errorf("recipient is blocked in the bank keeper: %s", msg.Recipient) } has, err := k.ContinuousFunds.Has(sdkCtx, recipient) if err != nil { return nil, err } if has { return nil, fmt.Errorf("continuous fund already exists for recipient %s", msg.Recipient) } // Validate the message fields err = validateContinuousFund(sdkCtx, *msg) if err != nil { return nil, err } // Check if total funds percentage exceeds 100% // If exceeds, we should not setup continuous fund proposal. 
totalStreamFundsPercentage := math.LegacyZeroDec() err = k.ContinuousFunds.Walk(sdkCtx, nil, func(key sdk.AccAddress, value types.ContinuousFund) (stop bool, err error) { totalStreamFundsPercentage = totalStreamFundsPercentage.Add(value.Percentage) return false, nil }) if err != nil { return nil, err } totalStreamFundsPercentage = totalStreamFundsPercentage.Add(msg.Percentage) if totalStreamFundsPercentage.GT(math.LegacyOneDec()) { return nil, fmt.Errorf("cannot set continuous fund proposal\ntotal funds percentage exceeds 100\ncurrent total percentage: %s", totalStreamFundsPercentage.Sub(msg.Percentage).MulInt64(100).TruncateInt().String()) } // Create continuous fund proposal cf := types.ContinuousFund{ Recipient: msg.Recipient, Percentage: msg.Percentage, Expiry: msg.Expiry, } // Set continuous fund to the state err = k.ContinuousFunds.Set(sdkCtx, recipient, cf) if err != nil { return nil, err } return &types.MsgCreateContinuousFundResponse{ }, nil } func (k MsgServer) CancelContinuousFund(ctx context.Context, msg *types.MsgCancelContinuousFund) (*types.MsgCancelContinuousFundResponse, error) { sdkCtx := sdk.UnwrapSDKContext(ctx) if err := k.validateAuthority(msg.Authority); err != nil { return nil, err } recipient, err := k.Keeper.authKeeper.AddressCodec().StringToBytes(msg.Recipient) if err != nil { return nil, err } canceledHeight := sdkCtx.BlockHeight() canceledTime := sdkCtx.BlockTime() has, err := k.ContinuousFunds.Has(sdkCtx, recipient) if err != nil { return nil, fmt.Errorf("cannot get continuous fund for recipient %w", err) } if !has { return nil, fmt.Errorf("cannot cancel continuous fund for recipient %s - does not exist", msg.Recipient) } if err := k.ContinuousFunds.Remove(sdkCtx, recipient); err != nil { return nil, fmt.Errorf("failed to remove continuous fund for recipient %s: %w", msg.Recipient, err) } return &types.MsgCancelContinuousFundResponse{ CanceledTime: canceledTime, CanceledHeight: uint64(canceledHeight), Recipient: msg.Recipient, }, nil } 
func (k MsgServer) UpdateParams(ctx context.Context, msg *types.MsgUpdateParams) (*types.MsgUpdateParamsResponse, error) { sdkCtx := sdk.UnwrapSDKContext(ctx) if err := k.validateAuthority(msg.GetAuthority()); err != nil { return nil, err } if err := msg.Params.Validate(); err != nil { return nil, fmt.Errorf("invalid params: %w", err) } if err := k.Params.Set(sdkCtx, msg.Params); err != nil { return nil, fmt.Errorf("failed to set params: %w", err) } return &types.MsgUpdateParamsResponse{ }, nil } ``` ### MsgCancelContinuousFund This message is used to cancel an existing continuous fund proposal for a specific recipient. Once canceled, the continuous fund will no longer distribute funds at each begin block, and the state object will be removed. ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/proto/cosmos/protocolpool/v1/tx.proto#L118-L129 ``` The message will fail under the following conditions: * The recipient address is empty or restricted. * The ContinuousFund for the recipient does not exist. ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} package keeper import ( "context" "fmt" "cosmossdk.io/math" sdk "github.com/cosmos/cosmos-sdk/types" sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" "github.com/cosmos/cosmos-sdk/x/protocolpool/types" ) type MsgServer struct { Keeper } var _ types.MsgServer = MsgServer{ } // NewMsgServerImpl returns an implementation of the protocolpool MsgServer interface // for the provided Keeper. 
func NewMsgServerImpl(keeper Keeper) types.MsgServer { return &MsgServer{ Keeper: keeper } } func (k MsgServer) FundCommunityPool(ctx context.Context, msg *types.MsgFundCommunityPool) (*types.MsgFundCommunityPoolResponse, error) { sdkCtx := sdk.UnwrapSDKContext(ctx) depositor, err := k.authKeeper.AddressCodec().StringToBytes(msg.Depositor) if err != nil { return nil, sdkerrors.ErrInvalidAddress.Wrapf("invalid depositor address: %s", err) } if err := validateAmount(msg.Amount); err != nil { return nil, err } // send funds to community pool module account if err := k.Keeper.FundCommunityPool(sdkCtx, msg.Amount, depositor); err != nil { return nil, err } return &types.MsgFundCommunityPoolResponse{ }, nil } func (k MsgServer) CommunityPoolSpend(ctx context.Context, msg *types.MsgCommunityPoolSpend) (*types.MsgCommunityPoolSpendResponse, error) { sdkCtx := sdk.UnwrapSDKContext(ctx) if err := k.validateAuthority(msg.Authority); err != nil { return nil, err } if err := validateAmount(msg.Amount); err != nil { return nil, err } recipient, err := k.authKeeper.AddressCodec().StringToBytes(msg.Recipient) if err != nil { return nil, err } // distribute funds from community pool module account if err := k.DistributeFromCommunityPool(sdkCtx, msg.Amount, recipient); err != nil { return nil, err } sdkCtx.Logger().Debug("transferred from the community pool", "amount", msg.Amount.String(), "recipient", msg.Recipient) return &types.MsgCommunityPoolSpendResponse{ }, nil } func (k MsgServer) CreateContinuousFund(ctx context.Context, msg *types.MsgCreateContinuousFund) (*types.MsgCreateContinuousFundResponse, error) { sdkCtx := sdk.UnwrapSDKContext(ctx) if err := k.validateAuthority(msg.Authority); err != nil { return nil, err } recipient, err := k.Keeper.authKeeper.AddressCodec().StringToBytes(msg.Recipient) if err != nil { return nil, err } // deny creation if we know this address is blocked from receiving funds if k.bankKeeper.BlockedAddr(recipient) { return nil, 
fmt.Errorf("recipient is blocked in the bank keeper: %s", msg.Recipient) } has, err := k.ContinuousFunds.Has(sdkCtx, recipient) if err != nil { return nil, err } if has { return nil, fmt.Errorf("continuous fund already exists for recipient %s", msg.Recipient) } // Validate the message fields err = validateContinuousFund(sdkCtx, *msg) if err != nil { return nil, err } // Check if total funds percentage exceeds 100% // If exceeds, we should not setup continuous fund proposal. totalStreamFundsPercentage := math.LegacyZeroDec() err = k.ContinuousFunds.Walk(sdkCtx, nil, func(key sdk.AccAddress, value types.ContinuousFund) (stop bool, err error) { totalStreamFundsPercentage = totalStreamFundsPercentage.Add(value.Percentage) return false, nil }) if err != nil { return nil, err } totalStreamFundsPercentage = totalStreamFundsPercentage.Add(msg.Percentage) if totalStreamFundsPercentage.GT(math.LegacyOneDec()) { return nil, fmt.Errorf("cannot set continuous fund proposal\ntotal funds percentage exceeds 100\ncurrent total percentage: %s", totalStreamFundsPercentage.Sub(msg.Percentage).MulInt64(100).TruncateInt().String()) } // Create continuous fund proposal cf := types.ContinuousFund{ Recipient: msg.Recipient, Percentage: msg.Percentage, Expiry: msg.Expiry, } // Set continuous fund to the state err = k.ContinuousFunds.Set(sdkCtx, recipient, cf) if err != nil { return nil, err } return &types.MsgCreateContinuousFundResponse{ }, nil } func (k MsgServer) CancelContinuousFund(ctx context.Context, msg *types.MsgCancelContinuousFund) (*types.MsgCancelContinuousFundResponse, error) { sdkCtx := sdk.UnwrapSDKContext(ctx) if err := k.validateAuthority(msg.Authority); err != nil { return nil, err } recipient, err := k.Keeper.authKeeper.AddressCodec().StringToBytes(msg.Recipient) if err != nil { return nil, err } canceledHeight := sdkCtx.BlockHeight() canceledTime := sdkCtx.BlockTime() has, err := k.ContinuousFunds.Has(sdkCtx, recipient) if err != nil { return nil, fmt.Errorf("cannot get 
continuous fund for recipient %w", err) } if !has { return nil, fmt.Errorf("cannot cancel continuous fund for recipient %s - does not exist", msg.Recipient) } if err := k.ContinuousFunds.Remove(sdkCtx, recipient); err != nil { return nil, fmt.Errorf("failed to remove continuous fund for recipient %s: %w", msg.Recipient, err) } return &types.MsgCancelContinuousFundResponse{ CanceledTime: canceledTime, CanceledHeight: uint64(canceledHeight), Recipient: msg.Recipient, }, nil } func (k MsgServer) UpdateParams(ctx context.Context, msg *types.MsgUpdateParams) (*types.MsgUpdateParamsResponse, error) { sdkCtx := sdk.UnwrapSDKContext(ctx) if err := k.validateAuthority(msg.GetAuthority()); err != nil { return nil, err } if err := msg.Params.Validate(); err != nil { return nil, fmt.Errorf("invalid params: %w", err) } if err := k.Params.Set(sdkCtx, msg.Params); err != nil { return nil, fmt.Errorf("failed to set params: %w", err) } return &types.MsgUpdateParamsResponse{ }, nil } ``` ## Client It takes the advantage of `AutoCLI` ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} package protocolpool import ( "fmt" autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" poolv1 "cosmossdk.io/api/cosmos/protocolpool/v1" "github.com/cosmos/cosmos-sdk/version" ) // AutoCLIOptions implements the autocli.HasAutoCLIConfig interface. 
func (am AppModule) AutoCLIOptions() *autocliv1.ModuleOptions { return &autocliv1.ModuleOptions{ Query: &autocliv1.ServiceCommandDescriptor{ Service: poolv1.Query_ServiceDesc.ServiceName, RpcCommandOptions: []*autocliv1.RpcCommandOptions{ { RpcMethod: "CommunityPool", Use: "community-pool", Short: "Query the amount of coins in the community pool", Example: fmt.Sprintf(`%s query protocolpool community-pool`, version.AppName), }, { RpcMethod: "ContinuousFunds", Use: "continuous-funds", Short: "Query all continuous funds", Example: fmt.Sprintf(`%s query protocolpool continuous-funds`, version.AppName), }, { RpcMethod: "ContinuousFund", Use: "continuous-fund ", Short: "Query a continuous fund by its recipient address", Example: fmt.Sprintf(`%s query protocolpool continuous-fund cosmos1...`, version.AppName), PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ ProtoField: "recipient" }}, }, }, }, Tx: &autocliv1.ServiceCommandDescriptor{ Service: poolv1.Msg_ServiceDesc.ServiceName, RpcCommandOptions: []*autocliv1.RpcCommandOptions{ { RpcMethod: "FundCommunityPool", Use: "fund-community-pool ", Short: "Funds the community pool with the specified amount", Example: fmt.Sprintf(`%s tx protocolpool fund-community-pool 100uatom --from mykey`, version.AppName), PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ ProtoField: "amount" }}, }, { RpcMethod: "CreateContinuousFund", Use: "create-continuous-fund ", Short: "Create continuous fund for a recipient with optional expiry", Example: fmt.Sprintf(`%s tx protocolpool create-continuous-fund cosmos1... 
0.2 2023-11-30T12:34:56.789Z --from mykey`, version.AppName), PositionalArgs: []*autocliv1.PositionalArgDescriptor{ { ProtoField: "recipient" }, { ProtoField: "percentage" }, { ProtoField: "expiry", Optional: true }, }, GovProposal: true, }, { RpcMethod: "CancelContinuousFund", Use: "cancel-continuous-fund ", Short: "Cancel continuous fund for a specific recipient", Example: fmt.Sprintf(`%s tx protocolpool cancel-continuous-fund cosmos1... --from mykey`, version.AppName), PositionalArgs: []*autocliv1.PositionalArgDescriptor{ { ProtoField: "recipient" }, }, GovProposal: true, }, { RpcMethod: "UpdateParams", Use: "update-params-proposal ", Short: "Submit a proposal to update protocolpool module params. Note: the entire params must be provided.", Example: fmt.Sprintf(`%s tx protocolpool update-params-proposal '{ "enabled_distribution_denoms": ["stake", "foo"] }'`, version.AppName), PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ ProtoField: "params" }}, GovProposal: true, }, }, }, } } ```

# x/slashing

Source: https://docs.cosmos.network/sdk/latest/modules/slashing/README

## Abstract

This section specifies the slashing module of the Cosmos SDK, which implements functionality first outlined in the [Cosmos Whitepaper](https://github.com/cosmos/cosmos/blob/master/WHITEPAPER.md) in June 2016.

The slashing module enables Cosmos SDK-based blockchains to disincentivize any attributable action by a protocol-recognized actor with value at stake by penalizing them ("slashing"). Penalties may include, but are not limited to:

* Burning some amount of their stake
* Removing their ability to vote on future blocks for a period of time

This module is used by the Cosmos Hub, the first hub in the Cosmos ecosystem.
## Contents

* [Concepts](#concepts)
  * [States](#states)
  * [Tombstone Caps](#tombstone-caps)
  * [Infraction Timelines](#infraction-timelines)
* [State](#state)
  * [Signing Info (Liveness)](#signing-info-liveness)
  * [Params](#params)
* [Messages](#messages)
  * [Unjail](#unjail)
* [BeginBlock](#beginblock)
  * [Liveness Tracking](#liveness-tracking)
* [Hooks](#hooks)
* [Events](#events)
* [Staking Tombstone](#staking-tombstone)
* [Parameters](#parameters)
* [CLI](#cli)
  * [Query](#query)
  * [Transactions](#transactions)
  * [gRPC](#grpc)
  * [REST](#rest)

## Concepts

### States

At any given time, any number of validators may be registered in the state machine. Each block, the top `MaxValidators` (defined by `x/staking`) validators who are not jailed become *bonded*, meaning that they may propose and vote on blocks. Validators who are *bonded* are *at stake*, meaning that part or all of their stake and their delegators' stake is at risk if they commit a protocol fault. For each of these validators we keep a `ValidatorSigningInfo` record that contains information pertaining to the validator's liveness and other infraction-related attributes.

### Tombstone Caps

In order to mitigate the impact of initially likely categories of non-malicious protocol faults, the Cosmos Hub implements for each validator a *tombstone* cap, which only allows a validator to be slashed once for a double-sign fault. For example, if you misconfigure your HSM and double-sign a bunch of old blocks, you'll only be punished for the first double-sign (and then immediately tombstoned). This will still be quite expensive and desirable to avoid, but tombstone caps somewhat blunt the economic impact of unintentional misconfiguration.

Liveness faults do not have caps, as they can't stack upon each other. Liveness bugs are "detected" as soon as the infraction occurs, and the validators are immediately put in jail, so it is not possible for them to commit multiple liveness faults without unjailing in between.
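The tombstone cap described above can be sketched as a toy state transition. The type and function names here (`signingInfo`, `handleDoubleSign`) are illustrative stand-ins, not the SDK's actual evidence-handling API:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package main

import "fmt"

// signingInfo is a pared-down stand-in for ValidatorSigningInfo.
type signingInfo struct {
	Tombstoned bool
}

// handleDoubleSign applies the tombstone rule: slash and tombstone on the
// first double-sign; ignore further double-sign evidence once tombstoned.
func handleDoubleSign(info *signingInfo) (slashed bool) {
	if info.Tombstoned {
		return false // already tombstoned: no further slashing
	}
	info.Tombstoned = true
	return true
}

func main() {
	info := &signingInfo{}
	fmt.Println(handleDoubleSign(info)) // true: first infraction is slashed
	fmt.Println(handleDoubleSign(info)) // false: capped by the tombstone
}
```

This is why, in the multiple-infraction timeline below, discovering three double-signs at once still results in only one slash.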
### Infraction Timelines

To illustrate how the `x/slashing` module handles submitted evidence through CometBFT consensus, consider the following examples:

**Definitions**:

*\[* : timeline start\
*]* : timeline end\
*Cn* : infraction `n` committed\
*Dn* : infraction `n` discovered\
*Vb* : validator bonded\
*Vu* : validator unbonded

#### Single Double Sign Infraction

\[----------C1----D1,Vu-----]

A single infraction is committed then later discovered, at which point the validator is unbonded and slashed at the full amount for the infraction.

#### Multiple Double Sign Infractions

\[----------C1--C2---C3---D1,D2,D3,Vu-----]

Multiple infractions are committed and then later discovered, at which point the validator is jailed and slashed for only one infraction. Because the validator is also tombstoned, they cannot rejoin the validator set.

## State

### Signing Info (Liveness)

Every block includes a set of precommits by the validators for the previous block, known as the `LastCommitInfo` provided by CometBFT. A `LastCommitInfo` is valid so long as it contains precommits from +2/3 of total voting power.

Proposers are incentivized to include precommits from all validators in the CometBFT `LastCommitInfo` by receiving additional fees proportional to the difference between the voting power included in the `LastCommitInfo` and +2/3 (see [fee distribution](/sdk/v0.47/build/modules/distribution/README#begin-block)).

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type LastCommitInfo struct {
	Round int32
	Votes []VoteInfo
}
```

Validators are penalized for failing to be included in the `LastCommitInfo` for some number of blocks by being automatically jailed, potentially slashed, and unbonded.

Information about a validator's liveness activity is tracked through `ValidatorSigningInfo`.
It is indexed in the store as follows:

* ValidatorSigningInfo: `0x01 | ConsAddrLen (1 byte) | ConsAddress -> ProtocolBuffer(ValSigningInfo)`
* MissedBlocksBitArray: `0x02 | ConsAddrLen (1 byte) | ConsAddress | LittleEndianUint64(signArrayIndex) -> VarInt(didMiss)` (varint is a number encoding format)

The first mapping allows us to easily look up the recent signing info for a validator based on the validator's consensus address.

The second mapping (`MissedBlocksBitArray`) acts as a bit-array of size `SignedBlocksWindow` that tells us if the validator missed the block for a given index in the bit-array. The index in the bit-array is given as a little-endian uint64. The result is a `varint` that takes on `0` or `1`, where `0` indicates the validator did not miss (did sign) the corresponding block, and `1` indicates they missed the block (did not sign).

Note that the `MissedBlocksBitArray` is not explicitly initialized up-front. Keys are added as we progress through the first `SignedBlocksWindow` blocks for a newly bonded validator. The `SignedBlocksWindow` parameter defines the size (number of blocks) of the sliding window used to track validator liveness.

The information stored for tracking validator liveness is as follows:

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/slashing/v1beta1/slashing.proto#L13-L35
```

### Params

The slashing module stores its params in state with the prefix of `0x00`; they can be updated via governance or by the address with authority.

* Params: `0x00 | ProtocolBuffer(Params)`

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/slashing/v1beta1/slashing.proto#L37-L59
```

## Messages

In this section we describe the processing of messages for the `slashing` module.
### Unjail

If a validator was automatically unbonded due to downtime and wishes to come back online & possibly rejoin the bonded set, it must send `MsgUnjail`:

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// MsgUnjail is an sdk.Msg used for unjailing a jailed validator, thus returning
// them into the bonded validator set, so they can begin receiving provisions
// and rewards again.
message MsgUnjail {
  string validator_addr = 1;
}
```

Below is pseudocode of the `MsgSrv/Unjail` RPC:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
unjail(tx MsgUnjail)
    validator = getValidator(tx.ValidatorAddr)
    if validator == nil
      fail with "No validator found"

    if getSelfDelegation(validator) == 0
      fail with "validator must self delegate before unjailing"

    if !validator.Jailed
      fail with "Validator not jailed, cannot unjail"

    info = GetValidatorSigningInfo(validator.ConsAddr)
    if info.Tombstoned
      fail with "Tombstoned validator cannot be unjailed"

    if block time < info.JailedUntil
      fail with "Validator still jailed, cannot unjail until period has expired"

    validator.Jailed = false
    setValidator(validator)

    return
```

If the validator has enough stake to be in the top `n = MaxValidators`, it will be automatically rebonded, and all delegators still delegated to the validator will be rebonded and begin to again collect provisions and rewards.

## BeginBlock

### Liveness Tracking

At the beginning of each block, we update the `ValidatorSigningInfo` for each validator and check if they've crossed below the liveness threshold over a sliding window. This sliding window is defined by `SignedBlocksWindow` and the index in this window is determined by `IndexOffset` found in the validator's `ValidatorSigningInfo`. For each block processed, the `IndexOffset` is incremented regardless of whether the validator signed or not.
Once the index is determined, the `MissedBlocksBitArray` and `MissedBlocksCounter` are updated accordingly.

Finally, in order to determine if a validator crosses below the liveness threshold, we fetch the maximum number of blocks missed, `maxMissed`, which is `SignedBlocksWindow - (MinSignedPerWindow * SignedBlocksWindow)`, and the minimum height at which we can determine liveness, `minHeight`. If the current block is greater than `minHeight` and the validator's `MissedBlocksCounter` is greater than `maxMissed`, they will be slashed by `SlashFractionDowntime`, will be jailed for `DowntimeJailDuration`, and have the following values reset: `MissedBlocksBitArray`, `MissedBlocksCounter`, and `IndexOffset`.

**Note**: Liveness slashes do **NOT** lead to a tombstoning.

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
height := block.Height

for vote in block.LastCommitInfo.Votes {
  signInfo := GetValidatorSigningInfo(vote.Validator.Address)

  // This is a relative index, so we count blocks the validator SHOULD have
  // signed. We use the 0-value default signing info if not present, except for
  // start height.
  index := signInfo.IndexOffset % SignedBlocksWindow()
  signInfo.IndexOffset++

  // Update MissedBlocksBitArray and MissedBlocksCounter. The MissedBlocksCounter
  // just tracks the sum of MissedBlocksBitArray. That way we avoid needing to
  // read/write the whole array each time.
  missedPrevious := GetValidatorMissedBlockBitArray(vote.Validator.Address, index)
  missed := !vote.SignedLastBlock

  switch {
  case !missedPrevious && missed:
    // array index has changed from not missed to missed, increment counter
    SetValidatorMissedBlockBitArray(vote.Validator.Address, index, true)
    signInfo.MissedBlocksCounter++

  case missedPrevious && !missed:
    // array index has changed from missed to not missed, decrement counter
    SetValidatorMissedBlockBitArray(vote.Validator.Address, index, false)
    signInfo.MissedBlocksCounter--

  default:
    // array index at this index has not changed; no need to update counter
  }

  if missed {
    // emit events...
  }

  minHeight := signInfo.StartHeight + SignedBlocksWindow()
  maxMissed := SignedBlocksWindow() - MinSignedPerWindow()

  // If we are past the minimum height and the validator has missed too many
  // blocks, jail and slash them.
  if height > minHeight && signInfo.MissedBlocksCounter > maxMissed {
    validator := ValidatorByConsAddr(vote.Validator.Address)

    // emit events...

    // We need to retrieve the stake distribution which signed the block, so we
    // subtract ValidatorUpdateDelay from the block height, and subtract an
    // additional 1 since this is the LastCommit.
    //
    // Note, that this CAN result in a negative "distributionHeight" up to
    // -ValidatorUpdateDelay-1, i.e. at the end of the pre-genesis block (none) =
    // at the beginning of the genesis block. That's fine since this is just used
    // to filter unbonding delegations & redelegations.
    distributionHeight := height - sdk.ValidatorUpdateDelay - 1

    SlashWithInfractionReason(vote.Validator.Address, distributionHeight, vote.Validator.Power, SlashFractionDowntime(), stakingtypes.Downtime)
    Jail(vote.Validator.Address)

    signInfo.JailedUntil = block.Time.Add(DowntimeJailDuration())

    // We need to reset the counter & array so that the validator won't be
    // immediately slashed for downtime upon rebonding.
    signInfo.MissedBlocksCounter = 0
    signInfo.IndexOffset = 0
    ClearValidatorMissedBlockBitArray(vote.Validator.Address)
  }

  SetValidatorSigningInfo(vote.Validator.Address, signInfo)
}
```

## Hooks

This section contains a description of the module's `hooks`. Hooks are operations that are executed automatically when events are raised.

### Staking hooks

The slashing module implements the `StakingHooks` defined in `x/staking`, which are used for record-keeping of validator information. During the app initialization, these hooks should be registered in the staking module struct.

The following hooks impact the slashing state:

* `AfterValidatorBonded` creates a `ValidatorSigningInfo` instance as described in the following section.
* `AfterValidatorCreated` stores a validator's consensus key.
* `AfterValidatorRemoved` removes a validator's consensus key.

### Validator Bonded

Upon successful first-time bonding of a new validator, we create a new `ValidatorSigningInfo` structure for the now-bonded validator, with `StartHeight` set to the height of the current block. If the validator was out of the validator set and gets bonded again, its new bonded height is set.
```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
onValidatorBonded(address sdk.ValAddress)

  signingInfo, found = GetValidatorSigningInfo(address)
  if !found {
    signingInfo = ValidatorSigningInfo {
      StartHeight         : CurrentHeight,
      IndexOffset         : 0,
      JailedUntil         : time.Unix(0, 0),
      Tombstone           : false,
      MissedBlocksCounter : 0,
    }
  } else {
    signingInfo.StartHeight = CurrentHeight
  }

  setValidatorSigningInfo(signingInfo)

  return
```

## Events

The slashing module emits the following events:

### MsgServer

#### MsgUnjail

| Type    | Attribute Key | Attribute Value      |
| ------- | ------------- | -------------------- |
| message | module        | slashing             |
| message | sender        | `{validatorAddress}` |

### Keeper

#### BeginBlocker: HandleValidatorSignature

| Type  | Attribute Key | Attribute Value               |
| ----- | ------------- | ----------------------------- |
| slash | address       | `{validatorConsensusAddress}` |
| slash | power         | `{validatorPower}`            |
| slash | reason        | `{slashReason}`               |
| slash | jailed \[0]   | `{validatorConsensusAddress}` |
| slash | burned coins  | `{math.Int}`                  |

* \[0] Only included if the validator is jailed.

| Type     | Attribute Key  | Attribute Value               |
| -------- | -------------- | ----------------------------- |
| liveness | address        | `{validatorConsensusAddress}` |
| liveness | missed\_blocks | `{missedBlocksCounter}`       |
| liveness | height         | `{blockHeight}`               |

#### Slash

* same as `"slash"` event from `HandleValidatorSignature`, but without the `jailed` attribute.

#### Jail

| Type  | Attribute Key | Attribute Value      |
| ----- | ------------- | -------------------- |
| slash | jailed        | `{validatorAddress}` |

## Staking Tombstone

### Abstract

In the current implementation of the `slashing` module, when the consensus engine informs the state machine of a validator's consensus fault, the validator is partially slashed and put into a "jail period", a period of time in which they are not allowed to rejoin the validator set.
However, because of the nature of consensus faults and ABCI, there can be a delay between an infraction occurring and evidence of the infraction reaching the state machine (this is one of the primary reasons for the existence of the unbonding period).

> Note: The tombstone concept only applies to faults that have a delay between
> the infraction occurring and evidence reaching the state machine. For example,
> evidence of a validator double signing may take a while to reach the state machine
> due to unpredictable evidence gossip layer delays and the ability of validators to
> selectively reveal double-signatures (e.g. to infrequently-online light clients).
> Liveness slashing, on the other hand, is detected immediately as soon as the
> infraction occurs, and therefore no slashing period is needed. A validator is
> immediately put into jail period, and they cannot commit another liveness fault
> until they unjail. In the future, there may be other types of byzantine faults
> that have delays (for example, submitting evidence of an invalid proposal as a transaction).
> When implemented, it will have to be decided whether these future types of
> byzantine faults will result in a tombstoning (and if not, the slash amounts
> will not be capped by a slashing period).

In the current system design, once a validator is put in jail for a consensus fault, after the `JailPeriod` they are allowed to send a transaction to `unjail` themselves, and thus rejoin the validator set.

One of the "design desires" of the `slashing` module is that if multiple infractions occur before evidence is executed (and a validator is put in jail), they should only be punished for the single worst infraction, but not cumulatively. For example, if the sequence of events is:

1. Validator A commits Infraction 1 (worth 30% slash)
2. Validator A commits Infraction 2 (worth 40% slash)
3. Validator A commits Infraction 3 (worth 35% slash)
4.
Evidence for Infraction 1 reaches state machine (and validator is put in jail)
5. Evidence for Infraction 2 reaches state machine
6. Evidence for Infraction 3 reaches state machine

Only Infraction 2 should have its slash take effect, as it is the highest. This is done so that in the case of the compromise of a validator's consensus key, they will only be punished once, even if the hacker double-signs many blocks. Because the unjailing has to be done with the validator's operator key, they have a chance to re-secure their consensus key, and then signal that they are ready using their operator key. We call this period during which we track only the max infraction the "slashing period".

Once a validator rejoins by unjailing themselves, we begin a new slashing period; if they commit a new infraction after unjailing, it gets slashed cumulatively on top of the worst infraction from the previous slashing period.

However, while infractions are grouped based on the slashing periods, because evidence can be submitted up to an `unbondingPeriod` after the infraction, we still have to allow for evidence to be submitted for previous slashing periods. For example, if the sequence of events is:

1. Validator A commits Infraction 1 (worth 30% slash)
2. Validator A commits Infraction 2 (worth 40% slash)
3. Evidence for Infraction 1 reaches state machine (and Validator A is put in jail)
4. Validator A unjails

We are now in a new slashing period; however, we still have to keep the door open for the previous infraction, as the evidence for Infraction 2 may still come in. As the number of slashing periods increases, it creates more complexity as we have to keep track of the highest infraction amount for every single slashing period.

> Note: Currently, according to the `slashing` module spec, a new slashing period
> is created every time a validator is unbonded then rebonded. This should probably
> be changed to jailed/unjailed.
See issue [#3205](https://github.com/cosmos/cosmos-sdk/issues/3205)
> for further details. For the remainder of this, I will assume that we only start
> a new slashing period when a validator gets unjailed.

The maximum number of slashing periods is `UnbondingPeriod / JailPeriod`. The current defaults in Gaia for the `UnbondingPeriod` and `JailPeriod` are 3 weeks and 2 days, respectively. This means there could potentially be up to 11 slashing periods concurrently being tracked per validator. If we set `JailPeriod >= UnbondingPeriod`, we only have to track 1 slashing period (i.e. not have to track slashing periods).

Currently, in the jail period implementation, once a validator unjails, all of their delegators who are delegated to them (and haven't unbonded / redelegated away) stay with them. Given that consensus safety faults are so egregious (way more so than liveness faults), it is probably prudent to have delegators not "auto-rebond" to the validator.

#### Proposal: infinite jail

We propose setting the "jail time" for a validator who commits a consensus safety fault to `infinite` (i.e. a tombstone state). This essentially kicks the validator out of the validator set and does not allow them to re-enter the validator set. All of their delegators (including the operator themselves) have to either unbond or redelegate away. The validator operator can create a new validator if they would like, with a new operator key and consensus key, but they have to "re-earn" their delegations back.

Implementing the tombstone system and getting rid of the slashing period tracking will make the `slashing` module way simpler, especially because we can remove all of the hooks defined in the `slashing` module consumed by the `staking` module (the `slashing` module still consumes hooks defined in `staking`).
#### Single slashing amount

Another optimization that can be made is that if we assume that all ABCI faults for CometBFT consensus are slashed at the same level, we don't have to keep track of a "max slash". Once an ABCI fault happens, we don't have to worry about comparing potential future ones to find the max.

Currently the only CometBFT ABCI fault is:

* Unjustified precommits (double signs)

It is currently planned to include the following fault in the near future:

* Signing a precommit when you're in unbonding phase (needed to make light client bisection safe)

Given that these faults are both attributable byzantine faults, we will likely want to slash them equally, and thus we can enact the above change.

> Note: This change may make sense for current CometBFT consensus, but maybe
> not for a different consensus algorithm or future versions of CometBFT that
> may want to punish at different levels (for example, partial slashing).

## Parameters

The slashing module contains the following parameters:

| Key                     | Type           | Example                |
| ----------------------- | -------------- | ---------------------- |
| SignedBlocksWindow      | string (int64) | "100"                  |
| MinSignedPerWindow      | string (dec)   | "0.500000000000000000" |
| DowntimeJailDuration    | string (ns)    | "600000000000"         |
| SlashFractionDoubleSign | string (dec)   | "0.050000000000000000" |
| SlashFractionDowntime   | string (dec)   | "0.010000000000000000" |

## CLI

A user can query and interact with the `slashing` module using the CLI.

### Query

The `query` commands allow users to query `slashing` state.

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd query slashing --help
```

#### params

The `params` command allows users to query the current parameters of the slashing module.
```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd query slashing params [flags]
```

Example:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd query slashing params
```

Example Output:

```yml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
downtime_jail_duration: 600s
min_signed_per_window: "0.500000000000000000"
signed_blocks_window: "100"
slash_fraction_double_sign: "0.050000000000000000"
slash_fraction_downtime: "0.010000000000000000"
```

#### signing-info

The `signing-info` command allows users to query the signing info of a validator using its consensus public key.

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd query slashing signing-info [validator-conskey] [flags]
```

Example:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd query slashing signing-info '{"@type":"/cosmos.crypto.ed25519.PubKey","key":"Auxs3865HpB/EfssYOzfqNhEJjzys6jD5B6tPgC8="}'
```

Example Output:

```yml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
address: cosmosvalcons1nrqsld3aw6lh6t082frdqc84uwxn0t958c
index_offset: "2068"
jailed_until: "1970-01-01T00:00:00Z"
missed_blocks_counter: "0"
start_height: "0"
tombstoned: false
```

#### signing-infos

The `signing-infos` command allows users to query the signing infos of all validators.
```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd query slashing signing-infos [flags]
```

Example:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd query slashing signing-infos
```

Example Output:

```yml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
info:
- address: cosmosvalcons1nrqsld3aw6lh6t082frdqc84uwxn0t958c
  index_offset: "2075"
  jailed_until: "1970-01-01T00:00:00Z"
  missed_blocks_counter: "0"
  start_height: "0"
  tombstoned: false
pagination:
  next_key: null
  total: "0"
```

### Transactions

The `tx` commands allow users to interact with the `slashing` module.

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx slashing --help
```

#### unjail

The `unjail` command allows users to unjail a validator previously jailed for downtime.

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx slashing unjail --from mykey [flags]
```

Example:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx slashing unjail --from mykey
```

### gRPC

A user can query the `slashing` module using gRPC endpoints.

#### Params

The `Params` endpoint allows users to query the parameters of the slashing module.
```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
cosmos.slashing.v1beta1.Query/Params
```

Example:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
grpcurl -plaintext localhost:9090 cosmos.slashing.v1beta1.Query/Params
```

Example Output:

```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "params": {
    "signedBlocksWindow": "100",
    "minSignedPerWindow": "NTAwMDAwMDAwMDAwMDAwMDAw",
    "downtimeJailDuration": "600s",
    "slashFractionDoubleSign": "NTAwMDAwMDAwMDAwMDAwMDA=",
    "slashFractionDowntime": "MTAwMDAwMDAwMDAwMDAwMDA="
  }
}
```

#### SigningInfo

The `SigningInfo` endpoint queries the signing info of a given cons address.

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
cosmos.slashing.v1beta1.Query/SigningInfo
```

Example:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
grpcurl -plaintext -d '{"cons_address":"cosmosvalcons1nrqsld3aw6lh6t082frdqc84uwxn0t958c"}' localhost:9090 cosmos.slashing.v1beta1.Query/SigningInfo
```

Example Output:

```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "valSigningInfo": {
    "address": "cosmosvalcons1nrqsld3aw6lh6t082frdqc84uwxn0t958c",
    "indexOffset": "3493",
    "jailedUntil": "1970-01-01T00:00:00Z"
  }
}
```

#### SigningInfos

The `SigningInfos` endpoint queries the signing info of all validators.
```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
cosmos.slashing.v1beta1.Query/SigningInfos
```

Example:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
grpcurl -plaintext localhost:9090 cosmos.slashing.v1beta1.Query/SigningInfos
```

Example Output:

```json expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "info": [
    {
      "address": "cosmosvalcons1nrqslkwd3pz096lh6t082frdqc84uwxn0t958c",
      "indexOffset": "2467",
      "jailedUntil": "1970-01-01T00:00:00Z"
    }
  ],
  "pagination": {
    "total": "1"
  }
}
```

### REST

A user can query the `slashing` module using REST endpoints.

#### Params

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
/cosmos/slashing/v1beta1/params
```

Example:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
curl "localhost:1317/cosmos/slashing/v1beta1/params"
```

Example Output:

```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "params": {
    "signed_blocks_window": "100",
    "min_signed_per_window": "0.500000000000000000",
    "downtime_jail_duration": "600s",
    "slash_fraction_double_sign": "0.050000000000000000",
    "slash_fraction_downtime": "0.010000000000000000"
  }
}
```

#### signing\_info

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
/cosmos/slashing/v1beta1/signing_infos/%s
```

Example:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
curl "localhost:1317/cosmos/slashing/v1beta1/signing_infos/cosmosvalcons1nrqslkwd3pz096lh6t082frdqc84uwxn0t958c"
```

Example Output:

```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "val_signing_info": {
    "address": "cosmosvalcons1nrqslkwd3pz096lh6t082frdqc84uwxn0t958c",
    "start_height": "0",
"index_offset": "4184", "jailed_until": "1970-01-01T00:00:00Z", "tombstoned": false, "missed_blocks_counter": "0" } } ``` #### signing\_infos ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/slashing/v1beta1/signing_infos ``` Example: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl "localhost:1317/cosmos/slashing/v1beta1/signing_infos ``` Example Output: ```json expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "info": [ { "address": "cosmosvalcons1nrqslkwd3pz096lh6t082frdqc84uwxn0t958c", "start_height": "0", "index_offset": "4169", "jailed_until": "1970-01-01T00:00:00Z", "tombstoned": false, "missed_blocks_counter": "0" } ], "pagination": { "next_key": null, "total": "1" } } ``` # x/staking Source: https://docs.cosmos.network/sdk/latest/modules/staking/README This paper specifies the Staking module of the Cosmos SDK that was first described in the Cosmos Whitepaper in June 2016. ## Abstract This paper specifies the Staking module of the Cosmos SDK that was first described in the [Cosmos Whitepaper](https://github.com/cosmos/cosmos/blob/master/WHITEPAPER.md) in June 2016. The module enables Cosmos SDK-based blockchain to support an advanced Proof-of-Stake (PoS) system. In this system, holders of the native staking token of the chain can become validators and can delegate tokens to validators, ultimately determining the effective validator set for the system. This module is used in the Cosmos Hub, the first Hub in the Cosmos network. 
## Contents

* [State](#state)
  * [Pool](#pool)
  * [LastTotalPower](#lasttotalpower)
  * [ValidatorUpdates](#validatorupdates)
  * [UnbondingID](#unbondingid)
  * [Params](#params)
  * [Validator](#validator)
  * [Delegation](#delegation)
  * [UnbondingDelegation](#unbondingdelegation)
  * [Redelegation](#redelegation)
  * [Queues](#queues)
  * [HistoricalInfo](#historicalinfo)
* [State Transitions](#state-transitions)
  * [Validators](#validators)
  * [Delegations](#delegations)
  * [Slashing](#slashing)
  * [How Shares are calculated](#how-shares-are-calculated)
* [Messages](#messages)
  * [MsgCreateValidator](#msgcreatevalidator)
  * [MsgEditValidator](#msgeditvalidator)
  * [MsgDelegate](#msgdelegate)
  * [MsgUndelegate](#msgundelegate)
  * [MsgCancelUnbondingDelegation](#msgcancelunbondingdelegation)
  * [MsgBeginRedelegate](#msgbeginredelegate)
  * [MsgUpdateParams](#msgupdateparams)
* [Begin-Block](#begin-block)
  * [Historical Info Tracking](#historical-info-tracking)
* [End-Block](#end-block)
  * [Validator Set Changes](#validator-set-changes)
  * [Queues](#queues-1)
* [Hooks](#hooks)
* [Events](#events)
  * [EndBlocker](#endblocker)
  * [Msg's](#msgs)
* [Parameters](#parameters)
* [Client](#client)
  * [CLI](#cli)
  * [gRPC](#grpc)
  * [REST](#rest)

## State

### Pool

Pool is used for tracking bonded and not-bonded token supply of the bond denomination.

### LastTotalPower

LastTotalPower tracks the total amount of bonded tokens recorded during the previous end block. Store entries prefixed with "Last" must remain unchanged until EndBlock.

* LastTotalPower: `0x12 -> ProtocolBuffer(math.Int)`

### ValidatorUpdates

ValidatorUpdates contains the validator updates returned to ABCI at the end of every block. The values are overwritten in every block.

* ValidatorUpdates: `0x61 -> []abci.ValidatorUpdate`

### UnbondingID

UnbondingID stores the ID of the latest unbonding operation.
It enables creating unique IDs for unbonding operations, i.e., UnbondingID is incremented every time a new unbonding operation (validator unbonding, unbonding delegation, redelegation) is initiated.

* UnbondingID: `0x37 -> uint64`

### Params

The staking module stores its params in state with the prefix of `0x51`; they can be updated via governance or by the address with authority.

* Params: `0x51 | ProtocolBuffer(Params)`

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/staking.proto#L310-L333
```

### Validator

Validators can have one of three statuses:

* `Unbonded`: The validator is not in the active set. They cannot sign blocks and do not earn rewards. They can receive delegations.
* `Bonded`: Once the validator receives sufficient bonded tokens, they automatically join the active set during [`EndBlock`](#validator-set-changes) and their status is updated to `Bonded`. They sign blocks and receive rewards. They can receive further delegations. They can be slashed for misbehavior. Delegators to this validator who unbond their delegation must wait the duration of the UnbondingTime, a chain-specific param, during which time they are still slashable for offences of the source validator if those offences were committed during the period of time that the tokens were bonded.
* `Unbonding`: When a validator leaves the active set, either by choice or due to slashing, jailing or tombstoning, an unbonding of all their delegations begins. All delegations must then wait the UnbondingTime before their tokens are moved to their accounts from the `BondedPool`.

Tombstoning is permanent; once tombstoned, a validator's consensus key cannot be reused within the chain where the tombstoning happened.
Validator objects should be primarily stored and accessed by the `OperatorAddr`, an SDK validator address for the operator of the validator. Two additional indices are maintained per validator object in order to fulfill required lookups for slashing and validator-set updates. A third special index (`LastValidatorsPower`) is also maintained; unlike the first two indices, which mirror the validator records within a block, it remains constant throughout each block.

* Validators: `0x21 | OperatorAddrLen (1 byte) | OperatorAddr -> ProtocolBuffer(validator)`
* ValidatorsByConsAddr: `0x22 | ConsAddrLen (1 byte) | ConsAddr -> OperatorAddr`
* ValidatorsByPower: `0x23 | BigEndian(ConsensusPower) | OperatorAddrLen (1 byte) | OperatorAddr -> OperatorAddr`
* LastValidatorsPower: `0x11 | OperatorAddrLen (1 byte) | OperatorAddr -> ProtocolBuffer(ConsensusPower)`
* ValidatorsByUnbondingID: `0x38 | UnbondingID -> 0x21 | OperatorAddrLen (1 byte) | OperatorAddr`

`Validators` is the primary index - it ensures that each operator can have only one associated validator, where the public key of that validator can change in the future. Delegators can refer to the immutable operator of the validator, without concern for the changing public key.

`ValidatorsByUnbondingID` is an additional index that enables lookups for validators by the unbonding IDs corresponding to their current unbonding.

`ValidatorByConsAddr` is an additional index that enables lookups for slashing. When CometBFT reports evidence, it provides the validator address, so this map is needed to find the operator. Note that the `ConsAddr` corresponds to the address which can be derived from the validator's `ConsPubKey`.

`ValidatorsByPower` is an additional index that provides a sorted list of potential validators to quickly determine the current active set. Here ConsensusPower is `validator.Tokens/10^6` by default. Note that all validators where `Jailed` is true are not stored within this index.
`LastValidatorsPower` is a special index that provides a historical list of the last-block's bonded validators. This index remains constant during a block but is updated during the validator set update process which takes place in [`EndBlock`](#end-block).

Each validator's state is stored in a `Validator` struct:

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/staking.proto#L82-L138
```

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/staking.proto#L26-L80
```

### Delegation

Delegations are identified by combining `DelegatorAddr` (the address of the delegator) with the `ValidatorAddr`. Delegations are indexed in the store as follows:

* Delegation: `0x31 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValidatorAddrLen (1 byte) | ValidatorAddr -> ProtocolBuffer(delegation)`

Stake holders may delegate coins to validators; under this circumstance their funds are held in a `Delegation` data structure. It is owned by one delegator, and is associated with the shares for one validator. The sender of the transaction is the owner of the bond.

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/staking.proto#L198-L216
```

#### Delegator Shares

When one delegates tokens to a Validator, they are issued a number of delegator shares based on a dynamic exchange rate, calculated as follows from the total number of tokens delegated to the validator and the number of shares issued so far:

`Shares per Token = validator.TotalShares() / validator.Tokens()`

Only the number of shares received is stored on the DelegationEntry.
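The issuance rule above, together with the initial-delegation special case covered later under *How Shares are calculated*, can be sketched as follows. This is a hedged illustration: `float64` stands in for the SDK's fixed-point `Dec` arithmetic, and the function name is invented for the sketch.

```go
package main

import "fmt"

// sharesFromTokens returns the shares issued for newly delegated tokens,
// given the validator's current totals: S_j = S * T_j / T.
// The initial delegation (T == 0) is the special case S_j = T_j.
func sharesFromTokens(totalShares, totalTokens, newTokens float64) float64 {
	if totalTokens == 0 {
		return newTokens
	}
	return totalShares * newTokens / totalTokens
}

func main() {
	// Validator with 100 tokens backing 100 shares: delegating 50 tokens issues 50 shares.
	fmt.Println(sharesFromTokens(100, 100, 50))
	// After a 50% slash (100 shares now back only 50 tokens), the same
	// 50-token delegation buys 100 shares, since each share is worth less.
	fmt.Println(sharesFromTokens(100, 50, 50))
}
```

The post-slash case shows why shares, not tokens, are stored: slashing the validator's pooled tokens devalues every share at once, with no per-delegation iteration.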
When a delegator then Undelegates, the token amount they receive is calculated from the number of shares they currently hold and the inverse exchange rate:

`Tokens per Share = validator.Tokens() / validator.TotalShares()`

These `Shares` are simply an accounting mechanism. They are not a fungible asset. The reason for this mechanism is to simplify the accounting around slashing. Rather than iteratively slashing the tokens of every delegation entry, the Validator's total bonded tokens can be slashed instead, effectively reducing the value of each issued delegator share.

### UnbondingDelegation

Shares in a `Delegation` can be unbonded, but they must for some time exist as an `UnbondingDelegation`, where shares can be reduced if Byzantine behavior is detected.

`UnbondingDelegation` are indexed in the store as:

* UnbondingDelegation: `0x32 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValidatorAddrLen (1 byte) | ValidatorAddr -> ProtocolBuffer(unbondingDelegation)`
* UnbondingDelegationsFromValidator: `0x33 | ValidatorAddrLen (1 byte) | ValidatorAddr | DelegatorAddrLen (1 byte) | DelegatorAddr -> nil`
* UnbondingDelegationByUnbondingId: `0x38 | UnbondingId -> 0x32 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValidatorAddrLen (1 byte) | ValidatorAddr`

`UnbondingDelegation` is used in queries, to lookup all unbonding delegations for a given delegator. `UnbondingDelegationsFromValidator` is used in slashing, to lookup all unbonding delegations associated with a given validator that need to be slashed. `UnbondingDelegationByUnbondingId` is an additional index that enables lookups for unbonding delegations by the unbonding IDs of the containing unbonding delegation entries.

An `UnbondingDelegation` object is created every time an unbonding is initiated.
```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/staking.proto#L218-L261
```

### Redelegation

The bonded tokens worth of a `Delegation` may be instantly redelegated from a source validator to a different validator (destination validator). However when this occurs they must be tracked in a `Redelegation` object, whereby their shares can be slashed if their tokens have contributed to a Byzantine fault committed by the source validator.

`Redelegation` are indexed in the store as:

* Redelegations: `0x34 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValidatorAddrLen (1 byte) | ValidatorSrcAddr | ValidatorDstAddr -> ProtocolBuffer(redelegation)`
* RedelegationsBySrc: `0x35 | ValidatorSrcAddrLen (1 byte) | ValidatorSrcAddr | ValidatorDstAddrLen (1 byte) | ValidatorDstAddr | DelegatorAddrLen (1 byte) | DelegatorAddr -> nil`
* RedelegationsByDst: `0x36 | ValidatorDstAddrLen (1 byte) | ValidatorDstAddr | ValidatorSrcAddrLen (1 byte) | ValidatorSrcAddr | DelegatorAddrLen (1 byte) | DelegatorAddr -> nil`
* RedelegationByUnbondingId: `0x38 | UnbondingId -> 0x34 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValidatorAddrLen (1 byte) | ValidatorSrcAddr | ValidatorDstAddr`

`Redelegations` is used for queries, to lookup all redelegations for a given delegator. `RedelegationsBySrc` is used for slashing based on the `ValidatorSrcAddr`. `RedelegationsByDst` is used for slashing based on the `ValidatorDstAddr`. `RedelegationByUnbondingId` is an additional index that enables lookups for redelegations by the unbonding IDs of the containing redelegation entries.
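The two-step resolution through a `ByUnbondingId` index (`0x38 | UnbondingId` stores the primary key, which is then used to load the record) can be sketched with plain maps. The maps and names here are illustrative stand-ins, not the SDK's KVStore types.

```go
package main

import "fmt"

// redelegation is a toy stand-in for the Redelegation record.
type redelegation struct {
	Delegator, SrcValidator, DstValidator string
}

var (
	// Primary store: composite key (delegator|src|dst) -> record.
	primary = map[string]redelegation{
		"del1/valA/valB": {"del1", "valA", "valB"},
	}
	// Reverse index: unbonding ID -> primary store key.
	byUnbondingID = map[uint64]string{7: "del1/valA/valB"}
)

// lookupByUnbondingID resolves an unbonding ID to its redelegation record
// in two steps, mirroring the index-then-primary lookup described above.
func lookupByUnbondingID(id uint64) (redelegation, bool) {
	key, ok := byUnbondingID[id]
	if !ok {
		return redelegation{}, false
	}
	red, ok := primary[key]
	return red, ok
}

func main() {
	if red, ok := lookupByUnbondingID(7); ok {
		fmt.Println(red.SrcValidator)
	}
}
```

Storing the primary key (rather than a copy of the record) keeps the index cheap and always consistent with the primary store.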
A redelegation object is created every time a redelegation occurs. To prevent "redelegation hopping", redelegations may not occur under the situation that:

* the (re)delegator already has another immature redelegation in progress with a destination to a validator (let's call it `Validator X`)
* and, the (re)delegator is attempting to create a *new* redelegation where the source validator for this new redelegation is `Validator X`.

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/staking.proto#L263-L308
```

### Queues

All queue objects are sorted by timestamp. The time used within any queue is first converted to UTC, rounded to the nearest nanosecond, then sorted. The sortable time format used is a slight modification of RFC3339Nano and uses the format string `"2006-01-02T15:04:05.000000000"`. Notably this format:

* right-pads all zeros
* drops the time zone info (we already use UTC)

In all cases, the stored timestamp represents the maturation time of the queue element.

#### UnbondingDelegationQueue

For the purpose of tracking progress of unbonding delegations the unbonding delegations queue is kept.

* UnbondingDelegation: `0x41 | format(time) -> []DVPair`

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/staking.proto#L162-L172
```

#### RedelegationQueue

For the purpose of tracking progress of redelegations the redelegation queue is kept.
* RedelegationQueue: `0x42 | format(time) -> []DVVTriplet`

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/staking.proto#L179-L191
```

#### ValidatorQueue

For the purpose of tracking progress of unbonding validators the validator queue is kept.

* ValidatorQueueTime: `0x43 | format(time) -> []sdk.ValAddress`

The stored object by each key is an array of validator operator addresses from which the validator object can be accessed. Typically it is expected that only a single validator record will be associated with a given timestamp, however it is possible that multiple validators exist in the queue at the same location.

### HistoricalInfo

HistoricalInfo objects are stored and pruned at each block such that the staking keeper persists the `n` most recent historical info defined by staking module parameter: `HistoricalEntries`.

```protobuf expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
syntax = "proto3";
package cosmos.staking.v1beta1;

import "gogoproto/gogo.proto";
import "google/protobuf/any.proto";
import "google/protobuf/duration.proto";
import "google/protobuf/timestamp.proto";
import "cosmos_proto/cosmos.proto";
import "cosmos/base/v1beta1/coin.proto";
import "amino/amino.proto";
import "tendermint/types/types.proto";
import "tendermint/abci/types.proto";

option go_package = "github.com/cosmos/cosmos-sdk/x/staking/types";

// HistoricalInfo contains header and validator information for a given block.
// It is stored as part of staking module's state, which persists the `n` most
// recent HistoricalInfo
// (`n` is set by the staking module's `historical_entries` parameter).
message HistoricalInfo {
  tendermint.types.Header header = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
  repeated Validator valset = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
}

// CommissionRates defines the initial commission rates to be used for creating
// a validator.
message CommissionRates {
  option (gogoproto.equal) = true;
  option (gogoproto.goproto_stringer) = false;

  // rate is the commission rate charged to delegators, as a fraction.
  string rate = 1 [
    (cosmos_proto.scalar) = "cosmos.Dec",
    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
    (gogoproto.nullable) = false
  ];
  // max_rate defines the maximum commission rate which validator can ever charge, as a fraction.
  string max_rate = 2 [
    (cosmos_proto.scalar) = "cosmos.Dec",
    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
    (gogoproto.nullable) = false
  ];
  // max_change_rate defines the maximum daily increase of the validator commission, as a fraction.
  string max_change_rate = 3 [
    (cosmos_proto.scalar) = "cosmos.Dec",
    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
    (gogoproto.nullable) = false
  ];
}

// Commission defines commission parameters for a given validator.
message Commission {
  option (gogoproto.equal) = true;
  option (gogoproto.goproto_stringer) = false;

  // commission_rates defines the initial commission rates to be used for creating a validator.
  CommissionRates commission_rates = 1 [(gogoproto.embed) = true, (gogoproto.nullable) = false, (amino.dont_omitempty) = true];
  // update_time is the last time the commission rate was changed.
  google.protobuf.Timestamp update_time = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true];
}

// Description defines a validator description.
message Description {
  option (gogoproto.equal) = true;
  option (gogoproto.goproto_stringer) = false;

  // moniker defines a human-readable name for the validator.
  string moniker = 1;
  // identity defines an optional identity signature (ex. UPort or Keybase).
  string identity = 2;
  // website defines an optional website link.
  string website = 3;
  // security_contact defines an optional email for security contact.
  string security_contact = 4;
  // details define other optional details.
  string details = 5;
}

// Validator defines a validator, together with the total amount of the
// Validator's bond shares and their exchange rate to coins. Slashing results in
// a decrease in the exchange rate, allowing correct calculation of future
// undelegations without iterating over delegators. When coins are delegated to
// this validator, the validator is credited with a delegation whose number of
// bond shares is based on the amount of coins delegated divided by the current
// exchange rate. Voting power can be calculated as total bonded shares
// multiplied by exchange rate.
message Validator {
  option (gogoproto.equal) = false;
  option (gogoproto.goproto_stringer) = false;
  option (gogoproto.goproto_getters) = false;

  // operator_address defines the address of the validator's operator; bech encoded in JSON.
  string operator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
  // consensus_pubkey is the consensus public key of the validator, as a Protobuf Any.
  google.protobuf.Any consensus_pubkey = 2 [(cosmos_proto.accepts_interface) = "cosmos.crypto.PubKey"];
  // jailed defined whether the validator has been jailed from bonded status or not.
  bool jailed = 3;
  // status is the validator status (bonded/unbonding/unbonded).
  BondStatus status = 4;
  // tokens define the delegated tokens (incl. self-delegation).
  string tokens = 5 [
    (cosmos_proto.scalar) = "cosmos.Int",
    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int",
    (gogoproto.nullable) = false
  ];
  // delegator_shares defines total shares issued to a validator's delegators.
  string delegator_shares = 6 [
    (cosmos_proto.scalar) = "cosmos.Dec",
    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
    (gogoproto.nullable) = false
  ];
  // description defines the description terms for the validator.
  Description description = 7 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
  // unbonding_height defines, if unbonding, the height at which this validator has begun unbonding.
  int64 unbonding_height = 8;
  // unbonding_time defines, if unbonding, the min time for the validator to complete unbonding.
  google.protobuf.Timestamp unbonding_time = 9 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true];
  // commission defines the commission parameters.
  Commission commission = 10 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
  // min_self_delegation is the validator's self declared minimum self delegation.
  //
  // Since: cosmos-sdk 0.46
  string min_self_delegation = 11 [
    (cosmos_proto.scalar) = "cosmos.Int",
    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int",
    (gogoproto.nullable) = false
  ];
  // strictly positive if this validator's unbonding has been stopped by external modules
  int64 unbonding_on_hold_ref_count = 12;
  // list of unbonding ids, each uniquely identifing an unbonding of this validator
  repeated uint64 unbonding_ids = 13;
}

// BondStatus is the status of a validator.
enum BondStatus {
  option (gogoproto.goproto_enum_prefix) = false;

  // UNSPECIFIED defines an invalid validator status.
  BOND_STATUS_UNSPECIFIED = 0 [(gogoproto.enumvalue_customname) = "Unspecified"];
  // UNBONDED defines a validator that is not bonded.
  BOND_STATUS_UNBONDED = 1 [(gogoproto.enumvalue_customname) = "Unbonded"];
  // UNBONDING defines a validator that is unbonding.
  BOND_STATUS_UNBONDING = 2 [(gogoproto.enumvalue_customname) = "Unbonding"];
  // BONDED defines a validator that is bonded.
  BOND_STATUS_BONDED = 3 [(gogoproto.enumvalue_customname) = "Bonded"];
}

// ValAddresses defines a repeated set of validator addresses.
message ValAddresses {
  option (gogoproto.goproto_stringer) = false;
  option (gogoproto.stringer) = true;

  repeated string addresses = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
}

// DVPair is struct that just has a delegator-validator pair with no other data.
// It is intended to be used as a marshalable pointer. For example, a DVPair can
// be used to construct the key to getting an UnbondingDelegation from state.
message DVPair {
  option (gogoproto.equal) = false;
  option (gogoproto.goproto_getters) = false;
  option (gogoproto.goproto_stringer) = false;

  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
  string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
}

// DVPairs defines an array of DVPair objects.
message DVPairs {
  repeated DVPair pairs = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
}

// DVVTriplet is struct that just has a delegator-validator-validator triplet
// with no other data. It is intended to be used as a marshalable pointer. For
// example, a DVVTriplet can be used to construct the key to getting a
// Redelegation from state.
message DVVTriplet {
  option (gogoproto.equal) = false;
  option (gogoproto.goproto_getters) = false;
  option (gogoproto.goproto_stringer) = false;

  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
  string validator_src_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
  string validator_dst_address = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"];
}

// DVVTriplets defines an array of DVVTriplet objects.
message DVVTriplets {
  repeated DVVTriplet triplets = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
}

// Delegation represents the bond with tokens held by an account. It is
// owned by one delegator, and is associated with the voting power of one
// validator.
message Delegation {
  option (gogoproto.equal) = false;
  option (gogoproto.goproto_getters) = false;
  option (gogoproto.goproto_stringer) = false;

  // delegator_address is the bech32-encoded address of the delegator.
  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
  // validator_address is the bech32-encoded address of the validator.
  string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
  // shares define the delegation shares received.
  string shares = 3 [
    (cosmos_proto.scalar) = "cosmos.Dec",
    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
    (gogoproto.nullable) = false
  ];
}

// UnbondingDelegation stores all of a single delegator's unbonding bonds
// for a single validator in an time-ordered list.
message UnbondingDelegation {
  option (gogoproto.equal) = false;
  option (gogoproto.goproto_getters) = false;
  option (gogoproto.goproto_stringer) = false;

  // delegator_address is the bech32-encoded address of the delegator.
  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
  // validator_address is the bech32-encoded address of the validator.
  string validator_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
  // entries are the unbonding delegation entries.
  repeated UnbondingDelegationEntry entries = 3 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; // unbonding delegation entries
}

// UnbondingDelegationEntry defines an unbonding object with relevant metadata.
message UnbondingDelegationEntry {
  option (gogoproto.equal) = true;
  option (gogoproto.goproto_stringer) = false;

  // creation_height is the height which the unbonding took place.
  int64 creation_height = 1;
  // completion_time is the unix time for unbonding completion.
  google.protobuf.Timestamp completion_time = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true];
  // initial_balance defines the tokens initially scheduled to receive at completion.
  string initial_balance = 3 [
    (cosmos_proto.scalar) = "cosmos.Int",
    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int",
    (gogoproto.nullable) = false
  ];
  // balance defines the tokens to receive at completion.
  string balance = 4 [
    (cosmos_proto.scalar) = "cosmos.Int",
    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int",
    (gogoproto.nullable) = false
  ];
  // Incrementing id that uniquely identifies this entry
  uint64 unbonding_id = 5;
  // Strictly positive if this entry's unbonding has been stopped by external modules
  int64 unbonding_on_hold_ref_count = 6;
}

// RedelegationEntry defines a redelegation object with relevant metadata.
message RedelegationEntry {
  option (gogoproto.equal) = true;
  option (gogoproto.goproto_stringer) = false;

  // creation_height defines the height which the redelegation took place.
  int64 creation_height = 1;
  // completion_time defines the unix time for redelegation completion.
  google.protobuf.Timestamp completion_time = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdtime) = true];
  // initial_balance defines the initial balance when redelegation started.
  string initial_balance = 3 [
    (cosmos_proto.scalar) = "cosmos.Int",
    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int",
    (gogoproto.nullable) = false
  ];
  // shares_dst is the amount of destination-validator shares created by redelegation.
  string shares_dst = 4 [
    (cosmos_proto.scalar) = "cosmos.Dec",
    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
    (gogoproto.nullable) = false
  ];
  // Incrementing id that uniquely identifies this entry
  uint64 unbonding_id = 5;
  // Strictly positive if this entry's unbonding has been stopped by external modules
  int64 unbonding_on_hold_ref_count = 6;
}

// Redelegation contains the list of a particular delegator's redelegating bonds
// from a particular source validator to a particular destination validator.
message Redelegation {
  option (gogoproto.equal) = false;
  option (gogoproto.goproto_getters) = false;
  option (gogoproto.goproto_stringer) = false;

  // delegator_address is the bech32-encoded address of the delegator.
  string delegator_address = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
  // validator_src_address is the validator redelegation source operator address.
  string validator_src_address = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
  // validator_dst_address is the validator redelegation destination operator address.
  string validator_dst_address = 3 [(cosmos_proto.scalar) = "cosmos.AddressString"];
  // entries are the redelegation entries.
  repeated RedelegationEntry entries = 4 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true]; // redelegation entries
}

// Params defines the parameters for the x/staking module.
message Params {
  option (amino.name) = "cosmos-sdk/x/staking/Params";
  option (gogoproto.equal) = true;
  option (gogoproto.goproto_stringer) = false;

  // unbonding_time is the time duration of unbonding.
  google.protobuf.Duration unbonding_time = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true, (gogoproto.stdduration) = true];
  // max_validators is the maximum number of validators.
  uint32 max_validators = 2;
  // max_entries is the max entries for either unbonding delegation or redelegation (per pair/trio).
  uint32 max_entries = 3;
  // historical_entries is the number of historical entries to persist.
  uint32 historical_entries = 4;
  // bond_denom defines the bondable coin denomination.
  string bond_denom = 5;
  // min_commission_rate is the chain-wide minimum commission rate that a validator can charge their delegators
  string min_commission_rate = 6 [
    (gogoproto.moretags) = "yaml:\"min_commission_rate\"",
    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Dec",
    (gogoproto.nullable) = false
  ];
}

// DelegationResponse is equivalent to Delegation except that it contains a
// balance in addition to shares which is more suitable for client responses.
message DelegationResponse {
  option (gogoproto.equal) = false;
  option (gogoproto.goproto_stringer) = false;

  Delegation delegation = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
  cosmos.base.v1beta1.Coin balance = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
}

// RedelegationEntryResponse is equivalent to a RedelegationEntry except that it
// contains a balance in addition to shares which is more suitable for client
// responses.
message RedelegationEntryResponse {
  option (gogoproto.equal) = true;

  RedelegationEntry redelegation_entry = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
  string balance = 4 [
    (cosmos_proto.scalar) = "cosmos.Int",
    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int",
    (gogoproto.nullable) = false
  ];
}

// RedelegationResponse is equivalent to a Redelegation except that its entries
// contain a balance in addition to shares which is more suitable for client
// responses.
message RedelegationResponse {
  option (gogoproto.equal) = false;

  Redelegation redelegation = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
  repeated RedelegationEntryResponse entries = 2 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
}

// Pool is used for tracking bonded and not-bonded token supply of the bond
// denomination.
message Pool {
  option (gogoproto.description) = true;
  option (gogoproto.equal) = true;

  string not_bonded_tokens = 1 [
    (cosmos_proto.scalar) = "cosmos.Int",
    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int",
    (gogoproto.nullable) = false,
    (gogoproto.jsontag) = "not_bonded_tokens",
    (amino.dont_omitempty) = true
  ];
  string bonded_tokens = 2 [
    (cosmos_proto.scalar) = "cosmos.Int",
    (gogoproto.customtype) = "github.com/cosmos/cosmos-sdk/types.Int",
    (gogoproto.nullable) = false,
    (gogoproto.jsontag) = "bonded_tokens",
    (amino.dont_omitempty) = true
  ];
}

// Infraction indicates the infraction a validator committed.
enum Infraction {
  // UNSPECIFIED defines an empty infraction.
  INFRACTION_UNSPECIFIED = 0;
  // DOUBLE_SIGN defines a validator that double-signs a block.
  INFRACTION_DOUBLE_SIGN = 1;
  // DOWNTIME defines a validator that missed signing too many blocks.
  INFRACTION_DOWNTIME = 2;
}

// ValidatorUpdates defines an array of abci.ValidatorUpdate objects.
// TODO: explore moving this to proto/cosmos/base to separate modules from tendermint dependence
message ValidatorUpdates {
  repeated tendermint.abci.ValidatorUpdate updates = 1 [(gogoproto.nullable) = false, (amino.dont_omitempty) = true];
}
```

At each BeginBlock, the staking keeper will persist the current Header and the Validators that committed the current block in a `HistoricalInfo` object. The Validators are sorted on their address to ensure that they are in a deterministic order. The oldest HistoricalEntries will be pruned to ensure that there only exist the parameter-defined number of historical entries.

## State Transitions

### Validators

State transitions in validators are performed on every [`EndBlock`](#validator-set-changes) in order to check for changes in the active `ValidatorSet`.

A validator can be `Unbonded`, `Unbonding` or `Bonded`. `Unbonded` and `Unbonding` are collectively called `Not Bonded`. A validator can move directly between all the states, except for from `Bonded` to `Unbonded`.
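The transition rule above can be sketched as a small predicate. This is a hedged illustration: the constants mirror the `BondStatus` enum values, and treating a same-status "move" as invalid is an assumption of the sketch, not an SDK rule.

```go
package main

import "fmt"

// BondStatus mirrors the x/staking BondStatus enum values (1..3).
type BondStatus int

const (
	Unbonded BondStatus = iota + 1
	Unbonding
	Bonded
)

// validTransition reports whether a direct status change is permitted:
// every move between distinct states is allowed except Bonded -> Unbonded,
// which must pass through Unbonding first.
func validTransition(from, to BondStatus) bool {
	if from == to {
		return false // not a transition (sketch assumption)
	}
	return !(from == Bonded && to == Unbonded)
}

func main() {
	fmt.Println(validTransition(Bonded, Unbonding)) // allowed
	fmt.Println(validTransition(Bonded, Unbonded))  // forbidden: must unbond first
}
```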
#### Not bonded to Bonded

The following transition occurs when a validator's ranking in the `ValidatorPowerIndex` surpasses that of the `LastValidator`:

* set `validator.Status` to `Bonded`
* send the `validator.Tokens` from the `NotBondedPool` to the `BondedPool` `ModuleAccount`
* delete the existing record from `ValidatorByPowerIndex`
* add a new updated record to the `ValidatorByPowerIndex`
* update the `Validator` object for this validator
* if it exists, delete any `ValidatorQueue` record for this validator

#### Bonded to Unbonding

When a validator begins the unbonding process the following operations occur:

* send the `validator.Tokens` from the `BondedPool` to the `NotBondedPool` `ModuleAccount`
* set `validator.Status` to `Unbonding`
* delete the existing record from `ValidatorByPowerIndex`
* add a new updated record to the `ValidatorByPowerIndex`
* update the `Validator` object for this validator
* insert a new record into the `ValidatorQueue` for this validator

#### Unbonding to Unbonded

A validator moves from `Unbonding` to `Unbonded` when its entry in the `ValidatorQueue` matures:

* update the `Validator` object for this validator
* set `validator.Status` to `Unbonded`

#### Jail/Unjail

When a validator is jailed it is effectively removed from the CometBFT set. This process may also be reversed.
The following operations occur:

* set `Validator.Jailed` and update object
* if jailed delete record from `ValidatorByPowerIndex`
* if unjailed add record to `ValidatorByPowerIndex`

Jailed validators are not present in any of the following stores:

* the power store (from consensus power to address)

### Delegations

#### Delegate

When a delegation occurs both the validator and the delegation objects are affected:

* determine the delegator's shares based on tokens delegated and the validator's exchange rate
* remove tokens from the sending account
* add shares to the delegation object or add them to a created validator object
* add new delegator shares and update the `Validator` object
* transfer the `delegation.Amount` from the delegator's account to the `BondedPool` or the `NotBondedPool` `ModuleAccount` depending on whether the `validator.Status` is `Bonded` or not
* delete the existing record from `ValidatorByPowerIndex`
* add a new updated record to the `ValidatorByPowerIndex`

#### Begin Unbonding

As a part of the Undelegate and Complete Unbonding state transitions Unbond Delegation may be called.

* subtract the unbonded shares from the delegator
* add the unbonded tokens to an `UnbondingDelegationEntry`
* update the delegation or remove the delegation if there are no more shares
* if the delegation belongs to the operator of the validator and no more shares exist, then jail the validator
* update the validator, removing the delegator shares and associated coins
* if the validator state is `Bonded`, transfer the `Coins` worth of the unbonded shares from the `BondedPool` to the `NotBondedPool` `ModuleAccount`
* remove the validator if it is unbonded and there are no more delegation shares
* get a unique `unbondingId` and map it to the `UnbondingDelegationEntry` in `UnbondingDelegationByUnbondingId`
* call the `AfterUnbondingInitiated(unbondingId)` hook
* add the unbonding delegation to `UnbondingDelegationQueue` with the completion time set to `UnbondingTime`

#### Cancel an `UnbondingDelegation` Entry

When a `cancel unbond delegation` occurs, the `validator`, the `delegation` and the `UnbondingDelegationQueue` state are all updated:

* if the cancel unbonding delegation amount equals the `UnbondingDelegation` entry `balance`, then the `UnbondingDelegation` entry is deleted from the `UnbondingDelegationQueue`
* if the cancel unbonding delegation amount is less than the `UnbondingDelegation` entry `balance`, then the `UnbondingDelegation` entry is updated with the new balance in the `UnbondingDelegationQueue`
* the cancel `amount` is [Delegated](#delegations) back to the original `validator`

#### Complete Unbonding

For undelegations which do not complete immediately, the following operations occur when the unbonding delegation queue element matures:

* remove the entry from the `UnbondingDelegation` object
* transfer the tokens from the `NotBondedPool` `ModuleAccount` to the delegator `Account`

#### Begin Redelegation

Redelegations affect the delegation, source and destination validators.
* perform an `unbond` delegation from the source validator to retrieve the tokens worth of the unbonded shares
* using the unbonded tokens, `Delegate` them to the destination validator
* if the `sourceValidator.Status` is `Bonded`, and the `destinationValidator` is not, transfer the newly delegated tokens from the `BondedPool` to the `NotBondedPool` `ModuleAccount`
* otherwise, if the `sourceValidator.Status` is not `Bonded`, and the `destinationValidator` is `Bonded`, transfer the newly delegated tokens from the `NotBondedPool` to the `BondedPool` `ModuleAccount`
* record the token amount in a new entry in the relevant `Redelegation`

From when a redelegation begins until it completes, the delegator is in a state of "pseudo-unbonding", and can still be slashed for infractions that occurred before the redelegation began.

#### Complete Redelegation

When a redelegation completes the following occurs:

* remove the entry from the `Redelegation` object

### Slashing

#### Slash Validator

When a Validator is slashed, the following occurs:

* The total `slashAmount` is calculated as the `slashFactor` (a chain parameter) \* `TokensFromConsensusPower`, the total number of tokens bonded to the validator at the time of the infraction.
* Every unbonding delegation and pseudo-unbonding redelegation from the validator that began after the infraction occurred is slashed by the `slashFactor` percentage of its `InitialBalance`.
* Each amount slashed from redelegations and unbonding delegations is subtracted from the total slash amount.
* The `remainingSlashAmount` is then slashed from the validator's tokens in the `BondedPool` or the `NotBondedPool` depending on the validator's status. This reduces the total supply of tokens.

In the case of a slash due to any infraction that requires evidence to be submitted (for example double-sign), the slash occurs at the block where the evidence is included, not at the block where the infraction occurred.
Put otherwise, validators are not slashed retroactively, only when they are caught. #### Slash Unbonding Delegation When a validator is slashed, so are those unbonding delegations from the validator that began unbonding after the time of the infraction. Every entry in every unbonding delegation from the validator is slashed by `slashFactor`. The amount slashed is calculated from the `InitialBalance` of the delegation and is capped to prevent a resulting negative balance. Completed (or mature) unbondings are not slashed. #### Slash Redelegation When a validator is slashed, so are all redelegations from the validator that began after the infraction. Redelegations are slashed by `slashFactor`. Redelegations that began before the infraction are not slashed. The amount slashed is calculated from the `InitialBalance` of the delegation and is capped to prevent a resulting negative balance. Mature redelegations (that have completed pseudo-unbonding) are not slashed. ### How Shares are calculated At any given point in time, each validator has a number of tokens, `T`, and has a number of shares issued, `S`. Each delegator, `i`, holds a number of shares, `S_i`. The number of tokens is the sum of all tokens delegated to the validator, plus the rewards, minus the slashes. The delegator is entitled to a portion of the underlying tokens proportional to their proportion of shares. So delegator `i` is entitled to `T * S_i / S` of the validator's tokens. When a delegator delegates new tokens to the validator, they receive a number of shares proportional to their contribution. So when delegator `j` delegates `T_j` tokens, they receive `S_j = S * T_j / T` shares. The total number of tokens is now `T + T_j`, and the total number of shares is `S + S_j`. `j`'s proportion of the shares is the same as their proportion of the total tokens contributed: `(S + S_j) / S = (T + T_j) / T`. A special case is the initial delegation, when `T = 0` and `S = 0`, so `T_j / T` is undefined.
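The share arithmetic above can be sketched numerically. This is an illustrative Python sketch, not SDK code; the function and variable names are hypothetical (the initial-delegation rule of one share per token, used when `T = 0`, is the one described in this section):

```python
# Illustrative sketch of validator share accounting (not Cosmos SDK code).
# A validator holds T tokens backing S issued shares; delegating T_j tokens
# mints S_j = S * T_j / T shares, or S_j = T_j for the initial delegation.

def shares_for_delegation(tokens: int, shares: float, amount: int) -> float:
    """Shares minted when `amount` tokens are delegated to a validator
    that currently holds `tokens` tokens backing `shares` shares."""
    if tokens == 0:  # initial delegation: one share per token
        return float(amount)
    return shares * amount / tokens

# A fresh validator: the first delegation of 100 tokens mints 100 shares.
s1 = shares_for_delegation(0, 0.0, 100)
# Suppose rewards grow the validator to 200 tokens backing those 100 shares;
# a new 50-token delegation then mints 100 * 50 / 200 = 25 shares.
s2 = shares_for_delegation(200, s1, 50)
# The new delegator's entitlement T * S_i / S is (200 + 50) * 25 / 125 = 50,
# i.e. exactly the 50 tokens they contributed.
entitlement = (200 + 50) * s2 / (s1 + s2)
```

Rewards increase `T` without minting shares, which is why each share is worth more tokens over time; slashes do the opposite.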
For the initial delegation, delegator `j` who delegates `T_j` tokens receives `S_j = T_j` shares. So a validator that hasn't received any rewards and has not been slashed will have `T = S`. ## Messages In this section we describe the processing of the staking messages and the corresponding updates to the state. All created/modified state objects specified by each message are defined within the [state](#state) section. ### MsgCreateValidator A validator is created using the `MsgCreateValidator` message. The validator must be created with an initial delegation from the operator. ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/tx.proto#L20-L21 ``` ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/tx.proto#L50-L73 ``` This message is expected to fail if: * another validator with this operator address is already registered * another validator with this pubkey is already registered * the initial self-delegation tokens are of a denom not specified as the bonding denom * the commission parameters are faulty, namely: * `MaxRate` is either > 1 or \< 0 * the initial `Rate` is either negative or > `MaxRate` * the initial `MaxChangeRate` is either negative or > `MaxRate` * the description fields are too large This message creates and stores the `Validator` object at appropriate indexes. Additionally, a self-delegation (`Delegation`) is made with the initial delegation tokens. The validator always starts as unbonded but may be bonded in the first end-block. ### MsgEditValidator The `Description` and `CommissionRate` of a validator can be updated using the `MsgEditValidator` message.
```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/tx.proto#L23-L24 ``` ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/tx.proto#L78-L97 ``` This message is expected to fail if: * the initial `CommissionRate` is either negative or > `MaxRate` * the `CommissionRate` has already been updated within the previous 24 hours * the `CommissionRate` is > `MaxChangeRate` * the description fields are too large This message stores the updated `Validator` object. ### MsgDelegate Within this message, the delegator provides coins, and in return receives some amount of their validator's (newly created) delegator-shares that are assigned to `Delegation.Shares`. ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/tx.proto#L26-L28 ``` ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/tx.proto#L102-L114 ``` This message is expected to fail if: * the validator does not exist * the `Amount` `Coin` has a denomination different than one defined by `params.BondDenom` * the exchange rate is invalid, meaning the validator has no tokens (due to slashing) but there are outstanding shares * the amount delegated is less than the minimum allowed delegation If a `Delegation` object for the provided addresses does not already exist, it is created as part of this message; otherwise, the existing `Delegation` is updated to include the newly received shares.
The delegator receives newly minted shares at the current exchange rate. The exchange rate is the number of existing shares in the validator divided by the number of currently delegated tokens. The validator is updated in the `ValidatorByPower` index, and the delegation is tracked in the validator object in the `Validators` index. It is possible to delegate to a jailed validator; the only difference is that it will not be added to the power index until it is unjailed. ![Delegation sequence](https://raw.githubusercontent.com/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/delegation_sequence.svg) ### MsgUndelegate The `MsgUndelegate` message allows delegators to undelegate their tokens from a validator. ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/tx.proto#L34-L36 ``` ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/tx.proto#L140-L152 ``` This message returns a response containing the completion time of the undelegation: ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/tx.proto#L154-L158 ``` This message is expected to fail if: * the delegation doesn't exist * the validator doesn't exist * the delegation has fewer shares than the shares worth of `Amount` * the existing `UnbondingDelegation` already has the maximum number of entries, as defined by `params.MaxEntries` * the `Amount` has a denomination different than one defined by `params.BondDenom` When this message is processed the following actions occur: * the validator's `DelegatorShares` and the delegation's `Shares` are both reduced by the message `SharesAmount` * calculate the token worth of the shares
* remove that amount of tokens held within the validator * with those removed tokens, if the validator is: * `Bonded` - add them to an entry in `UnbondingDelegation` (create `UnbondingDelegation` if it doesn't exist) with a completion time a full unbonding period from the current time. Update pool shares to reduce BondedTokens and increase NotBondedTokens by the token worth of the shares. * `Unbonding` - add them to an entry in `UnbondingDelegation` (create `UnbondingDelegation` if it doesn't exist) with the same completion time as the validator (`UnbondingMinTime`). * `Unbonded` - then send the coins to the message's `DelegatorAddr` * if there are no more `Shares` in the delegation, then the delegation object is removed from the store * in this situation, if the delegation is the validator's self-delegation, then also jail the validator. ![Unbond sequence](https://raw.githubusercontent.com/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/unbond_sequence.svg) ### MsgCancelUnbondingDelegation The `MsgCancelUnbondingDelegation` message allows delegators to cancel an `unbondingDelegation` entry and delegate back to a previous validator. ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/tx.proto#L38-L42 ``` ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/tx.proto#L160-L175 ``` This message is expected to fail if: * the `unbondingDelegation` entry is already processed. * the `cancel unbonding delegation` amount is greater than the `unbondingDelegation` entry balance. * the `cancel unbonding delegation` height doesn't exist in the `unbondingDelegationQueue` of the delegator.
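The failure conditions above can be expressed as a small validation routine. This is an illustrative Python sketch with a hypothetical data model (`UnbondingEntry`, `validate_cancel`, and the field names are not SDK types):

```python
# Illustrative validation for cancelling an unbonding delegation entry
# (hypothetical data model; the SDK's actual types and checks differ).

from dataclasses import dataclass

@dataclass
class UnbondingEntry:
    creation_height: int  # block height at which unbonding began
    balance: int          # tokens still unbonding in this entry

def validate_cancel(entries: list, height: int, amount: int) -> None:
    """Reject a cancel request that targets a missing entry, an already
    processed entry, or asks for more tokens than the entry still holds."""
    entry = next((e for e in entries if e.creation_height == height), None)
    if entry is None:
        raise ValueError("no unbonding entry at that creation height")
    if entry.balance == 0:
        raise ValueError("unbonding entry already processed")
    if amount > entry.balance:
        raise ValueError("cancel amount exceeds unbonding entry balance")

# A partial cancel of a 52_000_000-token unbonding entry passes validation.
entries = [UnbondingEntry(creation_height=55078, balance=52_000_000)]
validate_cancel(entries, 55078, 10_000_000)
```

A cancel for more than the remaining balance, or against a creation height with no entry, raises instead of mutating state.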
When this message is processed the following actions occur: * if the `unbondingDelegation` entry balance is zero, the `unbondingDelegation` entry is removed from the `unbondingDelegationQueue` * otherwise, the `unbondingDelegationQueue` is updated with the new `unbondingDelegation` entry balance and initial balance * the validator's `DelegatorShares` and the delegation's `Shares` are both increased by the message `Amount`. ### MsgBeginRedelegate The redelegation command allows delegators to instantly switch validators. Once the unbonding period has passed, the redelegation is automatically completed in the EndBlocker. ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/tx.proto#L30-L32 ``` ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/tx.proto#L119-L132 ``` This message returns a response containing the completion time of the redelegation: ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/tx.proto#L133-L138 ``` This message is expected to fail if: * the delegation doesn't exist * the source or destination validators don't exist * the delegation has fewer shares than the shares worth of `Amount` * the source validator has a receiving redelegation which is not matured (i.e.,
the redelegation may be transitive) * the existing `Redelegation` already has the maximum number of entries, as defined by `params.MaxEntries` * the `Amount` `Coin` has a denomination different than one defined by `params.BondDenom` When this message is processed the following actions occur: * the source validator's `DelegatorShares` and the delegation's `Shares` are both reduced by the message `SharesAmount` * calculate the token worth of the shares and remove that amount of tokens held within the source validator. * if the source validator is: * `Bonded` - add an entry to the `Redelegation` (create `Redelegation` if it doesn't exist) with a completion time a full unbonding period from the current time. Update pool shares to reduce BondedTokens and increase NotBondedTokens by the token worth of the shares (this may be effectively reversed in the next step, however). * `Unbonding` - add an entry to the `Redelegation` (create `Redelegation` if it doesn't exist) with the same completion time as the validator (`UnbondingMinTime`). * `Unbonded` - no action required in this step * Delegate the token worth to the destination validator, possibly moving tokens back to the bonded state. * if there are no more `Shares` in the source delegation, then the source delegation object is removed from the store * in this situation, if the delegation is the validator's self-delegation, then also jail the validator. ![Begin redelegation sequence](https://raw.githubusercontent.com/cosmos/cosmos-sdk/release/v0.46.x/docs/uml/svg/begin_redelegation_sequence.svg) ### MsgUpdateParams The `MsgUpdateParams` message updates the staking module parameters. The params are updated through a governance proposal where the signer is the gov module account address.
```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/tx.proto#L182-L195 ``` The message handling can fail if: * the signer is not the authority defined in the staking keeper (usually the gov module account). * the `bond_denom` in the updated params has zero supply in the bank module (i.e., the denom does not exist on-chain). ## Begin-Block On each ABCI begin-block call, the historical info is stored and pruned according to the `HistoricalEntries` parameter. ### Historical Info Tracking If the `HistoricalEntries` parameter is 0, then the `BeginBlock` performs a no-op. Otherwise, the latest historical info is stored under the key `historicalInfoKey|height`, while any entries older than `height - HistoricalEntries` are deleted. In most cases, this results in a single entry being pruned per block. However, if the parameter `HistoricalEntries` has changed to a lower value, there will be multiple entries in the store that must be pruned. ## End-Block On each ABCI end-block call, the operations to update queues and process validator set changes are executed. ### Validator Set Changes The staking validator set is updated during this process by state transitions that run at the end of every block. As a part of this process any updated validators are also returned back to CometBFT for inclusion in the CometBFT validator set, which is responsible for validating CometBFT messages at the consensus layer.
The operations are as follows: * the new validator set is taken as the top `params.MaxValidators` number of validators retrieved from the `ValidatorsByPower` index * the previous validator set is compared with the new validator set: * missing validators begin unbonding and their `Tokens` are transferred from the `BondedPool` to the `NotBondedPool` `ModuleAccount` * new validators are instantly bonded and their `Tokens` are transferred from the `NotBondedPool` to the `BondedPool` `ModuleAccount` In all cases, any validators leaving or entering the bonded validator set, or changing balances while staying within the bonded validator set, incur an update message reporting their new consensus power, which is passed back to CometBFT. The `LastTotalPower` and `LastValidatorsPower` hold the state of the total power and validator power from the end of the last block, and are used to check for changes that have occurred in `ValidatorsByPower` and the total new power, which is calculated during `EndBlock`. ### Queues Within staking, certain state-transitions are not instantaneous but take place over a duration of time (typically the unbonding period). When these transitions mature, certain operations must take place in order to complete the state transition. This is achieved through the use of queues which are checked/processed at the end of each block. #### Unbonding Validators When a validator is kicked out of the bonded validator set (either through being jailed or not having sufficient bonded tokens), it begins the unbonding process, and all of its delegations begin unbonding as well (while still being delegated to this validator). At this point the validator is said to be an "unbonding validator", whereby it will mature to become an "unbonded validator" after the unbonding period has passed. Each block, the validator queue is checked for mature unbonding validators (namely, those with a completion time `<=` current time and completion height `<=` current block height).
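The maturity check just described can be sketched as a simple predicate. This is an illustrative Python sketch; the function and parameter names are hypothetical, not SDK code:

```python
# Illustrative maturity predicate for a queued unbonding validator
# (not Cosmos SDK code): an entry is mature only once BOTH its
# completion time and its completion height have been reached.

from datetime import datetime, timezone

def is_mature(completion_time: datetime, completion_height: int,
              now: datetime, current_height: int) -> bool:
    return completion_time <= now and completion_height <= current_height

now = datetime(2021, 11, 2, 12, 0, 0, tzinfo=timezone.utc)
done = datetime(2021, 11, 2, 11, 35, 55, tzinfo=timezone.utc)

is_mature(done, 55078, now, 60000)  # both conditions met: mature
is_mature(done, 70000, now, 60000)  # height not yet reached: not mature
```

Requiring both conditions means an unbonding entry cannot complete early on either axis, time or block height.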
At this point any mature validators which do not have any delegations remaining are deleted from state. For all other mature unbonding validators that still have remaining delegations, the `validator.Status` is switched from `types.Unbonding` to `types.Unbonded`. Unbonding operations can be put on hold by external modules via the `PutUnbondingOnHold(unbondingId)` method. As a result, an unbonding operation (e.g., an unbonding delegation) that is on hold cannot complete even if it reaches maturity. For an unbonding operation with `unbondingId` to eventually complete (after it reaches maturity), every call to `PutUnbondingOnHold(unbondingId)` must be matched by a call to `UnbondingCanComplete(unbondingId)`. #### Unbonding Delegations Complete the unbonding of all mature `UnbondingDelegations.Entries` within the `UnbondingDelegations` queue with the following procedure: * transfer the balance coins to the delegator's wallet address * remove the mature entry from `UnbondingDelegation.Entries` * remove the `UnbondingDelegation` object from the store if there are no remaining entries. #### Redelegations Complete the unbonding of all mature `Redelegation.Entries` within the `Redelegations` queue with the following procedure: * remove the mature entry from `Redelegation.Entries` * remove the `Redelegation` object from the store if there are no remaining entries. ## Hooks Other modules may register operations to execute when a certain event has occurred within staking. These events can be registered to execute either right `Before` or `After` the staking event (as per the hook name).
The following hooks can be registered with staking: * `AfterValidatorCreated(Context, ValAddress) error` * called when a validator is created * `BeforeValidatorModified(Context, ValAddress) error` * called when a validator's state is changed * `AfterValidatorRemoved(Context, ConsAddress, ValAddress) error` * called when a validator is deleted * `AfterValidatorBonded(Context, ConsAddress, ValAddress) error` * called when a validator is bonded * `AfterValidatorBeginUnbonding(Context, ConsAddress, ValAddress) error` * called when a validator begins unbonding * `BeforeDelegationCreated(Context, AccAddress, ValAddress) error` * called when a delegation is created * `BeforeDelegationSharesModified(Context, AccAddress, ValAddress) error` * called when a delegation's shares are modified * `AfterDelegationModified(Context, AccAddress, ValAddress) error` * called when a delegation is created or modified * `BeforeDelegationRemoved(Context, AccAddress, ValAddress) error` * called when a delegation is removed * `AfterUnbondingInitiated(Context, UnbondingID)` * called when an unbonding operation (validator unbonding, unbonding delegation, redelegation) is initiated ## Events The staking module emits the following events: ### EndBlocker | Type | Attribute Key | Attribute Value | | ---------------------- | ---------------------- | --------------------------- | | complete\_unbonding | amount | `{totalUnbondingAmount}` | | complete\_unbonding | validator | `{validatorAddress}` | | complete\_unbonding | delegator | `{delegatorAddress}` | | complete\_redelegation | amount | `{totalRedelegationAmount}` | | complete\_redelegation | source\_validator | `{srcValidatorAddress}` | | complete\_redelegation | destination\_validator | `{dstValidatorAddress}` | | complete\_redelegation | delegator | `{delegatorAddress}` | ## Msg's ### MsgCreateValidator | Type | Attribute Key | Attribute Value | | ----------------- | ------------- | -------------------- | | create\_validator | validator |
`{validatorAddress}` | | create\_validator | amount | `{delegationAmount}` | | message | module | staking | | message | action | create\_validator | | message | sender | `{senderAddress}` | ### MsgEditValidator | Type | Attribute Key | Attribute Value | | --------------- | --------------------- | --------------------- | | edit\_validator | commission\_rate | `{commissionRate}` | | edit\_validator | min\_self\_delegation | `{minSelfDelegation}` | | message | module | staking | | message | action | edit\_validator | | message | sender | `{senderAddress}` | ### MsgDelegate | Type | Attribute Key | Attribute Value | | -------- | ------------- | -------------------- | | delegate | validator | `{validatorAddress}` | | delegate | amount | `{delegationAmount}` | | message | module | staking | | message | action | delegate | | message | sender | `{senderAddress}` | ### MsgUndelegate | Type | Attribute Key | Attribute Value | | ------- | --------------------- | -------------------- | | unbond | validator | `{validatorAddress}` | | unbond | amount | `{unbondAmount}` | | unbond | completion\_time \[0] | `{completionTime}` | | message | module | staking | | message | action | begin\_unbonding | | message | sender | `{senderAddress}` | * \[0] Time is formatted in the RFC3339 standard ### MsgCancelUnbondingDelegation | Type | Attribute Key | Attribute Value | | ----------------------------- | ---------------- | ----------------------------------- | | cancel\_unbonding\_delegation | validator | `{validatorAddress}` | | cancel\_unbonding\_delegation | delegator | `{delegatorAddress}` | | cancel\_unbonding\_delegation | amount | `{cancelUnbondingDelegationAmount}` | | cancel\_unbonding\_delegation | creation\_height | `{unbondingCreationHeight}` | | message | module | staking | | message | action | cancel\_unbond | | message | sender | `{senderAddress}` | ### MsgBeginRedelegate | Type | Attribute Key | Attribute Value | | ---------- | ---------------------- | ----------------------- 
| | redelegate | source\_validator | `{srcValidatorAddress}` | | redelegate | destination\_validator | `{dstValidatorAddress}` | | redelegate | amount | `{unbondAmount}` | | redelegate | completion\_time \[0] | `{completionTime}` | | message | module | staking | | message | action | begin\_redelegate | | message | sender | `{senderAddress}` | * \[0] Time is formatted in the RFC3339 standard ## Parameters The staking module contains the following parameters: | Key | Type | Example | | ----------------- | ---------------- | ---------------------- | | UnbondingTime | string (time ns) | "259200000000000" | | MaxValidators | uint16 | 100 | | KeyMaxEntries | uint16 | 7 | | HistoricalEntries | uint16 | 3 | | BondDenom | string | "stake" | | MinCommissionRate | string | "0.000000000000000000" | ## Client ### CLI A user can query and interact with the `staking` module using the CLI. #### Query The `query` commands allow users to query `staking` state. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query staking --help ``` ##### delegation The `delegation` command allows users to query delegations for an individual delegator on an individual validator.
Usage: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query staking delegation [delegator-addr] [validator-addr] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query staking delegation cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} balance: amount: "10000000000" denom: stake delegation: delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p shares: "10000000000.000000000000000000" validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj ``` ##### delegations The `delegations` command allows users to query delegations for an individual delegator on all validators. Usage: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query staking delegations [delegator-addr] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query staking delegations cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} delegation_responses: - balance: amount: "10000000000" denom: stake delegation: delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p shares: "10000000000.000000000000000000" validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj - balance: amount: "10000000000" denom: stake delegation: delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p shares: "10000000000.000000000000000000" validator_address: cosmosvaloper1x20lytyf6zkcrv5edpkfkn8sz578qg5sqfyqnp pagination: next_key: null total: "0" ``` ##### delegations-to The `delegations-to` command allows users to query delegations on 
an individual validator. Usage: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query staking delegations-to [validator-addr] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query staking delegations-to cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} - balance: amount: "504000000" denom: stake delegation: delegator_address: cosmos1q2qwwynhv8kh3lu5fkeex4awau9x8fwt45f5cp shares: "504000000.000000000000000000" validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj - balance: amount: "78125000000" denom: uixo delegation: delegator_address: cosmos1qvppl3479hw4clahe0kwdlfvf8uvjtcd99m2ca shares: "78125000000.000000000000000000" validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj pagination: next_key: null total: "0" ``` ##### historical-info The `historical-info` command allows users to query historical information at a given height.
Usage: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query staking historical-info [height] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query staking historical-info 10 ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} header: app_hash: Lbx8cXpI868wz8sgp4qPYVrlaKjevR5WP/IjUxwp3oo= chain_id: testnet consensus_hash: BICRvH3cKD93v7+R1zxE2ljD34qcvIZ0Bdi389qtoi8= data_hash: 47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU= evidence_hash: 47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU= height: "10" last_block_id: hash: RFbkpu6pWfSThXxKKl6EZVDnBSm16+U0l0xVjTX08Fk= part_set_header: hash: vpIvXD4rxD5GM4MXGz0Sad9I7//iVYLzZsEU4BVgWIU= total: 1 last_commit_hash: Ne4uXyx4QtNp4Zx89kf9UK7oG9QVbdB6e7ZwZkhy8K0= last_results_hash: 47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU= next_validators_hash: nGBgKeWBjoxeKFti00CxHsnULORgKY4LiuQwBuUrhCs= proposer_address: mMEP2c2IRPLr99LedSRtBg9eONM= time: "2021-10-01T06:00:49.785790894Z" validators_hash: nGBgKeWBjoxeKFti00CxHsnULORgKY4LiuQwBuUrhCs= version: app: "0" block: "11" valset: - commission: commission_rates: max_change_rate: "0.010000000000000000" max_rate: "0.200000000000000000" rate: "0.100000000000000000" update_time: "2021-10-01T05:52:50.380144238Z" consensus_pubkey: '@type': /cosmos.crypto.ed25519.PubKey key: Auxs3865HpB/EfssYOzfqNhEJjzys2Fo6jD5B8tPgC8= delegator_shares: "10000000.000000000000000000" description: details: "" identity: "" moniker: myvalidator security_contact: "" website: "" jailed: false min_self_delegation: "1" operator_address: cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc status: BOND_STATUS_BONDED tokens: "10000000" unbonding_height: "0" unbonding_time: "1970-01-01T00:00:00Z" ``` ##### params The `params` command allows users to query values set as staking parameters. 
Usage: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query staking params [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query staking params ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} bond_denom: stake historical_entries: 10000 max_entries: 7 max_validators: 50 unbonding_time: 1814400s ``` ##### pool The `pool` command allows users to query values for amounts stored in the staking pool. Usage: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd q staking pool [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd q staking pool ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} bonded_tokens: "10000000" not_bonded_tokens: "0" ``` ##### redelegation The `redelegation` command allows users to query a redelegation record based on delegator and a source and destination validator address. 
Usage: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query staking redelegation [delegator-addr] [src-validator-addr] [dst-validator-addr] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query staking redelegation cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} pagination: null redelegation_responses: - entries: - balance: "50000000" redelegation_entry: completion_time: "2021-10-24T20:33:21.960084845Z" creation_height: 2.382847e+06 initial_balance: "50000000" shares_dst: "50000000.000000000000000000" - balance: "5000000000" redelegation_entry: completion_time: "2021-10-25T21:33:54.446846862Z" creation_height: 2.397271e+06 initial_balance: "5000000000" shares_dst: "5000000000.000000000000000000" redelegation: delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p entries: null validator_dst_address: cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm validator_src_address: cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm ``` ##### redelegations The `redelegations` command allows users to query all redelegation records for an individual delegator. 
Usage: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query staking redelegations [delegator-addr] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query staking redelegations cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} pagination: next_key: null total: "0" redelegation_responses: - entries: - balance: "50000000" redelegation_entry: completion_time: "2021-10-24T20:33:21.960084845Z" creation_height: 2.382847e+06 initial_balance: "50000000" shares_dst: "50000000.000000000000000000" - balance: "5000000000" redelegation_entry: completion_time: "2021-10-25T21:33:54.446846862Z" creation_height: 2.397271e+06 initial_balance: "5000000000" shares_dst: "5000000000.000000000000000000" redelegation: delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p entries: null validator_dst_address: cosmosvaloper1uccl5ugxrm7vqlzwqr04pjd320d2fz0z3hc6vm validator_src_address: cosmosvaloper1zppjyal5emta5cquje8ndkpz0rs046m7zqxrpp - entries: - balance: "562770000000" redelegation_entry: completion_time: "2021-10-25T21:42:07.336911677Z" creation_height: 2.39735e+06 initial_balance: "562770000000" shares_dst: "562770000000.000000000000000000" redelegation: delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p entries: null validator_dst_address: cosmosvaloper1uccl5ugxrm7vqlzwqr04pjd320d2fz0z3hc6vm validator_src_address: cosmosvaloper1zppjyal5emta5cquje8ndkpz0rs046m7zqxrpp ``` ##### redelegations-from The `redelegations-from` command allows users to query delegations that are redelegating *from* a validator.
Usage: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query staking redelegations-from [validator-addr] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query staking redelegations-from cosmosvaloper1y4rzzrgl66eyhzt6gse2k7ej3zgwmngeleucjy ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} pagination: next_key: null total: "0" redelegation_responses: - entries: - balance: "50000000" redelegation_entry: completion_time: "2021-10-24T20:33:21.960084845Z" creation_height: 2.382847e+06 initial_balance: "50000000" shares_dst: "50000000.000000000000000000" - balance: "5000000000" redelegation_entry: completion_time: "2021-10-25T21:33:54.446846862Z" creation_height: 2.397271e+06 initial_balance: "5000000000" shares_dst: "5000000000.000000000000000000" redelegation: delegator_address: cosmos1pm6e78p4pgn0da365plzl4t56pxy8hwtqp2mph entries: null validator_dst_address: cosmosvaloper1uccl5ugxrm7vqlzwqr04pjd320d2fz0z3hc6vm validator_src_address: cosmosvaloper1y4rzzrgl66eyhzt6gse2k7ej3zgwmngeleucjy - entries: - balance: "221000000" redelegation_entry: completion_time: "2021-10-05T21:05:45.669420544Z" creation_height: 2.120693e+06 initial_balance: "221000000" shares_dst: "221000000.000000000000000000" redelegation: delegator_address: cosmos1zqv8qxy2zgn4c58fz8jt8jmhs3d0attcussrf6 entries: null validator_dst_address: cosmosvaloper10mseqwnwtjaqfrwwp2nyrruwmjp6u5jhah4c3y validator_src_address: cosmosvaloper1y4rzzrgl66eyhzt6gse2k7ej3zgwmngeleucjy ``` ##### unbonding-delegation The `unbonding-delegation` command allows users to query unbonding delegations for an individual delegator on an individual validator. 
Usage: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query staking unbonding-delegation [delegator-addr] [validator-addr] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query staking unbonding-delegation cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p entries: - balance: "52000000" completion_time: "2021-11-02T11:35:55.391594709Z" creation_height: "55078" initial_balance: "52000000" validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj ``` ##### unbonding-delegations The `unbonding-delegations` command allows users to query all unbonding delegation records for a single delegator. Usage: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query staking unbonding-delegations [delegator-addr] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query staking unbonding-delegations cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} pagination: next_key: null total: "0" unbonding_responses: - delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p entries: - balance: "52000000" completion_time: "2021-11-02T11:35:55.391594709Z" creation_height: "55078" initial_balance: "52000000" validator_address: cosmosvaloper1t8ehvswxjfn3ejzkjtntcyrqwvmvuknzmvtaaa ``` ##### unbonding-delegations-from The `unbonding-delegations-from` command allows users to query delegations that are unbonding *from* a validator.
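Each unbonding entry above carries an RFC 3339 `completion_time` with nanosecond precision, which `datetime.fromisoformat` in older Python versions cannot parse directly. A sketch that truncates to microseconds and computes the time remaining (the helper is illustrative, not an SDK API):

```python
from datetime import datetime, timezone

def seconds_until_completion(completion_time: str, now: datetime) -> float:
    """Seconds until an unbonding entry completes (negative if already
    past). The nanosecond fraction is truncated to microseconds so the
    timestamp parses portably."""
    trimmed = completion_time.rstrip("Z")
    if "." in trimmed:
        whole, frac = trimmed.split(".")
        trimmed = f"{whole}.{frac[:6]}"
    t = datetime.fromisoformat(trimmed).replace(tzinfo=timezone.utc)
    return (t - now).total_seconds()

now = datetime(2021, 11, 1, tzinfo=timezone.utc)
print(seconds_until_completion("2021-11-02T11:35:55.391594709Z", now))
```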
Usage: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query staking unbonding-delegations-from [validator-addr] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query staking unbonding-delegations-from cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} pagination: next_key: null total: "0" unbonding_responses: - delegator_address: cosmos1qqq9txnw4c77sdvzx0tkedsafl5s3vk7hn53fn entries: - balance: "150000000" completion_time: "2021-11-01T21:41:13.098141574Z" creation_height: "46823" initial_balance: "150000000" validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj - delegator_address: cosmos1peteje73eklqau66mr7h7rmewmt2vt99y24f5z entries: - balance: "24000000" completion_time: "2021-10-31T02:57:18.192280361Z" creation_height: "21516" initial_balance: "24000000" validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj ``` ##### validator The `validator` command allows users to query details about an individual validator. 
Usage: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query staking validator [validator-addr] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query staking validator cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} commission: commission_rates: max_change_rate: "0.020000000000000000" max_rate: "0.200000000000000000" rate: "0.050000000000000000" update_time: "2021-10-01T19:24:52.663191049Z" consensus_pubkey: '@type': /cosmos.crypto.ed25519.PubKey key: sIiexdJdYWn27+7iUHQJDnkp63gq/rzUq1Y+fxoGjXc= delegator_shares: "32948270000.000000000000000000" description: details: Witval is the validator arm from Vitwit. Vitwit is into software consulting and services business since 2015. We are working closely with Cosmos ecosystem since 2018. We are also building tools for the ecosystem, Aneka is our explorer for the cosmos ecosystem. identity: 51468B615127273A moniker: Witval security_contact: "" website: "" jailed: false min_self_delegation: "1" operator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj status: BOND_STATUS_BONDED tokens: "32948270000" unbonding_height: "0" unbonding_time: "1970-01-01T00:00:00Z" ``` ##### validators The `validators` command allows users to query details about all validators on a network. 
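The `commission` block shown above declares the bounds that constrain future rate changes: a new rate may not exceed `max_rate`, and a single change may not move the rate by more than `max_change_rate` (the SDK additionally allows at most one change per 24 hours). A sketch of the bounds check using the Witval values above (illustrative helper, not an SDK API):

```python
from decimal import Decimal

def commission_update_ok(new_rate: str, current_rate: str,
                         max_rate: str, max_change_rate: str) -> bool:
    """Check a proposed commission rate against a validator's declared
    bounds: it must not exceed max_rate, and the step from the current
    rate must not exceed max_change_rate."""
    new, cur = Decimal(new_rate), Decimal(current_rate)
    return new <= Decimal(max_rate) and abs(new - cur) <= Decimal(max_change_rate)

# rate 0.05, max_rate 0.20, max_change_rate 0.02 (from the output above)
print(commission_update_ok("0.06", "0.05", "0.20", "0.02"))  # True
print(commission_update_ok("0.10", "0.05", "0.20", "0.02"))  # False: step > 0.02
```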
Usage: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query staking validators [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query staking validators ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} pagination: next_key: FPTi7TKAjN63QqZh+BaXn6gBmD5/ total: "0" validators: commission: commission_rates: max_change_rate: "0.020000000000000000" max_rate: "0.200000000000000000" rate: "0.050000000000000000" update_time: "2021-10-01T19:24:52.663191049Z" consensus_pubkey: '@type': /cosmos.crypto.ed25519.PubKey key: sIiexdJdYWn27+7iUHQJDnkp63gq/rzUq1Y+fxoGjXc= delegator_shares: "32948270000.000000000000000000" description: details: Witval is the validator arm from Vitwit. Vitwit is into software consulting and services business since 2015. We are working closely with Cosmos ecosystem since 2018. We are also building tools for the ecosystem, Aneka is our explorer for the cosmos ecosystem. identity: 51468B615127273A moniker: Witval security_contact: "" website: "" jailed: false min_self_delegation: "1" operator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj status: BOND_STATUS_BONDED tokens: "32948270000" unbonding_height: "0" unbonding_time: "1970-01-01T00:00:00Z" - commission: commission_rates: max_change_rate: "0.100000000000000000" max_rate: "0.200000000000000000" rate: "0.050000000000000000" update_time: "2021-10-04T18:02:21.446645619Z" consensus_pubkey: '@type': /cosmos.crypto.ed25519.PubKey key: GDNpuKDmCg9GnhnsiU4fCWktuGUemjNfvpCZiqoRIYA= delegator_shares: "559343421.000000000000000000" description: details: Noderunners is a professional validator in POS networks. We have a huge node running experience, reliable soft and hardware. Our commissions are always low, our support to delegators is always full. 
Stake with us and start receiving your Cosmos rewards now! identity: 812E82D12FEA3493 moniker: Noderunners security_contact: info@noderunners.biz website: http://noderunners.biz jailed: false min_self_delegation: "1" operator_address: cosmosvaloper1q5ku90atkhktze83j9xjaks2p7uruag5zp6wt7 status: BOND_STATUS_BONDED tokens: "559343421" unbonding_height: "0" unbonding_time: "1970-01-01T00:00:00Z" ``` #### Transactions The `tx` commands allow users to interact with the `staking` module. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx staking --help ``` ##### create-validator The command `create-validator` allows users to create a new validator, initialized with a self-delegation. Usage: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx staking create-validator [path/to/validator.json] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx staking create-validator /path/to/validator.json \ --chain-id="name_of_chain_id" \ --gas="auto" \ --gas-adjustment="1.2" \ --gas-prices="0.025stake" \ --from=mykey ``` where `validator.json` contains: ```json expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "pubkey": { "@type": "/cosmos.crypto.ed25519.PubKey", "key": "BnbwFpeONLqvWqJb3qaUbL5aoIcW3fSuAp9nT3z5f20=" }, "amount": "1000000stake", "moniker": "my-moniker", "website": "https://myweb.site", "security": "security-contact@gmail.com", "details": "description of your validator", "commission-rate": "0.10", "commission-max-rate": "0.20", "commission-max-change-rate": "0.01", "min-self-delegation": "1" } ``` The `pubkey` can be obtained using the `simd tendermint show-validator` command. ##### delegate The command `delegate` allows users to delegate liquid tokens to a validator.
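The `validator.json` file above can also be assembled programmatically; a minimal Python sketch (the helper and its defaults are illustrative — the `key` value must come from `simd tendermint show-validator` on the validator's own node):

```python
import json

def make_validator_json(pubkey_key: str, amount: str, moniker: str) -> str:
    """Assemble the JSON document expected by `create-validator`.
    Only a subset of fields is shown; optional ones (website,
    security, details) can be added the same way."""
    doc = {
        "pubkey": {"@type": "/cosmos.crypto.ed25519.PubKey", "key": pubkey_key},
        "amount": amount,
        "moniker": moniker,
        "commission-rate": "0.10",
        "commission-max-rate": "0.20",
        "commission-max-change-rate": "0.01",
        "min-self-delegation": "1",
    }
    return json.dumps(doc, indent=2)

print(make_validator_json("BnbwFpeONLqvWqJb3qaUbL5aoIcW3fSuAp9nT3z5f20=",
                          "1000000stake", "my-moniker"))
```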
Usage: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx staking delegate [validator-addr] [amount] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx staking delegate cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm 1000stake --from mykey ``` ##### edit-validator The command `edit-validator` allows users to edit an existing validator account. Usage: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx staking edit-validator [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx staking edit-validator --moniker "new_moniker_name" --website "new_website_url" --from mykey ``` ##### redelegate The command `redelegate` allows users to redelegate illiquid tokens from one validator to another. Usage: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx staking redelegate [src-validator-addr] [dst-validator-addr] [amount] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx staking redelegate cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm 100stake --from mykey ``` ##### unbond The command `unbond` allows users to unbond shares from a validator.
Usage: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx staking unbond [validator-addr] [amount] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx staking unbond cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj 100stake --from mykey ``` ##### cancel unbond The command `cancel-unbond` allows users to cancel an unbonding delegation entry and delegate back to the original validator. Usage: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx staking cancel-unbond [validator-addr] [amount] [creation-height] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx staking cancel-unbond cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj 100stake 123123 --from mykey ``` ### gRPC A user can query the `staking` module using gRPC endpoints. #### Validators The `Validators` endpoint queries all validators that match the given status.
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.staking.v1beta1.Query/Validators ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext localhost:9090 cosmos.staking.v1beta1.Query/Validators ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "validators": [ { "operatorAddress": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc", "consensusPubkey": {"@type":"/cosmos.crypto.ed25519.PubKey","key":"Auxs3865HpB/EfssYOzfqNhEJjzys2Fo6jD5B8tPgC8="}, "status": "BOND_STATUS_BONDED", "tokens": "10000000", "delegatorShares": "10000000000000000000000000", "description": { "moniker": "myvalidator" }, "unbondingTime": "1970-01-01T00:00:00Z", "commission": { "commissionRates": { "rate": "100000000000000000", "maxRate": "200000000000000000", "maxChangeRate": "10000000000000000" }, "updateTime": "2021-10-01T05:52:50.380144238Z" }, "minSelfDelegation": "1" } ], "pagination": { "total": "1" } } ``` #### Validator The `Validator` endpoint queries validator information for given validator address. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.staking.v1beta1.Query/Validator ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext -d '{"validator_addr":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \ localhost:9090 cosmos.staking.v1beta1.Query/Validator ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "validator": { "operatorAddress": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc", "consensusPubkey": {"@type":"/cosmos.crypto.ed25519.PubKey","key":"Auxs3865HpB/EfssYOzfqNhEJjzys2Fo6jD5B8tPgC8="}, "status": "BOND_STATUS_BONDED", "tokens": "10000000", "delegatorShares": "10000000000000000000000000", "description": { "moniker": "myvalidator" }, "unbondingTime": "1970-01-01T00:00:00Z", "commission": { "commissionRates": { "rate": "100000000000000000", "maxRate": "200000000000000000", "maxChangeRate": "10000000000000000" }, "updateTime": "2021-10-01T05:52:50.380144238Z" }, "minSelfDelegation": "1" } } ``` #### ValidatorDelegations The `ValidatorDelegations` endpoint queries delegate information for given validator. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.staking.v1beta1.Query/ValidatorDelegations ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext -d '{"validator_addr":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \ localhost:9090 cosmos.staking.v1beta1.Query/ValidatorDelegations ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "delegationResponses": [ { "delegation": { "delegatorAddress": "cosmos1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgy3ua5t", "validatorAddress": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc", "shares": "10000000000000000000000000" }, "balance": { "denom": "stake", "amount": "10000000" } } ], "pagination": { "total": "1" } } ``` #### ValidatorUnbondingDelegations The `ValidatorUnbondingDelegations` endpoint queries unbonding delegations of a given validator.
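Delegations are recorded as `shares`, not tokens; the token value reported in `balance` follows from the validator's exchange rate (`tokens / delegator_shares`). A sketch of the conversion, using the numbers from the `ValidatorDelegations` output above (the helper is illustrative, not an SDK API):

```python
from decimal import Decimal

def shares_to_tokens(shares: str, validator_tokens: str,
                     validator_shares: str) -> int:
    """Convert delegation shares to a (truncated) token amount using
    the validator's tokens / delegator_shares exchange rate."""
    rate = Decimal(validator_tokens) / Decimal(validator_shares)
    return int(Decimal(shares) * rate)

print(shares_to_tokens("10000000000000000000000000",   # delegation shares
                       "10000000",                     # validator tokens
                       "10000000000000000000000000"))  # total delegator shares
# 10000000 -- matches balance.amount in the response above
```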
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.staking.v1beta1.Query/ValidatorUnbondingDelegations ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext -d '{"validator_addr":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \ localhost:9090 cosmos.staking.v1beta1.Query/ValidatorUnbondingDelegations ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "unbonding_responses": [ { "delegator_address": "cosmos1z3pzzw84d6xn00pw9dy3yapqypfde7vg6965fy", "validator_address": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc", "entries": [ { "creation_height": "25325", "completion_time": "2021-10-31T09:24:36.797320636Z", "initial_balance": "20000000", "balance": "20000000" } ] }, { "delegator_address": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77", "validator_address": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc", "entries": [ { "creation_height": "13100", "completion_time": "2021-10-30T12:53:02.272266791Z", "initial_balance": "1000000", "balance": "1000000" } ] }, ], "pagination": { "next_key": null, "total": "8" } } ``` #### Delegation The `Delegation` endpoint queries delegate information for given validator delegator pair. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.staking.v1beta1.Query/Delegation ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"delegator_addr": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77", "validator_addr": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \ localhost:9090 cosmos.staking.v1beta1.Query/Delegation ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "delegation_response": { "delegation": { "delegator_address":"cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77", "validator_address":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc", "shares":"25083119936.000000000000000000" }, "balance": { "denom":"stake", "amount":"25083119936" } } } ``` #### UnbondingDelegation The `UnbondingDelegation` endpoint queries unbonding information for a given delegator-validator pair.
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.staking.v1beta1.Query/UnbondingDelegation ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"delegator_addr": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77", "validator_addr": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \ localhost:9090 cosmos.staking.v1beta1.Query/UnbondingDelegation ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "unbond": { "delegator_address": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77", "validator_address": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc", "entries": [ { "creation_height": "136984", "completion_time": "2021-11-08T05:38:47.505593891Z", "initial_balance": "400000000", "balance": "400000000" }, { "creation_height": "137005", "completion_time": "2021-11-08T05:40:53.526196312Z", "initial_balance": "385000000", "balance": "385000000" } ] } } ``` #### DelegatorDelegations The `DelegatorDelegations` endpoint queries all delegations of a given delegator address.
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.staking.v1beta1.Query/DelegatorDelegations ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"delegator_addr": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77"}' \ localhost:9090 cosmos.staking.v1beta1.Query/DelegatorDelegations ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "delegation_responses": [ {"delegation":{"delegator_address":"cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77","validator_address":"cosmosvaloper1eh5mwu044gd5ntkkc2xgfg8247mgc56fww3vc8","shares":"25083339023.000000000000000000"},"balance":{"denom":"stake","amount":"25083339023"}} ], "pagination": { "next_key": null, "total": "1" } } ``` #### DelegatorUnbondingDelegations The `DelegatorUnbondingDelegations` endpoint queries all unbonding delegations of a given delegator address. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.staking.v1beta1.Query/DelegatorUnbondingDelegations ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"delegator_addr": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77"}' \ localhost:9090 cosmos.staking.v1beta1.Query/DelegatorUnbondingDelegations ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "unbonding_responses": [ { "delegator_address": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77", "validator_address": "cosmosvaloper1sjllsnramtg3ewxqwwrwjxfgc4n4ef9uxyejze", "entries": [ { "creation_height": "136984", "completion_time": "2021-11-08T05:38:47.505593891Z", "initial_balance": "400000000", "balance": "400000000" }, { "creation_height": "137005", "completion_time": "2021-11-08T05:40:53.526196312Z", "initial_balance": "385000000", "balance": "385000000" } ] } ], "pagination": { "next_key": null, "total": "1" } } ``` #### Redelegations The `Redelegations` endpoint queries redelegations of given address. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.staking.v1beta1.Query/Redelegations ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"delegator_addr": "cosmos1ld5p7hn43yuh8ht28gm9pfjgj2fctujp2tgwvf", "src_validator_addr" : "cosmosvaloper1j7euyj85fv2jugejrktj540emh9353ltgppc3g", "dst_validator_addr" : "cosmosvaloper1yy3tnegzmkdcm7czzcy3flw5z0zyr9vkkxrfse"}' \ localhost:9090 cosmos.staking.v1beta1.Query/Redelegations ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "redelegation_responses": [ { "redelegation": { "delegator_address": "cosmos1ld5p7hn43yuh8ht28gm9pfjgj2fctujp2tgwvf", "validator_src_address": "cosmosvaloper1j7euyj85fv2jugejrktj540emh9353ltgppc3g", "validator_dst_address": "cosmosvaloper1yy3tnegzmkdcm7czzcy3flw5z0zyr9vkkxrfse", "entries": null }, "entries": [ { "redelegation_entry": { "creation_height": 135932, "completion_time": "2021-11-08T03:52:55.299147901Z", "initial_balance": "2900000", "shares_dst": "2900000.000000000000000000" }, "balance": "2900000" } ] } ], "pagination": null } ``` #### DelegatorValidators The `DelegatorValidators` endpoint queries all validators information for given delegator. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.staking.v1beta1.Query/DelegatorValidators ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"delegator_addr": "cosmos1ld5p7hn43yuh8ht28gm9pfjgj2fctujp2tgwvf"}' \ localhost:9090 cosmos.staking.v1beta1.Query/DelegatorValidators ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "validators": [ { "operator_address": "cosmosvaloper1eh5mwu044gd5ntkkc2xgfg8247mgc56fww3vc8", "consensus_pubkey": { "@type": "/cosmos.crypto.ed25519.PubKey", "key": "UPwHWxH1zHJWGOa/m6JB3f5YjHMvPQPkVbDqqi+U7Uw=" }, "jailed": false, "status": "BOND_STATUS_BONDED", "tokens": "347260647559", "delegator_shares": "347260647559.000000000000000000", "description": { "moniker": "BouBouNode", "identity": "", "website": "https://boubounode.com", "security_contact": "", "details": "AI-based Validator. #1 AI Validator on Game of Stakes. Fairly priced. Don't trust (humans), verify. Made with BouBou love." 
}, "unbonding_height": "0", "unbonding_time": "1970-01-01T00:00:00Z", "commission": { "commission_rates": { "rate": "0.061000000000000000", "max_rate": "0.300000000000000000", "max_change_rate": "0.150000000000000000" }, "update_time": "2021-10-01T15:00:00Z" }, "min_self_delegation": "1" } ], "pagination": { "next_key": null, "total": "1" } } ``` #### DelegatorValidator The `DelegatorValidator` endpoint queries validator information for a given delegator-validator pair. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.staking.v1beta1.Query/DelegatorValidator ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"delegator_addr": "cosmos1eh5mwu044gd5ntkkc2xgfg8247mgc56f3n8rr7", "validator_addr": "cosmosvaloper1eh5mwu044gd5ntkkc2xgfg8247mgc56fww3vc8"}' \ localhost:9090 cosmos.staking.v1beta1.Query/DelegatorValidator ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "validator": { "operator_address": "cosmosvaloper1eh5mwu044gd5ntkkc2xgfg8247mgc56fww3vc8", "consensus_pubkey": { "@type": "/cosmos.crypto.ed25519.PubKey", "key": "UPwHWxH1zHJWGOa/m6JB3f5YjHMvPQPkVbDqqi+U7Uw=" }, "jailed": false, "status": "BOND_STATUS_BONDED", "tokens": "347262754841", "delegator_shares": "347262754841.000000000000000000", "description": { "moniker": "BouBouNode", "identity": "", "website": "https://boubounode.com", "security_contact": "", "details": "AI-based Validator. #1 AI Validator on Game of Stakes. Fairly priced. Don't trust (humans), verify. Made with BouBou love."
}, "unbonding_height": "0", "unbonding_time": "1970-01-01T00:00:00Z", "commission": { "commission_rates": { "rate": "0.061000000000000000", "max_rate": "0.300000000000000000", "max_change_rate": "0.150000000000000000" }, "update_time": "2021-10-01T15:00:00Z" }, "min_self_delegation": "1" } } ``` #### HistoricalInfo ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.staking.v1beta1.Query/HistoricalInfo ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext -d '{"height" : 1}' localhost:9090 cosmos.staking.v1beta1.Query/HistoricalInfo ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "hist": { "header": { "version": { "block": "11", "app": "0" }, "chain_id": "simd-1", "height": "140142", "time": "2021-10-11T10:56:29.720079569Z", "last_block_id": { "hash": "9gri/4LLJUBFqioQ3NzZIP9/7YHR9QqaM6B2aJNQA7o=", "part_set_header": { "total": 1, "hash": "Hk1+C864uQkl9+I6Zn7IurBZBKUevqlVtU7VqaZl1tc=" } }, "last_commit_hash": "VxrcS27GtvGruS3I9+AlpT7udxIT1F0OrRklrVFSSKc=", "data_hash": "80BjOrqNYUOkTnmgWyz9AQ8n7SoEmPVi4QmAe8RbQBY=", "validators_hash": "95W49n2hw8RWpr1GPTAO5MSPi6w6Wjr3JjjS7AjpBho=", "next_validators_hash": "95W49n2hw8RWpr1GPTAO5MSPi6w6Wjr3JjjS7AjpBho=", "consensus_hash": "BICRvH3cKD93v7+R1zxE2ljD34qcvIZ0Bdi389qtoi8=", "app_hash": "ZZaxnSY3E6Ex5Bvkm+RigYCK82g8SSUL53NymPITeOE=", "last_results_hash": "47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=", "evidence_hash": "47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=", "proposer_address": "aH6dO428B+ItuoqPq70efFHrSMY=" }, "valset": [ { "operator_address": "cosmosvaloper196ax4vc0lwpxndu9dyhvca7jhxp70rmcqcnylw", "consensus_pubkey": { "@type": "/cosmos.crypto.ed25519.PubKey", "key": "/O7BtNW0pafwfvomgR4ZnfldwPXiFfJs9mHg3gwfv5Q=" }, "jailed": false, "status": "BOND_STATUS_BONDED", "tokens": "1426045203613", 
"delegator_shares": "1426045203613.000000000000000000", "description": { "moniker": "SG-1", "identity": "48608633F99D1B60", "website": "https://sg-1.online", "security_contact": "", "details": "SG-1 - your favorite validator on Witval. We offer 100% Soft Slash protection." }, "unbonding_height": "0", "unbonding_time": "1970-01-01T00:00:00Z", "commission": { "commission_rates": { "rate": "0.037500000000000000", "max_rate": "0.200000000000000000", "max_change_rate": "0.030000000000000000" }, "update_time": "2021-10-01T15:00:00Z" }, "min_self_delegation": "1" } ] } } ``` #### Pool The `Pool` endpoint queries the pool information. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.staking.v1beta1.Query/Pool ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext localhost:9090 cosmos.staking.v1beta1.Query/Pool ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "pool": { "not_bonded_tokens": "369054400189", "bonded_tokens": "15657192425623" } } ``` #### Params The `Params` endpoint queries the staking module parameters. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.staking.v1beta1.Query/Params ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext localhost:9090 cosmos.staking.v1beta1.Query/Params ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "params": { "unbondingTime": "1814400s", "maxValidators": 100, "maxEntries": 7, "historicalEntries": 10000, "bondDenom": "stake" } } ``` ### REST A user can query the `staking` module using REST endpoints. #### DelegatorDelegations The `DelegatorDelegations` REST endpoint queries all delegations of a given delegator address.
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/staking/v1beta1/delegations/{delegatorAddr} ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/delegations/cosmos1vcs68xf2tnqes5tg0khr0vyevm40ff6zdxatp5" -H "accept: application/json" ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "delegation_responses": [ { "delegation": { "delegator_address": "cosmos1vcs68xf2tnqes5tg0khr0vyevm40ff6zdxatp5", "validator_address": "cosmosvaloper1quqxfrxkycr0uzt4yk0d57tcq3zk7srm7sm6r8", "shares": "256250000.000000000000000000" }, "balance": { "denom": "stake", "amount": "256250000" } }, { "delegation": { "delegator_address": "cosmos1vcs68xf2tnqes5tg0khr0vyevm40ff6zdxatp5", "validator_address": "cosmosvaloper194v8uwee2fvs2s8fa5k7j03ktwc87h5ym39jfv", "shares": "255150000.000000000000000000" }, "balance": { "denom": "stake", "amount": "255150000" } } ], "pagination": { "next_key": null, "total": "2" } } ``` #### Redelegations The `Redelegations` REST endpoint queries redelegations of given address. 
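List endpoints such as the one above return a `pagination.next_key`; passing it back as the `pagination.key` query parameter fetches the next page. A transport-agnostic sketch of the loop (`fetch` is a caller-supplied function that performs the HTTP GET and returns the decoded JSON body — it is not an SDK API):

```python
def collect_all_pages(fetch, path: str) -> list:
    """Accumulate every item from a paginated list endpoint by
    following pagination.next_key until it is exhausted."""
    items, key = [], None
    while True:
        page = fetch(path, key)
        # The list field name varies per endpoint (delegation_responses,
        # unbonding_responses, ...): collect every list-valued field.
        for value in page.values():
            if isinstance(value, list):
                items.extend(value)
        key = (page.get("pagination") or {}).get("next_key")
        if not key:
            return items

# Example with a stubbed two-page response:
pages = {
    None: {"delegation_responses": [{"balance": {"amount": "256250000"}}],
           "pagination": {"next_key": "FPTi7TKA", "total": "2"}},
    "FPTi7TKA": {"delegation_responses": [{"balance": {"amount": "255150000"}}],
                 "pagination": {"next_key": None, "total": "2"}},
}
print(len(collect_all_pages(lambda path, key: pages[key],
                            "/cosmos/staking/v1beta1/delegations/{delegatorAddr}")))  # 2
```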
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/staking/v1beta1/delegators/{delegatorAddr}/redelegations ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl -X GET \ "http://localhost:1317/cosmos/staking/v1beta1/delegators/cosmos1thfntksw0d35n2tkr0k8v54fr8wxtxwxl2c56e/redelegations?srcValidatorAddr=cosmosvaloper1lzhlnpahvznwfv4jmay2tgaha5kmz5qx4cuznf&dstValidatorAddr=cosmosvaloper1vq8tw77kp8lvxq9u3c8eeln9zymn68rng8pgt4" \ -H "accept: application/json" ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "redelegation_responses": [ { "redelegation": { "delegator_address": "cosmos1thfntksw0d35n2tkr0k8v54fr8wxtxwxl2c56e", "validator_src_address": "cosmosvaloper1lzhlnpahvznwfv4jmay2tgaha5kmz5qx4cuznf", "validator_dst_address": "cosmosvaloper1vq8tw77kp8lvxq9u3c8eeln9zymn68rng8pgt4", "entries": null }, "entries": [ { "redelegation_entry": { "creation_height": 151523, "completion_time": "2021-11-09T06:03:25.640682116Z", "initial_balance": "200000000", "shares_dst": "200000000.000000000000000000" }, "balance": "200000000" } ] } ], "pagination": null } ``` #### DelegatorUnbondingDelegations The `DelegatorUnbondingDelegations` REST endpoint queries all unbonding delegations of a given delegator address. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/staking/v1beta1/delegators/{delegatorAddr}/unbonding_delegations ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl -X GET \ "http://localhost:1317/cosmos/staking/v1beta1/delegators/cosmos1nxv42u3lv642q0fuzu2qmrku27zgut3n3z7lll/unbonding_delegations" \ -H "accept: application/json" ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "unbonding_responses": [ { "delegator_address": "cosmos1nxv42u3lv642q0fuzu2qmrku27zgut3n3z7lll", "validator_address": "cosmosvaloper1e7mvqlz50ch6gw4yjfemsc069wfre4qwmw53kq", "entries": [ { "creation_height": "2442278", "completion_time": "2021-10-12T10:59:03.797335857Z", "initial_balance": "50000000000", "balance": "50000000000" } ] } ], "pagination": { "next_key": null, "total": "1" } } ``` #### DelegatorValidators The `DelegatorValidators` REST endpoint queries information for all validators of a given delegator address. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/staking/v1beta1/delegators/{delegatorAddr}/validators ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl -X GET \ "http://localhost:1317/cosmos/staking/v1beta1/delegators/cosmos1xwazl8ftks4gn00y5x3c47auquc62ssune9ppv/validators" \ -H "accept: application/json" ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "validators": [ { "operator_address": "cosmosvaloper1xwazl8ftks4gn00y5x3c47auquc62ssuvynw64", "consensus_pubkey": { "@type": "/cosmos.crypto.ed25519.PubKey", "key": "5v4n3px3PkfNnKflSgepDnsMQR1hiNXnqOC11Y72/PQ=" }, "jailed": false, "status": "BOND_STATUS_BONDED", "tokens": "21592843799", "delegator_shares": "21592843799.000000000000000000", "description": { "moniker": "jabbey", "identity": "", "website": "https://twitter.com/JoeAbbey", "security_contact": "", "details": "just another dad in the cosmos" }, "unbonding_height": "0", "unbonding_time": "1970-01-01T00:00:00Z", "commission": { "commission_rates": { "rate": "0.100000000000000000", "max_rate": "0.200000000000000000", "max_change_rate": "0.100000000000000000" }, "update_time": "2021-10-09T19:03:54.984821705Z" }, "min_self_delegation": "1" } ], "pagination": { "next_key": null, "total": "1" } } ``` #### DelegatorValidator The `DelegatorValidator` REST endpoint queries validator information for a given delegator-validator pair. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/staking/v1beta1/delegators/{delegatorAddr}/validators/{validatorAddr} ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl -X GET \ "http://localhost:1317/cosmos/staking/v1beta1/delegators/cosmos1xwazl8ftks4gn00y5x3c47auquc62ssune9ppv/validators/cosmosvaloper1xwazl8ftks4gn00y5x3c47auquc62ssuvynw64" \ -H "accept: application/json" ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "validator": { "operator_address": "cosmosvaloper1xwazl8ftks4gn00y5x3c47auquc62ssuvynw64", "consensus_pubkey": { "@type": "/cosmos.crypto.ed25519.PubKey", "key": "5v4n3px3PkfNnKflSgepDnsMQR1hiNXnqOC11Y72/PQ=" }, "jailed": false, "status": "BOND_STATUS_BONDED", "tokens": "21592843799", "delegator_shares": "21592843799.000000000000000000", "description": { "moniker": "jabbey", "identity": "", "website": "https://twitter.com/JoeAbbey", "security_contact": "", "details": "just another dad in the cosmos" }, "unbonding_height": "0", "unbonding_time": "1970-01-01T00:00:00Z", "commission": { "commission_rates": { "rate": "0.100000000000000000", "max_rate": "0.200000000000000000", "max_change_rate": "0.100000000000000000" }, "update_time": "2021-10-09T19:03:54.984821705Z" }, "min_self_delegation": "1" } } ``` #### HistoricalInfo The `HistoricalInfo` REST endpoint queries the historical information for a given height. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/staking/v1beta1/historical_info/{height} ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/historical_info/153332" -H "accept: application/json" ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "hist": { "header": { "version": { "block": "11", "app": "0" }, "chain_id": "cosmos-1", "height": "153332", "time": "2021-10-12T09:05:35.062230221Z", "last_block_id": { "hash": "NX8HevR5khb7H6NGKva+jVz7cyf0skF1CrcY9A0s+d8=", "part_set_header": { "total": 1, "hash": "zLQ2FiKM5tooL3BInt+VVfgzjlBXfq0Hc8Iux/xrhdg=" } }, "last_commit_hash": "P6IJrK8vSqU3dGEyRHnAFocoDGja0bn9euLuy09s350=", "data_hash": "eUd+6acHWrNXYju8Js449RJ99lOYOs16KpqQl4SMrEM=", "validators_hash": "mB4pravvMsJKgi+g8aYdSeNlt0kPjnRFyvtAQtaxcfw=", "next_validators_hash": "mB4pravvMsJKgi+g8aYdSeNlt0kPjnRFyvtAQtaxcfw=", "consensus_hash": "BICRvH3cKD93v7+R1zxE2ljD34qcvIZ0Bdi389qtoi8=", "app_hash": "fuELArKRK+CptnZ8tu54h6xEleSWenHNmqC84W866fU=", "last_results_hash": "p/BPexV4LxAzlVcPRvW+lomgXb6Yze8YLIQUo/4Kdgc=", "evidence_hash": "47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=", "proposer_address": "G0MeY8xQx7ooOsni8KE/3R/Ib3Q=" }, "valset": [ { "operator_address": "cosmosvaloper196ax4vc0lwpxndu9dyhvca7jhxp70rmcqcnylw", "consensus_pubkey": { "@type": "/cosmos.crypto.ed25519.PubKey", "key": "/O7BtNW0pafwfvomgR4ZnfldwPXiFfJs9mHg3gwfv5Q=" }, "jailed": false, "status": "BOND_STATUS_BONDED", "tokens": "1416521659632", "delegator_shares": "1416521659632.000000000000000000", "description": { "moniker": "SG-1", "identity": "48608633F99D1B60", "website": "https://sg-1.online", "security_contact": "", "details": "SG-1 - your favorite validator on cosmos. We offer 100% Soft Slash protection." 
}, "unbonding_height": "0", "unbonding_time": "1970-01-01T00:00:00Z", "commission": { "commission_rates": { "rate": "0.037500000000000000", "max_rate": "0.200000000000000000", "max_change_rate": "0.030000000000000000" }, "update_time": "2021-10-01T15:00:00Z" }, "min_self_delegation": "1" }, { "operator_address": "cosmosvaloper1t8ehvswxjfn3ejzkjtntcyrqwvmvuknzmvtaaa", "consensus_pubkey": { "@type": "/cosmos.crypto.ed25519.PubKey", "key": "uExZyjNLtr2+FFIhNDAMcQ8+yTrqE7ygYTsI7khkA5Y=" }, "jailed": false, "status": "BOND_STATUS_BONDED", "tokens": "1348298958808", "delegator_shares": "1348298958808.000000000000000000", "description": { "moniker": "Cosmostation", "identity": "AE4C403A6E7AA1AC", "website": "https://www.cosmostation.io", "security_contact": "admin@stamper.network", "details": "Cosmostation validator node. Delegate your tokens and Start Earning Staking Rewards" }, "unbonding_height": "0", "unbonding_time": "1970-01-01T00:00:00Z", "commission": { "commission_rates": { "rate": "0.050000000000000000", "max_rate": "1.000000000000000000", "max_change_rate": "0.200000000000000000" }, "update_time": "2021-10-01T15:06:38.821314287Z" }, "min_self_delegation": "1" } ] } } ``` #### Parameters The `Parameters` REST endpoint queries the staking parameters. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/staking/v1beta1/params ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/params" -H "accept: application/json" ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "params": { "unbonding_time": "2419200s", "max_validators": 100, "max_entries": 7, "historical_entries": 10000, "bond_denom": "stake" } } ``` #### Pool The `Pool` REST endpoint queries the pool information. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/staking/v1beta1/pool ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/pool" -H "accept: application/json" ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "pool": { "not_bonded_tokens": "432805737458", "bonded_tokens": "15783637712645" } } ``` #### Validators The `Validators` REST endpoint queries all validators that match the given status. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/staking/v1beta1/validators ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/validators" -H "accept: application/json" ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "validators": [ { "operator_address": "cosmosvaloper1q3jsx9dpfhtyqqgetwpe5tmk8f0ms5qywje8tw", "consensus_pubkey": { "@type": "/cosmos.crypto.ed25519.PubKey", "key": "N7BPyek2aKuNZ0N/8YsrqSDhGZmgVaYUBuddY8pwKaE=" }, "jailed": false, "status": "BOND_STATUS_BONDED", "tokens": "383301887799", "delegator_shares": "383301887799.000000000000000000", "description": { "moniker": "SmartNodes", "identity": "D372724899D1EDC8", "website": "https://smartnodes.co", "security_contact": "", "details": "Earn Rewards with Crypto Staking & Node Deployment" }, "unbonding_height": "0", "unbonding_time": "1970-01-01T00:00:00Z", "commission": { "commission_rates": { "rate": "0.050000000000000000", "max_rate": "0.200000000000000000", "max_change_rate": "0.100000000000000000" }, "update_time": "2021-10-01T15:51:31.596618510Z" }, "min_self_delegation": "1" }, { "operator_address": 
"cosmosvaloper1q5ku90atkhktze83j9xjaks2p7uruag5zp6wt7", "consensus_pubkey": { "@type": "/cosmos.crypto.ed25519.PubKey", "key": "GDNpuKDmCg9GnhnsiU4fCWktuGUemjNfvpCZiqoRIYA=" }, "jailed": false, "status": "BOND_STATUS_UNBONDING", "tokens": "1017819654", "delegator_shares": "1017819654.000000000000000000", "description": { "moniker": "Noderunners", "identity": "812E82D12FEA3493", "website": "http://noderunners.biz", "security_contact": "info@noderunners.biz", "details": "Noderunners is a professional validator in POS networks. We have a huge node running experience, reliable soft and hardware. Our commissions are always low, our support to delegators is always full. Stake with us and start receiving your cosmos rewards now!" }, "unbonding_height": "147302", "unbonding_time": "2021-11-08T22:58:53.718662452Z", "commission": { "commission_rates": { "rate": "0.050000000000000000", "max_rate": "0.200000000000000000", "max_change_rate": "0.100000000000000000" }, "update_time": "2021-10-04T18:02:21.446645619Z" }, "min_self_delegation": "1" } ], "pagination": { "next_key": "FONDBFkE4tEEf7yxWWKOD49jC2NK", "total": "2" } } ``` #### Validator The `Validator` REST endpoint queries validator information for a given validator address. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/staking/v1beta1/validators/{validatorAddr} ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl -X GET \ "http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q" \ -H "accept: application/json" ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "validator": { "operator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", "consensus_pubkey": { "@type": "/cosmos.crypto.ed25519.PubKey", "key": "sIiexdJdYWn27+7iUHQJDnkp63gq/rzUq1Y+fxoGjXc=" }, "jailed": false, "status": "BOND_STATUS_BONDED", "tokens": "33027900000", "delegator_shares": "33027900000.000000000000000000", "description": { "moniker": "Witval", "identity": "51468B615127273A", "website": "", "security_contact": "", "details": "Witval is the validator arm from Vitwit. Vitwit is into software consulting and services business since 2015. We are working closely with Cosmos ecosystem since 2018. We are also building tools for the ecosystem, Aneka is our explorer for the cosmos ecosystem." }, "unbonding_height": "0", "unbonding_time": "1970-01-01T00:00:00Z", "commission": { "commission_rates": { "rate": "0.050000000000000000", "max_rate": "0.200000000000000000", "max_change_rate": "0.020000000000000000" }, "update_time": "2021-10-01T19:24:52.663191049Z" }, "min_self_delegation": "1" } } ``` #### ValidatorDelegations The `ValidatorDelegations` REST endpoint queries delegation information for a given validator. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/staking/v1beta1/validators/{validatorAddr}/delegations ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q/delegations" -H "accept: application/json" ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "delegation_responses": [ { "delegation": { "delegator_address": "cosmos190g5j8aszqhvtg7cprmev8xcxs6csra7xnk3n3", "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", "shares": "31000000000.000000000000000000" }, "balance": { "denom": "stake", "amount": "31000000000" } }, { "delegation": { "delegator_address": "cosmos1ddle9tczl87gsvmeva3c48nenyng4n56qwq4ee", "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", "shares": "628470000.000000000000000000" }, "balance": { "denom": "stake", "amount": "628470000" } }, { "delegation": { "delegator_address": "cosmos10fdvkczl76m040smd33lh9xn9j0cf26kk4s2nw", "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", "shares": "838120000.000000000000000000" }, "balance": { "denom": "stake", "amount": "838120000" } }, { "delegation": { "delegator_address": "cosmos1n8f5fknsv2yt7a8u6nrx30zqy7lu9jfm0t5lq8", "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", "shares": "500000000.000000000000000000" }, "balance": { "denom": "stake", "amount": "500000000" } }, { "delegation": { "delegator_address": "cosmos16msryt3fqlxtvsy8u5ay7wv2p8mglfg9hrek2e", "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", "shares": "61310000.000000000000000000" }, "balance": { "denom": "stake", "amount": "61310000" } } ], "pagination": { "next_key": null, "total": "5" } } ``` #### 
Delegation The `Delegation` REST endpoint queries delegation information for a given validator-delegator pair. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/staking/v1beta1/validators/{validatorAddr}/delegations/{delegatorAddr} ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl -X GET \ "http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q/delegations/cosmos1n8f5fknsv2yt7a8u6nrx30zqy7lu9jfm0t5lq8" \ -H "accept: application/json" ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "delegation_response": { "delegation": { "delegator_address": "cosmos1n8f5fknsv2yt7a8u6nrx30zqy7lu9jfm0t5lq8", "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q", "shares": "500000000.000000000000000000" }, "balance": { "denom": "stake", "amount": "500000000" } } } ``` #### UnbondingDelegation The `UnbondingDelegation` REST endpoint queries unbonding information for a given validator-delegator pair. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/staking/v1beta1/validators/{validatorAddr}/delegations/{delegatorAddr}/unbonding_delegation ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl -X GET \ "http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu/delegations/cosmos1ze2ye5u5k3qdlexvt2e0nn0508p04094ya0qpm/unbonding_delegation" \ -H "accept: application/json" ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "unbond": { "delegator_address": "cosmos1ze2ye5u5k3qdlexvt2e0nn0508p04094ya0qpm", "validator_address": "cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu", "entries": [ { "creation_height": "153687", "completion_time": "2021-11-09T09:41:18.352401903Z", "initial_balance": "525111", "balance": "525111" } ] } } ``` #### ValidatorUnbondingDelegations The `ValidatorUnbondingDelegations` REST endpoint queries unbonding delegations of a validator. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/staking/v1beta1/validators/{validatorAddr}/unbonding_delegations ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl -X GET \ "http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu/unbonding_delegations" \ -H "accept: application/json" ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "unbonding_responses": [ { "delegator_address": "cosmos1q9snn84jfrd9ge8t46kdcggpe58dua82vnj7uy", "validator_address": "cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu", "entries": [ { "creation_height": "90998", "completion_time": "2021-11-05T00:14:37.005841058Z", "initial_balance": "24000000", "balance": "24000000" } ] }, { "delegator_address": "cosmos1qf36e6wmq9h4twhdvs6pyq9qcaeu7ye0s3dqq2", "validator_address": "cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu", "entries": [ { "creation_height": "47478", "completion_time": "2021-11-01T22:47:26.714116854Z", "initial_balance": "8000000", "balance": "8000000" } ] } ], "pagination": { "next_key": null, "total": "2" } } ``` # x/upgrade Source: https://docs.cosmos.network/sdk/latest/modules/upgrade/README ## Abstract `x/upgrade` is an implementation of a Cosmos SDK module that facilitates smoothly upgrading a live Cosmos chain to a new (breaking) software version. It accomplishes this by providing a `PreBlocker` hook that prevents the blockchain state machine from proceeding once a pre-defined upgrade block height has been reached. The module does not prescribe anything regarding how governance decides to do an upgrade, but just the mechanism for coordinating the upgrade safely. 
Without software support for upgrades, upgrading a live chain is risky because all of the validators need to pause their state machines at exactly the same point in the process. If this is not done correctly, there can be state inconsistencies which are hard to recover from. * [Concepts](#concepts) * [State](#state) * [Events](#events) * [Client](#client) * [CLI](#cli) * [REST](#rest) * [gRPC](#grpc) * [Resources](#resources) ## Concepts ### Plan The `x/upgrade` module defines a `Plan` type that specifies when a live upgrade is scheduled to occur. A `Plan` can be scheduled at a specific block height. A `Plan` is created once a (frozen) release candidate along with an appropriate upgrade `Handler` (see below) is agreed upon, where the `Name` of a `Plan` corresponds to a specific `Handler`. Typically, a `Plan` is created through a governance proposal process; if the proposal is voted upon and passes, the upgrade is scheduled. The `Info` of a `Plan` may contain various metadata about the upgrade, typically application-specific upgrade info to be included on-chain, such as a git commit that validators could automatically upgrade to. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type Plan struct { Name string Height int64 Info string } ``` #### Sidecar Process If an operator running the application binary also runs a sidecar process to assist in the automatic download and upgrade of a binary, the `Info` allows this process to be seamless. This tool is [Cosmovisor](https://github.com/cosmos/cosmos-sdk/tree/main/tools/cosmovisor#readme). ### Handler The `x/upgrade` module facilitates upgrading from major version X to major version Y. To accomplish this, node operators must first upgrade their current binary to a new binary that has a corresponding `Handler` for the new version Y. It is assumed that this version has been fully tested and approved by the community at large. 
This `Handler` defines what state migrations need to occur before the new binary Y can successfully run the chain. Naturally, this `Handler` is application-specific and not defined on a per-module basis. Registering a `Handler` is done via `Keeper#SetUpgradeHandler` in the application. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type UpgradeHandler func(Context, Plan, VersionMap) (VersionMap, error) ``` During each `EndBlock` execution, the `x/upgrade` module checks if there exists a `Plan` that should execute (is scheduled at that height). If so, the corresponding `Handler` is executed. If the `Plan` is expected to execute but no `Handler` is registered, or if the binary was upgraded too early, the node will gracefully panic and exit. ### StoreLoader The `x/upgrade` module also facilitates store migrations as part of the upgrade. The `StoreLoader` sets the migrations that need to occur before the new binary can successfully run the chain. This `StoreLoader` is also application-specific and not defined on a per-module basis. Registering this `StoreLoader` is done via `app#SetStoreLoader` in the application. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func UpgradeStoreLoader(upgradeHeight int64, storeUpgrades *store.StoreUpgrades) baseapp.StoreLoader ``` If there's a planned upgrade and the upgrade height is reached, the old binary writes the `Plan` to disk before panicking. This information is critical to ensure the `StoreUpgrades` happen smoothly at the correct height for the expected upgrade, and it prevents the new binary from executing `StoreUpgrades` again on every restart. Also, if there are multiple upgrades planned at the same height, the `Name` will ensure these `StoreUpgrades` take place only in the planned upgrade handler. 
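The `Handler` lookup-and-dispatch flow described above can be sketched in plain Go. The `Plan`, `VersionMap`, `SetUpgradeHandler`, and `applyUpgrade` names below are simplified hypothetical stand-ins, not the SDK's actual keeper API: the real `UpgradeHandler` also receives a `Context`, and registration goes through `Keeper#SetUpgradeHandler` on the application's upgrade keeper.

```go
package main

import "fmt"

// Simplified stand-ins for the SDK types.
type Plan struct {
	Name   string
	Height int64
}

// VersionMap maps module names to their consensus versions.
type VersionMap map[string]uint64

type UpgradeHandler func(plan Plan, vm VersionMap) (VersionMap, error)

// One handler per plan name, mirroring Keeper#SetUpgradeHandler.
var handlers = map[string]UpgradeHandler{}

func SetUpgradeHandler(name string, h UpgradeHandler) { handlers[name] = h }

// applyUpgrade mirrors the per-block check: if a Plan is scheduled at this
// height, run its handler to migrate the version map; panic if no handler
// was registered for the plan's name.
func applyUpgrade(plan Plan, height int64, vm VersionMap) VersionMap {
	if plan.Height != height {
		return vm // upgrade not scheduled for this height; nothing to do
	}
	h, ok := handlers[plan.Name]
	if !ok {
		panic(fmt.Sprintf("upgrade %q needed at height %d: no handler registered", plan.Name, height))
	}
	newVM, err := h(plan, vm)
	if err != nil {
		panic(err)
	}
	return newVM
}

func main() {
	SetUpgradeHandler("v2", func(plan Plan, vm VersionMap) (VersionMap, error) {
		vm["bank"] = 2 // pretend migration bumping bank's consensus version
		return vm, nil
	})
	fmt.Println(applyUpgrade(Plan{Name: "v2", Height: 100}, 100, VersionMap{"bank": 1}))
}
```

This is also why upgrading the binary too early fails: a node running the new binary before the scheduled height would find a handler with no matching scheduled `Plan` state, while a node reaching the height without the new binary panics on the missing handler.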
### Proposal Typically, a `Plan` is proposed and submitted through governance via a proposal containing a `MsgSoftwareUpgrade` message. This proposal follows the standard governance process. If the proposal passes, the `Plan`, which targets a specific `Handler`, is persisted and scheduled. The upgrade can be delayed or hastened by updating the `Plan.Height` in a new proposal. ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/upgrade/v1beta1/tx.proto#L29-L41 ``` #### Cancelling Upgrade Proposals Upgrade proposals can be cancelled. There exists a gov-enabled `MsgCancelUpgrade` message type, which can be embedded in a proposal, voted on and, if passed, will remove the scheduled upgrade `Plan`. Of course, this requires that the upgrade was known to be a bad idea well before the upgrade itself, to allow time for a vote. ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Reference: https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/upgrade/v1beta1/tx.proto#L48-L57 ``` If such a possibility is desired, the upgrade height should be set at least `2 * (VotingPeriod + DepositPeriod) + SafetyDelta` after the beginning of the upgrade proposal. The `SafetyDelta` is the time available between the passage of an upgrade proposal and the realization that it was a bad idea (due to external social consensus). For example, with a 14-day `VotingPeriod`, a 14-day `DepositPeriod`, and a 7-day `SafetyDelta`, the upgrade height should be at least 63 days' worth of blocks after the proposal is submitted. A `MsgCancelUpgrade` proposal can also be made while the original `MsgSoftwareUpgrade` proposal is still being voted upon, as long as the `VotingPeriod` ends after the `MsgSoftwareUpgrade` proposal. ## State The internal state of the `x/upgrade` module is relatively minimal and simple. The state contains the currently active upgrade `Plan` (if one exists) under key `0x0`, and whether a `Plan` is marked as "done" under key `0x1`. The state contains the consensus versions of all app modules in the application. 
The versions are stored as big-endian `uint64`, and can be accessed under the prefix `0x2` followed by the corresponding module name of type `string`. The state also maintains a `Protocol Version`, which can be accessed under key `0x3`. * Plan: `0x0 -> Plan` * Done: `0x1 | byte(plan name) -> BigEndian(Block Height)` * ConsensusVersion: `0x2 | byte(module name) -> BigEndian(Module Consensus Version)` * ProtocolVersion: `0x3 -> BigEndian(Protocol Version)` The `x/upgrade` module contains no genesis state. ## Events The `x/upgrade` module does not emit any events by itself. All proposal-related events are emitted through the `x/gov` module. ## Client ### CLI A user can query and interact with the `upgrade` module using the CLI. #### Query The `query` commands allow users to query `upgrade` state. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query upgrade --help ``` ##### applied The `applied` command allows users to query the block header for the height at which a completed upgrade was applied. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query upgrade applied [upgrade-name] [flags] ``` If `upgrade-name` was previously executed on the chain, this returns the header for the block at which it was applied. This helps a client determine which binary was valid over a given range of blocks, as well as providing more context to understand past migrations. 
Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query upgrade applied "test-upgrade" ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} "block_id": { "hash": "A769136351786B9034A5F196DC53F7E50FCEB53B48FA0786E1BFC45A0BB646B5", "parts": { "total": 1, "hash": "B13CBD23011C7480E6F11BE4594EE316548648E6A666B3575409F8F16EC6939E" } }, "block_size": "7213", "header": { "version": { "block": "11" }, "chain_id": "testnet-2", "height": "455200", "time": "2021-04-10T04:37:57.085493838Z", "last_block_id": { "hash": "0E8AD9309C2DC411DF98217AF59E044A0E1CCEAE7C0338417A70338DF50F4783", "parts": { "total": 1, "hash": "8FE572A48CD10BC2CBB02653CA04CA247A0F6830FF19DC972F64D339A355E77D" } }, "last_commit_hash": "DE890239416A19E6164C2076B837CC1D7F7822FC214F305616725F11D2533140", "data_hash": "E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855", "validators_hash": "A31047ADE54AE9072EE2A12FF260A8990BA4C39F903EAF5636B50D58DBA72582", "next_validators_hash": "A31047ADE54AE9072EE2A12FF260A8990BA4C39F903EAF5636B50D58DBA72582", "consensus_hash": "048091BC7DDC283F77BFBF91D73C44DA58C3DF8A9CBC867405D8B7F3DAADA22F", "app_hash": "28ECC486AFC332BA6CC976706DBDE87E7D32441375E3F10FD084CD4BAF0DA021", "last_results_hash": "E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855", "evidence_hash": "E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855", "proposer_address": "2ABC4854B1A1C5AA8403C4EA853A81ACA901CC76" }, "num_txs": "0" } ``` ##### module versions The `module_versions` command gets a list of module names and their respective consensus versions. Following the command with a specific module name will return only that module's information. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query upgrade module_versions [optional module_name] [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query upgrade module_versions ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} module_versions: - name: auth version: "2" - name: authz version: "1" - name: bank version: "2" - name: distribution version: "2" - name: evidence version: "1" - name: feegrant version: "1" - name: genutil version: "1" - name: gov version: "2" - name: ibc version: "2" - name: mint version: "1" - name: params version: "1" - name: slashing version: "2" - name: staking version: "2" - name: transfer version: "1" - name: upgrade version: "1" - name: vesting version: "1" ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query upgrade module_versions ibc ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} module_versions: - name: ibc version: "2" ``` ##### plan The `plan` command gets the currently scheduled upgrade plan, if one exists. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query upgrade plan [flags] ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query upgrade plan ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} height: "130" info: "" name: test-upgrade time: "0001-01-01T00:00:00Z" upgraded_client_state: null ``` #### Transactions The upgrade module supports the following transactions: * `software-upgrade` - submits an upgrade proposal: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx upgrade software-upgrade v2 --title="Test Proposal" --summary="testing" --deposit="100000000stake" --upgrade-height 1000000 \ --upgrade-info '{ "binaries": { "linux/amd64":"https://example.com/simd.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f" } }' --from cosmos1.. ``` * `cancel-software-upgrade` - cancels a previously submitted upgrade proposal: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx upgrade cancel-software-upgrade --title="Test Proposal" --summary="testing" --deposit="100000000stake" --from cosmos1.. ``` ### REST A user can query the `upgrade` module using REST endpoints. #### Applied Plan `AppliedPlan` queries a previously applied upgrade plan by its name. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/upgrade/v1beta1/applied_plan/{name} ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl -X GET "http://localhost:1317/cosmos/upgrade/v1beta1/applied_plan/v2.0-upgrade" -H "accept: application/json" ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "height": "30" } ``` #### Current Plan `CurrentPlan` queries the current upgrade plan. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/upgrade/v1beta1/current_plan ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl -X GET "http://localhost:1317/cosmos/upgrade/v1beta1/current_plan" -H "accept: application/json" ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "plan": "v2.1-upgrade" } ``` #### Module versions `ModuleVersions` queries the list of module versions from state. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} /cosmos/upgrade/v1beta1/module_versions ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl -X GET "http://localhost:1317/cosmos/upgrade/v1beta1/module_versions" -H "accept: application/json" ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "module_versions": [ { "name": "auth", "version": "2" }, { "name": "authz", "version": "1" }, { "name": "bank", "version": "2" }, { "name": "distribution", "version": "2" }, { "name": "evidence", "version": "1" }, { "name": "feegrant", "version": "1" }, { "name": "genutil", "version": "1" }, { "name": "gov", "version": "2" }, { "name": "ibc", "version": "2" }, { "name": "mint", "version": "1" }, { "name": "params", "version": "1" }, { "name": "slashing", "version": "2" }, { "name": "staking", "version": "2" }, { "name": "transfer", "version": "1" }, { "name": "upgrade", "version": "1" }, { "name": "vesting", "version": "1" } ] } ``` ### gRPC A user can query the `upgrade` module using gRPC endpoints. #### Applied Plan `AppliedPlan` queries a previously applied upgrade plan by its name. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.upgrade.v1beta1.Query/AppliedPlan ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"name":"v2.0-upgrade"}' \ localhost:9090 \ cosmos.upgrade.v1beta1.Query/AppliedPlan ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "height": "30" } ``` #### Current Plan `CurrentPlan` queries the current upgrade plan. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.upgrade.v1beta1.Query/CurrentPlan ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext localhost:9090 cosmos.upgrade.v1beta1.Query/CurrentPlan ``` Example Output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "plan": "v2.1-upgrade" } ``` #### Module versions `ModuleVersions` queries the list of module versions from state. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cosmos.upgrade.v1beta1.Query/ModuleVersions ``` Example: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext localhost:9090 cosmos.upgrade.v1beta1.Query/ModuleVersions ``` Example Output: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "module_versions": [ { "name": "auth", "version": "2" }, { "name": "authz", "version": "1" }, { "name": "bank", "version": "2" }, { "name": "distribution", "version": "2" }, { "name": "evidence", "version": "1" }, { "name": "feegrant", "version": "1" }, { "name": "genutil", "version": "1" }, { "name": "gov", "version": "2" }, { "name": "ibc", "version": "2" }, { "name": "mint", "version": "1" }, { "name": "params", "version": "1" }, { "name": "slashing", "version": "2" }, { "name": "staking", "version": "2" }, { "name": "transfer", "version": "1" }, { "name": "upgrade", "version": "1" }, { "name": "vesting", "version": "1" } ] } ``` ## Resources A list of (external) resources to learn more about the `x/upgrade` module. * [Cosmos Dev Series: Cosmos Blockchain Upgrade](https://medium.com/web3-surfers/cosmos-dev-series-cosmos-sdk-based-blockchain-upgrade-b5e99181554c) - The blog post that explains how software upgrades work in detail.
# Interacting with a Node Source: https://docs.cosmos.network/sdk/latest/node/interact-node **Synopsis** There are multiple ways to interact with a node: using the CLI, gRPC, or REST endpoints. **Prerequisite Readings** * [gRPC, REST and CometBFT Endpoints](/sdk/latest/learn/concepts/cli-grpc-rest) * [Running a Node](/sdk/latest/node/run-node) ## Using the CLI Now that your chain is running, it is time to try sending tokens from the first account you created to a second account. In a new terminal window, start by running the following query command: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd query bank balances $MY_VALIDATOR_ADDRESS ``` You should see the current balance of the account you created, equal to the original balance of `stake` you granted it minus the amount you delegated via the `gentx`. Now, create a second account: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd keys add recipient --keyring-backend test # Put the generated address in a variable for later use. RECIPIENT=$(simd keys show recipient -a --keyring-backend test) ``` The command above creates a local key-pair that is not yet registered on the chain. An account is created the first time it receives tokens from another account. Now, run the following command to send tokens to the `recipient` account: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx bank send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000000stake --chain-id my-test-chain --keyring-backend test # Check that the recipient account did receive the tokens. 
simd query bank balances $RECIPIENT ``` Add the `-y` or `--yes` flag to skip the confirmation prompt, which is useful for scripts and automation: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx bank send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000000stake --chain-id my-test-chain --keyring-backend test -y ``` Finally, delegate some of the stake tokens sent to the `recipient` account to the validator: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx staking delegate $(simd keys show my_validator --bech val -a --keyring-backend test) 500stake --from recipient --chain-id my-test-chain --keyring-backend test # Query the total delegations to `validator`. simd query staking delegations-to $(simd keys show my_validator --bech val -a --keyring-backend test) ``` You should see two delegations: the first one made from the `gentx`, and the second one you just performed from the `recipient` account. ## Using gRPC The Protobuf ecosystem developed tools for different use cases, including code generation from `*.proto` files into various languages. These tools make it easy to build clients. Often, the client connection (i.e. the transport) can be plugged in and replaced easily. This section explores one of the most popular transports: [gRPC](/sdk/latest/learn/concepts/cli-grpc-rest). Since the code generation library largely depends on your own tech stack, three alternatives are presented: * `grpcurl` for generic debugging and testing, * programmatically via Go, * CosmJS for JavaScript/TypeScript developers. ### grpcurl [grpcurl](https://github.com/fullstorydev/grpcurl) is like `curl` but for gRPC. It is also available as a Go library, but this tutorial uses it only as a CLI command for debugging and testing purposes. Follow the instructions in the previous link to install it.
Assuming you have a local node running (either a localnet, or connected to a live network), you should be able to run the following command to list the Protobuf services available (you can replace `localhost:9090` with the gRPC server endpoint of another node, which is configured under the `grpc.address` field inside [`app.toml`](/sdk/latest/node/run-node#configuring-the-node-using-apptoml-and-configtoml)): ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext localhost:9090 list ``` You should see a list of gRPC services, like `cosmos.bank.v1beta1.Query`. This is called reflection, which is a Protobuf endpoint returning a description of all available endpoints. Each of these represents a different Protobuf service, and each service exposes multiple RPC methods you can query against. In order to get a description of the service you can run the following command: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ localhost:9090 \ describe cosmos.bank.v1beta1.Query # Service we want to inspect ``` It's also possible to execute an RPC call to query the node for information: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl \ -plaintext \ -d "{\"address\":\"$MY_VALIDATOR_ADDRESS\"}" \ localhost:9090 \ cosmos.bank.v1beta1.Query/AllBalances ``` The list of all available gRPC query endpoints is [coming soon](https://github.com/cosmos/cosmos-sdk/issues/7786). #### Query for historical state using grpcurl You may also query for historical data by passing some [gRPC metadata](https://github.com/grpc/grpc-go/blob/master/Documentation/grpc-metadata.md) to the query: the `x-cosmos-block-height` metadata should contain the block to query. 
Using grpcurl as above, the command looks like: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl \ -plaintext \ -H "x-cosmos-block-height: 123" \ -d "{\"address\":\"$MY_VALIDATOR_ADDRESS\"}" \ localhost:9090 \ cosmos.bank.v1beta1.Query/AllBalances ``` Assuming the state at that block has not yet been pruned by the node, this query should return a non-empty response. ### Programmatically via Go The following snippet shows how to query the state using gRPC inside a Go program. The idea is to create a gRPC connection, and use the Protobuf-generated client code to query the gRPC server. #### Install Cosmos SDK ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} go get github.com/cosmos/cosmos-sdk@main ``` ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} package main import ( "context" "fmt" "google.golang.org/grpc" "github.com/cosmos/cosmos-sdk/codec" sdk "github.com/cosmos/cosmos-sdk/types" banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" ) func queryState() error { myAddress, err := sdk.AccAddressFromBech32("cosmos1...") // the my_validator or recipient address. if err != nil { return err } // Create a connection to the gRPC server. grpcConn, err := grpc.Dial( "127.0.0.1:9090", // your gRPC server address. grpc.WithInsecure(), // The Cosmos SDK doesn't support any transport security mechanisms. // This instantiates a general gRPC codec which handles proto bytes. We pass in a nil interface registry // if the request/response types contain an interface instead of 'nil' you should pass the application specific codec. grpc.WithDefaultCallOptions(grpc.ForceCodec(codec.NewProtoCodec(nil).GRPCCodec())), ) if err != nil { return err } defer grpcConn.Close() // This creates a gRPC client to query the x/bank service. 
bankClient := banktypes.NewQueryClient(grpcConn) bankRes, err := bankClient.Balance( context.Background(), &banktypes.QueryBalanceRequest{ Address: myAddress.String(), Denom: "stake" }, ) if err != nil { return err } fmt.Println(bankRes.GetBalance()) // Prints the account balance return nil } func main() { if err := queryState(); err != nil { panic(err) } } ``` You can replace the query client (here we are using `x/bank`'s) with one generated from any other Protobuf service. The list of all available gRPC query endpoints is [coming soon](https://github.com/cosmos/cosmos-sdk/issues/7786). #### Query for historical state using Go Querying for historical blocks is done by adding the block height metadata in the gRPC request. ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} package main import ( "context" "fmt" "google.golang.org/grpc" "google.golang.org/grpc/metadata" "github.com/cosmos/cosmos-sdk/codec" sdk "github.com/cosmos/cosmos-sdk/types" grpctypes "github.com/cosmos/cosmos-sdk/types/grpc" banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" ) func queryState() error { myAddress, err := sdk.AccAddressFromBech32("cosmos1yerherx4d43gj5wa3zl5vflj9d4pln42n7kuzu") // the my_validator or recipient address. if err != nil { return err } // Create a connection to the gRPC server. grpcConn, err := grpc.Dial( "127.0.0.1:9090", // your gRPC server address. grpc.WithInsecure(), // The Cosmos SDK doesn't support any transport security mechanisms. // This instantiates a general gRPC codec which handles proto bytes. We pass in a nil interface registry // if the request/response types contain an interface instead of 'nil' you should pass the application specific codec. grpc.WithDefaultCallOptions(grpc.ForceCodec(codec.NewProtoCodec(nil).GRPCCodec())), ) if err != nil { return err } defer grpcConn.Close() // This creates a gRPC client to query the x/bank service. 
bankClient := banktypes.NewQueryClient(grpcConn) var header metadata.MD _, err = bankClient.Balance( metadata.AppendToOutgoingContext(context.Background(), grpctypes.GRPCBlockHeightHeader, "12"), // Add metadata to request &banktypes.QueryBalanceRequest{ Address: myAddress.String(), Denom: "stake" }, grpc.Header(&header), // Retrieve header from response ) if err != nil { return err } blockHeight := header.Get(grpctypes.GRPCBlockHeightHeader) fmt.Println(blockHeight) // Prints the block height (12) return nil } func main() { if err := queryState(); err != nil { panic(err) } } ``` ### CosmJS CosmJS documentation can be found at [Link](https://cosmos.github.io/cosmjs). ## Using the REST Endpoints As described in the [gRPC guide](/sdk/latest/learn/concepts/cli-grpc-rest), all gRPC services on the Cosmos SDK are made available for more convenient REST-based queries through gRPC-gateway. The format of the URL path is based on the Protobuf service method's full-qualified name, but may contain small customizations so that final URLs look more idiomatic. For example, the REST endpoint for the `cosmos.bank.v1beta1.Query/AllBalances` method is `GET /cosmos/bank/v1beta1/balances/{address}`. Request arguments are passed as query parameters. Note that the REST endpoints are not enabled by default. To enable them, edit the `api` section of your `~/.simapp/config/app.toml` file: ```toml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} # Enable defines if the API server should be enabled. enable = true ``` After enabling the API, you must restart your node for the changes to take effect. Stop the node with `Ctrl+C` and run `simd start` again. 
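The REST path mapping described above can be sketched without any SDK dependencies. The snippet below is a minimal illustration, not the SDK's own client code: it only shows how the gRPC-gateway URL for `cosmos.bank.v1beta1.Query/AllBalances` is assembled (the address and port are placeholders; with a running node, the resulting URL could be fetched with `http.Get` and the JSON `balances` array decoded from the response body):

```go
package main

import "fmt"

// balancesPath builds the gRPC-gateway REST path for the
// cosmos.bank.v1beta1.Query/AllBalances method. The mapping between the
// Protobuf method and this path is defined by the module's google.api.http
// annotations.
func balancesPath(base, address string) string {
	return fmt.Sprintf("%s/cosmos/bank/v1beta1/balances/%s", base, address)
}

func main() {
	// Prints the URL a REST client would GET for the given account.
	fmt.Println(balancesPath("http://localhost:1317", "cosmos1..."))
}
```

Other query methods follow the same pattern: the service's HTTP annotation determines the path, and request fields either appear in the path (like the address here) or as query parameters.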
As a concrete example, the `curl` command to make a balances request is: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl \ -X GET \ -H "Content-Type: application/json" \ http://localhost:1317/cosmos/bank/v1beta1/balances/$MY_VALIDATOR_ADDRESS ``` Make sure to replace `localhost:1317` with the REST endpoint of your node, configured under the `api.address` field. The list of all available REST endpoints is available as a Swagger specification file, which can be viewed at `localhost:1317/swagger`. Make sure that the `api.swagger` field is set to `true` in your [`app.toml`](/sdk/latest/node/run-node#configuring-the-node-using-apptoml-and-configtoml) file. ### Query for historical state using REST Querying for historical state is done using the HTTP header `x-cosmos-block-height`. For example, a curl command would look like: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl \ -X GET \ -H "Content-Type: application/json" \ -H "x-cosmos-block-height: 123" \ http://localhost:1317/cosmos/bank/v1beta1/balances/$MY_VALIDATOR_ADDRESS ``` Assuming the state at that block has not yet been pruned by the node, this query should return a non-empty response. ### Cross-Origin Resource Sharing (CORS) [CORS policies](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) are not enabled by default to help with security. If you would like to use the REST server in a public environment, we recommend you set up a reverse proxy, which can be done with [nginx](https://www.nginx.com/). For testing and development purposes, there is an `enabled-unsafe-cors` field inside [`app.toml`](/sdk/latest/node/run-node#configuring-the-node-using-apptoml-and-configtoml). ## Congratulations! You have successfully interacted with your Cosmos SDK node using the CLI, gRPC, and REST endpoints. You can now query state and submit transactions through multiple interfaces.
## Next steps * [Generate and sign transactions](/sdk/latest/node/txs) to learn manual transaction workflows * Explore the [gRPC and REST guide](/sdk/latest/learn/concepts/cli-grpc-rest) for more advanced querying techniques # Setting up the keyring Source: https://docs.cosmos.network/sdk/latest/node/keyring **Prerequisite Readings** * [Prerequisites](/sdk/latest/node/prerequisites) - Set up Go and build the `simd` binary The keyring holds the private/public key pairs used to interact with a node. A validator key needs to be set up before running the blockchain node so that blocks can be correctly signed. ## Create a key 1. Create a new key for your validator: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd keys add my_validator --keyring-backend test ``` 2. Store the address for later use: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} MY_VALIDATOR_ADDRESS=$(simd keys show my_validator -a --keyring-backend test) ``` This generates a 24-word mnemonic phrase and stores your key. **Save the mnemonic** if you'll use this key for value-bearing tokens. This tutorial uses the `test` backend (unencrypted, for testing only). For production, use the `os` backend which integrates with your system's secure keyring. See [keyring backends](#reference:-keyring-backends) below for more information. ## Next steps You have just created your first key. The keyring is now ready to manage keys for interacting with your blockchain node. If you are running through this tutorial as a test, continue to [Run a node](/sdk/latest/node/run-node) to initialize your blockchain and start your node. For more information on the keyring and its various backends, continue reading below. ## Reference: Keyring backends The Cosmos SDK keyring supports multiple storage backends. The private key can be stored in different locations such as a file or the operating system's own key storage. 
### The `os` backend The `os` backend relies on operating system-specific defaults to handle key storage securely. Typically, an operating system's credential subsystem handles password prompts, private key storage, and user sessions according to the user's password policies. Here is a list of the most popular operating systems and their respective password managers: * macOS: [Keychain](https://support.apple.com/en-gb/guide/keychain-access/welcome/mac) * Windows: [Credentials Management API](https://docs.microsoft.com/en-us/windows/win32/secauthn/credentials-management) * GNU/Linux: * [libsecret](https://gitlab.gnome.org/GNOME/libsecret) * [kwallet](https://api.kde.org/kwallet-index.html) * [keyctl](https://www.kernel.org/doc/html/latest/security/keys/core.html) GNU/Linux distributions that use GNOME as the default desktop environment typically come with [Seahorse](https://wiki.gnome.org/Apps/Seahorse). Users of KDE-based distributions are commonly provided with [KDE Wallet Manager](https://userbase.kde.org/KDE_Wallet_Manager). While the former is in fact a convenient frontend for `libsecret`, the latter is a `kwallet` client. `keyctl` is a secure backend that leverages the Linux kernel's key management facility to store cryptographic keys securely in memory. `os` is the default option since operating systems' default credential managers are designed to meet users' most common needs and provide them with a comfortable experience without compromising on security. The recommended backends for headless environments are `file` and `pass`. ### The `file` backend The `file` backend more closely resembles the keybase implementation used prior to v0.38.1. It stores the keyring encrypted within the app's configuration directory. This keyring will request a password each time it is accessed, which may occur multiple times in a single command, resulting in repeated password prompts.
If using bash scripts to execute commands using the `file` option, you may want to utilize the following format for multiple prompts: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} # assuming that KEYPASSWD is set in the environment $ gaiacli config keyring-backend file # use file backend $ (echo $KEYPASSWD; echo $KEYPASSWD) | gaiacli keys add me # multiple prompts $ echo $KEYPASSWD | gaiacli keys show me # single prompt ``` The first time you add a key to an empty keyring, you will be prompted to type the password twice. ### The `pass` backend The `pass` backend uses the [pass](https://www.passwordstore.org/) utility to manage on-disk encryption of keys' sensitive data and metadata. Keys are stored inside `gpg`-encrypted files within app-specific directories. `pass` is available for the most popular UNIX operating systems as well as GNU/Linux distributions. Please refer to its manual page for information on how to download and install it. **pass** uses [GnuPG](https://gnupg.org/) for encryption. `gpg` automatically invokes the `gpg-agent` daemon upon execution, which handles the caching of GnuPG credentials. Please refer to the `gpg-agent` man page for more information on how to configure cache parameters such as credentials TTL and passphrase expiration. The password store must be set up prior to first use: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} pass init <GPG_KEY_ID> ``` Replace `<GPG_KEY_ID>` with your GPG key ID. You can use your personal GPG key or an alternative one you may want to use specifically to encrypt the password store. ### The `kwallet` backend The `kwallet` backend uses `KDE Wallet Manager`, which comes installed by default on GNU/Linux distributions that ship KDE as the default desktop environment. Please refer to the [KWallet Handbook](https://userbase.kde.org/KDE_Wallet_Manager) for more information.
### The `keyctl` backend The *Kernel Key Retention Service* is a security facility that has been added to the Linux kernel relatively recently. It allows sensitive cryptographic data such as passwords, private keys, authentication tokens, etc. to be stored securely in memory. The `keyctl` backend is available on Linux platforms only. ### The `test` backend The `test` backend is a password-less variation of the `file` backend. Keys are stored unencrypted on disk. **Provided for testing purposes only. The `test` backend is not recommended for use in production environments**. ### The `memory` backend The `memory` backend stores keys in memory. The keys are immediately deleted after the program has exited. **Provided for testing purposes only. The `memory` backend is not recommended for use in production environments**. ### Setting backend using the env variable You can set the keyring-backend using an environment variable: `BINNAME_KEYRING_BACKEND`. For example, if your binary name is `gaia-v5`, then set: `export GAIA_V5_KEYRING_BACKEND=pass` ### Additional key management By default, the keyring generates a `secp256k1` keypair. The keyring also supports `ed25519` keys, which may be created by passing the `--algo ed25519` flag. A keyring can hold both types of keys simultaneously, and the Cosmos SDK's `x/auth` module supports both public key algorithms natively. For help with key management commands, use `simd keys --help` or `simd keys [command] --help`. # Prerequisites Source: https://docs.cosmos.network/sdk/latest/node/prerequisites ## Introduction The Cosmos SDK requires Go and a built binary to run a blockchain node. This tutorial walks through installing Go, building the `simd` binary, and configuring your environment to run a Cosmos SDK node. ## Prerequisites This tutorial assumes you have the following installed: * A terminal application * A code editor * Basic familiarity with command-line operations ## 1. 
Install Go The Cosmos SDK requires [Go](https://go.dev/) version 1.25 or higher. Download the installer from the [official Go downloads page](https://go.dev/dl/) and follow the installation instructions for your operating system. Verify the installation: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} go version ``` ### Configure Go environment variables On macOS or Linux, open your shell config file (`~/.bashrc` or `~/.zshrc`) and add: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} export GOPATH=$HOME/go export PATH=$PATH:$GOPATH/bin ``` Apply the changes with `source ~/.bashrc` (or `source ~/.zshrc`). On Windows, in PowerShell (as Administrator): ```powershell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} [System.Environment]::SetEnvironmentVariable('GOPATH', "$HOME\go", 'User') [System.Environment]::SetEnvironmentVariable('Path', "$env:Path;$HOME\go\bin", 'User') ``` Restart PowerShell after setting the variables. Verify: `go env GOPATH` ## 2. Clone the Cosmos SDK repository This tutorial uses `simapp`, the Cosmos SDK example application. Clone the [Cosmos SDK repository](https://github.com/cosmos/cosmos-sdk) to access `simapp`. 1. Navigate to your preferred directory. This example uses `~/Documents/GitHub`: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cd ~/Documents/GitHub ``` 2. Clone the Cosmos SDK repository: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} git clone https://github.com/cosmos/cosmos-sdk.git ``` 3.
Navigate into the cloned repository: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cd cosmos-sdk ``` If you are building your own chain, clone your chain's repository instead. Replace `simd` with your chain's binary name throughout this tutorial. ## 3. Build the simd binary The `simd` binary is the command-line interface for interacting with the Cosmos SDK blockchain. Build and install the `simd` binary: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} make install ``` **Windows users**: If `make` is not available, you can install it via [Chocolatey](https://chocolatey.org/) (`choco install make`) or use WSL2. Verify `simd` is working: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd version ``` Your environment is now set up to run a Cosmos SDK node. ## Next steps * [Set up the keyring](/sdk/latest/node/keyring) to create and manage cryptographic keys # Running a Node Source: https://docs.cosmos.network/sdk/latest/node/run-node **Synopsis** This section explains how to run a blockchain node. The application used in this tutorial is [`simapp`](https://github.com/cosmos/cosmos-sdk/tree/main/simapp), and its corresponding CLI binary `simd`. **Prerequisite Readings** * [Prerequisites](/sdk/latest/node/prerequisites) - Set up Go and build the `simd` binary * [Anatomy of a Cosmos SDK Application](/sdk/latest/learn/intro/sdk-app-architecture) * [Setting up the keyring](/sdk/latest/node/keyring) ## 1. Initialize the chain Before running the node, initialize the chain and its genesis file. Use the `init` subcommand: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} # The argument is the custom username of your node, it should be human-readable. 
simd init --chain-id my-test-chain ``` The command above creates all the configuration files needed for your node to run, as well as a default genesis file, which defines the initial state of the network. All these configuration files are in `~/.simapp` by default, but you can overwrite the location of this folder by passing the `--home` flag to each command, or set an `$APPD_HOME` environment variable (where `APPD` is the name of the binary). **Windows users**: Replace `~/.simapp` with `%USERPROFILE%\.simapp` (or `$HOME\.simapp` in PowerShell). For `jq` and `sed` commands in this tutorial, install via [Chocolatey](https://chocolatey.org/) (`choco install jq sed`) or use [Git Bash](https://gitforwindows.org/)/WSL2. The `~/.simapp` folder has the following structure: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} . # ~/.simapp |- data # Contains the databases used by the node. |- config/ |- app.toml # Application-related configuration file. |- config.toml # CometBFT-related configuration file. |- genesis.json # The genesis file. |- node_key.json # Private key to use for node authentication in the p2p protocol. |- priv_validator_key.json # Private key to use as a validator in the consensus protocol. ``` ## 2. Update configuration settings (optional) To change field values in configuration files (for example, genesis.json), use `jq` ([installation](https://stedolan.github.io/jq/download/) & [docs](https://stedolan.github.io/jq/manual/#Assignment)) and `sed` commands. A few examples are listed here. 
```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} # to change the chain-id jq '.chain_id = "testing"' genesis.json > temp.json && mv temp.json genesis.json # to enable the api server sed -i '/\[api\]/,+3 s/enable = false/enable = true/' app.toml # to change the voting_period jq '.app_state.gov.voting_params.voting_period = "600s"' genesis.json > temp.json && mv temp.json genesis.json # to change the inflation jq '.app_state.mint.minter.inflation = "0.300000000000000000"' genesis.json > temp.json && mv temp.json genesis.json ``` ### Client Interaction When instantiating a node, the gRPC and REST endpoints default to localhost to avoid unknowingly exposing your node to the public. It is recommended not to expose these endpoints without a proxy that can handle load balancing or authentication set up between your node and the public. A commonly used tool for this is [nginx](https://nginx.org). ## 3. Add genesis accounts Earlier in this tutorial, you [created an account in the keyring](/sdk/latest/node/keyring#create-a-key) named `my_validator` under the `test` keyring backend. Now, you can grant this account some `stake` tokens in your chain's genesis file. Doing so will also make sure your chain is aware of this account's existence: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd genesis add-genesis-account $MY_VALIDATOR_ADDRESS 100000000000stake ``` Recall that `$MY_VALIDATOR_ADDRESS` is a variable that holds the address of the `my_validator` key in the [keyring](/sdk/latest/node/keyring#create-a-key). Also note that tokens in the Cosmos SDK have the `{amount}{denom}` format: `amount` is an 18-digit-precision decimal number, and `denom` is the unique token identifier (e.g., `atom` or `uatom`).
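The `{amount}{denom}` format can be illustrated with a small, self-contained Go sketch. Note this is not the SDK's own coin parser (the SDK ships its own parsing and stricter denom validation in the `types` package); the simplified pattern below is an assumption for illustration only:

```go
package main

import (
	"fmt"
	"regexp"
)

// coinRe is a simplified sketch of the {amount}{denom} token format:
// an integer amount immediately followed by a denomination such as
// "stake" or "uatom". The SDK's real validation rules are stricter.
var coinRe = regexp.MustCompile(`^(\d+)([a-z][a-z0-9/]*)$`)

// parseCoin splits a coin string like "100000000000stake" into its parts.
func parseCoin(s string) (amount, denom string, ok bool) {
	m := coinRe.FindStringSubmatch(s)
	if m == nil {
		return "", "", false
	}
	return m[1], m[2], true
}

func main() {
	amount, denom, ok := parseCoin("100000000000stake")
	fmt.Println(amount, denom, ok) // 100000000000 stake true
}
```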
Here, `stake` tokens are granted, as `stake` is the token identifier used for staking in [`simapp`](https://github.com/cosmos/cosmos-sdk/tree/main/simapp). For your own chain with its own staking denom, that token identifier should be used instead. ## 4. Create genesis transaction Now that your account has some tokens, you need to add a validator to your chain. Validators are special full-nodes that participate in the consensus process (implemented in the [underlying consensus engine](/sdk/latest/learn/intro/sdk-app-architecture#cometbft)) in order to add new blocks to the chain. Any account can declare its intention to become a validator operator, but only those with sufficient delegation get to enter the active set (for example, only the top 125 validator candidates with the most delegation get to be validators in the Cosmos Hub). For this guide, your local node (created via the `init` command above) will be added as a validator of your chain. Validators can be declared before a chain is first started via a special transaction included in the genesis file called a `gentx`: 1. Create a gentx: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd genesis gentx my_validator 100000000stake --chain-id my-test-chain --keyring-backend test ``` 2. Add the gentx to the genesis file: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd genesis collect-gentxs ``` A `gentx` does three things: 1. Registers the validator account you created as a validator operator account (i.e., the account that controls the validator). 2. Self-delegates the provided `amount` of staking tokens. 3. Links the operator account with a CometBFT node pubkey that will be used for signing blocks. If no `--pubkey` flag is provided, it defaults to the local node pubkey created via the `simd init` command above.
For more information on `gentx`, use the following command: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd genesis gentx --help ``` ## 5. Configure the node using `app.toml` and `config.toml` The Cosmos SDK automatically generates two configuration files inside `~/.simapp/config`: * `config.toml`: used to configure CometBFT; learn more in [CometBFT's documentation](/cometbft/latest/docs/core/configuration), * `app.toml`: generated by the Cosmos SDK, and used to configure your app, such as state pruning strategies, telemetry, gRPC and REST server configuration, state sync... Both files are heavily commented; please refer to them directly to tweak your node. One example config to tweak is the `minimum-gas-prices` field inside `app.toml`, which defines the minimum gas prices the validator node is willing to accept for processing a transaction. Depending on the chain, this field might ship empty or pre-set. If it's empty, make sure to set it to some value, for example `10token`, or else the node will halt on startup. For the purposes of this tutorial, the minimum gas price is set to 0: ```toml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} # The minimum gas prices a validator is willing to accept for processing a # transaction. A transaction's fees must meet the minimum of any denomination # specified in this config (e.g. 0.25token1;0.0001token2). minimum-gas-prices = "0stake" ``` When running a node (not a validator!) and not wanting to run the application mempool, set the `max-txs` field to `-1`. ```toml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} [mempool] # Setting max-txs to 0 will allow for an unbounded amount of transactions in the mempool. # Setting max-txs to negative 1 (-1) will disable transactions from being inserted into the mempool.
# Setting max-txs to a positive number (> 0) will limit the number of transactions in the mempool to the specified amount. # # Note, this configuration only applies to SDK built-in app-side mempool # implementations. max-txs = -1 ``` ## 6. Start the node Now that everything is set up, you can finally start your node: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd start ``` You should see blocks come in. ### What happens when the node starts The `start` command (defined in [`server/start.go`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/server/start.go)) boots up the full-node in the following sequence: 1. It opens the `db` (LevelDB by default) containing the latest persisted state. On first start, this is empty. 2. It creates a new instance of the application via an `appCreator` function, which is the [application constructor](/sdk/latest/learn/intro/sdk-app-architecture#constructor-function). 3. It instantiates a CometBFT node using the application. As part of `node.New`, CometBFT checks that the application's block height matches its own. If the application is behind, it replays blocks to catch up. If the height is `0`, it calls [`InitChain`](/sdk/latest/learn/concepts/baseapp#initchain) to initialize state from the genesis file. 4. Once in sync, the node starts its RPC and P2P servers and begins dialing peers. During the handshake, if the node is behind its peers, it queries missing blocks sequentially. Once caught up, it waits for new block proposals and validator signatures. The previous command allows you to run a single node. This is enough for the next section on interacting with this node, but you may wish to run multiple nodes at the same time and see how consensus happens between them. The naive way would be to run the same commands again in separate terminal windows, which works, but [Docker Compose](https://docs.docker.com/compose/) can be leveraged to run a localnet more conveniently.
If you need inspiration on how to set up your own localnet with Docker Compose, refer to the Cosmos SDK's [`docker-compose.yml`](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/docker-compose.yml). ### Standalone App/CometBFT By default, the Cosmos SDK runs CometBFT in-process with the application. If you want to run the application and CometBFT in separate processes, start the application with the `--with-comet=false` flag and set `rpc.laddr` in `config.toml` to the CometBFT node's RPC address. ## Logging Logging provides a way to see what is going on with a node. The default logging level is `info`. This is a global level: all logs at `info` and above are output to the terminal. To filter logs for specific modules instead, set per-module levels with `module:log_level` pairs. Example in `config.toml`: ```toml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} log_level = "state:info,p2p:info,consensus:info,x/staking:info,x/ibc:info,*error" ``` ### Verbose log level Some operations, such as chain upgrades, emit additional log messages when a higher log level is active. You can control this with the `--verbose_log_level` flag: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd start --verbose_log_level debug ``` See the [Log Overview](/sdk/latest/guides/testing/log) for more information on logging. ## State Sync State sync is the process by which a node syncs to the latest, or close to the latest, state of a blockchain. This is useful for users who don't want to sync all the blocks in history. Read more in [CometBFT documentation](/cometbft/latest/docs/core/state-sync). State sync works thanks to snapshots. Read how the SDK handles snapshots [here](https://github.com/cosmos/cosmos-sdk/blob/825245d/store/snapshots/README.md).
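In practice, enabling state sync on a fresh node means filling in the trust parameters in the `[statesync]` section of `config.toml`. A hedged sketch of scripting those edits with `sed` — the sample file below contains only the relevant keys, and the height, hash, and RPC servers are placeholders that must be replaced with real values obtained from trusted nodes:

```shell
# Create a sample [statesync] section (the real config.toml has many more keys).
cat > config.toml <<'EOF'
[statesync]
enable = false
rpc_servers = ""
trust_height = 0
trust_hash = ""
EOF

# Placeholder trust parameters; fetch real ones from RPC nodes you trust.
TRUST_HEIGHT=1000000
TRUST_HASH="<block hash at TRUST_HEIGHT>"
RPC_SERVERS="https://rpc1.example.com:443,https://rpc2.example.com:443"

# Toggle the feature on and fill in the trust parameters in place.
sed -i.bak \
  -e 's/^enable = false/enable = true/' \
  -e "s|^trust_height = .*|trust_height = $TRUST_HEIGHT|" \
  -e "s|^trust_hash = .*|trust_hash = \"$TRUST_HASH\"|" \
  -e "s|^rpc_servers = .*|rpc_servers = \"$RPC_SERVERS\"|" \
  config.toml
```

On a real node, point the script at `~/.simapp/config/config.toml` and restrict the `s/^enable/` substitution to the `[statesync]` block, since other sections also contain `enable` keys.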
### Local State Sync Local state sync works similarly to normal state sync except that it works off a local snapshot of state instead of one provided via the p2p network. The steps to start local state sync are similar to normal state sync with a few different design considerations. 1. As mentioned in the [state sync documentation](/cometbft/latest/docs/core/state-sync), one must set a trusted height and hash in `config.toml` along with a few RPC servers (the aforementioned link has instructions on how to do this). 2. Run `simd snapshots restore` to restore a local snapshot (note: first load it from a file with `simd snapshots load`). 3. Bootstrap the CometBFT state so the node can start after the snapshot has been ingested. This can be done with `simd comet bootstrap-state`. ### Snapshots Commands The Cosmos SDK provides commands for managing snapshots. These commands can be added in an app with the following snippet in your app's `cmd/<appd>/root.go`: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} import ( "github.com/cosmos/cosmos-sdk/client/snapshot" ) func initRootCmd(/* ... */) { // ... rootCmd.AddCommand( snapshot.Cmd(appCreator), ) } ``` Then the following commands are available at `simd snapshots [command]`: * **list**: list local snapshots * **load**: load a snapshot archive file into the snapshot store * **restore**: restore app state from a local snapshot * **export**: export app state to the snapshot store * **dump**: dump a snapshot as a portable archive * **delete**: delete a local snapshot ## Congratulations! Your node is now running and producing blocks. You have successfully initialized a Cosmos SDK blockchain from scratch.
## Next steps * [Interact with the node](/sdk/latest/node/interact-node) to send transactions and query state * [Generate and sign transactions](/sdk/latest/node/txs) to learn advanced transaction workflows # Running in Production Source: https://docs.cosmos.network/sdk/latest/node/run-production **Synopsis** This section describes how to securely run a node in a public setting and/or on a mainnet on one of the many Cosmos SDK public blockchains. When operating a node, full node, or validator in production, it is important to set your server up securely. This walkthrough assumes the underlying operating system is Ubuntu. There are many different ways to secure a server and your node. The steps described here are for informational purposes only. ## Server Setup ### User When creating a server, it is most often provisioned as the `root` user. This user has heightened privileges on the server. When operating a node, it is recommended not to run your node as the root user. 1. Create a new user ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} sudo adduser change_me ``` 2. Allow this user to perform sudo tasks ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} sudo usermod -aG sudo change_me ``` Now when logging into the server, the non-`root` user can be used. ### Go 1. Install the [Go](https://go.dev/doc/install) version recommended by the application. In the past, validators [have had issues](https://github.com/cosmos/cosmos-sdk/issues/13976) when using different versions of Go, so it is recommended that the whole validator set use the Go version recommended by the application. ### Firewall Nodes should not have all ports open to the public; this is a simple way to get DDoS'd. Secondly, [CometBFT](https://github.com/cometbft/cometbft) recommends never exposing ports that are not required to operate a node.
When setting up a firewall, there are a few ports that may be opened when operating a Cosmos SDK node. These include the CometBFT JSON-RPC, Prometheus, p2p, remote signer, and Cosmos SDK gRPC and REST. If the node does not offer public endpoints for transaction submission or querying, then at most three of these ports need to be open. Most, if not all, servers come equipped with [ufw](https://help.ubuntu.com/community/UFW). Ufw will be used in this tutorial. 1. Reset UFW to disallow all incoming connections and allow outgoing ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} sudo ufw default deny incoming sudo ufw default allow outgoing ``` 2. Let's make sure that port 22 (SSH) stays open. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} sudo ufw allow ssh ``` or ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} sudo ufw allow 22 ``` Both of the above commands do the same thing. 3. Allow port 26656 (CometBFT p2p port). If the node has a modified p2p port, then that port must be used here. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} sudo ufw allow 26656/tcp ``` 4. Allow port 26660 (CometBFT [Prometheus](https://prometheus.io)). This acts as the application's monitoring port as well. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} sudo ufw allow 26660/tcp ``` 5. If the node being set up should expose CometBFT's JSON-RPC and Cosmos SDK gRPC and REST, then follow this step.
(Optional) ##### CometBFT JSON-RPC ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} sudo ufw allow 26657/tcp ``` ##### Cosmos SDK gRPC ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} sudo ufw allow 9090/tcp ``` ##### Cosmos SDK REST ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} sudo ufw allow 1317/tcp ``` 6. Lastly, enable ufw ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} sudo ufw enable ``` ### Signing If the node being started is a validator, there are multiple ways the validator could sign blocks. #### File File-based signing is the simplest and default approach. This approach works by using the consensus key generated on initialization to sign blocks. It is only as safe as your server setup: if the server is compromised, so is your key. The key is located in the `config/priv_validator_key.json` file generated on initialization. A second file exists that users must be aware of: `data/priv_validator_state.json` in the data directory. This file protects your node from double signing. It keeps track of the consensus key's last sign height, round, and latest signature. If the node crashes and needs to be recovered, this file must be kept in order to ensure that the consensus key will not be used for signing a block that was previously signed. #### Remote Signer A remote signer is a secondary server, separate from the running node, that signs blocks with the consensus key. This means that the consensus key does not live on the node itself. This increases security because the full node connected to the remote signer can be swapped without missing blocks.
The two most used remote signers are [tmkms](https://github.com/iqlusioninc/tmkms) from [Iqlusion](https://www.iqlusion.io) and [horcrux](https://github.com/strangelove-ventures/horcrux) from [Strangelove](https://strange.love). ##### TMKMS ###### Dependencies 1. Update server dependencies and install extras needed. ```sh theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} sudo apt update -y && sudo apt install build-essential curl jq -y ``` 2. Install Rust: ```sh theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh ``` 3. Install Libusb: ```sh theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} sudo apt install libusb-1.0-0-dev ``` ###### Setup There are two ways to install tmkms: from source or via `cargo install`. The examples below cover both, using softsign. Softsign stands for software signing, but you could use a [yubihsm](https://www.yubico.com/products/hardware-security-module/) as your signing key if you wish. 1. Build: From source: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cd $HOME git clone https://github.com/iqlusioninc/tmkms.git cd $HOME/tmkms cargo install --path . --features=softsign tmkms init config tmkms softsign keygen ./config/secrets/secret_connection_key ``` or Cargo install: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} cargo install tmkms --features=softsign tmkms init config tmkms softsign keygen ./config/secrets/secret_connection_key ``` To use tmkms with a YubiHSM, install the binary with `--features=yubihsm`. 2. Migrate the validator key from the full node to the new tmkms instance.
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} scp user@123.456.32.123:~/.simd/config/priv_validator_key.json ~/tmkms/config/secrets ``` 3. Import the validator key into tmkms. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} tmkms softsign import $HOME/tmkms/config/secrets/priv_validator_key.json $HOME/tmkms/config/secrets/priv_validator_key ``` At this point, it is necessary to delete the `priv_validator_key.json` from the validator node and the tmkms node. Since the key has been imported into tmkms (above), it is no longer needed on the nodes. The key can be safely stored offline. 4. Modify the `tmkms.toml`. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} vim $HOME/tmkms/config/tmkms.toml ``` This example shows a configuration that could be used for soft signing. The example has an IP of `123.456.12.345` with a port of `26659` and a chain\_id of `test-chain-waSDSe`. These are items that must be modified for the use case of tmkms and the network. ```toml expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} # CometBFT KMS configuration file ## Chain Configuration [[chain]] id = "test-chain-waSDSe" key_format = { type = "bech32", account_key_prefix = "cosmospub", consensus_key_prefix = "cosmosvalconspub" } state_file = "/root/tmkms/config/state/priv_validator_state.json" ## Signing Provider Configuration ### Software-based Signer Configuration [[providers.softsign]] chain_ids = ["test-chain-waSDSe"] key_type = "consensus" path = "/root/tmkms/config/secrets/priv_validator_key" ## Validator Configuration [[validator]] chain_id = "test-chain-waSDSe" addr = "tcp://123.456.12.345:26659" secret_key = "/root/tmkms/config/secrets/secret_connection_key" protocol_version = "v0.34" reconnect = true ``` 5. Set the address of the tmkms instance.
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} vim $HOME/.simd/config/config.toml priv_validator_laddr = "tcp://0.0.0.0:26659" ``` The above address listens on `0.0.0.0`; it is recommended to restrict it to the address of the tmkms server instead. It is also recommended to comment out or delete the lines that specify the paths of the validator key and validator state: ```toml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} # Path to the JSON file containing the private key to use as a validator in the consensus protocol # priv_validator_key_file = "config/priv_validator_key.json" # Path to the JSON file containing the last sign state of a validator # priv_validator_state_file = "data/priv_validator_state.json" ``` 6. Start the two processes. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} tmkms start -c $HOME/tmkms/config/tmkms.toml ``` ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd start ``` # Running a Testnet Source: https://docs.cosmos.network/sdk/latest/node/run-testnet **Synopsis** The `simd testnet` subcommand makes it easy to initialize and start a simulated test network for testing purposes. In addition to the commands for [running a node](/sdk/latest/node/run-node), the `simd` binary also includes a `testnet` command that allows you to start a simulated test network in-process or to initialize files for a simulated test network that runs in a separate process. ## Initialize Files First, let's take a look at the `init-files` subcommand. This is similar to the `init` command when initializing a single node, but in this case we are initializing multiple nodes, generating the genesis transactions for each node, and then collecting those transactions. The `init-files` subcommand initializes the necessary files to run a test network in a separate process (i.e.
using a Docker container). Running this command is not a prerequisite for the `start` subcommand ([see below](#start-testnet)). In order to initialize the files for a test network, run the following command: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd testnet init-files ``` You should see the following output in your terminal: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} Successfully initialized 4 node directories ``` The default output directory is a relative `.testnets` directory. Let's take a look at the files created within the `.testnets` directory. ### gentxs The `gentxs` directory includes a genesis transaction for each validator node. Each file includes a JSON encoded genesis transaction used to register a validator node at the time of genesis. The genesis transactions are added to the `genesis.json` file within each node directory during the initialization process. ### nodes A node directory is created for each validator node. Within each node directory is a `simd` directory. The `simd` directory is the home directory for each node, which includes the configuration and data files for that node (i.e. the same files included in the default `~/.simapp` directory when running a single node). ## Start Testnet Now, let's take a look at the `start` subcommand. The `start` subcommand both initializes and starts an in-process test network. This is the fastest way to spin up a local test network for testing purposes. 
You can start the local test network by running the following command: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd testnet start ``` You should see something similar to the following: ```bash expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} acquiring test network lock preparing test network with chain-id "chain-mtoD9v" +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ ++ THIS MNEMONIC IS FOR TESTING PURPOSES ONLY ++ ++ DO NOT USE IN PRODUCTION ++ ++ ++ ++ sustain know debris minute gate hybrid stereo custom ++ ++ divorce cross spoon machine latin vibrant term oblige ++ ++ moment beauty laundry repeat grab game bronze truly ++ +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ starting test network... started test network press the Enter Key to terminate ``` The first validator node is now running in-process, which means the test network will terminate once you either close the terminal window or press the Enter key. In the output, the mnemonic phrase for the first validator node is provided for testing purposes. The validator node uses the same default addresses used when initializing and starting a single node (no need to provide a `--node` flag). Check the status of the first validator node: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd status ``` Import the key from the provided mnemonic: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd keys add test --recover --keyring-backend test ``` Check the balance of the account address: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd q bank balances [address] ``` Use this test account to manually test against the test network.
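If you automate local testing against `simd testnet start`, the test mnemonic can be recovered from a captured copy of the banner shown above. A rough sketch — the helper name is ours, and it assumes the exact banner layout printed above (word lines start with `++ ` followed by lowercase words):

```shell
# Extract the mnemonic words from a saved copy of the testnet start banner.
# Assumption: mnemonic lines begin with "++ " and a lowercase word, while
# the warning lines are uppercase and the borders are solid runs of '+'.
extract_mnemonic() {
  grep '^++ [a-z]' "$1" | sed -e 's/^++ //' -e 's/ *++ *$//' | tr '\n' ' ' | sed 's/ *$//'
}

# Sample banner in the layout shown above (normally captured from simd output).
cat > banner.txt <<'EOF'
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++ THIS MNEMONIC IS FOR TESTING PURPOSES ONLY ++
++ DO NOT USE IN PRODUCTION ++
++ sustain know debris minute gate hybrid stereo custom ++
++ divorce cross spoon machine latin vibrant term oblige ++
++ moment beauty laundry repeat grab game bronze truly ++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
EOF

MNEMONIC=$(extract_mnemonic banner.txt)
echo "$MNEMONIC" | wc -w   # 24 words recovered
```

The recovered phrase could then be piped into `simd keys add test --recover` in a local script; since the mnemonic is for testing only, capturing it this way is harmless.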
## Testnet Options You can customize the configuration of the test network with flags. In order to see all flag options, append the `--help` flag to each command. # Generating, Signing and Broadcasting Transactions Source: https://docs.cosmos.network/sdk/latest/node/txs **Synopsis** This document describes how to generate an (unsigned) transaction, sign it (with one or multiple keys), and broadcast it to the network. ## Using the CLI The easiest way to send transactions is using the CLI, as shown in the previous page when [interacting with a node](/sdk/latest/node/interact-node#using-the-cli). For example, running the following command ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx bank send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000stake --chain-id my-test-chain --keyring-backend test ``` will run the following steps: * generate a transaction with one `Msg` (`x/bank`'s `MsgSend`), and print the generated transaction to the console. * ask the user for confirmation to send the transaction from the `$MY_VALIDATOR_ADDRESS` account. * fetch `$MY_VALIDATOR_ADDRESS` from the keyring. This is possible because the [CLI's keyring was set up](/sdk/latest/node/keyring) in a previous step. * sign the generated transaction with the keyring's account. * broadcast the signed transaction to the network. This is possible because the CLI connects to the node's CometBFT RPC endpoint. The CLI bundles all the necessary steps into a simple-to-use user experience. However, it is possible to run all the steps individually too. ### Generating a Transaction Generating a transaction can simply be done by appending the `--generate-only` flag on any `tx` command, e.g.: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx bank send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000stake --chain-id my-test-chain --generate-only ``` This will output the unsigned transaction as JSON in the console.
The unsigned transaction can also be saved to a file (to be passed around between signers more easily) by appending `> unsigned_tx.json` to the above command. ### Signing a Transaction Signing a transaction using the CLI requires the unsigned transaction to be saved in a file. For this example, assume the unsigned transaction is in a file called `unsigned_tx.json` in the current directory (see previous paragraph on how to do that). Then, simply run the following command: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx sign unsigned_tx.json --chain-id my-test-chain --keyring-backend test --from $MY_VALIDATOR_ADDRESS ``` This command will decode the unsigned transaction and sign it using `SIGN_MODE_DIRECT` with `$MY_VALIDATOR_ADDRESS`'s key, which was already set up in the keyring. The signed transaction will be output as JSON to the console, and, as above, it can be saved to a file by appending `--output-document signed_tx.json`. Some useful flags to consider in the `tx sign` command: * `--sign-mode`: you may use `amino-json` to sign the transaction using `SIGN_MODE_LEGACY_AMINO_JSON`, * `--offline`: sign in offline mode. This means that the `tx sign` command doesn't connect to the node to retrieve the signer's account number and sequence, both needed for signing. In this case, you must manually supply the `--account-number` and `--sequence` flags. This is useful for signing in a secure environment which doesn't have access to the internet. #### Signing with Multiple Signers Please note that signing a transaction with multiple signers or with a multisig account, where at least one signer uses `SIGN_MODE_DIRECT`, is not yet possible. You may follow [this Github issue](https://github.com/cosmos/cosmos-sdk/issues/8141) for more info. Signing with multiple signers is done with the `tx multisign` command. This command assumes that all signers use `SIGN_MODE_LEGACY_AMINO_JSON`.
The flow is similar to the `tx sign` command flow, but instead of signing an unsigned transaction file, each signer signs the file signed by previous signer(s). The `tx multisign` command will append signatures to the existing transaction. It is important that signers sign the transaction **in the same order** as given by the transaction, which is retrievable using the `GetSigners()` method. For example, starting with the `unsigned_tx.json`, and assuming the transaction has 4 signers, the first rounds would look like: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} # Let signer1 sign the unsigned tx. simd tx multisign unsigned_tx.json signer_key_1 --chain-id my-test-chain --keyring-backend test > partial_tx_1.json # Now signer1 sends partial_tx_1.json to signer2. # Signer2 appends their signature: simd tx multisign partial_tx_1.json signer_key_2 --chain-id my-test-chain --keyring-backend test > partial_tx_2.json # Signer2 sends the partial_tx_2.json file to signer3, and signer3 can append their signature: simd tx multisign partial_tx_2.json signer_key_3 --chain-id my-test-chain --keyring-backend test > partial_tx_3.json ``` ### Broadcasting a Transaction Broadcasting a transaction is done using the following command: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx broadcast tx_signed.json ``` You may optionally pass the `--broadcast-mode` flag to specify which response to receive from the node: * `sync`: the CLI waits for a CheckTx execution response only. * `async`: the CLI returns immediately (transaction might fail). ### Encoding a Transaction In order to broadcast a transaction using the gRPC or REST endpoints, the transaction will need to be encoded first. This can be done using the CLI.
Encoding a transaction is done using the following command: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx encode tx_signed.json ``` This will read the transaction from the file, serialize it using Protobuf, and output the transaction bytes as base64 in the console. ### Decoding a Transaction The CLI can also be used to decode transaction bytes. Decoding a transaction is done using the following command: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx decode [protobuf-byte-string] ``` This will decode the transaction bytes and output the transaction as JSON in the console. You can also save the transaction to a file by appending `> tx.json` to the above command. ## Programmatically with Go It is possible to manipulate transactions programmatically via Go using the Cosmos SDK's `TxBuilder` interface. ### Generating a Transaction Before generating a transaction, a new instance of a `TxBuilder` needs to be created. Since the Cosmos SDK supports both Amino and Protobuf transactions, the first step would be to decide which encoding scheme to use. All the subsequent steps remain unchanged, whether you're using Amino or Protobuf, as `TxBuilder` abstracts the encoding mechanisms. In the following snippet, we will use Protobuf. ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} import ( "github.com/cosmos/cosmos-sdk/simapp" ) func sendTx() error { // Choose your codec: Amino or Protobuf. Here, we use Protobuf, given by the following function. app := simapp.NewSimApp(...) // Create a new TxBuilder. txBuilder := app.TxConfig().NewTxBuilder() // --snip-- } ``` The following example sets up some keys and addresses that will send and receive the transactions. For the purpose of this tutorial, dummy data is used to create keys. 
```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} import ( "github.com/cosmos/cosmos-sdk/testutil/testdata" ) priv1, _, addr1 := testdata.KeyTestPubAddr() priv2, _, addr2 := testdata.KeyTestPubAddr() priv3, _, addr3 := testdata.KeyTestPubAddr() ``` Populating the `TxBuilder` can be done via its methods: ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} package client import ( "time" txsigning "cosmossdk.io/x/tx/signing" codectypes "github.com/cosmos/cosmos-sdk/codec/types" sdk "github.com/cosmos/cosmos-sdk/types" "github.com/cosmos/cosmos-sdk/types/tx" signingtypes "github.com/cosmos/cosmos-sdk/types/tx/signing" "github.com/cosmos/cosmos-sdk/x/auth/signing" ) type ( // TxEncodingConfig defines an interface that contains transaction // encoders and decoders TxEncodingConfig interface { TxEncoder() sdk.TxEncoder TxDecoder() sdk.TxDecoder TxJSONEncoder() sdk.TxEncoder TxJSONDecoder() sdk.TxDecoder MarshalSignatureJSON([]signingtypes.SignatureV2) ([]byte, error) UnmarshalSignatureJSON([]byte) ([]signingtypes.SignatureV2, error) } // TxConfig defines an interface a client can utilize to generate an // application-defined concrete transaction type. The type returned must // implement TxBuilder. TxConfig interface { TxEncodingConfig NewTxBuilder() TxBuilder WrapTxBuilder(sdk.Tx) (TxBuilder, error) SignModeHandler() *txsigning.HandlerMap SigningContext() *txsigning.Context } // TxBuilder defines an interface which an application-defined concrete transaction // type must implement. Namely, it must be able to set messages, generate // signatures, and provide canonical bytes to sign over. The transaction must // also know how to encode itself. 
TxBuilder interface { GetTx() signing.Tx SetMsgs(msgs ...sdk.Msg) error SetSignatures(signatures ...signingtypes.SignatureV2) error SetMemo(memo string) SetFeeAmount(amount sdk.Coins) SetFeePayer(feePayer sdk.AccAddress) SetGasLimit(limit uint64) SetTimeoutHeight(height uint64) SetTimeoutTimestamp(timestamp time.Time) SetUnordered(v bool) SetFeeGranter(feeGranter sdk.AccAddress) AddAuxSignerData(tx.AuxSignerData) error } // ExtendedTxBuilder extends the TxBuilder interface, // which is used to set extension options to be included in a transaction. ExtendedTxBuilder interface { SetExtensionOptions(extOpts ...*codectypes.Any) } ) ``` ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} import ( sdk "github.com/cosmos/cosmos-sdk/types" banktypes "github.com/cosmos/cosmos-sdk/x/bank/types" ) func sendTx() error { // --snip-- // Define two x/bank MsgSend messages: // - from addr1 to addr3 // - from addr2 to addr3 // This means that the transaction needs two signers: addr1 and addr2. msg1 := banktypes.NewMsgSend(addr1, addr3, sdk.NewCoins(sdk.NewInt64Coin("atom", 12))) msg2 := banktypes.NewMsgSend(addr2, addr3, sdk.NewCoins(sdk.NewInt64Coin("atom", 34))) err := txBuilder.SetMsgs(msg1, msg2) if err != nil { return err } txBuilder.SetGasLimit(...) txBuilder.SetFeeAmount(...) txBuilder.SetMemo(...) txBuilder.SetTimeoutHeight(...) } ``` At this point, `TxBuilder`'s underlying transaction is ready to be signed. #### Generating an Unordered Transaction Starting with Cosmos SDK v0.53.0, users may send unordered transactions to chains that have the feature enabled. Unordered transactions MUST leave sequence values unset. When a transaction is both unordered and contains a non-zero sequence value, the transaction will be rejected. External services that operate on prior assumptions about transaction sequence values should be updated to handle unordered transactions.
Services should be aware that when the transaction is unordered, the transaction sequence will always be zero. Using the example above, we can set the required fields to mark a transaction as unordered. By default, unordered transactions charge an extra 2240 units of gas to offset the additional storage overhead that supports their functionality. The extra units of gas are customizable and therefore vary by chain, so be sure to check the chain's ante handler for the gas value set, if any. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func sendTx() error { // --snip-- expiration := 5 * time.Minute txBuilder.SetUnordered(true) txBuilder.SetTimeoutTimestamp(time.Now().Add(expiration + (1 * time.Nanosecond))) } ``` Unordered transactions from the same account must use a unique timeout timestamp value. The difference between each timeout timestamp value may be as small as a nanosecond, however. ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} import ( "github.com/cosmos/cosmos-sdk/client" ) func sendMessages(txBuilders []client.TxBuilder) error { // --snip-- expiration := 5 * time.Minute for _, txb := range txBuilders { txb.SetUnordered(true) txb.SetTimeoutTimestamp(time.Now().Add(expiration + (1 * time.Nanosecond))) } } ``` ### Signing a Transaction The encoding config is set to use Protobuf, which will use `SIGN_MODE_DIRECT` by default. As per [ADR-020](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-020-protobuf-transaction-encoding.md), each signer needs to sign the `SignerInfo`s of all other signers. This means that two steps must be performed sequentially: * for each signer, populate the signer's `SignerInfo` inside `TxBuilder` * once all `SignerInfo`s are populated, for each signer, sign the `SignDoc` (the payload to be signed). In the current `TxBuilder`'s API, both steps are done using the same method: `SetSignatures()`. 
The current API requires a first round of `SetSignatures()` *with empty signatures*, only to populate `SignerInfo`s, and a second round of `SetSignatures()` to actually sign the correct payload. ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} import ( "github.com/cosmos/cosmos-sdk/client/tx" cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" "github.com/cosmos/cosmos-sdk/types/tx/signing" xauthsigning "github.com/cosmos/cosmos-sdk/x/auth/signing" ) func sendTx() error { // --snip-- privs := []cryptotypes.PrivKey{ priv1, priv2 } accNums := []uint64{..., ... } // The accounts' account numbers accSeqs := []uint64{..., ... } // The accounts' sequence numbers // First round: we gather all the signer infos. We use the "set empty // signature" hack to do that. var sigsV2 []signing.SignatureV2 for i, priv := range privs { sigV2 := signing.SignatureV2{ PubKey: priv.PubKey(), Data: &signing.SingleSignatureData{ SignMode: encCfg.TxConfig.SignModeHandler().DefaultMode(), Signature: nil, }, Sequence: accSeqs[i], } sigsV2 = append(sigsV2, sigV2) } err := txBuilder.SetSignatures(sigsV2...) if err != nil { return err } // Second round: all signer infos are set, so each signer can sign. sigsV2 = []signing.SignatureV2{ } for i, priv := range privs { signerData := xauthsigning.SignerData{ ChainID: chainID, AccountNumber: accNums[i], Sequence: accSeqs[i], } sigV2, err := tx.SignWithPrivKey( encCfg.TxConfig.SignModeHandler().DefaultMode(), signerData, txBuilder, priv, encCfg.TxConfig, accSeqs[i]) if err != nil { return err } sigsV2 = append(sigsV2, sigV2) } err = txBuilder.SetSignatures(sigsV2...) if err != nil { return err } } ``` The `TxBuilder` is now correctly populated. To print it, you can use the `TxConfig` interface from the initial encoding config `encCfg`: ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func sendTx() error { // --snip-- // Generated Protobuf-encoded bytes.
txBytes, err := encCfg.TxConfig.TxEncoder()(txBuilder.GetTx()) if err != nil { return err } // Generate a JSON string. txJSONBytes, err := encCfg.TxConfig.TxJSONEncoder()(txBuilder.GetTx()) if err != nil { return err } txJSON := string(txJSONBytes) } ``` ### Broadcasting a Transaction The preferred way to broadcast a transaction is to use gRPC, though using REST (via `gRPC-gateway`) or the CometBFT RPC is also possible. An overview of the differences between these methods is available [here](/sdk/latest/learn/concepts/cli-grpc-rest). For this tutorial, we will only describe the gRPC method. ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} import ( "context" "fmt" "google.golang.org/grpc" "github.com/cosmos/cosmos-sdk/types/tx" ) func sendTx(ctx context.Context) error { // --snip-- // Create a connection to the gRPC server. grpcConn, err := grpc.Dial( "127.0.0.1:9090", // Or your gRPC server address. grpc.WithInsecure(), // The Cosmos SDK doesn't support any transport security mechanisms. ) if err != nil { return err } defer grpcConn.Close() // Broadcast the tx via gRPC. We create a new client for the Protobuf Tx // service. txClient := tx.NewServiceClient(grpcConn) // We then call the BroadcastTx method on this client. grpcRes, err := txClient.BroadcastTx( ctx, &tx.BroadcastTxRequest{ Mode: tx.BroadcastMode_BROADCAST_MODE_SYNC, TxBytes: txBytes, // Proto-binary of the signed transaction, see previous step. }, ) if err != nil { return err } fmt.Println(grpcRes.TxResponse.Code) // Should be `0` if the tx is successful return nil } ``` #### Simulating a Transaction Before broadcasting a transaction, we may want to dry-run it to estimate information about the transaction, such as gas usage, without actually committing it.
This is called simulating a transaction, and can be done as follows: ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} import ( "context" "fmt" "github.com/cosmos/cosmos-sdk/client" "github.com/cosmos/cosmos-sdk/types/tx" authtx "github.com/cosmos/cosmos-sdk/x/auth/tx" ) func simulateTx() error { // --snip-- // Simulate the tx via gRPC. We create a new client for the Protobuf Tx // service. txClient := tx.NewServiceClient(grpcConn) txBytes := /* Fill in with your signed transaction bytes. */ // We then call the Simulate method on this client. grpcRes, err := txClient.Simulate( context.Background(), &tx.SimulateRequest{ TxBytes: txBytes, }, ) if err != nil { return err } fmt.Println(grpcRes.GasInfo) // Prints estimated gas used. return nil } ``` ## Using gRPC It is not possible to generate or sign a transaction using gRPC, only to broadcast one. In order to broadcast a transaction using gRPC, you will need to generate, sign, and encode the transaction using either the CLI or programmatically with Go. ### Broadcasting a Transaction Broadcasting a transaction using the gRPC endpoint can be done by sending a `BroadcastTx` request as follows, where the `txBytes` are the protobuf-encoded bytes of a signed transaction: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} grpcurl -plaintext \ -d '{"tx_bytes":"{{txBytes}}","mode":"BROADCAST_MODE_SYNC"}' \ localhost:9090 \ cosmos.tx.v1beta1.Service/BroadcastTx ``` ## Using REST It is not possible to generate or sign a transaction using REST, only to broadcast one. In order to broadcast a transaction using REST, you will need to generate, sign, and encode the transaction using either the CLI or programmatically with Go.
### Broadcasting a Transaction Broadcasting a transaction using the REST endpoint (served by `gRPC-gateway`) can be done by sending a POST request as follows, where the `txBytes` are the protobuf-encoded bytes of a signed transaction: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} curl -X POST \ -H "Content-Type: application/json" \ -d '{"tx_bytes":"{{txBytes}}","mode":"BROADCAST_MODE_SYNC"}' \ localhost:1317/cosmos/tx/v1beta1/txs ``` ## Using CosmJS (JavaScript & TypeScript) CosmJS aims to build client libraries in JavaScript that can be embedded in web applications. Please see [Link](https://cosmos.github.io/cosmjs) for more information. ## Congratulations! You have learned how to manually generate, sign, and broadcast transactions using the Cosmos SDK. These workflows provide the foundation for building custom transaction tools and integrations. ## Next steps * [Run in production](/sdk/latest/node/run-production) for security and deployment best practices. * [Run a testnet](/sdk/latest/node/run-testnet) to test your blockchain. # ADR Creation Process Source: https://docs.cosmos.network/sdk/latest/reference/architecture/PROCESS 1. Copy the `adr-template.md` file. Use the following filename pattern: `adr-next_number-title.md` 2. Create a draft Pull Request if you want to get early feedback. 3. Make sure the context and solution are clear and well documented. 4. Add an entry to the list in the [README](/sdk/latest/reference/architecture/README) file. 5. Create a Pull Request to propose a new ADR. ## What is an ADR? An ADR is a document that records an implementation and design decision, which may or may not have been discussed in an RFC. While an RFC is meant to replace synchronous communication in a distributed environment, an ADR is meant to document a decision that has already been made. An ADR carries little communication overhead because the discussion was recorded in an RFC or a synchronous discussion.
If the consensus came from a synchronous discussion, then a short excerpt should be added to the ADR to explain the goals. ## ADR life cycle ADR creation is an **iterative** process. Instead of having a high amount of communication overhead, an ADR is used when there is already a decision made and implementation details need to be added. The ADR should document what the collective consensus for the specific issue is and how to solve it. 1. Every ADR should start with either an RFC or a discussion where consensus has been reached. 2. Once consensus is reached, a GitHub Pull Request (PR) is created with a new document based on the `adr-template.md`. 3. If a *proposed* ADR is merged, then it should clearly document outstanding issues either in ADR document notes or in a GitHub Issue. 4. The PR SHOULD always be merged. In the case of a faulty ADR, we still prefer to merge it with a *rejected* status. The only time the ADR SHOULD NOT be merged is if the author abandons it. 5. Merged ADRs SHOULD NOT be pruned. ### ADR status Status has two components: ```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} {CONSENSUS STATUS} {IMPLEMENTATION STATUS} ``` IMPLEMENTATION STATUS is either `Implemented` or `Not Implemented`. #### Consensus Status ```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} DRAFT -> PROPOSED -> LAST CALL yyyy-mm-dd -> ACCEPTED | REJECTED -> SUPERSEDED by ADR-xxx \ | \ | v v ABANDONED ``` * `DRAFT`: \[optional] an ADR that is a work in progress, not yet ready for general review. This is used to present early work and get early feedback in a Draft Pull Request form. * `PROPOSED`: an ADR covering a full solution architecture and still under review; project stakeholders haven't reached an agreement yet. * `LAST CALL`: \[optional] Notifies that we are close to accepting updates.
Changing a status to `LAST CALL` means that social consensus (of Cosmos SDK maintainers) has been reached, and we still want to give the community time to react or analyze it. * `ACCEPTED`: an ADR that represents a currently implemented or to-be-implemented architecture design. * `REJECTED`: an ADR can move from PROPOSED or ACCEPTED to REJECTED if project stakeholders decide so. * `SUPERSEDED by ADR-xxx`: an ADR that has been superseded by a new ADR. * `ABANDONED`: the ADR is no longer pursued by the original authors. ## Language used in ADR * The context/background should be written in the present tense. * Avoid using the first person. # Architecture Decision Records (ADR) Source: https://docs.cosmos.network/sdk/latest/reference/architecture/README This is a location to record all high-level architecture decisions in the Cosmos-SDK. An Architectural Decision (**AD**) is a software design choice that addresses a functional or non-functional requirement that is architecturally significant. An Architecturally Significant Requirement (**ASR**) is a requirement that has a measurable effect on a software system’s architecture and quality. An Architectural Decision Record (**ADR**) captures a single AD, much like personal notes or meeting minutes do; the collection of ADRs created and maintained in a project constitutes its decision log. All these are within the topic of Architectural Knowledge Management (AKM). You can read more about the ADR concept in this [blog post](https://product.reverb.com/documenting-architecture-decisions-the-reverb-way-a3563bb24bd0#.78xhdix6t). ## Rationale ADRs are intended to be the primary mechanism for proposing new feature designs and new processes, for collecting community input on an issue, and for documenting the design decisions.
An ADR should provide: * Context on the relevant goals and the current state * Proposed changes to achieve the goals * Summary of pros and cons * References * Changelog Note the distinction between an ADR and a spec. The ADR provides the context, intuition, reasoning, and justification for a change in architecture, or for the architecture of something new. The spec is a much more compressed and streamlined summary of everything as it stands today. If recorded decisions turned out to be lacking, convene a discussion, record the new decisions here, and then modify the code to match. ## Creating new ADR Read about the [PROCESS](/sdk/latest/reference/architecture/PROCESS). ### Use RFC 2119 Keywords When writing ADRs, follow the same best practices for writing RFCs. When writing RFCs, key words are used to signify the requirements in the specification. These words are often capitalized: "MUST," "MUST NOT," "REQUIRED," "SHALL," "SHALL NOT," "SHOULD," "SHOULD NOT," "RECOMMENDED," "MAY," and "OPTIONAL." They are to be interpreted as described in [RFC 2119](https://datatracker.ietf.org/doc/html/rfc2119). 
## ADR Table of Contents ### Accepted * [ADR 002: SDK Documentation Structure](/sdk/latest/reference/architecture/adr-002-docs-structure) * [ADR 004: Split Denomination Keys](/sdk/latest/reference/architecture/adr-004-split-denomination-keys) * [ADR 006: Secret Store Replacement](/sdk/latest/reference/architecture/adr-006-secret-store-replacement) * [ADR 009: Evidence Module](/sdk/latest/reference/architecture/adr-009-evidence-module) * [ADR 010: Modular AnteHandler](/sdk/latest/reference/architecture/adr-010-modular-antehandler) * [ADR 019: Protocol Buffer State Encoding](/sdk/latest/reference/architecture/adr-019-protobuf-state-encoding) * [ADR 020: Protocol Buffer Transaction Encoding](/sdk/latest/reference/architecture/adr-020-protobuf-transaction-encoding) * [ADR 021: Protocol Buffer Query Encoding](/sdk/latest/reference/architecture/adr-021-protobuf-query-encoding) * [ADR 023: Protocol Buffer Naming and Versioning](/sdk/latest/reference/architecture/adr-023-protobuf-naming) * [ADR 029: Fee Grant Module](/sdk/latest/reference/architecture/adr-029-fee-grant-module) * [ADR 030: Message Authorization Module](/sdk/latest/reference/architecture/adr-030-authz-module) * [ADR 031: Protobuf Msg Services](/sdk/latest/reference/architecture/adr-031-msg-service) * [ADR 055: ORM](/sdk/latest/reference/architecture/adr-055-orm) * [ADR 058: Auto-Generated CLI](/sdk/latest/reference/architecture/adr-058-auto-generated-cli) * [ADR 060: ABCI 1.0 (Phase I)](/sdk/latest/reference/architecture/adr-060-abci-1.0) * [ADR 061: Liquid Staking](/sdk/latest/reference/architecture/adr-061-liquid-staking) ### Proposed * [ADR 003: Dynamic Capability Store](/sdk/latest/reference/architecture/adr-003-dynamic-capability-store) * [ADR 011: Generalize Genesis Accounts](/sdk/latest/reference/architecture/adr-011-generalize-genesis-accounts) * [ADR 012: State Accessors](/sdk/latest/reference/architecture/adr-012-state-accessors) * [ADR 013: 
Metrics](/sdk/latest/reference/architecture/adr-013-metrics) * [ADR 016: Validator Consensus Key Rotation](/sdk/latest/reference/architecture/adr-016-validator-consensus-key-rotation) * [ADR 017: Historical Header Module](/sdk/latest/reference/architecture/adr-017-historical-header-module) * [ADR 018: Extendable Voting Periods](/sdk/latest/reference/architecture/adr-018-extendable-voting-period) * [ADR 022: Custom baseapp panic handling](/sdk/latest/reference/architecture/adr-022-custom-panic-handling) * [ADR 024: Coin Metadata](/sdk/latest/reference/architecture/adr-024-coin-metadata) * [ADR 027: Deterministic Protobuf Serialization](/sdk/latest/reference/architecture/adr-027-deterministic-protobuf-serialization) * [ADR 028: Public Key Addresses](/sdk/latest/reference/architecture/adr-028-public-key-addresses) * [ADR 032: Typed Events](/sdk/latest/reference/architecture/adr-032-typed-events) * [ADR 033: Inter-module RPC](/sdk/latest/reference/architecture/adr-033-protobuf-inter-module-comm) * [ADR 035: Rosetta API Support](/sdk/latest/reference/architecture/adr-035-rosetta-api-support) * [ADR 037: Governance Split Votes](/sdk/latest/reference/architecture/adr-037-gov-split-vote) * [ADR 038: State Listening](/sdk/latest/reference/architecture/adr-038-state-listening) * [ADR 039: Epoched Staking](/sdk/latest/reference/architecture/adr-039-epoched-staking) * [ADR 040: Storage and SMT State Commitments](/sdk/latest/reference/architecture/adr-040-storage-and-smt-state-commitments) * [ADR 046: Module Params](/sdk/latest/reference/architecture/adr-046-module-params) * [ADR 054: Semver Compatible SDK Modules](/sdk/latest/reference/architecture/adr-054-semver-compatible-modules) * [ADR 057: App Wiring](/sdk/latest/reference/architecture/adr-057-app-wiring) * [ADR 059: Test Scopes](/sdk/latest/reference/architecture/adr-059-test-scopes) * [ADR 062: Collections State Layer](/sdk/latest/reference/architecture/adr-062-collections-state-layer) * [ADR 063: Core Module 
API](/sdk/latest/reference/architecture/adr-063-core-module-api) * [ADR 065: Store V2](/sdk/latest/reference/architecture/adr-065-store-v2) * [ADR 076: Transaction Malleability Risk Review and Recommendations](/sdk/latest/reference/architecture/adr-076-tx-malleability) ### Draft * [ADR 044: Guidelines for Updating Protobuf Definitions](/sdk/latest/reference/architecture/adr-044-protobuf-updates-guidelines) * [ADR 047: Extend Upgrade Plan](/sdk/latest/reference/architecture/adr-047-extend-upgrade-plan) * [ADR 053: Go Module Refactoring](/sdk/latest/reference/architecture/adr-053-go-module-refactoring) * [ADR 068: Preblock](/sdk/latest/reference/architecture/adr-068-preblock) # ADR 002: SDK Documentation Structure Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-002-docs-structure There is a need for a scalable structure of the Cosmos SDK documentation. Current documentation includes a lot of non-related Cosmos SDK material, is difficult to maintain and hard to follow as a user. ## Context There is a need for a scalable structure of the Cosmos SDK documentation. Current documentation includes a lot of non-related Cosmos SDK material, is difficult to maintain and hard to follow as a user. Ideally, we would have: * All docs related to dev frameworks or tools live in their respective github repos (sdk repo would contain sdk docs, hub repo would contain hub docs, lotion repo would contain lotion docs, etc.) * All other docs (faqs, whitepaper, high-level material about Cosmos) would live on the website. 
## Decision Re-structure the `/docs` folder of the Cosmos SDK github repo as follows: ```text expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} docs/ ├── README ├── intro/ ├── concepts/ │ ├── baseapp │ ├── types │ ├── store │ ├── server │ ├── modules/ │ │ ├── keeper │ │ ├── handler │ │ ├── cli │ ├── gas │ └── commands ├── clients/ │ ├── lite/ │ ├── service-providers ├── modules/ ├── spec/ ├── translations/ └── architecture/ ``` The files in each sub-folder do not matter and will likely change. What matters is the sectioning: * `README`: Landing page of the docs. * `intro`: Introductory material. Goal is to have a short explainer of the Cosmos SDK and then channel people to the resource they need. The [Cosmos SDK tutorial](https://github.com/cosmos/sdk-application-tutorial/) will be highlighted, as well as the `godocs`. * `concepts`: Contains high-level explanations of the abstractions of the Cosmos SDK. It does not contain specific code implementation and does not need to be updated often. **It is not an API specification of the interfaces**. API spec is the `godoc`. * `clients`: Contains specs and info about the various Cosmos SDK clients. * `spec`: Contains specs of modules, and others. * `modules`: Contains links to `godocs` and the spec of the modules. * `architecture`: Contains architecture-related docs like the present one. * `translations`: Contains different translations of the documentation. Website docs sidebar will only include the following sections: * `README` * `intro` * `concepts` * `clients` `architecture` need not be displayed on the website. ## Status Accepted ## Consequences ### Positive * Much clearer organization of the Cosmos SDK docs. * The `/docs` folder now only contains Cosmos SDK and gaia related material. Later, it will only contain Cosmos SDK related material. * Developers only have to update `/docs` folder when they open a PR (and not `/examples` for example). 
* Easier for developers to find what they need to update in the docs thanks to reworked architecture. * Cleaner vuepress build for website docs. * Will help build an executable doc (cf [Link](https://github.com/cosmos/cosmos-sdk/issues/2611)) ### Neutral * We need to move a bunch of deprecated stuff to `/_attic` folder. * We need to integrate content in `sdk/docs/core` in `concepts`. * We need to move all the content that currently lives in `docs` and does not fit in new structure (like `lotion`, intro material, whitepaper) to the website repository. * Update `DOCS_README.md` ## References * [Link](https://github.com/cosmos/cosmos-sdk/issues/1460) * [Link](https://github.com/cosmos/cosmos-sdk/pull/2695) * [Link](https://github.com/cosmos/cosmos-sdk/issues/2611) # ADR 003: Dynamic Capability Store Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-003-dynamic-capability-store 12 December 2019: Initial version 02 April 2020: Memory Store Revisions ## Changelog * 12 December 2019: Initial version * 02 April 2020: Memory Store Revisions ## Context Full implementation of the [IBC specification](https://github.com/cosmos/ibc) requires the ability to create and authenticate object-capability keys at runtime (i.e., during transaction execution), as described in [ICS 5](https://github.com/cosmos/ibc/tree/master/spec/core/ics-005-port-allocation#technical-specification). In the IBC specification, capability keys are created for each newly initialized port & channel, and are used to authenticate future usage of the port or channel. Since channels and potentially ports can be initialized during transaction execution, the state machine must be able to create object-capability keys at this time. At present, the Cosmos SDK does not have the ability to do this. 
Object-capability keys are currently pointers (memory addresses) of `StoreKey` structs created at application initialisation in `app.go` ([example](https://github.com/cosmos/gaia/blob/dcbddd9f04b3086c0ad07ee65de16e7adedc7da4/app/app.go#L132)) and passed to Keepers as fixed arguments ([example](https://github.com/cosmos/gaia/blob/dcbddd9f04b3086c0ad07ee65de16e7adedc7da4/app/app.go#L160)). Keepers cannot create or store capability keys during transaction execution — although they could call `NewKVStoreKey` and take the memory address of the returned struct, storing this in the Merklised store would result in a consensus fault, since the memory address will be different on each machine (this is intentional — were this not the case, the keys would be predictable and couldn't serve as object capabilities). Keepers need a way to keep a private map of store keys which can be altered during transaction execution, along with a suitable mechanism for regenerating the unique memory addresses (capability keys) in this map whenever the application is started or restarted, and a mechanism to revert capability creation on tx failure. This ADR proposes such an interface & mechanism. ## Decision The Cosmos SDK will include a new `CapabilityKeeper` abstraction, which is responsible for provisioning, tracking, and authenticating capabilities at runtime. During application initialisation in `app.go`, the `CapabilityKeeper` will be hooked up to modules through unique function references (by calling `ScopeToModule`, defined below) so that it can identify the calling module when later invoked. When the initial state is loaded from disk, the `CapabilityKeeper`'s `Initialize` function will create new capability keys for all previously allocated capability identifiers (allocated during execution of past transactions and assigned to particular modules), and keep them in a memory-only store while the chain is running.
The `CapabilityKeeper` will include a persistent `KVStore`, a `MemoryStore`, and an in-memory map. The persistent `KVStore` tracks which capability is owned by which modules. The `MemoryStore` stores a forward mapping from (module name, capability) tuples to capability names, and a reverse mapping from (module name, capability name) tuples to the capability index. Since we cannot marshal the capability into a `KVStore` and unmarshal without changing the memory location of the capability, the reverse mapping in the memory store will simply map to an index. This index can then be used as a key in the ephemeral go-map to retrieve the capability at the original memory location. The `CapabilityKeeper` will define the following types & functions: The `Capability` is similar to `StoreKey`, but has a globally unique `Index()` instead of a name. A `String()` method is provided for debugging. A `Capability` is simply a struct, the address of which is taken for the actual capability. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type Capability struct { index uint64 } ``` A `CapabilityKeeper` contains a persistent store key, memory store key, and mapping of allocated module names. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type CapabilityKeeper struct { persistentKey StoreKey memKey StoreKey capMap map[uint64]*Capability moduleNames map[string]interface{ } sealed bool } ``` The `CapabilityKeeper` provides the ability to create *scoped* sub-keepers which are tied to a particular module name. These `ScopedCapabilityKeeper`s must be created at application initialisation and passed to modules, which can then use them to claim capabilities they receive and retrieve capabilities which they own by name, in addition to creating new capabilities & authenticating capabilities passed by other modules.
```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type ScopedCapabilityKeeper struct { persistentKey StoreKey memKey StoreKey capMap map[uint64]*Capability moduleName string } ``` `ScopeToModule` is used to create a scoped sub-keeper with a particular name, which must be unique. It MUST be called before `InitializeAndSeal`. ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (ck CapabilityKeeper) ScopeToModule(moduleName string) ScopedCapabilityKeeper { if ck.sealed { panic("cannot scope to module via a sealed capability keeper") } if _, ok := ck.moduleNames[moduleName]; ok { panic(fmt.Sprintf("cannot create multiple scoped keepers for the same module name: %s", moduleName)) } ck.moduleNames[moduleName] = struct{ }{ } return ScopedCapabilityKeeper{ persistentKey: ck.persistentKey, memKey: ck.memKey, capMap: ck.capMap, moduleName: moduleName, } } ``` `InitializeAndSeal` MUST be called exactly once, after loading the initial state and creating all necessary `ScopedCapabilityKeeper`s, in order to populate the memory store with newly-created capability keys in accordance with the keys previously claimed by particular modules and prevent the creation of any new `ScopedCapabilityKeeper`s.
```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (ck CapabilityKeeper) InitializeAndSeal(ctx Context) { if ck.sealed { panic("capability keeper is sealed") } persistentStore := ctx.KVStore(ck.persistentKey) memStore := ctx.KVStore(ck.memKey) // initialise memory store for all names in persistent store for index, value := range persistentStore.Iter() { capability := &Capability{ index: index } for moduleAndCapability := range value { moduleName, capabilityName := moduleAndCapability.Split("/") memStore.Set(moduleName + "/fwd/" + capability, capabilityName) memStore.Set(moduleName + "/rev/" + capabilityName, index) ck.capMap[index] = capability } } ck.sealed = true } ``` `NewCapability` can be called by any module to create a new unique, unforgeable object-capability reference. The newly created capability is automatically persisted; the calling module need not call `ClaimCapability`. ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (sck ScopedCapabilityKeeper) NewCapability(ctx Context, name string) (Capability, error) { persistentStore := ctx.KVStore(sck.persistentKey) memStore := ctx.KVStore(sck.memKey) // check name not taken in memory store if memStore.Get(sck.moduleName + "/rev/" + name) != nil { return nil, errors.New("name already taken") } // fetch the current index index := persistentStore.Get("index") // create a new capability capability := &Capability{ index: index } // set persistent store persistentStore.Set(index, Set.singleton(sck.moduleName + "/" + name)) // update the index index++ persistentStore.Set("index", index) // set forward mapping in memory store from capability to name memStore.Set(sck.moduleName + "/fwd/" + capability, name) // set reverse mapping in memory store from name to index memStore.Set(sck.moduleName + "/rev/" + name, index) // set the in-memory mapping from index to capability pointer capMap[index] = capability // return the newly created capability return capability, nil } ``` `AuthenticateCapability` can be
called by any module to check that a capability does in fact correspond to a particular name (the name can be untrusted user input) with which the calling module previously associated it. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (sck ScopedCapabilityKeeper) AuthenticateCapability(name string, capability Capability) bool { // return whether forward mapping in memory store matches name return memStore.Get(sck.moduleName + "/fwd/" + capability) == name } ``` `ClaimCapability` allows a module to claim a capability key which it has received from another module so that future `GetCapability` calls will succeed. `ClaimCapability` MUST be called if a module which receives a capability wishes to access it by name in the future. Capabilities are multi-owner, so if multiple modules have a single `Capability` reference, they will all own it. ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (sck ScopedCapabilityKeeper) ClaimCapability(ctx Context, capability Capability, name string) error { persistentStore := ctx.KVStore(sck.persistentKey) memStore := ctx.KVStore(sck.memKey) // set forward mapping in memory store from capability to name memStore.Set(sck.moduleName + "/fwd/" + capability, name) // set reverse mapping in memory store from name to index memStore.Set(sck.moduleName + "/rev/" + name, capability.Index()) // update owner set in persistent store owners := persistentStore.Get(capability.Index()) owners.add(sck.moduleName + "/" + name) persistentStore.Set(capability.Index(), owners) return nil } ``` `GetCapability` allows a module to fetch a capability which it has previously claimed by name. The module is not allowed to retrieve capabilities which it does not own.
```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (sck ScopedCapabilityKeeper) GetCapability(ctx Context, name string) (Capability, error) {
    // fetch the index of the capability using the reverse mapping in the memory store
    index := memStore.Get(sck.moduleName + "/rev/" + name)
    if index == nil {
        return nil, errors.New("capability not owned by module")
    }

    // fetch the capability from the go-map using the index
    capability := capMap[index]

    // return the capability
    return capability, nil
}
```

`ReleaseCapability` allows a module to release a capability which it had previously claimed. If no more owners exist, the capability will be deleted globally.

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (sck ScopedCapabilityKeeper) ReleaseCapability(ctx Context, capability Capability) error {
    persistentStore := ctx.KVStore(sck.persistentKey)

    name := memStore.Get(sck.moduleName + "/fwd/" + capability)
    if name == nil {
        return errors.New("capability not owned by module")
    }

    // delete forward mapping in memory store
    memStore.Delete(sck.moduleName + "/fwd/" + capability)

    // delete reverse mapping in memory store
    memStore.Delete(sck.moduleName + "/rev/" + name)

    // update owner set in persistent store
    owners := persistentStore.Get(capability.Index())
    owners.remove(sck.moduleName + "/" + name)
    if owners.size() > 0 {
        // there are still other owners, keep the capability around
        persistentStore.Set(capability.Index(), owners)
    } else {
        // no more owners, delete the capability
        persistentStore.Delete(capability.Index())
        delete(capMap, capability.Index())
    }

    return nil
}
```

### Usage patterns

#### Initialisation

Any modules which use dynamic capabilities must be provided a `ScopedCapabilityKeeper` in `app.go`:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
ck := NewCapabilityKeeper(persistentKey, memoryKey)
mod1Keeper := NewMod1Keeper(ck.ScopeToModule("mod1"), ....)
mod2Keeper := NewMod2Keeper(ck.ScopeToModule("mod2"), ....)
// other initialisation logic ...

// load initial state...

ck.InitializeAndSeal(initialContext)
```

#### Creating, passing, claiming and using capabilities

Consider the case where `mod1` wants to create a capability, associate it with a resource (e.g. an IBC channel) by name, then pass it to `mod2` which will use it later:

Module 1 would have the following code:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
capability := scopedCapabilityKeeper.NewCapability(ctx, "resourceABC")
mod2Keeper.SomeFunction(ctx, capability, args...)
```

`SomeFunction`, running in module 2, could then claim the capability:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (k Mod2Keeper) SomeFunction(ctx Context, capability Capability) {
    k.sck.ClaimCapability(ctx, capability, "resourceABC")
    // other logic...
}
```

Later on, module 2 can retrieve that capability by name and pass it to module 1, which will authenticate it against the resource:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (k Mod2Keeper) SomeOtherFunction(ctx Context, name string) {
    capability := k.sck.GetCapability(ctx, name)
    mod1.UseResource(ctx, capability, "resourceABC")
}
```

Module 1 will then check that this capability key is authenticated to use the resource before allowing module 2 to use it:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (k Mod1Keeper) UseResource(ctx Context, capability Capability, resource string) error {
    if !k.sck.AuthenticateCapability(resource, capability) {
        return errors.New("unauthenticated")
    }
    // do something with the resource
    return nil
}
```

If module 2 passed the capability key to module 3, module 3 could then claim it and call module 1 just like module 2 did (in which case module 1, module 2, and module 3 would all be able to use this capability).

## Status

Proposed.
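The create/pass/claim/authenticate flow above can be modeled end-to-end with plain Go maps standing in for the persistent `KVStore` and `MemoryStore`. This is a sketch for illustration only: `miniKeeper` and its methods are invented names, not the SDK's capability API, and the unforgeability property here comes simply from the `*capability` pointer being the identity.

```go
package main

import "errors"

// capability models the unforgeable CapabilityKey: its identity is the
// pointer itself, not the index stored inside it.
type capability struct{ index uint64 }

// miniKeeper models the shared state from the ADR: the index counter, the
// index -> pointer go-map (capMap), and the per-module forward/reverse
// mappings that the MemoryStore keys "<module>/fwd/<cap>" and
// "<module>/rev/<name>" encode.
type miniKeeper struct {
	nextIndex uint64
	capMap    map[uint64]*capability
	fwd       map[*capability]map[string]string // capability -> module -> name
	rev       map[string]map[string]uint64      // module -> name -> index
}

func newMiniKeeper() *miniKeeper {
	return &miniKeeper{
		capMap: map[uint64]*capability{},
		fwd:    map[*capability]map[string]string{},
		rev:    map[string]map[string]uint64{},
	}
}

// NewCapability mirrors ScopedCapabilityKeeper.NewCapability: refuse duplicate
// names, mint a fresh index, and record both mappings.
func (k *miniKeeper) NewCapability(module, name string) (*capability, error) {
	if k.rev[module] == nil {
		k.rev[module] = map[string]uint64{}
	}
	if _, taken := k.rev[module][name]; taken {
		return nil, errors.New("name already taken")
	}
	c := &capability{index: k.nextIndex}
	k.nextIndex++
	k.capMap[c.index] = c
	k.fwd[c] = map[string]string{module: name}
	k.rev[module][name] = c.index
	return c, nil
}

// ClaimCapability lets a receiving module become a co-owner by name.
func (k *miniKeeper) ClaimCapability(module string, c *capability, name string) {
	if k.rev[module] == nil {
		k.rev[module] = map[string]uint64{}
	}
	k.fwd[c][module] = name
	k.rev[module][name] = c.index
}

// AuthenticateCapability checks the forward mapping, as in the ADR.
func (k *miniKeeper) AuthenticateCapability(module string, c *capability, name string) bool {
	return k.fwd[c][module] == name
}

// GetCapability resolves name -> index -> pointer via the reverse mapping.
func (k *miniKeeper) GetCapability(module, name string) (*capability, error) {
	idx, ok := k.rev[module][name]
	if !ok {
		return nil, errors.New("capability not owned by module")
	}
	return k.capMap[idx], nil
}
```

Note how `GetCapability` must go through the index and the go-map: the reverse mapping cannot store the pointer directly in a real `MemoryStore`, which is exactly the extra level of indirection listed under the consequences.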
## Consequences ### Positive * Dynamic capability support. * Allows CapabilityKeeper to return same capability pointer from go-map while reverting any writes to the persistent `KVStore` and in-memory `MemoryStore` on tx failure. ### Negative * Requires an additional keeper. * Some overlap with existing `StoreKey` system (in the future they could be combined, since this is a superset functionality-wise). * Requires an extra level of indirection in the reverse mapping, since MemoryStore must map to index which must then be used as key in a go map to retrieve the actual capability ### Neutral (none known) ## References * [Original discussion](https://github.com/cosmos/cosmos-sdk/pull/5230#discussion_r343978513) # ADR 004: Split Denomination Keys Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-004-split-denomination-keys 2020-01-08: Initial version 2020-01-09: Alterations to handle vesting accounts 2020-01-14: Updates from review feedback 2020-01-30: Updates from implementation ## Changelog * 2020-01-08: Initial version * 2020-01-09: Alterations to handle vesting accounts * 2020-01-14: Updates from review feedback * 2020-01-30: Updates from implementation ### Glossary * denom / denomination key -- unique token identifier. ## Context With permissionless IBC, anyone will be able to send arbitrary denominations to any other account. Currently, all non-zero balances are stored along with the account in an `sdk.Coins` struct, which creates a potential denial-of-service concern, as too many denominations will become expensive to load & store each time the account is modified. See issues [5467](https://github.com/cosmos/cosmos-sdk/issues/5467) and [4982](https://github.com/cosmos/cosmos-sdk/issues/4982) for additional context. 
Simply rejecting incoming deposits after a denomination count limit doesn't work, since it opens up a griefing vector: someone could send a user lots of nonsensical coins over IBC, and then prevent the user from receiving real denominations (such as staking rewards). ## Decision Balances shall be stored per-account & per-denomination under a denomination- and account-unique key, thus enabling O(1) read & write access to the balance of a particular account in a particular denomination. ### Account interface (x/auth) `GetCoins()` and `SetCoins()` will be removed from the account interface, since coin balances will now be stored in & managed by the bank module. The vesting account interface will replace `SpendableCoins` in favor of `LockedCoins` which does not require the account balance anymore. In addition, `TrackDelegation()` will now accept the account balance of all tokens denominated in the vesting balance instead of loading the entire account balance. Vesting accounts will continue to store original vesting, delegated free, and delegated vesting coins (which is safe since these cannot contain arbitrary denominations). ### Bank keeper (x/bank) The following APIs will be added to the `x/bank` keeper: * `GetAllBalances(ctx Context, addr AccAddress) Coins` * `GetBalance(ctx Context, addr AccAddress, denom string) Coin` * `SetBalance(ctx Context, addr AccAddress, coin Coin)` * `LockedCoins(ctx Context, addr AccAddress) Coins` * `SpendableCoins(ctx Context, addr AccAddress) Coins` Additional APIs may be added to facilitate iteration and auxiliary functionality not essential to core functionality or persistence. 
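The per-account, per-denomination keying that these APIs rely on can be sketched with a flat map standing in for the `KVStore`. Everything here is illustrative: `miniBank`, string addresses, and `int64` amounts are stand-ins for the real keeper, raw address bytes, and `sdk.Int`.

```go
package main

import (
	"fmt"
	"strings"
)

// balancesPrefix mirrors the ADR's store prefix; the full key for one balance
// is balances/{address}/{denom}, giving O(1) access per (account, denom) pair.
var balancesPrefix = []byte("balances/")

// balanceKey builds the composite key. A real store would use raw address
// bytes; a bech32-like string stands in here for readability.
func balanceKey(addr, denom string) string {
	return string(balancesPrefix) + addr + "/" + denom
}

// miniBank models the bank keeper's per-denomination balance storage with a
// flat map standing in for the KVStore.
type miniBank struct{ store map[string]int64 }

func newMiniBank() *miniBank { return &miniBank{store: map[string]int64{}} }

// SetBalance writes a single (account, denom) entry, as in the ADR.
func (b *miniBank) SetBalance(addr, denom string, amt int64) error {
	if amt < 0 {
		return fmt.Errorf("invalid balance %d%s", amt, denom)
	}
	b.store[balanceKey(addr, denom)] = amt
	return nil
}

// GetBalance reads one denomination without touching the account's other
// balances — the O(1) access the decision is after.
func (b *miniBank) GetBalance(addr, denom string) int64 {
	return b.store[balanceKey(addr, denom)]
}

// GetAllBalances scans the address prefix; ordering keys address-first makes
// this a contiguous range scan in a real KVStore, which is why the ADR stores
// the address before the denomination.
func (b *miniBank) GetAllBalances(addr string) map[string]int64 {
	out := map[string]int64{}
	prefix := string(balancesPrefix) + addr + "/"
	for k, v := range b.store {
		if strings.HasPrefix(k, prefix) {
			out[strings.TrimPrefix(k, prefix)] = v
		}
	}
	return out
}
```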
Balances will be stored first by the address, then by the denomination (the reverse is also possible, but retrieval of all balances for a single account is presumed to be more frequent):

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
var BalancesPrefix = []byte("balances")

func (k Keeper) SetBalance(ctx Context, addr AccAddress, balance Coin) error {
    if !balance.IsValid() {
        return errors.New("invalid balance: " + balance.String())
    }

    store := ctx.KVStore(k.storeKey)
    balancesStore := prefix.NewStore(store, BalancesPrefix)
    accountStore := prefix.NewStore(balancesStore, addr.Bytes())

    bz := Marshal(balance)
    accountStore.Set([]byte(balance.Denom), bz)

    return nil
}
```

This will result in the balances being indexed by the byte representation of `balances/{address}/{denom}`.

`DelegateCoins()` and `UndelegateCoins()` will be altered to only load each individual account balance by denomination found in the (un)delegation amount. As a result, any mutations to the account balance will be made by denomination.

`SubtractCoins()` and `AddCoins()` will be altered to read & write the balances directly instead of calling `GetCoins()` / `SetCoins()` (which no longer exist).

`trackDelegation()` and `trackUndelegation()` will be altered to no longer update account balances.

External APIs will need to scan all balances under an account to retain backwards-compatibility. It is advised that these APIs use `GetBalance` and `SetBalance` instead of `GetAllBalances` when possible so as not to load the entire account balance.

### Supply module

The supply module, in order to implement the total supply invariant, will now need to scan all accounts & call `GetAllBalances` using the `x/bank` Keeper, then sum the balances and check that they match the expected total supply.

## Status

Accepted.

## Consequences

### Positive

* O(1) reads & writes of balances (with respect to the number of denominations for which an account has non-zero balances).
Note, this does not relate to the actual I/O cost, rather the total number of direct reads needed. ### Negative * Slightly less efficient reads/writes when reading & writing all balances of a single account in a transaction. ### Neutral None in particular. ## References * Ref: [Link](https://github.com/cosmos/cosmos-sdk/issues/4982) * Ref: [Link](https://github.com/cosmos/cosmos-sdk/issues/5467) * Ref: [Link](https://github.com/cosmos/cosmos-sdk/issues/5492) # ADR 006: Secret Store Replacement Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-006-secret-store-replacement July 29th, 2019: Initial draft September 11th, 2019: Work has started November 4th: Cosmos SDK changes merged in November 18th: Gaia changes merged in ## Changelog * July 29th, 2019: Initial draft * September 11th, 2019: Work has started * November 4th: Cosmos SDK changes merged in * November 18th: Gaia changes merged in ## Context Currently, a Cosmos SDK application's CLI directory stores key material and metadata in a plain text database in the user's home directory. Key material is encrypted by a passphrase, protected by the bcrypt hashing algorithm. Metadata (e.g. addresses, public keys, key storage details) is available in plain text. This is not desirable for a number of reasons. Perhaps the biggest reason is insufficient security protection of key material and metadata. Leaking the plain text allows an attacker to surveil what keys a given computer controls via a number of techniques, like compromised dependencies without any privileged execution. This could be followed by a more targeted attack on a particular user/computer. All modern desktop operating systems (Ubuntu, Debian, macOS, Windows) provide a built-in secret store that is designed to allow applications to store information that is isolated from all other applications and requires passphrase entry to access the data. 
We are seeking a solution that provides a common abstraction layer to the many different backends and reasonable fallback for minimal platforms that don't provide a native secret store. ## Decision We recommend replacing the current Keybase backend based on LevelDB with [Keyring](https://github.com/99designs/keyring) by 99designs. This application is designed to provide a common abstraction and uniform interface between many secret stores and is used by the AWS Vault application by 99designs. This appears to fulfill the requirement of protecting both key material and metadata from rogue software on a user's machine. ## Status Accepted ## Consequences ### Positive Increased safety for users. ### Negative Users must manually migrate. Testing against all supported backends is difficult. Running tests locally on a Mac requires numerous repetitive password entries. ### Neutral No neutral consequences identified. ## References * \#4754 Switch secret store to the keyring secret store (original PR by @poldsam) \[**CLOSED**] * \#5029 Add support for github.com/99designs/keyring-backed keybases \[**MERGED**] * \#5097 Add keys migrate command \[**MERGED**] * \#5180 Drop on-disk keybase in favor of keyring \[*PENDING\_REVIEW*] * cosmos/gaia#164 Drop on-disk keybase in favor of keyring (gaia's changes) \[*PENDING\_REVIEW*] # ADR 007: Specialization Groups Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-007-specialization-groups 2019 Jul 31: Initial Draft ## Changelog * 2019 Jul 31: Initial Draft ## Context This idea was first conceived of in order to fulfill the use case of the creation of a decentralized Computer Emergency Response Team (dCERT), whose members would be elected by a governing community and would fulfill the role of coordinating the community under emergency situations. This thinking can be further abstracted into the conception of "blockchain specialization groups". 
The creation of these groups is the beginning of specialization capabilities within a wider blockchain community which could be used to enable a certain level of delegated responsibilities. Examples of specialization which could be beneficial to a blockchain community include: code auditing, emergency response, code development, etc. This type of community organization paves the way for individual stakeholders to delegate votes by issue type, if in the future governance proposals include a field for issue type.

## Decision

A specialization group can be broadly broken down into the following functions (herein containing examples):

* Membership Admittance
* Membership Acceptance
* Membership Revocation
  * (probably) Without Penalty
    * member steps down (self-Revocation)
    * replaced by new member from governance
  * (probably) With Penalty
    * due to breach of soft-agreement (determined through governance)
    * due to breach of hard-agreement (determined by code)
* Execution of Duties
  * Special transactions which only execute for members of a specialization group (for example, dCERT members voting to turn off transaction routes in an emergency scenario)
* Compensation
  * Group compensation (further distribution decided by the specialization group)
  * Individual compensation for all constituents of a group from the greater community

Membership admittance to a specialization group could take place over a wide variety of mechanisms. The most obvious example is through a general vote among the entire community, however in certain systems a community may want to allow the members already in a specialization group to internally elect new members, or maybe the community may assign a permission to a particular specialization group to appoint members to other 3rd party groups. The sky is really the limit as to how membership admittance can be structured. We attempt to capture some of these possibilities in a common interface dubbed the `Electionator`.
For its initial implementation as a part of this ADR we recommend that the general election abstraction (`Electionator`) is provided as well as a basic implementation of that abstraction which allows for a continuous election of members of a specialization group. ```golang expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // The Electionator abstraction covers the concept space for // a wide variety of election kinds. type Electionator interface { // is the election object accepting votes. Active() bool // functionality to execute for when a vote is cast in this election, here // the vote field is anticipated to be marshalled into a vote type used // by an election. // // NOTE There are no explicit ids here. Just votes which pertain specifically // to one electionator. Anyone can create and send a vote to the electionator item // which will presumably attempt to marshal those bytes into a particular struct // and apply the vote information in some arbitrary way. There can be multiple // Electionators within the Cosmos-Hub for multiple specialization groups, votes // would need to be routed to the Electionator upstream of here. Vote(addr sdk.AccAddress, vote []byte) // here lies all functionality to authenticate and execute changes for // when a member accepts being elected AcceptElection(sdk.AccAddress) // Register a revoker object RegisterRevoker(Revoker) // No more revokers may be registered after this function is called SealRevokers() // register hooks to call when an election actions occur RegisterHooks(ElectionatorHooks) // query for the current winner(s) of this election based on arbitrary // election ruleset QueryElected() []sdk.AccAddress // query metadata for an address in the election this // could include for example position that an address // is being elected for within a group // // this metadata may be directly related to // voting information and/or privileges enabled // to members within a group. 
    QueryMetadata(sdk.AccAddress) []byte
}

// ElectionatorHooks, once registered with an Electionator,
// trigger execution of relevant interface functions when
// Electionator events occur.
type ElectionatorHooks interface {
    AfterVoteCast(addr sdk.AccAddress, vote []byte)
    AfterMemberAccepted(addr sdk.AccAddress)
    AfterMemberRevoked(addr sdk.AccAddress, cause []byte)
}

// Revoker defines the function required for a membership revocation rule-set
// used by a specialization group. This could be used to create self-revoking,
// evidence-based revoking, etc. Revoker types may be created and
// reused for different election types.
//
// When revoking, the "cause" bytes may be arbitrarily marshalled into evidence,
// memos, etc.
type Revoker interface {
    RevokeName() string // identifier for this revoker type
    RevokeMember(addr sdk.AccAddress, cause []byte) error
}
```

A certain level of commonality likely exists between the existing code within `x/governance` and the required functionality of elections. This common functionality should be abstracted during implementation. Similarly, for each vote implementation, client CLI/REST functionality should be abstracted so it can be reused for multiple elections.

The specialization group abstraction firstly extends the `Electionator` but also further defines traits of the group.

```golang expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type SpecializationGroup interface {
    Electionator
    GetName() string
    GetDescription() string

    // general soft contract the group is expected
    // to fulfill with the greater community
    GetContract() string

    // messages which can be executed by the members of the group
    Handler(ctx sdk.Context, msg sdk.Msg) sdk.Result

    // logic to be executed at endblock, this may for instance
    // include payment of a stipend to the group members
    // for participation in the security group.
EndBlocker(ctx sdk.Context) } ``` ## Status > Proposed ## Consequences ### Positive * increases specialization capabilities of a blockchain * improve abstractions in `x/gov/` such that they can be used with specialization groups ### Negative * could be used to increase centralization within a community ### Neutral ## References * [dCERT ADR](/sdk/v0.50/build/architecture/adr-008-dCERT-group) # ADR 008: Decentralized Computer Emergency Response Team (dCERT) Group Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-008-dCERT-group 2019 Jul 31: Initial Draft ## Changelog * 2019 Jul 31: Initial Draft ## Context In order to reduce the number of parties involved with handling sensitive information in an emergency scenario, we propose the creation of a specialization group named The Decentralized Computer Emergency Response Team (dCERT). Initially this group's role is intended to serve as coordinators between various actors within a blockchain community such as validators, bug-hunters, and developers. During a time of crisis, the dCERT group would aggregate and relay input from a variety of stakeholders to the developers who are actively devising a patch to the software, this way sensitive information does not need to be publicly disclosed while some input from the community can still be gained. Additionally, a special privilege is proposed for the dCERT group: the capacity to "circuit-break" (aka. temporarily disable) a particular message path. Note that this privilege should be enabled/disabled globally with a governance parameter such that this privilege could start disabled and later be enabled through a parameter change proposal, once a dCERT group has been established. 
In the future it is foreseeable that the community may wish to expand the roles of dCERT with further responsibilities such as the capacity to "pre-approve" a security update on behalf of the community prior to a full community-wide vote, whereby the sensitive information would be revealed prior to a vulnerability being patched on the live network.

## Decision

The dCERT group is proposed to include an implementation of a `SpecializationGroup` as defined in [ADR 007](/sdk/v0.50/build/architecture/adr-007-specialization-groups). This will include the implementation of:

* continuous voting
* slashing due to breach of soft contract
* revoking a member due to breach of soft contract
* emergency disband of the entire dCERT group (ex. for colluding maliciously)
* compensation stipend from the community pool or other means decided by governance

This system necessitates the following new parameters:

* per-block stipend allowance per dCERT member
* maximum number of dCERT members
* required staked slashable tokens for each dCERT member
* quorum for suspending a particular member
* proposal wager for disbanding the dCERT group
* stabilization period for dCERT member transition
* circuit break dCERT privileges enabled

These parameters are expected to be implemented through the param keeper such that governance may change them at any given point.

### Continuous Voting Electionator

An `Electionator` object is to be implemented as continuous voting and with the following specifications:

* All delegation addresses may submit votes at any point which updates their preferred representation on the dCERT group.
* Preferred representation may be arbitrarily split between addresses (ex. 50% to John, 25% to Sally, 25% to Carol)
* In order for a new member to be added to the dCERT group they must send a transaction accepting their admission, at which point the validity of their admission is to be confirmed.
* A sequence number is assigned when a member is added to the dCERT group.
If a member leaves the dCERT group and then enters back, a new sequence number is assigned.

* Addresses which control the greatest amount of preferred-representation are eligible to join the dCERT group (up to the *maximum number of dCERT members*). If the dCERT group is already full and a new member is admitted, the existing dCERT member with the lowest amount of votes is kicked from the dCERT group.
  * In the split situation where the dCERT group is full but a vying candidate has the same number of votes as an existing dCERT member, the existing member should maintain its position.
  * In the split situation where somebody must be kicked out but the two addresses with the smallest number of votes have the same number of votes, the address with the smallest sequence number maintains its position.
* A stabilization period can be optionally included to reduce the "flip-flopping" of the dCERT membership tail members. If a stabilization period is provided which is greater than 0, when members are kicked due to insufficient support, a queue entry is created which documents which member is to replace which other member. While this entry is in the queue, no new entries to kick that same dCERT member can be made. When the entry matures at the duration of the stabilization period, the new member is instantiated, and the old member kicked.

### Staking/Slashing

All members of the dCERT group must stake tokens *specifically* to maintain eligibility as a dCERT member. These tokens can be staked directly by the vying dCERT member or out of the good will of a 3rd party (who shall gain no on-chain benefits for doing so). This staking mechanism should use the existing global unbonding time of tokens staked for network validator security. A dCERT member can *only be* a member if it has the required tokens staked under this mechanism. If those tokens are unbonded then the dCERT member must be automatically kicked from the group.
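The membership-selection rules above (top-N by vote weight, ties favoring incumbents, and the lowest sequence number surviving a tie among kick candidates) can be sketched as a small admission function. This is illustrative only, with invented names and `int64` vote weights standing in for preferred-representation tallies:

```go
package main

import "sort"

// member models a seated dCERT member: its current preferred-representation
// vote weight and the sequence number assigned when it joined.
type member struct {
	addr  string
	votes int64
	seq   uint64
}

// admit applies the selection rules sketched above for a group capped at
// maxMembers: a challenger displaces the weakest incumbent only with strictly
// more votes (ties keep the incumbent); among equally weak incumbents, the one
// with the smallest sequence number keeps its seat.
func admit(group []member, challenger member, maxMembers int) []member {
	if len(group) < maxMembers {
		return append(group, challenger)
	}
	// Sort strongest-first: by votes descending, then by sequence number
	// ascending, so among tied members the smallest-seq one sorts earlier
	// (and therefore survives; the weakest candidate ends up last).
	sort.Slice(group, func(i, j int) bool {
		if group[i].votes != group[j].votes {
			return group[i].votes > group[j].votes
		}
		return group[i].seq < group[j].seq
	})
	weakest := group[len(group)-1]
	if challenger.votes <= weakest.votes {
		return group // tie or less: the existing member maintains its position
	}
	return append(group[:len(group)-1], challenger)
}
```

A full implementation would also feed kicks through the stabilization-period queue described above rather than swapping members immediately.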
Slashing of a particular dCERT member due to soft-contract breach should be performed by governance on a per-member basis based on the magnitude of the breach. The process flow is anticipated to be that a dCERT member is suspended by the dCERT group prior to being slashed by governance.

Membership suspension by the dCERT group takes place through a voting procedure by the dCERT group members. After this suspension has taken place, a governance proposal to slash the dCERT member must be submitted; if the proposal is not approved by the time the rescinding member has completed unbonding their tokens, then the tokens are no longer staked and unable to be slashed.

Additionally, in the case of an emergency situation of a colluding and malicious dCERT group, the community needs the capability to disband the entire dCERT group and likely fully slash them. This could be achieved through a special new proposal type (implemented as a general governance proposal) which would halt the functionality of the dCERT group until the proposal was concluded. This special proposal type would likely need to also have a fairly large wager which could be slashed if the proposal creator was malicious. A large wager should be required because, as soon as the proposal is made, the capability of the dCERT group to halt message routes is temporarily suspended, meaning that a malicious actor who created such a proposal could then potentially exploit a bug during this period of time, with no dCERT group capable of shutting down the exploitable message routes.

### dCERT membership transactions

Active dCERT members may:

* change the description of the dCERT group
* circuit break a message route
* vote to suspend a dCERT member

Here circuit-breaking refers to the capability to disable a group of messages. This could for instance mean: "disable all staking-delegation messages", or "disable all distribution messages".
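A minimal sketch of such route circuit-breaking is a set of disabled routes consulted before any message is processed. The types and route strings below (e.g. `"staking/delegate"`) are hypothetical stand-ins, not actual SDK route names or APIs:

```go
package main

import (
	"errors"
	"strings"
)

// circuitBreaker models the proposed dCERT privilege: a set of disabled
// message routes consulted before a message is processed.
type circuitBreaker struct {
	broken map[string]bool
}

func newCircuitBreaker() *circuitBreaker {
	return &circuitBreaker{broken: map[string]bool{}}
}

// Break disables a single route or a whole route prefix (e.g. "staking"
// disables every staking message).
func (cb *circuitBreaker) Break(route string) { cb.broken[route] = true }

// CheckRoute is the guard that would run before message processing: reject
// the message if its route, or any prefix of it, has been circuit-broken.
func (cb *circuitBreaker) CheckRoute(route string) error {
	parts := strings.Split(route, "/")
	for i := 1; i <= len(parts); i++ {
		if cb.broken[strings.Join(parts[:i], "/")] {
			return errors.New("message route is circuit-broken: " + route)
		}
	}
	return nil
}
```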
This could be accomplished by verifying that the message route has not been "circuit-broken" at CheckTx time (in `baseapp/baseapp.go`). "Unbreaking" a circuit is anticipated only to occur during a hard-fork upgrade, meaning that no capability to unbreak a message route on a live chain is required.

Note also that if there was a problem with governance voting (for instance, a capability to vote many times) then governance would be broken and should be halted with this mechanism; it would then be up to the validator set to coordinate and hard-fork upgrade to a patched version of the software where governance is re-enabled (and fixed). If the dCERT group abuses this privilege they should all be severely slashed.

## Status

> Proposed

## Consequences

### Positive

* Potential to reduce the number of parties to coordinate with during an emergency
* Reduction in possibility of disclosing sensitive information to malicious parties

### Negative

* Centralization risks

### Neutral

## References

[Specialization Groups ADR](/sdk/v0.50/build/architecture/adr-007-specialization-groups)

# ADR 009: Evidence Module

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-009-evidence-module

2019 July 31: Initial draft 2019 October 24: Initial implementation

## Changelog

* 2019 July 31: Initial draft
* 2019 October 24: Initial implementation

## Status

Accepted

## Context

In order to support building highly secure, robust and interoperable blockchain applications, it is vital for the Cosmos SDK to expose a mechanism in which arbitrary evidence can be submitted, evaluated and verified, resulting in some agreed-upon penalty for any misbehavior committed by a validator, such as equivocation (double-voting), signing when unbonded, signing an incorrect state transition (in the future), etc.
Furthermore, such a mechanism is paramount for any [IBC](https://github.com/cosmos/ics/blob/master/ibc/2_IBC_ARCHITECTURE.md) or cross-chain validation protocol implementation in order to support the ability for any misbehavior to be relayed back from a collateralized chain to a primary chain so that the equivocating validator(s) can be slashed.

## Decision

We will implement an evidence module in the Cosmos SDK supporting the following functionality:

* Provide developers with the abstractions and interfaces necessary to define custom evidence messages, message handlers, and methods to slash and penalize accordingly for misbehavior.
* Support the ability to route evidence messages to handlers in any module to determine the validity of submitted misbehavior.
* Support the ability, through governance, to modify slashing penalties of any evidence type.
* Querier implementation to support querying params, evidence types, and all submitted valid misbehavior.

### Types

First, we define the `Evidence` interface type. The `x/evidence` module may implement its own types that can be used by many chains (e.g. `CounterFactualEvidence`). In addition, other modules may implement their own `Evidence` types in a similar manner in which governance is extensible. It is important to note any concrete type implementing the `Evidence` interface may include arbitrary fields such as an infraction time. We want the `Evidence` type to remain as flexible as possible.

When submitting evidence to the `x/evidence` module, the concrete type must provide the validator's consensus address, which should be known by the `x/slashing` module (assuming the infraction is valid), the height at which the infraction occurred, and the validator's power at that same height.
```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type Evidence interface {
    Route() string
    Type() string
    String() string
    Hash() HexBytes
    ValidateBasic() error

    // The consensus address of the malicious validator at time of infraction
    GetConsensusAddress() ConsAddress

    // Height at which the infraction occurred
    GetHeight() int64

    // The total power of the malicious validator at time of infraction
    GetValidatorPower() int64

    // The total validator set power at time of infraction
    GetTotalPower() int64
}
```

### Routing & Handling

Each `Evidence` type must map to a specific unique route and be registered with the `x/evidence` module. It accomplishes this through the `Router` implementation.

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type Router interface {
    AddRoute(r string, h Handler) Router
    HasRoute(r string) bool
    GetRoute(path string) Handler
    Seal()
}
```

Upon successful routing through the `x/evidence` module, the `Evidence` type is passed through a `Handler`. This `Handler` is responsible for executing all corresponding business logic necessary for verifying the evidence as valid. In addition, the `Handler` may execute any necessary slashing and potential jailing. Since slashing fractions will typically result from some form of static functions, allowing the `Handler` to do this provides the greatest flexibility. An example could be `k * evidence.GetValidatorPower()` where `k` is an on-chain parameter controlled by governance. The `Evidence` type should provide all the external information necessary in order for the `Handler` to make the necessary state transitions. If no error is returned, the `Evidence` is considered valid.
```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type Handler func(Context, Evidence) error
```

### Submission

`Evidence` is submitted through a `MsgSubmitEvidence` message type which is internally handled by the `x/evidence` module's `SubmitEvidence`.

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type MsgSubmitEvidence struct {
    Evidence
}

func handleMsgSubmitEvidence(ctx Context, keeper Keeper, msg MsgSubmitEvidence) Result {
    if err := keeper.SubmitEvidence(ctx, msg.Evidence); err != nil {
        return err.Result()
    }

    // emit events...

    return Result{
        // ...
    }
}
```

The `x/evidence` module's keeper is responsible for matching the `Evidence` against the module's router and invoking the corresponding `Handler`, which may include slashing and jailing the validator. Upon success, the submitted evidence is persisted.

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (k Keeper) SubmitEvidence(ctx Context, evidence Evidence) error {
    handler := k.router.GetRoute(evidence.Route())
    if err := handler(ctx, evidence); err != nil {
        return ErrInvalidEvidence(k.codespace, err)
    }

    k.setEvidence(ctx, evidence)
    return nil
}
```

### Genesis

Finally, we need to represent the genesis state of the `x/evidence` module. The module only needs a list of all submitted valid infractions and any params the module needs in order to handle submitted evidence. The `x/evidence` module will naturally define and route native evidence types, for which it will most likely need slashing penalty constants.
```go
type GenesisState struct {
	Params      Params
	Infractions []Evidence
}
```

## Consequences

### Positive

* Allows the state machine to process misbehavior submitted on-chain and penalize validators based on agreed upon slashing parameters.
* Allows evidence types to be defined and handled by any module. This further allows slashing and jailing to be defined by more complex mechanisms.
* Does not solely rely on Tendermint to submit evidence.

### Negative

* No easy way to introduce new evidence types through governance on a live chain due to the inability to introduce the new evidence type's corresponding handler.

### Neutral

* Should we persist infractions indefinitely? Or should we rather rely on events?

## References

* [ICS](https://github.com/cosmos/ics)
* [IBC Architecture](https://github.com/cosmos/ics/blob/master/ibc/1_IBC_ARCHITECTURE.md)
* [Tendermint Fork Accountability](https://github.com/tendermint/spec/blob/7b3138e69490f410768d9b1ffc7a17abc23ea397/spec/consensus/fork-accountability.md)

# ADR 010: Modular AnteHandler

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-010-modular-antehandler

## Changelog

* 2019 Aug 31: Initial draft
* 2021 Sep 14: Superseded by ADR-045

## Status

SUPERSEDED by ADR-045

## Context

The current AnteHandler design allows users to either use the default AnteHandler provided in `x/auth` or to build their own AnteHandler from scratch. Ideally AnteHandler functionality is split into multiple, modular functions that can be chained together along with custom ante-functions so that users do not have to rewrite common antehandler logic when they want to implement custom behavior. For example, let's say a user wants to implement some custom signature verification logic.
In the current codebase, the user would have to write their own AnteHandler from scratch, largely reimplementing much of the same code, and then set their own custom, monolithic antehandler in the baseapp. Instead, we would like to allow users to specify custom behavior when necessary and combine them with default ante-handler functionality in a way that is as modular and flexible as possible.

## Proposals

### Per-Module AnteHandler

One approach is to use the [ModuleManager](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/types/module) and have each module implement its own antehandler if it requires custom antehandler logic. The ModuleManager can then be passed in an AnteHandler order in the same way it has an order for BeginBlockers and EndBlockers. The ModuleManager returns a single AnteHandler function that will take in a tx and run each module's `AnteHandle` in the specified order. The module manager's AnteHandler is set as the baseapp's AnteHandler.

Pros:

1. Simple to implement.
2. Utilizes the existing ModuleManager architecture.

Cons:

1. Improves granularity but still cannot get more granular than a per-module basis. e.g. If auth's `AnteHandle` function is in charge of validating memo and signatures, users cannot swap the signature-checking functionality while keeping the rest of auth's `AnteHandle` functionality.
2. Module AnteHandlers are run one after the other. There is no way for one AnteHandler to wrap or "decorate" another.

### Decorator Pattern

The [weave project](https://github.com/iov-one/weave) achieves AnteHandler modularity through the use of a decorator pattern.
The interface is designed as follows:

```go
// Decorator wraps a Handler to provide common functionality
// like authentication, or fee-handling, to many Handlers
type Decorator interface {
	Check(ctx Context, store KVStore, tx Tx, next Checker) (*CheckResult, error)
	Deliver(ctx Context, store KVStore, tx Tx, next Deliverer) (*DeliverResult, error)
}
```

Each decorator works like a modularized Cosmos SDK antehandler function, but it can take in a `next` argument that may be another decorator or a Handler (which does not take in a next argument). These decorators can be chained together, one decorator being passed in as the `next` argument of the previous decorator in the chain. The chain ends in a Router which can take a tx and route to the appropriate msg handler.

A key benefit of this approach is that one Decorator can wrap its internal logic around the next Checker/Deliverer. A weave Decorator may do the following:

```go
// Example Decorator's Deliver function
func (example Decorator) Deliver(ctx Context, store KVStore, tx Tx, next Deliverer) {
	// Do some pre-processing logic

	res, err := next.Deliver(ctx, store, tx)

	// Do some post-processing logic given the result and error
}
```

Pros:

1. Weave Decorators can wrap over the next decorator/handler in the chain. The ability to both pre-process and post-process may be useful in certain settings.
2. Provides a nested modular structure that isn't possible in the solution above, while also allowing for a linear one-after-the-other structure like the solution above.

Cons:

1. It is hard to understand at first glance the state updates that would occur after a Decorator runs given the `ctx`, `store`, and `tx`.
A Decorator can have an arbitrary number of nested Decorators being called within its function body, each possibly doing some pre- and post-processing before calling the next decorator on the chain. Thus, to understand what a Decorator is doing, one must also understand what every other decorator further along the chain is doing. This can get quite complicated to understand. A linear, one-after-the-other approach, while less powerful, may be much easier to reason about.

### Chained Micro-Functions

The benefit of Weave's approach is that the Decorators can be very concise, which when chained together allows for maximum customizability. However, the nested structure can get quite complex and thus hard to reason about.

Another approach is to split the AnteHandler functionality into tightly scoped "micro-functions", while preserving the one-after-the-other ordering that would come from the ModuleManager approach. We can then have a way to chain these micro-functions so that they run one after the other.

Modules may define multiple ante micro-functions and then also provide a default per-module AnteHandler that implements a default, suggested order for these micro-functions.

Users can order the AnteHandlers easily by simply using the ModuleManager. The ModuleManager will take in a list of AnteHandlers and return a single AnteHandler that runs each AnteHandler in the order of the list provided. If the user is comfortable with the default ordering of each module, this is as simple as providing a list with each module's antehandler (exactly the same as BeginBlocker and EndBlocker). If, however, users wish to change the order or add, modify, or delete ante micro-functions in any way, they can always define their own ante micro-functions and add them explicitly to the list that gets passed into the module manager.

#### Default Workflow

This is an example of a user's AnteHandler if they choose not to make any custom micro-functions.
##### Cosmos SDK code

```go
// Chainer chains together a list of AnteHandler micro-functions that get run one after the other.
// The returned AnteHandler will abort on the first error.
func Chainer(order []AnteHandler) AnteHandler {
	return func(ctx Context, tx Tx, simulate bool) (newCtx Context, err error) {
		for _, ante := range order {
			ctx, err = ante(ctx, tx, simulate)
			if err != nil {
				return ctx, err
			}
		}
		return ctx, err
	}
}
```

```go
// AnteHandler micro-function to verify signatures
func VerifySignatures(ctx Context, tx Tx, simulate bool) (newCtx Context, err error) {
	// verify signatures
	// Return an error and abort if sigs are invalid
	// Return OK and continue if sigs are valid
}

// AnteHandler micro-function to validate memo
func ValidateMemo(ctx Context, tx Tx, simulate bool) (newCtx Context, err error) {
	// validate memo
}

// Auth defines its own default ante-handler by chaining its micro-functions in a recommended order
AuthModuleAnteHandler := Chainer([]AnteHandler{VerifySignatures, ValidateMemo})
```

```go
// Distribution micro-function to deduct fees from tx
func DeductFees(ctx Context, tx Tx, simulate bool) (newCtx Context, err error) {
	// Deduct fees from tx
	// Abort if insufficient funds in account to pay for fees
}

// Distribution micro-function to check if fees > mempool parameter
func CheckMempoolFees(ctx Context, tx Tx, simulate bool) (newCtx Context, err error) {
	// If CheckTx: Abort if the fees are less than the mempool's minFee parameter
}

// Distribution defines its own default ante-handler by chaining its micro-functions in a recommended order
DistrModuleAnteHandler := Chainer([]AnteHandler{CheckMempoolFees, DeductFees})
```
```go
type ModuleManager struct {
	// other fields
	AnteHandlerOrder []AnteHandler
}

func (mm ModuleManager) GetAnteHandler() AnteHandler {
	return Chainer(mm.AnteHandlerOrder)
}
```

##### User Code

```go
// Note: Since the user is not making any custom modifications, we can just SetAnteHandlerOrder with the default AnteHandlers provided by each module in our preferred order
moduleManager.SetAnteHandlerOrder([]AnteHandler{AuthModuleAnteHandler, DistrModuleAnteHandler})

app.SetAnteHandler(moduleManager.GetAnteHandler())
```

#### Custom Workflow

This is an example workflow for a user that wants to implement custom antehandler logic. In this example, the user wants to implement custom signature verification and change the order of the antehandler so that validate memo runs before signature verification.

##### User Code

```go
// User can implement their own custom signature verification antehandler micro-function
func CustomSigVerify(ctx Context, tx Tx, simulate bool) (newCtx Context, err error) {
	// do some custom signature verification logic
}
```

```go
// Micro-functions allow users to change the order in which they get executed, and swap out default ante-functionality with their own custom logic.
// Note that users can still chain the default distribution module handler, and the auth micro-function, along with their custom ante function
moduleManager.SetAnteHandlerOrder([]AnteHandler{ValidateMemo, CustomSigVerify, DistrModuleAnteHandler})
```

Pros:

1. Allows for ante functionality to be as modular as possible.
2.
For users that do not need custom ante-functionality, there is little difference between how antehandlers work and how BeginBlock and EndBlock work in the ModuleManager.
3. Still easy to understand.

Cons:

1. Cannot wrap antehandlers with decorators like you can with Weave.

### Simple Decorators

This approach takes inspiration from Weave's decorator design while trying to minimize the number of breaking changes to the Cosmos SDK and maximizing simplicity. Like Weave decorators, this approach allows one `AnteDecorator` to wrap the next AnteHandler to do pre- and post-processing on the result. This is useful since decorators can do defer/cleanups after an AnteHandler returns as well as perform some setup beforehand. Unlike Weave decorators, these `AnteDecorator` functions can only wrap over the AnteHandler rather than the entire handler execution path. This is deliberate, as we want decorators from different modules to perform authentication/validation on a `tx`. However, we do not want decorators being capable of wrapping and modifying the results of a `MsgHandler`.

In addition, this approach will not break any core Cosmos SDK APIs. Since we preserve the notion of an AnteHandler and still set a single AnteHandler in baseapp, the decorator is simply an additional approach available for users that desire more customization. The API of modules (namely `x/auth`) may break with this approach, but the core API remains untouched.

Allow an `AnteDecorator` interface whose implementations can be chained together to create a Cosmos SDK AnteHandler. This allows users to choose between implementing an AnteHandler by themselves and setting it in the baseapp, or using the decorator pattern to chain their custom decorators with the Cosmos SDK provided decorators in the order they wish.
```go
// An AnteDecorator wraps an AnteHandler, and can do pre- and post-processing on the next AnteHandler
type AnteDecorator interface {
	AnteHandle(ctx Context, tx Tx, simulate bool, next AnteHandler) (newCtx Context, err error)
}
```

```go
// ChainAnteDecorators will recursively link all of the AnteDecorators in the chain and return a final AnteHandler function
// This is done to preserve the ability to set a single AnteHandler function in the baseapp.
func ChainAnteDecorators(chain ...AnteDecorator) AnteHandler {
	if len(chain) == 1 {
		return func(ctx Context, tx Tx, simulate bool) (Context, error) {
			return chain[0].AnteHandle(ctx, tx, simulate, nil)
		}
	}
	return func(ctx Context, tx Tx, simulate bool) (Context, error) {
		return chain[0].AnteHandle(ctx, tx, simulate, ChainAnteDecorators(chain[1:]...))
	}
}
```

#### Example Code

Define AnteDecorator functions

```go
// Setup GasMeter, catch OutOfGasPanic and handle appropriately
type SetUpContextDecorator struct{}

func (sud SetUpContextDecorator) AnteHandle(ctx Context, tx Tx, simulate bool, next AnteHandler) (newCtx Context, err error) {
	ctx.GasMeter = NewGasMeter(tx.Gas)
	defer func() {
		// recover from OutOfGas panic and handle appropriately
	}()
	return next(ctx, tx, simulate)
}

// Signature Verification decorator. Verify Signatures and move on
type SigVerifyDecorator struct{}

func (svd SigVerifyDecorator) AnteHandle(ctx Context, tx Tx, simulate bool, next AnteHandler) (newCtx Context, err error) {
	// verify sigs. Return error if invalid

	// call next antehandler if sigs ok
	return next(ctx, tx, simulate)
}

// User-defined Decorator.
// Can choose to pre- and post-process on AnteHandler
type UserDefinedDecorator struct {
	// custom fields
}

func (udd UserDefinedDecorator) AnteHandle(ctx Context, tx Tx, simulate bool, next AnteHandler) (newCtx Context, err error) {
	// pre-processing logic

	ctx, err = next(ctx, tx, simulate)

	// post-processing logic
}
```

Link AnteDecorators to create a final AnteHandler. Set this AnteHandler in baseapp.

```go
// Create final antehandler by chaining the decorators together
antehandler := ChainAnteDecorators(NewSetUpContextDecorator(), NewSigVerifyDecorator(), NewUserDefinedDecorator())

// Set chained Antehandler in the baseapp
bapp.SetAnteHandler(antehandler)
```

Pros:

1. Allows one decorator to pre- and post-process the next AnteHandler, similar to the Weave design.
2. Does not need to break the baseapp API. Users can still set a single AnteHandler if they choose.

Cons:

1. The decorator pattern may have a deeply nested structure that is hard to understand; this is mitigated by having the decorator order explicitly listed in the `ChainAnteDecorators` function.
2. Does not make use of the ModuleManager design. Since this is already being used for BeginBlocker/EndBlocker, this proposal seems unaligned with that design pattern.
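The recursive chaining described above can be exercised end-to-end with a small self-contained sketch. The `Context`, `Tx`, and `traceDecorator` types here are simplified, hypothetical stand-ins (not SDK code); the decorator records a trace entry before and after calling `next` to make the wrap-around ordering visible:

```go
package main

// Simplified stand-ins for the SDK types used in the ADR.
type Context struct{ Trace []string }
type Tx struct{}

type AnteHandler func(ctx Context, tx Tx, simulate bool) (Context, error)

type AnteDecorator interface {
	AnteHandle(ctx Context, tx Tx, simulate bool, next AnteHandler) (Context, error)
}

// ChainAnteDecorators recursively links the decorators into a single
// AnteHandler, mirroring the function defined in this proposal.
func ChainAnteDecorators(chain ...AnteDecorator) AnteHandler {
	if len(chain) == 1 {
		return func(ctx Context, tx Tx, simulate bool) (Context, error) {
			return chain[0].AnteHandle(ctx, tx, simulate, nil)
		}
	}
	return func(ctx Context, tx Tx, simulate bool) (Context, error) {
		return chain[0].AnteHandle(ctx, tx, simulate, ChainAnteDecorators(chain[1:]...))
	}
}

// traceDecorator appends to the trace before and after calling next,
// demonstrating pre- and post-processing around the rest of the chain.
type traceDecorator struct{ name string }

func (d traceDecorator) AnteHandle(ctx Context, tx Tx, simulate bool, next AnteHandler) (Context, error) {
	ctx.Trace = append(ctx.Trace, d.name+"-pre")
	if next != nil {
		var err error
		ctx, err = next(ctx, tx, simulate)
		if err != nil {
			return ctx, err
		}
	}
	ctx.Trace = append(ctx.Trace, d.name+"-post")
	return ctx, nil
}
```

Chaining `traceDecorator{"setup"}` before `traceDecorator{"sigverify"}` yields the nested ordering `setup-pre, sigverify-pre, sigverify-post, setup-post`: the first decorator's post-processing runs last, which is exactly the behavior the ModuleManager approach cannot express.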
## Consequences

Since pros and cons are written for each approach, they are omitted from this section.

## References

* [#4572](https://github.com/cosmos/cosmos-sdk/issues/4572): Modular AnteHandler Issue
* [#4583](https://github.com/cosmos/cosmos-sdk/pull/4583): Initial Implementation of Per-Module AnteHandler Approach
* [Weave Decorator Code](https://github.com/iov-one/weave/blob/master/handler.go#L35)
* [Weave Design Videos](https://vimeo.com/showcase/6189877)

# ADR 011: Generalize Genesis Accounts

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-011-generalize-genesis-accounts

## Changelog

* 2019-08-30: initial draft

## Context

Currently, the Cosmos SDK allows for custom account types; the `auth` keeper stores any type fulfilling its `Account` interface. However, `auth` does not handle exporting or loading accounts to/from a genesis file; this is done by `genaccounts`, which handles only four concrete account types (`BaseAccount`, `ContinuousVestingAccount`, `DelayedVestingAccount` and `ModuleAccount`).

Projects desiring to use custom accounts (say custom vesting accounts) need to fork and modify `genaccounts`.

## Decision

In summary, we will (un)marshal all accounts (interface types) directly using amino, rather than converting to `genaccounts`'s `GenesisAccount` type. Since doing this removes the majority of `genaccounts`'s code, we will merge `genaccounts` into `auth`. Marshalled accounts will be stored in `auth`'s genesis state.

Detailed changes:

### 1) (Un)Marshal accounts directly using amino

The `auth` module's `GenesisState` gains a new field `Accounts`. Note these aren't of type `exported.Account` for reasons outlined in section 3.
```go
// GenesisState - all auth state that must be provided at genesis
type GenesisState struct {
	Params   Params           `json:"params" yaml:"params"`
	Accounts []GenesisAccount `json:"accounts" yaml:"accounts"`
}
```

Now `auth`'s `InitGenesis` and `ExportGenesis` (un)marshal accounts as well as the defined params.

```go
// InitGenesis - Init store state from genesis data
func InitGenesis(ctx sdk.Context, ak AccountKeeper, data GenesisState) {
	ak.SetParams(ctx, data.Params)

	// load the accounts
	for _, a := range data.Accounts {
		acc := ak.NewAccount(ctx, a) // set account number
		ak.SetAccount(ctx, acc)
	}
}

// ExportGenesis returns a GenesisState for a given context and keeper
func ExportGenesis(ctx sdk.Context, ak AccountKeeper) GenesisState {
	params := ak.GetParams(ctx)

	var genAccounts []exported.GenesisAccount
	ak.IterateAccounts(ctx, func(account exported.Account) bool {
		genAccount := account.(exported.GenesisAccount)
		genAccounts = append(genAccounts, genAccount)
		return false
	})

	return NewGenesisState(params, genAccounts)
}
```

### 2) Register custom account types on the `auth` codec

The `auth` codec must have all custom account types registered to marshal them. We will follow the pattern established in `gov` for proposals.

An example custom account definition:

```go
import authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"

// Register the module account type with the auth module codec so it can decode module accounts stored in a genesis file
func init() {
	authtypes.RegisterAccountTypeCodec(ModuleAccount{}, "cosmos-sdk/ModuleAccount")
}

type ModuleAccount struct {
	...
```

The `auth` codec definition:

```go
var ModuleCdc *codec.LegacyAmino

func init() {
	ModuleCdc = codec.NewLegacyAmino()
	// register module msg's and Account interface
	...
	// leave the codec unsealed
}

// RegisterAccountTypeCodec registers an external account type defined in another module for the internal ModuleCdc.
func RegisterAccountTypeCodec(o interface{}, name string) {
	ModuleCdc.RegisterConcrete(o, name, nil)
}
```

### 3) Genesis validation for custom account types

Modules implement a `ValidateGenesis` method. As `auth` does not know of account implementations, accounts will need to validate themselves. We will unmarshal accounts into a `GenesisAccount` interface that includes a `Validate` method.

```go
type GenesisAccount interface {
	exported.Account
	Validate() error
}
```

Then the `auth` `ValidateGenesis` function becomes:

```go
// ValidateGenesis performs basic validation of auth genesis data returning an
// error for any failed validation criteria.
func ValidateGenesis(data GenesisState) error {
	// Validate params
	...
	// Validate accounts
	addrMap := make(map[string]bool, len(data.Accounts))
	for _, acc := range data.Accounts {
		// check for duplicated accounts
		addrStr := acc.GetAddress().String()
		if _, ok := addrMap[addrStr]; ok {
			return fmt.Errorf("duplicate account found in genesis state; address: %s", addrStr)
		}
		addrMap[addrStr] = true

		// check account specific validation
		if err := acc.Validate(); err != nil {
			return fmt.Errorf("invalid account found in genesis state; address: %s, error: %s", addrStr, err.Error())
		}
	}

	return nil
}
```

### 4) Move add-genesis-account cli to `auth`

The `genaccounts` module contains a cli command to add base or vesting accounts to a genesis file. This will be moved to `auth`. We will leave it to projects to write their own commands to add custom accounts. An extensible cli handler, similar to `gov`, could be created but it is not worth the complexity for this minor use case.

### 5) Update module and vesting accounts

Under the new scheme, module and vesting account types need some minor updates:

* Type registration on `auth`'s codec (shown above)
* A `Validate` method for each `Account` concrete type

## Status

Proposed

## Consequences

### Positive

* custom accounts can be used without needing to fork `genaccounts`
* reduction in lines of code

### Negative

### Neutral

* `genaccounts` module no longer exists
* accounts in genesis files are stored under `accounts` in `auth` rather than in the `genaccounts` module
* `add-genesis-account` cli command now in `auth`

## References

# ADR 012: State Accessors

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-012-state-accessors

## Changelog

* 2019 Sep 04: Initial draft

## Context

Cosmos SDK modules currently use the `KVStore` interface and `Codec` to access their respective state. While this provides a large degree of freedom to module developers, it is hard to modularize and the UX is mediocre.
First, each time a module tries to access the state, it has to marshal the value, set or get the value, and finally unmarshal. Usually this is done by declaring `Keeper.GetXXX` and `Keeper.SetXXX` functions, which are repetitive and hard to maintain.

Second, this makes it harder to align with the object capability theorem: the right to access the state is defined as a `StoreKey`, which gives full access on the entire Merkle tree, so a module cannot safely send the access right to a specific key-value pair (or a set of key-value pairs) to another module.

Finally, because the getter/setter functions are defined as methods of a module's `Keeper`, the reviewers have to consider the whole Merkle tree space when reviewing a function accessing any part of the state. There is no static way to know which part of the state the function is accessing (and which it is not).

## Decision

We will define a type named `Value`:

```go
type Value struct {
	m   Mapping
	key []byte
}
```

The `Value` works as a reference for a key-value pair in the state, where `Value.m` defines the key-value space it will access and `Value.key` defines the exact key for the reference.

We will define a type named `Mapping`:

```go
type Mapping struct {
	storeKey sdk.StoreKey
	cdc      *codec.LegacyAmino
	prefix   []byte
}
```

The `Mapping` works as a reference for a key-value space in the state, where `Mapping.storeKey` defines the IAVL (sub-)tree and `Mapping.prefix` defines the optional subspace prefix.
We will define the following core methods for the `Value` type:

```go
// Get and unmarshal stored data, noop if not exists, panic if cannot unmarshal
func (Value) Get(ctx Context, ptr interface{}) {}

// Get and unmarshal stored data, return error if not exists or cannot unmarshal
func (Value) GetSafe(ctx Context, ptr interface{}) {}

// Get stored data as raw byte slice
func (Value) GetRaw(ctx Context) []byte {}

// Marshal and set a raw value
func (Value) Set(ctx Context, o interface{}) {}

// Check if a raw value exists
func (Value) Exists(ctx Context) bool {}

// Delete a raw value
func (Value) Delete(ctx Context) {}
```

We will define the following core methods for the `Mapping` type:

```go
// Constructs key-value pair reference corresponding to the key argument in the Mapping space
func (Mapping) Value(key []byte) Value {}

// Get and unmarshal stored data, noop if not exists, panic if cannot unmarshal
func (Mapping) Get(ctx Context, key []byte, ptr interface{}) {}

// Get and unmarshal stored data, return error if not exists or cannot unmarshal
func (Mapping) GetSafe(ctx Context, key []byte, ptr interface{}) {}

// Get stored data as raw byte slice
func (Mapping) GetRaw(ctx Context, key []byte) []byte {}

// Marshal and set a raw value
func (Mapping) Set(ctx Context, key []byte, o interface{}) {}

// Check if a raw value exists
func (Mapping) Has(ctx Context, key []byte) bool {}

// Delete a raw value
func (Mapping) Delete(ctx Context, key []byte) {}
```

Each method of the `Mapping` type that is passed the arguments `ctx`, `key`, and `args...` will proxy the call to `Mapping.Value(key)` with arguments `ctx` and `args...`.
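The proxying behavior described above can be sketched with a deliberately simplified, hypothetical model: the store is a plain map standing in for the IAVL-backed `KVStore`, values are stored raw instead of amino-marshalled, and the `Context` argument is omitted. None of these types are actual SDK code.

```go
package main

// Store is a toy stand-in for a prefixed KVStore.
type Store map[string]string

type Mapping struct {
	store  Store
	prefix []byte
}

type Value struct {
	m   Mapping
	key []byte
}

// Value constructs a reference to the key-value pair at `key` in the Mapping's space.
func (m Mapping) Value(key []byte) Value {
	return Value{m: m, key: key}
}

// storeKey prepends the Mapping's subspace prefix, mirroring how a prefix
// store scopes access to one part of the tree.
func (v Value) storeKey() string {
	return string(append(append([]byte{}, v.m.prefix...), v.key...))
}

func (v Value) Set(o string) { v.m.store[v.storeKey()] = o }
func (v Value) Get() string  { return v.m.store[v.storeKey()] }
func (v Value) Exists() bool { _, ok := v.m.store[v.storeKey()]; return ok }
func (v Value) Delete()      { delete(v.m.store, v.storeKey()) }

// Each Mapping method simply proxies to Mapping.Value(key), as described above.
func (m Mapping) Set(key []byte, o string) { m.Value(key).Set(o) }
func (m Mapping) Get(key []byte) string    { return m.Value(key).Get() }
func (m Mapping) Has(key []byte) bool      { return m.Value(key).Exists() }
func (m Mapping) Delete(key []byte)        { m.Value(key).Delete() }
```

Because every access goes through `Value`, a module can hand another module a single `Value` (one key) or a `Mapping` (one prefixed subspace) without granting access to the rest of the store, which is the object-capability property the ADR is after.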
In addition, we will define and provide a common set of types derived from the `Value` type:

```go
type Boolean struct{ Value }
type Enum struct{ Value }
type Integer struct{ Value; enc IntEncoding }
type String struct{ Value }
// ...
```

Where the encoding schemes can be different, `o` arguments in core methods are typed, and `ptr` arguments in core methods are replaced by explicit return types.

Finally, we will define a family of types derived from the `Mapping` type:

```go
type Indexer struct {
	m   Mapping
	enc IntEncoding
}
```

Where the `key` argument in core methods is typed.

Some of the properties of the accessor types are:

* State access happens only when a function which takes a `Context` as an argument is invoked
* Accessor type structs give rights to access only the state that the struct refers to, and no other
* Marshalling/Unmarshalling happens implicitly within the core methods

## Status

Proposed

## Consequences

### Positive

* Serialization will be done automatically
* Shorter code size, less boilerplate, better UX
* References to the state can be transferred safely
* Explicit scope of accessing

### Negative

* Serialization format will be hidden
* Different architecture from the current, but the use of accessor types can be opt-in
* Type-specific types (e.g. `Boolean` and `Integer`) have to be defined manually

### Neutral

## References

* [#4554](https://github.com/cosmos/cosmos-sdk/issues/4554)

# ADR 013: Observability

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-013-metrics

## Changelog

* 20-01-2020: Initial Draft

## Status

Proposed

## Context

Telemetry is paramount to debugging and understanding what the application is doing and how it is performing.
We aim to expose metrics from modules and other core parts of the Cosmos SDK.

In addition, we should aim to support multiple configurable sinks that an operator may choose from. By default, when telemetry is enabled, the application should track and expose metrics that are stored in-memory. The operator may choose to enable additional sinks, where we support only [Prometheus](https://prometheus.io/) for now, as it's battle-tested, simple to set up, open source, and rich with ecosystem tooling.

We must also aim to integrate metrics into the Cosmos SDK in the most seamless way possible such that metrics may be added or removed at will and without much friction. To do this, we will use the [go-metrics](https://github.com/hashicorp/go-metrics) library.

Finally, operators may enable telemetry along with specific configuration options. If enabled, metrics will be exposed via `/metrics?format={text|prometheus}` via the API server.

## Decision

We will add an additional configuration block to `app.toml` that defines telemetry settings:

```toml
###############################################################################
###                        Telemetry Configuration                          ###
###############################################################################

[telemetry]

# Prefixed with keys to separate services
service-name = {{ .Telemetry.ServiceName }}

# Enabled enables the application telemetry functionality. When enabled,
# an in-memory sink is also enabled by default. Operators may also enable
# other sinks such as Prometheus.
enabled = {{ .Telemetry.Enabled }}

# Enable prefixing gauge values with hostname
enable-hostname = {{ .Telemetry.EnableHostname }}

# Enable adding hostname to labels
enable-hostname-label = {{ .Telemetry.EnableHostnameLabel }}

# Enable adding service to labels
enable-service-label = {{ .Telemetry.EnableServiceLabel }}

# PrometheusRetentionTime, when positive, enables a Prometheus metrics sink.
prometheus-retention-time = {{ .Telemetry.PrometheusRetentionTime }}
```

The given configuration allows for two sinks -- in-memory and Prometheus. We create a `Metrics` type that performs all the bootstrapping for the operator, so capturing metrics becomes seamless.

```go
// Metrics defines a wrapper around application telemetry functionality. It allows
// metrics to be gathered at any point in time. When creating a Metrics object,
// internally, a global metrics is registered with a set of sinks as configured
// by the operator. In addition to the sinks, when a process gets a SIGUSR1, a
// dump of formatted recent metrics will be sent to STDERR.
type Metrics struct {
	memSink           *metrics.InmemSink
	prometheusEnabled bool
}

// Gather collects all registered metrics and returns a GatherResponse where the
// metrics are encoded depending on the type. Metrics are either encoded via
// Prometheus or JSON if in-memory.
func (m *Metrics) Gather(format string) (GatherResponse, error) {
	switch format {
	case FormatPrometheus:
		return m.gatherPrometheus()

	case FormatText:
		return m.gatherGeneric()

	case FormatDefault:
		return m.gatherGeneric()

	default:
		return GatherResponse{}, fmt.Errorf("unsupported metrics format: %s", format)
	}
}
```

In addition, `Metrics` allows us to gather the current set of metrics at any given point in time. An operator may also choose to send a signal, SIGUSR1, to dump and print formatted metrics to STDERR.
During an application's bootstrapping and construction phase, if `Telemetry.Enabled` is `true`, the API server will create an instance of a `Metrics` object and register a metrics handler accordingly.

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (s *Server) Start(cfg config.Config) error {
	// ...

	if cfg.Telemetry.Enabled {
		m, err := telemetry.New(cfg.Telemetry)
		if err != nil {
			return err
		}

		s.metrics = m
		s.registerMetrics()
	}

	// ...
}

func (s *Server) registerMetrics() {
	metricsHandler := func(w http.ResponseWriter, r *http.Request) {
		format := strings.TrimSpace(r.FormValue("format"))

		gr, err := s.metrics.Gather(format)
		if err != nil {
			rest.WriteErrorResponse(w, http.StatusBadRequest, fmt.Sprintf("failed to gather metrics: %s", err))
			return
		}

		w.Header().Set("Content-Type", gr.ContentType)
		_, _ = w.Write(gr.Metrics)
	}

	s.Router.HandleFunc("/metrics", metricsHandler).Methods("GET")
}
```

Application developers may track counters, gauges, summaries, and key/value metrics. There is no additional lifting required by modules to leverage profiling metrics. To do so, it's as simple as:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (k BaseKeeper) MintCoins(ctx sdk.Context, moduleName string, amt sdk.Coins) error {
	defer metrics.MeasureSince(time.Now(), "MintCoins")

	// ...
}
```

## Consequences

### Positive

* Exposure into the performance and behavior of an application

### Negative

### Neutral

## References

# ADR 14: Proportional Slashing

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-014-proportional-slashing

## Changelog

* 2019-10-15: Initial draft
* 2020-05-25: Removed correlation root slashing
* 2020-07-01: Updated to include S-curve function instead of linear

## Context

In Proof of Stake-based chains, centralization of consensus power amongst a small set of validators can cause harm to the network due to increased risk of censorship, liveness failure, fork attacks, etc. However, while this centralization causes a negative externality to the network, it is not directly felt by the delegators contributing to already large validators. We would like a way to pass on the negative externality cost of centralization onto those large validators and their delegators.

## Decision

### Design

To solve this problem, we will implement a procedure called Proportional Slashing. The desire is that the larger a validator is, the more they should be slashed. The first naive attempt is to make a validator's slash percent proportional to their share of consensus voting power.

```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
slash_amount = k * power // power is the faulting validator's voting power and k is some on-chain constant
```

However, this will incentivize validators with large amounts of stake to split up their voting power amongst accounts (sybil attack), so that if they fault, they all get slashed at a lower percent. The solution to this is to take into account not just a validator's own voting percentage, but also the voting percentage of all the other validators who get slashed in a specified time frame.
```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
slash_amount = k * (power_1 + power_2 + ... + power_n) // where power_i is the voting power of the ith validator faulting in the specified time frame and k is some on-chain constant
```

Now, if someone splits a validator of 10% into two validators of 5% each and both fault in the same time frame, they both will get slashed at the summed 10% amount.

However, in practice we likely don't want a linear relation between the amount of stake at fault and the percentage of stake to slash. In particular, solely 5% of stake double signing effectively does nothing to majorly threaten security, whereas 30% of stake being at fault clearly merits a large slashing factor, due to being very close to the point at which Tendermint security is threatened. A linear relation would require a factor of 6 gap between these two, whereas the difference in risk posed to the network is much larger. We propose using S-curves (formally, [logistic functions](https://en.wikipedia.org/wiki/Logistic_function)) to solve this. S-curves capture the desired criterion quite well. They allow the slashing factor to be minimal for small values, and then grow very rapidly near some threshold point where the risk posed becomes notable.

#### Parameterization

This requires parameterizing a logistic function. It is very well understood how to parameterize this. It has four parameters:

1. A minimum slashing factor
2. A maximum slashing factor
3. The inflection point of the S-curve (essentially where do you want to center the S)
4. The rate of growth of the S-curve (how elongated is the S)

#### Correlation across non-sybil validators

One will note that this model doesn't differentiate between multiple validators run by the same operators vs validators run by different operators. This can be seen as an additional benefit in fact.
It incentivizes validators to differentiate their setups from other validators, to avoid having correlated faults with them, or else they risk a higher slash. So, for example, operators should avoid using the same popular cloud hosting platforms or the same Staking as a Service providers. This will lead to a more resilient and decentralized network.

#### Griefing

Griefing, the act of intentionally getting oneself slashed in order to make another's slash worse, could be a concern here. However, using the protocol described here, the attacker also gets equally impacted by the grief as the victim, so it would not provide much benefit to the griefer.

### Implementation

In the slashing module, we will add two queues that will track all of the recent slash events. For double sign faults, we will define "recent slashes" as ones that have occurred within the last `unbonding period`. For liveness faults, we will define "recent slashes" as ones that have occurred within the last `jail period`.

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type SlashEvent struct {
	Address                sdk.ValAddress
	ValidatorVotingPercent sdk.Dec
	SlashedSoFar           sdk.Dec
}
```

These slash events will be pruned from the queue once they are older than their respective "recent slash period". Whenever a new slash occurs, a `SlashEvent` struct is created with the faulting validator's voting percent and a `SlashedSoFar` of 0. Because recent slash events are pruned before the unbonding period and unjail period expires, it should not be possible for the same validator to have multiple SlashEvents in the same Queue at the same time. We then will iterate over all the SlashEvents in the queue, adding their `ValidatorVotingPercent` to calculate the new percent to slash all the validators in the queue at, using the S-curve formula introduced above.
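A stdlib-only sketch of this recomputation, with `float64` percentages standing in for `sdk.Dec` and a logistic curve for the S-curve; all names and the curve's parameter values are hypothetical, chosen only for illustration:

```go
package main

import (
	"fmt"
	"math"
)

// slashEvent mirrors the SlashEvent struct above, with float64 in place of sdk.Dec.
type slashEvent struct {
	votingPercent float64 // faulting validator's share of voting power, e.g. 0.05
	slashedSoFar  float64
}

// logistic maps total faulting power to a slash factor: near min for small
// faults, rising steeply around the inflection point, capped below max.
func logistic(totalPower, min, max, inflection, rate float64) float64 {
	return min + (max-min)/(1+math.Exp(-rate*(totalPower-inflection)))
}

// applySlashes recomputes the common slash percent from every recent fault and
// tops each event up to it, returning the new percent. The parameters place the
// inflection near 1/3 of total power, where Tendermint security is threatened.
func applySlashes(queue []*slashEvent) float64 {
	total := 0.0
	for _, e := range queue { // pass 1: sum the faulting voting power
		total += e.votingPercent
	}
	newPercent := logistic(total, 0.01, 0.9, 1.0/3.0, 30)
	for _, e := range queue { // pass 2: slash only the difference
		if newPercent > e.slashedSoFar {
			// a real implementation would call staking.Slash(...) here
			e.slashedSoFar = newPercent
		}
	}
	return newPercent
}

func main() {
	q := []*slashEvent{{votingPercent: 0.05}}
	small := applySlashes(q)
	q = append(q, &slashEvent{votingPercent: 0.30})
	large := applySlashes(q)
	fmt.Printf("5%% at fault -> %.3f, 35%% at fault -> %.3f\n", small, large)
}
```

Note how a lone 5% fault is slashed near the minimum factor, while a combined 35% fault lands past the inflection point and is slashed far more heavily.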
Once we have the `NewSlashPercent`, we then iterate over all the `SlashEvent`s in the queue once again, and if `NewSlashPercent > SlashedSoFar` for that SlashEvent, we call `staking.Slash(slashEvent.Address, slashEvent.Power, Math.Min(Math.Max(minSlashPercent, NewSlashPercent - SlashedSoFar), maxSlashPercent))` (we pass in the power of the validator before any slashes occurred, so that we slash the right amount of tokens). We then set the `SlashEvent.SlashedSoFar` amount to `NewSlashPercent`.

## Status

Proposed

## Consequences

### Positive

* Increases decentralization by disincentivizing delegating to large validators
* Incentivizes decorrelation of validators
* More severely punishes attacks than accidental faults
* More flexibility in slashing rates parameterization

### Negative

* More computationally expensive than the current implementation. Will require more data about "recent slashing events" to be stored on chain.

# ADR 016: Validator Consensus Key Rotation

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-016-validator-consensus-key-rotation

## Changelog

* 2019 Oct 23: Initial draft
* 2019 Nov 28: Add key rotation fee

## Context

The validator consensus key rotation feature has been discussed and requested for a long time, for the sake of a safer validator key management policy (e.g. [Link](https://github.com/tendermint/tendermint/issues/1136)). So, we suggest one of the simplest forms of validator consensus key rotation, implemented mostly in the Cosmos SDK. We don't need to make any updates to the consensus logic in Tendermint because Tendermint does not have any mapping information of consensus key and validator operator key, meaning that from Tendermint's point of view, a consensus key rotation of a validator is simply the replacement of one consensus key with another.
Also, it should be noted that this ADR includes only the simplest form of consensus key rotation, without considering the multiple consensus keys concept. Such a multiple consensus keys concept shall remain a long term goal of Tendermint and the Cosmos SDK.

## Decision

### Pseudo procedure for consensus key rotation

* create new random consensus key.
* create and broadcast a transaction with a `MsgRotateConsPubKey` that states the new consensus key is now coupled with the validator operator, with a signature from the validator's operator key.
* old consensus key becomes unable to participate in consensus immediately after the update of the key mapping state on-chain.
* start validating with the new consensus key.
* validators using HSM and KMS should update the consensus key in the HSM to use the new rotated key after the height `h` when the `MsgRotateConsPubKey` is committed to the blockchain.

### Considerations

* consensus key mapping information management strategy
  * store the history of each key mapping change in the kvstore.
  * the state machine can search the corresponding consensus key paired with a given validator operator for any arbitrary height in the recent unbonding period.
  * the state machine does not need any historical mapping information older than the unbonding period.
* key rotation costs related to LCD and IBC
  * LCD and IBC will have a traffic/computation burden when there exist frequent power changes
  * in the current Tendermint design, consensus key rotations are seen as power changes from the LCD or IBC perspective
  * therefore, to minimize unnecessary frequent key rotation behavior, we limit the maximum number of rotations in the recent unbonding period and also apply an exponentially increasing rotation fee
* limits
  * a validator cannot rotate its consensus key more than `MaxConsPubKeyRotations` times in any unbonding period, to prevent spam.
  * parameters can be decided by governance and stored in the genesis file.
* key rotation fee
  * a validator should pay `KeyRotationFee` to rotate the consensus key, which is calculated as below:
  * `KeyRotationFee = (max(VotingPowerPercentage * 100, 1) * InitialKeyRotationFee) * 2^(number of rotations in ConsPubKeyRotationHistory in recent unbonding period)`
* evidence module
  * the evidence module can search the corresponding consensus key for any height from the slashing keeper so that it can decide which consensus key is supposed to be used for a given height.
* abci.ValidatorUpdate
  * Tendermint already has the ability to change a consensus key via ABCI communication (`ValidatorUpdate`).
  * a validator consensus key update can be done via creating new + deleting old, by changing the power to zero.
  * therefore, we expect we do not need to change the Tendermint codebase at all to implement this feature.
* new genesis parameters in the `staking` module
  * `MaxConsPubKeyRotations`: maximum number of rotations that can be executed by a validator in the recent unbonding period. A default value of 10 is suggested (the 11th key rotation will be rejected).
  * `InitialKeyRotationFee`: the initial key rotation fee when no key rotation has happened in the recent unbonding period. A default value of 1atom is suggested (a 1atom fee for the first key rotation in the recent unbonding period).

### Workflow

1. The validator generates a new consensus keypair.
2. The validator generates and signs a `MsgRotateConsPubKey` tx with their operator key and the new ConsPubKey:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type MsgRotateConsPubKey struct {
	ValidatorAddress sdk.ValAddress
	NewPubKey        crypto.PubKey
}
```

3. `handleMsgRotateConsPubKey` gets `MsgRotateConsPubKey`, calls `RotateConsPubKey`, which emits an event
4.
`RotateConsPubKey`:
   * checks that `NewPubKey` is not duplicated in `ValidatorsByConsAddr`
   * checks that the validator does not exceed parameter `MaxConsPubKeyRotations` by iterating `ConsPubKeyRotationHistory`
   * checks that the signing account has enough balance to pay `KeyRotationFee`
   * pays `KeyRotationFee` to the community fund
   * overwrites `NewPubKey` in `validator.ConsPubKey`
   * deletes the old `ValidatorByConsAddr`
   * `SetValidatorByConsAddr` for `NewPubKey`
   * adds `ConsPubKeyRotationHistory` for tracking rotation

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type ConsPubKeyRotationHistory struct {
	OperatorAddress sdk.ValAddress
	OldConsPubKey   crypto.PubKey
	NewConsPubKey   crypto.PubKey
	RotatedHeight   int64
}
```

5. `ApplyAndReturnValidatorSetUpdates` checks if there is a `ConsPubKeyRotationHistory` with `ConsPubKeyRotationHistory.RotatedHeight == ctx.BlockHeight()` and, if so, generates two `ValidatorUpdate`s: one removing the old validator and one creating the new validator:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
abci.ValidatorUpdate{
	PubKey: cmttypes.TM2PB.PubKey(OldConsPubKey),
	Power:  0,
}

abci.ValidatorUpdate{
	PubKey: cmttypes.TM2PB.PubKey(NewConsPubKey),
	Power:  v.ConsensusPower(),
}
```

6. In the `previousVotes` iteration logic of `AllocateTokens`, a `previousVote` using `OldConsPubKey` is matched against `ConsPubKeyRotationHistory`, and the validator is replaced for token allocation.
7. Migrate `ValidatorSigningInfo` and `ValidatorMissedBlockBitArray` from `OldConsPubKey` to `NewConsPubKey`.

* Note: all above features shall be implemented in the `staking` module.
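The `KeyRotationFee` formula above can be sketched as follows; this is a stdlib-only illustration with hypothetical names and `float64` amounts standing in for coin types:

```go
package main

import (
	"fmt"
	"math"
)

// keyRotationFee computes the fee from the ADR's formula:
// (max(votingPowerPercent*100, 1) * initialFee) * 2^rotationsInUnbondingPeriod.
// votingPowerPercent is the validator's share of total power, e.g. 0.02 for 2%.
func keyRotationFee(votingPowerPercent, initialFee float64, recentRotations int) float64 {
	base := math.Max(votingPowerPercent*100, 1) * initialFee
	return base * math.Pow(2, float64(recentRotations))
}

func main() {
	// a 2% validator's first rotation with a 1atom initial fee costs 2atom;
	// each further rotation within the unbonding period doubles the fee.
	fmt.Println(keyRotationFee(0.02, 1, 0)) // 2
	fmt.Println(keyRotationFee(0.02, 1, 1)) // 4
}
```

The `max(..., 1)` floor ensures even tiny validators pay at least the initial fee, while the exponential term makes repeated rotations within the unbonding period quickly prohibitive.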
## Status

Proposed

## Consequences

### Positive

* Validators can immediately or periodically rotate their consensus key to have a better security policy
* Improved security against long-range attacks, given a validator throws away the old consensus key(s)

### Negative

* The slashing module needs more computation because it needs to look up the corresponding consensus key of validators for each height
* Frequent key rotations will make light client bisection less efficient

### Neutral

## References

* on tendermint repo: [Link](https://github.com/tendermint/tendermint/issues/1136)
* on cosmos-sdk repo: [Link](https://github.com/cosmos/cosmos-sdk/issues/5231)
* about multiple consensus keys: [Link](https://github.com/tendermint/tendermint/issues/1758#issuecomment-545291698)

# ADR 17: Historical Header Module

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-017-historical-header-module

## Changelog

* 26 November 2019: Start of first version
* 2 December 2019: Final draft of first version

## Context

In order for the Cosmos SDK to implement the [IBC specification](https://github.com/cosmos/ics), modules within the Cosmos SDK must have the ability to introspect recent consensus states (validator sets & commitment roots), as proofs of these values on other chains must be checked during the handshakes.

## Decision

The application MUST store the most recent `n` headers in a persistent store. At first, this store MAY be the current Merklised store. A non-Merklised store MAY be used later as no proofs are necessary.
The application MUST store this information by storing new headers immediately when handling `abci.RequestBeginBlock`:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func BeginBlock(ctx sdk.Context, keeper HistoricalHeaderKeeper, req abci.RequestBeginBlock) abci.ResponseBeginBlock {
	info := HistoricalInfo{
		Header: ctx.BlockHeader(),
		ValSet: keeper.StakingKeeper.GetAllValidators(ctx), // note that this must be stored in a canonical order
	}
	keeper.SetHistoricalInfo(ctx, ctx.BlockHeight(), info)

	n := keeper.GetParamRecentHeadersToStore()
	keeper.PruneHistoricalInfo(ctx, ctx.BlockHeight()-n)

	// continue handling request
}
```

Alternatively, the application MAY store only the hash of the validator set.

The application MUST make these past `n` committed headers available for querying by Cosmos SDK modules through the `Keeper`'s `GetHistoricalInfo` function. This MAY be implemented in a new module, or it MAY also be integrated into an existing one (likely `x/staking` or `x/ibc`).

`n` MAY be configured as a parameter store parameter, in which case it could be changed by `ParameterChangeProposal`s, although it will take some blocks for the stored information to catch up if `n` is increased.

## Status

Proposed.

## Consequences

Implementation of this ADR will require changes to the Cosmos SDK. It will not require changes to Tendermint.

### Positive

* Easy retrieval of headers & state roots for recent past heights by modules anywhere in the Cosmos SDK.
* No RPC calls to Tendermint required.
* No ABCI alterations required.

### Negative

* Duplicates `n` headers data in Tendermint & the application (additional disk usage) - in the long term, an approach such as [this](https://github.com/tendermint/tendermint/issues/4210) might be preferable.
### Neutral

(none known)

## References

* [ICS 2: "Consensus state introspection"](https://github.com/cosmos/ibc/tree/master/spec/core/ics-002-client-semantics#consensus-state-introspection)

# ADR 18: Extendable Voting Periods

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-018-extendable-voting-period

## Changelog

* 1 January 2020: Start of first version

## Context

Currently the voting period for all governance proposals is the same. However, this is suboptimal, as not all governance proposals require the same time period. Non-contentious proposals can be dealt with more efficiently with a faster period, while more contentious or complex proposals may need a longer period for extended discussion/consideration.

## Decision

We would like to design a mechanism for making the voting period of a governance proposal variable based on the demand of voters. We would like it to be based on the view of the governance participants, rather than just the proposer of a governance proposal (thus, allowing the proposer to select the voting period length is not sufficient).

However, we would like to avoid the creation of an entire second voting process to determine the length of the voting period, as it just pushes the problem to determining the length of that first voting period. Thus, we propose the following mechanism:

### Params

* The current gov param `VotingPeriod` is to be replaced by a `MinVotingPeriod` param. This is the default voting period with which all governance proposal voting periods start.
* There is a new gov param called `MaxVotingPeriodExtension`.

### Mechanism

There is a new `Msg` type called `MsgExtendVotingPeriod`, which can be sent by any staked account during a proposal's voting period. It allows the sender to unilaterally extend the length of the voting period by `MaxVotingPeriodExtension * sender's share of voting power`.
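A stdlib-only sketch of the extension arithmetic, including the "highest percent so far" high-water mark used later in this ADR to handle unbonding; the tracker type and its methods are hypothetical names for illustration:

```go
package main

import "fmt"

// extensionTracker applies the rule that the extension is based on the highest
// share of voting power that has ever requested an extension, so the voting
// period can only grow even if extenders later unbond.
type extensionTracker struct {
	maxExtensionDays float64 // MaxVotingPeriodExtension, in days
	currentShare     float64 // share of active power currently requesting extension
	highestShare     float64 // high-water mark used for the actual extension
}

func (t *extensionTracker) extend(share float64) {
	t.currentShare += share
	if t.currentShare > t.highestShare {
		t.highestShare = t.currentShare
	}
}

func (t *extensionTracker) unbond(share float64) {
	t.currentShare -= share // highestShare is deliberately left untouched
}

// extensionDays is MaxVotingPeriodExtension scaled by the high-water mark.
func (t *extensionTracker) extensionDays() float64 {
	return t.maxExtensionDays * t.highestShare
}

func main() {
	t := &extensionTracker{maxExtensionDays: 100}
	t.extend(0.04) // a 4% staker extends
	t.extend(0.03) // a 3% staker extends -> 7 days total
	fmt.Printf("%.0f days\n", t.extensionDays())
	t.unbond(0.03) // the 3% staker unbonds; the extension stays at 7 days
	t.extend(0.10) // a 10% staker extends -> 11% active, new high of 14%? no: 4%+10%+... 
	fmt.Printf("%.0f days\n", t.extensionDays())
}
```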
Every address can only call `MsgExtendVotingPeriod` once per proposal. So, for example, if `MaxVotingPeriodExtension` is set to 100 days, then anyone with 1% of voting power can extend the voting period by 1 day. If 33% of voting power has sent the message, the voting period will be extended by 33 days. Thus, if absolutely everyone chooses to extend the voting period, the absolute maximum voting period will be `MinVotingPeriod + MaxVotingPeriodExtension`.

This system acts as a sort of distributed coordination, where individual stakers choosing to extend or not allows the system to gauge the contentiousness/complexity of the proposal. Since it is extremely unlikely that many stakers will choose to extend at the exact same time, stakers can view how long others have already extended thus far to decide whether or not to extend further.

### Dealing with Unbonding/Redelegation

There is one thing that needs to be addressed: how to deal with redelegation/unbonding during the voting period. If a staker of 5% calls `MsgExtendVotingPeriod` and then unbonds, does the voting period then decrease by 5 days again? This is not good, as it can give people a false sense of how long they have to make their decision. For this reason, we want to design it such that the voting period length can only be extended, not shortened. To do this, the current extension amount is based on the highest percent that voted for extension at any time. This is best explained by example:

1. Let's say 2 stakers of voting power 4% and 3% respectively vote to extend. The voting period will be extended by 7 days.
2. Now the staker of 3% decides to unbond before the end of the voting period. The voting period extension remains 7 days.
3. Now, let's say another staker of 2% voting power decides to extend the voting period. There is now 6% of active voting power choosing to extend. The voting period extension remains 7 days.
4.
If a fourth staker of 10% chooses to extend now, there is a total of 16% of active voting power wishing to extend. The voting period will be extended to 16 days.

### Delegators

Just like votes in the actual voting period, delegators automatically inherit the extension of their validators. If their validator chooses to extend, their voting power will be used in the validator's extension. However, a delegator is unable to override their validator and "unextend", as that would contradict the "voting period length can only be ratcheted up" principle described in the previous section. However, a delegator may choose to extend using their personal voting power, if their validator has not done so.

## Status

Proposed

## Consequences

### Positive

* More complex/contentious governance proposals will have more time to be properly digested and deliberated

### Negative

* The governance process becomes more complex and requires more understanding to interact with effectively
* Can no longer predict when a governance proposal will end. Can't assume the order in which governance proposals will end.

### Neutral

* The minimum voting period can be made shorter

## References

* [Cosmos Forum post where idea first originated](https://forum.cosmos.network/t/proposal-draft-reduce-governance-voting-period-to-7-days/3032/9)

# ADR 019: Protocol Buffer State Encoding

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-019-protobuf-state-encoding

## Changelog

* 2020 Feb 15: Initial Draft
* 2020 Feb 24: Updates to handle messages with interface fields
* 2020 Apr 27: Convert usages of `oneof` for interfaces to `Any`
* 2020 May 15: Describe `cosmos_proto` extensions and amino compatibility
* 2020 Dec 4: Move and rename `MarshalAny` and `UnmarshalAny` into the `codec.Codec` interface.
* 2021 Feb 24: Remove mentions of `HybridCodec`, which has been abandoned in [#6843](https://github.com/cosmos/cosmos-sdk/pull/6843).
## Status

Accepted

## Context

Currently, the Cosmos SDK utilizes [go-amino](https://github.com/tendermint/go-amino/) for binary and JSON object encoding over the wire, bringing parity between logical objects and persistence objects. From the Amino docs:

> Amino is an object encoding specification. It is a subset of Proto3 with an extension for interface
> support. See the [Proto3 spec](https://developers.google.com/protocol-buffers/docs/proto3) for more
> information on Proto3, which Amino is largely compatible with (but not with Proto2).
>
> The goal of the Amino encoding protocol is to bring parity into logic objects and persistence objects.

Amino also aims to have the following goals (not a complete list):

* Binary bytes must be decode-able with a schema.
* Schema must be upgradeable.
* The encoder and decoder logic must be reasonably simple.

However, we believe that Amino does not fulfill these goals completely and does not fully meet the needs of a truly flexible cross-language and multi-client compatible encoding protocol in the Cosmos SDK. Namely, Amino has proven to be a big pain-point with regard to supporting object serialization across clients written in various languages, while providing little in the way of true backwards compatibility and upgradeability. Furthermore, through profiling and various benchmarks, Amino has been shown to be an extremely large performance bottleneck in the Cosmos SDK. This is largely reflected in the performance of simulations and application transaction throughput.

Thus, we need to adopt an encoding protocol that meets the following criteria for state serialization:

* Language agnostic
* Platform agnostic
* Rich client support and thriving ecosystem
* High performance
* Minimal encoded message size
* Codegen-based over reflection-based
* Supports backward and forward compatibility

Note, migrating away from Amino should be viewed as a two-pronged approach, state and client encoding.
This ADR focuses on state serialization in the Cosmos SDK state machine. A corresponding ADR will be made to address client-side encoding.

## Decision

We will adopt [Protocol Buffers](https://developers.google.com/protocol-buffers) for serializing persisted structured data in the Cosmos SDK while providing a clean mechanism and developer UX for applications wishing to continue to use Amino. We will provide this mechanism by updating modules to accept a codec interface, `Marshaler`, instead of a concrete Amino codec. Furthermore, the Cosmos SDK will provide two concrete implementations of the `Marshaler` interface: `AminoCodec` and `ProtoCodec`.

* `AminoCodec`: Uses Amino for both binary and JSON encoding.
* `ProtoCodec`: Uses Protobuf for both binary and JSON encoding.

Modules will use whichever codec is instantiated in the app. By default, the Cosmos SDK's `simapp` instantiates a `ProtoCodec` as the concrete implementation of `Marshaler`, inside the `MakeTestEncodingConfig` function. This can be easily overwritten by app developers if they so desire.

The ultimate goal will be to replace Amino JSON encoding with Protobuf encoding and thus have modules accept and/or extend `ProtoCodec`. Until then, Amino JSON is still provided for legacy use-cases. A handful of places in the Cosmos SDK still have Amino JSON hardcoded, such as the Legacy API REST endpoints and the `x/params` store. They are planned to be converted to Protobuf in a gradual manner.

### Module Codecs

For modules that do not require the ability to work with and serialize interfaces, the path to Protobuf migration is pretty straightforward. These modules simply migrate any existing types that are encoded and persisted via their concrete Amino codec to Protobuf and have their keeper accept a `Marshaler` that will be a `ProtoCodec`. This migration is simple as things will just work as-is.
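The shape of that keeper migration might be sketched as follows; this is stdlib-only, with a JSON-based `jsonCodec` standing in for the SDK's actual `AminoCodec`/`ProtoCodec`, and all names simplified for illustration:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Marshaler is a minimal stand-in for the codec interface keepers accept
// instead of a concrete Amino codec.
type Marshaler interface {
	MustMarshal(v interface{}) []byte
	MustUnmarshal(bz []byte, v interface{})
}

// jsonCodec plays the role of one concrete implementation; the real SDK
// provides AminoCodec and ProtoCodec, JSON just keeps this sketch runnable.
type jsonCodec struct{}

func (jsonCodec) MustMarshal(v interface{}) []byte {
	bz, err := json.Marshal(v)
	if err != nil {
		panic(err)
	}
	return bz
}

func (jsonCodec) MustUnmarshal(bz []byte, v interface{}) {
	if err := json.Unmarshal(bz, v); err != nil {
		panic(err)
	}
}

// Params is some module state the keeper persists.
type Params struct {
	BondDenom string `json:"bond_denom"`
}

// Keeper depends only on the interface, so swapping Amino for Protobuf is a
// wiring change in the app constructor, not a keeper change.
type Keeper struct{ cdc Marshaler }

func (k Keeper) encode(p Params) []byte { return k.cdc.MustMarshal(p) }
func (k Keeper) decode(bz []byte) Params {
	var p Params
	k.cdc.MustUnmarshal(bz, &p)
	return p
}

func main() {
	k := Keeper{cdc: jsonCodec{}}
	bz := k.encode(Params{BondDenom: "stake"})
	fmt.Println(k.decode(bz).BondDenom) // stake
}
```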
Note, any business logic that needs to encode primitive types like `bool` or `int64` should use [gogoprotobuf](https://github.com/cosmos/gogoproto) Value types.

Example:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
ts, err := gogotypes.TimestampProto(completionTime)
if err != nil {
	// ...
}

bz := cdc.MustMarshal(ts)
```

However, modules can vary greatly in purpose and design, and so we must support the ability for modules to encode and work with interfaces (e.g. `Account` or `Content`). These modules must define their own codec interface that extends `Marshaler`. These specific interfaces are unique to the module and will contain method contracts that know how to serialize the needed interfaces.

Example:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// x/auth/types/codec.go

type Codec interface {
	codec.Codec

	MarshalAccount(acc exported.Account) ([]byte, error)
	UnmarshalAccount(bz []byte) (exported.Account, error)

	MarshalAccountJSON(acc exported.Account) ([]byte, error)
	UnmarshalAccountJSON(bz []byte) (exported.Account, error)
}
```

### Usage of `Any` to encode interfaces

In general, module-level .proto files should define messages which encode interfaces using [`google.protobuf.Any`](https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/any.proto). After [extensive discussion](https://github.com/cosmos/cosmos-sdk/issues/6030), this was chosen as the preferred alternative to application-level `oneof`s as in our original protobuf design. The arguments in favor of `Any` can be summarized as follows:

* `Any` provides a simpler, more consistent client UX for dealing with interfaces than app-level `oneof`s that will need to be coordinated more carefully across applications.
Creating a generic transaction signing library using `oneof`s may be cumbersome, and critical logic may need to be reimplemented for each chain
* `Any` provides more resistance against human error than `oneof`
* `Any` is generally simpler to implement for both modules and apps

The main counter-argument to using `Any` centers around its additional space and possibly performance overhead. The space overhead could be dealt with using compression at the persistence layer in the future, and the performance impact is likely to be small. Thus, not using `Any` is seen as a premature optimization, with user experience as the higher order concern.

Note that, given the Cosmos SDK's decision to adopt the `Codec` interfaces described above, apps can still choose to use `oneof` to encode state and transactions, but it is not the recommended approach. If apps do choose to use `oneof`s instead of `Any`, they will likely lose compatibility with client apps that support multiple chains. Thus developers should think carefully about whether they care more about what is possibly a premature optimization or end-user and client developer UX.

### Safe usage of `Any`

By default, the [gogo protobuf implementation of `Any`](https://pkg.go.dev/github.com/cosmos/gogoproto/types) uses [global type registration](https://github.com/cosmos/gogoproto/blob/master/proto/properties.go#L540) to decode values packed in `Any` into concrete go types. This introduces a vulnerability where any malicious module in the dependency tree could register a type with the global protobuf registry and cause it to be loaded and unmarshaled by a transaction that referenced it in the `type_url` field.
To prevent this, we introduce a type registration mechanism for decoding `Any` values into concrete types through the `InterfaceRegistry` interface, which bears some similarity to type registration with Amino:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type InterfaceRegistry interface {
	// RegisterInterface associates protoName as the public name for the
	// interface passed in as iface
	// Ex:
	//   registry.RegisterInterface("cosmos_sdk.Msg", (*sdk.Msg)(nil))
	RegisterInterface(protoName string, iface interface{})

	// RegisterImplementations registers impls as concrete implementations of
	// the interface iface
	// Ex:
	//   registry.RegisterImplementations((*sdk.Msg)(nil), &MsgSend{}, &MsgMultiSend{})
	RegisterImplementations(iface interface{}, impls ...proto.Message)
}
```

In addition to serving as a whitelist, `InterfaceRegistry` can also serve to communicate the list of concrete types that satisfy an interface to clients.

In .proto files:

* fields which accept interfaces should be annotated with `cosmos_proto.accepts_interface` using the same fully-qualified name passed as `protoName` to `InterfaceRegistry.RegisterInterface`
* interface implementations should be annotated with `cosmos_proto.implements_interface` using the same fully-qualified name passed as `protoName` to `InterfaceRegistry.RegisterInterface`

In the future, `protoName`, `cosmos_proto.accepts_interface`, and `cosmos_proto.implements_interface` may be used via code generation, reflection &/or static linting.

The same struct that implements `InterfaceRegistry` will also implement an interface `InterfaceUnpacker` to be used for unpacking `Any`s:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type InterfaceUnpacker interface {
	// UnpackAny unpacks the value in any to the interface pointer passed in as
	// iface.
	// Note that the type in any must have been registered with
	// RegisterImplementations as a concrete type for that interface
	// Ex:
	//    var msg sdk.Msg
	//    err := ctx.UnpackAny(any, &msg)
	//    ...
	UnpackAny(any *Any, iface interface{}) error
}
```

Note that `InterfaceRegistry` usage does not deviate from standard protobuf usage of `Any`; it just introduces a security and introspection layer for Go usage. `InterfaceRegistry` will be a member of `ProtoCodec` described above. In order for modules to register interface types, app modules can optionally implement the following interface:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type InterfaceModule interface {
	RegisterInterfaceTypes(InterfaceRegistry)
}
```

The module manager will include a method to call `RegisterInterfaceTypes` on every module that implements it in order to populate the `InterfaceRegistry`.

### Using `Any` to encode state

The Cosmos SDK will provide support methods `MarshalInterface` and `UnmarshalInterface` to hide the complexity of wrapping interface types into `Any` and allow easy serialization.

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
import "github.com/cosmos/cosmos-sdk/codec"

// note: eviexported.Evidence is an interface type
func MarshalEvidence(cdc codec.BinaryCodec, e eviexported.Evidence) ([]byte, error) {
	return cdc.MarshalInterface(e)
}

func UnmarshalEvidence(cdc codec.BinaryCodec, bz []byte) (eviexported.Evidence, error) {
	var evi eviexported.Evidence
	err := cdc.UnmarshalInterface(&evi, bz)
	return evi, err
}
```

### Using `Any` in `sdk.Msg`s

A similar concept is to be applied for messages that contain interface fields.
For example, we can define `MsgSubmitEvidence` as follows where `Evidence` is an interface: ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // x/evidence/types/types.proto message MsgSubmitEvidence { bytes submitter = 1 [ (gogoproto.casttype) = "github.com/cosmos/cosmos-sdk/types.AccAddress" ]; google.protobuf.Any evidence = 2; } ``` Note that in order to unpack the evidence from `Any` we do need a reference to `InterfaceRegistry`. In order to reference evidence in methods like `ValidateBasic` which shouldn't have to know about the `InterfaceRegistry`, we introduce an `UnpackInterfaces` phase to deserialization which unpacks interfaces before they're needed. ### Unpacking Interfaces To implement the `UnpackInterfaces` phase of deserialization which unpacks interfaces wrapped in `Any` before they're needed, we create an interface that `sdk.Msg`s and other types can implement: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type UnpackInterfacesMessage interface { UnpackInterfaces(InterfaceUnpacker) error } ``` We also introduce a private `cachedValue interface{}` field onto the `Any` struct itself with a public getter `GetCachedValue() interface{}`. The `UnpackInterfaces` method is to be invoked during message deserialization right after `Unmarshal` and any interface values packed in `Any`s will be decoded and stored in `cachedValue` for reference later. Then unpacked interface values can safely be used in any code afterwards without knowledge of the `InterfaceRegistry` and messages can introduce a simple getter to cast the cached value to the correct interface type. This has the added benefit that unmarshaling of `Any` values only happens once during initial deserialization rather than every time the value is read. 
Also, when `Any` values are first packed (for instance in a call to `NewMsgSubmitEvidence`), the original interface value is cached so that unmarshaling isn't needed to read it again.

`MsgSubmitEvidence` could implement `UnpackInterfaces`, plus a convenience getter `GetEvidence`, as follows:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (msg MsgSubmitEvidence) UnpackInterfaces(ctx sdk.InterfaceRegistry) error {
	var evi eviexported.Evidence
	return ctx.UnpackAny(msg.Evidence, &evi)
}

func (msg MsgSubmitEvidence) GetEvidence() eviexported.Evidence {
	return msg.Evidence.GetCachedValue().(eviexported.Evidence)
}
```

### Amino Compatibility

Our custom implementation of `Any` can be used transparently with Amino if used with the proper codec instance. What this means is that interfaces packed within `Any`s will be amino marshaled like regular Amino interfaces (assuming they have been registered properly with Amino). In order for this functionality to work:

* **all legacy code must use `*codec.LegacyAmino` instead of `*amino.Codec` which is now a wrapper which properly handles `Any`**
* **all new code should use `Marshaler` which is compatible with both amino and protobuf**
* Also, before v0.39, `codec.Codec` will be renamed to `codec.LegacyAmino`.

### Why Wasn't X Chosen Instead

For a more complete comparison to alternative protocols, see [here](https://codeburst.io/json-vs-protocol-buffers-vs-flatbuffers-a4247f8bda6f).

### Cap'n Proto

While [Cap'n Proto](https://capnproto.org/) does seem like an advantageous alternative to Protobuf due to its native support for interfaces/generics and built-in canonicalization, it lacks Protobuf's rich client ecosystem and is a bit less mature.
### FlatBuffers

[FlatBuffers](https://google.github.io/flatbuffers/) is also a potentially viable alternative, with the primary difference being that FlatBuffers does not need a parsing/unpacking step to a secondary representation before you can access data, often coupled with per-object memory allocation.

However, adopting it would require significant research and a full understanding of the scope of the migration and the path forward, which isn't immediately clear. In addition, FlatBuffers aren't designed for untrusted inputs.

## Future Improvements & Roadmap

In the future, we may consider a compression layer right above the persistence layer which doesn't change tx or merkle tree hashes, but reduces the storage overhead of `Any`. In addition, we may adopt protobuf naming conventions which make type URLs a bit more concise while remaining descriptive.

Additional code generation support around the usage of `Any` is something that could also be explored in the future to make the UX for Go developers more seamless.

## Consequences

### Positive

* Significant performance gains.
* Supports backward and forward type compatibility.
* Better support for cross-language clients.

### Negative

* Learning curve required to understand and implement Protobuf messages.
* Slightly larger message size due to use of `Any`, although this could be offset by a compression layer in the future.

### Neutral

## References

1. [Link](https://github.com/cosmos/cosmos-sdk/issues/4977)
2. [Link](https://github.com/cosmos/cosmos-sdk/issues/5444)

# Cosmos SDK Repository Source: https://docs.cosmos.network/sdk/latest/reference/cosmos-sdk-repo

# Tutorial Example Repository Source: https://docs.cosmos.network/sdk/latest/reference/example-repo

# Node Tutorial Source: https://docs.cosmos.network/sdk/latest/tutorials Version: v0.54

This guide covers everything you need to run, configure, and maintain a Cosmos SDK node.
Whether you're setting up a local development node, deploying to a testnet, or running production infrastructure, you'll find step-by-step instructions and best practices. The node tutorial uses the `simapp` example application and its corresponding CLI binary `simd` as the blockchain application and CLI. You can view the source code for `simapp` [on GitHub](https://github.com/cosmos/cosmos-sdk/tree/main/simapp).

This guide covers:

* Learn the fundamentals of running a node, from initial setup through keyring management and starting your node.
* Connect to your node and query data using CLI, gRPC, or REST endpoints.
* Create, sign, and broadcast transactions to your node using various methods.
* Configure and deploy nodes for testnet environments and production networks.
* Monitor your node's health and performance using built-in telemetry and metrics collection.
* Manage chain upgrades and migrations safely using in-place upgrade mechanisms and Cosmovisor.

# Tutorial Intro Source: https://docs.cosmos.network/sdk/latest/tutorials/example/00-overview Build a module from scratch, wire it into a chain, and run it locally, all in minutes.

The Cosmos SDK is a developer-first framework for building custom blockchains. This tutorial series shows you how to build a module from scratch, wire it into a chain, and run it locally, all in minutes. By the end, you will have:

* A working Cosmos SDK chain running on your machine
* A custom module you built yourself, wired into the chain
* A clear mental model of how modules, keepers, messages, and queries fit together

This series starts from zero; you don't need any prior Cosmos SDK experience to follow along.

## The example repo

All tutorials in this series are based on [cosmos/example](https://github.com/cosmos/example), a reference Cosmos SDK chain built around a custom `x/counter` module. The repo has two main branches:

* `main`: the complete chain with the full `x/counter` module wired in.
This is used in the [Quickstart guide](/sdk/latest/tutorials/example/02-quickstart). * `tutorial/start`: the same chain without the counter module. The `x/counter` directory and its app wiring are stripped out so you can build the module from scratch by [following the tutorial](/sdk/latest/tutorials/example/03-build-a-module). If you want to follow along and build the module yourself, start from `tutorial/start`. If you want to browse the finished implementation first, use `main`. ## What's in this series 1. [Prerequisites](/sdk/latest/tutorials/example/01-prerequisites): Install Go, Make, Docker, and Git. Clone the repo and get familiar with the layout. 2. [Quickstart](/sdk/latest/tutorials/example/02-quickstart): Build and run the chain in minutes. Submit a transaction, query the result, and see the counter module in action before you build it yourself. 3. [Build a Module from Scratch](/sdk/latest/tutorials/example/03-build-a-module): Build a minimal counter module step by step: proto definitions, keeper, message server, query server, and app wiring. Start here if you want to understand how a module comes together. 4. [Full Module Walkthrough](/sdk/latest/tutorials/example/04-counter-walkthrough): Walk through the complete `x/counter` implementation on `main`. Covers everything added on top of the minimal module: params, governance-gated authority, validation, fees, sentinel errors, telemetry, AutoCLI, simulation, block hooks, and a full unit test suite. 5. [Run and Test](/sdk/latest/tutorials/example/05-run-and-test): Learn the full development workflow: running a local chain, using the CLI, and working with the three layers of testing: unit tests, end-to-end tests, and simulation. # Prerequisites Source: https://docs.cosmos.network/sdk/latest/tutorials/example/01-prerequisites Install dependencies Before starting the tutorial, make sure you have the following tools installed. This tutorial is intended for macOS and Linux systems. 
Other systems may have additional requirements. ## Go The example chain requires Go 1.25 or higher. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} go version # go version go1.25.0 linux/amd64 # Linux # go version go1.25.0 darwin/arm64 # macOS ``` If Go is not installed, download it from [go.dev/dl](https://go.dev/dl). ### Configure Go Environment Variables After installing Go, make sure `$GOPATH/bin` is on your `PATH` so installed binaries (like `exampled`) are accessible. Open your shell config file (`~/.zshrc` on macOS or `~/.bashrc` on Linux) and add: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} export GOPATH=$HOME/go export PATH=$PATH:$GOPATH/bin ``` Then apply the changes: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} source ~/.zshrc # macOS source ~/.bashrc # Linux ``` Verify: `go env GOPATH` ## Make Make is used to run build and development commands throughout the tutorial. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} make --version # GNU Make 3.81 ``` Make is pre-installed on most Linux and macOS systems. If it is missing: * **macOS:** `xcode-select --install` * **Linux (Debian/Ubuntu):** `sudo apt install build-essential` ## Docker Docker is required to run `make proto-gen`, which generates Go code from the module's proto files using [buf](https://buf.build). ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} docker --version # Docker version 29.2.1 ``` Download Docker from [docs.docker.com/get-docker](https://docs.docker.com/get-docker). Docker must be running before you execute `make proto-gen`. 
## Git

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
git --version
# git version 2.52.0
```

## Clone the repository

Clone [cosmos/example](https://github.com/cosmos/example) and navigate into it:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
git clone https://github.com/cosmos/example
cd example
```

The repo has two branches used in this tutorial series:

* `main` — the complete chain with the full `x/counter` module wired in.
* `tutorial/start` — the same chain with the counter module stripped out. Start here if you want to build the module yourself from scratch.

## Repository Layout

After cloning, the repository looks like this:

```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
example/
├── exampled/     # Binary entrypoint (main.go + CLI root command)
├── app.go        # Chain application, module wiring lives here
├── proto/        # Proto definitions for all modules
├── x/            # Module implementations
│   └── counter/  # The example counter module
├── tests/        # E2E and integration tests
├── scripts/      # Local node and proto generation scripts
├── docs/         # This tutorial series
└── Makefile      # Build, test, and dev commands
```

## Where things live

The tutorials in this section will walk you through the most common kinds of chain changes and show you where they usually live in the repo:

* Add or modify a module: `x/<module>/` and `proto/`
* Wire a module into the chain: `app.go`
* Change the binary or CLI: `exampled/`
* Run the chain or tests: `Makefile` targets

***

Next: [Quickstart →](/sdk/latest/tutorials/example/02-quickstart)

# Chain Quickstart Source: https://docs.cosmos.network/sdk/latest/tutorials/example/02-quickstart Start a chain, submit a transaction, and query the result in minutes

Building on Cosmos is simple: you can start a chain with a [single command](#start-the-chain).
This quickstart gets you from zero to a running chain, a submitted transaction, and a queried result in minutes. `exampled` is a simple Cosmos SDK chain that shows the core pieces of a working app chain. It includes the basic building-block modules for accounts, bank, staking, distribution, slashing, governance, and more, plus a custom `x/counter` module. In the next tutorials, you'll build a simple version of that module yourself and then walk through the full implementation. Before continuing, make sure you have completed the [Prerequisites](/sdk/latest/tutorials/example/01-prerequisites) to get your environment set up. ## Install the binary Run the following to compile the `exampled` binary and place it on your `$PATH`. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} make install ``` Verify the install by running: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} exampled version ``` You can also run the following to see all available node CLI commands: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} exampled ``` ## Start the chain Run the following to start a single-node local chain. It handles all setup automatically: initializes the chain data, creates test accounts, and starts the node. Leave it running in this terminal. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} make start ``` ## Query the counter Open a second terminal and query the current count: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} exampled query counter count ``` You should see the following output, which means the counter is starting at `0`: ```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} {} ``` You can also query the module parameters: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} exampled query counter params ``` This shows that the fee to increment the counter is stored as a module parameter. The base coin denomination for the `exampled` chain is `stake`. ```yaml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} params: add_cost: - amount: "100" denom: stake max_add_value: "100" ``` ## Submit an add transaction Send an `Add` transaction to increment the counter. This charges a fee from the funded `alice` account you are sending the transaction from: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} exampled tx counter add 5 --from alice --chain-id demo --yes ``` ## Query the counter again After submitting the transaction, query the counter again to see the updated module state: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} exampled query counter count ``` You should see the following: ``` count: "5" ``` Congratulations! You just ran a blockchain, submitted a transaction, and queried module state. ## Next steps In the following tutorials, you will: 1. Build a minimal version of this module from scratch to understand the core pattern 2. Walk through the full `x/counter` module example to see what it adds 3. 
See how modules are wired into a chain and how to run the full test suite Next: [Build a Module from Scratch →](/sdk/latest/tutorials/example/03-build-a-module) # Build a Module from Scratch Source: https://docs.cosmos.network/sdk/latest/tutorials/example/03-build-a-module Build a simple counter module from scratch in minutes In [quickstart](/sdk/latest/tutorials/example/02-quickstart), you started a chain and submitted a transaction to increase the counter. In this tutorial, you'll build a simple counter module from scratch. It follows the same overall structure as the full `x/counter`, but uses a stripped-down version so you can focus on the core steps of building and wiring a module yourself. By the end, you'll have built a working module and wired it into a running chain. For a deeper dive into how modules work in the Cosmos SDK, see [Intro to Modules](/sdk/latest/learn/concepts/modules). Before continuing, you must follow the [Prerequisites guide](/sdk/latest/tutorials/example/01-prerequisites) to make sure everything is installed. ## Making modules The Cosmos SDK makes it easy to build custom business logic directly into your chain through modules. 
Every module follows the same overall pattern: ```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} proto files → code generation → keeper → msg server → query server → module.go → app wiring ``` First, you'll define what the module does: * Define messages: users can send `Add` to increase the counter * Define queries: users can query `Count` to read the current value * Define genesis state: the module starts with a count of `0` Then you'll wire that behavior into the SDK: * Run `proto-gen` to generate the Go types and interfaces * Implement your business logic in a `keeper` to store the count and update it * Implement `MsgServer` and `QueryServer` to pass messages and queries into the keeper * Register the module in `module.go` * Wire it into the chain in `app.go` You'll build the following module structure: ```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} proto/example/counter/v1/ ├── tx.proto # Transaction message and Msg service definition ├── query.proto # Query message and Query service definition └── genesis.proto # Genesis state definition x/counter/ ├── keeper/ │ ├── keeper.go # Keeper struct and state methods │ ├── msg_server.go # MsgServer implementation │ └── query_server.go # QueryServer implementation ├── types/ │ ├── keys.go # Module name and store key constants │ ├── codec.go # Interface registration │ └── *.pb.go # Generated from proto — do not edit ├── module.go # AppModule wiring └── autocli.go # CLI command definitions ``` ## Step 1: Setup This tutorial uses the `tutorial/start` branch, which is a blank template for you to create the module from scratch and wire it into `app.go`. 1. Clone the repo if you haven't already: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} git clone https://github.com/cosmos/example cd example ``` 2. 
Check out the `tutorial/start` branch and make the new module directories: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} git checkout tutorial/start mkdir -p x/counter/keeper x/counter/types proto/example/counter/v1 ``` You should see empty placeholder directories at `x/counter/` and `proto/example/counter/v1/`. ## Step 2: Proto files Proto files are the source of truth for the module's public API. You define messages and services here. For a deeper look at how protobuf is used across modules, see [Encoding and Protobuf](/sdk/latest/learn/concepts/encoding#how-protobuf-is-used-in-modules). In this tutorial, the counter module stores one number, `Add` increases it by the amount the user submits, and the query returns the current value. First, create the three proto files: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} touch proto/example/counter/v1/tx.proto \ proto/example/counter/v1/query.proto \ proto/example/counter/v1/genesis.proto ``` Then add the following contents to each file. ### tx.proto This is the first module file you define. It declares the transaction message shape for `Add`: what the user sends to increment the counter, and what the module returns after handling it. To learn more about how messages are defined and routed, see [Messages](/sdk/latest/learn/concepts/transactions#messages). Add the following code to `tx.proto`. ```proto theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} syntax = "proto3"; // Matches the module's protobuf namespace. package example.counter; // Provides Cosmos SDK message annotations like signer and service markers. import "cosmos/msg/v1/msg.proto"; // Generated Go types are written into x/counter/types. option go_package = "github.com/cosmos/example/x/counter/types"; service Msg { // Marks this as a transaction service, not a normal gRPC service. 
option (cosmos.msg.v1.service) = true; // Add is the one transaction this minimal module supports. rpc Add(MsgAddRequest) returns (MsgAddResponse); } message MsgAddRequest { // The sender signs this message. option (cosmos.msg.v1.signer) = "sender"; string sender = 1; uint64 add = 2; } message MsgAddResponse { // Return the new counter value after the add succeeds. uint64 updated_count = 1; } ``` ### query.proto This file defines the read-only gRPC query service and the response type for fetching the current count. To learn more about how queries differ from transactions, see [Queries](/sdk/latest/learn/concepts/transactions#queries). Add the following code to `query.proto`. ```proto theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} syntax = "proto3"; // Matches the module's protobuf namespace. package example.counter; // Enables the REST gateway route annotation below. import "google/api/annotations.proto"; // Generated Go types are written into x/counter/types. option go_package = "github.com/cosmos/example/x/counter/types"; service Query { rpc Count(QueryCountRequest) returns (QueryCountResponse) { // Exposes this query over the HTTP API as well as gRPC. option (google.api.http).get = "/example/counter/v1/count"; } } // Empty because this query only needs the module's current state. message QueryCountRequest {} message QueryCountResponse { // The current counter value. uint64 count = 1; } ``` ### genesis.proto This file defines the data the module stores in genesis so the counter can be initialized when the chain starts. Add the following code to `genesis.proto`. ```proto theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} syntax = "proto3"; // Matches the module's protobuf namespace. package example.counter; // Generated Go types are written into x/counter/types. 
option go_package = "github.com/cosmos/example/x/counter/types"; message GenesisState { // The counter value to load when the chain initializes. uint64 count = 1; } ``` ## Step 3: Generate Code 1. Make sure Docker is running. 2. The first time you run proto-gen you need to build the builder image. Run the following commands: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} make proto-image-build make proto-gen ``` This compiles the proto files using [buf](https://buf.build) inside Docker to produce the Go interfaces you will then implement. The generated files will appear in `x/counter/types/`: ```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} x/counter/types/ ├── tx.pb.go # MsgAddRequest, MsgAddResponse, MsgServer interface ├── query.pb.go # QueryCountRequest, QueryCountResponse, QueryServer interface ├── query.pb.gw.go # REST gateway registration └── genesis.pb.go # GenesisState ``` > **Do not edit generated files.** Changes to public types belong in the proto files. Re-run `make proto-gen` after any proto change. The most important generated output is the `MsgServer` and `QueryServer` interfaces. In Steps 5 and 6, you'll implement them in `keeper/msg_server.go` and `keeper/query_server.go`. ## Step 4: Types Next, you'll define the module types and identifiers in `x/counter/types` that the rest of the module depends on. Create the two files for this section: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} touch x/counter/types/keys.go \ x/counter/types/codec.go ``` Then add the following contents to each file. ### keys.go This file defines the module's basic identifiers: the module name used throughout the SDK, and the store key used to claim the module's KV store namespace. 
For more on how modules access state through store keys, see [How modules access state](/sdk/latest/learn/concepts/store#how-modules-access-state). ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // x/counter/types/keys.go package types const ( // ModuleName is the name the SDK uses to refer to this module. ModuleName = "counter" // StoreKey is the key for this module's KV store. StoreKey = ModuleName ) ``` `ModuleName` identifies the module throughout the SDK (routing, events, governance). `StoreKey` is the key used to claim the module's isolated namespace in the chain's KV store (set equal to `ModuleName` by convention). ### Interface Registration This file registers your generated message types with the SDK interface registry so the application can decode and route your module's transactions correctly. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // x/counter/types/codec.go package types import ( codectypes "github.com/cosmos/cosmos-sdk/codec/types" sdk "github.com/cosmos/cosmos-sdk/types" "github.com/cosmos/cosmos-sdk/types/msgservice" ) func RegisterInterfaces(registry codectypes.InterfaceRegistry) { // Register MsgAddRequest as an sdk.Msg so the app can decode it from transactions. registry.RegisterImplementations((*sdk.Msg)(nil), &MsgAddRequest{}, ) // Register the generated Msg service description for routing. msgservice.RegisterMsgServiceDesc(registry, &_Msg_serviceDesc) } ``` `_Msg_serviceDesc` is generated by `make proto-gen` — it describes the `Msg` gRPC service defined in `tx.proto`. ## Step 5: Keeper In this step, you create the keeper, which is the part of the module that owns the counter state and provides the methods the rest of the module will call. For a conceptual overview of the keeper's role, see [Keeper](/sdk/latest/learn/concepts/modules#keeper). 
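Before creating the real keeper, it may help to see what `collections.Item[uint64]` does conceptually. The following stdlib-only sketch is illustrative, not the actual `cosmossdk.io/collections` implementation: a single-byte prefix identifies the value inside the module's KV namespace, the value is big-endian encoded, and a missing entry reads as zero, mirroring the convention the keeper below relies on.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// kvStore is a toy stand-in for the module's isolated KV namespace.
type kvStore map[string][]byte

// item mimics collections.Item[uint64]: a fixed single-byte prefix
// identifies this value inside the module's namespace.
type item struct {
	store  kvStore
	prefix byte
}

func (i item) key() string { return string([]byte{i.prefix}) }

// Set writes the value as 8 big-endian bytes under the prefix key.
func (i item) Set(v uint64) {
	buf := make([]byte, 8)
	binary.BigEndian.PutUint64(buf, v)
	i.store[i.key()] = buf
}

// Get returns (0, false) when the item has never been written,
// mirroring the ErrNotFound-as-zero convention in the keeper.
func (i item) Get() (uint64, bool) {
	bz, ok := i.store[i.key()]
	if !ok {
		return 0, false
	}
	return binary.BigEndian.Uint64(bz), true
}

func main() {
	store := kvStore{}
	counter := item{store: store, prefix: 0}

	v, found := counter.Get()
	fmt.Println(v, found) // 0 false: a fresh chain reads the counter as zero

	counter.Set(v + 5)
	v, found = counter.Get()
	fmt.Println(v, found) // 5 true
}
```

A module with several stored values would use distinct prefixes (0, 1, 2, ...) the same way the real `collections.NewPrefix(0)` does.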
Create the keeper file: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} touch x/counter/keeper/keeper.go ``` Then add the following contents. This file defines the keeper struct, sets up the counter's storage item, and implements the core state methods for reading, updating, and loading the counter at genesis. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // x/counter/keeper/keeper.go package keeper import ( "context" "errors" "cosmossdk.io/collections" "cosmossdk.io/core/store" "github.com/cosmos/cosmos-sdk/codec" "github.com/cosmos/example/x/counter/types" ) type Keeper struct { Schema collections.Schema counter collections.Item[uint64] } func NewKeeper(storeService store.KVStoreService, cdc codec.Codec) *Keeper { sb := collections.NewSchemaBuilder(storeService) k := Keeper{ // Store the counter under prefix 0 in this module's KV store. counter: collections.NewItem(sb, collections.NewPrefix(0), "counter", collections.Uint64Value), } schema, err := sb.Build() if err != nil { panic(err) } k.Schema = schema return &k } func (k *Keeper) GetCount(ctx context.Context) (uint64, error) { count, err := k.counter.Get(ctx) // Treat missing state as zero so a fresh chain starts cleanly. if err != nil && !errors.Is(err, collections.ErrNotFound) { return 0, err } return count, nil } func (k *Keeper) AddCount(ctx context.Context, amount uint64) (uint64, error) { count, err := k.GetCount(ctx) if err != nil { return 0, err } // Increment the current count and write it back to state. 
newCount := count + amount return newCount, k.counter.Set(ctx, newCount) } func (k *Keeper) InitGenesis(ctx context.Context, gs *types.GenesisState) error { return k.counter.Set(ctx, gs.Count) } func (k *Keeper) ExportGenesis(ctx context.Context) (*types.GenesisState, error) { count, err := k.GetCount(ctx) if err != nil { return nil, err } return &types.GenesisState{Count: count}, nil } ``` `collections.Item[uint64]` is a typed KV store entry; the `collections` package handles encoding and namespacing. `GetCount` treats `ErrNotFound` as zero so the counter starts at zero without explicit initialization. > **State layout** > > * `StoreKey` (`"counter"`) is the module's isolated namespace within the chain's global KV store. No other module can read or write this namespace. > * `collections.NewPrefix(0)` is a single-byte prefix that identifies the `counter` item within the module's namespace. A module with multiple items would use `NewPrefix(0)`, `NewPrefix(1)`, etc. to keep them separate. > * `ErrNotFound` treated as zero means the keeper never needs to explicitly set an initial value — the first `GetCount` call on a fresh chain returns `0` by convention. ## Step 6: MsgServer In this step, you implement the transaction handler for the generated `MsgServer` interface. This is the code path that runs when a user submits `tx counter add`. For a conceptual overview of message execution, see [Message execution](/sdk/latest/learn/concepts/modules#message-execution-msgserver). Create the message server file: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} touch x/counter/keeper/msg_server.go ``` Then add the following contents. This file implements the generated `MsgServer` interface and forwards the `Add` transaction to the keeper's `AddCount` method. 
```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // x/counter/keeper/msg_server.go package keeper import ( "context" "github.com/cosmos/example/x/counter/types" ) type msgServer struct { *Keeper } func NewMsgServerImpl(k *Keeper) types.MsgServer { return &msgServer{k} } func (m msgServer) Add(ctx context.Context, req *types.MsgAddRequest) (*types.MsgAddResponse, error) { // Delegate the state update to the keeper. newCount, err := m.AddCount(ctx, req.GetAdd()) if err != nil { return nil, err } // Return the updated count back to the caller. return &types.MsgAddResponse{UpdatedCount: newCount}, nil } ``` `msgServer` embeds `*Keeper` and delegates directly to `AddCount`. The handler itself contains no business logic. ## Step 7: QueryServer In this step, you implement the read-only query handler for the generated `QueryServer` interface. This is the code path that runs when someone queries the current counter value. For more on how modules expose queries, see [Queries](/sdk/latest/learn/concepts/modules#queries). Create the query server file: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} touch x/counter/keeper/query_server.go ``` Then add the following contents. This file implements the generated `QueryServer` interface and returns the current counter value from the keeper. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // x/counter/keeper/query_server.go package keeper import ( "context" "github.com/cosmos/example/x/counter/types" ) type queryServer struct { *Keeper } func NewQueryServer(k *Keeper) types.QueryServer { return &queryServer{k} } func (q queryServer) Count(ctx context.Context, _ *types.QueryCountRequest) (*types.QueryCountResponse, error) { // Read the current count from state and return it in the query response. 
count, err := q.GetCount(ctx) if err != nil { return nil, err } return &types.QueryCountResponse{Count: count}, nil } ``` ## Step 8: module.go In this step, you connect your keeper and generated services to the Cosmos SDK module framework so the application knows how to initialize the module, expose its query routes, and register its transaction handlers. Create the module file: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} touch x/counter/module.go ``` Then add the following contents. This file defines the app module types and wires your keeper into genesis handling, service registration, and gRPC gateway registration. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // x/counter/module.go package counter import ( "context" "encoding/json" "cosmossdk.io/core/appmodule" "github.com/cosmos/cosmos-sdk/client" "github.com/cosmos/cosmos-sdk/codec" codecTypes "github.com/cosmos/cosmos-sdk/codec/types" sdk "github.com/cosmos/cosmos-sdk/types" "github.com/cosmos/cosmos-sdk/types/module" "github.com/grpc-ecosystem/grpc-gateway/runtime" "github.com/cosmos/example/x/counter/keeper" countertypes "github.com/cosmos/example/x/counter/types" ) var ( // Compile-time checks that AppModule implements the required module interfaces. _ appmodule.AppModule = AppModule{} _ module.HasConsensusVersion = AppModule{} _ module.HasGenesis = AppModule{} _ module.HasServices = AppModule{} ) type AppModuleBasic struct { cdc codec.Codec } func (a AppModuleBasic) Name() string { return countertypes.ModuleName } func (a AppModuleBasic) RegisterLegacyAminoCodec(*codec.LegacyAmino) {} func (a AppModuleBasic) RegisterInterfaces(registry codecTypes.InterfaceRegistry) { countertypes.RegisterInterfaces(registry) } func (a AppModuleBasic) DefaultGenesis(cdc codec.JSONCodec) json.RawMessage { // Start the module with a zero counter by default. 
return cdc.MustMarshalJSON(&countertypes.GenesisState{Count: 0}) } func (a AppModuleBasic) ValidateGenesis(cdc codec.JSONCodec, _ client.TxEncodingConfig, bz json.RawMessage) error { gs := countertypes.GenesisState{} return cdc.UnmarshalJSON(bz, &gs) } func (a AppModuleBasic) RegisterGRPCGatewayRoutes(clientCtx client.Context, mux *runtime.ServeMux) { // Expose the Query service through the HTTP gateway. if err := countertypes.RegisterQueryHandlerClient(context.Background(), mux, countertypes.NewQueryClient(clientCtx)); err != nil { panic(err) } } type AppModule struct { AppModuleBasic keeper *keeper.Keeper } func NewAppModule(cdc codec.Codec, k *keeper.Keeper) AppModule { return AppModule{AppModuleBasic: AppModuleBasic{cdc: cdc}, keeper: k} } func (a AppModule) IsOnePerModuleType() {} func (a AppModule) IsAppModule() {} func (a AppModule) ConsensusVersion() uint64 { return 1 } func (a AppModule) RegisterServices(cfg module.Configurator) { // Connect the generated service interfaces to your keeper-backed implementations. countertypes.RegisterMsgServer(cfg.MsgServer(), keeper.NewMsgServerImpl(a.keeper)) countertypes.RegisterQueryServer(cfg.QueryServer(), keeper.NewQueryServer(a.keeper)) } func (a AppModule) InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, bz json.RawMessage) { gs := &countertypes.GenesisState{} cdc.MustUnmarshalJSON(bz, gs) // Load the initial counter value into state at chain start. if err := a.keeper.InitGenesis(ctx, gs); err != nil { panic(err) } } func (a AppModule) ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) json.RawMessage { gs, err := a.keeper.ExportGenesis(ctx) if err != nil { panic(err) } // Write the current counter value back out for exports. return cdc.MustMarshalJSON(gs) } ``` The `var _ interface = Struct{}` block at the top is a Go compile-time check — if the struct is missing any required method, the build fails immediately. `RegisterServices` is the most important method. 
It connects the generated server interfaces to your implementations, making them reachable from the SDK's message and query routers. ## Step 9: AutoCLI In this step, you define the CLI metadata for your module. AutoCLI reads this configuration together with your proto services and generates the `exampled query counter` and `exampled tx counter` commands automatically. Create the AutoCLI file: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} touch x/counter/autocli.go ``` Then add the following contents. This file tells `AutoCLI` how to expose the `Count` query and `Add` transaction as simple command-line commands. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // x/counter/autocli.go package counter import ( autocliv1 "cosmossdk.io/api/cosmos/autocli/v1" ) func (a AppModule) AutoCLIOptions() *autocliv1.ModuleOptions { return &autocliv1.ModuleOptions{ Query: &autocliv1.ServiceCommandDescriptor{ Service: "example.counter.Query", RpcCommandOptions: []*autocliv1.RpcCommandOptions{ // exampled query counter count {RpcMethod: "Count", Use: "count", Short: "Query the current counter value"}, }, }, Tx: &autocliv1.ServiceCommandDescriptor{ Service: "example.counter.Msg", RpcCommandOptions: []*autocliv1.RpcCommandOptions{ // exampled tx counter add 4 --from alice {RpcMethod: "Add", Use: "add [amount]", Short: "Add to the counter", PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ProtoField: "add"}}}, }, }, } } ``` `PositionalArgs` maps the first CLI argument to the `add` field in `MsgAddRequest`, so `add 4` works instead of `add --add 4`. ## Step 10: Wire into app.go In this step, you wire your new module into the application so the chain creates its store, constructs its keeper, and includes it in module startup and genesis handling. For a full explanation of what `app.go` does and why the wiring order matters, see [app.go Overview](/sdk/latest/learn/concepts/app-go). 
Open `app.go` and find each marker comment. Paste the code directly below it. ### 1. Imports Add the counter module, keeper, and shared types imports to `app.go`. Find the comment in `app.go` and add the code directly below it. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // counter tutorial app wiring 1: add counter imports below counter "github.com/cosmos/example/x/counter" counterkeeper "github.com/cosmos/example/x/counter/keeper" countertypes "github.com/cosmos/example/x/counter/types" ``` ### 2. Keeper Field Store the counter keeper on `ExampleApp` so the rest of the app can reference it. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // counter tutorial app wiring 2: add the counter keeper field below CounterKeeper *counterkeeper.Keeper ``` ### 3. Store Key Give the counter module its own KV store namespace. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // counter tutorial app wiring 3: add the counter store key below countertypes.StoreKey, ``` ### 4. Keeper Instantiation Construct the counter keeper using the module store and app codec. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // counter tutorial app wiring 4: create the counter keeper below app.CounterKeeper = counterkeeper.NewKeeper( runtime.NewKVStoreService(keys[countertypes.StoreKey]), appCodec, ) ``` ### 5. Module Manager Register the counter module with the app's module manager. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // counter tutorial app wiring 5: register the counter module below counter.NewAppModule(appCodec, app.CounterKeeper), ``` ### 6. Genesis Order Include the counter module when the app initializes state from genesis. 
```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // counter tutorial app wiring 6: add the counter module to genesis order below countertypes.ModuleName, ``` ### 7. Export Order Include the counter module when the app exports state back out to genesis. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // counter tutorial app wiring 7: add the counter module to export order below countertypes.ModuleName, ``` ## Step 11: Build Run the following to compile the app and make sure the new module wiring is valid before you try to run the chain. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} go build ./... ``` Fix any compilation errors before continuing. ## Step 12: Test your module Now you'll run the app locally and use one transaction plus one query to confirm the module works end-to-end. ### Start the chain First, install the binary and start the demo chain. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} make install make start ``` This builds and installs `exampled` and then runs `scripts/local_node.sh`, which: * resets the local chain data * initializes genesis * creates and funds the `alice` and `bob` test accounts * creates a validator transaction * starts the chain You'll see the chain running and it should start producing blocks. 
### Submit a transaction Open a second terminal and submit a transaction that adds `4` to the counter: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} exampled tx counter add 4 --from alice --chain-id demo --yes ``` If the transaction succeeds, the response should include `code: 0`, which means the chain accepted and executed the transaction without an application error: ``` code: 0 ``` ### Query the chain Query the counter to confirm the stored value changed using the query command that `AutoCLI` generated earlier: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} exampled query counter count ``` You should see the following output: ```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} count: "4" ``` Congratulations, you've just created a Cosmos module from scratch and wired it into a real chain! If you are planning to build a production module, see [Module Design Considerations](/sdk/latest/guides/module-design/module-design-considerations) for guidance on state structure, message surface, dependencies, and upgrade planning before you ship. ## Next steps The simple counter module you built here follows the same structure as the full `x/counter` example in the `main` branch. Next, you'll see how the full module extends that foundation with features like params, fee collection, tests, and more. 
Next: [Full Counter Module Walkthrough →](/sdk/latest/tutorials/example/04-counter-walkthrough) # Full Counter Module Walkthrough Source: https://docs.cosmos.network/sdk/latest/tutorials/example/04-counter-walkthrough If you came here from the module building tutorial, switch back to the `main` branch of the [`cosmos/example` repo](https://github.com/cosmos/example) first: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} git checkout main ``` The minimal counter you built in the previous tutorial captures the core SDK module pattern. The full `x/counter` module example in `main` follows the same pattern and adds several features on top. This walkthrough is meant to show you exactly what each feature is, what it does, and how you can add a similar feature to any module. ## Minimal vs full counter The full counter in the `main` branch adds quite a bit of functionality to the minimal tutorial counter. | Feature | minimal x/counter | full x/counter | | -------------------------------------------------- | ----------------- | ----------------------------------- | | [State](#params-and-authority) | `count` | `count` + `params` | | [Messages](#params-and-authority) | `Add` | `Add` + `UpdateParams` | | [Queries](#params-and-authority) | `Count` | `Count` + `Params` | | [Validation](#expected-keepers-and-fee-collection) | None | `MaxAddValue` limit, overflow check | | [Fees](#expected-keepers-and-fee-collection) | None | `AddCost` charged via bank module | | [Authority](#params-and-authority) | None | Governance-gated param updates | | [Errors](#sentinel-errors) | Generic | Named sentinel errors | | [Telemetry](#telemetry) | None | OpenTelemetry counter metric | | [CLI](#autocli) | AutoCLI | AutoCLI + `EnhanceCustomCommand` | | [Simulation](#simulation) | None | `simsx` weighted operations | | [Block hooks](#beginblock-and-endblock) | None | `BeginBlock` + `EndBlock` | | [Unit tests](#unit-tests) | None | Full keeper/msg/query 
test suite | The wiring code in [`msg_server.go`](https://github.com/cosmos/example/blob/main/x/counter/keeper/msg_server.go), [`query_server.go`](https://github.com/cosmos/example/blob/main/x/counter/keeper/query_server.go), [`module.go`](https://github.com/cosmos/example/blob/main/x/counter/module.go), and [`types/`](https://github.com/cosmos/example/tree/main/x/counter/types) is structurally similar between the two. Much of the new keeper logic lives in a single method: `AddCount` in [`keeper.go`](https://github.com/cosmos/example/blob/main/x/counter/keeper/keeper.go). ## Params and authority A [module param](/sdk/latest/learn/concepts/modules#params) is on-chain configuration that controls how the module behaves without changing the code. The full counter adds a `Params` type that lets the chain governance configure the module's behavior at runtime. In the full module, params control how large an `Add` can be and how much it costs. ### Where the code lives * [`proto/example/counter/v1/state.proto`](https://github.com/cosmos/example/blob/main/proto/example/counter/v1/state.proto) defines the `Params` type * [`proto/example/counter/v1/tx.proto`](https://github.com/cosmos/example/blob/main/proto/example/counter/v1/tx.proto) adds the `UpdateParams` message * [`proto/example/counter/v1/query.proto`](https://github.com/cosmos/example/blob/main/proto/example/counter/v1/query.proto) adds the `Params` query * [`x/counter/keeper/keeper.go`](https://github.com/cosmos/example/blob/main/x/counter/keeper/keeper.go) stores the params and authority * [`x/counter/keeper/msg_server.go`](https://github.com/cosmos/example/blob/main/x/counter/keeper/msg_server.go) checks the authority on updates * [`x/counter/keeper/query_server.go`](https://github.com/cosmos/example/blob/main/x/counter/keeper/query_server.go) returns the current params ### Try it You can inspect the current params with: ```bash 
theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} exampled query counter params ``` ### Add this to your module To add runtime-configurable params to your own module, make these changes: 1. Define a `Params` type in proto 2. Add a privileged `UpdateParams` message 3. Add a query to read the current params 4. Store the params and authority in your keeper 5. Check the authority in `MsgServer` before writing new params ### state.proto The relevant addition in [`state.proto`](https://github.com/cosmos/example/blob/main/proto/example/counter/v1/state.proto) is: ```proto theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} message Params { uint64 max_add_value = 1; repeated cosmos.base.v1beta1.Coin add_cost = 2 [ (gogoproto.nullable) = false, (gogoproto.castrepeated) = "github.com/cosmos/cosmos-sdk/types.Coins", (amino.dont_omitempty) = true ]; } ``` `MaxAddValue` caps how much a single `Add` call can increment the counter. `AddCost` sets an optional fee charged for each add operation. ### tx.proto - UpdateParams The relevant addition in [`tx.proto`](https://github.com/cosmos/example/blob/main/proto/example/counter/v1/tx.proto) is: ```proto theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} rpc UpdateParams(MsgUpdateParams) returns (MsgUpdateParamsResponse); message MsgUpdateParams { option (cosmos.msg.v1.signer) = "authority"; string authority = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"]; Params params = 2 [(gogoproto.nullable) = false]; } message MsgUpdateParamsResponse {} ``` `UpdateParams` is a privileged message. Only the `authority` address can call it. By default that address is the governance module account, so params can only be changed through a governance proposal. 
### query.proto - Params [`query.proto`](https://github.com/cosmos/example/blob/main/proto/example/counter/v1/query.proto) adds a second query to expose the current params: ```proto theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} rpc Params(QueryParamsRequest) returns (QueryParamsResponse); ``` ### The authority pattern The keeper stores the authority address and checks it on every `UpdateParams` call: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type Keeper struct { // ... // authority is the address capable of executing a MsgUpdateParams message. // Typically, this should be the x/gov module account. authority string } ``` ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // msg_server.go func (m msgServer) UpdateParams(ctx context.Context, msg *types.MsgUpdateParams) (*types.MsgUpdateParamsResponse, error) { if m.authority != msg.Authority { return nil, sdkerrors.Wrapf(govtypes.ErrInvalidSigner, "invalid authority; expected %s, got %s", m.authority, msg.Authority) } return &types.MsgUpdateParamsResponse{}, m.SetParams(ctx, msg.Params) } ``` The authority defaults to the governance module account at keeper construction: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} authority: authtypes.NewModuleAddress(govtypes.ModuleName).String(), ``` This pattern, storing authority in the keeper and checking it in `MsgServer`, is the standard Cosmos SDK approach to governance-gated configuration. ## Expected keepers and fee collection This section shows the standard Cosmos SDK pattern for [module-to-module interaction](/sdk/latest/learn/concepts/modules#inter-module-access). `x/counter` uses an expected keeper to call into the bank module and charge a fee for each add operation. 
### Where the code lives * [`x/counter/types/expected_keepers.go`](https://github.com/cosmos/example/blob/main/x/counter/types/expected_keepers.go) defines the narrow bank keeper interface * [`x/counter/keeper/keeper.go`](https://github.com/cosmos/example/blob/main/x/counter/keeper/keeper.go) stores the bank keeper dependency and charges the fee in `AddCount` * [`app.go`](https://github.com/cosmos/example/blob/main/app.go) passes `app.BankKeeper` into `counterkeeper.NewKeeper` * [`app.go`](https://github.com/cosmos/example/blob/main/app.go) adds a module account entry so the counter module can receive fees ### app.go changes This feature requires two `app.go` changes: * add `countertypes.ModuleName: nil` to `maccPerms` * pass `app.BankKeeper` into `counterkeeper.NewKeeper(...)` In [`app.go`](https://github.com/cosmos/example/blob/main/app.go), those changes look like this: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} maccPerms = map[string][]string{ // ... countertypes.ModuleName: nil, } ``` ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} app.CounterKeeper = counterkeeper.NewKeeper( runtime.NewKVStoreService(keys[countertypes.StoreKey]), appCodec, app.BankKeeper, ) ``` ### Try it Submit an add transaction and the configured `AddCost` fee will be charged from the sender: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} exampled tx counter add 5 --from alice --chain-id demo --yes ``` ### Add this to your module To add fee collection through the bank module, make these changes: 1. Define a narrow bank keeper interface in `types/expected_keepers.go` 2. Add a `bankKeeper` field to your keeper 3. Charge the fee inside your keeper business logic 4. Add a module account entry in `maccPerms` 5. 
Pass `app.BankKeeper` into your keeper constructor in `app.go` ### expected\_keepers.go Rather than importing the bank module directly, the counter module defines the minimal interface it needs: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // x/counter/types/expected_keepers.go type BankKeeper interface { SendCoinsFromAccountToModule(ctx context.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins) error } ``` This keeps the dependency explicit and narrow. The counter module cannot accidentally call any other bank method. ### Keeper struct ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type Keeper struct { Schema collections.Schema counter collections.Item[uint64] params collections.Item[types.Params] bankKeeper types.BankKeeper authority string } ``` ### Fee charging in AddCount ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (k *Keeper) AddCount(ctx context.Context, sender string, amount uint64) (uint64, error) { if amount >= math.MaxUint64 { return 0, ErrNumTooLarge } params, err := k.GetParams(ctx) if err != nil { return 0, err } if params.MaxAddValue > 0 && amount > params.MaxAddValue { return 0, ErrExceedsMaxAdd } if !params.AddCost.IsZero() { senderAddr, err := sdk.AccAddressFromBech32(sender) if err != nil { return 0, err } if err := k.bankKeeper.SendCoinsFromAccountToModule(ctx, senderAddr, types.ModuleName, params.AddCost); err != nil { return 0, sdkerrors.Wrap(ErrInsufficientFunds, err.Error()) } } count, err := k.GetCount(ctx) if err != nil { return 0, err } newCount := count + amount if err := k.counter.Set(ctx, newCount); err != nil { return 0, err } sdkCtx := sdk.UnwrapSDKContext(ctx) sdkCtx.EventManager().EmitEvent( sdk.NewEvent( "count_increased", sdk.NewAttribute("count", fmt.Sprintf("%v", newCount)), ), ) countMetric.Add(ctx, int64(amount)) return newCount, nil } ``` All the 
business logic (validation, fee charging, state mutation, events, and telemetry) lives in `AddCount`. The `MsgServer` stays thin:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (m msgServer) Add(ctx context.Context, req *types.MsgAddRequest) (*types.MsgAddResponse, error) {
	newCount, err := m.AddCount(ctx, req.GetSender(), req.GetAdd())
	if err != nil {
		return nil, err
	}
	return &types.MsgAddResponse{UpdatedCount: newCount}, nil
}
```

Because `AddCount` is a named keeper method, it can also be called from `BeginBlock`, governance hooks, or other modules, not just from the `MsgServer`.

### Module accounts

A module account is an on-chain account owned by a module instead of a user. Modules use module accounts to hold funds, receive fees, or get special permissions like minting or burning. Because `x/counter` receives fees from users, it needs an entry in the `maccPerms` map in [`app.go`](https://github.com/cosmos/example/blob/main/app.go):

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
maccPerms = map[string][]string{
	// ...
	countertypes.ModuleName: nil,
}
```

Here, `nil` means the module account can receive funds but does not get extra permissions like minting or burning.

## Sentinel errors

Rather than returning generic errors, `x/counter` defines named sentinel errors with registered codes. That makes failures easier to understand and easier for clients to match on programmatically.
### Where the code lives

* [`x/counter/keeper/errors.go`](https://github.com/cosmos/example/blob/main/x/counter/keeper/errors.go) defines the registered module errors
* [`x/counter/keeper/keeper.go`](https://github.com/cosmos/example/blob/main/x/counter/keeper/keeper.go) returns those errors from business logic checks

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// keeper/errors.go
var (
	ErrNumTooLarge       = errors.Register("counter", 2, "requested integer to add is too large")
	ErrExceedsMaxAdd     = errors.Register("counter", 3, "add value exceeds max allowed")
	ErrInsufficientFunds = errors.Register("counter", 4, "insufficient funds to pay add cost")
)
```

Registered errors produce structured error responses on-chain that clients can match against by code, not just by string. Each error code must be unique within the module's codespace and greater than `1`, since code `1` is reserved for internal SDK errors; by convention, module error codes start at `2`.

To check whether an error is a specific sentinel, use `errors.Is(err, ErrInsufficientFunds)`; this works even when the error has been wrapped with additional context via `errorsmod.Wrap` or `errorsmod.Wrapf`.

All validation, both stateless field checks and stateful business logic checks, should live in the `msgServer` method or the keeper function it calls. The older `ValidateBasic` method on message types is deprecated: prefer performing all validation inside the message server. If your message type does implement `ValidateBasic`, the SDK still calls it for backward compatibility, but new modules should not rely on it.

## Telemetry

[Telemetry](/sdk/latest/guides/testing/telemetry) records how often the counter is updated so you can observe module activity in an OpenTelemetry-compatible system.
### Where the code lives * [`x/counter/keeper/telemetry.go`](https://github.com/cosmos/example/blob/main/x/counter/keeper/telemetry.go) defines the meter and counter metric * [`x/counter/keeper/keeper.go`](https://github.com/cosmos/example/blob/main/x/counter/keeper/keeper.go) records the metric from `AddCount` ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // x/counter/keeper/telemetry.go var ( meter = otel.Meter("github.com/cosmos/example/x/counter") countMetric metric.Int64Counter ) func init() { var err error countMetric, err = meter.Int64Counter("count") if err != nil { panic(err) } } ``` `countMetric.Add(ctx, int64(amount))` in `AddCount` increments an OpenTelemetry counter every time the module state is updated. This makes module activity visible in any OTel-compatible observability system. ## AutoCLI [AutoCLI](/sdk/latest/guides/tooling/autocli) exposes the module's queries and transactions as CLI commands. The full module example keeps the same basic AutoCLI setup as the minimal module and adds the recommended setting for custom command integration. ### Where the code lives * [`x/counter/autocli.go`](https://github.com/cosmos/example/blob/main/x/counter/autocli.go) defines the generated query and tx commands ### Try it These commands come from the AutoCLI configuration. `count` and `add` are customized explicitly in `autocli.go`, and `params` is still available from the generated query service. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} exampled query counter count exampled query counter params exampled tx counter add 5 --from alice --chain-id demo --yes ``` Both modules use AutoCLI. The only difference is that `x/counter` sets `EnhanceCustomCommand: true`, which merges any hand-written CLI commands with the auto-generated ones. Since neither module has hand-written commands, it is a no-op here, but it is a good default for fuller modules. 
The [`autocli.go`](https://github.com/cosmos/example/blob/main/x/counter/autocli.go) file in `x/counter`: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // autocli.go func (a AppModule) AutoCLIOptions() *autocliv1.ModuleOptions { return &autocliv1.ModuleOptions{ Query: &autocliv1.ServiceCommandDescriptor{ Service: "example.counter.Query", EnhanceCustomCommand: true, RpcCommandOptions: []*autocliv1.RpcCommandOptions{ {RpcMethod: "Count", Use: "count", Short: "Query the current counter value"}, }, }, Tx: &autocliv1.ServiceCommandDescriptor{ Service: "example.counter.Msg", EnhanceCustomCommand: true, RpcCommandOptions: []*autocliv1.RpcCommandOptions{ {RpcMethod: "Add", Use: "add [amount]", Short: "Add to the counter", PositionalArgs: []*autocliv1.PositionalArgDescriptor{{ProtoField: "add"}}}, }, }, } } ``` ## Simulation [Simulation](/sdk/latest/guides/testing/simulator) lets the SDK generate randomized transactions against the module during fuzz-style testing. ### Where the code lives * [`x/counter/simulation/msg_factory.go`](https://github.com/cosmos/example/blob/main/x/counter/simulation/msg_factory.go) defines how to generate random `Add` messages * [`x/counter/module.go`](https://github.com/cosmos/example/blob/main/x/counter/module.go) registers those weighted operations ### Test it You can exercise simulation through the repo's simulation test targets described in the running and testing tutorial. 
`x/counter` implements `simsx`-based simulation, which lets the SDK's simulation framework generate random `Add` transactions during fuzz testing: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // x/counter/simulation/msg_factory.go func MsgAddFactory() simsx.SimMsgFactoryFn[*types.MsgAddRequest] { return func(ctx context.Context, testData *simsx.ChainDataSource, reporter simsx.SimulationReporter) ([]simsx.SimAccount, *types.MsgAddRequest) { sender := testData.AnyAccount(reporter) if reporter.IsSkipped() { return nil, nil } r := testData.Rand() addAmount := uint64(r.Intn(100) + 1) msg := &types.MsgAddRequest{ Sender: sender.AddressBech32, Add: addAmount, } return []simsx.SimAccount{sender}, msg } } ``` `module.go` registers this factory: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (a AppModule) WeightedOperationsX(weights simsx.WeightSource, reg simsx.Registry) { reg.Add(weights.Get("msg_add", 100), simulation.MsgAddFactory()) } ``` ## BeginBlock and EndBlock These [hooks](/sdk/latest/learn/concepts/modules#block-hooks) let a module run code automatically at the start or end of every block. In `x/counter`, they are purposefully empty to demonstrate where and how these features can be added. ### Where the code lives * [`x/counter/module.go`](https://github.com/cosmos/example/blob/main/x/counter/module.go) implements `BeginBlock` and `EndBlock` * [`app.go`](https://github.com/cosmos/example/blob/main/app.go) adds the module to `SetOrderBeginBlockers` and `SetOrderEndBlockers` ### app.go changes Because the module advertises block hooks, [`app.go`](https://github.com/cosmos/example/blob/main/app.go) must include `countertypes.ModuleName` in both blocker order lists. ### Add this to your module To add begin and end blockers to your own module, make two changes: 1. Implement the hooks in `x//module.go` 2. 
Add your module name to `SetOrderBeginBlockers` and `SetOrderEndBlockers` in `app.go` `module.go` implements `HasBeginBlocker` and `HasEndBlocker`: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (a AppModule) BeginBlock(ctx context.Context) error { // optional: logic to execute at the start of every block return nil } func (a AppModule) EndBlock(ctx context.Context) error { // optional: logic to execute at the end of every block return nil } ``` In [`app.go`](https://github.com/cosmos/example/blob/main/app.go), the module is added to the blocker order lists like this: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} app.ModuleManager.SetOrderBeginBlockers( // ... countertypes.ModuleName, ) app.ModuleManager.SetOrderEndBlockers( // ... countertypes.ModuleName, ) ``` `x/counter` has no per-block logic, so both methods return nil. They exist to demonstrate the pattern: modules that need per-block execution (staking, distribution) implement real logic here. For example, a counter that auto-increments every block would call `k.AddCount(ctx, 1)` from `BeginBlock` instead of exposing a message type. ## Unit tests The full module example includes a real [test suite](/sdk/latest/learn/concepts/testing) for keeper logic, query behavior, message handling, and bank keeper interactions. ### Where the code lives * [`x/counter/keeper/keeper_test.go`](https://github.com/cosmos/example/blob/main/x/counter/keeper/keeper_test.go) * [`x/counter/keeper/msg_server_test.go`](https://github.com/cosmos/example/blob/main/x/counter/keeper/msg_server_test.go) * [`x/counter/keeper/query_server_test.go`](https://github.com/cosmos/example/blob/main/x/counter/keeper/query_server_test.go) ### Run them You can run the counter module tests directly with: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} go test ./x/counter/... 
``` ### Add this to your module Start with keeper, message server, and query server tests. If your module depends on another keeper, use a small mock interface like `MockBankKeeper` so you can control success and failure cases in isolation. `x/counter` ships a full test suite in [`x/counter/keeper/`](https://github.com/cosmos/example/tree/main/x/counter/keeper): | File | What it tests | | ---------------------- | -------------------------------------------------------------------------------------------- | | `keeper_test.go` | `KeeperTestSuite` setup, `InitGenesis`, `ExportGenesis`, `GetCount`, `AddCount`, `SetParams` | | `msg_server_test.go` | `MsgAdd`, event emission, `MsgUpdateParams` | | `query_server_test.go` | `QueryCount`, `QueryParams` | All three files share the `KeeperTestSuite` struct defined in [`keeper_test.go`](https://github.com/cosmos/example/blob/main/x/counter/keeper/keeper_test.go), which sets up an isolated in-memory store, a mock bank keeper, and a real keeper instance: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type KeeperTestSuite struct { suite.Suite ctx sdk.Context keeper *keeper.Keeper queryClient types.QueryClient msgServer types.MsgServer bankKeeper *MockBankKeeper authority string } ``` `MockBankKeeper` lets tests control exactly what the bank keeper returns without needing a real bank module: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type MockBankKeeper struct { SendCoinsFromAccountToModuleFn func(ctx context.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins) error } ``` Tests set `SendCoinsFromAccountToModuleFn` to simulate success or failure: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} s.bankKeeper.SendCoinsFromAccountToModuleFn = func(...) 
error { return errors.New("insufficient funds") } ``` ## Gas `minimum-gas-prices` in `app.toml` sets the minimum fee a node requires before it will accept and relay a transaction. The local dev chain started by `make start` leaves this empty, so transactions are accepted with no fee beyond the `AddCost` module parameter. To require a minimum network fee, set it in `app.toml`: ```toml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} minimum-gas-prices = "0.025stake" ``` Transactions that don't meet the minimum will be rejected by the node before they reach your module. This is a per-node setting, not a chain-wide consensus rule, so validators on a live network each configure their own threshold. Next: [Running and Testing →](/sdk/latest/tutorials/example/05-run-and-test) # Run, Test, and Configure Source: https://docs.cosmos.network/sdk/latest/tutorials/example/05-run-and-test Learn how to run and test a chain Now that you've [built a module from scratch](/sdk/latest/tutorials/example/03-build-a-module) and walked through the [full counter module](/sdk/latest/tutorials/example/04-counter-walkthrough), the next step is learning the workflow for running and validating a production-ready chain. This page shows how to start the chain locally, interact with it through the CLI, and use the main layers of testing before shipping changes. ## Single-node local chain Use a single-node chain for the fastest local development loop. It gives you one validator with predictable state so you can quickly test queries and transactions. ### Start ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} make start ``` This builds the binary, initializes chain data, and starts a single validator node. It handles cleanup automatically — existing chain state is reset on each run. 
The chain uses: * Chain ID: `demo` * Pre-funded accounts: `alice`, `bob` * Default denomination: `stake` ### Stop Press `Ctrl+C` in the terminal running `make start`. ### Reset chain state ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} make start ``` Re-running `make start` resets state automatically. There is no separate reset command. ## Localnet (multi-validator) Use localnet when you want a setup that is closer to a real network. It runs multiple validators in Docker so you can test multi-node behavior locally. For a multi-validator setup using Docker: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} # Initialize localnet configuration make localnet-init # Start all validators make localnet-start # View logs make localnet-logs # Stop make localnet-stop # Clean all localnet data make localnet-clean ``` ## CLI reference Once the chain is running, these are the core [CLI](/sdk/latest/learn/concepts/cli-grpc-rest#cli) commands you'll use to inspect state and submit transactions. ### Query commands Use query commands to read module state without changing anything on-chain. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} # Query the current counter value exampled query counter count # Query the module parameters exampled query counter params # Query with a specific node (if not using default localhost:26657) exampled query counter count --node tcp://localhost:26657 ``` ### Transaction commands Use transaction commands to submit state-changing messages to the chain. 
```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} # Add to the counter exampled tx counter add 10 --from alice --chain-id demo --yes # Add with a gas limit exampled tx counter add 10 --from alice --chain-id demo --gas 200000 --yes # Update module parameters (requires governance authority) exampled tx counter update-params --from alice --chain-id demo --yes ``` ### Useful flags These flags are the ones you'll use most often while iterating locally. | Flag | Description | | --------------- | ----------------------------------------------- | | `--from` | Key name or address to sign with | | `--chain-id` | Chain ID (use `demo` for local) | | `--yes` | Skip confirmation prompt | | `--gas` | Gas limit for the transaction | | `--node` | RPC endpoint (default: `tcp://localhost:26657`) | | `--output json` | Output response as JSON | ## Node Configuration When you run `make start`, the chain creates `~/.exampleapp/config/` automatically and initializes two config files inside it: | File | What it controls | | ------------- | -------------------------------------------------------------------------- | | `app.toml` | SDK application settings: gas prices, pruning, API/gRPC servers, telemetry | | `config.toml` | CometBFT settings: peer networking, consensus timeouts, mempool, RPC | ### app.toml The most common settings to change during development: | Setting | Default | Description | | -------------------- | ----------- | -------------------------------------------------------------------------------- | | `minimum-gas-prices` | `"0stake"` | Minimum fee the node accepts before processing a transaction | | `pruning` | `"default"` | How much historical state to keep (`default`, `nothing`, `everything`, `custom`) | | `api.enable` | `true` | Enables the REST API on port 1317 | | `grpc.enable` | `true` | Enables the gRPC server on port 9090 | ### config.toml The settings most likely to change during development: | Setting | Default | 
Description | | -------------------------- | -------- | ------------------------------------------------------------------------ | | `moniker` | `"test"` | Human-readable name for the node | | `log_level` | `"info"` | Log verbosity (`debug`, `info`, `error`) | | `consensus.timeout_commit` | `"5s"` | How long to wait after a block is committed before starting the next one | | `p2p.seeds` | `""` | Seed nodes to connect to on a live network | | `p2p.persistent_peers` | `""` | Peers to maintain permanent connections to | ## Unit tests Start here when you want fast feedback on module logic without running a chain. These tests isolate the [keeper](/sdk/latest/learn/concepts/testing#keeper-unit-tests) and gRPC servers from the rest of the app. The unit test logic lives in the counter keeper package on `main`: the shared suite setup is in [x/counter/keeper/keeper\_test.go](https://github.com/cosmos/example/blob/main/x/counter/keeper/keeper_test.go), message-path tests are in [x/counter/keeper/msg\_server\_test.go](https://github.com/cosmos/example/blob/main/x/counter/keeper/msg_server_test.go), and query-path tests are in [x/counter/keeper/query\_server\_test.go](https://github.com/cosmos/example/blob/main/x/counter/keeper/query_server_test.go). The keeper test suite covers the keeper, msg server, and query server in isolation using an in-memory store and a mock bank keeper. No running chain is required. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} go test ./x/counter/... ``` To run with verbose output: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} go test -v ./x/counter/... ``` To run a specific test: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} go test -v -run TestKeeperTestSuite/TestAddCount ./x/counter/... 
``` The test suite is structured around three files: | File | Tests | | ----------------------------- | -------------------------------------------- | | `keeper/keeper_test.go` | Genesis, `GetCount`, `AddCount`, `SetParams` | | `keeper/msg_server_test.go` | `MsgAdd`, event emission, `MsgUpdateParams` | | `keeper/query_server_test.go` | `QueryCount`, `QueryParams` | ## E2E tests Run [E2E tests](/sdk/latest/learn/concepts/testing#integration-tests) when you want to verify the full request path against a real node. They give you higher confidence than unit tests, but take longer to complete. The E2E logic lives on `main` in [tests/counter\_test.go](https://github.com/cosmos/example/blob/main/tests/counter_test.go), which starts an in-process network, builds signed transactions, and verifies query results. The shared network fixture it uses is defined in [tests/test\_helpers.go](https://github.com/cosmos/example/blob/main/tests/test_helpers.go). The E2E test suite starts a real in-process validator network and submits actual transactions against it. This tests the full stack: transaction encoding, message routing, keeper logic, and query responses. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} go test -v -run TestE2ETestSuite ./tests/... ``` E2E tests take longer than unit tests because they spin up a real node. Run them before merging significant changes. ## Simulation tests [Simulation tests](/sdk/latest/learn/concepts/testing#simulation-tests) stress the chain with randomized activity to catch edge cases that targeted tests can miss. In this repo, that simulation flow is built with `simsx`, the Cosmos SDK's higher-level simulation framework for defining random on-chain activity at the module level. The top-level simulation test commands on `main` run through [sim\_test.go](https://github.com/cosmos/example/blob/main/sim_test.go). 
The counter module's `simsx` registration lives in [x/counter/module.go](https://github.com/cosmos/example/blob/main/x/counter/module.go), the random `MsgAdd` generation lives in [x/counter/simulation/msg\_factory.go](https://github.com/cosmos/example/blob/main/x/counter/simulation/msg_factory.go), and randomized counter genesis lives in [x/counter/simulation/genesis.go](https://github.com/cosmos/example/blob/main/x/counter/simulation/genesis.go). In practice, `simsx` lets each module describe three things: how to generate random starting state, which operations can happen during simulation, and how often each operation should be chosen. For `x/counter`, that means generating a random initial counter value, registering `MsgAdd` as a simulation operation, and assigning it a weight so the simulator knows how frequently to try it relative to other module operations. When you run a simulation target, the test harness repeatedly builds app instances, creates random accounts and balances, generates random transactions from the registered module operations, and executes them over many blocks. That makes `simsx` useful for catching issues that are hard to cover with hand-written tests, like state machine bugs, unexpected panics, invariant violations, and non-deterministic behavior across runs. Simulation runs the chain with randomly generated transactions to detect non-determinism and invariant violations. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} # Full simulation make test-sim-full # Determinism check make test-sim-determinism # All simulation tests make test-sim ``` Simulation requires the `sims` build tag, which the Makefile targets handle automatically. ## Lint Linting is the quickest way to catch style problems and common code-quality issues before CI or code review does. 
The lint commands are defined in the repo [Makefile](https://github.com/cosmos/example/blob/main/Makefile), which installs `golangci-lint` and runs it across the full module tree. ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} make lint ``` To auto-fix issues where possible: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} make lint-fix ``` ## Test summary Use this table as a quick reference for choosing the right validation command for the kind of change you made. | Command | What it validates | | ------------------------------------------- | ---------------------------------------------- | | `go test ./x/counter/...` | Keeper, MsgServer, QueryServer in isolation | | `go test -run TestE2ETestSuite ./tests/...` | Full transaction and query flow on a live node | | `make test-sim-full` | Non-determinism and invariant violations | | `make lint` | Code style and static analysis | # v0.54 Release Notes Source: https://docs.cosmos.network/sdk/latest/upgrade/release What's new in the latest Cosmos SDK release, including performance improvements, new features, and removals. If you are upgrading to v0.54, see the [upgrade guide](/sdk/latest/upgrade/upgrade). For a full list of changes, see the [changelog](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/CHANGELOG.md). ## Overview This release introduces order-of-magnitude improvements to network stability and throughput. In testing, we were able to sustain 1K TPS on a variety of network configurations with no degradation in block time, whereas previously block production would have slowed or halted almost immediately above 200 TPS.
This is made possible through two critical performance improvements targeting different layers of the stack: * **Parallel transactions (BlockSTM)**: When applied to blocks containing fully parallelizable transactions, BlockSTM shows 5-10x improvements in execution time, depending on the available CPUs, the size of the blocks, and the types of transactions being run. We have modified the underlying implementations of Cosmos bank sends and EVM native sends to ensure they are parallelizable, so you will benefit from speedups of these transactions immediately. It is possible to do the same for other common kinds of Cosmos transactions (e.g. governance, staking, auth), but we haven’t optimized them yet. Custom transaction types and EVM smart contracts may similarly require implementation modifications to benefit from parallelization. See our guide [here](/sdk/latest/experimental/blockstm) for more information. * **Enhanced Networking (LibP2P):** The libp2p-based reactor implementation outperforms Comet’s existing p2p implementation on latency benchmarks across a variety of workloads, reducing p99 latency by a factor of 100, and up to 1000 in some cases. libp2p is an industry standard for peer-to-peer data exchange. Under the hood, it leverages QUIC, a modern low-latency UDP-based communication protocol. At this time, libp2p is intended for centrally managed Cosmos networks, as peer exchange and upgradeability from Comet’s networking stack are not supported yet. Please reach out if you are interested in testing libp2p in devnet or testnet environments and potentially contributing to these improvements. We want to work closely with teams to gather feedback. See the [LibP2P guide](/cometbft/latest/docs/experimental/lib-p2p) for more information. ## Additional Features 1. **AdaptiveSync** helps nodes catch up when they fall behind by letting consensus and blocksync work simultaneously.
During traffic spikes or short block times, this keeps nodes progressing with the network while preserving normal consensus safety and finality behavior. This is especially valuable for RPC-heavy nodes. See the [block sync guide](/cometbft/latest/docs/core/block-sync#adaptivesync) for more information. 2. **Log/v2** supports the transition of the Cosmos SDK’s observability to OpenTelemetry, enabling automatic trace correlation across all log output (shown via the logged keys `trace_id`, `span_id`, and `trace_flags`, if a span is present in the `ctx`). This is powered by four new required contextual logging methods on the `Logger` interface (`InfoContext`, `WarnContext`, etc.). Additionally, a new `MultiLogger` allows fanning out to multiple logging backends simultaneously, which the server now uses automatically when OpenTelemetry is configured. See the [logging guide](/sdk/latest/guides/testing/log) and [telemetry guide](/sdk/latest/guides/testing/telemetry) for more information. 3. **IBC General Message Passing (GMP)**: General Message Passing in IBC enables calling arbitrary smart contracts on remote networks. Unlike Interchain Accounts, the caller does not need to own an account on the destination chain (though it is general enough to support this usage pattern). Instead, GMP directly calls contracts on the destination chain. This makes it especially useful for implementing mint/burn bridges (see [below](#upcoming-features-available-soon-in-minor-releases) for more details). ## Enterprise Features The following features are released as part of [Cosmos Enterprise](/enterprise/overview): 1. The **Groups module** enables on-chain multisig and collective decision-making for any set of accounts. Groups are formed with weighted members and one or more configurable decision policies that define how proposals pass. Members submit proposals containing arbitrary SDK messages, vote, and any account can trigger execution once a proposal is accepted.
Two built-in decision policies are included: threshold (absolute weighted vote count) and percentage (proportion of YES votes), each with configurable voting and minimum execution periods. The decision policy interface supports custom extensions. See the [Groups module docs](/enterprise/components/group/overview) for more information. 2. The **POA module** provides an admin-managed validator set as a drop-in replacement for the staking, distribution, and slashing modules. Purpose-built for institutional deployments run by a known set of operators, it offers a streamlined validator lifecycle with no native token required. Fee distribution to validators and full governance compatibility are included out of the box. See the [POA module docs](/enterprise/components/poa/overview) for more information. ## Upcoming Features (Available soon in Minor Releases) 1. **Krakatoa mempool (Cosmos EVM only)**: This mempool significantly improves transaction throughput and network stability by making the comet mempool stateless and introducing two new concurrent ABCI methods for transaction processing (`reapTxs` and `insertTx`). The upshot is that transaction processing is more concurrent and more lightweight, resulting in performance and stability gains. This will be available for Cosmos EVM chains at the end of April. 2. **Interchain Fungible Token Standard (IFT):** This is a more modern and flexible approach to token transfers in IBC compared to ICS20 that enables mint/burn based bridging. IFT decouples the contract or module that mints a token from the IBC channel. Importantly, this allows token issuers to establish canonical, owned deployments of their tokens on any networks they choose and manage cross-chain mints/burns with IBC, rather than using “wrapped” tokens that they cannot control. It also allows a single token to support fungibility over multiple IBC paths and to upgrade/change the IBC connection in the background without worrying about the “token path” changing. 
This is coming shortly to ibc-go, ibc-solidity, and ibc-sol. 3. **IBC support for any EVM network:** IBC functionality will extend directly to any EVM network as a collection of Solidity contracts that implement IBC Eureka. This will enable direct IBC connectivity without requiring any modifications to the EVM chain. This means Ethereum, Base, Arbitrum, Optimism, and other EVM networks can participate directly in IBC transfers. Combined with IFT, token issuers can manage canonical token deployments across Cosmos and any number of EVM chains from a single source of truth. 4. **IBC support for Solana:** Similar to EVM support, IBC connectivity will extend to Solana with a native program implementation. This will allow Solana to participate directly in IBC transfers with Cosmos and EVM chains, enabling cross-ecosystem token movement without wrapped tokens or intermediary chains. 5. **IBC v2 relayer:** A standalone, production-ready, request-driven relayer service for the IBC v2 protocol. This relayer will support interoperating between a Cosmos-based chain and major EVM networks (Ethereum, Base, Optimism, Arbitrum, Polygon, and more). Operators submit a source transaction hash and can track each packet's status in real time, from submission through relay completion, with full retry and failure recovery handled automatically. ## Removals The following features have been removed from this release family: * **ibc-apps/async-icq:** We have never had official support for ibc-apps/async-icq middleware. This is us just stating this explicitly. We will not be updating it as a part of this release or going forward. We will not be testing its compatibility with IBC-go v11.0.0. * **ibc-apps/pfm (packet forwarding middleware):** We have never had official support for PFM, but historically, we did update it and make a best effort to ensure compatibility with IBC during previous release cycles. We will not be doing that as a part of this release or going forward.
Instead, we are upstreaming PFM into IBC-Go to streamline our support. We will guarantee equivalent functionality and APIs as part of this migration. The upstreamed version will be available for you to migrate to in IBC-go v11.1.0, which we are planning to release towards the end of April 2026. * **ibc-apps/rate-limits:** We have never had official support for ibc-apps/rate-limits middleware, but historically, we did update it and make a best effort to ensure compatibility with IBC during previous release cycles. We will not be doing that as a part of this release or going forward. Instead, we are upstreaming the rate-limiting middleware into IBC-Go to streamline our support. We will guarantee equivalent functionality and APIs as part of this migration. The upstreamed version will be available for you to migrate to in IBC-go v11.2.0, which we are planning to release in the first weeks of May 2026. * **ibc-apps/ibc-hooks:** We have never had official support for ibc-apps/ibc-hooks middleware, but historically, we did update it and make a best effort to ensure compatibility with IBC during previous release cycles. We will not be doing that as a part of this release or going forward. Instead, we are introducing and will maintain a new `callbacks` middleware that enables calling CosmWasm contracts (like ibc-hooks) as well as Cosmos modules and EVM contracts when processing ICS20 packets. We are working to ensure the upcoming wasmd release will enable CosmWasm contracts to adopt this without changing contract interfaces. # v0.54 Upgrade Guide Source: https://docs.cosmos.network/sdk/latest/upgrade/upgrade Reference for upgrading from v0.53 to v0.54 of Cosmos SDK This document provides a reference for upgrading from `v0.53.x` to `v0.54.x` of Cosmos SDK. However, this guide is not exhaustive for all breaking changes. For a comprehensive list of all breaking changes in v0.54.0, see the [Changelog](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/CHANGELOG.md).
Always read the [App Wiring Changes](#app-wiring-changes) section for more information on application wiring updates. ## Table of Contents * [Upgrade Checklist](#upgrade-checklist) * [Required Changes](#required-changes) * [App Wiring Changes](#app-wiring-changes) * [x/gov](#x/gov) * [Keeper Initialization](#keeper-initialization) * [GovHooks Interface](#govhooks-interface) * [x/epochs](#x/epochs) * [x/bank](#x/bank) * [NodeService](#nodeservice) * [Removed Go Modules](#removed-go-modules) * [Renamed Go Modules](#renamed-go-modules) * [Module Version Updates](#module-version-updates) * [Log v2](#log-v2) * [Store v2](#store-v2) * [Conditional Changes](#conditional-changes) * [Module Deprecations](#module-deprecations) * [x/circuit](#x/circuit) * [x/nft](#x/nft) * [x/crisis](#x/crisis) * [Cosmos Enterprise](#cosmos-enterprise) * [Groups Module](#groups-module) * [PoA Module](#poa-module) * [New Features and Non-Breaking Changes](#new-features-and-non-breaking-changes) * [Telemetry](#telemetry) * [OpenTelemetry](#opentelemetry) * [Centralized Authority via Consensus Params](#centralized-authority-via-consensus-params) * [How AuthorityParams Works](#how-authorityparams-works) * [Upgrade Handler](#upgrade-handler) * [IBC v11 Updates](#ibc-v11-updates) * [Cosmos Performance Upgrades (Experimental)](#cosmos-performance-upgrades-experimental) * [Cosmos SDK](#cosmos-sdk) * [BlockSTM](#blockstm) * [CometBFT v0.39 Updates](#cometbft-v039-updates) * [LibP2P](#libp2p) * [`AdaptiveSync`](#adaptivesync) ## Upgrade Checklist Use this checklist first, then read the linked sections for the exact code or wiring changes. * [ ] Update `x/gov` keeper wiring, as the `x/gov` module has been decoupled from `x/staking`. See [Keeper Initialization](#keeper-initialization). * [ ] Update your governance hooks if you implement `AfterProposalSubmission`. See [GovHooks Interface](#govhooks-interface). * [ ] Update `x/epochs.NewAppModule` if your app includes `x/epochs`. See [x/epochs](#x/epochs). 
* [ ] Put `x/bank` first in `SetOrderEndBlockers`. See [x/bank](#x/bank). * [ ] Update your node service registration if your app exposes `NodeService`. See [NodeService](#nodeservice). * [ ] Migrate imports for removed `x/` Go modules. See [Removed Go Modules](#removed-go-modules). * [ ] Update required Cosmos SDK Go module dependencies. See [Module Version Updates](#module-version-updates). * [ ] Migrate to `contrib/` imports if you use `x/circuit`, `x/nft`, or `x/crisis`. See [Module Deprecations](#module-deprecations). * [ ] Migrate to Cosmos Enterprise if you use the `x/group` module. See [Groups Module](#groups-module). * [ ] Update imports to `cosmossdk.io/log/v2` if your app imports the log package directly. See [Log v2](#log-v2). * [ ] Migrate imports to `github.com/cosmos/cosmos-sdk/store/v2`. See [Store v2](#store-v2). * [ ] Migrate any remaining `BaseApp.NewUncachedContext()` usage. See [Store v2](#store-v2). * [ ] If you use `systemtests`, update the import to `github.com/cosmos/cosmos-sdk/tools/systemtests`. See [Renamed Go Modules](#renamed-go-modules). * [ ] Review [IBC v11 Updates](#ibc-v11-updates) if your chain uses IBC. Several APIs have been removed. * [ ] Review [Centralized Authority via Consensus Params](#centralized-authority-via-consensus-params). No upgrade action is required to keep using per-keeper authorities. * [ ] Review [Telemetry](#telemetry). No upgrade action is required to keep existing telemetry wiring, but upgrading to OpenTelemetry is strongly encouraged. * [ ] Review [PoA Module](#poa-module) if you are interested in adopting the new Cosmos Enterprise Proof of Authority module. * [ ] Review [Cosmos Performance Upgrades (Experimental)](#cosmos-performance-upgrades-experimental) if you are interested in experimenting with BlockSTM, LibP2P, or AdaptiveSync. ## Required Changes All chains upgrading to `v0.54.x` should review and apply the changes in this section. This guide provides an overview of the major changes in v0.54.0.
However, this guide is not exhaustive for all breaking changes. For a comprehensive list of all breaking changes in v0.54.0, see the [Changelog](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/CHANGELOG.md). ### App Wiring Changes #### x/gov #### Keeper Initialization The `x/gov` module has been decoupled from `x/staking`. The `keeper.NewKeeper` constructor now requires a `CalculateVoteResultsAndVotingPowerFn` parameter instead of a `StakingKeeper`. **Before:** ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} govKeeper := govkeeper.NewKeeper( appCodec, runtime.NewKVStoreService(keys[govtypes.StoreKey]), app.AccountKeeper, app.BankKeeper, app.StakingKeeper, // REMOVED IN v0.54 app.DistrKeeper, app.MsgServiceRouter(), govConfig, authtypes.NewModuleAddress(govtypes.ModuleName).String(), ) ``` **After:** ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} govKeeper := govkeeper.NewKeeper( appCodec, runtime.NewKVStoreService(keys[govtypes.StoreKey]), app.AccountKeeper, app.BankKeeper, app.DistrKeeper, app.MsgServiceRouter(), govConfig, authtypes.NewModuleAddress(govtypes.ModuleName).String(), govkeeper.NewDefaultCalculateVoteResultsAndVotingPower(app.StakingKeeper), // ADDED IN v0.54 ) ``` For applications using depinject, the governance module now accepts an optional `CalculateVoteResultsAndVotingPowerFn`. If not provided, it will use the `StakingKeeper` (also optional) to create the default function. #### GovHooks Interface The `AfterProposalSubmission` hook now includes the proposer address as a parameter. 
**Before:** ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (h MyGovHooks) AfterProposalSubmission(ctx context.Context, proposalID uint64) error { // implementation } ``` **After:** ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (h MyGovHooks) AfterProposalSubmission(ctx context.Context, proposalID uint64, proposerAddr sdk.AccAddress) error { // implementation } ``` #### x/epochs The epochs module's `NewAppModule` function now requires the epoch keeper by pointer instead of value, fixing a bug related to setting hooks via depinject. #### x/bank The bank module now contains an `EndBlock` method to support the new BlockSTM experimental package. BlockSTM requires coordinating object store access across parallel execution workers, and `x/bank`'s `EndBlock` handles the finalization step for that. **All applications must make this change**, whether or not they enable BlockSTM, because the `EndBlock` registration is now part of the module's standard lifecycle. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} app.ModuleManager.SetOrderEndBlockers( banktypes.ModuleName, // other modules... ) ``` #### NodeService The node service has been updated to return the node's earliest store height in the `Status` query. Please update your registration with the following code (make sure you are already updated to `github.com/cosmos/cosmos-sdk/store/v2`): ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (app *SimApp) RegisterNodeService(clientCtx client.Context, cfg config.Config) { nodeservice.RegisterNodeService(clientCtx, app.GRPCQueryRouter(), cfg, func() int64 { return app.CommitMultiStore().EarliestVersion() }) } ``` ### Removed Go Modules Most `cosmossdk.io` vanity URLs for modules under `x/` have been removed. 
These separate Go modules caused dependency version management to be unpredictable; different modules could be pinned to different SDK versions, leading to compatibility issues. Consolidating everything under `github.com/cosmos/cosmos-sdk` gives developers a single, versioned dependency to manage. The following must be updated: * `cosmossdk.io/x/evidence` -> `github.com/cosmos/cosmos-sdk/x/evidence` * `cosmossdk.io/x/feegrant` -> `github.com/cosmos/cosmos-sdk/x/feegrant` * `cosmossdk.io/x/upgrade` -> `github.com/cosmos/cosmos-sdk/x/upgrade` * `cosmossdk.io/x/tx` -> `github.com/cosmos/cosmos-sdk/x/tx` ### Renamed Go Modules The `cosmossdk.io/systemtests` go module is now named `github.com/cosmos/cosmos-sdk/tools/systemtests`. ### Module Version Updates * `cosmossdk.io/client/v2` has been updated to v2.11.0 ### Log v2 The log package has been updated to `v2`. Applications using v0.54.0+ of Cosmos SDK will be required to update imports to `cosmossdk.io/log/v2`. Usage of the logger itself does not need to be updated. The v2 release of log adds contextual methods to the logger interface (InfoContext, DebugContext, etc.), allowing logs to be correlated with OpenTelemetry traces. To learn more about the new features offered in `log/v2`, as well as setting up log correlation, see the [log package documentation](https://docs.cosmos.network/sdk/latest/guides/testing/log). ### Store v2 Store v2 introduces breaking changes. For a comprehensive list of all breaking changes, see the [Changelog](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/CHANGELOG.md). The store package has been updated to `v2`. Applications using v0.54.0+ of Cosmos SDK will be required to update imports to `github.com/cosmos/cosmos-sdk/store/v2`. `BaseApp.NewUncachedContext()` was deprecated as part of this work. With store v2, writes must go through a cache/branch first; the SDK no longer exposes a helper that lets applications write directly against the root `CommitMultiStore`. 
If you previously used `BaseApp.NewUncachedContext()` in tests: * Replace `app.NewUncachedContext(false, header)` with `app.NewNextBlockContext(header)` when the test needs a writable context between `Commit()` and the next `FinalizeBlock()`. * Replace `app.NewUncachedContext(true, header)` with `app.NewContext(true)` or `app.NewContextLegacy(true, header)` when the test only needs the `CheckTx` state. Below is an example of migrating away from `NewUncachedContext`. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func TestApp(t *testing.T) { db := dbm.NewMemDB() logger := log.NewTestLogger(t) app := NewSimappWithCustomOptions(t, false, SetupOptions{ Logger: logger.With("instance", "first"), DB: db, AppOpts: simtestutil.NewAppOptionsWithFlagHome(t.TempDir()), }) /* Before the updates, most code would look like: ctx := app.NewUncachedContext(true, cmtproto.Header{}) // CheckTx context app.MyKeeper.MyMethod(ctx, ...) The main thing to be aware of is the distinction between the CheckTx state and the FinalizeBlock state. NewNextBlockContext will overwrite the finalize state and return a context that writes to that state. Reading from checkTx state will not reflect the changes made in finalizeBlock state UNLESS you have committed. */ ctx := app.BaseApp.NewNextBlockContext(cmtproto.Header{}) // gets finalize block state app.BankKeeper.SetSendEnabled(ctx, "foobar", true) _, err := app.Commit() // commit the out-of-band changes. require.NoError(t, err) // since we committed, we can now read the out-of-band changes via checkTx state. // If we didn't commit above, we could read this value by passing `false` to NewContext, which would give us a handle // on the finalize block state. However, if you DID commit like we did above, you MUST use `true` here.
res, err := app.BankKeeper.SendEnabled(app.BaseApp.NewContext(true), &banktypes.QuerySendEnabledRequest{ Denoms: []string{"foobar"}, Pagination: nil, }) require.NoError(t, err) require.Len(t, res.SendEnabled, 1) require.Equal(t, "foobar", res.SendEnabled[0].Denom) } ``` ## Conditional Changes These changes apply if your chain uses the affected modules, packages, or integrations. ### Module Deprecations Cosmos SDK v0.54.0 drops support for the circuit, nft, and crisis modules. Developers can still use these modules; however, they will no longer be actively maintained by Cosmos Labs. #### x/circuit The circuit module is no longer being actively maintained by Cosmos Labs and was moved to `contrib/x/circuit`. #### x/nft The nft module is no longer being actively maintained by Cosmos Labs and was moved to `contrib/x/nft`. #### x/crisis The crisis module is no longer being actively maintained by Cosmos Labs and was moved to `contrib/x/crisis`. ### Cosmos Enterprise [Cosmos Enterprise](https://docs.cosmos.network/enterprise/overview) is a comprehensive suite of blockchain services and technologies to future-proof your organization's digital ledger capabilities. It combines hardened protocol modules, infrastructure components, and proactive support and enablement from the engineers building the Cosmos technology stack. Cosmos Enterprise is built for organizations that require reliability, security, and operational confidence as they scale critical blockchain infrastructure in enterprise production environments. #### Groups Module The groups module is now maintained under the Cosmos Enterprise offering. If your application uses `x/group`, you will need to migrate your code to the Enterprise-distributed package and obtain a Cosmos Enterprise license to continue using it. Please see [Cosmos Enterprise](https://docs.cosmos.network/enterprise/overview) to learn more.
#### PoA Module Cosmos SDK v0.54 includes a Proof of Authority (PoA) module under the Cosmos Enterprise offering. Please see [Cosmos Enterprise](https://docs.cosmos.network/enterprise/components/poa/overview) to learn more about using the PoA module in your application. ## New Features and Non-Breaking Changes These changes are informational and optional to adopt during the upgrade; they are not required for a successful migration. ### Telemetry The telemetry package has been deprecated and users are encouraged to switch to OpenTelemetry. #### OpenTelemetry Previously, Cosmos SDK telemetry support was provided by `github.com/hashicorp/go-metrics`, which was under-maintained and only supported metrics instrumentation. OpenTelemetry provides an integrated solution for metrics, traces, and logging that is widely adopted and actively maintained. The existing wrapper functions in the `telemetry` package required acquiring mutex locks and performing map lookups for every metric operation, which is suboptimal. OpenTelemetry's API uses atomic concurrency wherever possible and should introduce less performance overhead during metric collection. See the [telemetry documentation](https://docs.cosmos.network/sdk/latest/guides/testing/telemetry) to learn how to set up OpenTelemetry with Cosmos SDK v0.54.0+. Below is a quick reference on setting up and using meters and traces with OpenTelemetry: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} package mymodule import ( "context" "go.opentelemetry.io/otel" "go.opentelemetry.io/otel/attribute" "go.opentelemetry.io/otel/codes" "go.opentelemetry.io/otel/metric" "go.opentelemetry.io/otel/trace" sdk "github.com/cosmos/cosmos-sdk/types" ) // Declare package-level meter and tracer using otel.Meter() and otel.Tracer(). // Instruments should be created once at package initialization and reused.
var ( tracer = otel.Tracer("cosmos-sdk/x/mymodule") meter = otel.Meter("cosmos-sdk/x/mymodule") txCounter metric.Int64Counter latencyHist metric.Float64Histogram ) func init() { var err error txCounter, err = meter.Int64Counter( "mymodule.tx.count", metric.WithDescription("Number of transactions processed"), ) if err != nil { panic(err) } latencyHist, err = meter.Float64Histogram( "mymodule.tx.latency", metric.WithDescription("Transaction processing latency"), metric.WithUnit("ms"), ) if err != nil { panic(err) } } // ExampleWithContext demonstrates tracing with a standard context.Context. // Use tracer.Start directly when you have a Go context. func ExampleWithContext(ctx context.Context) error { ctx, span := tracer.Start(ctx, "ExampleWithContext", trace.WithAttributes(attribute.String("key", "value")), ) defer span.End() // Record metrics txCounter.Add(ctx, 1) if err := doWork(ctx); err != nil { span.RecordError(err) span.SetStatus(codes.Error, err.Error()) return err } return nil } // ExampleWithSDKContext demonstrates tracing with sdk.Context. // Use ctx.StartSpan to properly propagate the span through the SDK context. func ExampleWithSDKContext(ctx sdk.Context) error { ctx, span := ctx.StartSpan(tracer, "ExampleWithSDKContext", trace.WithAttributes(attribute.String("module", "mymodule")), ) defer span.End() // Record metrics (sdk.Context implements context.Context) txCounter.Add(ctx, 1) // Create child spans for sub-operations ctx, childSpan := ctx.StartSpan(tracer, "ExampleWithSDKContext.SubOperation") // ... do sub-operation work ... childSpan.End() return nil } ``` ### Centralized Authority via Consensus Params Authority management can now be centralized via the `x/consensus` module. A new `AuthorityParams` field in `ConsensusParams` stores the authority address on-chain. When set, it takes precedence over the per-keeper authority parameter. **This feature introduces no breaking changes**: Keeper constructors still accept the `authority` parameter. 
It is now used as a **fallback** when no authority is configured in consensus params. Existing code continues to work without changes. #### How AuthorityParams Works When a module validates authority (e.g., in `UpdateParams`), it checks consensus params first. If no authority is set there, it falls back to the keeper's `authority` field: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} authority := sdkCtx.Authority() // from consensus params if authority == "" { authority = k.authority // fallback to keeper field } if authority != msg.Authority { return nil, errors.Wrapf(...) } ``` To enable centralized authority, set the `AuthorityParams` in consensus params via a governance proposal targeting the `x/consensus` module's `MsgUpdateParams`. ## Upgrade Handler This section provides a reference example for implementing the on-chain upgrade itself. The following is an example upgrade handler for upgrading from **v0.53.6** to **v0.54.0**. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} const UpgradeName = "v0.53.6-to-v0.54.0" func (app SimApp) RegisterUpgradeHandlers() { app.UpgradeKeeper.SetUpgradeHandler( UpgradeName, func(ctx context.Context, _ upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { return app.ModuleManager.RunMigrations(ctx, app.Configurator(), fromVM) }, ) upgradeInfo, err := app.UpgradeKeeper.ReadUpgradeInfoFromDisk() if err != nil { panic(err) } if upgradeInfo.Name == UpgradeName && !app.UpgradeKeeper.IsSkipHeight(upgradeInfo.Height) { storeUpgrades := storetypes.StoreUpgrades{ Added: []string{}, } // configure store loader that checks if version == upgradeHeight and applies store upgrades app.SetStoreLoader(upgradetypes.UpgradeStoreLoader(upgradeInfo.Height, &storeUpgrades)) } } ``` ## IBC v11 Updates IBC v11 introduces several improvements, removes long-deprecated APIs (`ParamSubspace` from all Keeper constructors, 
`MsgSubmitMisbehaviour`, and `ibcwasmtypes.Checksums`), and adds custom address codec support in the transfer module to enable Cosmos EVM compatibility with IBC transfers. Read the [Changelog](https://github.com/cosmos/ibc-go/blob/main/CHANGELOG.md) and [v11 Migration Guide](https://docs.cosmos.network/ibc/latest/migrations/v10-to-v11) for more information. ## Cosmos Performance Upgrades (Experimental) Throughout Q1 2026, Cosmos Labs has focused on greatly improving the performance of Cosmos SDK applications. v0.54 of Cosmos SDK introduces support for several performance-related features across the stack. The SDK introduces [BlockSTM](#blockstm) for concurrent transaction execution, and CometBFT introduces [LibP2P](#libp2p) and [`AdaptiveSync`](#adaptivesync). NOTE: It is important to emphasize that the following are **experimental** features. We DO NOT recommend running chains with these features enabled in production without extensive testing. ### Cosmos SDK #### BlockSTM BlockSTM enables deterministic, concurrent execution of transactions, improving block execution speeds and throughput. Developers interested in experimenting with BlockSTM should read the [documentation](https://docs.cosmos.network/sdk/latest/experimental/blockstm). Below is an example of setting up BlockSTM: > **⚠️ Warning:** BlockSTM is experimental. Ensure thorough testing before enabling in production. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} import ( "runtime" "github.com/cosmos/cosmos-sdk/baseapp/blockstm" ) oKeys := storetypes.NewObjectStoreKeys(banktypes.ObjectStoreKey) keys := storetypes.NewKVStoreKeys( authtypes.StoreKey, banktypes.StoreKey, stakingtypes.StoreKey, // ...
other store keys ) // Collect non-transient store keys var nonTransientKeys []storetypes.StoreKey for _, k := range keys { nonTransientKeys = append(nonTransientKeys, k) } for _, k := range oKeys { nonTransientKeys = append(nonTransientKeys, k) } // Enable BlockSTM runner bApp.SetBlockSTMTxRunner(blockstm.NewSTMRunner( txConfig.TxDecoder(), nonTransientKeys, min(runtime.GOMAXPROCS(0), runtime.NumCPU()), true, // debug logging sdk.DefaultBondDenom, )) // Optionally disable block gas meter for better performance bApp.SetDisableBlockGasMeter(true) // Set ObjectStoreKey on bank module app.BankKeeper = app.BankKeeper.WithObjStoreKey(oKeys[banktypes.ObjectStoreKey]) ``` ### CometBFT v0.39 Updates #### LibP2P libp2p replaces CometBFT's legacy `comet-p2p` transport layer with [go-libp2p](https://libp2p.io/). It adds native stream-oriented transport, concurrent receive pipelines, and autoscaled worker pools per reactor, reducing queue pressure and improving message flow under load. In benchmarks, libp2p has been a key contributor to reaching over 2000 TPS. Beyond raw throughput, it improves network liveness by making peer communication and block propagation more resilient under sustained congestion and sudden load spikes. Unlike other opt-in features, **to opt in to libp2p, every validator in the network must upgrade together**. CometBFT p2p and libp2p are fundamentally incompatible and cannot interoperate. Because of this, a coordinated network-wide migration at a specific upgrade height is required. See the [libp2p page](https://docs.cosmos.network/cometbft/latest/docs/experimental/lib-p2p) in the CometBFT documentation for details. #### `AdaptiveSync` `AdaptiveSync` allows a node to run `blocksync` and consensus at the same time for faster recovery behavior. In the default flow, a node starts in `blocksync`, catches up, then switches to consensus. Under sustained load, a node can remain behind and struggle to catch up.
With `adaptive_sync` enabled, consensus still works normally, but it can also ingest already available blocks from `blocksync`, allowing nodes to recover more quickly during traffic spikes. `AdaptiveSync` does not change consensus safety or finality rules. See the [`AdaptiveSync` documentation](https://docs.cosmos.network/cometbft/latest/docs/core/block-sync#adaptivesync) for details. # ADR 020: Protocol Buffer Transaction Encoding Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-020-protobuf-transaction-encoding ## Changelog * 2020 March 06: Initial Draft * 2020 March 12: API Updates * 2020 April 13: Added details on interface `oneof` handling * 2020 April 30: Switch to `Any` * 2020 May 14: Describe public key encoding * 2020 June 08: Store `TxBody` and `AuthInfo` as bytes in `SignDoc`; Document `TxRaw` as broadcast and storage type. * 2020 August 07: Use ADR 027 for serializing `SignDoc`. * 2020 August 19: Move sequence field from `SignDoc` to `SignerInfo`, as discussed in [#6966](https://github.com/cosmos/cosmos-sdk/issues/6966). * 2020 September 25: Remove `PublicKey` type in favor of `secp256k1.PubKey`, `ed25519.PubKey` and `multisig.LegacyAminoPubKey`. * 2020 October 15: Add `GetAccount` and `GetAccountWithHeight` methods to the `AccountRetriever` interface. * 2021 Feb 24: The Cosmos SDK does not use Tendermint's `PubKey` interface anymore, but its own `cryptotypes.PubKey`. Updates to reflect this. * 2021 May 3: Rename `clientCtx.JSONMarshaler` to `clientCtx.JSONCodec`. * 2021 June 10: Add `clientCtx.Codec: codec.Codec`. ## Status Accepted ## Context This ADR is a continuation of the motivation, design, and context established in [ADR 019](/sdk/v0.50/build/architecture/adr-019-protobuf-state-encoding), namely, we aim to design the Protocol Buffer migration path for the client-side of the Cosmos SDK. 
Specifically, the client-side migration path primarily includes tx generation and signing, message construction and routing, in addition to CLI & REST handlers and business logic (i.e. queriers). With this in mind, we will tackle the migration path via two main areas, txs and querying. However, this ADR solely focuses on transactions. Querying should be addressed in a future ADR, but it should build off of these proposals. Based on detailed discussions ([#6030](https://github.com/cosmos/cosmos-sdk/issues/6030) and [#6078](https://github.com/cosmos/cosmos-sdk/issues/6078)), the original design for transactions was changed substantially from a `oneof`/JSON-signing approach to the approach described below. ## Decision ### Transactions Since interface values are encoded with `google.protobuf.Any` in state (see [ADR 019](/sdk/latest/reference/architecture/adr-019-protobuf-state-encoding)), `sdk.Msg`s are encoded with `Any` in transactions. One of the main goals of using `Any` to encode interface values is to have a core set of types which is reused by apps so that clients can safely be compatible with as many chains as possible. It is one of the goals of this specification to provide a flexible cross-chain transaction format that can serve a wide variety of use cases without breaking client compatibility. In order to facilitate signing, transactions are separated into `TxBody`, which will be re-used by `SignDoc` below, and `signatures`: ```protobuf expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // types/types.proto package cosmos_sdk.v1; message Tx { TxBody body = 1; AuthInfo auth_info = 2; // A list of signatures that matches the length and order of AuthInfo's signer_infos to // allow connecting signature meta information like public key and signing mode by position. repeated bytes signatures = 3; } // A variant of Tx that pins the signer's exact binary representation of body and // auth_info.
This is used for signing, broadcasting and verification. The binary // `serialize(tx: TxRaw)` is stored in Tendermint and the hash `sha256(serialize(tx: TxRaw))` // becomes the "txhash", commonly used as the transaction ID. message TxRaw { // A protobuf serialization of a TxBody that matches the representation in SignDoc. bytes body = 1; // A protobuf serialization of an AuthInfo that matches the representation in SignDoc. bytes auth_info = 2; // A list of signatures that matches the length and order of AuthInfo's signer_infos to // allow connecting signature meta information like public key and signing mode by position. repeated bytes signatures = 3; } message TxBody { // A list of messages to be executed. The required signers of those messages define // the number and order of elements in AuthInfo's signer_infos and Tx's signatures. // Each required signer address is added to the list only the first time it occurs. // // By convention, the first required signer (usually from the first message) is referred // to as the primary signer and pays the fee for the whole transaction. repeated google.protobuf.Any messages = 1; string memo = 2; int64 timeout_height = 3; repeated google.protobuf.Any extension_options = 1023; } message AuthInfo { // This list defines the signing modes for the required signers. The number // and order of elements must match the required signers from TxBody's messages. // The first element is the primary signer and the one which pays the fee. repeated SignerInfo signer_infos = 1; // The fee can be calculated based on the cost of evaluating the body and doing signature verification of the signers. This can be estimated via simulation. Fee fee = 2; } message SignerInfo { // The public key is optional for accounts that already exist in state. If unset, the // verifier can use the required signer address for this position and lookup the public key. 
google.protobuf.Any public_key = 1; // ModeInfo describes the signing mode of the signer and is a nested // structure to support nested multisig pubkeys ModeInfo mode_info = 2; // sequence is the sequence of the account, which describes the // number of committed transactions signed by a given address. It is used to prevent // replay attacks. uint64 sequence = 3; } message ModeInfo { oneof sum { Single single = 1; Multi multi = 2; } // Single is the mode info for a single signer. It is structured as a message // to allow for additional fields such as locale for SIGN_MODE_TEXTUAL in the future message Single { SignMode mode = 1; } // Multi is the mode info for a multisig public key message Multi { // bitarray specifies which keys within the multisig are signing CompactBitArray bitarray = 1; // mode_infos is the corresponding modes of the signers of the multisig // which could include nested multisig public keys repeated ModeInfo mode_infos = 2; } } enum SignMode { SIGN_MODE_UNSPECIFIED = 0; SIGN_MODE_DIRECT = 1; SIGN_MODE_TEXTUAL = 2; SIGN_MODE_LEGACY_AMINO_JSON = 127; } ``` As will be discussed below, in order to include as much of the `Tx` as possible in the `SignDoc`, `SignerInfo` is separated from signatures so that only the raw signatures themselves live outside of what is signed over. Because we are aiming for a flexible, extensible cross-chain transaction format, new transaction processing options should be added to `TxBody` as soon as those use cases are discovered, even if they can't be implemented yet. Because there is coordination overhead in this, `TxBody` includes an `extension_options` field which can be used for any transaction processing options that are not already covered. App developers should, nevertheless, attempt to upstream important improvements to `Tx`.
### Signing All of the signing modes below aim to provide the following guarantees: * **No Malleability**: `TxBody` and `AuthInfo` cannot change once the transaction is signed * **Predictable Gas**: if I am signing a transaction where I am paying a fee, the final gas is fully dependent on what I am signing These guarantees give the maximum amount of confidence to message signers that manipulation of `Tx`s by intermediaries can't result in any meaningful changes. #### `SIGN_MODE_DIRECT` The "direct" signing behavior is to sign the raw `TxBody` bytes as broadcast over the wire. This has the advantages of: * requiring the minimum additional client capabilities beyond a standard protocol buffers implementation * leaving effectively zero holes for transaction malleability (i.e. there are no subtle differences between the signing and encoding formats which could potentially be exploited by an attacker) Signatures are structured using the `SignDoc` below which reuses the serialization of `TxBody` and `AuthInfo` and only adds the fields which are needed for signatures: ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // types/types.proto message SignDoc { // A protobuf serialization of a TxBody that matches the representation in TxRaw. bytes body = 1; // A protobuf serialization of an AuthInfo that matches the representation in TxRaw. bytes auth_info = 2; string chain_id = 3; uint64 account_number = 4; } ``` In order to sign in the default mode, clients take the following steps: 1. Serialize `TxBody` and `AuthInfo` using any valid protobuf implementation. 2. Create a `SignDoc` and serialize it using [ADR 027](/sdk/v0.50/build/architecture/adr-027-deterministic-protobuf-serialization). 3. Sign the encoded `SignDoc` bytes. 4. Build a `TxRaw` and serialize it for broadcasting.
Signature verification is based on comparing the raw `TxBody` and `AuthInfo` bytes encoded in `TxRaw`, not on any ["canonicalization"](https://github.com/regen-network/canonical-proto3) algorithm, which would create added complexity for clients in addition to preventing some forms of upgradeability (to be addressed later in this document). Signature verifiers do: 1. Deserialize a `TxRaw` and pull out `body` and `auth_info`. 2. Create a list of required signer addresses from the messages. 3. For each required signer: * Pull account number and sequence from the state. * Obtain the public key either from state or `AuthInfo`'s `signer_infos`. * Create a `SignDoc` and serialize it using [ADR 027](/sdk/v0.50/build/architecture/adr-027-deterministic-protobuf-serialization). * Verify the signature at the same list position against the serialized `SignDoc`. #### `SIGN_MODE_LEGACY_AMINO` In order to support legacy wallets and exchanges, Amino JSON will be temporarily supported for transaction signing. Once wallets and exchanges have had a chance to upgrade to protobuf-based signing, this option will be disabled. In the meantime, it is foreseen that disabling the current Amino signing would cause too much breakage to be feasible. Note that this is mainly a requirement of the Cosmos Hub and other chains may choose to disable Amino signing immediately. Legacy clients will be able to sign a transaction using the current Amino JSON format and have it encoded to protobuf using the REST `/tx/encode` endpoint before broadcasting. #### `SIGN_MODE_TEXTUAL` As was discussed extensively in [#6078](https://github.com/cosmos/cosmos-sdk/issues/6078), there is a desire for a human-readable signing encoding, especially for hardware wallets like the [Ledger](https://www.ledger.com) which display transaction contents to users before signing. JSON was an attempt at this but falls short of the ideal.
`SIGN_MODE_TEXTUAL` is intended as a placeholder for a human-readable encoding which will replace Amino JSON. This new encoding should be even more focused on readability than JSON, possibly based on formatting strings like [MessageFormat](http://userguide.icu-project.org/formatparse/messages). In order to ensure that the new human-readable format does not suffer from transaction malleability issues, `SIGN_MODE_TEXTUAL` requires that the *human-readable bytes are concatenated with the raw `SignDoc`* to generate sign bytes. Multiple human-readable formats (maybe even localized messages) may be supported by `SIGN_MODE_TEXTUAL` when it is implemented. ### Unknown Field Filtering Unknown fields in protobuf messages should generally be rejected by transaction processors because: * important data may be present in the unknown fields that, if ignored, will cause unexpected behavior for clients * they present a malleability vulnerability where attackers can bloat tx size by adding random uninterpreted data to unsigned content (i.e. the master `Tx`, not `TxBody`) There are also scenarios where we may choose to safely ignore unknown fields ([Link](https://github.com/cosmos/cosmos-sdk/issues/6078#issuecomment-624400188)) to provide graceful forwards compatibility with newer clients. We propose that field numbers with bit 11 set (for most use cases this is the range of 1024-2047) be considered non-critical fields that can safely be ignored if unknown. To handle this we will need an unknown field filter that: * always rejects unknown fields in unsigned content (i.e. top-level `Tx` and unsigned parts of `AuthInfo` if present based on the signing mode) * rejects unknown fields in all messages (including nested `Any`s) other than fields with bit 11 set This will likely need to be a custom protobuf parser pass that takes message bytes and `FileDescriptor`s and returns a boolean result.
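The bit 11 rule above reduces to a single bitwise test on the field number. A minimal sketch (the `nonCritical` helper name is ours, not from the SDK):

```go
package main

import "fmt"

// nonCritical reports whether a protobuf field number has bit 11 set
// (the value 1024), which this ADR proposes to treat as safely
// ignorable when the field is unknown to the parser.
func nonCritical(fieldNum uint64) bool {
	return fieldNum&1024 != 0
}

func main() {
	fmt.Println(nonCritical(3))    // false: critical, reject if unknown
	fmt.Println(nonCritical(1023)) // false: bit 11 not set
	fmt.Println(nonCritical(1024)) // true: non-critical
	fmt.Println(nonCritical(2047)) // true: non-critical
	fmt.Println(nonCritical(2048)) // false: bit 12, not bit 11
}
```

Note that the 1024-2047 range quoted in the text only covers field numbers below 2048; larger field numbers (e.g. 3072) also have bit 11 set, which is why the ADR phrases the rule in terms of the bit rather than the range.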
### Public Key Encoding Public keys in the Cosmos SDK implement the `cryptotypes.PubKey` interface. We propose to use `Any` for protobuf encoding as we are doing with other interfaces (for example, in `BaseAccount.PubKey` and `SignerInfo.PublicKey`). The following public keys are implemented: secp256k1, secp256r1, ed25519 and legacy-multisignature. For example: ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} message PubKey { bytes key = 1; } ``` `multisig.LegacyAminoPubKey` contains a repeated `Any` member to support any protobuf public key type. Apps should only attempt to handle a registered set of public keys that they have tested. The provided signature verification ante handler decorators will enforce this. ### CLI & REST Currently, the REST and CLI handlers encode and decode types and txs via Amino JSON encoding using a concrete Amino codec. Because some of the types dealt with in the client can be interfaces, similar to what we described in [ADR 019](/sdk/v0.50/build/architecture/adr-019-protobuf-state-encoding), the client logic will now need to take a codec interface that knows not only how to handle all the types, but also knows how to generate transactions, signatures, and messages.
```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type AccountRetriever interface { GetAccount(clientCtx Context, addr sdk.AccAddress) (client.Account, error) GetAccountWithHeight(clientCtx Context, addr sdk.AccAddress) (client.Account, int64, error) EnsureExists(clientCtx client.Context, addr sdk.AccAddress) error GetAccountNumberSequence(clientCtx client.Context, addr sdk.AccAddress) (uint64, uint64, error) } type Generator interface { NewTx() TxBuilder NewFee() ClientFee NewSignature() ClientSignature MarshalTx(tx types.Tx) ([]byte, error) } type TxBuilder interface { GetTx() sdk.Tx SetMsgs(...sdk.Msg) error GetSignatures() []sdk.Signature SetSignatures(...sdk.Signature) GetFee() sdk.Fee SetFee(sdk.Fee) GetMemo() string SetMemo(string) } ``` We then update `Context` to have new fields: `Codec`, `TxGenerator`, and `AccountRetriever`, and we update `AppModuleBasic.GetTxCmd` to take a `Context` which should have all of these fields pre-populated. Each client method should then use one of the `Init` methods to re-initialize the pre-populated `Context`. `tx.GenerateOrBroadcastTx` can be used to generate or broadcast a transaction. For example: ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} import "github.com/spf13/cobra" import "github.com/cosmos/cosmos-sdk/client" import "github.com/cosmos/cosmos-sdk/client/tx" func NewCmdDoSomething(clientCtx client.Context) *cobra.Command { return &cobra.Command{ RunE: func(cmd *cobra.Command, args []string) error { clientCtx := clientCtx.InitWithInput(cmd.InOrStdin()) msg := NewSomeMsg{...} return tx.GenerateOrBroadcastTx(clientCtx, msg) }, } } ``` ## Future Improvements ### `SIGN_MODE_TEXTUAL` specification A concrete specification and implementation of `SIGN_MODE_TEXTUAL` is intended as a near-term future improvement so that the ledger app and other wallets can gracefully transition away from Amino JSON.
### `SIGN_MODE_DIRECT_AUX`

(*Documented as option (3) in [Link](https://github.com/cosmos/cosmos-sdk/issues/6078#issuecomment-628026933)*)

We could add a mode `SIGN_MODE_DIRECT_AUX` to support scenarios where multiple signatures are being gathered into a single transaction, but the message composer does not yet know which signatures will be included in the final transaction. For instance, I may have a 3/5 multisig wallet and want to send a `TxBody` to all 5 signers to see who signs first. As soon as I have 3 signatures, I will go ahead and build the full transaction.

With `SIGN_MODE_DIRECT`, each signer needs to sign the full `AuthInfo`, which includes the full list of all signers and their signing modes, making the above scenario very hard.

`SIGN_MODE_DIRECT_AUX` would allow "auxiliary" signers to create their signature using only `TxBody` and their own `PublicKey`. This allows the full list of signers in `AuthInfo` to be delayed until signatures have been collected. An "auxiliary" signer is any signer besides the primary signer who is paying the fee. For the primary signer, the full `AuthInfo` is actually needed to calculate gas and fees, because that is dependent on how many signers there are and which key types and signing modes they are using. Auxiliary signers, however, do not need to worry about fees or gas and thus can just sign `TxBody`.

To generate a signature in `SIGN_MODE_DIRECT_AUX` these steps would be followed:

1. Encode `SignDocAux` (with the same requirement that fields must be serialized in order):

```protobuf
// types/types.proto
message SignDocAux {
    bytes body_bytes = 1;
    // PublicKey is included in SignDocAux:
    // 1. as a special case for multisig public keys. For multisig public keys,
    // the signer should use the top-level multisig public key they are signing
    // against, not their own public key. This is to prevent a form
    // of malleability where a signature could be taken out of context of the
    // multisig key that was intended to be signed for
    // 2. to guard against a scenario where configuration information is encoded
    // in public keys (it has been proposed) such that two keys can generate
    // the same signature but have different security properties
    //
    // By including it here, the composer of AuthInfo cannot reference a
    // public key variant the signer did not intend to use
    PublicKey public_key = 2;
    string chain_id = 3;
    uint64 account_number = 4;
}
```

2. Sign the encoded `SignDocAux` bytes
3. Send their signature and `SignerInfo` to the primary signer, who will then sign and broadcast the final transaction (with `SIGN_MODE_DIRECT` and `AuthInfo` added) once enough signatures have been collected

### `SIGN_MODE_DIRECT_RELAXED`

(*Documented as option (1)(a) in [Link](https://github.com/cosmos/cosmos-sdk/issues/6078#issuecomment-628026933)*)

This is a variation of `SIGN_MODE_DIRECT` where multiple signers wouldn't need to coordinate public keys and signing modes in advance. It would involve an alternate `SignDoc` similar to `SignDocAux` above with the fee. This could be added in the future if client developers found the burden of collecting public keys and modes in advance too burdensome.

## Consequences

### Positive

* Significant performance gains.
* Supports backward and forward type compatibility.
* Better support for cross-language clients.
* Multiple signing modes allow for greater protocol evolution.

### Negative

* `google.protobuf.Any` type URLs increase transaction size, although the effect may be negligible or compression may be able to mitigate it.
### Neutral

## References

# ADR 021: Protocol Buffer Query Encoding

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-021-protobuf-query-encoding

## Changelog

* 2020 March 27: Initial Draft

## Status

Accepted

## Context

This ADR is a continuation of the motivation, design, and context established in [ADR 019](/sdk/v0.50/build/architecture/adr-019-protobuf-state-encoding) and [ADR 020](/sdk/v0.50/build/architecture/adr-020-protobuf-transaction-encoding), namely, we aim to design the Protocol Buffer migration path for the client-side of the Cosmos SDK. This ADR continues from [ADR 020](/sdk/v0.50/build/architecture/adr-020-protobuf-transaction-encoding) to specify the encoding of queries.

## Decision

### Custom Query Definition

Modules define custom queries through a protocol buffers `service` definition. These `service` definitions are generally associated with and used by the GRPC protocol. However, the protocol buffers specification indicates that they can be used more generically by any request/response protocol that uses protocol buffer encoding. Thus, we can use `service` definitions for specifying custom ABCI queries and even reuse a substantial amount of the GRPC infrastructure.

Each module with custom queries should define a service canonically named `Query`:

```protobuf
// x/bank/types/types.proto
service Query {
  rpc QueryBalance(QueryBalanceParams) returns (cosmos_sdk.v1.Coin) { }
  rpc QueryAllBalances(QueryAllBalancesParams) returns (QueryAllBalancesResponse) { }
}
```

#### Handling of Interface Types

Modules that use interface types and need true polymorphism generally force a `oneof` up to the app-level that provides the set of concrete implementations of that interface that the app supports.
While apps are welcome to do the same for queries and implement an app-level query service, it is recommended that modules provide query methods that expose these interfaces via `google.protobuf.Any`. There is a concern on the transaction level that the overhead of `Any` is too high to justify its usage. However, for queries this is not a concern, and providing generic module-level queries that use `Any` does not preclude apps from also providing app-level queries that return the app-level `oneof`s.

A hypothetical example for the `gov` module would look something like:

```protobuf
// x/gov/types/types.proto

import "google/protobuf/any.proto";

service Query {
  rpc GetProposal(GetProposalParams) returns (AnyProposal) { }
}

message AnyProposal {
  ProposalBase base = 1;
  google.protobuf.Any content = 2;
}
```

### Custom Query Implementation

In order to implement the query service, we can reuse the existing [gogo protobuf](https://github.com/cosmos/gogoproto) grpc plugin, which for a service named `Query` generates an interface named `QueryServer` as below:

```go
type QueryServer interface {
	QueryBalance(context.Context, *QueryBalanceParams) (*types.Coin, error)
	QueryAllBalances(context.Context, *QueryAllBalancesParams) (*QueryAllBalancesResponse, error)
}
```

We implement the custom queries for our module by implementing this interface. The first parameter in this generated interface is a generic `context.Context`, whereas querier methods generally need an instance of `sdk.Context` to read from the store. Since arbitrary values can be attached to `context.Context` using the `WithValue` and `Value` methods, the Cosmos SDK should provide a function `sdk.UnwrapSDKContext` to retrieve the `sdk.Context` from the provided `context.Context`.
An example implementation of `QueryBalance` for the bank module as above would look something like:

```go
type Querier struct {
	Keeper
}

func (q Querier) QueryBalance(ctx context.Context, params *types.QueryBalanceParams) (*sdk.Coin, error) {
	balance := q.GetBalance(sdk.UnwrapSDKContext(ctx), params.Address, params.Denom)
	return &balance, nil
}
```

### Custom Query Registration and Routing

Query server implementations as above would be registered with `AppModule`s using a new method `RegisterQueryService(grpc.Server)`, which could be implemented simply as below:

```go
// x/bank/module.go
func (am AppModule) RegisterQueryService(server grpc.Server) {
	types.RegisterQueryServer(server, keeper.Querier{am.keeper})
}
```

Under the hood, a new method `RegisterService(sd *grpc.ServiceDesc, handler interface{})` will be added to the existing `baseapp.QueryRouter` to add the queries to the custom query routing table (with the routing method described below). The signature for this method matches the existing `RegisterServer` method on the GRPC `Server` type, where `handler` is the custom query server implementation described above.

GRPC-like requests are routed by the service name (ex. `cosmos_sdk.x.bank.v1.Query`) and method name (ex. `QueryBalance`) combined with `/`s to form a full method name (ex. `/cosmos_sdk.x.bank.v1.Query/QueryBalance`). This gets translated into an ABCI query as `custom/cosmos_sdk.x.bank.v1.Query/QueryBalance`. Service handlers registered with `QueryRouter.RegisterService` will be routed this way.

Beyond the method name, GRPC requests carry a protobuf encoded payload, which maps naturally to `RequestQuery.Data`, and receive a protobuf encoded response or error.
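The route construction just described can be sketched in a few lines of plain Go. The helper names below are illustrative only, not SDK APIs:

```go
package main

import "fmt"

// fullMethodName joins a gRPC service name and method name with "/"s to
// form the full method name used for routing.
func fullMethodName(service, method string) string {
	return "/" + service + "/" + method
}

// abciQueryPath translates a full gRPC method name into the ABCI custom
// query path described above.
func abciQueryPath(fullMethod string) string {
	return "custom" + fullMethod
}

func main() {
	m := fullMethodName("cosmos_sdk.x.bank.v1.Query", "QueryBalance")
	fmt.Println(m)                // /cosmos_sdk.x.bank.v1.Query/QueryBalance
	fmt.Println(abciQueryPath(m)) // custom/cosmos_sdk.x.bank.v1.Query/QueryBalance
}
```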
Thus there is a quite natural mapping of GRPC-like rpc methods to the existing `sdk.Query` and `QueryRouter` infrastructure.

This basic specification allows us to reuse protocol buffer `service` definitions for ABCI custom queries, substantially reducing the need for manual decoding and encoding in query methods.

### GRPC Protocol Support

In addition to providing an ABCI query pathway, we can easily provide a GRPC proxy server that routes requests in the GRPC protocol to ABCI query requests under the hood. In this way, clients could use their host languages' existing GRPC implementations to make direct queries against Cosmos SDK apps using these `service` definitions. In order for this server to work, the `QueryRouter` on `BaseApp` will need to expose the service handlers registered with `QueryRouter.RegisterService` to the proxy server implementation. Nodes could launch the proxy server on a separate port in the same process as the ABCI app with a command-line flag.

### REST Queries and Swagger Generation

[grpc-gateway](https://github.com/grpc-ecosystem/grpc-gateway) is a project that translates REST calls into GRPC calls using special annotations on service methods. Modules that want to expose REST queries should add `google.api.http` annotations to their `rpc` methods as in the example below.

```protobuf
// x/bank/types/types.proto
service Query {
  rpc QueryBalance(QueryBalanceParams) returns (cosmos_sdk.v1.Coin) {
    option (google.api.http) = {
      get: "/x/bank/v1/balance/{address}/{denom}"
    };
  }
  rpc QueryAllBalances(QueryAllBalancesParams) returns (QueryAllBalancesResponse) {
    option (google.api.http) = {
      get: "/x/bank/v1/balances/{address}"
    };
  }
}
```

grpc-gateway will work directly against the GRPC proxy described above, which will translate requests to ABCI queries under the hood. grpc-gateway can also generate Swagger definitions automatically.
In the current implementation of REST queries, each module needs to implement REST queries manually in addition to ABCI querier methods. Using the grpc-gateway approach, there will be no need to generate separate REST query handlers, only query servers as described above, since grpc-gateway handles the translation of protobuf to REST as well as the Swagger definitions.

The Cosmos SDK should provide CLI commands for apps to start the GRPC gateway either in a separate process or in the same process as the ABCI app, as well as a command for generating grpc-gateway proxy `.proto` files and the `swagger.json` file.

### Client Usage

The gogo protobuf grpc plugin generates client interfaces in addition to server interfaces. For the `Query` service defined above we would get a `QueryClient` interface like:

```go
type QueryClient interface {
	QueryBalance(ctx context.Context, in *QueryBalanceParams, opts ...grpc.CallOption) (*types.Coin, error)
	QueryAllBalances(ctx context.Context, in *QueryAllBalancesParams, opts ...grpc.CallOption) (*QueryAllBalancesResponse, error)
}
```

Via a small patch to gogo protobuf ([gogo/protobuf#675](https://github.com/gogo/protobuf/pull/675)) we have tweaked the grpc codegen to use an interface rather than a concrete type for the generated client struct. This allows us to also reuse the GRPC infrastructure for ABCI client queries.
`Context` will receive a new method `QueryConn` that returns a `ClientConn` that routes calls to ABCI queries.

Clients (such as CLI methods) will then be able to call query methods like this:

```go
clientCtx := client.NewContext()
queryClient := types.NewQueryClient(clientCtx.QueryConn())
params := &types.QueryBalanceParams{addr, denom}
result, err := queryClient.QueryBalance(gocontext.Background(), params)
```

### Testing

Tests would be able to create a query client directly from keeper and `sdk.Context` references using a `QueryServerTestHelper` as below:

```go
queryHelper := baseapp.NewQueryServerTestHelper(ctx)
types.RegisterQueryServer(queryHelper, keeper.Querier{app.BankKeeper})
queryClient := types.NewQueryClient(queryHelper)
```

## Future Improvements

## Consequences

### Positive

* greatly simplified querier implementation (no manual encoding/decoding)
* easy query client generation (can use existing grpc and swagger tools)
* no need for REST query implementations
* type safe query methods (generated via grpc plugin)
* going forward, there will be less breakage of query methods because of the backwards compatibility guarantees provided by buf

### Negative

* all clients using the existing ABCI/REST queries will need to be refactored for both the new GRPC/REST query paths as well as protobuf/proto-json encoded data, but this is more or less unavoidable in the protobuf refactoring

### Neutral

## References

# ADR 022: Custom BaseApp panic handling

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-022-custom-panic-handling

## Changelog

* 2020 Apr 24: Initial Draft
* 2021 Sep 14: Superseded by ADR-045

## Status

SUPERSEDED by ADR-045

## Context

The current implementation of BaseApp does not allow
developers to write custom error handlers during panic recovery in the [runTx()](https://github.com/cosmos/cosmos-sdk/blob/bad4ca75f58b182f600396ca350ad844c18fc80b/baseapp/baseapp.go#L539) method. We think that this method can be more flexible and can give Cosmos SDK users more options for customization without the need to rewrite the whole BaseApp. There is also one special case, `sdk.ErrorOutOfGas` error handling, which might be handled in a "standard" way (middleware) alongside the others.

We propose a middleware solution, which could help developers implement the following cases:

* add external logging (say, sending reports to external services like [Sentry](https://sentry.io));
* call panic for specific error cases;

It will also make the `OutOfGas` case and the `default` case middlewares themselves. The `default` case wraps the recovery object into an error and logs it ([example middleware implementation](#Recovery-middleware)).

Our project has a sidecar service running alongside the blockchain node (a smart contracts virtual machine). It is essential that node `<->` sidecar connectivity stays stable for TX processing, so when the communication breaks we need to crash the node and reboot it once the problem is solved. That behavior keeps the node's state machine execution deterministic. As all keeper panics are caught by runTx's `defer()` handler, we have to adjust the BaseApp code in order to customize it.

## Decision

### Design

#### Overview

Instead of hardcoding custom error handling into BaseApp, we suggest using a set of middlewares which can be customized externally and will allow developers to use as many custom error handlers as they want. An implementation with tests can be found [here](https://github.com/cosmos/cosmos-sdk/pull/6053).

#### Implementation details

##### Recovery handler

A new `RecoveryHandler` type is added. The `recoveryObj` input argument is the object returned by the standard Go `recover()` function from the `builtin` package.
```go
type RecoveryHandler func(recoveryObj interface{}) error
```

A handler should type-assert (or otherwise inspect) the object to determine whether it should handle it. `nil` should be returned if the input object can't be handled by that `RecoveryHandler` (it is not the handler's target type). A non-`nil` error should be returned if the input object was handled and middleware chain execution should be stopped.

An example:

```go
func exampleErrHandler(recoveryObj interface{}) error {
	err, ok := recoveryObj.(error)
	if !ok {
		return nil
	}

	if someSpecificError.Is(err) {
		panic(customPanicMsg)
	} else {
		return nil
	}
}
```

This example breaks the application execution, but it might also enrich the error's context like the `OutOfGas` handler does.

##### Recovery middleware

We also add a middleware type (decorator). That function type wraps a `RecoveryHandler` and returns the next middleware in the execution chain and the handler's `error`. The type is used to separate the actual `recover()` object handling from the middleware chain processing.

```go
type recoveryMiddleware func(recoveryObj interface{}) (recoveryMiddleware, error)

func newRecoveryMiddleware(handler RecoveryHandler, next recoveryMiddleware) recoveryMiddleware {
	return func(recoveryObj interface{}) (recoveryMiddleware, error) {
		if err := handler(recoveryObj); err != nil {
			return nil, err
		}
		return next, nil
	}
}
```

The function receives a `recoveryObj` object and returns:

* (next `recoveryMiddleware`, `nil`) if the object wasn't handled (not a target type) by the `RecoveryHandler`;
* (`nil`, non-nil `error`) if the input object was handled and other middlewares in the chain should not be executed;
* (`nil`, `nil`) in case of invalid behavior.
In the last case the panic may go unhandled; this can be avoided by always using a `default` middleware as the rightmost element of the chain (it always returns an `error`).

`OutOfGas` middleware example:

```go
func newOutOfGasRecoveryMiddleware(gasWanted uint64, ctx sdk.Context, next recoveryMiddleware) recoveryMiddleware {
	handler := func(recoveryObj interface{}) error {
		err, ok := recoveryObj.(sdk.ErrorOutOfGas)
		if !ok {
			return nil
		}

		return errorsmod.Wrap(
			sdkerrors.ErrOutOfGas, fmt.Sprintf(
				"out of gas in location: %v; gasWanted: %d, gasUsed: %d",
				err.Descriptor, gasWanted, ctx.GasMeter().GasConsumed(),
			),
		)
	}

	return newRecoveryMiddleware(handler, next)
}
```

`Default` middleware example:

```go
func newDefaultRecoveryMiddleware() recoveryMiddleware {
	handler := func(recoveryObj interface{}) error {
		return errorsmod.Wrap(
			sdkerrors.ErrPanic, fmt.Sprintf("recovered: %v\nstack:\n%v", recoveryObj, string(debug.Stack())),
		)
	}

	return newRecoveryMiddleware(handler, nil)
}
```

##### Recovery processing

The basic chain of middleware processing would look like:

```go
func processRecovery(recoveryObj interface{}, middleware recoveryMiddleware) error {
	if middleware == nil {
		return nil
	}

	next, err := middleware(recoveryObj)
	if err != nil {
		return err
	}
	if next == nil {
		return nil
	}

	return processRecovery(recoveryObj, next)
}
```

That way we can create a middleware chain which is executed from left to right; the rightmost middleware is a `default` handler which must return an `error`.

##### BaseApp changes

The `default` middleware chain must exist in the `BaseApp` object.
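Before looking at the `BaseApp` wiring, the handler/middleware/processing pieces above can be exercised end-to-end in a standalone sketch. Here `defaultHandler` and `run` are illustrative stand-ins (not SDK APIs); `run`'s `defer`/`recover` hook plays the role of runTx's handler:

```go
package main

import (
	"errors"
	"fmt"
)

type RecoveryHandler func(recoveryObj interface{}) error

type recoveryMiddleware func(recoveryObj interface{}) (recoveryMiddleware, error)

// newRecoveryMiddleware wraps a handler: a non-nil error stops the chain,
// otherwise processing continues with next.
func newRecoveryMiddleware(handler RecoveryHandler, next recoveryMiddleware) recoveryMiddleware {
	return func(recoveryObj interface{}) (recoveryMiddleware, error) {
		if err := handler(recoveryObj); err != nil {
			return nil, err
		}
		return next, nil
	}
}

// processRecovery walks the chain left to right until a middleware
// returns an error or the chain is exhausted.
func processRecovery(recoveryObj interface{}, middleware recoveryMiddleware) error {
	if middleware == nil {
		return nil
	}
	next, err := middleware(recoveryObj)
	if err != nil {
		return err
	}
	return processRecovery(recoveryObj, next)
}

// defaultHandler mirrors the rightmost "default" middleware: it always
// converts the recovered object into an error.
func defaultHandler(recoveryObj interface{}) error {
	return fmt.Errorf("recovered: %v", recoveryObj)
}

// run executes f, converting any panic into an error via the chain.
func run(f func(), chain recoveryMiddleware) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = processRecovery(r, chain)
		}
	}()
	f()
	return nil
}

func main() {
	chain := newRecoveryMiddleware(defaultHandler, nil)
	err := run(func() { panic(errors.New("out of gas")) }, chain)
	fmt.Println(err) // recovered: out of gas
}
```

Custom handlers would be prepended to `chain` exactly as `AddRunTxRecoveryHandler` does below, so the most recently added handler gets the first chance to inspect the recovered object.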
`BaseApp` modifications:

```go
type BaseApp struct {
	// ...
	runTxRecoveryMiddleware recoveryMiddleware
}

func NewBaseApp(...) {
	// ...
	app.runTxRecoveryMiddleware = newDefaultRecoveryMiddleware()
}

func (app *BaseApp) runTx(...) {
	// ...
	defer func() {
		if r := recover(); r != nil {
			recoveryMW := newOutOfGasRecoveryMiddleware(gasWanted, ctx, app.runTxRecoveryMiddleware)
			err, result = processRecovery(r, recoveryMW), nil
		}

		gInfo = sdk.GasInfo{GasWanted: gasWanted, GasUsed: ctx.GasMeter().GasConsumed()}
	}()
	// ...
}
```

Developers can add their custom `RecoveryHandler`s by providing `AddRunTxRecoveryHandler` as a BaseApp option parameter to the `NewBaseApp` constructor:

```go
func (app *BaseApp) AddRunTxRecoveryHandler(handlers ...RecoveryHandler) {
	for _, h := range handlers {
		app.runTxRecoveryMiddleware = newRecoveryMiddleware(h, app.runTxRecoveryMiddleware)
	}
}
```

This method prepends handlers to the existing chain.

## Consequences

### Positive

* Developers of Cosmos SDK based projects can add custom panic handlers to:
  * add error context for custom panic sources (panics inside custom keepers);
  * emit `panic()`: pass the recovery object through to the Tendermint core;
  * other necessary handling;
* Developers can use the standard Cosmos SDK `BaseApp` implementation, rather than rewriting it in their projects;
* The proposed solution doesn't break the current "standard" `runTx()` flow;

### Negative

* Introduces changes to the execution model design.

### Neutral

* The `OutOfGas` error handler becomes one of the middlewares;
* The default panic handler becomes one of the middlewares;

## References

* [PR-6053 with proposed solution](https://github.com/cosmos/cosmos-sdk/pull/6053)
* [Similar solution.
ADR-010 Modular AnteHandler](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-010-modular-antehandler.md)

# ADR 023: Protocol Buffer Naming and Versioning Conventions

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-023-protobuf-naming

## Changelog

* 2020 April 27: Initial Draft
* 2020 August 5: Update guidelines

## Status

Accepted

## Context

Protocol Buffers provide a basic [style guide](https://developers.google.com/protocol-buffers/docs/style) and [Buf](https://buf.build/docs/style-guide) builds upon that. To the extent possible, we want to follow industry-accepted guidelines and wisdom for the effective usage of protobuf, deviating from those only when there is clear rationale for our use case.

### Adoption of `Any`

The adoption of `google.protobuf.Any` as the recommended approach for encoding interface types (as opposed to `oneof`) makes package naming a central part of the encoding, as fully-qualified message names now appear in encoded messages.

### Current Directory Organization

Thus far we have mostly followed [Buf's](https://buf.build) [DEFAULT](https://buf.build/docs/lint-checkers#default) recommendations, with the minor deviation of disabling [`PACKAGE_DIRECTORY_MATCH`](https://buf.build/docs/lint-checkers#file_layout), which although convenient for developing code comes with the warning from Buf that:

> you will have a very bad time with many Protobuf plugins across various languages if you do not do this

### Adoption of gRPC Queries

In [ADR 021](/sdk/latest/reference/architecture/adr-021-protobuf-query-encoding), gRPC was adopted for Protobuf native queries. The full gRPC service path thus becomes a key part of the ABCI query path. In the future, gRPC queries may be allowed from within persistent scripts by technologies such as CosmWasm, and these query routes would be stored within script binaries.
## Decision

The goal of this ADR is to provide thoughtful naming conventions that:

* encourage a good user experience for when users interact directly with .proto files and fully-qualified protobuf names
* balance conciseness against the possibility of either over-optimizing (making names too short and cryptic) or under-optimizing (just accepting bloated names with lots of redundant information)

These guidelines are meant to act as a style guide for both the Cosmos SDK and third-party modules.

As a starting point, we should adopt all of the [DEFAULT](https://buf.build/docs/lint-checkers#default) checkers in [Buf's](https://buf.build) style guide, including [`PACKAGE_DIRECTORY_MATCH`](https://buf.build/docs/lint-checkers#file_layout), except:

* [PACKAGE\_VERSION\_SUFFIX](https://buf.build/docs/lint-checkers#package_version_suffix)
* [SERVICE\_SUFFIX](https://buf.build/docs/lint-checkers#service_suffix)

Further guidelines are described below.

### Principles

#### Concise and Descriptive Names

Names should be descriptive enough to convey their meaning and distinguish them from other names. Given that we are using fully-qualified names within `google.protobuf.Any` as well as within gRPC query routes, we should aim to keep names concise, without going overboard. The general rule of thumb should be: if a shorter name would convey more or less the same thing, pick the shorter name.

For instance, `cosmos.bank.MsgSend` (19 bytes) conveys roughly the same information as `cosmos_sdk.x.bank.v1.MsgSend` (28 bytes) but is more concise. Such conciseness makes names both more pleasant to work with and take up less space within transactions and on the wire.

We should also resist the temptation to over-optimize by making names cryptically short with abbreviations. For instance, we shouldn't try to reduce `cosmos.bank.MsgSend` to `csm.bk.MSnd` just to save a few bytes. The goal is to make names ***concise but not cryptic***.
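The byte counts above are easy to verify; since the fully-qualified name is embedded as the type URL of every `Any`-encoded message, the difference is paid on every transaction that carries the type:

```go
package main

import "fmt"

func main() {
	concise := "cosmos.bank.MsgSend"
	verbose := "cosmos_sdk.x.bank.v1.MsgSend"
	// Each name travels inside Any.type_url, so shorter package
	// names shave bytes off every encoded message of this type.
	fmt.Println(len(concise), len(verbose)) // 19 28
}
```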
#### Names are for Clients First

Package and type names should be chosen for the benefit of users, not necessarily because of legacy concerns related to the go code-base.

#### Plan for Longevity

In the interests of long-term support, we should plan on the names we choose being in usage for a long time, so now is the opportunity to make the best choices for the future.

### Versioning

#### Guidelines on Stable Package Versions

In general, schema evolution is the way to update protobuf schemas. That means that new fields, messages, and RPC methods are *added* to existing schemas, and old fields, messages, and RPC methods are maintained as long as possible.

Breaking things is often unacceptable in a blockchain scenario. For instance, immutable smart contracts may depend on certain data schemas on the host chain. If the host chain breaks those schemas, the smart contract may be irreparably broken. Even when things can be fixed (for instance in client software), this often comes at a high cost.

Instead of breaking things, we should make every effort to evolve schemas rather than just breaking them. [Buf](https://buf.build) breaking change detection should be used on all stable (non-alpha or -beta) packages to prevent such breakage.

With that in mind, different stable versions (i.e. `v1` or `v2`) of a package should more or less be considered different packages, and this should be a last-resort approach for upgrading protobuf schemas. Scenarios where creating a `v2` may make sense are:

* we want to create a new module with similar functionality to an existing module and adding `v2` is the most natural way to do this. In that case, there are really just two different, but similar, modules with different APIs.
* we want to add a new revamped API for an existing module and it's just too cumbersome to add it to the existing package, so putting it in `v2` is cleaner for users.
In this case, care should be taken to not deprecate support for `v1` if it is actively used in immutable smart contracts.

#### Guidelines on unstable (alpha and beta) package versions

The following guidelines are recommended for marking packages as alpha or beta:

* marking something as `alpha` or `beta` should be a last resort and just putting something in the stable package (i.e. `v1` or `v2`) should be preferred
* a package *should* be marked as `alpha` *if and only if* there are active discussions to remove or significantly alter the package in the near future
* a package *should* be marked as `beta` *if and only if* there is an active discussion to significantly refactor/rework the functionality in the near future but not remove it
* modules *can and should* have types in both stable (i.e. `v1` or `v2`) and unstable (`alpha` or `beta`) packages.

*`alpha` and `beta` should not be used to avoid responsibility for maintaining compatibility.* Whenever code is released into the wild, especially on a blockchain, there is a high cost to changing things. In some cases, for instance with immutable smart contracts, a breaking change may be impossible to fix.

When marking something as `alpha` or `beta`, maintainers should ask the questions:

* what is the cost of asking others to change their code vs the benefit of us maintaining the optionality to change it?
* what is the plan for moving this to `v1` and how will that affect users?

`alpha` or `beta` should really be used to communicate "changes are planned".

As a case study, gRPC reflection is in the package `grpc.reflection.v1alpha`. It hasn't been changed since 2017 and it is now used in other widely used software like gRPCurl. Some folks probably use it in production services, and so if they actually went and changed the package to `grpc.reflection.v1`, some software would break and they probably don't want to do that... So now the `v1alpha` package is more or less the de-facto `v1`. Let's not do that.
The following are guidelines for working with non-stable packages:

* [Buf's recommended version suffix](https://buf.build/docs/lint-checkers#package_version_suffix) (ex. `v1alpha1`) *should* be used for non-stable packages
* non-stable packages should generally be excluded from breaking change detection
* immutable smart contract modules (i.e. CosmWasm) *should* block smart contracts/persistent scripts from interacting with `alpha`/`beta` packages

#### Omit v1 suffix

Instead of using [Buf's recommended version suffix](https://buf.build/docs/lint-checkers#package_version_suffix), we can omit `v1` for packages that don't actually have a second version. This allows for more concise names for common use cases like `cosmos.bank.Send`. Packages that do have a second or third version can indicate that with `.v2` or `.v3`.

### Package Naming

#### Adopt a short, unique top-level package name

Top-level packages should adopt a short name that is known to not collide with other names in common usage within the Cosmos ecosystem. In the near future, a registry should be created to reserve and index top-level package names used within the Cosmos ecosystem. Because the Cosmos SDK is intended to provide the top-level types for the Cosmos project, the top-level package name `cosmos` is recommended for usage within the Cosmos SDK instead of the longer `cosmos_sdk`. [ICS](https://github.com/cosmos/ics) specifications could consider a short top-level package like `ics23` based upon the standard number.

#### Limit sub-package depth

Sub-package depth should be increased with caution. Generally a single sub-package is needed for a module or a library. Even though `x` or `modules` is used in source code to denote modules, this is often unnecessary for .proto files, as modules are the primary thing sub-packages are used for. Only items which are known to be used infrequently should have deep sub-package depths.
For the Cosmos SDK, it is recommended that we simply write `cosmos.bank`, `cosmos.gov`, etc. rather than `cosmos.x.bank`. In practice, most non-module types can go straight in the `cosmos` package or we can introduce a `cosmos.base` package if needed. Note that this naming *will not* change go package names, i.e. the `cosmos.bank` protobuf package will still live in `x/bank`.

### Message Naming

Message type names should be as concise as possible without losing clarity. `sdk.Msg` types which are used in transactions will retain the `Msg` prefix as that provides helpful context.

### Service and RPC Naming

[ADR 021](/sdk/latest/reference/architecture/adr-021-protobuf-query-encoding) specifies that modules should implement a gRPC query service. We should consider the principle of conciseness for query service and RPC names as these may be called from persistent script modules such as CosmWasm. Also, users may use these query paths from tools like [gRPCurl](https://github.com/fullstorydev/grpcurl). As an example, we can shorten `/cosmos_sdk.x.bank.v1.QueryService/QueryBalance` to `/cosmos.bank.Query/Balance` without losing much useful information.

RPC request and response types *should* follow the `ServiceNameMethodNameRequest`/`ServiceNameMethodNameResponse` naming convention, e.g. for an RPC method named `Balance` on the `Query` service, the request and response types would be `QueryBalanceRequest` and `QueryBalanceResponse`. This will be more self-explanatory than `BalanceRequest` and `BalanceResponse`.

#### Use just `Query` for the query service

Instead of [Buf's default service suffix recommendation](https://github.com/cosmos/cosmos-sdk/pull/6033), we should simply use the shorter `Query` for query services. For other types of gRPC services, we should consider sticking with Buf's default recommendation.
#### Omit `Get` and `Query` from query service RPC names

`Get` and `Query` should be omitted from `Query` service RPC names because they are redundant in the fully-qualified name. For instance, `/cosmos.bank.Query/QueryBalance` just says `Query` twice without any new information.

## Future Improvements

A registry of top-level package names should be created to coordinate naming across the ecosystem, prevent collisions, and also help developers discover useful schemas. A simple starting point would be a git repository with community-based governance.

## Consequences

### Positive

* names will be more concise and easier to read and type
* all transactions using `Any` will be shorter (`_sdk.x` and `.v1` will be removed)
* `.proto` file imports will be more standard (without `"third_party/proto"` in the path)
* code generation will be easier for clients because .proto files will be in a single `proto/` directory which can be copied rather than scattered throughout the Cosmos SDK

### Negative

### Neutral

* `.proto` files will need to be reorganized and refactored
* some modules may need to be marked as alpha or beta

## References

# ADR 024: Coin Metadata

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-024-coin-metadata

## Changelog

* 05/19/2020: Initial draft

## Status

Proposed

## Context

Assets in the Cosmos SDK are represented via a `Coins` type that consists of an `amount` and a `denom`, where the `amount` can be any arbitrarily large or small value. In addition, the Cosmos SDK uses an account-based model where there are two types of primary accounts -- basic accounts and module accounts. All account types have a set of balances that are composed of `Coins`. The `x/bank` module keeps track of all balances for all accounts and also keeps track of the total supply of balances in an application.
With regards to a balance `amount`, the Cosmos SDK assumes a static and fixed unit of denomination, regardless of the denomination itself. In other words, clients and apps built atop a Cosmos-SDK-based chain may choose to define and use arbitrary units of denomination to provide a richer UX, however, by the time a tx or operation reaches the Cosmos SDK state machine, the `amount` is treated as a single unit. For example, for the Cosmos Hub (Gaia), clients assume 1 ATOM = 10^6 uatom, and so all txs and operations in the Cosmos SDK work off of units of 10^6.

This clearly provides a poor and limited UX especially as interoperability of networks increases and as a result the total amount of asset types increases. We propose to have `x/bank` additionally keep track of metadata per `denom` in order to help clients, wallet providers, and explorers improve their UX and remove the requirement for making any assumptions on the unit of denomination.

## Decision

The `x/bank` module will be updated to store and index metadata by `denom`, specifically the "base" or smallest unit -- the unit the Cosmos SDK state-machine works with.

Metadata may also include a non-zero length list of denominations. Each entry contains the name of the denomination `denom`, the exponent to the base and a list of aliases. An entry is to be interpreted as `1 denom = 10^exponent base_denom` (e.g. `1 ETH = 10^18 wei` and `1 uatom = 10^0 uatom`).

There are two denominations that are of high importance for clients: the `base`, which is the smallest possible unit and the `display`, which is the unit that is commonly referred to in human communication and on exchanges. The values in those fields link to an entry in the list of denominations. The list in `denom_units` and the `display` entry may be changed via governance.
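Applied mechanically, the `1 denom = 10^exponent base_denom` rule lets a client convert any unit to the base unit. The following Go sketch illustrates this; the type and function names are illustrative only, not the SDK's generated API:

```go
package main

import (
	"fmt"
	"math/big"
)

// DenomUnit mirrors a metadata entry described above:
// 1 denom = 10^exponent base_denom.
type DenomUnit struct {
	Denom    string
	Exponent uint32
}

// ToBase converts an integer amount expressed in the given unit
// into the base denomination by multiplying by 10^exponent.
func ToBase(amount int64, unit DenomUnit) *big.Int {
	pow := new(big.Int).Exp(big.NewInt(10), big.NewInt(int64(unit.Exponent)), nil)
	return new(big.Int).Mul(big.NewInt(amount), pow)
}

func main() {
	atom := DenomUnit{Denom: "atom", Exponent: 6}
	fmt.Println(ToBase(4, atom), "uatom") // 4 atom = 4000000 uatom
}
```

Fractional display amounts (e.g. `4.3atom`) additionally require decimal handling on the client side, which is one reason the metadata exposes exponents rather than pre-computed conversions.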
As a result, we can define the type as follows:

```protobuf expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
message DenomUnit {
  string denom = 1;
  uint32 exponent = 2;
  repeated string aliases = 3;
}

message Metadata {
  string description = 1;
  repeated DenomUnit denom_units = 2;
  string base = 3;
  string display = 4;
}
```

As an example, the ATOM's metadata can be defined as follows:

```json expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "name": "atom",
  "description": "The native staking token of the Cosmos Hub.",
  "denom_units": [
    {
      "denom": "uatom",
      "exponent": 0,
      "aliases": [
        "microatom"
      ]
    },
    {
      "denom": "matom",
      "exponent": 3,
      "aliases": [
        "milliatom"
      ]
    },
    {
      "denom": "atom",
      "exponent": 6
    }
  ],
  "base": "uatom",
  "display": "atom"
}
```

Given the above metadata, a client may infer the following things:

* 4.3atom = 4.3 \* (10^6) = 4,300,000uatom
* The string "atom" can be used as a display name in a list of tokens.
* The balance 4300000 can be displayed as 4,300,000uatom or 4,300matom or 4.3atom. The `display` denomination 4.3atom is a good default if the authors of the client don't make an explicit decision to choose a different representation.

A client should be able to query for metadata by denom both via the CLI and REST interfaces. In addition, we will add handlers to these interfaces to convert from any unit to another given unit, as the base framework for this already exists in the Cosmos SDK.

Finally, we need to ensure metadata exists in the `GenesisState` of the `x/bank` module which is also indexed by the base `denom`.
```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type GenesisState struct {
  SendEnabled   bool       `json:"send_enabled" yaml:"send_enabled"`
  Balances      []Balance  `json:"balances" yaml:"balances"`
  Supply        sdk.Coins  `json:"supply" yaml:"supply"`
  DenomMetadata []Metadata `json:"denom_metadata" yaml:"denom_metadata"`
}
```

## Future Work

In order for clients to avoid having to convert assets to the base denomination -- either manually or via an endpoint, we may consider supporting automatic conversion of a given unit input.

## Consequences

### Positive

* Provides clients, wallet providers and block explorers with additional data on asset denomination to improve UX and remove any need to make assumptions on denomination units.

### Negative

* A small amount of required additional storage in the `x/bank` module. The amount of additional storage should be minimal as the amount of total assets should not be large.

### Neutral

## References

# ADR 027: Deterministic Protobuf Serialization

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-027-deterministic-protobuf-serialization

## Changelog

* 2020-08-07: Initial Draft
* 2020-09-01: Further clarify rules

## Status

Proposed

## Abstract

Fully deterministic structure serialization, which works across many languages and clients, is needed when signing messages. We need to be sure that whenever we serialize a data structure, no matter in which supported language, the raw bytes will stay the same. [Protobuf](https://developers.google.com/protocol-buffers/docs/proto3) serialization is not bijective (i.e. there exists a practically unlimited number of valid binary representations for a given protobuf document)1.

This document describes a deterministic serialization scheme for a subset of protobuf documents, that covers this use case but can be reused in other cases as well.
## Context

For signature verification in Cosmos SDK, the signer and verifier need to agree on the same serialization of a `SignDoc` as defined in [ADR-020](/sdk/v0.50/build/architecture/adr-020-protobuf-transaction-encoding) without transmitting the serialization.

Currently, for block signatures we are using a workaround: we create a new [TxRaw](https://github.com/cosmos/cosmos-sdk/blob/9e85e81e0e8140067dd893421290c191529c148c/proto/cosmos/tx/v1beta1/tx.proto#L30) instance (as defined in [adr-020-protobuf-transaction-encoding](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-020-protobuf-transaction-encoding.md#transactions)) by converting all [Tx](https://github.com/cosmos/cosmos-sdk/blob/9e85e81e0e8140067dd893421290c191529c148c/proto/cosmos/tx/v1beta1/tx.proto#L13) fields to bytes on the client side. This adds an additional manual step when sending and signing transactions.

## Decision

The following encoding scheme is to be used by other ADRs, and in particular for `SignDoc` serialization.

## Specification

### Scope

This ADR defines a protobuf3 serializer. The output is a valid protobuf serialization, such that every protobuf parser can parse it.

No maps are supported in version 1 due to the complexity of defining a deterministic serialization. This might change in the future. Implementations must reject documents containing maps as invalid input.

### Background - Protobuf3 Encoding

Most numeric types in protobuf3 are encoded as [varints](https://developers.google.com/protocol-buffers/docs/encoding#varints). Varints are at most 10 bytes, and since each varint byte has 7 bits of data, varints are a representation of `uint70` (70-bit unsigned integer). When encoding, numeric values are cast from their base type to `uint70`, and when decoding, the parsed `uint70` is cast to the appropriate numeric type.

The maximum valid value for a varint that complies with protobuf3 is `FF FF FF FF FF FF FF FF FF 7F` (i.e. `2**70 - 1`).
If the field type is `{,u,s}int64`, the highest 6 bits of the 70 are dropped during decoding, introducing 6 bits of malleability. If the field type is `{,u,s}int32`, the highest 38 bits of the 70 are dropped during decoding, introducing 38 bits of malleability.

Among other sources of non-determinism, this ADR eliminates the possibility of encoding malleability.

### Serialization rules

The serialization is based on the [protobuf3 encoding](https://developers.google.com/protocol-buffers/docs/encoding) with the following additions:

1. Fields must be serialized only once in ascending order
2. Extra fields or any extra data must not be added
3. [Default values](https://developers.google.com/protocol-buffers/docs/proto3#default) must be omitted
4. `repeated` fields of scalar numeric types must use [packed encoding](https://developers.google.com/protocol-buffers/docs/encoding#packed)
5. Varint encoding must not be longer than needed:
   * No trailing zero bytes (in little endian, i.e. no leading zeroes in big endian). Per rule 3 above, the default value of `0` must be omitted, so this rule does not apply in such cases.
   * The maximum value for a varint must be `FF FF FF FF FF FF FF FF FF 01`. In other words, when decoded, the highest 6 bits of the 70-bit unsigned integer must be `0`. (10-byte varints are 10 groups of 7 bits, i.e. 70 bits, of which only the lowest 70-6=64 are useful.)
   * The maximum value for 32-bit values in varint encoding must be `FF FF FF FF 0F` with one exception (below). In other words, when decoded, the highest 38 bits of the 70-bit unsigned integer must be `0`.
   * The one exception to the above is *negative* `int32`, which must be encoded using the full 10 bytes for sign extension2.
   * The maximum value for Boolean values in varint encoding must be `01` (i.e. it must be `0` or `1`). Per rule 3 above, the default value of `0` must be omitted, so if a Boolean is included it must have a value of `1`.

While rules 1 and 2
should be fairly straightforward and describe the default behavior of all protobuf encoders the author is aware of, the 3rd rule is more interesting. After a protobuf3 deserialization you cannot differentiate between unset fields and fields set to the default value3. At serialization level however, it is possible to set the fields with an empty value or omit them entirely. This is a significant difference to e.g. JSON where a property can be empty (`""`, `0`), `null` or undefined, leading to 3 different documents.

Omitting fields set to default values is valid because the parser must assign the default value to fields missing in the serialization4. For scalar types, omitting defaults is required by the spec5. For `repeated` fields, not serializing them is the only way to express empty lists. Enums must have a first element of numeric value 0, which is the default6. And message fields default to unset7.

Omitting defaults allows for some amount of forward compatibility: users of newer versions of a protobuf schema produce the same serialization as users of older versions as long as newly added fields are not used (i.e. set to their default value).

### Implementation

There are three main implementation strategies, ordered from the least to the most custom development:

* **Use a protobuf serializer that follows the above rules by default.** E.g. [gogoproto](https://pkg.go.dev/github.com/cosmos/gogoproto/gogoproto) is known to be compliant in most cases, but not when certain annotations such as `nullable = false` are used. It might also be an option to configure an existing serializer accordingly.
* **Normalize default values before encoding them.** If your serializer follows rules 1 and 2 and allows you to explicitly unset fields for serialization, you can normalize default values to unset.
This can be done when working with [protobuf.js](https://www.npmjs.com/package/protobufjs):

```js theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
const bytes = SignDoc.encode({
  bodyBytes: body.length > 0 ? body : null, // normalize empty bytes to unset
  authInfoBytes: authInfo.length > 0 ? authInfo : null, // normalize empty bytes to unset
  chainId: chainId || null, // normalize "" to unset
  accountNumber: accountNumber || null, // normalize 0 to unset
  accountSequence: accountSequence || null, // normalize 0 to unset
}).finish();
```

* **Use a hand-written serializer for the types you need.** If none of the above ways works for you, you can write a serializer yourself. For SignDoc this would look something like this in Go, building on existing protobuf utilities:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
if !signDoc.body_bytes.empty() {
  buf.WriteUVarInt64(0xA) // wire type and field number for body_bytes
  buf.WriteUVarInt64(signDoc.body_bytes.length())
  buf.WriteBytes(signDoc.body_bytes)
}

if !signDoc.auth_info.empty() {
  buf.WriteUVarInt64(0x12) // wire type and field number for auth_info
  buf.WriteUVarInt64(signDoc.auth_info.length())
  buf.WriteBytes(signDoc.auth_info)
}

if !signDoc.chain_id.empty() {
  buf.WriteUVarInt64(0x1a) // wire type and field number for chain_id
  buf.WriteUVarInt64(signDoc.chain_id.length())
  buf.WriteBytes(signDoc.chain_id)
}

if signDoc.account_number != 0 {
  buf.WriteUVarInt64(0x20) // wire type and field number for account_number
  buf.WriteUVarInt(signDoc.account_number)
}

if signDoc.account_sequence != 0 {
  buf.WriteUVarInt64(0x28) // wire type and field number for account_sequence
  buf.WriteUVarInt(signDoc.account_sequence)
}
```

### Test vectors

Given the protobuf definition `Article.proto`

```protobuf expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
syntax = "proto3";
package blog;

enum Type
{
  UNSPECIFIED = 0;
  IMAGES = 1;
  NEWS = 2;
};

enum Review {
  UNSPECIFIED = 0;
  ACCEPTED = 1;
  REJECTED = 2;
};

message Article {
  string title = 1;
  string description = 2;
  uint64 created = 3;
  uint64 updated = 4;
  bool public = 5;
  bool promoted = 6;
  Type type = 7;
  Review review = 8;
  repeated string comments = 9;
  repeated string backlinks = 10;
};
```

serializing the values

```yaml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
title: "The world needs change 🌳"
description: ""
created: 1596806111080
updated: 0
public: true
promoted: false
type: Type.NEWS
review: Review.UNSPECIFIED
comments: ["Nice one", "Thank you"]
backlinks: []
```

must result in the serialization

```text theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
0a1b54686520776f726c64206e65656473206368616e676520f09f8cb318e8bebec8bc2e280138024a084e696365206f6e654a095468616e6b20796f75
```

When inspecting the serialized document, you see that every second field is omitted:

```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
$ echo 0a1b54686520776f726c64206e65656473206368616e676520f09f8cb318e8bebec8bc2e280138024a084e696365206f6e654a095468616e6b20796f75 | xxd -r -p | protoc --decode_raw
1: "The world needs change \360\237\214\263"
3: 1596806111080
5: 1
7: 2
9: "Nice one"
9: "Thank you"
```

## Consequences

Having such an encoding available allows us to get deterministic serialization for all protobuf documents we need in the context of Cosmos SDK signing.

### Positive

* Well defined rules that can be verified independently of a reference implementation
* Simple enough to keep the barrier to implement transaction signing low
* It allows us to continue to use 0 and other empty values in SignDoc, avoiding the need to work around 0 sequences. This does not imply the change from [Link](https://github.com/cosmos/cosmos-sdk/pull/6949) should not be merged, but it is no longer as important.
### Negative

* When implementing transaction signing, the encoding rules above must be understood and implemented.
* The need for rule number 3 adds some complexity to implementations.
* Some data structures may require custom code for serialization. Thus the code is not very portable - it will require additional work for each client implementing serialization to properly handle custom data structures.

### Neutral

### Usage in Cosmos SDK

For the reasons mentioned above ("Negative" section) we prefer to keep workarounds for shared data structures. Example: the aforementioned `TxRaw` is using raw bytes as a workaround. This allows clients to use any valid Protobuf library without needing to implement a custom serializer that adheres to this standard (and the related risk of bugs).

## References

* 1 *When a message is serialized, there is no guaranteed order for how its known or unknown fields should be written. Serialization order is an implementation detail and the details of any particular implementation may change in the future. Therefore, protocol buffer parsers must be able to parse fields in any order.* from [Link](https://developers.google.com/protocol-buffers/docs/encoding#order)
* 2 [Link](https://developers.google.com/protocol-buffers/docs/encoding#signed_integers)
* 3 *Note that for scalar message fields, once a message is parsed there's no way of telling whether a field was explicitly set to the default value (for example whether a boolean was set to false) or just not set at all: you should bear this in mind when defining your message types.
For example, don't have a boolean that switches on some behavior when set to false if you don't want that behavior to also happen by default.* from [Link](https://developers.google.com/protocol-buffers/docs/proto3#default)
* 4 *When a message is parsed, if the encoded message does not contain a particular singular element, the corresponding field in the parsed object is set to the default value for that field.* from [Link](https://developers.google.com/protocol-buffers/docs/proto3#default)
* 5 *Also note that if a scalar message field is set to its default, the value will not be serialized on the wire.* from [Link](https://developers.google.com/protocol-buffers/docs/proto3#default)
* 6 *For enums, the default value is the first defined enum value, which must be 0.* from [Link](https://developers.google.com/protocol-buffers/docs/proto3#default)
* 7 *For message fields, the field is not set. Its exact value is language-dependent.* from [Link](https://developers.google.com/protocol-buffers/docs/proto3#default)
* Encoding rules and parts of the reasoning taken from [canonical-proto3 by Aaron Craelius](https://github.com/regen-network/canonical-proto3)

# ADR 028: Public Key Addresses

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-028-public-key-addresses

## Changelog

* 2020/08/18: Initial version
* 2021/01/15: Analysis and algorithm update

## Status

Proposed

## Abstract

This ADR defines an address format for all addressable Cosmos SDK accounts. That includes: new public key algorithms, multisig public keys, and module accounts.

## Context

Issue [#3685](https://github.com/cosmos/cosmos-sdk/issues/3685) identified that public key address spaces are currently overlapping. We confirmed that it significantly decreases the security of the Cosmos SDK.

### Problem

An attacker can control an input for an address generation function.
This leads to a birthday attack, which significantly decreases the security space. To overcome this, we need to separate the inputs for different kinds of account types: a security break of one account type shouldn't impact the security of other account types.

### Initial proposals

One initial proposal was extending the address length and adding prefixes for different types of addresses.

@ethanfrey explained an alternate approach originally used in [Link](https://github.com/iov-one/weave):

> I spent quite a bit of time thinking about this issue while building weave... The other cosmos Sdk. Basically I define a condition to be a type and format as human readable string with some binary data appended. This condition is hashed into an Address (again at 20 bytes). The use of this prefix makes it impossible to find a preimage for a given address with a different condition (eg ed25519 vs secp256k1).
> This is explained in depth here [Link](https://weave.readthedocs.io/en/latest/design/permissions.html)
> And the code is here, look mainly at the top where we process conditions. [Link](https://github.com/iov-one/weave/blob/master/conditions.go)

And explained how this approach should be sufficiently collision resistant:

> Yeah, AFAIK, 20 bytes should be collision resistance when the preimages are unique and not malleable. A space of 2^160 would expect some collision to be likely around 2^80 elements (birthday paradox). And if you want to find a collision for some existing element in the database, it is still 2^160. 2^80 only is if all these elements are written to state.
> The good example you brought up was eg. a public key bytes being a valid public key on two algorithms supported by the codec. Meaning if either was broken, you would break accounts even if they were secured with the safer variant. This is only an issue when no differentiating type info is present in the preimage (before hashing into an address).
> I would like to hear an argument if the 20 bytes space is an actual issue for security, as I would be happy to increase my address sizes in weave. I just figured cosmos and ethereum and bitcoin all use 20 bytes, it should be good enough. And the arguments above which made me feel it was secure. But I have not done a deeper analysis.

This led to the first proposal (which we proved to be not good enough): we concatenate a key type with a public key, hash it and take the first 20 bytes of that hash, summarized as `sha256(keyTypePrefix || keybytes)[:20]`.

### Review and Discussions

In [#5694](https://github.com/cosmos/cosmos-sdk/issues/5694) we discussed various solutions. We agreed that 20 bytes is not future proof, and that extending the address length is the only way to allow addresses of different types, various signature types, etc. This disqualifies the initial proposal.

In the issue we discussed various modifications:

* Choice of the hash function.
* Move the prefix out of the hash function: `keyTypePrefix + sha256(keybytes)[:20]` \[post-hash-prefix-proposal].
* Use double hashing: `sha256(keyTypePrefix + sha256(keybytes)[:20])`.
* Increase the keybytes hash slice from 20 bytes to 32 or 40 bytes. We concluded that 32 bytes, produced by a good hash function, is future secure.

### Requirements

* Support currently used tools - we don't want to break an ecosystem, or add a long adaptation period. Ref: [Link](https://github.com/cosmos/cosmos-sdk/issues/8041)
* Try to keep the address length small - addresses are widely used in state, both as part of a key and object value.

### Scope

This ADR only defines a process for the generation of address bytes. For end-user interactions with addresses (through the API, or CLI, etc.), we still use bech32 to format these addresses as strings. This ADR doesn't change that. Using Bech32 for string encoding gives us support for checksum error codes and handling of user typos.
## Decision

We define the following account types, for which we define the address function:

1. simple accounts: represented by a regular public key (ie: secp256k1, sr25519)
2. naive multisig: accounts composed by other addressable objects (ie: naive multisig)
3. composed accounts with a native address key (ie: bls, group module accounts)
4. module accounts: basically any accounts which cannot sign transactions and which are managed internally by modules

### Legacy Public Key Addresses Don't Change

Currently (Jan 2021), the only officially supported Cosmos SDK user accounts are `secp256k1` basic accounts and legacy amino multisig. They are used in existing Cosmos SDK zones. They use the following address formats:

* secp256k1: `ripemd160(sha256(pk_bytes))[:20]`
* legacy amino multisig: `sha256(aminoCdc.Marshal(pk))[:20]`

We don't want to change existing addresses. So the addresses for these two key types will remain the same.

The current multisig public keys use amino serialization to generate the address. We will retain those public keys and their address formatting, and call them "legacy amino" multisig public keys in protobuf. We will also create multisig public keys without amino addresses to be described below.

### Hash Function Choice

As in other parts of the Cosmos SDK, we will use `sha256`.

### Basic Address

We start with defining a base algorithm for generating addresses which we will call `Hash`. Notably, it's used for accounts represented by a single key pair. For each public key schema we have to have an associated `typ` string, explained in the next section. `hash` is the cryptographic hash function defined in the previous section.

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
const A_LEN = 32

func Hash(typ string, key []byte) []byte {
    return hash(hash(typ) + key)[:A_LEN]
}
```

The `+` is bytes concatenation, which doesn't use any separator.
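Using `sha256` per the hash function choice above, the definition can be sketched as runnable Go (the packaging and names here are illustrative, not the SDK's `address` package):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

const aLen = 32 // A_LEN from the definition above

// Hash implements hash(hash(typ) + key)[:A_LEN] with plain byte
// concatenation and sha256 as the chosen hash function.
func Hash(typ string, key []byte) []byte {
	th := sha256.Sum256([]byte(typ))
	sum := sha256.Sum256(append(th[:], key...))
	return sum[:aLen]
}

func main() {
	pubkey := []byte{0x01, 0x02, 0x03} // placeholder key bytes
	fmt.Printf("%x\n", Hash("bls", pubkey))
}
```

Since sha256 already produces exactly 32 bytes, the `[:A_LEN]` slice is a no-op here; it simply keeps the definition explicit about the address length.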
This algorithm is the outcome of a consultation session with a professional cryptographer. Motivation: this algorithm keeps the address relatively small (length of the `typ` doesn't impact the length of the final address) and it's more secure than \[post-hash-prefix-proposal] (which uses the first 20 bytes of a pubkey hash, significantly reducing the address space). Moreover the cryptographer motivated the choice of adding `typ` in the hash to protect against a switch table attack.

`address.Hash` is a low level function to generate *base* addresses for new key types. Example:

* BLS: `address.Hash("bls", pubkey)`

### Composed Addresses

For simple composed accounts (like a new naive multisig) we generalize the `address.Hash`. The address is constructed by recursively creating addresses for the sub accounts, sorting the addresses and composing them into a single address. It ensures that the ordering of keys doesn't impact the resulting address.

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// We don't need a PubKey interface - we need anything which is addressable.
type Addressable interface {
    Address() []byte
}

func Composed(typ string, subaccounts []Addressable) []byte {
    addresses = map(subaccounts, \a -> LengthPrefix(a.Address()))
    addresses = sort(addresses)
    return address.Hash(typ, addresses[0] + ... + addresses[n])
}
```

The `typ` parameter should be a schema descriptor, containing all significant attributes with deterministic serialization (eg: utf8 string).

`LengthPrefix` is a function which prepends 1 byte to the address. The value of that byte is the length of the address bytes before prepending. The address must be at most 255 bytes long.
We are using `LengthPrefix` to eliminate conflicts - it assures that for 2 lists of addresses, `as = {a1, a2, ..., an}` and `bs = {b1, b2, ..., bm}`, such that every `ai` and `bi` is at most 255 bytes long, `concatenate(map(as, (a) => LengthPrefix(a))) = concatenate(map(bs, (b) => LengthPrefix(b)))` if and only if `as = bs`.

Implementation Tip: account implementations should cache addresses.

#### Multisig Addresses

For new multisig public keys, we define the `typ` parameter not based on any encoding scheme (amino or protobuf). This avoids issues with non-determinism in the encoding scheme.

Example:

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package cosmos.crypto.multisig;

message PubKey {
  uint32 threshold = 1;
  repeated google.protobuf.Any pubkeys = 2;
}
```

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (multisig PubKey) Address() []byte {
  // first gather all nested pub keys
  var keys []address.Addressable // cryptotypes.PubKey implements Addressable
  for _, key := range multisig.Pubkeys {
    keys = append(keys, key.GetCachedValue().(cryptotypes.PubKey))
  }

  // form the type from the message name (cosmos.crypto.multisig.PubKey) and the threshold joined together
  prefix := fmt.Sprintf("%s/%d", proto.MessageName(multisig), multisig.Threshold)

  // use the Composed function defined above
  return address.Composed(prefix, keys)
}
```

### Derived Addresses

We must be able to cryptographically derive one address from another one. The derivation process must guarantee hash properties, hence we use the already defined `Hash` function:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func Derive(address, derivationKey []byte) []byte {
    return Hash(address, derivationKey)
}
```

### Module Account Addresses

A module account will have `"module"` type. Module accounts can have sub accounts.
The submodule account will be created based on module name, and sequence of derivation keys. Typically, the first derivation key should be a class of the derived accounts. The derivation process has a defined order: module name, submodule key, subsubmodule key...

An example module account is created using:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
address.Module(moduleName, key)
```

An example sub-module account is created using:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
groupPolicyAddresses := []byte{1}
address.Module(moduleName, groupPolicyAddresses, policyID)
```

The `address.Module` function is using `address.Hash` with `"module"` as the type argument, and the byte representation of the module name concatenated with the submodule key. The last two components must be uniquely separated to avoid potential clashes (example: modulename="ab" & submodulekey="bc" will have the same derivation key as modulename="a" & submodulekey="bbc"). We use a null byte (`'\x00'`) to separate the module name from the submodule key. This works, because the null byte is not part of a valid module name. Finally, the sub-submodule accounts are created by applying the `Derive` function recursively.

We could also use the `Derive` function in the first step (rather than concatenating the module name with the zero byte and the submodule key). We decided to do concatenation to avoid one level of derivation and speed up computation.

For backward compatibility with the existing `authtypes.NewModuleAddress`, we add a special case in the `Module` function: when no derivation key is provided, we fall back to the "legacy" implementation.
```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func Module(moduleName string, derivationKeys ...[]byte) []byte {
  if len(derivationKeys) == 0 {
    return authtypes.NewModuleAddress(moduleName) // legacy case
  }
  submoduleAddress := Hash("module", []byte(moduleName) + 0 + derivationKeys[0])
  return fold((a, k) => Derive(a, k), derivationKeys[1:], submoduleAddress)
}
```

**Example 1** A lending BTC pool address would be:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
btcPool := address.Module("lending", btc.Address())
```

If we want to create an address for a module account depending on more than one key, we can concatenate them:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
btcAtomAMM := address.Module("amm", btc.Address() + atom.Address())
```

**Example 2** A smart-contract address could be constructed by:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
smartContractAddr = Module("mySmartContractVM", smartContractsNamespace, smartContractKey)

// which is equivalent to:
smartContractAddr = Derive(Module("mySmartContractVM", smartContractsNamespace), smartContractKey)
```

### Schema Types

The `typ` parameter used in the `Hash` function SHOULD be unique for each account type. Since all Cosmos SDK account types are serialized in the state, we propose to use the protobuf message name string.

Example: all public key types have a unique protobuf message type similar to:

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package cosmos.crypto.sr25519;

message PubKey {
  bytes key = 1;
}
```

All protobuf messages have unique fully qualified names, in this example `cosmos.crypto.sr25519.PubKey`. These names are derived directly from .proto files in a standardized way and used in other places such as the type URL in `Any`s.
We can easily obtain the name using `proto.MessageName(msg)`.

## Consequences

### Backwards Compatibility

This ADR is compatible with what was committed and directly supported in the Cosmos SDK repository.

### Positive

* a simple algorithm for generating addresses for new public keys, complex accounts and modules
* the algorithm generalizes *native composed keys*
* increased security and collision resistance of addresses
* the approach is extensible for future use-cases - one can use other address types, as long as they don't conflict with the address length specified here (20 or 32 bytes)
* support for new account types

### Negative

* addresses do not communicate key type, a prefixed approach would have done this
* addresses are 60% longer and will consume more storage space
* requires a refactor of KVStore store keys to handle variable length addresses

### Neutral

* protobuf message names are used as key type prefixes

## Further Discussions

Some accounts can have a fixed name or may be constructed in other ways (eg: modules). We were discussing an idea of an account with a predefined name (eg: `me.regen`), which could be used by institutions. Without going into details, these kinds of addresses are compatible with the hash-based addresses described here as long as they don't have the same length. More specifically, any special account address must not have a length equal to 20 or 32 bytes.

## Appendix: Consulting session

At the end of December 2020 we had a session with [Alan Szepieniec](https://scholar.google.be/citations?user=4LyZn8oAAAAJ\&hl=en) to review the approach presented above.
Alan's general observations:

* we don’t need 2-preimage resistance
* we need a 32-byte address space for collision resistance
* when an attacker can control the input for an object with an address, then we have a problem with a birthday attack
* there is an issue with hashing for smart contracts
* SHA-2 mining can be used to break address pre-image resistance

Hashing algorithm:

* any attack breaking blake3 will break blake2
* Alan is pretty confident about the current security analysis of the blake hash algorithm. It was a finalist, and the author is well known in security analysis.

Algorithm:

* Alan recommends hashing the prefix: `address(pub_key) = hash(hash(key_type) + pub_key)[:32]`, main benefits:
  * we are free to use arbitrarily long prefix names
  * we still don’t risk collisions
  * switch tables
* discussion about penalization -> about adding prefix post hash
* Aaron asked about post-hash prefixes (`address(pub_key) = key_type + hash(pub_key)`) and differences. Alan noted that this approach has a longer address space and is stronger.

Algorithm for complex / composed keys:

* merging tree-like addresses with the same algorithm is fine

Module addresses: should module addresses have a different size to differentiate them?

* we will need to set a pre-image prefix for module addresses to keep them in the 32-byte space: `hash(hash('module') + module_key)`
* Aaron's observation: we already need to deal with variable length (to not break secp256k1 keys).

Discussion about arithmetic hash functions for ZKP:

* Poseidon / Rescue
* Problem: much bigger risk because we don’t have many techniques or much history of cryptanalysis of arithmetic constructions. It’s still new ground and an area of active research.

Post-quantum signature size:

* Alan's suggestion: Falcon - the speed / size ratio is very good.
* Aaron: should we think about it? Alan: based on early extrapolation, this will be able to break EC cryptography around 2050. But that’s a lot of uncertainty.
But there is magic happening with recursion / linking / simulation, and that can speed up the progress.

Other ideas:

* Let’s say we use the same key and two different address algorithms for two different use cases. Is it still safe? Alan: if we want to hide the public key (which is not our use case), then it’s less secure, but there are fixes.

### References

* [Notes](https://hackmd.io/_NGWI4xZSbKzj1BkCqyZMw)

# ADR 029: Fee Grant Module

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-029-fee-grant-module

## Changelog

* 2020/08/18: Initial Draft
* 2021/05/05: Removed height based expiration support and simplified naming.

## Status

Accepted

## Context

In order to make blockchain transactions, the signing account must possess a sufficient balance of the right denomination in order to pay fees. There are classes of transactions where needing to maintain a wallet with sufficient fees is a barrier to adoption.

For instance, when proper permissions are set up, someone may temporarily delegate the ability to vote on proposals to a "burner" account that is stored on a mobile phone with only minimal security. Other use cases include workers tracking items in a supply chain or farmers submitting field data for analytics or compliance purposes.

For all of these use cases, UX would be significantly enhanced by obviating the need for these accounts to always maintain the appropriate fee balance. This is especially true if we wanted to achieve enterprise adoption for something like supply chain tracking.

While one solution would be to have a service that fills up these accounts automatically with the appropriate fees, a better UX would be provided by allowing these accounts to pull from a common fee pool account with proper spending limits.
A single pool would reduce the churn of making lots of small "fill up" transactions and would also more effectively leverage the resources of the organization setting up the pool.

## Decision

As a solution we propose a module, `x/feegrant`, which allows one account, the "granter", to grant another account, the "grantee", an allowance to spend the granter's account balance for fees within certain well-defined limits.

Fee allowances are defined by the extensible `FeeAllowanceI` interface:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type FeeAllowanceI interface {
  // Accept can use fee payment requested as well as timestamp of the current block
  // to determine whether or not to process this. This is checked in
  // Keeper.UseGrantedFees and the return values should match how it is handled there.
  //
  // If it returns an error, the fee payment is rejected, otherwise it is accepted.
  // The FeeAllowance implementation is expected to update its internal state
  // and will be saved again after an acceptance.
  //
  // If remove is true (regardless of the error), the FeeAllowance will be deleted from storage
  // (eg. when it is used up). (See call to RevokeFeeAllowance in Keeper.UseGrantedFees)
  Accept(ctx sdk.Context, fee sdk.Coins, msgs []sdk.Msg) (remove bool, err error)

  // ValidateBasic should evaluate this FeeAllowance for internal consistency.
  // Don't allow negative amounts, or negative periods for example.
  ValidateBasic() error
}
```

Two basic fee allowance types, `BasicAllowance` and `PeriodicAllowance`, are defined to support known use cases:

```protobuf expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// BasicAllowance implements FeeAllowanceI with a one-time grant of tokens
// that optionally expires. The delegatee can use up to SpendLimit to cover fees.
message BasicAllowance {
  // spend_limit specifies the maximum amount of tokens that can be spent
  // by this allowance and will be updated as tokens are spent. If it is
  // empty, there is no spend limit and any amount of coins can be spent.
  repeated cosmos_sdk.v1.Coin spend_limit = 1;

  // expiration specifies an optional time when this allowance expires
  google.protobuf.Timestamp expiration = 2;
}

// PeriodicAllowance extends FeeAllowanceI to allow for both a maximum cap,
// as well as a limit per time period.
message PeriodicAllowance {
  BasicAllowance basic = 1;

  // period specifies the time duration in which period_spend_limit coins can
  // be spent before that allowance is reset
  google.protobuf.Duration period = 2;

  // period_spend_limit specifies the maximum number of coins that can be spent
  // in the period
  repeated cosmos_sdk.v1.Coin period_spend_limit = 3;

  // period_can_spend is the number of coins left to be spent before the period_reset time
  repeated cosmos_sdk.v1.Coin period_can_spend = 4;

  // period_reset is the time at which this period resets and a new one begins,
  // it is calculated from the start time of the first transaction after the
  // last period ended
  google.protobuf.Timestamp period_reset = 5;
}
```

Allowances can be granted and revoked using `MsgGrantAllowance` and `MsgRevokeAllowance`:

```protobuf expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// MsgGrantAllowance adds permission for Grantee to spend up to Allowance
// of fees from the account of Granter.
message MsgGrantAllowance {
  string granter = 1;
  string grantee = 2;
  google.protobuf.Any allowance = 3;
}

// MsgRevokeAllowance removes any existing FeeAllowance from Granter to Grantee.
message MsgRevokeAllowance {
  string granter = 1;
  string grantee = 2;
}
```

In order to use allowances in transactions, we add a new field `granter` to the transaction `Fee` type:

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package cosmos.tx.v1beta1;

message Fee {
  repeated cosmos.base.v1beta1.Coin amount = 1;
  uint64 gas_limit = 2;
  string payer = 3;
  string granter = 4;
}
```

`granter` must either be left empty or must correspond to an account which has granted a fee allowance to the fee payer (either the first signer or the value of the `payer` field).

A new `AnteDecorator` named `DeductGrantedFeeDecorator` will be created in order to process transactions with `fee_payer` set and correctly deduct fees based on fee allowances.

## Consequences

### Positive

* improved UX for use cases where it is cumbersome to maintain an account balance just for fees

### Negative

### Neutral

* a new field must be added to the transaction `Fee` message and a new `AnteDecorator` must be created to use it

## References

* Blog article describing initial work: [Link](https://medium.com/regen-network/hacking-the-cosmos-cosmwasm-and-key-management-a08b9f561d1b)
* Initial public specification: [Link](https://gist.github.com/aaronc/b60628017352df5983791cad30babe56)
* Original subkeys proposal from B-harvest which influenced this design: [Link](https://github.com/cosmos/cosmos-sdk/issues/4480)

# ADR 030: Authorization Module

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-030-authz-module

## Changelog

* 2019-11-06: Initial Draft
* 2020-10-12: Updated Draft
* 2020-11-13: Accepted
* 2020-05-06: proto API updates, use `sdk.Msg` instead of `sdk.ServiceMsg` (the latter concept was removed from Cosmos SDK)
* 2022-04-20: Updated the `SendAuthorization` proto docs to clarify the `SpendLimit` is a required field.
(Generic authorization can be used with the bank msg type url to create a limitless bank authorization.)

## Status

Accepted

## Abstract

This ADR defines the `x/authz` module which allows accounts to grant authorizations to perform actions on behalf of that account to other accounts.

## Context

The concrete use cases which motivated this module include:

* the desire to delegate the ability to vote on proposals to other accounts besides the account to which one has delegated stake
* "sub-keys" functionality, as originally proposed in [#4480](https://github.com/cosmos/cosmos-sdk/issues/4480), which is a term used to describe the functionality provided by this module together with the `fee_grant` module from [ADR 029](/sdk/v0.50/build/architecture/adr-029-fee-grant-module) and the [group module](https://github.com/cosmos/cosmos-sdk/tree/main/x/group).

The "sub-keys" functionality roughly refers to the ability for one account to grant some subset of its capabilities to other accounts with possibly less robust, but easier to use, security measures. For instance, a master account representing an organization could grant the ability to spend small amounts of the organization's funds to individual employee accounts. Or an individual (or group) with a multisig wallet could grant the ability to vote on proposals to any one of the member keys.

The current implementation is based on work done by the [Gaian's team at Hackatom Berlin 2019](https://github.com/cosmos-gaians/cosmos-sdk/tree/hackatom/x/delegation).

## Decision

We will create a module named `authz` which provides functionality for granting arbitrary privileges from one account (the *granter*) to another account (the *grantee*). Authorizations must be granted for particular `Msg` service methods one by one, using an implementation of the `Authorization` interface.

### Types

Authorizations determine exactly what privileges are granted.
They are extensible and can be defined for any `Msg` service method, even outside of the module where the `Msg` method is defined. `Authorization`s reference `Msg`s using their TypeURL.

#### Authorization

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type Authorization interface {
  proto.Message

  // MsgTypeURL returns the fully-qualified Msg TypeURL (as described in ADR 020),
  // which will process and accept or reject a request.
  MsgTypeURL() string

  // Accept determines whether this grant permits the provided sdk.Msg to be performed, and if
  // so provides an upgraded authorization instance.
  Accept(ctx sdk.Context, msg sdk.Msg) (AcceptResponse, error)

  // ValidateBasic does a simple validation check that
  // doesn't require access to any other information.
  ValidateBasic() error
}

// AcceptResponse instruments the controller of an authz message if the request is accepted
// and if it should be updated or deleted.
type AcceptResponse struct {
  // If Accept=true, the controller can accept the authorization and handle the update.
  Accept bool
  // If Delete=true, the controller must delete the authorization object and release
  // storage resources.
  Delete bool
  // Controller, who is calling Authorization.Accept must check if `Updated != nil`. If yes,
  // it must use the updated version and handle the update on the storage level.
  Updated Authorization
}
```

For example, a `SendAuthorization` like this is defined for `MsgSend` that takes a `SpendLimit` and updates it down to zero:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type SendAuthorization struct {
  // SpendLimit specifies the maximum amount of tokens that can be spent
  // by this authorization and will be updated as tokens are spent. This field is required. (Generic authorization
  // can be used with bank msg type url to create a limitless bank authorization.)
  SpendLimit sdk.Coins
}

func (a SendAuthorization) MsgTypeURL() string {
  return sdk.MsgTypeURL(&MsgSend{})
}

func (a SendAuthorization) Accept(ctx sdk.Context, msg sdk.Msg) (authz.AcceptResponse, error) {
  mSend, ok := msg.(*MsgSend)
  if !ok {
    return authz.AcceptResponse{}, sdkerrors.ErrInvalidType.Wrap("type mismatch")
  }
  limitLeft, isNegative := a.SpendLimit.SafeSub(mSend.Amount)
  if isNegative {
    return authz.AcceptResponse{}, sdkerrors.ErrInsufficientFunds.Wrapf("requested amount is more than spend limit")
  }
  if limitLeft.IsZero() {
    return authz.AcceptResponse{Accept: true, Delete: true}, nil
  }
  return authz.AcceptResponse{Accept: true, Delete: false, Updated: &SendAuthorization{SpendLimit: limitLeft}}, nil
}
```

A different type of capability for `MsgSend` could be implemented using the `Authorization` interface with no need to change the underlying `bank` module.

##### Small notes on `AcceptResponse`

* The `AcceptResponse.Accept` field will be set to `true` if the authorization is accepted. However, if it is rejected, the function `Accept` will raise an error (without setting `AcceptResponse.Accept` to `false`).
* The `AcceptResponse.Updated` field will be set to a non-nil value only if there is a real change to the authorization. If the authorization remains the same (as is, for instance, always the case for a [`GenericAuthorization`](#genericauthorization)), the field will be `nil`.

### `Msg` Service

```protobuf expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
service Msg {
  // Grant grants the provided authorization to the grantee on the granter's
  // account with the provided expiration time.
  rpc Grant(MsgGrant) returns (MsgGrantResponse);

  // Exec attempts to execute the provided messages using
  // authorizations granted to the grantee. Each message should have only
  // one signer corresponding to the granter of the authorization.
  rpc Exec(MsgExec) returns (MsgExecResponse);

  // Revoke revokes any authorization corresponding to the provided method name on the
  // granter's account that has been granted to the grantee.
  rpc Revoke(MsgRevoke) returns (MsgRevokeResponse);
}

// Grant gives permissions to execute
// the provided method with expiration time.
message Grant {
  google.protobuf.Any authorization = 1 [(cosmos_proto.accepts_interface) = "cosmos.authz.v1beta1.Authorization"];
  google.protobuf.Timestamp expiration = 2 [(gogoproto.stdtime) = true, (gogoproto.nullable) = false];
}

message MsgGrant {
  string granter = 1;
  string grantee = 2;

  Grant grant = 3 [(gogoproto.nullable) = false];
}

message MsgExecResponse {
  cosmos.base.abci.v1beta1.Result result = 1;
}

message MsgExec {
  string grantee = 1;

  // Authorization Msg requests to execute. Each msg must implement Authorization interface
  repeated google.protobuf.Any msgs = 2 [(cosmos_proto.accepts_interface) = "cosmos.base.v1beta1.Msg"];
}
```

### Router Middleware

The `authz` `Keeper` will expose a `DispatchActions` method which allows other modules to send `Msg`s to the router based on `Authorization` grants:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type Keeper interface {
  // DispatchActions routes the provided msgs to their respective handlers if the grantee was granted an authorization
  // to send those messages by the first (and only) signer of each msg.
  DispatchActions(ctx sdk.Context, grantee sdk.AccAddress, msgs []sdk.Msg) sdk.Result
}
```

### CLI

#### `tx exec` Method

When a CLI user wants to run a transaction on behalf of another account using `MsgExec`, they can use the `exec` method.
For instance, `gaiacli tx gov vote 1 yes --from <mykey> --generate-only | gaiacli tx authz exec --send-as <granter> --from <mykey>` would send a transaction like this:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
MsgExec {
  Grantee: mykey,
  Msgs: []sdk.Msg{
    MsgVote {
      ProposalID: 1,
      Voter: cosmos3thsdgh983egh823,
      Option: Yes,
    },
  },
}
```

#### `tx grant <grantee> <authorization> --from <granter>`

This CLI command will send a `MsgGrant` transaction. `authorization` should be encoded as JSON on the CLI.

#### `tx revoke <grantee> <method-name> --from <granter>`

This CLI command will send a `MsgRevoke` transaction.

### Built-in Authorizations

#### `SendAuthorization`

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// SendAuthorization allows the grantee to spend up to spend_limit coins from
// the granter's account.
message SendAuthorization {
  repeated cosmos.base.v1beta1.Coin spend_limit = 1;
}
```

#### `GenericAuthorization`

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// GenericAuthorization gives the grantee unrestricted permissions to execute
// the provided method on behalf of the granter's account.
message GenericAuthorization {
  option (cosmos_proto.implements_interface) = "Authorization";

  // Msg, identified by its type URL, to grant unrestricted permissions to execute
  string msg = 1;
}
```

## Consequences

### Positive

* Users will be able to authorize arbitrary actions on behalf of their accounts to other users, improving key management for many use cases
* The solution is more generic than previously considered approaches, and the `Authorization` interface approach can be extended to cover other use cases by SDK users

### Negative

### Neutral

## References

* Initial Hackatom implementation: [Link](https://github.com/cosmos-gaians/cosmos-sdk/tree/hackatom/x/delegation)
* Post-Hackatom spec: [Link](https://gist.github.com/aaronc/b60628017352df5983791cad30babe56#delegation-module)
* B-Harvest subkeys spec: [Link](https://github.com/cosmos/cosmos-sdk/issues/4480)

# ADR 031: Protobuf Msg Services

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-031-msg-service

## Changelog

* 2020-10-05: Initial Draft
* 2021-04-21: Remove `ServiceMsg`s to follow Protobuf `Any`'s spec, see [#9063](https://github.com/cosmos/cosmos-sdk/issues/9063).

## Status

Accepted

## Abstract

We want to leverage protobuf `service` definitions for defining `Msg`s, which will give us significant developer UX improvements in terms of the code that is generated and the fact that return types will now be well defined.

## Context

Currently, `Msg` handlers in the Cosmos SDK do have return values that are placed in the `data` field of the response. These return values, however, are not specified anywhere except in the golang handler code.
In early conversations it was proposed that `Msg` return types be captured using a protobuf extension field, ex:

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package cosmos.gov;

message MsgSubmitProposal {
  option (cosmos_proto.msg_return) = "uint64";
  string delegator_address = 1;
  string validator_address = 2;
  repeated sdk.Coin amount = 3;
}
```

This was never adopted, however.

Having a well-specified return value for `Msg`s would improve client UX. For instance, in `x/gov`, `MsgSubmitProposal` returns the proposal ID as a big-endian `uint64`. This isn’t really documented anywhere and clients would need to know the internals of the Cosmos SDK to parse that value and return it to users.

Also, there may be cases where we want to use these return values programmatically. For instance, [Link](https://github.com/cosmos/cosmos-sdk/issues/7093) proposes a method for doing inter-module Ocaps using the `Msg` router. A well-defined return type would improve the developer UX for this approach.

In addition, handler registration of `Msg` types tends to add a bit of boilerplate on top of keepers and is usually done through manual type switches. This isn't necessarily bad, but it does add overhead to creating modules.

## Decision

We decide to use protobuf `service` definitions for defining `Msg`s as well as the code generated by them as a replacement for `Msg` handlers. Below we define how this will look for the `SubmitProposal` message from the `x/gov` module.
We start with a `Msg` `service` definition:

```protobuf expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package cosmos.gov;

service Msg {
  rpc SubmitProposal(MsgSubmitProposal) returns (MsgSubmitProposalResponse);
}

// Note that for backwards compatibility this uses MsgSubmitProposal as the request
// type instead of the more canonical MsgSubmitProposalRequest
message MsgSubmitProposal {
  google.protobuf.Any content = 1;
  string proposer = 2;
}

message MsgSubmitProposalResponse {
  uint64 proposal_id = 1;
}
```

While this is most commonly used for gRPC, overloading protobuf `service` definitions like this does not violate the intent of the [protobuf spec](https://developers.google.com/protocol-buffers/docs/proto3#services), which says:

> If you don’t want to use gRPC, it’s also possible to use protocol buffers with your own RPC implementation.

With this approach, we would get an auto-generated `MsgServer` interface:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package gov

type MsgServer interface {
  SubmitProposal(context.Context, *MsgSubmitProposal) (*MsgSubmitProposalResponse, error)
}
```

In addition to clearly specifying return types, this has the benefit of generating client and server code. On the server side, this is almost like an automatically generated keeper method and could maybe be used instead of keepers eventually (see [#7093](https://github.com/cosmos/cosmos-sdk/issues/7093)).

On the client side, developers could take advantage of this by creating RPC implementations that encapsulate transaction logic. Protobuf libraries that use asynchronous callbacks, like [protobuf.js](https://github.com/protobufjs/protobuf.js#using-services), could use this to register callbacks for specific messages even for transactions that include multiple `Msg`s.

Each `Msg` service method should have exactly one request parameter: its corresponding `Msg` type.
For example, the `Msg` service method `/cosmos.gov.v1beta1.Msg/SubmitProposal` above has exactly one request parameter, namely the `Msg` type `/cosmos.gov.v1beta1.MsgSubmitProposal`. It is important that the reader clearly understands the nomenclature difference between a `Msg` service (a Protobuf service) and a `Msg` type (a Protobuf message), and the differences in their fully-qualified names.

This convention has been decided over the more canonical `Msg...Request` names mainly for backwards compatibility, but also for better readability in `TxBody.messages` (see the [Encoding section](#encoding) below): transactions containing `/cosmos.gov.MsgSubmitProposal` read better than those containing `/cosmos.gov.v1beta1.MsgSubmitProposalRequest`.

One consequence of this convention is that each `Msg` type can be the request parameter of only one `Msg` service method. However, we consider this limitation a good practice in explicitness.

### Encoding

Encoding of transactions generated with `Msg` services does not differ from the current Protobuf transaction encoding as defined in [ADR-020](/sdk/v0.50/build/architecture/adr-020-protobuf-transaction-encoding). We are encoding `Msg` types (which are exactly `Msg` service methods' request parameters) as `Any` in `Tx`s, which involves packing the binary-encoded `Msg` with its type URL.

### Decoding

Since `Msg` types are packed into `Any`, decoding transaction messages is done by unpacking `Any`s into `Msg` types. For more information, please refer to [ADR-020](/sdk/v0.50/build/architecture/adr-020-protobuf-transaction-encoding#transactions).

### Routing

We propose to add a `msg_service_router` in BaseApp. This router is a key/value map which maps `Msg` types' `type_url`s to their corresponding `Msg` service method handler. Since there is a 1-to-1 mapping between `Msg` types and `Msg` service methods, the `msg_service_router` has exactly one entry per `Msg` service method.
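The shape of such a router can be illustrated with a minimal, hypothetical type-URL → handler map in plain Go. This is not BaseApp's actual `msg_service_router` (which works with gRPC service descriptors); all names here are invented for the sketch:

```go
package main

import (
	"errors"
	"fmt"
)

// handler is the shape of a generated Msg service method after wrapping:
// it takes the decoded request message and returns an opaque response.
type handler func(msg any) (any, error)

// msgServiceRouter maps a Msg type_url to exactly one service method handler.
type msgServiceRouter struct {
	routes map[string]handler
}

func newRouter() *msgServiceRouter {
	return &msgServiceRouter{routes: map[string]handler{}}
}

// register wires one Msg type to one handler; duplicates are rejected,
// mirroring the 1-to-1 mapping between Msg types and service methods.
func (r *msgServiceRouter) register(typeURL string, h handler) error {
	if _, ok := r.routes[typeURL]; ok {
		return fmt.Errorf("duplicate route for %s", typeURL)
	}
	r.routes[typeURL] = h
	return nil
}

// dispatch looks up the handler by the message's type_url and invokes it.
func (r *msgServiceRouter) dispatch(typeURL string, msg any) (any, error) {
	h, ok := r.routes[typeURL]
	if !ok {
		return nil, errors.New("unroutable message: " + typeURL)
	}
	return h(msg)
}

func main() {
	r := newRouter()
	_ = r.register("/cosmos.gov.v1beta1.MsgSubmitProposal", func(msg any) (any, error) {
		return uint64(1), nil // pretend the proposal got ID 1
	})
	res, err := r.dispatch("/cosmos.gov.v1beta1.MsgSubmitProposal", nil)
	fmt.Println(res, err) // 1 <nil>
}
```

Note how the response (`uint64(1)`) flows back through the router, which is exactly the well-defined return value the ADR argues for.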
When a transaction is processed by BaseApp (in CheckTx or in DeliverTx), its `TxBody.messages` are decoded as `Msg`s. Each `Msg`'s `type_url` is matched against an entry in the `msg_service_router`, and the respective `Msg` service method handler is called.

For backward compatibility, the old handlers are not removed yet. If BaseApp receives a legacy `Msg` with no corresponding entry in the `msg_service_router`, it will be routed via its legacy `Route()` method into the legacy handler.

### Module Configuration

In [ADR 021](/sdk/v0.50/build/architecture/adr-021-protobuf-query-encoding), we introduced a method `RegisterQueryService` to `AppModule` which allows modules to register gRPC queriers.

To register `Msg` services, we attempt a more extensible approach by converting `RegisterQueryService` to a more generic `RegisterServices` method:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type AppModule interface {
  RegisterServices(Configurator)
  ...
}

type Configurator interface {
  QueryServer() grpc.Server
  MsgServer() grpc.Server
}

// example module:
func (am AppModule) RegisterServices(cfg Configurator) {
  types.RegisterQueryServer(cfg.QueryServer(), keeper)
  types.RegisterMsgServer(cfg.MsgServer(), keeper)
}
```

The `RegisterServices` method and the `Configurator` interface are intended to evolve to satisfy the use cases discussed in [#7093](https://github.com/cosmos/cosmos-sdk/issues/7093) and [#7421](https://github.com/cosmos/cosmos-sdk/issues/7421).

When `Msg` services are registered, the framework *should* verify that all `Msg` types implement the `sdk.Msg` interface and throw an error during initialization rather than later when transactions are processed.
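The init-time verification suggested above can be sketched as follows. `sdkMsg` stands in for the real `sdk.Msg` interface and `registerMsgService` is a hypothetical helper; the point is simply that an interface check at registration fails fast, before any transaction is processed:

```go
package main

import "fmt"

// sdkMsg stands in for the sdk.Msg interface every request type must satisfy.
type sdkMsg interface {
	ValidateBasic() error
}

// registerMsgService fails at initialization if any request type does not
// implement sdkMsg, instead of failing later during transaction processing.
func registerMsgService(requests ...any) error {
	for _, req := range requests {
		if _, ok := req.(sdkMsg); !ok {
			return fmt.Errorf("%T does not implement sdk.Msg", req)
		}
	}
	return nil
}

// msgSubmitProposal is a toy request type that satisfies sdkMsg.
type msgSubmitProposal struct{}

func (msgSubmitProposal) ValidateBasic() error { return nil }

// notAMsg is a toy request type that does not satisfy sdkMsg.
type notAMsg struct{}

func main() {
	fmt.Println(registerMsgService(msgSubmitProposal{}) == nil) // true
	fmt.Println(registerMsgService(notAMsg{}) == nil)           // false
}
```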
### `Msg` Service Implementation

Just like query services, `Msg` service methods can retrieve the `sdk.Context` from the `context.Context` parameter using the `sdk.UnwrapSDKContext` method:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package gov

func (k Keeper) SubmitProposal(goCtx context.Context, params *types.MsgSubmitProposal) (*MsgSubmitProposalResponse, error) {
  ctx := sdk.UnwrapSDKContext(goCtx)
  ...
}
```

The `sdk.Context` should have an `EventManager` already attached by BaseApp's `msg_service_router`.

Separate handler definitions are no longer needed with this approach.

## Consequences

This design changes how a module's functionality is exposed and accessed. It deprecates the existing `Handler` interface and `AppModule.Route` in favor of [Protocol Buffer Services](https://developers.google.com/protocol-buffers/docs/proto3#services) and the Service Routing described above. This dramatically simplifies the code. We don't need to create handlers and keepers any more. Use of Protocol Buffer auto-generated clients clearly separates the communication interfaces between the module and a module's user. The control logic (aka handlers and keepers) is no longer exposed. A module interface can be seen as a black box accessible through a client API. It's worth noting that the client interfaces are also generated by Protocol Buffers.

This also allows us to change how we perform functional tests. Instead of mocking AppModules and the Router, we will mock a client (the server will stay hidden). More specifically: we will never mock `moduleA.MsgServer` in `moduleB`, but rather `moduleA.MsgClient`. One can think about it as working with external services (eg DBs, or online servers...). We assume that the transmission between clients and servers is correctly handled by generated Protocol Buffers.

Finally, closing a module to the client API opens desirable OCAP patterns discussed in ADR-033.
Since the server implementation and interface are hidden, nobody can hold "keepers"/servers and everyone will be forced to rely on the client interface, which will drive developers toward correct encapsulation and software engineering patterns.

### Pros

* communicates return type clearly
* manual handler registration and return type marshaling is no longer needed; just implement the interface and register it
* the communication interface is automatically generated, so the developer can now focus only on the state transition methods; this would improve the UX of the [#7093](https://github.com/cosmos/cosmos-sdk/issues/7093) approach (1) if we chose to adopt it
* generated client code could be useful for clients and tests
* dramatically reduces and simplifies the code

### Cons

* using `service` definitions outside the context of gRPC could be confusing (but doesn't violate the proto3 spec)

## References

* [Initial Github Issue #7122](https://github.com/cosmos/cosmos-sdk/issues/7122)
* [proto 3 Language Guide: Defining Services](https://developers.google.com/protocol-buffers/docs/proto3#services)
* [ADR 020](/sdk/v0.50/build/architecture/adr-020-protobuf-transaction-encoding)
* [ADR 021](/sdk/v0.50/build/architecture/adr-021-protobuf-query-encoding)

# ADR 032: Typed Events

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-032-typed-events

28-Sept-2020: Initial Draft

## Changelog

* 28-Sept-2020: Initial Draft

## Authors

* Anil Kumar (@anilcse)
* Jack Zampolin (@jackzampolin)
* Adam Bozanich (@boz)

## Status

Proposed

## Abstract

Currently in the Cosmos SDK, events are defined in the handlers for each message as well as in `BeginBlock` and `EndBlock`. Modules do not have types defined for each event; they are implemented as `map[string]string`. Above all else, this makes these events difficult to consume, as it requires a great deal of raw string matching and parsing.
This proposal focuses on updating the events to use **typed events** defined in each module such that emitting and subscribing to events will be much easier. This workflow comes from the experience of the Akash Network team.

## Context

Currently in the Cosmos SDK, events are defined in the handlers for each message, meaning each module doesn't have a canonical set of types for each event. Above all else this makes these events difficult to consume as it requires a great deal of raw string matching and parsing. This proposal focuses on updating the events to use **typed events** defined in each module such that emitting and subscribing to events will be much easier. This workflow comes from the experience of the Akash Network team.

[Our platform](http://github.com/ovrclk/akash) requires a number of programmatic on-chain interactions both on the provider (datacenter, to bid on new orders and listen for leases created) and user (application developer, to send the app manifest to the provider) side. In addition, the Akash team is now maintaining the IBC [`relayer`](https://github.com/ovrclk/relayer), another very event-driven process. In working on these core pieces of infrastructure, and integrating lessons learned from Kubernetes development, our team has developed a standard method for defining and consuming typed events in Cosmos SDK modules. We have found that it is extremely useful in building this type of event-driven application.

As the Cosmos SDK gets used more extensively for apps like `peggy`, other peg zones, IBC, DeFi, etc., there will be an exploding demand for event-driven applications to support new features desired by users. We propose upstreaming our findings into the Cosmos SDK to enable all Cosmos SDK applications to quickly and easily build event-driven apps to aid their core application. Wallets, exchanges, explorers, and DeFi protocols all stand to benefit from this work.
If this proposal is accepted, users will be able to build event-driven Cosmos SDK apps in Go by just writing `EventHandler`s for their specific event types and passing them to `EventEmitters` that are defined in the Cosmos SDK.

The end of this proposal contains a detailed example of how to consume events after this refactor.

This proposal is specifically about how to consume these events as a client of the blockchain, not for inter-module communication.

## Decision

**Step-1**: Implement additional functionality in the `types` package: `EmitTypedEvent` and `ParseTypedEvent` functions.

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// types/events.go

// EmitTypedEvent takes a typed event and emits it, converting it into an sdk.Event
func (em *EventManager) EmitTypedEvent(event proto.Message) error {
	evtType := proto.MessageName(event)
	evtJSON, err := codec.ProtoMarshalJSON(event)
	if err != nil {
		return err
	}

	var attrMap map[string]json.RawMessage
	err = json.Unmarshal(evtJSON, &attrMap)
	if err != nil {
		return err
	}

	var attrs []abci.EventAttribute
	for k, v := range attrMap {
		attrs = append(attrs, abci.EventAttribute{
			Key:   []byte(k),
			Value: v,
		})
	}

	em.EmitEvent(Event{
		Type:       evtType,
		Attributes: attrs,
	})

	return nil
}

// ParseTypedEvent converts an abci.Event back to a typed event
func ParseTypedEvent(event abci.Event) (proto.Message, error) {
	concreteGoType := proto.MessageType(event.Type)
	if concreteGoType == nil {
		return nil, fmt.Errorf("failed to retrieve the message of type %q", event.Type)
	}

	var value reflect.Value
	if concreteGoType.Kind() == reflect.Ptr {
		value = reflect.New(concreteGoType.Elem())
	} else {
		value = reflect.Zero(concreteGoType)
	}

	protoMsg, ok := value.Interface().(proto.Message)
	if !ok {
		return nil, fmt.Errorf("%q does not implement proto.Message", event.Type)
	}

	attrMap := make(map[string]json.RawMessage)
	for _, attr := range event.Attributes {
		attrMap[string(attr.Key)] = attr.Value
	}

	attrBytes, err := json.Marshal(attrMap)
	if err != nil {
		return nil, err
	}

	err = jsonpb.Unmarshal(strings.NewReader(string(attrBytes)), protoMsg)
	if err != nil {
		return nil, err
	}

	return protoMsg, nil
}
```

Here, `EmitTypedEvent` is a method on `EventManager` which takes a typed event as input and applies JSON serialization to it. It then maps the JSON key/value pairs to `event.Attributes` and emits the result as an `sdk.Event`. `Event.Type` will be the type URL of the proto message.

When we subscribe to emitted events on the CometBFT websocket, they are emitted in the form of an `abci.Event`. `ParseTypedEvent` parses the event back to its original proto message.

**Step-2**: Add proto definitions for typed events for msgs in each module.

For example, let's take `MsgSubmitProposal` of the `gov` module and implement this event's type:

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// proto/cosmos/gov/v1beta1/gov.proto
// Add typed event definition

package cosmos.gov.v1beta1;

message EventSubmitProposal {
	string from_address   = 1;
	uint64 proposal_id    = 2;
	TextProposal proposal = 3;
}
```

**Step-3**: Refactor event emission to use the typed event created and emit it using `ctx.EventManager().EmitTypedEvent`:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// x/gov/handler.go

func handleMsgSubmitProposal(ctx sdk.Context, keeper keeper.Keeper, msg types.MsgSubmitProposalI) (*sdk.Result, error) {
	...
	ctx.EventManager().EmitTypedEvent(
		&EventSubmitProposal{
			FromAddress: fromAddress,
			ProposalId:  id,
			Proposal:    proposal,
		},
	)
	...
}
```

### How to subscribe to these typed events in `Client`

> NOTE: Full code example below

Users will be able to subscribe using `client.Context.Client.Subscribe` and consume events which are emitted using `EventHandler`s.

Akash Network has built a simple [`pubsub`](https://github.com/ovrclk/akash/blob/90d258caeb933b611d575355b8df281208a214f8/pubsub/bus.go#L20).
This can be used to subscribe to `abci.Events` and [publish](https://github.com/ovrclk/akash/blob/90d258caeb933b611d575355b8df281208a214f8/events/publish.go#L21) them as typed events. Please see the code sample below for more detail on how this flow looks for clients.

## Consequences

### Positive

* Improves consistency of implementation for the events currently in the Cosmos SDK
* Provides a much more ergonomic way to handle events and facilitates writing event-driven applications
* This implementation will support a middleware ecosystem of `EventHandler`s

### Negative

## Detailed code example of publishing events

This ADR also proposes adding affordances to emit and consume these events. This way developers will only need to write `EventHandler`s which define the actions they desire to take.

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// EventEmitter is a type that describes event emitter functions
// This should be defined in `types/events.go`
type EventEmitter func(context.Context, client.Context, ...EventHandler) error

// EventHandler is a type of function that handles events coming out of the event bus
// This should be defined in `types/events.go`
type EventHandler func(proto.Message) error

// Sample use of the functions below
func main() {
	ctx, cancel := context.WithCancel(context.Background())

	if err := TxEmitter(ctx, client.Context{}.WithNodeURI("tcp://localhost:26657"), SubmitProposalEventHandler); err != nil {
		cancel()
		panic(err)
	}

	return
}

// SubmitProposalEventHandler is an example of an event handler that prints proposal details
// when any EventSubmitProposal is emitted.
func SubmitProposalEventHandler(ev proto.Message) (err error) {
	switch event := ev.(type) {
	// Handle governance proposal creation events
	case *govtypes.EventSubmitProposal:
		// Users define business logic here e.g.
		fmt.Println(event.FromAddress, event.ProposalId, event.Proposal)
		return nil
	default:
		return nil
	}
}

// TxEmitter is an example of an event emitter that emits just transaction events. This can and
// should be implemented somewhere in the Cosmos SDK. The Cosmos SDK can include EventEmitters for tm.event='Tx'
// and/or tm.event='NewBlock' (the new block events may contain typed events)
func TxEmitter(ctx context.Context, cliCtx client.Context, ehs ...EventHandler) (err error) {
	// Instantiate and start CometBFT RPC client
	client, err := cliCtx.GetNode()
	if err != nil {
		return err
	}

	if err = client.Start(); err != nil {
		return err
	}

	// Start the pubsub bus
	bus := pubsub.NewBus()
	defer bus.Close()

	// Initialize a new error group
	eg, ctx := errgroup.WithContext(ctx)

	// Publish chain events to the pubsub bus
	eg.Go(func() error {
		return PublishChainTxEvents(ctx, client, bus, simapp.ModuleBasics)
	})

	// Subscribe to the bus events
	subscriber, err := bus.Subscribe()
	if err != nil {
		return err
	}

	// Handle all the events coming out of the bus
	eg.Go(func() error {
		var err error
		for {
			select {
			case <-ctx.Done():
				return nil
			case <-subscriber.Done():
				return nil
			case ev := <-subscriber.Events():
				for _, eh := range ehs {
					if err = eh(ev); err != nil {
						break
					}
				}
			}
		}
		return nil
	})

	return eg.Wait()
}

// PublishChainTxEvents publishes events using cmtclient. Waits on context shutdown signals to exit.
func PublishChainTxEvents(ctx context.Context, client cmtclient.EventsClient, bus pubsub.Bus, mb module.BasicManager) (err error) {
	// Subscribe to transaction events
	txch, err := client.Subscribe(ctx, "txevents", "tm.event='Tx'", 100)
	if err != nil {
		return err
	}

	// Unsubscribe from transaction events on function exit
	defer func() {
		err = client.UnsubscribeAll(ctx, "txevents")
	}()

	// Use errgroup to manage concurrency
	g, ctx := errgroup.WithContext(ctx)

	// Publish transaction events in a goroutine
	g.Go(func() error {
		var err error
		for {
			select {
			case <-ctx.Done():
				return nil
			case ed := <-txch:
				switch evt := ed.Data.(type) {
				case cmttypes.EventDataTx:
					if !evt.Result.IsOK() {
						continue
					}
					// range over events, parse them using the basic manager and
					// send them to the pubsub bus
					for _, abciEv := range evt.Result.Events {
						typedEvent, err := sdk.ParseTypedEvent(abciEv)
						if err != nil {
							return err
						}
						if err := bus.Publish(typedEvent); err != nil {
							bus.Close()
							return err
						}
						continue
					}
				}
			}
		}
		return err
	})

	// Exit on error or context cancelation
	return g.Wait()
}
```

## References

* [Publish Custom Events via a bus](https://github.com/ovrclk/akash/blob/90d258caeb933b611d575355b8df281208a214f8/events/publish.go#L19-L58)
* [Consuming the events in `Client`](https://github.com/ovrclk/deploy/blob/bf6c633ab6c68f3026df59efd9982d6ca1bf0561/cmd/event-handlers.go#L57)

# ADR 033: Protobuf-based Inter-Module Communication

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-033-protobuf-inter-module-comm

2020-10-05: Initial Draft

## Changelog

* 2020-10-05: Initial Draft

## Status

Proposed

## Abstract

This ADR introduces a system for permissioned inter-module communication leveraging the protobuf `Query` and `Msg` service definitions defined in [ADR 021](/sdk/v0.50/build/architecture/adr-021-protobuf-query-encoding) and [ADR 031](/sdk/v0.50/build/architecture/adr-031-msg-service) which provides:

* stable protobuf based module interfaces to potentially later replace the keeper paradigm
* stronger inter-module object capabilities (OCAPs) guarantees
* module accounts and sub-account authorization

## Context

In the current Cosmos SDK documentation on the [Object-Capability Model](/sdk/latest/guides/module-design/ocap), it is stated that:

> We assume that a thriving ecosystem of Cosmos SDK modules that are easy to compose into a blockchain application will contain faulty or malicious modules.

There is currently not a thriving ecosystem of Cosmos SDK modules. We hypothesize that this is in part due to:

1. lack of a stable v1.0 Cosmos SDK to build modules off of. Module interfaces are changing, sometimes dramatically, from point release to point release, often for good reasons, but this does not create a stable foundation to build on.
2. lack of a properly implemented object capability or even object-oriented encapsulation system, which makes refactors of module keeper interfaces inevitable because the current interfaces are poorly constrained.

### `x/bank` Case Study

Currently the `x/bank` keeper gives pretty much unrestricted access to any module which references it. For instance, the `SetBalance` method allows the caller to set the balance of any account to anything, bypassing even proper tracking of supply.

There appear to have been some later attempts to implement some semblance of OCAPs using module-level minting, staking and burning permissions. These permissions allow a module to mint, burn or delegate tokens with reference to the module's own account. These permissions are actually stored as a `[]string` array on the `ModuleAccount` type in state.

However, these permissions don't really do much. They control what modules can be referenced in the `MintCoins`, `BurnCoins` and `DelegateCoins***` methods, but for one there is no unique object capability token that controls access, just a simple string. So the `x/upgrade` module could mint tokens for the `x/staking` module simply by calling `MintCoins("staking")`.
Furthermore, all modules which have access to these keeper methods also have access to `SetBalance`, negating any other attempt at OCAPs and breaking even basic object-oriented encapsulation.

## Decision

Based on [ADR-021](/sdk/v0.50/build/architecture/adr-021-protobuf-query-encoding) and [ADR-031](/sdk/v0.50/build/architecture/adr-031-msg-service), we introduce the Inter-Module Communication framework for secure module authorization and OCAPs. When implemented, this could also serve as an alternative to the existing paradigm of passing keepers between modules. The approach outlined herein is intended to form the basis of a Cosmos SDK v1.0 that provides the necessary stability and encapsulation guarantees that allow a thriving module ecosystem to emerge.

Of particular note, the decision is to *enable* this functionality for modules to adopt at their own discretion. Proposals to migrate existing modules to this new paradigm will have to be a separate conversation, potentially addressed as amendments to this ADR.

### New "Keeper" Paradigm

In [ADR 021](/sdk/v0.50/build/architecture/adr-021-protobuf-query-encoding), a mechanism for using protobuf service definitions to define queriers was introduced, and in [ADR 31](/sdk/v0.50/build/architecture/adr-031-msg-service), a mechanism for using protobuf services to define `Msg`s was added. Protobuf service definitions generate two golang interfaces representing the client and server sides of a service, plus some helper code.
Here is a minimal example for the bank `cosmos.bank.Msg/Send` message type:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package bank

type MsgClient interface {
	Send(ctx context.Context, in *MsgSend, opts ...grpc.CallOption) (*MsgSendResponse, error)
}

type MsgServer interface {
	Send(context.Context, *MsgSend) (*MsgSendResponse, error)
}
```

[ADR 021](/sdk/v0.50/build/architecture/adr-021-protobuf-query-encoding) and [ADR 31](/sdk/v0.50/build/architecture/adr-031-msg-service) specify how modules can implement the generated `QueryServer` and `MsgServer` interfaces as replacements for the legacy queriers and `Msg` handlers respectively.

In this ADR we explain how modules can make queries and send `Msg`s to other modules using the generated `QueryClient` and `MsgClient` interfaces, and propose this mechanism as a replacement for the existing `Keeper` paradigm. To be clear, this ADR does not necessitate the creation of new protobuf definitions or services. Rather, it leverages the same proto-based service interfaces already used by clients for inter-module communication.

Using this `QueryClient`/`MsgClient` approach has the following key benefits over exposing keepers to external modules:

1. Protobuf types are checked for breaking changes using [buf](https://buf.build/docs/breaking-overview), and because of the way protobuf is designed this will give us strong backwards compatibility guarantees while allowing for forward evolution.
2. The separation between the client and server interfaces will allow us to insert permission checking code in between the two which checks if one module is authorized to send the specified `Msg` to the other module, providing a proper object capability system (see below).
3. The router for inter-module communication gives us a convenient place to handle rollback of transactions, enabling atomicity of operations ([currently a problem](https://github.com/cosmos/cosmos-sdk/issues/8030)).
Any failure within a module-to-module call would result in a failure of the entire transaction.

This mechanism has the added benefits of:

* reducing boilerplate through code generation, and
* allowing for modules in other languages, either via a VM like CosmWasm or sub-processes using gRPC

### Inter-module Communication

To use the `Client` generated by the protobuf compiler we need a `grpc.ClientConn` [interface](https://github.com/grpc/grpc-go/blob/v1.49.x/clientconn.go#L441-L450) implementation. For this we introduce a new type, `ModuleKey`, which implements the `grpc.ClientConn` interface. `ModuleKey` can be thought of as the "private key" corresponding to a module account, where authentication is provided through use of a special `Invoker()` function, described in more detail below.

Blockchain users (external clients) use their account's private key to sign transactions containing `Msg`s where they are listed as signers (each message specifies required signers with `Msg.GetSigners`). The authentication check is performed by the `AnteHandler`.

Here, we extend this process by allowing modules to be identified in `Msg.GetSigners`. When a module wants to trigger the execution of a `Msg` in another module, its `ModuleKey` acts as the sender (through the `ClientConn` interface we describe below) and is set as the sole "signer". It is worth noting that we don't use any cryptographic signature in this case. For example, module `A` could use its `A.ModuleKey` to create a `MsgSend` object for a `/cosmos.bank.Msg/Send` transaction. `MsgSend` validation will ensure that the `from` account (`A.ModuleKey` in this case) is the signer.

Here's an example of a hypothetical module `foo` interacting with `x/bank`:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package foo

type FooMsgServer struct {
	// ...
	moduleKey RootModuleKey
	bankQuery bank.QueryClient
	bankMsg   bank.MsgClient
}

func NewFooMsgServer(moduleKey RootModuleKey, ...) FooMsgServer {
	// ...
	return FooMsgServer{
		// ...
		moduleKey: moduleKey,
		bankQuery: bank.NewQueryClient(moduleKey),
		bankMsg:   bank.NewMsgClient(moduleKey),
	}
}

func (foo *FooMsgServer) Bar(ctx context.Context, req *MsgBarRequest) (*MsgBarResponse, error) {
	balance, err := foo.bankQuery.Balance(ctx, &bank.QueryBalanceRequest{Address: foo.moduleKey.Address(), Denom: "foo"})
	...
	res, err := foo.bankMsg.Send(ctx, &bank.MsgSendRequest{FromAddress: foo.moduleKey.Address(), ...})
	...
}
```

This design is also intended to be extensible to cover use cases of more fine-grained permissioning, like minting by denom prefix being restricted to certain modules (as discussed in [#7459](https://github.com/cosmos/cosmos-sdk/pull/7459#discussion_r529545528)).

### `ModuleKey`s and `ModuleID`s

A `ModuleKey` can be thought of as a "private key" for a module account, and a `ModuleID` can be thought of as the corresponding "public key". From [ADR 028](/sdk/v0.50/build/architecture/adr-028-public-key-addresses), modules can have both a root module account and any number of sub-accounts or derived accounts that can be used for different pools (e.g. staking pools) or managed accounts (e.g. group accounts). We can also think of module sub-accounts as similar to derived keys: there is a root key and then some derivation path.

`ModuleID` is a simple struct which contains the module name and an optional "derivation" path, and forms its address based on the `AddressHash` method from [ADR-028](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-028-public-key-addresses.md):

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type ModuleID struct {
	ModuleName string
	Path       []byte
}

func (key ModuleID) Address() []byte {
	return AddressHash(key.ModuleName, key.Path)
}
```

In addition to being able to generate a `ModuleID` and address, a `ModuleKey` contains a special function called `Invoker`, which is the key to safe inter-module access.
The `Invoker` creates an `InvokeFn` closure which is used as an `Invoke` method in the `grpc.ClientConn` interface and under the hood is able to route messages to the appropriate `Msg` and `Query` handlers, performing appropriate security checks on `Msg`s. This allows for even safer inter-module access than keepers, whose private member variables could be manipulated through reflection. Golang does not support reflection on a function closure's captured variables, and direct manipulation of memory would be needed for a truly malicious module to bypass the `ModuleKey` security.

The two `ModuleKey` types are `RootModuleKey` and `DerivedModuleKey`:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type Invoker func(callInfo CallInfo) func(ctx context.Context, request, response interface{}, opts ...interface{}) error

type CallInfo struct {
	Method string
	Caller ModuleID
}

type RootModuleKey struct {
	moduleName string
	invoker    Invoker
}

func (rm RootModuleKey) Derive(path []byte) DerivedModuleKey { /* ... */ }

type DerivedModuleKey struct {
	moduleName string
	path       []byte
	invoker    Invoker
}
```

A module can get access to a `DerivedModuleKey` using the `Derive(path []byte)` method on `RootModuleKey`, and would then use this key to authenticate `Msg`s from a sub-account. For example:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package foo

func (fooMsgServer *MsgServer) Bar(ctx context.Context, req *MsgBar) (*MsgBarResponse, error) {
	derivedKey := fooMsgServer.moduleKey.Derive(req.SomePath)
	bankMsgClient := bank.NewMsgClient(derivedKey)
	res, err := bankMsgClient.Send(ctx, &bank.MsgSend{FromAddress: derivedKey.Address(), ...})
	...
}
```

In this way, a module can gain permissioned access to a root account and any number of sub-accounts and send authenticated `Msg`s from these accounts.
The `Invoker`'s `callInfo.Caller` parameter is used under the hood to distinguish between different module accounts, but either way the function returned by `Invoker` only allows `Msg`s from either the root or a derived module account to pass through.

Note that `Invoker` itself returns a function closure based on the `CallInfo` passed in. This will allow client implementations in the future that cache the invoke function for each method type, avoiding the overhead of hash table lookups. This would reduce the performance overhead of this inter-module communication method to the bare minimum required for checking permissions.

To reiterate, the closure only allows access to authorized calls. There is no access to anything else, regardless of any name impersonation.

Below is a rough sketch of the implementation of `grpc.ClientConn.Invoke` for `RootModuleKey`:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (key RootModuleKey) Invoke(ctx context.Context, method string, args, reply interface{}, opts ...grpc.CallOption) error {
	f := key.invoker(CallInfo{Method: method, Caller: ModuleID{ModuleName: key.moduleName}})
	return f(ctx, args, reply)
}
```

### `AppModule` Wiring and Requirements

In [ADR 031](/sdk/v0.50/build/architecture/adr-031-msg-service), the `AppModule.RegisterServices(Configurator)` method was introduced. To support inter-module communication, we extend the `Configurator` interface to pass in the `ModuleKey` and to allow modules to specify their dependencies on other modules using `RequireServer()`:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type Configurator interface {
	MsgServer() grpc.Server
	QueryServer() grpc.Server

	ModuleKey() ModuleKey
	RequireServer(msgServer interface{})
}
```

The `ModuleKey` is passed to modules in the `RegisterServices` method itself so that `RegisterServices` serves as a single entry point for configuring module services.
This is also intended to have the side-effect of greatly reducing boilerplate in `app.go`. For now, `ModuleKey`s will be created based on `AppModuleBasic.Name()`, but a more flexible system may be introduced in the future. The `ModuleManager` will handle creation of module accounts behind the scenes.

Because modules do not get direct access to each other anymore, modules may have unfulfilled dependencies. To make sure that module dependencies are resolved at startup, the `Configurator.RequireServer` method should be added. The `ModuleManager` will make sure that all dependencies declared with `RequireServer` can be resolved before the app starts. An example module `foo` could declare its dependency on `x/bank` like this:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package foo

func (am AppModule) RegisterServices(cfg Configurator) {
	cfg.RequireServer((*bank.QueryServer)(nil))
	cfg.RequireServer((*bank.MsgServer)(nil))
}
```

### Security Considerations

In addition to checking for `ModuleKey` permissions, a few additional security precautions will need to be taken by the underlying router infrastructure.

#### Recursion and Re-entry

Recursive or re-entrant method invocations pose a potential security threat. This can be a problem if Module A calls Module B and Module B calls Module A again in the same call. One basic way for the router system to deal with this is to maintain a call stack which prevents a module from being referenced more than once in the call stack, so that there is no re-entry. A `map[string]interface{}` table in the router could be used to perform this security check.

#### Queries

Queries in the Cosmos SDK are generally unpermissioned, so allowing one module to query another module should not pose any major security threats, assuming basic precautions are taken.
The basic precaution that the router system will need to take is making sure that the `sdk.Context` passed to query methods does not allow writing to the store. This can be done for now with a `CacheMultiStore`, as is currently done for `BaseApp` queries.

### Internal Methods

In many cases, we may wish for modules to call methods on other modules which are not exposed to clients at all. For this purpose, we add the `InternalServer` method to `Configurator`:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type Configurator interface {
	MsgServer() grpc.Server
	QueryServer() grpc.Server
	InternalServer() grpc.Server
}
```

As an example, `x/slashing`'s `Slash` must call `x/staking`'s `Slash`, but we don't want to expose `x/staking`'s `Slash` to end users and clients.

Internal protobuf services will be defined in a corresponding `internal.proto` file in the given module's proto package. Services registered against `InternalServer` will be callable from other modules but not by external clients.

An alternative solution to internal-only methods could involve hooks / plugins, as discussed [here](https://github.com/cosmos/cosmos-sdk/pull/7459#issuecomment-733807753). A more detailed evaluation of a hooks / plugin system will be addressed later in follow-ups to this ADR or as a separate ADR.

### Authorization

By default, the inter-module router requires that messages are sent by the first signer returned by `GetSigners`. The inter-module router should also accept authorization middleware such as that provided by [ADR 030](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-030-authz-module.md). This middleware will allow accounts to authorize specific module accounts to perform actions on their behalf. Authorization middleware should take into account the need to grant certain modules effectively "admin" privileges to other modules. This will be addressed in separate ADRs or updates to this ADR.
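The call-stack precaution described under Security Considerations could be sketched as follows. The `callStack` type and its method names are assumptions for illustration, not actual router code:

```go
package main

import "fmt"

// callStack tracks which modules are on the current inter-module call
// path so the router can reject re-entrant calls (A -> B -> A).
type callStack struct {
	active map[string]bool
}

func newCallStack() *callStack {
	return &callStack{active: make(map[string]bool)}
}

// enter returns an error if the module is already on the call path.
func (cs *callStack) enter(module string) error {
	if cs.active[module] {
		return fmt.Errorf("re-entrant call into module %q rejected", module)
	}
	cs.active[module] = true
	return nil
}

// exit removes the module from the call path when its call returns.
func (cs *callStack) exit(module string) {
	delete(cs.active, module)
}

func main() {
	cs := newCallStack()
	fmt.Println(cs.enter("a")) // first call into a: allowed
	fmt.Println(cs.enter("b")) // a calls b: allowed
	fmt.Println(cs.enter("a")) // b calls back into a: rejected
	cs.exit("b")
	cs.exit("a")
}
```

In a real router the enter/exit bookkeeping would wrap each `Invoke` dispatch, so the guard is enforced uniformly rather than left to individual modules.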
### Future Work

Other future improvements may include:

* custom code generation that:
  * simplifies interfaces (e.g. generates code with `sdk.Context` instead of `context.Context`)
  * optimizes inter-module calls, for instance by caching resolved methods after the first invocation
* combining `StoreKey`s and `ModuleKey`s into a single interface so that modules have a single OCAPs handle
* code generation which makes inter-module communication more performant
* decoupling `ModuleKey` creation from `AppModuleBasic.Name()` so that apps can override root module account names
* inter-module hooks and plugins

## Alternatives

### MsgServices vs `x/capability`

The `x/capability` module does provide a proper object-capability implementation that can be used by any module in the Cosmos SDK and could even be used for inter-module OCAPs as described in [#5931](https://github.com/cosmos/cosmos-sdk/issues/5931).

The advantages of the approach described in this ADR are mostly around how it integrates with other parts of the Cosmos SDK, specifically:

* protobuf, so that:
  * code generation of interfaces can be leveraged for a better dev UX
  * module interfaces are versioned and checked for breakage using [buf](https://docs.buf.build/breaking-overview)
* sub-module accounts as per ADR 028
* the general `Msg` passing paradigm and the way signers are specified by `GetSigners`

Also, this is a complete replacement for keepers and could be applied to *all* inter-module communication, whereas the `x/capability` approach in #5931 would need to be applied method by method.

## Consequences

### Backwards Compatibility

This ADR is intended to provide a pathway to a scenario where there is greater long-term compatibility between modules. In the short term, this will likely result in breaking certain `Keeper` interfaces which are too permissive and/or replacing `Keeper` interfaces altogether.
### Positive * an alternative to keepers which can more easily lead to stable inter-module interfaces * proper inter-module OCAPs * improved module developer DevX, as commented on by several participants on [Architecture Review Call, Dec 3](https://hackmd.io/E0wxxOvRQ5qVmTf6N_k84Q) * lays the groundwork for what can be a greatly simplified `app.go` * router can be set up to enforce atomic transactions for module-to-module calls ### Negative * modules which adopt this will need significant refactoring ### Neutral ## Test Cases \[optional] ## References * [ADR 021](/sdk/v0.50/build/architecture/adr-021-protobuf-query-encoding) * [ADR 031](/sdk/v0.50/build/architecture/adr-031-msg-service) * [ADR 028](/sdk/v0.50/build/architecture/adr-028-public-key-addresses) * [ADR 030 draft](https://github.com/cosmos/cosmos-sdk/pull/7105) * [Object-Capability Model](/sdk/latest/guides/module-design/ocap) # ADR 034: Account Rekeying Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-034-account-rekeying 30-09-2020: Initial Draft ## Changelog * 30-09-2020: Initial Draft ## Status PROPOSED ## Abstract Account rekeying is a process that allows an account to replace its authentication pubkey with a new one. ## Context Currently, in the Cosmos SDK, the address of an auth `BaseAccount` is based on the hash of the public key. Once an account is created, the public key for the account is set in stone, and cannot be changed. This can be a problem for users, as key rotation is a useful security practice, but is not possible currently. Furthermore, as multisigs are a type of pubkey, once a multisig for an account is set, it cannot be updated. This is problematic, as multisigs are often used by organizations or companies, who may need to change their set of multisig signers for internal reasons. Transferring all the assets of an account to a new account with the updated pubkey is not sufficient, because some "engagements" of an account are not easily transferable.
For example, in staking, to transfer bonded Atoms, an account would have to unbond all delegations and wait the three-week unbonding period. Even more significantly, for validator operators, ownership over a validator is not transferable at all, meaning that the operator key for a validator can never be updated, leading to poor operational security for validators. ## Decision We propose the addition of a new feature to `x/auth` that allows accounts to update the public key associated with their account, while keeping the address the same. This is possible because the Cosmos SDK `BaseAccount` stores the public key for an account in state, instead of making the assumption that the public key is included in the transaction (whether explicitly or implicitly through the signature) as in other blockchains such as Bitcoin and Ethereum. Because the public key is stored on chain, it is okay for the public key to not hash to the address of an account, as the address is not pertinent to the signature checking process. To build this system, we design a new Msg type as follows: ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} service Msg { rpc ChangePubKey(MsgChangePubKey) returns (MsgChangePubKeyResponse); } message MsgChangePubKey { string address = 1; google.protobuf.Any pub_key = 2; } message MsgChangePubKeyResponse {} ``` The `MsgChangePubKey` transaction needs to be signed by the existing pubkey in state. Once approved, the handler for this message type, which takes in the `AccountKeeper`, will update the in-state pubkey for the account and replace it with the pubkey from the Msg. An account that has had its pubkey changed cannot be automatically pruned from state. This is because if pruned, the original pubkey of the account would be needed to recreate the same address, but the owner of the address may not have the original pubkey anymore.
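The handler flow described above can be sketched with simplified stand-in types; the keeper, message, and field types below are illustrative, not the actual `x/auth` APIs, and signature verification against the current pubkey is assumed to happen in the ante handler as for any other transaction:

```go
package main

import "fmt"

// BaseAccount is a simplified stand-in for the auth module's account type.
type BaseAccount struct {
	Address string // stays fixed for the account's lifetime
	PubKey  string // stored in state, so it may stop hashing to the address
}

// AccountKeeper is a hypothetical in-memory keeper for illustration.
type AccountKeeper struct{ accounts map[string]*BaseAccount }

type MsgChangePubKey struct {
	Address   string
	NewPubKey string
}

// ChangePubKey swaps the stored pubkey while keeping the address stable.
func (k AccountKeeper) ChangePubKey(msg MsgChangePubKey) error {
	acc, ok := k.accounts[msg.Address]
	if !ok {
		return fmt.Errorf("account %s not found", msg.Address)
	}
	acc.PubKey = msg.NewPubKey // address is untouched
	return nil
}

func main() {
	k := AccountKeeper{accounts: map[string]*BaseAccount{
		"cosmos1abc": {Address: "cosmos1abc", PubKey: "oldkey"},
	}}
	_ = k.ChangePubKey(MsgChangePubKey{Address: "cosmos1abc", NewPubKey: "newkey"})
	fmt.Println(k.accounts["cosmos1abc"].PubKey) // rotated; address unchanged
}
```

The essential point the sketch captures is that rotation is a pure state update: nothing about the address is recomputed.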
Currently, we do not automatically prune any accounts anyway, but we would like to keep this option open down the road (this is the purpose of account numbers). To resolve this, we charge an additional gas fee for this operation to compensate for this externality (this additional gas amount is configured as the parameter `PubKeyChangeCost`). The additional gas is charged inside the handler, using the `ConsumeGas` function. Furthermore, in the future, we can allow accounts that have rekeyed to manually prune themselves using a new Msg type such as `MsgDeleteAccount`. Manually pruning accounts can give a gas refund as an incentive for performing the action. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} amount := ak.GetParams(ctx).PubKeyChangeCost ctx.GasMeter().ConsumeGas(amount, "pubkey change fee") ``` Every time a key for an address is changed, we will store a log of this change in the state of the chain, thus creating a stack of all previous keys for an address and the time intervals for which they were active. This allows dapps and clients to easily query past keys for an account which may be useful for features such as verifying timestamped off-chain signed messages. ## Consequences ### Positive * Will allow users and validator operators to employ better operational security practices with key rotation. * Will allow organizations or groups to easily change and add/remove multisig signers. ### Negative Breaks the current assumed relationship between address and pubkeys as H(pubkey) = address. This has a couple of consequences. * This makes wallets that support this feature more complicated. For example, if an address on chain was updated, the corresponding key in the CLI wallet also needs to be updated. * Cannot automatically prune accounts with 0 balance that have had their pubkey changed.
### Neutral * While this is intended to allow the owner of an account to update to a new pubkey they own, it could technically also be used to transfer ownership of an account to a new owner. For example, this could be used to sell a staked position without unbonding, or an account that has vesting tokens. However, the friction of this is very high as this would essentially have to be done as a very specific OTC trade. Furthermore, additional constraints could be added to prevent accounts with vesting tokens from using this feature. * Will require that PubKeys for an account are included in the genesis exports. ## References * [Link](https://www.algorand.com/resources/blog/announcing-rekeying) # ADR 035: Rosetta API Support Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-035-rosetta-api-support Jonathan Gimeno (@jgimeno) David Grierson (@senormonito) Alessio Treglia (@alessio) Frojdy Dymylja (@fdymylja) ## Authors * Jonathan Gimeno (@jgimeno) * David Grierson (@senormonito) * Alessio Treglia (@alessio) * Frojdy Dymylja (@fdymylja) ## Changelog * 2021-05-12: the external library [cosmos-rosetta-gateway](https://github.com/tendermint/cosmos-rosetta-gateway) has been moved within the Cosmos SDK. ## Context [Rosetta API](https://www.rosetta-api.org/) is an open-source specification and set of tools developed by Coinbase to standardise blockchain interactions. Through the use of a standard API for integrating blockchain applications it will: * Be easier for a user to interact with a given blockchain * Allow exchanges to integrate new blockchains quickly and easily * Enable application developers to build cross-blockchain applications such as block explorers, wallets and dApps at considerably lower cost and effort. ## Decision It is clear that adding Rosetta API support to the Cosmos SDK will bring value to all the developers and Cosmos SDK based chains in the ecosystem. How it is implemented is key.
The driving principles of the proposed design are: 1. **Extensibility:** it must be as riskless and painless as possible for application developers to set up network configurations to expose Rosetta API-compliant services. 2. **Long term support:** This proposal aims to provide support for all the supported Cosmos SDK release series. 3. **Cost-efficiency:** Backporting changes to Rosetta API specifications from `master` to the various stable branches of Cosmos SDK is a cost that needs to be reduced. We will deliver on these principles as follows: 1. There will be a package `rosetta/lib` for the implementation of the core Rosetta API features, particularly: a. The types and interfaces (`Client`, `OfflineClient`...); this separates design from implementation detail. b. The `Server` functionality, as this is independent of the Cosmos SDK version. c. The `Online/OfflineNetwork`, which is not exported, and implements the rosetta API using the `Client` interface to query the node, build tx and so on. d. The `errors` package to extend rosetta errors. 2. Due to differences between the Cosmos release series, each series will have its own specific implementation of the `Client` interface. 3. There will be two options for starting an API service in applications: a. API shares the application process b. API-specific process. ## Architecture ### The External Repo This section describes the proposed external library, including the service implementation, plus the defined types and interfaces. #### Server `Server` is a simple `struct` that is started and listens to the port specified in the settings. This is meant to be used across all the Cosmos SDK versions that are actively supported.
The constructor follows: `func NewServer(settings Settings) (Server, error)` `Settings`, which are used to construct a new server, are the following: ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Settings define the rosetta server settings type Settings struct { // Network contains the information regarding the network Network *types.NetworkIdentifier // Client is the online API handler Client crgtypes.Client // Listen is the address the handler will listen at Listen string // Offline defines if the rosetta service should be exposed in offline mode Offline bool // Retries is the number of readiness checks that will be attempted when instantiating the handler // valid only for online API Retries int // RetryWait is the time that will be waited between retries RetryWait time.Duration } ``` #### Types Package types uses a mixture of rosetta types and custom-defined type wrappers that the client must parse and return while executing operations. ##### Interfaces Every SDK version uses a different format to connect (RPC, gRPC, etc.), query, and build transactions; we have abstracted this into the `Client` interface. The client uses rosetta types, while the `Online/OfflineNetwork` takes care of returning correctly parsed rosetta responses and errors. Each Cosmos SDK release series will have its own `Client` implementations. Developers can implement their own custom `Client`s as required. ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Client defines the API the client implementation should provide. type Client interface { // Needed if the client needs to perform some action before connecting.
Bootstrap() error // Ready checks if the servicer constraints for queries are satisfied // for example the node might still not be ready, it's useful in process // when the rosetta instance might come up before the node itself // the servicer must return nil if the node is ready Ready() error // Data API // Balances fetches the balance of the given address // if height is not nil, then the balance will be displayed // at the provided height, otherwise last block balance will be returned Balances(ctx context.Context, addr string, height *int64) ([]*types.Amount, error) // BlockByHash gets a block and its transactions given the block hash BlockByHash(ctx context.Context, hash string) (BlockResponse, error) // BlockByHeight gets a block given its height; if height is nil then the last block is returned BlockByHeight(ctx context.Context, height *int64) (BlockResponse, error) // BlockTransactionsByHash gets the block, parent block and transactions // given the block hash. BlockTransactionsByHash(ctx context.Context, hash string) (BlockTransactionsResponse, error) // BlockTransactionsByHeight gets the block, parent block and transactions // given the block height. BlockTransactionsByHeight(ctx context.Context, height *int64) (BlockTransactionsResponse, error) // GetTx gets a transaction given its hash GetTx(ctx context.Context, hash string) (*types.Transaction, error) // GetUnconfirmedTx gets an unconfirmed Tx given its hash // NOTE(fdymylja): NOT IMPLEMENTED YET!
GetUnconfirmedTx(ctx context.Context, hash string) (*types.Transaction, error) // Mempool returns the list of the current unconfirmed transactions Mempool(ctx context.Context) ([]*types.TransactionIdentifier, error) // Peers gets the peers currently connected to the node Peers(ctx context.Context) ([]*types.Peer, error) // Status returns the node status, such as sync data, version, etc. Status(ctx context.Context) (*types.SyncStatus, error) // Construction API // PostTx posts txBytes to the node and returns the transaction identifier plus metadata related // to the transaction itself. PostTx(txBytes []byte) (res *types.TransactionIdentifier, meta map[string]interface{}, err error) // ConstructionMetadataFromOptions ConstructionMetadataFromOptions(ctx context.Context, options map[string]interface{}) (meta map[string]interface{}, err error) OfflineClient } // OfflineClient defines the functionalities supported without having access to the node type OfflineClient interface { NetworkInformationProvider // SignedTx returns the signed transaction given the tx bytes (msgs) plus the signatures SignedTx(ctx context.Context, txBytes []byte, sigs []*types.Signature) (signedTxBytes []byte, err error) // TxOperationsAndSignersAccountIdentifiers returns the operations related to a transaction and the account // identifiers if the transaction is signed TxOperationsAndSignersAccountIdentifiers(signed bool, hexBytes []byte) (ops []*types.Operation, signers []*types.AccountIdentifier, err error) // ConstructionPayload returns the construction payload given the request ConstructionPayload(ctx context.Context, req *types.ConstructionPayloadsRequest) (resp *types.ConstructionPayloadsResponse, err error) // PreprocessOperationsToOptions returns the options given the preprocess operations PreprocessOperationsToOptions(ctx context.Context, req *types.ConstructionPreprocessRequest) (options map[string]interface{}, err error) // AccountIdentifierFromPublicKey returns the account
identifier given the public key AccountIdentifierFromPublicKey(pubKey *types.PublicKey) (*types.AccountIdentifier, error) } ``` ### 2. Cosmos SDK Implementation The Cosmos SDK implementation, based on version, takes care of satisfying the `Client` interface. In Stargate, Launchpad and 0.37, we have introduced the concept of `rosetta.Msg`; this message is not in the shared repository, as the `sdk.Msg` type differs between Cosmos SDK versions. The `rosetta.Msg` interface follows: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Msg represents a cosmos-sdk message that can be converted from and to a rosetta operation. type Msg interface { sdk.Msg ToOperations(withStatus, hasError bool) []*types.Operation FromOperations(ops []*types.Operation) (sdk.Msg, error) } ``` Hence developers who want to extend the rosetta set of supported operations just need to extend their module's `sdk.Msg`s with the `ToOperations` and `FromOperations` methods. ### 3. API service invocation As stated at the start, application developers will have two methods for invocation of the Rosetta API service: 1. Shared process for both application and API 2. Standalone API service #### Shared Process (Only Stargate) The Rosetta API service could run within the same execution process as the application. This would be enabled via app.toml settings, and if gRPC is not enabled the rosetta instance would be spun up in offline mode (tx building capabilities only). #### Separate API service Client application developers can write a new command to launch a Rosetta API server as a separate process too, using the rosetta command contained in the `/server/rosetta` package. Construction of the command depends on Cosmos SDK version. Examples can be found inside `simd` for stargate, and `contrib/rosetta/simapp` for other release series. ## Status Proposed ## Consequences ### Positive * Out-of-the-box Rosetta API support within Cosmos SDK.
* Blockchain interface standardisation ## References * [Link](https://www.rosetta-api.org/) # ADR 036: Arbitrary Message Signature Specification Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-036-arbitrary-signature 28/10/2020 - Initial draft ## Changelog * 28/10/2020 - Initial draft ## Authors * Antoine Herzog (@antoineherzog) * Zaki Manian (@zmanian) * Aleksandr Bezobchuk (alexanderbez) \[1] * Frojdi Dymylja (@fdymylja) ## Status Draft ## Abstract Currently, in the Cosmos SDK, there is no convention for signing arbitrary messages as there is on Ethereum. With this specification, we propose a way for the Cosmos SDK ecosystem to sign and validate arbitrary off-chain messages. This specification serves the purpose of covering every use case; this means that Cosmos SDK application developers decide how to serialize and represent `Data` to users. ## Context Having the ability to sign messages off-chain has proven to be a fundamental aspect of nearly any blockchain. The notion of signing messages off-chain has many added benefits such as saving on computational costs and reducing on-chain transaction overhead. Within the context of Cosmos, some of the major applications of signing such data include, but are not limited to, providing a cryptographically secure and verifiable means of proving validator identity and possibly associating it with some other framework or organization. Another is the ability to sign Cosmos messages with a Ledger or similar HSM device. Further context and use cases can be found in the references links. ## Decision The aim is to be able to sign arbitrary messages, even using Ledger or similar HSM devices. As a result, signed messages should look roughly like Cosmos SDK messages but **must not** be a valid on-chain transaction. `chain-id`, `account_number` and `sequence` can all be assigned invalid values. Cosmos SDK 0.40 also introduces the concept of “auth\_info”, which can specify SIGN\_MODES.
A spec should include an `auth_info` that supports SIGN\_MODE\_DIRECT and SIGN\_MODE\_LEGACY\_AMINO. We create the `offchain` proto definitions and extend the auth module with an `offchain` package that offers functionality to sign and verify off-chain messages. An offchain transaction follows these rules: * the memo must be empty * nonce, sequence number must be equal to 0 * chain-id must be equal to “” * fee gas must be equal to 0 * fee amount must be an empty array Verification of an offchain transaction follows the same rules as an onchain one, except for the spec differences highlighted above. The first message added to the `offchain` package is `MsgSignData`. `MsgSignData` allows developers to sign arbitrary bytes valid offchain only. `Signer` is the account address of the signer. `Data` is arbitrary bytes which can represent `text`, `files`, or `object`s. It is the application developers' decision how `Data` should be serialized and deserialized, and what object it represents in their context.
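The envelope rules above can be expressed as a small standalone validator; the `OffchainTx` struct and its field names below are illustrative stand-ins for the real transaction types, not the SDK's API:

```go
package main

import (
	"errors"
	"fmt"
)

// OffchainTx bundles the envelope fields the spec constrains.
type OffchainTx struct {
	Memo      string
	Sequence  uint64
	ChainID   string
	FeeGas    uint64
	FeeAmount []string
}

// validateOffchain enforces the rules above, guaranteeing that a signed
// off-chain payload can never double as a valid on-chain transaction.
func validateOffchain(tx OffchainTx) error {
	switch {
	case tx.Memo != "":
		return errors.New("memo must be empty")
	case tx.Sequence != 0:
		return errors.New("sequence must be 0")
	case tx.ChainID != "":
		return errors.New(`chain-id must be ""`)
	case tx.FeeGas != 0:
		return errors.New("fee gas must be 0")
	case len(tx.FeeAmount) != 0:
		return errors.New("fee amount must be empty")
	}
	return nil
}

func main() {
	fmt.Println(validateOffchain(OffchainTx{}))                       // zero-valued envelope passes
	fmt.Println(validateOffchain(OffchainTx{ChainID: "cosmoshub-4"})) // real chain-id is rejected
}
```

Because every constrained field must hold an on-chain-invalid value, any node receiving such a payload as a transaction would reject it, which is exactly the safety property the spec is after.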
Proto definition: ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // MsgSignData defines an arbitrary, general-purpose, off-chain message message MsgSignData { // Signer is the sdk.AccAddress of the message signer bytes Signer = 1 [(gogoproto.jsontag) = "signer", (gogoproto.casttype) = "github.com/cosmos/cosmos-sdk/types.AccAddress"]; // Data represents the raw bytes of the content that is signed (text, json, etc) bytes Data = 2 [(gogoproto.jsontag) = "data"]; } ``` Signed `MsgSignData` JSON example: ```json expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "type": "cosmos-sdk/StdTx", "value": { "msg": [ { "type": "sign/MsgSignData", "value": { "signer": "cosmos1hftz5ugqmpg9243xeegsqqav62f8hnywsjr4xr", "data": "cmFuZG9t" } } ], "fee": { "amount": [], "gas": "0" }, "signatures": [ { "pub_key": { "type": "tendermint/PubKeySecp256k1", "value": "AqnDSiRoFmTPfq97xxEb2VkQ/Hm28cPsqsZm9jEVsYK9" }, "signature": "8y8i34qJakkjse9pOD2De+dnlc4KvFgh0wQpes4eydN66D9kv7cmCEouRrkka9tlW9cAkIL52ErB+6ye7X5aEg==" } ], "memo": "" } } ``` ## Consequences There is a specification on how messages that are not meant to be broadcast to a live chain should be formed. ### Backwards Compatibility Backwards compatibility is maintained as this is a new message spec definition. ### Positive * A common format that can be used by multiple applications to sign and verify off-chain messages. * The specification is primitive, which means it can cover every use case without limiting what is possible to fit inside it. * It gives room for other off-chain messages specifications that aim to target more specific and common use cases such as off-chain-based authN/authZ layers \[2]. ### Negative * Current proposal requires a fixed relationship between an account address and a public key. * Doesn't work with multisig accounts.
## Further discussion * Regarding security in `MsgSignData`, the developer using `MsgSignData` is in charge of making the content lying in `Data` non-replayable when, and if, needed. * The offchain package will be further extended with extra messages that target specific use cases such as, but not limited to, authentication in applications, payment channels, L2 solutions in general. ## References 1. [Link](https://github.com/cosmos/ics/pull/33) 2. [Link](https://github.com/cosmos/cosmos-sdk/pull/7727#discussion_r515668204) 3. [Link](https://github.com/cosmos/cosmos-sdk/pull/7727#issuecomment-722478477) 4. [Link](https://github.com/cosmos/cosmos-sdk/pull/7727#issuecomment-721062923) # ADR 037: Governance split votes Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-037-gov-split-vote 2020/10/28: Initial draft ## Changelog * 2020/10/28: Initial draft ## Status Accepted ## Abstract This ADR defines a modification to the governance module that would allow a staker to split their votes into several voting options. For example, it could use 70% of its voting power to vote Yes and 30% of its voting power to vote No. ## Context Currently, an address can cast a vote with only one option (Yes/No/Abstain/NoWithVeto) and use their full voting power behind that choice. However, often the entity owning that address might not be a single individual. For example, a company might have different stakeholders who want to vote differently, and so it makes sense to allow them to split their voting power. Another example use case is exchanges. Many centralized exchanges often stake a portion of their users' tokens in their custody. Currently, it is not possible for them to do "passthrough voting" and give their users voting rights over their tokens. However, with this system, exchanges can poll their users for voting preferences, and then vote on-chain proportionally to the results of the poll.
## Decision We modify the vote structs to be ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type WeightedVoteOption struct { Option string Weight sdk.Dec } type Vote struct { ProposalID int64 Voter sdk.Address Options []WeightedVoteOption } ``` And for backwards compatibility, we introduce `MsgVoteWeighted` while keeping `MsgVote`. ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type MsgVote struct { ProposalID int64 Voter sdk.Address Option Option } type MsgVoteWeighted struct { ProposalID int64 Voter sdk.Address Options []WeightedVoteOption } ``` The `ValidateBasic` of a `MsgVoteWeighted` struct would require that 1. The sum of all the weights is equal to 1.0 2. No Option is repeated The governance tally function will iterate over all the options in a vote and add to the tally the result of the voter's voting power \* the weight for that option. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func tally() { results := make(map[types.VoteOption]sdk.Dec) for _, vote := range votes { for _, weightedOption := range vote.Options { results[weightedOption.Option] += getVotingPower(vote.voter) * weightedOption.Weight } } } ``` The CLI command for creating a multi-option vote would be as such: ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx gov vote 1 "yes=0.6,no=0.3,abstain=0.05,no_with_veto=0.05" --from mykey ``` To create a single-option vote a user can do either ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx gov vote 1 "yes=1" --from mykey ``` or ```shell theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx gov vote 1 yes --from mykey ``` to maintain backwards compatibility.
## Consequences ### Backwards Compatibility * Previous VoteMsg types will remain the same and so clients will not have to update their procedure unless they want to support the WeightedVoteMsg feature. * When querying a Vote struct from state, its structure will be different, and so clients wanting to display all voters and their respective votes will have to handle the new format and the fact that a single voter can have split votes. * The result of querying the tally function should have the same API for clients. ### Positive * Can make the voting process more accurate for addresses representing multiple stakeholders, often some of the largest addresses. ### Negative * Is more complex than simple voting, and so may be harder to explain to users. However, this is mostly mitigated because the feature is opt-in. ### Neutral * Relatively minor change to governance tally function. # ADR 038: KVStore state listening Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-038-state-listening ## Changelog * 11/23/2020: Initial draft * 10/06/2022: Introduce plugin system based on hashicorp/go-plugin * 10/14/2022: * Add `ListenCommit`, flatten the state writes in a block to a single batch. * Remove listeners from cache stores, should only listen to `rootmulti.Store`. * Remove `HaltAppOnDeliveryError()`, the errors are propagated by default, the implementations should return nil if they don't want to propagate errors. * 26/05/2023: Update with ABCI 2.0 ## Status Proposed ## Abstract This ADR defines a set of changes to enable listening to state changes of individual KVStores and exposing this data to consumers. ## Context Currently, KVStore data can be remotely accessed through [Queries](https://github.com/cosmos/cosmos-sdk/blob/master/docs/building-modules/messages-and-queries.md#queries) which proceed either through Tendermint and the ABCI, or through the gRPC server.
In addition to these request/response queries, it would be beneficial to have a means of listening to state changes as they occur in real time. ## Decision We will modify the `CommitMultiStore` interface and its concrete (`rootmulti`) implementations and introduce a new `listenkv.Store` to allow listening to state changes in underlying KVStores. We don't need to listen to cache stores, because we can't be sure that their writes will eventually be committed, and any committed writes are duplicated in `rootmulti.Store` anyway, so we should only listen to `rootmulti.Store`. We will introduce a plugin system for configuring and running streaming services that write these state changes and their surrounding ABCI message context to different destinations. ### Listening In a new file, `store/types/listening.go`, we will create a `MemoryListener` struct for streaming out protobuf-encoded KV pair state changes from a KVStore. The `MemoryListener` will be used internally by the concrete `rootmulti` implementation to collect state changes from KVStores. ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // MemoryListener listens to the state writes and accumulates the records in memory. type MemoryListener struct { stateCache []StoreKVPair } // NewMemoryListener creates a listener that accumulates the state writes in memory. func NewMemoryListener() *MemoryListener { return &MemoryListener{ } } // OnWrite writes state change events to the internal cache func (fl *MemoryListener) OnWrite(storeKey StoreKey, key []byte, value []byte, delete bool) { fl.stateCache = append(fl.stateCache, StoreKVPair{ StoreKey: storeKey.Name(), Delete: delete, Key: key, Value: value, }) } // PopStateCache returns the current state cache and resets it to nil func (fl *MemoryListener) PopStateCache() []StoreKVPair { res := fl.stateCache fl.stateCache = nil return res } ``` We will also define a protobuf type for the KV pairs.
In addition to the key and value fields, this message will include the StoreKey for the originating KVStore so that we can collect information from separate KVStores and determine the source of each KV pair. ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} message StoreKVPair { optional string store_key = 1; // the store key for the KVStore this pair originates from required bool set = 2; // true indicates a set operation, false indicates a delete operation required bytes key = 3; required bytes value = 4; } ``` ### ListenKVStore We will create a new `Store` type `listenkv.Store` that the `rootmulti` store will use to wrap a `KVStore` to enable state listening. We will configure the `Store` with a `MemoryListener` which will collect state changes for output to specific destinations. ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Store implements the KVStore interface with listening enabled. // Operations are traced on each core KVStore call and written to any of the // underlying listeners with the proper key and operation permissions type Store struct { parent types.KVStore listener *types.MemoryListener parentStoreKey types.StoreKey } // NewStore returns a reference to a new listenkv.Store given a parent // KVStore implementation, a store key, and a MemoryListener. func NewStore(parent types.KVStore, psk types.StoreKey, listener *types.MemoryListener) *Store { return &Store{ parent: parent, listener: listener, parentStoreKey: psk } } // Set implements the KVStore interface. It traces a write operation and // delegates the Set call to the parent KVStore. func (s *Store) Set(key []byte, value []byte) { types.AssertValidKey(key) s.parent.Set(key, value) s.listener.OnWrite(s.parentStoreKey, key, value, false) } // Delete implements the KVStore interface. It traces a write operation and // delegates the Delete call to the parent KVStore.
func (s *Store) Delete(key []byte) { s.parent.Delete(key) s.listener.OnWrite(s.parentStoreKey, key, nil, true) } ``` ### MultiStore interface updates We will update the `CommitMultiStore` interface to allow us to attach a `MemoryListener` to a specific `KVStore`. Note that the `MemoryListener` will be attached internally by the concrete `rootmulti` implementation. ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} type CommitMultiStore interface { ... // AddListeners adds a listener for the KVStores belonging to the provided StoreKeys AddListeners(keys []StoreKey) // PopStateCache returns the accumulated state change messages from MemoryListener PopStateCache() []StoreKVPair } ``` ### MultiStore implementation updates We will adjust the `rootmulti` `GetKVStore` method to wrap the returned `KVStore` with a `listenkv.Store` if listening is turned on for that `Store`. ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func (rs *Store) GetKVStore(key types.StoreKey) types.KVStore { store := rs.stores[key].(types.KVStore) if rs.TracingEnabled() { store = tracekv.NewStore(store, rs.traceWriter, rs.traceContext) } if rs.ListeningEnabled(key) { store = listenkv.NewStore(store, key, rs.listeners[key]) } return store } ``` We will implement `AddListeners` to manage KVStore listeners internally and implement `PopStateCache` for a means of retrieving the current state.
```go
// AddListeners adds a state change listener for the KVStores belonging to the provided StoreKeys
func (rs *Store) AddListeners(keys []types.StoreKey) {
    listener := types.NewMemoryListener()
    for i := range keys {
        rs.listeners[keys[i]] = listener
    }
}
```

```go
func (rs *Store) PopStateCache() []types.StoreKVPair {
    var cache []types.StoreKVPair
    for _, ls := range rs.listeners {
        cache = append(cache, ls.PopStateCache()...)
    }
    sort.SliceStable(cache, func(i, j int) bool {
        return cache[i].StoreKey < cache[j].StoreKey
    })
    return cache
}
```

We will also adjust the `rootmulti` `CacheMultiStore` and `CacheMultiStoreWithVersion` methods to enable listening in the cache layer.

```go
func (rs *Store) CacheMultiStore() types.CacheMultiStore {
    stores := make(map[types.StoreKey]types.CacheWrapper)
    for k, v := range rs.stores {
        store := v.(types.KVStore)
        // Wire the listenkv.Store to allow listeners to observe the writes from the cache store;
        // setting the same listeners on the cache store would observe duplicated writes.
        if rs.ListeningEnabled(k) {
            store = listenkv.NewStore(store, k, rs.listeners[k])
        }
        stores[k] = store
    }
    return cachemulti.NewStore(rs.db, stores, rs.keysByName, rs.traceWriter, rs.getTracingContext())
}
```

```go
func (rs *Store) CacheMultiStoreWithVersion(version int64) (types.CacheMultiStore, error) {
    // ...
        // Wire the listenkv.Store to allow listeners to observe the writes from the cache store;
        // setting the same listeners on the cache store would observe duplicated writes.
        if rs.ListeningEnabled(key) {
            cacheStore = listenkv.NewStore(cacheStore, key, rs.listeners[key])
        }
        cachedStores[key] = cacheStore
    }

    return cachemulti.NewStore(rs.db, cachedStores, rs.keysByName, rs.traceWriter, rs.getTracingContext()), nil
}
```

### Exposing the data

#### Streaming Service

We will introduce a new `ABCIListener` interface that plugs into the BaseApp and relays ABCI requests and responses so that the service can group the state changes with the ABCI requests.

```go
// baseapp/streaming.go

// ABCIListener is the interface that we're exposing as a streaming service.
type ABCIListener interface {
    // ListenFinalizeBlock updates the streaming service with the latest FinalizeBlock messages
    ListenFinalizeBlock(ctx context.Context, req abci.RequestFinalizeBlock, res abci.ResponseFinalizeBlock) error
    // ListenCommit updates the streaming service with the latest Commit messages and state changes
    ListenCommit(ctx context.Context, res abci.ResponseCommit, changeSet []*StoreKVPair) error
}
```

#### BaseApp Registration

We will add a new method to the `BaseApp` to enable the registration of `StreamingService`s:

```go
// SetStreamingService is used to set a streaming service into the BaseApp hooks and load the listeners into the multistore
func (app *BaseApp) SetStreamingService(s ABCIListener) {
    // register the StreamingService within the BaseApp
    // BaseApp will pass FinalizeBlock and Commit requests and responses to the streaming services to update their ABCI context
    app.abciListeners = append(app.abciListeners, s)
}
```

We will add two new fields to the `BaseApp` struct:

```go
type BaseApp struct {

    ...
    // abciListenersAsync determines whether abciListeners will run asynchronously.
    // When abciListenersAsync=false and stopNodeOnABCIListenerErr=false, listeners will run synchronously but will not stop the node.
    // When abciListenersAsync=true, stopNodeOnABCIListenerErr will be ignored.
    abciListenersAsync bool

    // stopNodeOnABCIListenerErr halts the node when ABCI streaming service listening results in an error.
    // stopNodeOnABCIListenerErr=true must be paired with abciListenersAsync=false.
    stopNodeOnABCIListenerErr bool
}
```

#### ABCI Event Hooks

We will modify the `FinalizeBlock` and `Commit` methods to pass ABCI requests and responses to any streaming service hooks registered with the `BaseApp`.

```go
func (app *BaseApp) FinalizeBlock(req abci.RequestFinalizeBlock) abci.ResponseFinalizeBlock {
    var abciRes abci.ResponseFinalizeBlock
    defer func() {
        // call the streaming service hooks with the FinalizeBlock messages
        for _, abciListener := range app.abciListeners {
            ctx := app.finalizeState.ctx
            blockHeight := ctx.BlockHeight()
            if app.abciListenersAsync {
                go func(listener ABCIListener, req abci.RequestFinalizeBlock, res abci.ResponseFinalizeBlock) {
                    if err := listener.ListenFinalizeBlock(ctx, req, res); err != nil {
                        app.logger.Error("FinalizeBlock listening hook failed", "height", blockHeight, "err", err)
                    }
                }(abciListener, req, abciRes)
            } else {
                if err := abciListener.ListenFinalizeBlock(ctx, req, abciRes); err != nil {
                    app.logger.Error("FinalizeBlock listening hook failed", "height", blockHeight, "err", err)
                    if app.stopNodeOnABCIListenerErr {
                        os.Exit(1)
                    }
                }
            }
        }
    }()

    ...

    return abciRes
}
```

```go
func (app *BaseApp) Commit() abci.ResponseCommit {

    ...
    res := abci.ResponseCommit{
        Data:         commitID.Hash,
        RetainHeight: retainHeight,
    }

    // call the streaming service hooks with the Commit messages
    for _, abciListener := range app.abciListeners {
        ctx := app.deliverState.ctx
        blockHeight := ctx.BlockHeight()
        changeSet := app.cms.PopStateCache()
        if app.abciListenersAsync {
            go func(listener ABCIListener, res abci.ResponseCommit, changeSet []store.StoreKVPair) {
                if err := listener.ListenCommit(ctx, res, changeSet); err != nil {
                    app.logger.Error("ListenCommit listening hook failed", "height", blockHeight, "err", err)
                }
            }(abciListener, res, changeSet)
        } else {
            if err := abciListener.ListenCommit(ctx, res, changeSet); err != nil {
                app.logger.Error("ListenCommit listening hook failed", "height", blockHeight, "err", err)
                if app.stopNodeOnABCIListenerErr {
                    os.Exit(1)
                }
            }
        }
    }

    ...

    return res
}
```

#### Go Plugin System

We propose a plugin architecture to load and run `Streaming` plugins and other types of implementations. We will introduce a plugin system over gRPC that is used to load and run Cosmos SDK plugins. The plugin system uses [hashicorp/go-plugin](https://github.com/hashicorp/go-plugin). Each plugin must have a struct that implements the `plugin.Plugin` interface and an `Impl` interface for processing messages over gRPC. Each plugin must also have a message protocol defined for the gRPC service:

```go
// streaming/plugins/abci/{ plugin_version }/interface.go

// Handshake is a common handshake that is shared by streaming and host.
// This prevents users from executing bad plugins or executing a plugin
// directory. It is a UX feature, not a security feature.
var Handshake = plugin.HandshakeConfig{
    ProtocolVersion:  1,
    MagicCookieKey:   "ABCI_LISTENER_PLUGIN",
    MagicCookieValue: "ef78114d-7bdf-411c-868f-347c99a78345",
}

// ListenerGRPCPlugin is the base struct for all kinds of go-plugin implementations.
// It will be included in interfaces of different Plugins.
type ListenerGRPCPlugin struct {
    // GRPCPlugin must still implement the Plugin interface
    plugin.Plugin
    // Concrete implementation, written in Go. This is only used for plugins
    // that are written in Go.
    Impl baseapp.ABCIListener
}

func (p *ListenerGRPCPlugin) GRPCServer(_ *plugin.GRPCBroker, s *grpc.Server) error {
    RegisterABCIListenerServiceServer(s, &GRPCServer{Impl: p.Impl})
    return nil
}

func (p *ListenerGRPCPlugin) GRPCClient(
    _ context.Context,
    _ *plugin.GRPCBroker,
    c *grpc.ClientConn,
) (interface{}, error) {
    return &GRPCClient{client: NewABCIListenerServiceClient(c)}, nil
}
```

The `plugin.Plugin` interface has two methods, `Client` and `Server`. For our gRPC service these are `GRPCClient` and `GRPCServer`. The `Impl` field holds the concrete implementation of our `baseapp.ABCIListener` interface written in Go. Note: this is only used for plugin implementations written in Go.

The advantage of having such a plugin system is that within each plugin authors can define the message protocol in a way that fits their use case. For example, when state change listening is desired, the `ABCIListener` message protocol can be defined as below (*for illustrative purposes only*). When state change listening is not desired, `ListenCommit` can be omitted from the protocol.

```protobuf
syntax = "proto3";

...
message Empty {}

message ListenFinalizeBlockRequest {
    RequestFinalizeBlock req = 1;
    ResponseFinalizeBlock res = 2;
}

message ListenCommitRequest {
    int64 block_height = 1;
    ResponseCommit res = 2;
    repeated StoreKVPair change_set = 3;
}

// plugin that listens to state changes
service ABCIListenerService {
    rpc ListenFinalizeBlock(ListenFinalizeBlockRequest) returns (Empty);
    rpc ListenCommit(ListenCommitRequest) returns (Empty);
}
```

```protobuf
...

// plugin that doesn't listen to state changes
service ABCIListenerService {
    rpc ListenFinalizeBlock(ListenFinalizeBlockRequest) returns (Empty);
}
```

Implementing the service above:

```go
// streaming/plugins/abci/{ plugin_version }/grpc.go

var (
    _ baseapp.ABCIListener = (*GRPCClient)(nil)
)

// GRPCClient is an implementation of the ABCIListener and ABCIListenerPlugin interfaces that talks over RPC.
type GRPCClient struct {
    client ABCIListenerServiceClient
}

func (m *GRPCClient) ListenFinalizeBlock(goCtx context.Context, req abci.RequestFinalizeBlock, res abci.ResponseFinalizeBlock) error {
    ctx := sdk.UnwrapSDKContext(goCtx)
    _, err := m.client.ListenFinalizeBlock(ctx, &ListenFinalizeBlockRequest{Req: req, Res: res})
    return err
}

func (m *GRPCClient) ListenCommit(goCtx context.Context, res abci.ResponseCommit, changeSet []store.StoreKVPair) error {
    ctx := sdk.UnwrapSDKContext(goCtx)
    _, err := m.client.ListenCommit(ctx, &ListenCommitRequest{BlockHeight: ctx.BlockHeight(), Res: res, ChangeSet: changeSet})
    return err
}

// GRPCServer is the gRPC server that GRPCClient talks to.
type GRPCServer struct {
    // This is the real implementation
    Impl baseapp.ABCIListener
}

func (m *GRPCServer) ListenFinalizeBlock(ctx context.Context, req *ListenFinalizeBlockRequest) (*Empty, error) {
    return &Empty{}, m.Impl.ListenFinalizeBlock(ctx, req.Req, req.Res)
}

func (m *GRPCServer) ListenCommit(ctx context.Context, req *ListenCommitRequest) (*Empty, error) {
    return &Empty{}, m.Impl.ListenCommit(ctx, req.Res, req.ChangeSet)
}
```

And the pre-compiled Go plugin `Impl` (*this is only used for plugins that are written in Go*):

```go
// streaming/plugins/abci/{ plugin_version }/impl/plugin.go

// Plugins are pre-compiled and loaded by the plugin system

// ABCIListener is the implementation of the baseapp.ABCIListener interface
type ABCIListener struct{}

func (m *ABCIListener) ListenFinalizeBlock(ctx context.Context, req abci.RequestFinalizeBlock, res abci.ResponseFinalizeBlock) error {
    // send data to external system
    return nil
}

func (m *ABCIListener) ListenCommit(ctx context.Context, res abci.ResponseCommit, changeSet []store.StoreKVPair) error {
    // send data to external system
    return nil
}

func main() {
    plugin.Serve(&plugin.ServeConfig{
        HandshakeConfig: grpc_abci_v1.Handshake,
        Plugins: map[string]plugin.Plugin{
            "grpc_plugin_v1": &grpc_abci_v1.ListenerGRPCPlugin{Impl: &ABCIListener{}},
        },
        // A non-nil value here enables gRPC serving for this streaming...
        GRPCServer: plugin.DefaultGRPCServer,
    })
}
```

We will introduce a plugin loading system that will return `(interface{}, error)`. This provides the advantage of using versioned plugins where the plugin interface and gRPC protocol change over time. In addition, it allows for building independent plugins that can expose different parts of the system over gRPC.
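The "dispense as `interface{}`, assert at the call site" pattern that this loader relies on can be shown without go-plugin at all. In the toy sketch below, a plain map stands in for `rpcClient.Dispense`; the names (`ABCIListener`, `abci_v1`) are illustrative only, not the SDK's:

```go
package main

import "fmt"

// ABCIListener is a simplified stand-in for the streaming interface.
type ABCIListener interface {
	ListenCommit(blockHeight int64) error
}

// fileListener is a trivial concrete plugin implementation.
type fileListener struct{}

func (fileListener) ListenCommit(h int64) error {
	fmt.Println("commit at height", h)
	return nil
}

// registry stands in for go-plugin's dispensing machinery.
var registry = map[string]interface{}{
	"abci_v1": fileListener{},
}

// dispense mirrors the loader's (interface{}, error) return, so plugin
// interfaces can evolve across versions without changing the loader.
func dispense(name string) (interface{}, error) {
	p, ok := registry[name]
	if !ok {
		return nil, fmt.Errorf("unknown plugin: %s", name)
	}
	return p, nil
}

func main() {
	raw, err := dispense("abci_v1")
	if err != nil {
		panic(err)
	}
	// The caller, not the loader, decides which interface the plugin must satisfy.
	listener, ok := raw.(ABCIListener)
	if !ok {
		panic("plugin does not implement ABCIListener")
	}
	listener.ListenCommit(42)
}
```

Because the loader never names a concrete interface, an `ABCIListener`, `WasmListener`, or `IBCListener` plugin can all flow through the same `dispense` path and be type-switched at registration time.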
```go
func NewStreamingPlugin(name string, logLevel string) (interface{}, error) {
    logger := hclog.New(&hclog.LoggerOptions{
        Output: hclog.DefaultOutput,
        Level:  toHclogLevel(logLevel),
        Name:   fmt.Sprintf("plugin.%s", name),
    })

    // We're a host. Start by launching the streaming process.
    env := os.Getenv(GetPluginEnvKey(name))
    client := plugin.NewClient(&plugin.ClientConfig{
        HandshakeConfig:  HandshakeMap[name],
        Plugins:          PluginMap,
        Cmd:              exec.Command("sh", "-c", env),
        Logger:           logger,
        AllowedProtocols: []plugin.Protocol{plugin.ProtocolNetRPC, plugin.ProtocolGRPC},
    })

    // Connect via RPC
    rpcClient, err := client.Client()
    if err != nil {
        return nil, err
    }

    // Request streaming plugin
    return rpcClient.Dispense(name)
}
```

We propose a `RegisterStreamingPlugin` function for the App to register `NewStreamingPlugin`s with the App's BaseApp. Streaming plugins can be of any type; therefore, the function takes in an interface rather than a concrete type. For example, we could have plugins of `ABCIListener`, `WasmListener` or `IBCListener`. Note that the `RegisterStreamingPlugin` function is a helper function and not a requirement. Plugin registration can easily be moved from the App to the BaseApp directly.

```go
// baseapp/streaming.go

// RegisterStreamingPlugin registers streaming plugins with the App.
// This method returns an error if a plugin is not supported.
func RegisterStreamingPlugin(
    bApp *BaseApp,
    appOpts servertypes.AppOptions,
    keys map[string]*types.KVStoreKey,
    streamingPlugin interface{},
) error {
    switch t := streamingPlugin.(type) {
    case ABCIListener:
        registerABCIListenerPlugin(bApp, appOpts, keys, t)
    default:
        return fmt.Errorf("unexpected plugin type %T", t)
    }
    return nil
}
```

```go
func registerABCIListenerPlugin(
    bApp *BaseApp,
    appOpts servertypes.AppOptions,
    keys map[string]*store.KVStoreKey,
    abciListener ABCIListener,
) {
    asyncKey := fmt.Sprintf("%s.%s.%s", StreamingTomlKey, StreamingABCITomlKey, StreamingABCIAsync)
    bApp.abciListenersAsync = cast.ToBool(appOpts.Get(asyncKey))

    stopNodeOnErrKey := fmt.Sprintf("%s.%s.%s", StreamingTomlKey, StreamingABCITomlKey, StreamingABCIStopNodeOnErrTomlKey)
    stopNodeOnErr := cast.ToBool(appOpts.Get(stopNodeOnErrKey))

    keysKey := fmt.Sprintf("%s.%s.%s", StreamingTomlKey, StreamingABCITomlKey, StreamingABCIKeysTomlKey)
    exposeKeysStr := cast.ToStringSlice(appOpts.Get(keysKey))
    exposedKeys := exposeStoreKeys(exposeKeysStr, keys)

    bApp.cms.AddListeners(exposedKeys)

    bApp.SetStreamingManager(
        storetypes.StreamingManager{
            ABCIListeners: []storetypes.ABCIListener{abciListener},
            StopNodeOnErr: stopNodeOnErr,
        },
    )
}
```

```go
func exposeAll(list []string) bool {
    for _, ele := range list {
        if ele == "*" {
            return true
        }
    }
    return false
}

func exposeStoreKeys(keysStr []string, keys map[string]*types.KVStoreKey) []types.StoreKey {
    var exposeStoreKeys []types.StoreKey
    if exposeAll(keysStr) {
        exposeStoreKeys = make([]types.StoreKey, 0, len(keys))
        for _, storeKey := range keys {
            exposeStoreKeys = append(exposeStoreKeys, storeKey)
        }
    } else {
        exposeStoreKeys = make([]types.StoreKey, 0, len(keysStr))
        for _, keyStr := range keysStr {
            if storeKey, ok := keys[keyStr]; ok {
                exposeStoreKeys =
                    append(exposeStoreKeys, storeKey)
            }
        }
    }

    // sort storeKeys for deterministic output
    sort.SliceStable(exposeStoreKeys, func(i, j int) bool {
        return exposeStoreKeys[i].Name() < exposeStoreKeys[j].Name()
    })

    return exposeStoreKeys
}
```

The `NewStreamingPlugin` and `RegisterStreamingPlugin` functions are used to register a plugin with the App's BaseApp, e.g. in `NewSimApp`:

```go
func NewSimApp(
    logger log.Logger,
    db dbm.DB,
    traceStore io.Writer,
    loadLatest bool,
    appOpts servertypes.AppOptions,
    baseAppOptions ...func(*baseapp.BaseApp),
) *SimApp {

    ...

    keys := sdk.NewKVStoreKeys(
        authtypes.StoreKey, banktypes.StoreKey, stakingtypes.StoreKey,
        minttypes.StoreKey, distrtypes.StoreKey, slashingtypes.StoreKey,
        govtypes.StoreKey, paramstypes.StoreKey, ibchost.StoreKey,
        upgradetypes.StoreKey, evidencetypes.StoreKey, ibctransfertypes.StoreKey,
        capabilitytypes.StoreKey,
    )

    ...

    // register streaming services
    streamingCfg := cast.ToStringMap(appOpts.Get(baseapp.StreamingTomlKey))
    for service := range streamingCfg {
        pluginKey := fmt.Sprintf("%s.%s.%s", baseapp.StreamingTomlKey, service, baseapp.StreamingPluginTomlKey)
        pluginName := strings.TrimSpace(cast.ToString(appOpts.Get(pluginKey)))
        if len(pluginName) > 0 {
            logLevel := cast.ToString(appOpts.Get(flags.FlagLogLevel))
            plugin, err := streaming.NewStreamingPlugin(pluginName, logLevel)
            if err != nil {
                tmos.Exit(err.Error())
            }
            if err := baseapp.RegisterStreamingPlugin(bApp, appOpts, keys, plugin); err != nil {
                tmos.Exit(err.Error())
            }
        }
    }

    return app
}
```

#### Configuration

The plugin system will be configured within an App's TOML configuration files.
```toml
# gRPC streaming
[streaming]

# ABCI streaming service
[streaming.abci]

# The plugin version to use for ABCI listening
plugin = "abci_v1"

# List of kv store keys to listen to for state changes.
# Set to ["*"] to expose all keys.
keys = ["*"]

# Enable abciListeners to run asynchronously.
# When abciListenersAsync=false and stopNodeOnABCIListenerErr=false listeners will run synchronously but will not stop the node.
# When abciListenersAsync=true stopNodeOnABCIListenerErr will be ignored.
async = false

# Whether to stop the node on message delivery error.
stop-node-on-err = true
```

There will be four parameters for configuring the `ABCIListener` plugin: `streaming.abci.plugin`, `streaming.abci.keys`, `streaming.abci.async` and `streaming.abci.stop-node-on-err`. `streaming.abci.plugin` is the name of the plugin we want to use for streaming, `streaming.abci.keys` is the set of store keys to listen to, `streaming.abci.async` is a bool enabling asynchronous listening, and `streaming.abci.stop-node-on-err` is a bool that stops the node on error when operating in synchronized mode (`streaming.abci.async=false`). Note that `streaming.abci.stop-node-on-err=true` will be ignored if `streaming.abci.async=true`.

The configuration above supports additional streaming plugins: add the plugin to the `[streaming]` configuration section and register it with the `RegisterStreamingPlugin` helper function. Note that each plugin must include the `streaming.{service}.plugin` property, as it is required for looking up and registering the plugin with the App. All other properties are unique to the individual services.
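Two of these parameters carry non-obvious semantics: the `"*"` wildcard in `keys`, and the rule that `stop-node-on-err` is ignored when `async=true`. Both can be illustrated in isolation with the following sketch (plain strings stand in for the SDK's store key type; nothing here is SDK code):

```go
package main

import (
	"fmt"
	"sort"
)

// exposedKeys mirrors the keys = ["*"] semantics: the wildcard exposes
// every registered store key; otherwise only the listed (and known)
// keys are exposed. The result is sorted for deterministic output.
func exposedKeys(cfg []string, registered map[string]bool) []string {
	wildcard := false
	for _, k := range cfg {
		if k == "*" {
			wildcard = true
		}
	}
	var out []string
	if wildcard {
		for k := range registered {
			out = append(out, k)
		}
	} else {
		for _, k := range cfg {
			if registered[k] {
				out = append(out, k)
			}
		}
	}
	sort.Strings(out)
	return out
}

// onListenerError mirrors the flag interaction described above: with
// async=true the error is only logged and stop-node-on-err is ignored;
// with async=false the node halts only if stop-node-on-err=true.
func onListenerError(async, stopNodeOnErr bool) string {
	if async {
		return "log only (stop-node-on-err ignored)"
	}
	if stopNodeOnErr {
		return "halt node"
	}
	return "log only"
}

func main() {
	stores := map[string]bool{"bank": true, "staking": true, "gov": true}
	fmt.Println(exposedKeys([]string{"*"}, stores))              // every registered key, sorted
	fmt.Println(exposedKeys([]string{"staking", "nope"}, stores)) // unknown keys silently dropped
	fmt.Println(onListenerError(false, true))
	fmt.Println(onListenerError(true, true))
}
```

Sorting the exposed keys matters: listeners are attached in a loop, so a deterministic ordering keeps node configuration reproducible across restarts.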
#### Encoding and decoding streams

ADR-038 introduces the interfaces and types for streaming state changes out from KVStores, associating this data with their related ABCI requests and responses, and registering a service for consuming this data and streaming it to some destination in a final format. Instead of prescribing a final data format in this ADR, it is left to a specific plugin implementation to define and document this format. We take this approach because flexibility in the final format is necessary to support a wide range of streaming service plugins. For example, the data format for a streaming service that writes the data out to a set of files will differ from the data format that is written to a Kafka topic.

## Consequences

These changes will provide a means of subscribing to KVStore state changes in real time.

### Backwards Compatibility

* This ADR changes the `CommitMultiStore` interface; implementations supporting the previous version of this interface will not support the new one

### Positive

* Ability to listen to KVStore state changes in real time and expose these events to external consumers

### Negative

* Changes the `CommitMultiStore` interface and its implementations

### Neutral

* Introduces additional, but optional, complexity to configuring and running a Cosmos application
* If an application developer opts to use these features to expose data, they need to be aware of the ramifications/risks of that data exposure as it pertains to the specifics of their application

# ADR 039: Epoched Staking

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-039-epoched-staking

## Changelog

* 10-Feb-2021: Initial Draft

## Authors

* Dev Ojha (@valardragon)
* Sunny Aggarwal (@sunnya97)

## Status

Proposed

## Abstract

This ADR updates the proof of stake module to buffer the staking weight updates for a number of blocks before updating the consensus' staking weights. The length of the buffer is dubbed an epoch.
The prior functionality of the staking module is then a special case of the abstracted module, with the epoch set to 1 block.

## Context

The current proof of stake module takes the design decision to apply staking weight changes to the consensus engine immediately. This means that delegations and unbonds get applied immediately to the validator set. This decision was primarily made because it was the simplest to implement, and because we believed at the time that this would lead to better UX for clients.

An alternative design choice is to allow buffering staking updates (delegations, unbonds, validators joining) for a number of blocks. This 'epoched' proof of stake consensus provides the guarantee that the consensus weights for validators will not change mid-epoch, except in the event of a slash condition.

Additionally, the UX hurdle may not be as significant as was previously thought. This is because it is possible to provide users immediate acknowledgement that their bond was recorded and will be executed.

Furthermore, it has become clearer over time that immediate execution of staking events comes with limitations, such as:

* Threshold based cryptography. One of the main limitations is that because the validator set can change so regularly, it makes the running of multiparty computation by a fixed validator set difficult. Many threshold-based cryptographic features for blockchains, such as randomness beacons and threshold decryption, require a computationally-expensive DKG process (which will take much longer than 1 block to create). To productively use these, we need to guarantee that the result of the DKG will be used for a reasonably long time. It wouldn't be feasible to rerun the DKG every block. By epoching staking, we guarantee that a new DKG only needs to run once every epoch.

* Light client efficiency. This would lessen the overhead for IBC when there is high churn in the validator set.
In the Tendermint light client bisection algorithm, the number of headers you need to verify is related to bounding the difference in validator sets between a trusted header and the latest header. If the difference is too great, you verify more headers in between the two. By limiting the frequency of validator set changes, we can reduce the worst-case size of IBC light client proofs, which occurs when a validator set has high churn.

* Fairness of deterministic leader election. Currently we have no way of reasoning about the fairness of deterministic leader election in the presence of staking changes without epochs (tendermint/spec#217). Breaking fairness of leader election is profitable for validators, as they earn additional rewards from being the proposer. Adding epochs at least makes it easier for our deterministic leader election to match something we can prove secure. (Albeit, we still haven't proven whether our current algorithm is fair with > 2 validators in the presence of stake changes.)

* Staking derivative design. Currently, reward distribution is done lazily using the F1 fee distribution. While saving computational complexity, lazy accounting requires a more stateful staking implementation. Right now, each delegation entry has to track the time of last withdrawal. Handling this can be a challenge for some staking derivative designs that seek to provide fungibility for all tokens staked to a single validator. Force-withdrawing rewards to users can help solve this; however, it is infeasible to force-withdraw rewards to users on a per-block basis. With epochs, a chain could more easily alter the design to have rewards be forcefully withdrawn (iterating over delegator accounts only once per epoch), and can thus remove delegation timing from state. This may be useful for certain staking derivative designs.

## Design considerations

### Slashing

There is a design consideration for whether to apply a slash immediately or at the end of an epoch.
A slash event should apply only to members who are actually staked during the time of the infraction, namely during the epoch the slash event occurred.

Applying it immediately can be viewed as offering greater consensus-layer security, at potential cost to the aforementioned use cases. The benefits of immediate slashing for consensus-layer security can all be obtained by executing the validator jailing immediately (thus removing it from the validator set), and delaying the actual slash change to the validator's weight until the epoch boundary. For the use cases mentioned above, workarounds can be integrated to avoid problems, as follows:

* For threshold based cryptography, this setting will have the threshold cryptography use the original epoch weights, while consensus has an update that lets it more rapidly benefit from additional security. If the threshold based cryptography blocks liveness of the chain, then we have effectively raised the liveness threshold of the remaining validators for the rest of the epoch. (Alternatively, jailed nodes could still contribute shares.) This plan will fail in the extreme case that more than 1/3rd of the validators have been jailed within a single epoch. For such an extreme scenario, the chain should already have its own custom incident response plan, and defining how to handle the threshold cryptography should be a part of that.

* For light client efficiency, a bit can be included in the header indicating an intra-epoch slash (as in [tendermint/spec#199](https://github.com/tendermint/spec/issues/199)).

* For fairness of deterministic leader election, applying a slash or jailing within an epoch would break the guarantee we were seeking to provide. This then re-introduces a new (but significantly simpler) problem for trying to provide fairness guarantees. Namely, that validators can adversarially elect to remove themselves from the set of proposers.
From a security perspective, this could potentially be handled by two different mechanisms (or prove to still be too difficult to achieve). One is making a security statement acknowledging the ability for an adversary to force an ahead-of-time fixed threshold of users to drop out of the proposer set within an epoch. The second method would be to parameterize such that the cost of a slash within the epoch far outweighs the benefits of being a proposer. However, this latter criterion is quite dubious, since being a proposer can have many advantageous side-effects in chains with complex state machines. (Namely, DeFi games such as Fomo3D.)

* For staking derivative design, there is no issue introduced. This does not increase the state size of staking records, since whether a slash has occurred is fully queryable given the validator address.

### Token lockup

When someone makes a transaction to delegate, even though they are not immediately staked, their tokens should be moved into a pool managed by the staking module, which will then be used at the end of the epoch. This prevents the concern where users stake, then spend those tokens without realizing they were already allocated for staking, and thus have their staking tx fail.

### Pipelining the epochs

For threshold based cryptography in particular, we need a pipeline for epoch changes. This is because when we are in epoch N, we want the epoch N+1 weights to be fixed so that the validator set can do the DKG accordingly. So if we are currently in epoch N, the stake weights for epoch N+1 should already be fixed, and new stake changes should be getting applied to epoch N+2.

This can be handled by making a parameter for the epoch pipeline length. This parameter should not be alterable except during hard forks, to mitigate the implementation complexity of switching the pipeline length.

With pipeline length 1, if I redelegate during epoch N, then my redelegation is applied prior to the beginning of epoch N+1.
With pipeline length 2, if I redelegate during epoch N, then my redelegation is applied prior to the beginning of epoch N+2.

### Rewards

Even though all staking updates are applied at epoch boundaries, rewards can still be distributed immediately when they are claimed. This is because they do not affect the current stake weights, as we do not implement auto-bonding of rewards. If such a feature were to be implemented, it would have to be set up so that rewards are auto-bonded at the epoch boundary.

### Parameterizing the epoch length

When choosing the epoch length, there is a trade-off between queued state/computation buildup and countering the previously discussed limitations of immediate execution, where they apply to a given chain.

Until an ABCI mechanism for variable block times is introduced, it is ill-advised to use high epoch lengths due to the computation buildup. This is because when a block's execution time is greater than the expected block time from Tendermint, rounds may increment.

## Decision

**Step-1**: Implement buffering of all staking and slashing messages.

First we create a pool, called the `EpochDelegationPool`, for storing tokens that are being bonded but should only be applied at the epoch boundary. Then, we have two separate queues, one for staking, one for slashing. We describe what happens on each message being delivered below:

### Staking messages

* **MsgCreateValidator**: Move the user's self-bond to the `EpochDelegationPool` immediately. Queue a message for the epoch boundary to handle the self-bond, taking the funds from the `EpochDelegationPool`. If epoch execution fails, return the funds from the `EpochDelegationPool` to the user's account.
* **MsgEditValidator**: Validate the message and, if valid, queue it for execution at the end of the epoch.
* **MsgDelegate**: Move the user's funds to the `EpochDelegationPool` immediately. Queue a message for the epoch boundary to handle the delegation, taking the funds from the `EpochDelegationPool`.
If epoch execution fails, return the funds from the `EpochDelegationPool` to the user's account.

* **MsgBeginRedelegate**: Validate the message and, if valid, queue it for execution at the end of the epoch.
* **MsgUndelegate**: Validate the message and, if valid, queue it for execution at the end of the epoch.

### Slashing messages

* **MsgUnjail**: Validate the message and, if valid, queue it for execution at the end of the epoch.
* **Slash Event**: Whenever a slash event is created, it gets queued in the slashing module to apply at the end of the epoch. The queues should be set up such that this slash applies immediately.

### Evidence Messages

* **MsgSubmitEvidence**: This gets executed immediately, and the validator gets jailed immediately. However, in slashing, the actual slash event gets queued.

Then we add methods to the end blockers, to ensure that at the epoch boundary the queues are cleared and delegation updates are applied.

**Step-2**: Implement querying of queued staking txs.

When querying the staking activity of a given address, the status should return not only the amount of tokens staked, but also whether there are any queued stake events for that address. This will require more work to be done in the querying logic, to trace the queued upcoming staking events.

As an initial implementation, this can be implemented as a linear search over all queued staking events. However, chains that need long epochs should eventually build additional support for query-serving nodes to be able to produce results in constant time. (This is doable by maintaining an auxiliary hashmap for indexing upcoming staking events by address.)

**Step-3**: Adjust gas.

Currently gas represents the cost of executing a transaction when it's executed immediately. (Merging together costs of p2p overhead, state access overhead, and computational overhead.) However, now a transaction can cause computation in a future block, namely at the epoch boundary.
To handle this, we should initially include parameters for estimating the amount of future computation (denominated in gas), and add that as a flat charge needed for the message. We leave as out of scope how to weight future computation versus current computation in gas pricing, and set it such that they are weighted equally for now.

## Consequences

### Positive

* Abstracts the proof-of-stake module in a way that retains the existing functionality
* Enables new features such as validator-set based threshold cryptography

### Negative

* Increases the complexity of integrating more complex gas pricing mechanisms, as they now have to consider future execution costs as well.
* When the epoch length is greater than 1, validators can no longer leave the network immediately, and must wait until an epoch boundary.

# ADR 040: Storage and SMT State Commitments

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-040-storage-and-smt-state-commitments

2020-01-15: Draft

## Changelog

* 2020-01-15: Draft

## Status

DRAFT Not Implemented

## Abstract

Sparse Merkle Tree ([SMT](https://osf.io/8mcnh/)) is a version of a Merkle Tree with various storage and performance optimizations. This ADR defines a separation of state commitments from data storage and the Cosmos SDK transition from IAVL to SMT.

## Context

Currently, the Cosmos SDK uses IAVL for both state [commitments](https://cryptography.fandom.com/wiki/Commitment_scheme) and data storage. IAVL has effectively become an orphaned project within the Cosmos ecosystem and has proven to be an inefficient state commitment data structure.

IAVL is meant to be a standalone Merkleized key/value database; however, it uses a KV DB engine to store all tree nodes, so each node is stored in a separate record in the KV DB. This causes many inefficiencies and problems:

* Each object query requires a tree traversal from the root.
Subsequent queries for the same object are cached at the Cosmos SDK level.
* Each edge traversal requires a DB query.
* Creating snapshots is [expensive](https://github.com/cosmos/cosmos-sdk/issues/7215#issuecomment-684804950). It takes about 30 seconds to export less than 100 MB of state (as of March 2020).
* Updates in IAVL may trigger tree reorganization and possible O(log(n)) hash re-computations, which can become a CPU bottleneck.
* The node structure is pretty expensive - it contains the standard tree node elements (key, value, left and right element) and additional metadata such as height and version (which is not required by the Cosmos SDK). The entire node is hashed, and that hash is used as the key in the underlying database, [ref](https://github.com/cosmos/iavl/blob/master/docs/node/node.md).

Moreover, the IAVL project lacks support and a maintainer, and we already see better and well-established alternatives. Instead of optimizing IAVL, we are looking into other solutions for both storage and state commitments.

## Decision

We propose to separate the concerns of state commitment (**SC**), needed for consensus, and state storage (**SS**), needed for the state machine. Finally, we replace IAVL with [Celestia's SMT](https://github.com/lazyledger/smt). Celestia's SMT is based on the Diem (called jellyfish) design \[\*] - it uses a compute-optimized SMT by replacing subtrees that contain only default values with a single node (the same approach is used by Ethereum2) and implements compact proofs.

The storage model presented here doesn't deal with data structure nor serialization. It's a key-value database, where both key and value are binaries. The storage user is responsible for data serialization.

### Decouple state commitment from storage

Separation of storage and commitment (by the SMT) will allow the optimization of different components according to their usage and access patterns.

`SC` (SMT) is used to commit to data and compute Merkle proofs.
`SS` is used to directly access data. To avoid collisions, both `SS` and `SC` will use separate storage namespaces (they could use the same database underneath). `SS` will store each record directly (mapping `(key, value)` as `key → value`).

SMT is a Merkle tree structure: we don't store keys directly. For every `(key, value)` pair, `hash(key)` is used as the leaf path (we hash the key to uniformly distribute leaves in the tree) and `hash(value)` as the leaf contents. The tree structure is specified in more depth [below](#smt-for-state-commitment).

For data access we propose 2 additional KV buckets (implemented as namespaces for the key-value pairs, sometimes called [column families](https://github.com/facebook/rocksdb/wiki/Terminology)):

1. B1: `key → value`: the principal object storage, used by the state machine, behind the Cosmos SDK `KVStore` interface: provides direct access by key and allows prefix iteration (the KV DB backend must support it).
2. B2: `hash(key) → key`: a reverse index to get a key from an SMT path. Internally the SMT will store `(key, value)` as `prefix || hash(key) || hash(value)`. So, we can get an object value by composing `hash(key) → B2 → B1`.
3. We could use more buckets to optimize app usage if needed.

We propose to use a KV database for both `SS` and `SC`. The store interface will allow using the same physical DB backend for both `SS` and `SC`, as well as two separate DBs. The latter option allows for the separation of `SS` and `SC` into different hardware units, providing support for more complex setup scenarios and improving overall performance: one can use different backends (e.g. RocksDB and Badger) as well as independently tune the underlying DB configuration.
### Requirements

State Storage requirements:

* range queries
* quick (key, value) access
* creating a snapshot
* historical versioning
* pruning (garbage collection)

State Commitment requirements:

* fast updates
* tree paths should be short
* query historical commitment proofs using the ICS-23 standard
* pruning (garbage collection)

### SMT for State Commitment

A Sparse Merkle tree is based on the idea of a complete Merkle tree of an intractable size. The assumption here is that as the size of the tree is intractable, there will only be a few leaf nodes with valid data blocks relative to the tree size, rendering a sparse tree.

The full specification can be found at [Celestia](https://github.com/celestiaorg/celestia-specs/blob/ec98170398dfc6394423ee79b00b71038879e211/src/specs/data_structures.md#sparse-merkle-tree). In summary:

* The SMT consists of a binary Merkle tree, constructed in the same fashion as described in [Certificate Transparency (RFC-6962)](https://tools.ietf.org/html/rfc6962), but using SHA-2-256, as defined in [FIPS 180-4](https://doi.org/10.6028/NIST.FIPS.180-4), as the hashing function.
* Leaves and internal nodes are hashed differently: the one byte `0x00` is prepended for leaf nodes while `0x01` is prepended for internal nodes.
* Default values are given to leaf nodes with empty leaves.
* While the above rule is sufficient to pre-compute the values of intermediate nodes that are roots of empty subtrees, a further simplification is to extend this default value to all nodes that are roots of empty subtrees. The 32-byte zero is used as the default value. This rule takes precedence over the above one.
* An internal node that is the root of a subtree that contains exactly one non-empty leaf is replaced by that leaf's leaf node.

### Snapshots for storage sync and state versioning

Below, by a simple *snapshot* we refer to a database snapshot mechanism, not to an *ABCI snapshot sync*.
The latter will be referred to as *snapshot sync* (which will directly use the DB snapshot as described below).

A database snapshot is a view of the DB state at a certain time or transaction. It's not a full copy of a database (that would be too big). Usually a snapshot mechanism is based on *copy on write*, and it allows DB state to be efficiently delivered at a certain stage. Some DB engines support snapshotting. Hence, we propose to reuse that functionality for state sync and versioning (described below). We limit the supported DB engines to ones which efficiently implement snapshots. In a final section we discuss the evaluated DBs.

One of the Stargate core features is *snapshot sync*, delivered in the `/snapshot` package. It provides a way to trustlessly sync a blockchain without repeating all transactions from genesis. This feature is implemented in the Cosmos SDK and requires storage support. Currently, IAVL is the only supported backend. It works by streaming to a client a snapshot of `SS` at a certain version together with a header chain.

A new database snapshot will be created in every `EndBlocker` and identified by a block height. The `root` store keeps track of the available snapshots to offer `SS` at a certain version. The `root` store implements the `RootStore` interface described below. In essence, `RootStore` encapsulates a `Committer` interface. `Committer` has `Commit`, `SetPruning`, and `GetPruning` functions which will be used for creating and removing snapshots. The `rootStore.Commit` function creates a new snapshot and increments the version on each call, and checks if it needs to remove old versions. We will need to update the SMT interface to implement the `Committer` interface.

NOTE: `Commit` must be called exactly once per block. Otherwise, we risk the version number and block height going out of sync.
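The commit-per-block behavior described above (each `Commit` creates a snapshot, bumps the version, and prunes old versions) can be sketched as follows. This is an illustrative sketch only, using maps as stand-ins for DB snapshots; `RootStore`, `keepLast`, and `GetVersion` are hypothetical names, not the SDK API:

```go
package main

import "fmt"

// RootStore sketches a store whose Commit creates a snapshot per block,
// increments the version, and prunes snapshots outside a retention window.
type RootStore struct {
	version   uint64
	state     map[string]string
	snapshots map[uint64]map[string]string // version → snapshot view
	keepLast  uint64                       // pruning window (node configuration)
}

func NewRootStore(keepLast uint64) *RootStore {
	return &RootStore{
		state:     map[string]string{},
		snapshots: map[uint64]map[string]string{},
		keepLast:  keepLast,
	}
}

func (rs *RootStore) Set(k, v string) { rs.state[k] = v }

// Commit must be called exactly once per block.
func (rs *RootStore) Commit() uint64 {
	rs.version++
	snap := make(map[string]string, len(rs.state))
	for k, v := range rs.state {
		snap[k] = v
	}
	rs.snapshots[rs.version] = snap
	// Prune the snapshot that fell out of the retention window.
	if rs.version > rs.keepLast {
		delete(rs.snapshots, rs.version-rs.keepLast)
	}
	return rs.version
}

// GetVersion serves historical queries (abci.RequestQuery at height v).
func (rs *RootStore) GetVersion(v uint64) (map[string]string, bool) {
	s, ok := rs.snapshots[v]
	return s, ok
}

func main() {
	rs := NewRootStore(2)
	rs.Set("k", "a")
	rs.Commit() // version 1
	rs.Set("k", "b")
	rs.Commit() // version 2
	rs.Set("k", "c")
	rs.Commit() // version 3; version 1 is pruned
	if s, ok := rs.GetVersion(2); ok {
		fmt.Println(s["k"]) // prints b
	}
}
```

A real implementation would delegate snapshotting and pruning to the DB engine's copy-on-write snapshots rather than copying state.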
NOTE: For the Cosmos SDK storage, we may consider splitting that interface into `Committer` and `PruningCommitter` - only the multiroot should implement `PruningCommitter` (cache and prefix stores don't need pruning).

The number of historical versions for `abci.RequestQuery` and state sync snapshots is part of a node configuration, not a chain configuration (configuration implied by the blockchain consensus). A configuration should allow specifying the number of past blocks and the number of past blocks modulo some number (e.g.: 100 past blocks and one snapshot every 100 blocks for the past 2000 blocks). Archival nodes can keep all past versions.

Pruning old snapshots is effectively done by the database. Whenever we update a record in `SC`, the SMT won't update existing nodes - instead it creates new nodes on the update path, without removing the old ones. Since we are snapshotting each block, we need to change that mechanism to immediately remove orphaned nodes from the database. This is a safe operation - snapshots will keep track of the records and keep them available when accessing past versions.

To manage the active snapshots we will either use a DB *max number of snapshots* option (if available), or we will remove DB snapshots in the `EndBlocker`. The latter option can be done efficiently by identifying snapshots with block heights and calling a store function to remove past versions.

#### Accessing old state versions

One of the functional requirements is to access old state. This is done through the `abci.RequestQuery` structure. The version is specified by a block height (so we query for an object by a key `K` at block height `H`). The number of old versions supported for `abci.RequestQuery` is configurable. Accessing an old state is done by using the available snapshots. `abci.RequestQuery` doesn't need the old state of `SC` unless the `prove=true` parameter is set. The SMT Merkle proof must be included in the `abci.ResponseQuery` only if both `SC` and `SS` have a snapshot for the requested version.
Moreover, the Cosmos SDK could provide a way to directly access a historical state. However, a state machine shouldn't do that - since the number of snapshots is configurable, it would lead to nondeterministic execution.

We positively [validated](https://github.com/cosmos/cosmos-sdk/discussions/8297) a versioning and snapshot mechanism for querying old state with regard to the databases we evaluated.

### State Proofs

For any object stored in State Store (`SS`), we have a corresponding object in `SC`. A proof for object `V` identified by a key `K` is a branch of `SC`, where the path corresponds to the key `hash(K)`, and the leaf is `hash(K, V)`.

### Rollbacks

We need to be able to process transactions and roll back state updates if a transaction fails. This can be done in the following way: during transaction processing, we keep all state change requests (writes) in a `CacheWrapper` abstraction (as it's done today). Once we finish the block processing, in the `EndBlocker`, we commit a root store - at that time, all changes are written to the SMT and to `SS`, and a snapshot is created.

### Committing to an object without saving it

We identified use cases where modules need to save an object commitment without storing the object itself. Sometimes clients receive complex objects, and they have no way to prove the correctness of that object without knowing the storage layout. For those use cases it would be easier to commit to the object without storing it directly.

### Refactor MultiStore

The Stargate `/store` implementation (store/v1) adds an additional layer in the SDK store construction - the `MultiStore` structure. The multistore exists to support the modularity of the Cosmos SDK - each module uses its own instance of IAVL, but in the current implementation, all instances share the same database. The latter indicates, however, that the implementation doesn't provide true modularity.
Instead it causes problems related to race conditions and atomic DB commits (see: [#6370](https://github.com/cosmos/cosmos-sdk/issues/6370) and [discussion](https://github.com/cosmos/cosmos-sdk/discussions/8297#discussioncomment-757043)).

We propose to remove the multistore concept from the SDK, and to use a single instance of `SC` and `SS` in a `RootStore` object. To avoid confusion, we should rename the `MultiStore` interface to `RootStore`. The `RootStore` will have the following interface; the methods for configuring tracing and listeners are omitted for brevity.

```go
// Used where read-only access to versions is needed.
type BasicRootStore interface {
    Store
    GetKVStore(StoreKey) KVStore
    CacheRootStore() CacheRootStore
}

// Used as the main app state, replacing CommitMultiStore.
type CommitRootStore interface {
    BasicRootStore
    Committer
    Snapshotter

    GetVersion(uint64) (BasicRootStore, error)
    SetInitialVersion(uint64) error

    ... // Trace and Listen methods
}

// Replaces CacheMultiStore for branched state.
type CacheRootStore interface {
    BasicRootStore
    Write()

    ... // Trace and Listen methods
}

// Example of constructor parameters for the concrete type.
type RootStoreConfig struct {
    Upgrades       *StoreUpgrades
    InitialVersion uint64

    ReservePrefix(StoreKey, StoreType)
}
```

In contrast to `MultiStore`, `RootStore` doesn't allow dynamically mounting sub-stores or providing an arbitrary backing DB for individual sub-stores.

NOTE: modules will be able to use a special commitment and their own DBs. For example: a module which uses ZK proofs for state can store and commit this proof in the `RootStore` (usually as a single record) and manage the specialized store privately or using the `SC` low-level interface.
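The branched-state behavior implied by `CacheRootStore.Write()` (and the rollback flow described in the Rollbacks section) can be sketched with a toy cache wrapper: writes are buffered, a failed transaction simply discards the cache, and a successful one flushes it to the parent. This is an illustrative sketch with hypothetical names, not the SDK implementation:

```go
package main

import "fmt"

// KVStore is a toy parent store.
type KVStore struct {
	data map[string]string
}

func NewKVStore() *KVStore { return &KVStore{data: map[string]string{}} }

// CacheStore buffers writes until Write() is called.
type CacheStore struct {
	parent *KVStore
	dirty  map[string]string
}

func (s *KVStore) CacheWrap() *CacheStore {
	return &CacheStore{parent: s, dirty: map[string]string{}}
}

func (c *CacheStore) Set(k, v string) { c.dirty[k] = v }

// Get reads through the cache, falling back to the parent.
func (c *CacheStore) Get(k string) string {
	if v, ok := c.dirty[k]; ok {
		return v
	}
	return c.parent.data[k]
}

// Write flushes the buffered writes to the parent store.
func (c *CacheStore) Write() {
	for k, v := range c.dirty {
		c.parent.data[k] = v
	}
}

func main() {
	root := NewKVStore()
	root.data["balance"] = "100"

	// Failing tx: the cache is discarded, the parent stays untouched.
	tx1 := root.CacheWrap()
	tx1.Set("balance", "0")
	// ... tx fails: Write() is never called.

	// Successful tx: Write() commits the buffered changes.
	tx2 := root.CacheWrap()
	tx2.Set("balance", "90")
	tx2.Write()

	fmt.Println(root.data["balance"]) // prints 90
}
```

Rolling back a failed transaction is therefore free: the parent store is only touched at `Write()` time.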
#### Compatibility support

To ease the transition to this new interface for users, we can create a shim which wraps a `CommitMultiStore` but provides a `CommitRootStore` interface, and expose functions to safely create and access the underlying `CommitMultiStore`.

The new `RootStore` and supporting types can be implemented in a `store/v2alpha1` package to avoid breaking existing code.

#### Merkle Proofs and IBC

Currently, an IBC (v1.0) Merkle proof path consists of two elements (`["<store-key>", "<record-key>"]`), with each key corresponding to a separate proof. These are each verified according to individual [ICS-23 specs](https://github.com/cosmos/ibc-go/blob/f7051429e1cf833a6f65d51e6c3df1609290a549/modules/core/23-commitment/types/merkle.go#L17), and the result hash of each step is used as the committed value of the next step, until a root commitment hash is obtained. The root hash of the proof for the `<record-key>` is hashed with the `<store-key>` to validate against the App Hash.

This is not compatible with the `RootStore`, which stores all records in a single Merkle tree structure, and won't produce separate proofs for the store- and record-key. Ideally, the store-key component of the proof could just be omitted, and updated to use a "no-op" spec, so only the record-key is used. However, because the IBC verification code hardcodes the `"ibc"` prefix and applies it to the SDK proof as a separate element of the proof path, this isn't possible without a breaking change. Breaking this behavior would severely impact the Cosmos ecosystem, which already widely adopts the IBC module. Requesting an update of the IBC module across chains is a time-consuming effort and not easily feasible.

As a workaround, the `RootStore` will have to use two separate SMTs (they could use the same underlying DB): one for IBC state and one for everything else. A simple Merkle map that references these SMTs will act as a Merkle Tree to create a final App hash.
The Merkle map is not stored in a DB - it's constructed at runtime. The IBC substore key must be `"ibc"`.

The workaround can still guarantee atomic syncs: the [proposed DB backends](#evaluated-kv-databases) support atomic transactions and efficient rollbacks, which will be used in the commit phase.

The presented workaround can be used until the IBC module is fully upgraded to support single-element commitment proofs.

### Optimization: compress module key prefixes

We consider a compression of prefix keys by creating a mapping from module key to an integer, and serializing the integer using varint coding. Varint coding assures that different values don't have a common byte prefix. For Merkle Proofs we can't use prefix compression - so it should only apply to the `SS` keys. Moreover, the prefix compression should only be applied to the module namespace. More precisely:

* each module has its own namespace;
* when accessing a module namespace we create a KVStore with an embedded prefix;
* that prefix will be compressed only when accessing and managing `SS`.

We need to assure that the codes won't change. We can fix the mapping in a static variable (provided by an app) or in `SS` state under a special key.

TODO: need to make a decision about the key compression.

## Optimization: SS key compression

Some objects may be saved with a key which contains a Protobuf message type. Such keys are long. We could save a lot of space if we can map Protobuf message types to varints.

TODO: finalize this or move to another ADR.

## Migration

Using the new store will require a migration. Two migrations are proposed:

1. Genesis export -- it will reset the blockchain history.
2.
In-place migration: we can reuse `UpgradeKeeper.SetUpgradeHandler` to provide the migration logic:

```go
app.UpgradeKeeper.SetUpgradeHandler("adr-40", func(ctx sdk.Context, plan upgradetypes.Plan, vm module.VersionMap) (module.VersionMap, error) {

    storev2.Migrate(iavlstore, v2.store)

    // RunMigrations returns the VersionMap
    // with the updated module ConsensusVersions
    return app.mm.RunMigrations(ctx, vm)
})
```

The `Migrate` function will read all entries from a store/v1 DB and save them to the ADR-40 combined KV store. The cache layer should not be used and the operation must finish with a single Commit call.

Inserting records into the `SC` (SMT) component is the bottleneck. Unfortunately, SMT doesn't support batch transactions. Adding batch transactions to the `SC` layer is considered as a feature after the main release.

## Consequences

### Backwards Compatibility

This ADR doesn't introduce any Cosmos SDK level API changes.

Since we change the storage layout of the state machine, a storage hard fork and network upgrade are required to incorporate these changes. SMT provides Merkle proof functionality; however, it is not compatible with ICS-23. Updating the proofs for ICS-23 compatibility is required.

### Positive

* Decoupling state from state commitment introduces better engineering opportunities for further optimizations and better storage patterns.
* Performance improvements.
* Joining the SMT-based camp, which has wider and proven adoption than IAVL. Example projects which decided on SMT: Ethereum2, Diem (Libra), Trillian, Tezos, Celestia.
* Multistore removal fixes a longstanding issue with the current MultiStore design.
* Simplifies Merkle proofs - all modules, except IBC, have only one pass for a Merkle proof.

### Negative

* Storage migration
* LL SMT doesn't support pruning - we will need to add and test that functionality.
* `SS` keys will have an overhead of a key prefix.
This doesn't impact `SC` because all keys in `SC` have the same size (they are hashed).

### Neutral

* Deprecating IAVL, which is one of the core proposals of the Cosmos Whitepaper.

## Alternative designs

Most of the alternative designs were evaluated in a prior state commitments and storage report.

Ethereum research published the [Verkle Trie](https://dankradfeist.de/ethereum/2021/06/18/verkle-trie-for-eth1.html) - an idea of combining polynomial commitments with a Merkle tree in order to reduce the tree height. This concept has very good potential, but we think it's too early to implement it. The current, SMT-based design could be easily updated to the Verkle Trie once the necessary libraries are implemented by other research efforts. The main advantage of the design described in this ADR is the separation of state commitments from the data storage and the design of a more powerful interface.

## Further Discussions

### Evaluated KV Databases

We evaluated existing KV databases for snapshot support. The following databases provide an efficient snapshot mechanism: Badger, RocksDB, [Pebble](https://github.com/cockroachdb/pebble). Databases which don't provide such support or are not production ready: boltdb, leveldb, goleveldb, memdb, lmdb.

### RDBMS

Use of an RDBMS instead of a simple KV store for state. Use of an RDBMS will require a Cosmos SDK API breaking change (the `KVStore` interface) and will allow better data extraction and indexing solutions. Instead of saving an object as a single blob of bytes, we could save it as a record in a table in the state storage layer, and as `hash(key, protobuf(object))` in the SMT as outlined above. To verify that an object registered in the RDBMS is the same as the one committed to the SMT, one will need to load it from the RDBMS, marshal it using protobuf, hash it, and do an SMT search.

### Off Chain Store

We were discussing a use case where modules can use a support database which is not automatically committed.
Modules will be responsible for having a sound storage model and can optionally use the feature discussed in the *Committing to an object without saving it* section.

## References

* [IAVL What's Next?](https://github.com/cosmos/cosmos-sdk/issues/7100)
* [IAVL overview](https://docs.google.com/document/d/16Z_hW2rSAmoyMENO-RlAhQjAG3mSNKsQueMnKpmcBv0/edit#heading=h.yd2th7x3o1iv) of its state as of v0.15
* [Celestia (LazyLedger) SMT](https://github.com/lazyledger/smt)
* Facebook Diem (Libra) SMT [design](https://developers.diem.com/papers/jellyfish-merkle-tree/2021-01-14.pdf)
* [Trillian Revocation Transparency](https://github.com/google/trillian/blob/master/docs/papers/RevocationTransparency.pdf), [Trillian Verifiable Data Structures](https://github.com/google/trillian/blob/master/docs/papers/VerifiableDataStructures.pdf).
* Design and implementation [discussion](https://github.com/cosmos/cosmos-sdk/discussions/8297).
* [How to Upgrade IBC Chains and their Clients](https://github.com/cosmos/ibc-go/blob/main/docs/docs/01-ibc/05-upgrades/01-quick-guide.md)
* [ADR-40 Effect on IBC](https://github.com/cosmos/ibc-go/discussions/256)

# ADR 041: In-Place Store Migrations

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-041-in-place-store-migrations

17.02.2021: Initial Draft

## Changelog

* 17.02.2021: Initial Draft

## Status

Accepted

## Abstract

This ADR introduces a mechanism to perform in-place state store migrations during chain software upgrades.

## Context

When a chain upgrade introduces state-breaking changes inside modules, the current procedure consists of exporting the whole state into a JSON file (via the `simd export` command), running migration scripts on the JSON file (`simd genesis migrate` command), clearing the stores (`simd unsafe-reset-all` command), and starting a new chain with the migrated JSON file as the new genesis (optionally with a custom initial block height).
An example of such a procedure can be seen [in the Cosmos Hub 3->4 migration guide](https://github.com/cosmos/gaia/blob/v4.0.3/docs/migration/cosmoshub-3.md#upgrade-procedure). This procedure is cumbersome for multiple reasons:

* The procedure takes time. It can take hours to run the `export` command, plus some additional hours to run `InitChain` on the fresh chain using the migrated JSON.
* The exported JSON file can be heavy (\~100MB-1GB), making it difficult to view, edit and transfer, which in turn introduces additional work to solve these problems (such as [streaming genesis](https://github.com/cosmos/cosmos-sdk/issues/6936)).

## Decision

We propose a migration procedure based on modifying the KV store in-place without involving the JSON export-process-import flow described above.

### Module `ConsensusVersion`

We introduce a new method on the `AppModule` interface:

```go
type AppModule interface {
    // --snip--
    ConsensusVersion() uint64
}
```

This method returns a `uint64` which serves as the state-breaking version of the module. It MUST be incremented on each consensus-breaking change introduced by the module. To avoid potential errors with default values, the initial version of a module MUST be set to 1. In the Cosmos SDK, version 1 corresponds to the modules in the v0.41 series.

### Module-Specific Migration Functions

For each consensus-breaking change introduced by the module, a migration script from ConsensusVersion `N` to version `N+1` MUST be registered in the `Configurator` using its newly-added `RegisterMigration` method. All modules receive a reference to the configurator in their `RegisterServices` method on `AppModule`, and this is where the migration functions should be registered. The migration functions should be registered in increasing order.
```go
func (am AppModule) RegisterServices(cfg module.Configurator) {
    // --snip--
    cfg.RegisterMigration(types.ModuleName, 1, func(ctx sdk.Context) error {
        // Perform in-place store migrations from ConsensusVersion 1 to 2.
    })
    cfg.RegisterMigration(types.ModuleName, 2, func(ctx sdk.Context) error {
        // Perform in-place store migrations from ConsensusVersion 2 to 3.
    })
    // etc.
}
```

For example, if the new ConsensusVersion of a module is `N`, then `N-1` migration functions MUST be registered in the configurator.

In the Cosmos SDK, the migration functions are handled by each module's keeper, because the keeper holds the `sdk.StoreKey` used to perform in-place store migrations. To not overload the keeper, a `Migrator` wrapper is used by each module to handle the migration functions:

```go
// Migrator is a struct for handling in-place store migrations.
type Migrator struct {
    BaseKeeper
}
```

Migration functions should live inside the `migrations/` folder of each module, and be called by the Migrator's methods. We propose the format `Migrate{M}to{N}` for method names.

```go
// Migrate1to2 migrates from version 1 to 2.
func (m Migrator) Migrate1to2(ctx sdk.Context) error {
    return v2bank.MigrateStore(ctx, m.keeper.storeKey) // v2bank is package `x/bank/migrations/v2`.
}
```

Each module's migration functions are specific to the module's store evolutions, and are not described in this ADR. An example of x/bank store key migrations after the introduction of ADR-028 length-prefixed addresses can be seen in this [store.go code](https://github.com/cosmos/cosmos-sdk/blob/36f68eb9e041e20a5bb47e216ac5eb8b91f95471/x/bank/legacy/v043/store.go#L41-L62).
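The registration rules above (one handler per `fromVersion`, `N-1` handlers for ConsensusVersion `N`) can be sketched with a toy configurator. This is an illustrative sketch, not the SDK's `module.Configurator`; the handler signature is simplified to drop `sdk.Context`:

```go
package main

import "fmt"

// MigrationHandler is a simplified stand-in for the SDK's handler type.
type MigrationHandler func() error

// Configurator keeps per-module handlers keyed by the version they migrate FROM.
type Configurator struct {
	migrations map[string]map[uint64]MigrationHandler
}

func NewConfigurator() *Configurator {
	return &Configurator{migrations: map[string]map[uint64]MigrationHandler{}}
}

// RegisterMigration registers the fromVersion -> fromVersion+1 handler,
// rejecting duplicate registrations for the same step.
func (c *Configurator) RegisterMigration(module string, fromVersion uint64, h MigrationHandler) error {
	if _, ok := c.migrations[module][fromVersion]; ok {
		return fmt.Errorf("%s: migration %d -> %d already registered", module, fromVersion, fromVersion+1)
	}
	if c.migrations[module] == nil {
		c.migrations[module] = map[uint64]MigrationHandler{}
	}
	c.migrations[module][fromVersion] = h
	return nil
}

func main() {
	cfg := NewConfigurator()
	cfg.RegisterMigration("bank", 1, func() error { return nil }) // 1 -> 2
	cfg.RegisterMigration("bank", 2, func() error { return nil }) // 2 -> 3
	// For ConsensusVersion N = 3, N-1 = 2 handlers are registered.
	fmt.Println(len(cfg.migrations["bank"])) // prints 2
}
```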
### Tracking Module Versions in `x/upgrade`

We introduce a new prefix store in `x/upgrade`'s store. This store will track each module's current version. It can be modeled as a `map[string]uint64` of module name to module ConsensusVersion, and will be used when running the migrations (see next section for details). The key prefix used is `0x1`, and the key/value format is:

```text
0x1 | {bytes(module_name)} => BigEndian(module_consensus_version)
```

The initial state of the store is set from `app.go`'s `InitChainer` method.

The UpgradeHandler signature needs to be updated to take a `VersionMap`, as well as return an upgraded `VersionMap` and an error:

```diff
- type UpgradeHandler func(ctx sdk.Context, plan Plan)
+ type UpgradeHandler func(ctx sdk.Context, plan Plan, versionMap VersionMap) (VersionMap, error)
```

To apply an upgrade, we query the `VersionMap` from the `x/upgrade` store and pass it into the handler. The handler runs the actual migration functions (see next section), and if successful, returns an updated `VersionMap` to be stored in state.

```diff
func (k UpgradeKeeper) ApplyUpgrade(ctx sdk.Context, plan types.Plan) {
    // --snip--
-   handler(ctx, plan)
+   updatedVM, err := handler(ctx, plan, k.GetModuleVersionMap(ctx)) // k.GetModuleVersionMap() fetches the VersionMap stored in state.
+   if err != nil {
+       return err
+   }
+
+   // Set the updated consensus versions to state
+   k.SetModuleVersionMap(ctx, updatedVM)
}
```

A gRPC query endpoint to query the `VersionMap` stored in `x/upgrade`'s state will also be added, so that app developers can double-check the `VersionMap` before the upgrade handler runs.
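The key/value format above (prefix byte, module name, big-endian `uint64`) can be sketched with the standard library; the `versionKey`/`encodeVersion` helper names are hypothetical, and a plain map stands in for the prefix store:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// versionMapPrefix is the x/upgrade version-map key prefix described above.
var versionMapPrefix = []byte{0x1}

// versionKey builds 0x1 | bytes(module_name).
func versionKey(moduleName string) []byte {
	return append(append([]byte{}, versionMapPrefix...), []byte(moduleName)...)
}

// encodeVersion serializes a ConsensusVersion as 8 big-endian bytes.
func encodeVersion(v uint64) []byte {
	buf := make([]byte, 8)
	binary.BigEndian.PutUint64(buf, v)
	return buf
}

func decodeVersion(bz []byte) uint64 { return binary.BigEndian.Uint64(bz) }

func main() {
	store := map[string][]byte{} // stand-in for the prefix store
	store[string(versionKey("bank"))] = encodeVersion(2)

	v := decodeVersion(store[string(versionKey("bank"))])
	fmt.Printf("bank ConsensusVersion = %d\n", v) // prints: bank ConsensusVersion = 2
}
```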
### Running Migrations

Once all the migration handlers are registered inside the configurator (which happens at startup), running migrations can happen by calling the `RunMigrations` method on `module.Manager`. This function will loop through all modules, and for each module:

* Get the old ConsensusVersion of the module from its `VersionMap` argument (let's call it `M`).
* Fetch the new ConsensusVersion of the module from the `ConsensusVersion()` method on `AppModule` (call it `N`).
* If `N>M`, run all registered migrations for the module sequentially `M -> M+1 -> M+2...` until `N`.
* There is a special case where there is no ConsensusVersion for the module, as this means that the module has been newly added during the upgrade. In this case, no migration function is run, and the module's current ConsensusVersion is saved to `x/upgrade`'s store.

If a required migration is missing (e.g. if it has not been registered in the `Configurator`), then the `RunMigrations` function will error.

In practice, the `RunMigrations` method should be called from inside an `UpgradeHandler`.

```go
app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, vm module.VersionMap) (module.VersionMap, error) {
    return app.mm.RunMigrations(ctx, vm)
})
```

Assuming a chain upgrades at block `N`, the procedure should run as follows:

* the old binary will halt in `BeginBlock` when starting block `N`. In its store, the ConsensusVersions of the old binary's modules are stored.
* the new binary will start at block `N`. The UpgradeHandler is set in the new binary, so it will run at `BeginBlock` of the new binary. Inside `x/upgrade`'s `ApplyUpgrade`, the `VersionMap` will be retrieved from the (old binary's) store, and passed into the `RunMigrations` function, migrating all module stores in-place before the modules' own `BeginBlock`s.
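The control flow described above (run handlers from `M` up to `N`, record newly added modules at their current version, error on missing steps) can be sketched as follows. This is an illustrative sketch of the loop, not the SDK's `module.Manager.RunMigrations`; the handler signature is simplified to drop `sdk.Context`:

```go
package main

import "fmt"

// VersionMap maps module name to ConsensusVersion.
type VersionMap map[string]uint64

// RunMigrations sketches the migration loop: for each module in the new
// binary, run the registered fromVersion handlers sequentially from the
// stored version M up to the new version N.
func RunMigrations(
	fromVM VersionMap, // versions stored by the old binary
	current VersionMap, // each module's ConsensusVersion() in the new binary
	handlers map[string]map[uint64]func() error, // module → fromVersion → handler
) (VersionMap, error) {
	updated := VersionMap{}
	for module, n := range current {
		m, existed := fromVM[module]
		if !existed {
			// Newly added module: no migration, just record its version.
			updated[module] = n
			continue
		}
		for v := m; v < n; v++ {
			h, ok := handlers[module][v]
			if !ok {
				return nil, fmt.Errorf("%s: missing migration %d -> %d", module, v, v+1)
			}
			if err := h(); err != nil {
				return nil, err
			}
		}
		updated[module] = n
	}
	return updated, nil
}

func main() {
	noop := func() error { return nil }
	handlers := map[string]map[uint64]func() error{
		"bank": {1: noop, 2: noop},
	}
	vm, err := RunMigrations(
		VersionMap{"bank": 1},           // stored versions (old binary)
		VersionMap{"bank": 3, "nft": 1}, // versions in the new binary
		handlers,
	)
	fmt.Println(vm["bank"], vm["nft"], err) // prints: 3 1 <nil>
}
```

The returned map is what the upgrade handler would hand back to `x/upgrade` to persist as the new stored `VersionMap`.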
## Consequences

### Backwards Compatibility

This ADR introduces a new method `ConsensusVersion()` on `AppModule`, which all modules need to implement. It also alters the UpgradeHandler function signature. As such, it is not backwards-compatible.

While modules MUST register their migration functions when bumping ConsensusVersions, running those scripts using an upgrade handler is optional. An application may perfectly well decide not to call `RunMigrations` inside its upgrade handler, and continue using the legacy JSON migration path.

### Positive

* Perform chain upgrades without manipulating JSON files.
* While no benchmark has been made yet, it is probable that in-place store migrations will take less time than JSON migrations. The main reason supporting this claim is that both the `simd export` command on the old binary and the `InitChain` function on the new binary will be skipped.

### Negative

* Module developers MUST correctly track consensus-breaking changes in their modules. If a consensus-breaking change is introduced in a module without its corresponding `ConsensusVersion()` bump, then the `RunMigrations` function won't detect the migration, and the chain upgrade might be unsuccessful. Documentation should clearly reflect this.

### Neutral

* The Cosmos SDK will continue to support JSON migrations via the existing `simd export` and `simd genesis migrate` commands.
* The current ADR does not allow creating, renaming or deleting stores, only modifying existing store keys and values. The Cosmos SDK already has the `StoreLoader` for those operations.
## Further Discussions

## References

* Initial discussion: [Link](https://github.com/cosmos/cosmos-sdk/discussions/8429)
* Implementation of `ConsensusVersion` and `RunMigrations`: [Link](https://github.com/cosmos/cosmos-sdk/pull/8485)
* Issue discussing `x/upgrade` design: [Link](https://github.com/cosmos/cosmos-sdk/issues/8514)

# ADR 042: Group Module

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-042-group-module

## Changelog

* 2020/04/09: Initial Draft

## Status

Draft

## Abstract

This ADR defines the `x/group` module which allows the creation and management of on-chain multi-signature accounts and enables voting for message execution based on configurable decision policies.

## Context

The legacy amino multi-signature mechanism of the Cosmos SDK has certain limitations:

* Key rotation is not possible, although this can be solved with [account rekeying](/sdk/latest/reference/architecture/adr-034-account-rekeying).
* Thresholds can't be changed.
* UX is cumbersome for non-technical users ([#5661](https://github.com/cosmos/cosmos-sdk/issues/5661)).
* It requires `legacy_amino` sign mode ([#8141](https://github.com/cosmos/cosmos-sdk/issues/8141)).

While the group module is not meant to be a total replacement for the current multi-signature accounts, it provides a solution to the limitations described above, with a more flexible key management system where keys can be added, updated or removed, as well as configurable thresholds. It's meant to be used with other access control modules such as [`x/feegrant`](/sdk/v0.50/build/architecture/adr-029-fee-grant-module) and [`x/authz`](/sdk/latest/reference/architecture/adr-030-authz-module) to simplify key management for individuals and organizations.
The proof of concept of the group module can be found in [Link](https://github.com/regen-network/regen-ledger/tree/master/proto/regen/group/v1alpha1) and [Link](https://github.com/regen-network/regen-ledger/tree/master/x/group).

## Decision

We propose merging the `x/group` module with its supporting [ORM/Table Store package](https://github.com/regen-network/regen-ledger/tree/master/orm) ([#7098](https://github.com/cosmos/cosmos-sdk/issues/7098)) into the Cosmos SDK and continuing development here. There will be a dedicated ADR for the ORM package.

### Group

A group is a composition of accounts with associated weights. It is not an account and doesn't have a balance. It doesn't in and of itself have any sort of voting or decision weight. Group members can create proposals and vote on them through group accounts using different decision policies.

It has an `admin` account which can manage members in the group, update the group metadata and set a new admin.

```protobuf
message GroupInfo {

    // group_id is the unique ID of this group.
    uint64 group_id = 1;

    // admin is the account address of the group's admin.
    string admin = 2;

    // metadata is any arbitrary metadata attached to the group.
    bytes metadata = 3;

    // version is used to track changes to a group's membership structure that
    // would break existing proposals. Whenever a member weight has changed,
    // or any member is added or removed, the version is incremented and will
    // invalidate all proposals from older versions.
    uint64 version = 4;

    // total_weight is the sum of the group members' weights.
    string total_weight = 5;
}
```

```protobuf
message GroupMember {

    // group_id is the unique ID of the group.
    uint64 group_id = 1;

    // member is the member data.
    Member member = 2;
}

// Member represents a group member with an account address,
// non-zero weight and metadata.
message Member {

    // address is the member's account address.
    string address = 1;

    // weight is the member's voting weight that should be greater than 0.
    string weight = 2;

    // metadata is any arbitrary metadata attached to the member.
    bytes metadata = 3;
}
```

### Group Account

A group account is an account associated with a group and a decision policy. A group account does have a balance. Group accounts are abstracted from groups because a single group may have multiple decision policies for different types of actions. Managing group membership separately from decision policies results in the least overhead and keeps membership consistent across different policies. The recommended pattern is to have a single master group account for a given group, and then to create separate group accounts with different decision policies and delegate the desired permissions from the master account to those "sub-accounts" using the [`x/authz` module](/sdk/latest/reference/architecture/adr-030-authz-module).

```protobuf
message GroupAccountInfo {

    // address is the group account address.
    string address = 1;

    // group_id is the ID of the Group the GroupAccount belongs to.
    uint64 group_id = 2;

    // admin is the account address of the group admin.
    string admin = 3;

    // metadata is any arbitrary metadata of this group account.
    bytes metadata = 4;

    // version is used to track changes to a group's GroupAccountInfo structure that
    // invalidates active proposals from old versions.
    uint64 version = 5;

    // decision_policy specifies the group account's decision policy.
    google.protobuf.Any decision_policy = 6 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"];
}
```

Similarly to a group admin, a group account admin can update its metadata, decision policy or set a new group account admin.

A group account can also be an admin or a member of a group. For instance, a group admin could be another group account which could "elect" the members, or it could be the same group that elects itself.

### Decision Policy

A decision policy is the mechanism by which members of a group can vote on proposals.

All decision policies should have a minimum and maximum voting window. The minimum voting window is the minimum duration that must pass in order for a proposal to potentially pass, and it may be set to 0. The maximum voting window is the maximum time that a proposal may be voted on and executed if it reached enough support before it is closed. Both of these values must be less than a chain-wide max voting window parameter.

We define the `DecisionPolicy` interface that all decision policies must implement:

```go
type DecisionPolicy interface {
	codec.ProtoMarshaler

	ValidateBasic() error
	GetTimeout() types.Duration
	Allow(tally Tally, totalPower string, votingDuration time.Duration) (DecisionPolicyResult, error)
	Validate(g GroupInfo) error
}

type DecisionPolicyResult struct {
	Allow bool
	Final bool
}
```

#### Threshold decision policy

A threshold decision policy defines a minimum threshold of support votes (*yes*), based on a tally of voter weights, for a proposal to pass. For this decision policy, abstain and veto are treated as no support (*no*).

```protobuf
message ThresholdDecisionPolicy {

    // threshold is the minimum weighted sum of support votes for a proposal to succeed.
    string threshold = 1;

    // voting_period is the duration from submission of a proposal to the end of the voting period.
    // Within this period, votes and exec messages can be submitted.
    google.protobuf.Duration voting_period = 2 [(gogoproto.nullable) = false];
}
```

### Proposal

Any member of a group can submit a proposal for a group account to decide upon. A proposal consists of a set of `sdk.Msg`s that will be executed if the proposal passes, as well as any metadata associated with the proposal. These `sdk.Msg`s get validated as part of the `Msg/CreateProposal` request validation. They should also have their signer set as the group account.

Internally, a proposal also tracks:

* its current `Status`: submitted, closed or aborted
* its `Result`: unfinalized, accepted or rejected
* its `VoteState` in the form of a `Tally`, which is calculated on new votes and when executing the proposal.

```protobuf
// Tally represents the sum of weighted votes.
message Tally {
    option (gogoproto.goproto_getters) = false;

    // yes_count is the weighted sum of yes votes.
    string yes_count = 1;

    // no_count is the weighted sum of no votes.
    string no_count = 2;

    // abstain_count is the weighted sum of abstainers.
    string abstain_count = 3;

    // veto_count is the weighted sum of vetoes.
    string veto_count = 4;
}
```

### Voting

Members of a group can vote on proposals. There are four voting options: yes, no, abstain and veto. Not all decision policies will support all of them. Votes can contain some optional metadata. In the current implementation, the voting window begins as soon as a proposal is submitted.

Voting internally updates the proposal `VoteState` as well as `Status` and `Result` if needed.
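The `Allow` semantics of the threshold policy can be sketched with integer weights. This is a simplified model under stated assumptions: the real module encodes weights as decimal strings, and the exact finality rule is ours for illustration.

```go
package main

import "fmt"

// Tally mirrors the Tally message, using integers for simplicity
// (the real module encodes weighted counts as decimal strings).
type Tally struct {
	YesCount, NoCount, AbstainCount, VetoCount uint64
}

// thresholdAllow models a ThresholdDecisionPolicy: a proposal passes once
// the weighted sum of yes votes reaches the threshold; abstain and veto
// count as no support. The result is final when yes votes can no longer
// reach the threshold, even if all remaining power voted yes.
func thresholdAllow(t Tally, threshold, totalPower uint64) (allow, final bool) {
	if t.YesCount >= threshold {
		return true, true
	}
	undecided := totalPower - t.YesCount - t.NoCount - t.AbstainCount - t.VetoCount
	if t.YesCount+undecided < threshold {
		return false, true // proposal can no longer pass
	}
	return false, false // still pending
}

func main() {
	fmt.Println(thresholdAllow(Tally{YesCount: 6}, 5, 10)) // true true
}
```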
### Executing Proposals

Proposals will not be automatically executed by the chain in this current design, but rather a user must submit a `Msg/Exec` transaction to attempt to execute the proposal based on the current votes and decision policy. A future upgrade could automate this and have the group account (or a fee granter) pay.

#### Changing Group Membership

In the current implementation, updating a group or a group account after submitting a proposal will make it invalid. It will simply fail if someone calls `Msg/Exec` and will eventually be garbage collected.

### Notes on current implementation

This section outlines the current implementation used in the proof of concept of the group module, but this could be subject to changes and iterated on.

#### ORM

The [ORM package](https://github.com/cosmos/cosmos-sdk/discussions/9156) defines tables, sequences and secondary indexes which are used in the group module.

Groups are stored in state as part of a `groupTable`, the `group_id` being an auto-increment integer. Group members are stored in a `groupMemberTable`.

Group accounts are stored in a `groupAccountTable`. The group account address is generated based on an auto-increment integer which is used to derive the group module `RootModuleKey` into a `DerivedModuleKey`, as stated in [ADR-033](/sdk/latest/reference/architecture/adr-033-protobuf-inter-module-comm#modulekeys-and-moduleids). The group account is added as a new `ModuleAccount` through `x/auth`.

Proposals are stored as part of the `proposalTable` using the `Proposal` type. The `proposal_id` is an auto-increment integer.

Votes are stored in the `voteTable`. The primary key is based on the vote's `proposal_id` and `voter` account address.
#### ADR-033 to route proposal messages

Inter-module communication introduced by [ADR-033](/sdk/latest/reference/architecture/adr-033-protobuf-inter-module-comm) can be used to route a proposal's messages using the `DerivedModuleKey` corresponding to the proposal's group account.

## Consequences

### Positive

* Improved UX for multi-signature accounts allowing key rotation and custom decision policies.

### Negative

### Neutral

* It uses ADR 033 so it will need to be implemented within the Cosmos SDK, but this doesn't necessarily imply any large refactoring of existing Cosmos SDK modules.
* The current implementation of the group module uses the ORM package.

## Further Discussions

* Convergence of `x/group` and `x/gov` as both support proposals and voting: [Link](https://github.com/cosmos/cosmos-sdk/discussions/9066)
* `x/group` possible future improvements:
  * Execute proposals on submission ([Link](https://github.com/regen-network/regen-ledger/issues/288))
  * Withdraw a proposal ([Link](https://github.com/regen-network/cosmos-modules/issues/41))
  * Make `Tally` more flexible and support non-binary choices

## References

* Initial specification:
  * [Link](https://gist.github.com/aaronc/b60628017352df5983791cad30babe56#group-module)
  * [#5236](https://github.com/cosmos/cosmos-sdk/pull/5236)
* Proposal to add `x/group` into the Cosmos SDK: [#7633](https://github.com/cosmos/cosmos-sdk/issues/7633)

# ADR 43: NFT Module

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-043-nft-module

## Changelog

* 2021-05-01: Initial Draft
* 2021-07-02: Review updates
* 2022-06-15: Add batch operation
* 2022-11-11: Remove strict validation of classID and tokenID

## Status

PROPOSED

## Abstract

This ADR defines the `x/nft` module which is a generic implementation of NFTs, roughly "compatible" with ERC721.
**Applications using the `x/nft` module must implement the following functions**:

* `MsgNewClass` - Receive the user's request to create a class, and call the `NewClass` of the `x/nft` module.
* `MsgUpdateClass` - Receive the user's request to update a class, and call the `UpdateClass` of the `x/nft` module.
* `MsgMintNFT` - Receive the user's request to mint an NFT, and call the `MintNFT` of the `x/nft` module.
* `BurnNFT` - Receive the user's request to burn an NFT, and call the `BurnNFT` of the `x/nft` module.
* `UpdateNFT` - Receive the user's request to update an NFT, and call the `UpdateNFT` of the `x/nft` module.

## Context

NFTs are more than just crypto art, and they can be very helpful for accruing value to the Cosmos ecosystem. As a result, Cosmos Hub should implement NFT functions and enable a unified mechanism for storing and sending the ownership representative of NFTs, as discussed in [Link](https://github.com/cosmos/cosmos-sdk/discussions/9065).

As discussed in [#9065](https://github.com/cosmos/cosmos-sdk/discussions/9065), several potential solutions can be considered:

* irismod/nft and modules/incubator/nft
* CW721
* DID NFTs
* interNFT

Since functions/use cases of NFTs are tightly connected with their logic, it is almost impossible to support all the NFTs' use cases in one Cosmos SDK module by defining and implementing different transaction types.

Considering generic usage and compatibility of interchain protocols including IBC and Gravity Bridge, it is preferred to have a generic NFT module design which handles the generic NFT logic. This design idea can enable composability: application-specific functions should be managed by other modules on Cosmos Hub or on other Zones by importing the NFT module.

The current design is based on the work done by the [IRISnet team](https://github.com/irisnet/irismod/tree/master/modules/nft) and an older implementation in the [Cosmos repository](https://github.com/cosmos/modules/tree/master/incubator/nft).
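The delegation pattern in the function list above — an app-specific `Msg` handler validating a user request and calling into the generic `x/nft` keeper — can be sketched with toy types. All names and signatures here are illustrative; the real keeper methods take an `sdk.Context`.

```go
package main

import (
	"errors"
	"fmt"
)

// NFT and nftKeeper are toy stand-ins for the x/nft types.
type NFT struct {
	ClassID, ID, Owner string
}

type nftKeeper struct {
	nfts map[string]NFT // key: class_id/id
}

func (k nftKeeper) Mint(n NFT) error {
	key := n.ClassID + "/" + n.ID
	if _, ok := k.nfts[key]; ok {
		return errors.New("nft already exists")
	}
	k.nfts[key] = n
	return nil
}

// appMsgServer sketches an app-specific module: it validates the
// user-facing mint request (app logic such as ID rules or royalties
// would live here) and delegates storage to the generic x/nft keeper.
type appMsgServer struct {
	k nftKeeper
}

func (s appMsgServer) MintNFT(classID, id, receiver string) error {
	if classID == "" || id == "" {
		return errors.New("class id and nft id are required") // app-level validation
	}
	return s.k.Mint(NFT{ClassID: classID, ID: id, Owner: receiver})
}

func main() {
	srv := appMsgServer{k: nftKeeper{nfts: map[string]NFT{}}}
	fmt.Println(srv.MintNFT("kitties", "1", "cosmos1alice"))
}
```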
## Decision

We create a `x/nft` module, which contains the following functionality:

* Store NFTs and track their ownership.
* Expose a `Keeper` interface for composing modules to transfer, mint and burn NFTs.
* Expose an external `Message` interface for users to transfer ownership of their NFTs.
* Query NFTs and their supply information.

The proposed module is a base module for NFT app logic. Its goal is to provide a common layer for storage, basic transfer functionality and IBC. The module should not be used as a standalone. Instead, an app should create a specialized module to handle app-specific logic (e.g. NFT ID construction, royalties), user-level minting and burning. Moreover, an app-specialized module should handle auxiliary data to support the app logic (e.g. indexes, ORM, business data).

All data carried over IBC must be part of the `NFT` or `Class` type described below. The app-specific NFT data should be encoded in `NFT.data` for cross-chain integrity. Other objects related to NFTs, which are not important for integrity, can be part of the app-specific module.

### Types

We propose two main types:

* `Class` -- describes an NFT class. We can think about it as a smart contract address.
* `NFT` -- the object representing a unique, non-fungible asset. Each NFT is associated with a Class.

#### Class

An NFT **Class** is comparable to an ERC-721 smart contract (provides a description of a smart contract), under which a collection of NFTs can be created and managed.
```protobuf
message Class {
  string id = 1;
  string name = 2;
  string symbol = 3;
  string description = 4;
  string uri = 5;
  string uri_hash = 6;
  google.protobuf.Any data = 7;
}
```

* `id` is used as the primary index for storing the class; *required*
* `name` is a descriptive name of the NFT class; *optional*
* `symbol` is the symbol usually shown on exchanges for the NFT class; *optional*
* `description` is a detailed description of the NFT class; *optional*
* `uri` is a URI for the class metadata stored off chain. It should be a JSON file that contains metadata about the NFT class and NFT data schema ([OpenSea example](https://docs.opensea.io/docs/contract-level-metadata)); *optional*
* `uri_hash` is a hash of the document pointed to by uri; *optional*
* `data` is app-specific metadata of the class; *optional*

#### NFT

We define a general model for `NFT` as follows.

```protobuf
message NFT {
  string class_id = 1;
  string id = 2;
  string uri = 3;
  string uri_hash = 4;
  google.protobuf.Any data = 10;
}
```

* `class_id` is the identifier of the NFT class where the NFT belongs; *required*
* `id` is an identifier of the NFT, unique within the scope of its class. It is specified by the creator of the NFT and may be expanded to use DID in the future. `class_id` combined with `id` uniquely identifies an NFT and is used as the primary index for storing the NFT; *required*

```text
{class_id}/{id} --> NFT (bytes)
```

* `uri` is a URI for the NFT metadata stored off chain.
Should point to a JSON file that contains metadata about this NFT (Ref: [ERC721 standard and OpenSea extension](https://docs.opensea.io/docs/metadata-standards)); *required*
* `uri_hash` is a hash of the document pointed to by uri; *optional*
* `data` is app-specific data of the NFT. CAN be used by composing modules to specify additional properties of the NFT; *optional*

This ADR doesn't specify values that `data` can take; however, best practices recommend upper-level NFT modules clearly specify their contents. Although the value of this field doesn't provide the additional context required to manage NFT records, which means that the field can technically be removed from the specification, the field's existence allows basic informational/UI functionality.

### `Keeper` Interface

```go
type Keeper interface {
	NewClass(ctx sdk.Context, class Class)
	UpdateClass(ctx sdk.Context, class Class)

	Mint(ctx sdk.Context, nft NFT, receiver sdk.AccAddress) // updates totalSupply
	BatchMint(ctx sdk.Context, tokens []NFT, receiver sdk.AccAddress) error

	Burn(ctx sdk.Context, classId string, nftId string) // updates totalSupply
	BatchBurn(ctx sdk.Context, classID string, nftIDs []string) error

	Update(ctx sdk.Context, nft NFT)
	BatchUpdate(ctx sdk.Context, tokens []NFT) error

	Transfer(ctx sdk.Context, classId string, nftId string, receiver sdk.AccAddress)
	BatchTransfer(ctx sdk.Context, classID string, nftIDs []string, receiver sdk.AccAddress) error

	GetClass(ctx sdk.Context, classId string) Class
	GetClasses(ctx sdk.Context) []Class
	GetNFT(ctx sdk.Context, classId string, nftId string) NFT
	GetNFTsOfClassByOwner(ctx sdk.Context, classId string, owner sdk.AccAddress) []NFT
	GetNFTsOfClass(ctx sdk.Context, classId string) []NFT
	GetOwner(ctx sdk.Context, classId string, nftId string) sdk.AccAddress
	GetBalance(ctx sdk.Context, classId string, owner sdk.AccAddress) uint64
	GetTotalSupply(ctx sdk.Context, classId string) uint64
}
```

Other business logic implementations should be defined in composing modules that import `x/nft` and use its `Keeper`.

### `Msg` Service

```protobuf
service Msg {
  rpc Send(MsgSend) returns (MsgSendResponse);
}

message MsgSend {
  string class_id = 1;
  string id = 2;
  string sender = 3;
  string receiver = 4;
}

message MsgSendResponse {}
```

`MsgSend` can be used to transfer the ownership of an NFT to another address.

The implementation outline of the server is as follows:

```go
type msgServer struct{
	k Keeper
}

func (m msgServer) Send(ctx context.Context, msg *types.MsgSend) (*types.MsgSendResponse, error) {
	// check current ownership
	assertEqual(msg.Sender, m.k.GetOwner(msg.ClassId, msg.Id))

	// transfer ownership
	m.k.Transfer(msg.ClassId, msg.Id, msg.Receiver)

	return &types.MsgSendResponse{}, nil
}
```

The query service methods for the `x/nft` module are:

```protobuf
service Query {

  // Balance queries the number of NFTs of a given class owned by the owner, same as balanceOf in ERC721
  rpc Balance(QueryBalanceRequest) returns (QueryBalanceResponse) {
    option (google.api.http).get = "/cosmos/nft/v1beta1/balance/{owner}/{class_id}";
  }

  // Owner queries the owner of the NFT based on its class and id, same as ownerOf in ERC721
  rpc Owner(QueryOwnerRequest) returns (QueryOwnerResponse) {
    option (google.api.http).get = "/cosmos/nft/v1beta1/owner/{class_id}/{id}";
  }

  // Supply queries the number of NFTs from the given class, same as totalSupply of ERC721.
  rpc Supply(QuerySupplyRequest) returns (QuerySupplyResponse) {
    option (google.api.http).get = "/cosmos/nft/v1beta1/supply/{class_id}";
  }

  // NFTs queries all NFTs of a given class or owner (choose at least one of the two), similar to tokenByIndex in ERC721Enumerable
  rpc NFTs(QueryNFTsRequest) returns (QueryNFTsResponse) {
    option (google.api.http).get = "/cosmos/nft/v1beta1/nfts";
  }

  // NFT queries an NFT based on its class and id.
  rpc NFT(QueryNFTRequest) returns (QueryNFTResponse) {
    option (google.api.http).get = "/cosmos/nft/v1beta1/nfts/{class_id}/{id}";
  }

  // Class queries an NFT class based on its id
  rpc Class(QueryClassRequest) returns (QueryClassResponse) {
    option (google.api.http).get = "/cosmos/nft/v1beta1/classes/{class_id}";
  }

  // Classes queries all NFT classes
  rpc Classes(QueryClassesRequest) returns (QueryClassesResponse) {
    option (google.api.http).get = "/cosmos/nft/v1beta1/classes";
  }
}

// QueryBalanceRequest is the request type for the Query/Balance RPC method
message QueryBalanceRequest {
  string class_id = 1;
  string owner = 2;
}

// QueryBalanceResponse is the response type for the Query/Balance RPC method
message QueryBalanceResponse {
  uint64 amount = 1;
}

// QueryOwnerRequest is the request type for the Query/Owner RPC method
message QueryOwnerRequest {
  string class_id = 1;
  string id = 2;
}

// QueryOwnerResponse is the response type for the Query/Owner RPC method
message QueryOwnerResponse {
  string owner = 1;
}

// QuerySupplyRequest is the request type for the Query/Supply RPC method
message QuerySupplyRequest {
  string class_id = 1;
}

// QuerySupplyResponse is the response type for the Query/Supply RPC method
message QuerySupplyResponse {
  uint64 amount = 1;
}

// QueryNFTsRequest is the request type for the Query/NFTs RPC method
message QueryNFTsRequest {
  string class_id = 1;
  string owner = 2;
  cosmos.base.query.v1beta1.PageRequest pagination = 3;
}

// QueryNFTsResponse is the response type for the Query/NFTs RPC method
message QueryNFTsResponse {
  repeated cosmos.nft.v1beta1.NFT nfts = 1;
  cosmos.base.query.v1beta1.PageResponse pagination = 2;
}

// QueryNFTRequest is the request type for the Query/NFT RPC method
message QueryNFTRequest {
  string class_id = 1;
  string id = 2;
}

// QueryNFTResponse is the response type for the Query/NFT RPC method
message QueryNFTResponse {
  cosmos.nft.v1beta1.NFT nft = 1;
}

// QueryClassRequest is the request type for the Query/Class RPC method
message QueryClassRequest {
  string class_id = 1;
}

// QueryClassResponse is the response type for the Query/Class RPC method
message QueryClassResponse {
  cosmos.nft.v1beta1.Class class = 1;
}

// QueryClassesRequest is the request type for the Query/Classes RPC method
message QueryClassesRequest {
  // pagination defines an optional pagination for the request.
  cosmos.base.query.v1beta1.PageRequest pagination = 1;
}

// QueryClassesResponse is the response type for the Query/Classes RPC method
message QueryClassesResponse {
  repeated cosmos.nft.v1beta1.Class classes = 1;
  cosmos.base.query.v1beta1.PageResponse pagination = 2;
}
```

### Interoperability

Interoperability is all about reusing assets between modules and chains. The former is achieved by ADR-33: Protobuf client - server communication. At the time of writing, ADR-33 is not finalized. The latter is achieved by IBC. Here we will focus on the IBC side.

IBC is implemented per module. Here, we align on NFTs being recorded and managed in `x/nft`. This requires the creation of a new IBC standard and an implementation of it.

For IBC interoperability, NFT custom modules MUST use the NFT object type understood by the IBC client. So, for `x/nft` interoperability, custom NFT implementations (example: `x/cryptokitty`) should use the canonical `x/nft` module and proxy all NFT balance keeping functionality to `x/nft`, or else re-implement all functionality using the NFT object type understood by the IBC client.
In other words: `x/nft` becomes the standard NFT registry for all Cosmos NFTs (example: `x/cryptokitty` will register a kitty NFT in `x/nft` and use `x/nft` for bookkeeping). This was [discussed](https://github.com/cosmos/cosmos-sdk/discussions/9065#discussioncomment-873206) in the context of using `x/bank` as a general asset balance book. Not using `x/nft` will require implementing another module for IBC.

## Consequences

### Backward Compatibility

No backward incompatibilities.

### Forward Compatibility

This specification conforms to the ERC-721 smart contract specification for NFT identifiers. Note that ERC-721 defines uniqueness based on (contract address, uint256 tokenId), and we conform to this implicitly because a single module is currently aimed to track NFT identifiers. Note: use of the (mutable) data field to determine uniqueness is not safe.

### Positive

* NFT identifiers available on Cosmos Hub.
* Ability to build different NFT modules for the Cosmos Hub, e.g., ERC-721.
* NFT module which supports interoperability with IBC and other cross-chain infrastructures like Gravity Bridge

### Negative

* A new IBC app is required for `x/nft`
* A CW721 adapter is required

### Neutral

* Other functions need more modules. For example, a custody module is needed for NFT trading functionality, and a collectible module is needed for defining NFT properties.

## Further Discussions

For other kinds of applications on the Hub, more app-specific modules can be developed in the future:

* `x/nft/custody`: custody of NFTs to support trading functionality.
* `x/nft/marketplace`: selling and buying NFTs using sdk.Coins.
* `x/fractional`: a module to split the ownership of an asset (NFT or other assets) among multiple stakeholders. `x/group` should work for most of the cases.

Other networks in the Cosmos ecosystem could design and implement their own NFT modules for specific NFT applications and use cases.
## References

* Initial discussion: [Link](https://github.com/cosmos/cosmos-sdk/discussions/9065)
* x/nft: initialize module: [Link](https://github.com/cosmos/cosmos-sdk/pull/9174)
* [ADR 033](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-033-protobuf-inter-module-comm.md)

# ADR 044: Guidelines for Updating Protobuf Definitions

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-044-protobuf-updates-guidelines

## Changelog

* 28.06.2021: Initial Draft
* 02.12.2021: Add `Since:` comment for new fields
* 21.07.2022: Remove the rule of no new `Msg` in the same proto version.

## Status

Draft

## Abstract

This ADR provides guidelines and recommended practices when updating Protobuf definitions. These guidelines target module developers.

## Context

The Cosmos SDK maintains a set of [Protobuf definitions](https://github.com/cosmos/cosmos-sdk/tree/main/proto/cosmos). It is important to correctly design Protobuf definitions to avoid any breaking changes within the same version, so as not to break tooling (including indexers and explorers), wallets and other third-party integrations.

When making changes to these Protobuf definitions, the Cosmos SDK currently only follows [Buf's](https://docs.buf.build/) recommendations. We noticed however that Buf's recommendations might still result in breaking changes in the SDK in some cases. For example:

* Adding fields to `Msg`s. Adding fields is not a Protobuf spec-breaking operation. However, when adding new fields to `Msg`s, the unknown field rejection will throw an error when sending the new `Msg` to an older node.
* Marking fields as `reserved`. Protobuf proposes the `reserved` keyword for removing fields without the need to bump the package version.
However, by doing so, client backwards compatibility is broken as Protobuf doesn't generate anything for `reserved` fields. See [#9446](https://github.com/cosmos/cosmos-sdk/issues/9446) for more details on this issue.

Moreover, module developers often face other questions around Protobuf definitions such as "Can I rename a field?" or "Can I deprecate a field?" This ADR aims to answer all these questions by providing clear guidelines about allowed updates for Protobuf definitions.

## Decision

We decide to keep [Buf's](https://docs.buf.build/) recommendations with the following exceptions:

* `UNARY_RPC`: the Cosmos SDK currently does not support streaming RPCs.
* `COMMENT_FIELD`: the Cosmos SDK allows fields with no comments.
* `SERVICE_SUFFIX`: we use the `Query` and `Msg` service naming convention, which doesn't use the `-Service` suffix.
* `PACKAGE_VERSION_SUFFIX`: some packages, such as `cosmos.crypto.ed25519`, don't use a version suffix.
* `RPC_REQUEST_STANDARD_NAME`: Requests for the `Msg` service don't have the `-Request` suffix to keep backwards compatibility.

On top of Buf's recommendations we add the following guidelines that are specific to the Cosmos SDK.

### Updating Protobuf Definition Without Bumping Version

#### 1. Module developers MAY add new Protobuf definitions

Module developers MAY add new `message`s, new `Service`s, new `rpc` endpoints, and new fields to existing messages. This recommendation follows the Protobuf specification, but is added in this document for clarity, as the SDK requires one additional change.

The SDK requires the Protobuf comment of the new addition to contain one line with the following format:

```protobuf
// Since: cosmos-sdk {, ...}
```

Where each `version` denotes a minor ("0.45") or patch ("0.44.5") version from which the field is available.
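As an illustrative (unofficial) check, the accepted format — one or more minor or patch versions, comma-separated — can be captured in a regular expression:

```go
package main

import (
	"fmt"
	"regexp"
)

// sinceRe is an illustrative checker (not an official SDK tool) for the
// "Since:" comment format: minor ("0.45") or patch ("0.44.5") versions,
// separated by ", ".
var sinceRe = regexp.MustCompile(`^// Since: cosmos-sdk \d+\.\d+(\.\d+)?(, \d+\.\d+(\.\d+)?)*$`)

func validSince(comment string) bool {
	return sinceRe.MatchString(comment)
}

func main() {
	fmt.Println(validSince("// Since: cosmos-sdk 0.42.11, 0.44.5")) // true
	fmt.Println(validSince("// since: cosmos-sdk 0.44"))            // false
}
```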
This will greatly help client libraries, which can optionally use reflection or custom code generation to show or hide these fields depending on the targeted node version.

As examples, the following comments are valid:

```protobuf
// Since: cosmos-sdk 0.44
// Since: cosmos-sdk 0.42.11, 0.44.5
```

and the following ones are NOT valid:

```protobuf
// Since cosmos-sdk v0.44
// since: cosmos-sdk 0.44
// Since: cosmos-sdk 0.42.11 0.44.5
// Since: Cosmos SDK 0.42.11, 0.44.5
```

#### 2. Fields MAY be marked as `deprecated`, and nodes MAY implement a protocol-breaking change for handling these fields

Protobuf supports the [`deprecated` field option](https://developers.google.com/protocol-buffers/docs/proto#options), and this option MAY be used on any field, including `Msg` fields. If a node handles a Protobuf message with a non-empty deprecated field, the node MAY change its behavior upon processing it, even in a protocol-breaking way. When possible, the node MUST handle backwards compatibility without breaking the consensus (unless we increment the proto version).

As an example, the Cosmos SDK v0.42 to v0.43 update contained two Protobuf-breaking changes, listed below. Instead of bumping the package versions from `v1beta1` to `v1`, the SDK team decided to follow this guideline: the breaking changes were reverted, the affected fields were marked as deprecated, and the node implementation was modified to handle messages with deprecated fields. More specifically:

* The Cosmos SDK recently removed support for [time-based software upgrades](https://github.com/cosmos/cosmos-sdk/pull/8849). As such, the `time` field has been marked as deprecated in `cosmos.upgrade.v1beta1.Plan`. Moreover, the node will reject any proposal containing an upgrade Plan whose `time` field is non-empty.
* The Cosmos SDK now supports [governance split votes](/sdk/v0.50/build/architecture/adr-037-gov-split-vote). When querying for votes, the returned `cosmos.gov.v1beta1.Vote` message has its `option` field (used for a single vote option) deprecated in favor of its `options` field (allowing multiple vote options). Whenever possible, the SDK still populates the deprecated `option` field, that is, if and only if `len(options) == 1` and `options[0].Weight == 1.0`.

#### 3. Fields MUST NOT be renamed

Whereas the official Protobuf recommendations do not prohibit renaming fields, since renaming does not break the Protobuf binary representation, the SDK explicitly forbids renaming fields in Protobuf structs. The main reason for this choice is to avoid introducing breaking changes for clients, which often rely on hard-coded fields from generated types. Moreover, renaming fields leads to client-breaking changes in the JSON representation of Protobuf definitions, which is used in REST endpoints and in the CLI.

### Incrementing Protobuf Package Version

TODO, needs architecture review. Some topics:

* Bumping versions frequency
* When bumping versions, should the Cosmos SDK support both versions?
  * i.e. v1beta1 -> v1, should we have two folders in the Cosmos SDK, and handlers for both versions?
* mention ADR-023 Protobuf naming

## Consequences

> This section describes the resulting context, after applying the decision. All consequences should be listed here, not just the "positive" ones. A particular decision may have positive, negative, and neutral consequences, but all of them affect the team and project in the future.

### Backwards Compatibility

> All ADRs that introduce backwards incompatibilities must include a section describing these incompatibilities and their severity. The ADR must explain how the author proposes to deal with these incompatibilities. ADR submissions without a sufficient backwards compatibility treatise may be rejected outright.
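As an aside on guideline 2 above, the split-vote backfill rule ("populate `option` if and only if `len(options) == 1` and `options[0].Weight == 1.0`") is easy to express in code. This is a sketch with simplified stand-in types: the real message is `cosmos.gov.v1beta1.Vote`, and real vote weights are decimal strings rather than floats.

```go
package main

import "fmt"

// Simplified stand-ins for the generated cosmos.gov.v1beta1 types.
type WeightedVoteOption struct {
	Option int
	Weight float64 // the real type uses a decimal string; float64 keeps the sketch short
}

type Vote struct {
	Option  int                  // deprecated: set only for single full-weight votes
	Options []WeightedVoteOption // replaces Option
}

// backfillDeprecatedOption populates the deprecated Option field if and only
// if there is exactly one option with weight 1.0, mirroring the SDK rule.
func backfillDeprecatedOption(v *Vote) {
	if len(v.Options) == 1 && v.Options[0].Weight == 1.0 {
		v.Option = v.Options[0].Option
	}
}

func main() {
	v := Vote{Options: []WeightedVoteOption{{Option: 1, Weight: 1.0}}}
	backfillDeprecatedOption(&v)
	fmt.Println(v.Option) // single full-weight vote: mirrored into the deprecated field

	split := Vote{Options: []WeightedVoteOption{{Option: 1, Weight: 0.5}, {Option: 3, Weight: 0.5}}}
	backfillDeprecatedOption(&split)
	fmt.Println(split.Option) // split vote: the deprecated field stays empty
}
```

Legacy clients that only read `option` therefore keep working for the common single-option case, while split votes are visible only through `options`.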
### Positive

* less pain to tool developers
* more compatibility in the ecosystem
* ...

### Negative

`{negative consequences}`

### Neutral

* more rigor in Protobuf review

## Further Discussions

This ADR is still in the DRAFT stage, and the "Incrementing Protobuf Package Version" section will be filled in once we make a decision on how to correctly do it.

## Test Cases [optional]

Test cases for an implementation are mandatory for ADRs that are affecting consensus changes. Other ADRs can choose to include links to test cases if applicable.

## References

* [#9445](https://github.com/cosmos/cosmos-sdk/issues/9445) Release proto definitions v1
* [#9446](https://github.com/cosmos/cosmos-sdk/issues/9446) Address v1beta1 proto breaking changes

# ADR 045: BaseApp `{Check,Deliver}Tx` as Middlewares

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-045-check-delivertx-middlewares

## Changelog

* 20.08.2021: Initial draft.
* 07.12.2021: Update `tx.Handler` interface ([#10693](https://github.com/cosmos/cosmos-sdk/pull/10693)).
* 17.05.2022: ADR is abandoned, as middlewares are deemed too hard to reason about.

## Status

ABANDONED. Replacement is being discussed in [#11955](https://github.com/cosmos/cosmos-sdk/issues/11955).

## Abstract

This ADR replaces the current BaseApp `runTx` and antehandlers design with a middleware-based design.

## Context

BaseApp's implementation of ABCI `{Check,Deliver}Tx()` and its own `Simulate()` method call the `runTx` method under the hood, which first runs antehandlers, then executes `Msg`s. However, the [transaction Tips](https://github.com/cosmos/cosmos-sdk/issues/9406) and [refunding unused gas](https://github.com/cosmos/cosmos-sdk/issues/2150) use cases require custom logic to be run after the `Msg`s execution. There is currently no way to achieve this.
A naive solution would be to add post-`Msg` hooks to BaseApp. However, the Cosmos SDK team is thinking in parallel about the bigger picture of making app wiring simpler ([#9182](https://github.com/cosmos/cosmos-sdk/discussions/9182)), which includes making BaseApp more lightweight and modular.

## Decision

We decide to transform BaseApp's implementation of ABCI `{Check,Deliver}Tx` and its own `Simulate` methods to use a middleware-based design.

The two following interfaces are the base of the middleware design, and are defined in `types/tx`:

```go
type Handler interface {
	CheckTx(ctx context.Context, req Request, checkReq RequestCheckTx) (Response, ResponseCheckTx, error)
	DeliverTx(ctx context.Context, req Request) (Response, error)
	SimulateTx(ctx context.Context, req Request) (Response, error)
}

type Middleware func(Handler) Handler
```

where we define the following arguments and return types:

```go
type Request struct {
	Tx      sdk.Tx
	TxBytes []byte
}

type Response struct {
	GasWanted uint64
	GasUsed   uint64
	// MsgResponses is an array containing each Msg service handler's response
	// type, packed in an Any. This will get proto-serialized into the `Data` field
	// in the ABCI Check/DeliverTx responses.
	MsgResponses []*codectypes.Any
	Log          string
	Events       []abci.Event
}

type RequestCheckTx struct {
	Type abci.CheckTxType
}

type ResponseCheckTx struct {
	Priority int64
}
```

Please note that because CheckTx handles separate logic related to mempool prioritization, its signature is different from DeliverTx and SimulateTx.
BaseApp holds a reference to a `tx.Handler`:

```go
type BaseApp struct {
	// other fields
	txHandler tx.Handler
}
```

BaseApp's ABCI `{Check,Deliver}Tx()` and `Simulate()` methods simply call `app.txHandler.{Check,Deliver,Simulate}Tx()` with the relevant arguments. For example, for `DeliverTx`:

```go
func (app *BaseApp) DeliverTx(req abci.RequestDeliverTx) abci.ResponseDeliverTx {
	var abciRes abci.ResponseDeliverTx
	ctx := app.getContextForTx(runTxModeDeliver, req.Tx)
	res, err := app.txHandler.DeliverTx(ctx, tx.Request{TxBytes: req.Tx})
	if err != nil {
		abciRes = sdkerrors.ResponseDeliverTx(err, uint64(res.GasUsed), uint64(res.GasWanted), app.trace)
		return abciRes
	}

	abciRes, err = convertTxResponseToDeliverTx(res)
	if err != nil {
		return sdkerrors.ResponseDeliverTx(err, uint64(res.GasUsed), uint64(res.GasWanted), app.trace)
	}

	return abciRes
}

// convertTxResponseToDeliverTx converts a tx.Response into a abci.ResponseDeliverTx.
func convertTxResponseToDeliverTx(txRes tx.Response) (abci.ResponseDeliverTx, error) {
	data, err := makeABCIData(txRes)
	if err != nil {
		return abci.ResponseDeliverTx{}, err
	}

	return abci.ResponseDeliverTx{
		Data:   data,
		Log:    txRes.Log,
		Events: txRes.Events,
	}, nil
}

// makeABCIData generates the Data field to be sent to ABCI Check/DeliverTx.
func makeABCIData(txRes tx.Response) ([]byte, error) {
	return proto.Marshal(&sdk.TxMsgData{MsgResponses: txRes.MsgResponses})
}
```

The implementations are similar for `BaseApp.CheckTx` and `BaseApp.Simulate`.

`baseapp.txHandler`'s three methods' implementations can obviously be monolithic functions, but for modularity we propose a middleware composition design, where a middleware is simply a function that takes a `tx.Handler`, and returns another `tx.Handler` wrapped around the previous one.
### Implementing a Middleware

In practice, middlewares are created by a Go function that takes as arguments some parameters needed for the middleware, and returns a `tx.Middleware`.

For example, for creating an arbitrary `MyMiddleware`, we can implement:

```go
// myTxHandler is the tx.Handler of this middleware. Note that it holds a
// reference to the next tx.Handler in the stack.
type myTxHandler struct {
	// next is the next tx.Handler in the middleware stack.
	next tx.Handler
	// some other fields that are relevant to the middleware can be added here
}

// NewMyMiddleware returns a middleware that does this and that.
func NewMyMiddleware(arg1, arg2) tx.Middleware {
	return func(txh tx.Handler) tx.Handler {
		return myTxHandler{
			next: txh,
			// optionally, set arg1, arg2... if they are needed in the middleware
		}
	}
}

// Assert myTxHandler is a tx.Handler.
var _ tx.Handler = myTxHandler{}

func (h myTxHandler) CheckTx(ctx context.Context, req Request, checkReq RequestCheckTx) (Response, ResponseCheckTx, error) {
	// CheckTx specific pre-processing logic

	// run the next middleware
	res, checkRes, err := h.next.CheckTx(ctx, req, checkReq)

	// CheckTx specific post-processing logic

	return res, checkRes, err
}

func (h myTxHandler) DeliverTx(ctx context.Context, req Request) (Response, error) {
	// DeliverTx specific pre-processing logic

	// run the next middleware
	res, err := h.next.DeliverTx(ctx, req)

	// DeliverTx specific post-processing logic

	return res, err
}

func (h myTxHandler) SimulateTx(ctx context.Context, req Request) (Response, error) {
	// SimulateTx specific pre-processing logic

	// run the next middleware
	res, err := h.next.SimulateTx(ctx, req)

	// SimulateTx specific post-processing logic

	return res, err
}
```

### Composing Middlewares

While BaseApp simply holds a reference to a `tx.Handler`, this `tx.Handler` itself is defined using a middleware stack.
The Cosmos SDK exposes a base (i.e. innermost) `tx.Handler` called `RunMsgsTxHandler`, which executes messages.

Then, the app developer can compose multiple middlewares on top of the base `tx.Handler`. Each middleware can run pre- and post-processing logic around its next middleware, as described in the section above. Conceptually, as an example, given the middlewares `A`, `B`, and `C` and the base `tx.Handler` `H`, the stack looks like:

```text
A.pre
  B.pre
    C.pre
      H # The base tx.Handler, for example `RunMsgsTxHandler`
    C.post
  B.post
A.post
```

We define a `ComposeMiddlewares` function for composing middlewares. It takes the base handler as first argument, and middlewares in the "outer to inner" order. For the above stack, the final `tx.Handler` is:

```go
txHandler := middleware.ComposeMiddlewares(H, A, B, C)
```

The middleware stack is set in BaseApp via its `SetTxHandler` setter:

```go
// simapp/app.go
txHandler := middleware.ComposeMiddlewares(...)
app.SetTxHandler(txHandler)
```

The app developer can define their own middlewares, or use the Cosmos SDK's pre-defined middlewares from `middleware.NewDefaultTxHandler()`.

### Middlewares Maintained by the Cosmos SDK

While the app developer can define and compose the middlewares of their choice, the Cosmos SDK provides a set of middlewares that cater for the ecosystem's most common use cases.
These middlewares are:

| Middleware | Description |
| --- | --- |
| RunMsgsTxHandler | This is the base `tx.Handler`. It replaces the old BaseApp's `runMsgs`, and executes a transaction's `Msg`s. |
| TxDecoderMiddleware | This middleware takes in transaction raw bytes, and decodes them into a `sdk.Tx`. It replaces the `baseapp.txDecoder` field, so that BaseApp stays as thin as possible. Since most middlewares read the contents of the `sdk.Tx`, the TxDecoderMiddleware should be run first in the middleware stack. |
| `{Antehandlers}` | Each antehandler is converted to its own middleware. These middlewares perform signature verification, fee deductions and other validations on the incoming transaction. |
| IndexEventsTxMiddleware | This is a simple middleware that chooses which events to index in Tendermint. Replaces `baseapp.indexEvents` (which unfortunately still exists in baseapp too, because it's used to index Begin/EndBlock events). |
| RecoveryTxMiddleware | This middleware recovers from panics. It replaces baseapp.runTx's panic recovery described in [ADR-022](/sdk/v0.50/build/architecture/adr-022-custom-panic-handling). |
| GasTxMiddleware | This replaces the [`Setup`](https://github.com/cosmos/cosmos-sdk/blob/v0.43.0/x/auth/ante/setup.go) Antehandler. It sets a GasMeter on `sdk.Context`. Note that before, the GasMeter was set on `sdk.Context` inside the antehandlers, and there was some mess around the fact that antehandlers had their own panic recovery system so that the GasMeter could be read by BaseApp's recovery system. Now, this mess is all removed: one middleware sets the GasMeter, another one handles recovery. |

### Similarities and Differences between Antehandlers and Middlewares

The middleware-based design builds upon the existing antehandlers design described in [ADR-010](/sdk/v0.50/build/architecture/adr-010-modular-antehandler). Even though the final decision of ADR-010 was to go with the "Simple Decorators" approach, the middleware design is actually very similar to the other [Decorator Pattern](/sdk/v0.50/build/architecture/adr-010-modular-antehandler#decorator-pattern) proposal, also used in [weave](https://github.com/iov-one/weave).

#### Similarities with Antehandlers

* Designed as chaining/composing small modular pieces.
* Allow code reuse for `{Check,Deliver}Tx` and for `Simulate`.
* Set up in `app.go`, and easily customizable by app developers.
* Order is important.

#### Differences with Antehandlers

* The antehandlers are run before `Msg` execution, whereas middlewares can run both before and after.
* The middleware approach uses separate methods for `{Check,Deliver,Simulate}Tx`, whereas the antehandlers pass a `simulate bool` flag and use the `sdkCtx.Is{Check,Recheck}Tx()` flags to determine in which transaction mode we are.
* The middleware design lets each middleware hold a reference to the next middleware, whereas the antehandlers pass a `next` argument in the `AnteHandle` method.
* The middleware design uses Go's standard `context.Context`, whereas the antehandlers use `sdk.Context`.

## Consequences

### Backwards Compatibility

Since this refactor moves some logic out of BaseApp and into middlewares, it introduces API-breaking changes for app developers.
Most notably, instead of creating an antehandler chain in `app.go`, app developers need to create a middleware stack:

```diff
- anteHandler, err := ante.NewAnteHandler(
-    ante.HandlerOptions{
-        AccountKeeper:   app.AccountKeeper,
-        BankKeeper:      app.BankKeeper,
-        SignModeHandler: encodingConfig.TxConfig.SignModeHandler(),
-        FeegrantKeeper:  app.FeeGrantKeeper,
-        SigGasConsumer:  ante.DefaultSigVerificationGasConsumer,
-    },
-)
+txHandler, err := authmiddleware.NewDefaultTxHandler(authmiddleware.TxHandlerOptions{
+    Debug:             app.Trace(),
+    IndexEvents:       indexEvents,
+    LegacyRouter:      app.legacyRouter,
+    MsgServiceRouter:  app.msgSvcRouter,
+    LegacyAnteHandler: anteHandler,
+    TxDecoder:         encodingConfig.TxConfig.TxDecoder,
+})
if err != nil {
    panic(err)
}
- app.SetAnteHandler(anteHandler)
+ app.SetTxHandler(txHandler)
```

Other more minor API-breaking changes will also be listed in the CHANGELOG. As usual, the Cosmos SDK will provide a release migration document for app developers.

This ADR does not introduce any state-machine-, client- or CLI-breaking changes.

### Positive

* Allows custom logic to be run before and after `Msg` execution. This enables the [tips](https://github.com/cosmos/cosmos-sdk/issues/9406) and [gas refund](https://github.com/cosmos/cosmos-sdk/issues/2150) use cases, and possibly others.
* Makes BaseApp more lightweight, and defers complex logic to small modular components.
* Separate paths for `{Check,Deliver,Simulate}Tx` with different return types. This allows for improved readability (replace `if sdkCtx.IsRecheckTx() && !simulate {...}` with separate methods) and more flexibility (e.g. returning a `priority` in `ResponseCheckTx`).

### Negative

* It is hard to understand at first glance the state updates that would occur after a middleware runs, given the `sdk.Context` and `tx`.
A middleware can have an arbitrary number of nested middlewares being called within its function body, each possibly doing some pre- and post-processing before calling the next middleware on the chain. Thus, to understand what a middleware is doing, one must also understand what every other middleware further along the chain is doing, and the order of middlewares matters. This can get quite complicated to understand.

* API-breaking changes for app developers.

### Neutral

No neutral consequences.

## Further Discussions

* [#9934](https://github.com/cosmos/cosmos-sdk/discussions/9934) Decomposing BaseApp's other ABCI methods into middlewares.
* Replace the `sdk.Tx` interface with the concrete protobuf Tx type in the `tx.Handler` methods' signatures.

## Test Cases

We update the existing baseapp and antehandlers tests to use the new middleware API, but keep the same test cases and logic, to avoid introducing regressions. Existing CLI tests will also be left untouched.

For new middlewares, we introduce unit tests. Since middlewares are purposefully small, unit tests suit them well.

## References

* Initial discussion: [Link](https://github.com/cosmos/cosmos-sdk/issues/9585)
* Implementation: [#9920 BaseApp refactor](https://github.com/cosmos/cosmos-sdk/pull/9920) and [#10028 Antehandlers migration](https://github.com/cosmos/cosmos-sdk/pull/10028)

# ADR 046: Module Params

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-046-module-params

## Changelog

* Sep 22, 2021: Initial Draft

## Status

Proposed

## Abstract

This ADR describes an alternative approach to how Cosmos SDK modules use, interact with, and store their respective parameters.

## Context

Currently, in the Cosmos SDK, modules that require the use of parameters use the `x/params` module.
The `x/params` module works by having modules define parameters, typically via a simple `Params` structure, and registering that structure in the `x/params` module via a unique `Subspace` that belongs to the respective registering module. The registering module then has unique access to its respective `Subspace`. Through this `Subspace`, the module can get and set its `Params` structure.

In addition, the Cosmos SDK's `x/gov` module has direct support for changing parameters on-chain via a `ParamChangeProposal` governance proposal type, where stakeholders can vote on suggested parameter changes.

There are various tradeoffs to using the `x/params` module to manage individual module parameters. Namely, managing parameters essentially comes for "free" in that developers only need to define the `Params` struct, the `Subspace`, and the various auxiliary functions, e.g. `ParamSetPairs`, on the `Params` type. However, there are some notable drawbacks. These drawbacks include the fact that parameters are serialized in state via JSON, which is extremely slow. In addition, parameter changes via `ParamChangeProposal` governance proposals have no way of reading from or writing to state. In other words, it is currently not possible to have any state transitions in the application during an attempt to change param(s).

## Decision

We will build off of the alignment of `x/gov` and `x/authz` work per [#9810](https://github.com/cosmos/cosmos-sdk/pull/9810). Namely, module developers will create one or more unique parameter data structures that must be serialized to state. The Param data structures must implement the `sdk.Msg` interface, with a respective Protobuf Msg service method that will validate and update the parameters with all necessary changes. The `x/gov` module, via the work done in [#9810](https://github.com/cosmos/cosmos-sdk/pull/9810), will dispatch Param messages, which will be handled by Protobuf Msg services.
Note, it is up to developers to decide how to structure their parameters and the respective `sdk.Msg` messages. Consider the parameters currently defined in `x/auth` using the `x/params` module for parameter management:

```protobuf
message Params {
  uint64 max_memo_characters       = 1;
  uint64 tx_sig_limit              = 2;
  uint64 tx_size_cost_per_byte     = 3;
  uint64 sig_verify_cost_ed25519   = 4;
  uint64 sig_verify_cost_secp256k1 = 5;
}
```

Developers can choose to either create a unique data structure for every field in `Params`, or they can create a single `Params` structure, as outlined above in the case of `x/auth`. In the former approach, a `sdk.Msg` would need to be created for every single field along with a handler. This can become burdensome if there are a lot of parameter fields. In the latter case, there is only a single data structure and thus only a single message handler; however, the message handler might have to be more sophisticated in that it might need to understand which parameters are being changed and which are untouched.

Params change proposals are made using the `x/gov` module. Execution is done through `x/authz` authorization to the root `x/gov` module's account.

Continuing to use `x/auth`, we demonstrate a more complete example:

```go
type Params struct {
	MaxMemoCharacters      uint64
	TxSigLimit             uint64
	TxSizeCostPerByte      uint64
	SigVerifyCostED25519   uint64
	SigVerifyCostSecp256k1 uint64
}

type MsgUpdateParams struct {
	MaxMemoCharacters      uint64
	TxSigLimit             uint64
	TxSizeCostPerByte      uint64
	SigVerifyCostED25519   uint64
	SigVerifyCostSecp256k1 uint64
}

type MsgUpdateParamsResponse struct{}

func (ms msgServer) UpdateParams(goCtx context.Context, msg *types.MsgUpdateParams) (*types.MsgUpdateParamsResponse, error) {
	ctx := sdk.UnwrapSDKContext(goCtx)

	// verification logic...
	// persist params
	params := ParamsFromMsg(msg)
	ms.SaveParams(ctx, params)

	return &types.MsgUpdateParamsResponse{}, nil
}

func ParamsFromMsg(msg *types.MsgUpdateParams) Params {
	// ...
}
```

A gRPC `Service` query should also be provided, for example:

```protobuf
service Query {
  // ...

  rpc Params(QueryParamsRequest) returns (QueryParamsResponse) {
    option (google.api.http).get = "/cosmos/<module>/v1beta1/params";
  }
}

message QueryParamsResponse {
  Params params = 1 [(gogoproto.nullable) = false];
}
```

## Consequences

As a result of implementing the module parameter methodology, we gain the ability for module parameter changes to be stateful and extensible to fit nearly every application's use case. We will be able to emit events (and trigger hooks registered to those events using the work proposed in [event hooks](https://github.com/cosmos/cosmos-sdk/discussions/9656)), call other Msg service methods, or perform migrations. In addition, there will be significant gains in performance when it comes to reading and writing parameters from and to state, especially if a specific set of parameters is read on a consistent basis.

However, this methodology will require developers to implement more types and Msg service methods, which can become burdensome if many parameters exist. In addition, developers are required to implement persistence logic for module parameters. However, this should be trivial.

### Backwards Compatibility

The new method for working with module parameters is naturally not backwards compatible with the existing `x/params` module. However, the `x/params` module will remain in the Cosmos SDK and will be marked as deprecated, with no additional functionality being added apart from potential bug fixes. Note, the `x/params` module may be removed entirely in a future release.
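The `ParamsFromMsg` body elided above is just a field-by-field copy. The following self-contained sketch fills in the whole validate-convert-persist flow with simplified stand-in types; the `validateMsg` helper, the in-memory store, and the `"auth/params"` key are our own illustrative inventions, not SDK API:

```go
package main

import (
	"errors"
	"fmt"
)

// Simplified stand-ins for the x/auth types sketched above
// (two fields instead of five, to keep the example short).
type Params struct {
	MaxMemoCharacters uint64
	TxSigLimit        uint64
}

type MsgUpdateParams struct {
	MaxMemoCharacters uint64
	TxSigLimit        uint64
}

// ParamsFromMsg is a plain field-by-field copy from the message
// to the parameter struct.
func ParamsFromMsg(msg *MsgUpdateParams) Params {
	return Params{
		MaxMemoCharacters: msg.MaxMemoCharacters,
		TxSigLimit:        msg.TxSigLimit,
	}
}

// validateMsg is an illustrative stand-in for the "verification logic" step.
func validateMsg(msg *MsgUpdateParams) error {
	if msg.TxSigLimit == 0 {
		return errors.New("tx sig limit must be positive")
	}
	return nil
}

// UpdateParams mirrors the Msg service handler flow: validate, convert, persist.
// A map stands in for module state.
func UpdateParams(store map[string]Params, msg *MsgUpdateParams) error {
	if err := validateMsg(msg); err != nil {
		return err
	}
	store["auth/params"] = ParamsFromMsg(msg)
	return nil
}

func main() {
	store := map[string]Params{}
	err := UpdateParams(store, &MsgUpdateParams{MaxMemoCharacters: 512, TxSigLimit: 7})
	fmt.Println(err, store["auth/params"].TxSigLimit) // <nil> 7
}
```

Because the handler owns both validation and persistence, it can reject invalid updates and perform any extra state transitions before saving, which is exactly what `ParamChangeProposal` could not do.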
### Positive

* Module parameters are serialized more efficiently.
* Modules are able to react to parameter changes and perform additional actions.
* Special events can be emitted, allowing hooks to be triggered.

### Negative

* Module parameters become slightly more burdensome for module developers:
  * Modules are now responsible for persisting and retrieving parameter state.
  * Modules are now required to have unique message handlers to handle parameter changes per unique parameter data structure.

### Neutral

* Requires [#9810](https://github.com/cosmos/cosmos-sdk/pull/9810) to be reviewed and merged.

## References

* [Link](https://github.com/cosmos/cosmos-sdk/pull/9810)
* [Link](https://github.com/cosmos/cosmos-sdk/issues/9438)
* [Link](https://github.com/cosmos/cosmos-sdk/discussions/9913)

# ADR 047: Extend Upgrade Plan

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-047-extend-upgrade-plan

## Changelog

* Nov, 23, 2021: Initial Draft
* May, 16, 2023: Proposal ABANDONED. `pre_run` and `post_run` are not necessary anymore and adding the `artifacts` brings minor benefits.

## Status

ABANDONED

## Abstract

This ADR expands the existing x/upgrade `Plan` proto message to include new fields for defining pre-run and post-run processes within upgrade tooling. It also defines a structure for providing downloadable artifacts involved in an upgrade.

## Context

The `upgrade` module in conjunction with Cosmovisor is designed to facilitate and automate a blockchain's transition from one version to another. Users submit a software upgrade governance proposal containing an upgrade `Plan`. The [Plan](https://github.com/cosmos/cosmos-sdk/blob/v0.44.5/proto/cosmos/upgrade/v1beta1/upgrade.proto#L12) currently contains the following fields:

* `name`: A short string identifying the new version.
* `height`: The chain height at which the upgrade is to be performed.
* `info`: A string containing information about the upgrade.

The `info` string can be anything. However, Cosmovisor will try to use the `info` field to automatically download a new version of the blockchain executable. For the auto-download to work, Cosmovisor expects it to be either a stringified JSON object (with a specific structure defined through documentation), or a URL that will return such JSON. The JSON object identifies URLs used to download the new blockchain executable for different platforms (OS and Architecture, e.g. "linux/amd64"). Such a URL can either return the executable file directly or can return an archive containing the executable and possibly other assets.

If the URL returns an archive, it is decompressed into `{DAEMON_HOME}/cosmovisor/{upgrade name}`. Then, if `{DAEMON_HOME}/cosmovisor/{upgrade name}/bin/{DAEMON_NAME}` does not exist, but `{DAEMON_HOME}/cosmovisor/{upgrade name}/{DAEMON_NAME}` does, the latter is copied to the former. If the URL returns something other than an archive, it is downloaded to `{DAEMON_HOME}/cosmovisor/{upgrade name}/bin/{DAEMON_NAME}`.

If an upgrade height is reached and the new version of the executable isn't available, Cosmovisor will stop running.

Both `DAEMON_HOME` and `DAEMON_NAME` are [environment variables used to configure Cosmovisor](https://github.com/cosmos/cosmos-sdk/blob/cosmovisor/v1.0.0/cosmovisor/README.md#command-line-arguments-and-environment-variables).

Currently, there is no mechanism that makes Cosmovisor run a command after the upgraded chain has been restarted.

The current upgrade process has this timeline:

1. An upgrade governance proposal is submitted and approved.
2. The upgrade height is reached.
3. The `x/upgrade` module writes the `upgrade_info.json` file.
4. The chain halts.
5. Cosmovisor backs up the data directory (if set up to do so).
6. 
Cosmovisor downloads the new executable (if not already in place).
7. Cosmovisor executes `${DAEMON_NAME} pre-upgrade`.
8. Cosmovisor restarts the app using the new version and the same args originally provided.

## Decision

### Protobuf Updates

We will update the `x/upgrade.Plan` message to provide upgrade instructions. The upgrade instructions will contain a list of artifacts available for each platform, and allow for the definition of pre-run and post-run commands. These commands are not consensus-guaranteed; they will be executed by Cosmovisor (or other tooling) during its upgrade handling.

```protobuf
message Plan {
  // ... (existing fields)

  UpgradeInstructions instructions = 6;
}
```

The new `UpgradeInstructions instructions` field MUST be optional.

```protobuf
message UpgradeInstructions {
  string pre_run = 1;
  string post_run = 2;
  repeated Artifact artifacts = 3;
  string description = 4;
}
```

All fields in the `UpgradeInstructions` are optional.

* `pre_run` is a command to run prior to the upgraded chain restarting. If defined, it will be executed after halting and downloading the new artifact, but before restarting the upgraded chain. The working directory this command runs from MUST be `{DAEMON_HOME}/cosmovisor/{upgrade name}`. This command MUST behave the same as the current [pre-upgrade](https://github.com/cosmos/cosmos-sdk/blob/v0.44.5/docs/migrations/pre-upgrade.md) command.
It does not take in any command-line arguments and is expected to terminate with the following exit codes:

| Exit status code | How it is handled in Cosmovisor |
| --- | --- |
| `0` | Assumes the `pre-upgrade` command executed successfully and continues the upgrade. |
| `1` | Default exit code when the `pre-upgrade` command has not been implemented. |
| `30` | The `pre-upgrade` command was executed but failed. This fails the entire upgrade. |
| `31` | The `pre-upgrade` command was executed but failed. But the command is retried until exit code `1` or `30` is returned. |

If defined, then the app supervisors (e.g. Cosmovisor) MUST NOT run `app pre-run`.

* `post_run` is a command to run after the upgraded chain has been started. If defined, this command MUST be executed at most once by an upgrading node. The output and exit code SHOULD be logged but SHOULD NOT affect the running of the upgraded chain. The working directory this command runs from MUST be `{DAEMON_HOME}/cosmovisor/{upgrade name}`.
* `artifacts` defines items to be downloaded. It SHOULD have only one entry per platform.
* `description` contains human-readable information about the upgrade and might contain references to external resources. It SHOULD NOT be used for structured processing information.

```protobuf
message Artifact {
  string platform = 1;
  string url = 2;
  string checksum = 3;
  string checksum_algo = 4;
}
```

* `platform` is a required string that SHOULD be in the format `{OS}/{CPU}`, e.g. `"linux/amd64"`. The string `"any"` SHOULD also be allowed. An `Artifact` with a `platform` of `"any"` SHOULD be used as a fallback when a specific `{OS}/{CPU}` entry is not found.
That is, if an `Artifact` exists with a `platform` that matches the system's OS and CPU, that should be used; otherwise, if an `Artifact` exists with a `platform` of `any`, that should be used; otherwise no artifact should be downloaded.

* `url` is a required URL string that MUST conform to [RFC 1738: Uniform Resource Locators](https://www.ietf.org/rfc/rfc1738.txt). A request to this `url` MUST return either an executable file or an archive containing either `bin/{DAEMON_NAME}` or `{DAEMON_NAME}`. The URL should not contain a checksum; it should be specified by the `checksum` attribute.
* `checksum` is a checksum of the expected result of a request to the `url`. It is not required, but is recommended. If provided, it MUST be a hex-encoded checksum string. Tools utilizing these `UpgradeInstructions` MUST fail if a `checksum` is provided but is different from the checksum of the result returned by the `url`.
* `checksum_algo` is a string identifying the algorithm used to generate the `checksum`. Recommended algorithms: `sha256`, `sha512`. Algorithms also supported (but not recommended): `sha1`, `md5`. If a `checksum` is provided, a `checksum_algo` MUST also be provided.

A `url` is not required to contain a `checksum` query parameter. If the `url` does contain a `checksum` query parameter, the `checksum` and `checksum_algo` fields MUST also be populated, and their values MUST match the value of the query parameter. For example, if the `url` is `"https://example.com?checksum=md5:d41d8cd98f00b204e9800998ecf8427e"`, then the `checksum` field must be `"d41d8cd98f00b204e9800998ecf8427e"` and the `checksum_algo` field must be `"md5"`.

### Upgrade Module Updates

If an upgrade `Plan` does not use the new `UpgradeInstructions` field, existing functionality will be maintained. The parsing of the `info` field as either a URL or `binaries` JSON will be deprecated. During validation, if the `info` field is used as such, a warning will be issued, but not an error.
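As a sketch of the artifact-selection and checksum-query rules described above (function names are illustrative, not part of the SDK or Cosmovisor):

```python
# Illustrative sketch (not SDK code) of the Artifact selection rules
# ("any" as a fallback platform) and the `{checksum_algo}:{checksum}`
# query-parameter format described above.
from urllib.parse import urlparse, parse_qs


def select_artifact(artifacts, platform):
    """Pick the Artifact matching `platform` (e.g. "linux/amd64"),
    falling back to an Artifact with platform "any", else None."""
    by_platform = {a["platform"]: a for a in artifacts}
    return by_platform.get(platform) or by_platform.get("any")


def checksum_from_url(url):
    """Extract (checksum_algo, checksum) from a `checksum` query
    parameter in `{checksum_algo}:{checksum}` format, or None."""
    values = parse_qs(urlparse(url).query).get("checksum")
    if not values:
        return None
    algo, _, digest = values[0].partition(":")
    return (algo, digest)
```

For instance, `checksum_from_url("https://example.com?checksum=md5:d41d8cd98f00b204e9800998ecf8427e")` yields `("md5", "d41d8cd98f00b204e9800998ecf8427e")`, which must match the `checksum_algo` and `checksum` fields of the `Artifact`.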
We will update the creation of the `upgrade-info.json` file to include the `UpgradeInstructions`.

We will update the optional validation available via CLI to account for the new `Plan` structure. We will add the following validation:

1. If `UpgradeInstructions` are provided:
   1. There MUST be at least one entry in `artifacts`.
   2. All of the `artifacts` MUST have a unique `platform`.
   3. For each `Artifact`, if the `url` contains a `checksum` query parameter:
      1. The `checksum` query parameter value MUST be in the format of `{checksum_algo}:{checksum}`.
      2. The `{checksum}` from the query parameter MUST equal the `checksum` provided in the `Artifact`.
      3. The `{checksum_algo}` from the query parameter MUST equal the `checksum_algo` provided in the `Artifact`.
2. The following validation is currently done using the `info` field. We will apply similar validation to the `UpgradeInstructions`. For each `Artifact`:
   1. The `platform` MUST have the format `{OS}/{CPU}` or be `"any"`.
   2. The `url` field MUST NOT be empty.
   3. The `url` field MUST be a proper URL.
   4. A `checksum` MUST be provided either in the `checksum` field or as a query parameter in the `url`.
   5. If the `checksum` field has a value and the `url` also has a `checksum` query parameter, the two values MUST be equal.
   6. The `url` MUST return either a file or an archive containing either `bin/{DAEMON_NAME}` or `{DAEMON_NAME}`.
   7. If a `checksum` is provided (in the field or as a query param), the checksum of the result of the `url` MUST equal the provided checksum.

Downloading of an `Artifact` will happen the same way that URLs from `info` are currently downloaded.

### Cosmovisor Updates

If the `upgrade-info.json` file does not contain any `UpgradeInstructions`, existing functionality will be maintained. We will update Cosmovisor to look for and handle the new `UpgradeInstructions` in `upgrade-info.json`. If the `UpgradeInstructions` are provided, we will do the following:

1. The `info` field will be ignored.
2. 
The `artifacts` field will be used to identify the artifact to download based on the `platform` that Cosmovisor is running in.
3. If a `checksum` is provided (either in the field or as a query param in the `url`), and the downloaded artifact has a different checksum, the upgrade process will be interrupted and Cosmovisor will exit with an error.
4. If a `pre_run` command is defined, it will be executed at the same point in the process where the `app pre-upgrade` command would have been executed. It will be executed using the same environment as other commands run by Cosmovisor.
5. If a `post_run` command is defined, it will be executed after executing the command that restarts the chain. It will be executed in a background process using the same environment as the other commands. Any output generated by the command will be logged. Once complete, the exit code will be logged.

We will deprecate the use of the `info` field for anything other than human-readable information. A warning will be logged if the `info` field is used to define the assets (either by URL or JSON).

The new upgrade timeline is very similar to the current one. Changes are in bold:

1. An upgrade governance proposal is submitted and approved.
2. The upgrade height is reached.
3. The `x/upgrade` module writes the `upgrade-info.json` file **(now possibly with `UpgradeInstructions`)**.
4. The chain halts.
5. Cosmovisor backs up the data directory (if set up to do so).
6. Cosmovisor downloads the new executable (if not already in place).
7. Cosmovisor executes **the `pre_run` command if provided**, or else the `${DAEMON_NAME} pre-upgrade` command.
8. Cosmovisor restarts the app using the new version and same args originally provided.
9. 
**Cosmovisor immediately runs the `post_run` command in a detached process.**

## Consequences

### Backwards Compatibility

Since the only change to existing definitions is the addition of the `instructions` field to the `Plan` message, and that field is optional, there are no backwards incompatibilities with respect to the proto messages. Additionally, current behavior will be maintained when no `UpgradeInstructions` are provided, so there are no backwards incompatibilities with respect to either the upgrade module or Cosmovisor.

### Forwards Compatibility

In order to utilize the `UpgradeInstructions` as part of a software upgrade, both of the following must be true:

1. The chain must already be using a sufficiently advanced version of the Cosmos SDK.
2. The chain's nodes must be using a sufficiently advanced version of Cosmovisor.

### Positive

1. The structure for defining artifacts is clearer since it is now defined in the proto instead of in documentation.
2. Availability of a pre-run command becomes more obvious.
3. A post-run command becomes possible.

### Negative

1. The `Plan` message becomes larger. This is negligible because A) the `x/upgrade` module only stores at most one upgrade plan, and B) upgrades are rare enough that the increased gas cost isn't a concern.
2. There is no option for providing a URL that will return the `UpgradeInstructions`.
3. The only way to provide multiple assets (executables and other files) for a platform is to use an archive as the platform's artifact.

### Neutral

1. Existing functionality of the `info` field is maintained when the `UpgradeInstructions` aren't provided.

## Further Discussions

1. 
[Draft PR #10032 Comment](https://github.com/cosmos/cosmos-sdk/pull/10032/files?authenticity_token=pLtzpnXJJB%2Fif2UWiTp9Td3MvRrBF04DvjSuEjf1azoWdLF%2BSNymVYw9Ic7VkqHgNLhNj6iq9bHQYnVLzMXd4g%3D%3D\&file-filters%5B%5D=.go\&file-filters%5B%5D=.proto#r698708349): Consider different names for `UpgradeInstructions instructions` (either the message type or field name).
2. [Draft PR #10032 Comment](https://github.com/cosmos/cosmos-sdk/pull/10032/files?authenticity_token=pLtzpnXJJB%2Fif2UWiTp9Td3MvRrBF04DvjSuEjf1azoWdLF%2BSNymVYw9Ic7VkqHgNLhNj6iq9bHQYnVLzMXd4g%3D%3D\&file-filters%5B%5D=.go\&file-filters%5B%5D=.proto#r754655072):
   1. Consider putting the `string platform` field inside `UpgradeInstructions` and make `UpgradeInstructions` a repeated field in `Plan`.
   2. Consider using a `oneof` field in the `Plan` which could either be `UpgradeInstructions` or else a URL that should return the `UpgradeInstructions`.
   3. Consider allowing `info` to either be a JSON serialized version of `UpgradeInstructions` or else a URL that returns that.
3. [Draft PR #10032 Comment](https://github.com/cosmos/cosmos-sdk/pull/10032/files?authenticity_token=pLtzpnXJJB%2Fif2UWiTp9Td3MvRrBF04DvjSuEjf1azoWdLF%2BSNymVYw9Ic7VkqHgNLhNj6iq9bHQYnVLzMXd4g%3D%3D\&file-filters%5B%5D=.go\&file-filters%5B%5D=.proto#r755462876): Consider not including the `UpgradeInstructions.description` field, using the `info` field for that purpose instead.
4. [Draft PR #10032 Comment](https://github.com/cosmos/cosmos-sdk/pull/10032/files?authenticity_token=pLtzpnXJJB%2Fif2UWiTp9Td3MvRrBF04DvjSuEjf1azoWdLF%2BSNymVYw9Ic7VkqHgNLhNj6iq9bHQYnVLzMXd4g%3D%3D\&file-filters%5B%5D=.go\&file-filters%5B%5D=.proto#r754643691): Consider allowing multiple artifacts to be downloaded for any given `platform` by adding a `name` field to the `Artifact` message.
5. [PR #10502 Comment](https://github.com/cosmos/cosmos-sdk/pull/10602#discussion_r781438288) Allow the new `UpgradeInstructions` to be provided via URL.
6. 
[PR #10502 Comment](https://github.com/cosmos/cosmos-sdk/pull/10602#discussion_r781438288) Allow definition of a `signer` for assets (as an alternative to using a `checksum`).

## References

* [Current upgrade.proto](https://github.com/cosmos/cosmos-sdk/blob/v0.44.5/proto/cosmos/upgrade/v1beta1/upgrade.proto)
* [Upgrade Module README](https://github.com/cosmos/cosmos-sdk/blob/v0.44.5/x/upgrade/spec/README.md)
* [Cosmovisor README](https://github.com/cosmos/cosmos-sdk/blob/cosmovisor/v1.0.0/cosmovisor/README.md)
* [Pre-upgrade README](https://github.com/cosmos/cosmos-sdk/blob/v0.44.5/docs/migrations/pre-upgrade.md)
* [Draft/POC PR #10032](https://github.com/cosmos/cosmos-sdk/pull/10032)
* [RFC 1738: Uniform Resource Locators](https://www.ietf.org/rfc/rfc1738.txt)

# ADR 048: Multi-Tier Gas Price System

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-048-consensus-fees

Dec 1, 2021: Initial Draft

## Changelog

* Dec 1, 2021: Initial Draft

## Status

Rejected

## Abstract

This ADR describes a flexible mechanism to maintain consensus-level gas prices, in which one can choose a multi-tier gas price system or an EIP-1559-like one through configuration.

## Context

Currently, each validator configures its own `minimal-gas-prices` in `app.toml`. But setting a proper minimal gas price is critical to protect the network from DDoS attacks, and it's hard for all the validators to pick a sensible value, so we propose to maintain a gas price at the consensus level. Since Tendermint 0.34.20 supports mempool prioritization, we can take advantage of that to implement a more sophisticated gas fee system.

## Multi-Tier Price System

We propose a multi-tier price system on consensus to provide maximum flexibility:

* Tier 1: a constant gas price, which could only be modified occasionally through governance proposal.
* Tier 2: a dynamic gas price which is adjusted according to previous block load.
* Tier 3: a dynamic gas price which is adjusted according to previous block load at a higher speed.

The gas price of a higher tier should be bigger than that of a lower tier. The transaction fees are charged with the exact gas price calculated on consensus.

The parameter schema is like this:

```protobuf expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
message TierParams {
  uint32 priority = 1;           // priority in tendermint mempool
  Coin initial_gas_price = 2;
  uint32 parent_gas_target = 3;  // the target saturation of block
  uint32 change_denominator = 4; // decides the change speed
  Coin min_gas_price = 5;        // optional lower bound of the price adjustment
  Coin max_gas_price = 6;        // optional upper bound of the price adjustment
}

message Params {
  repeated TierParams tiers = 1;
}
```

### Extension Options

We need to allow users to specify the tier of service for a transaction. To support this in an extensible way, we add an extension option in `AuthInfo`:

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
message ExtensionOptionsTieredTx {
  uint32 fee_tier = 1;
}
```

The value of `fee_tier` is just the index into the `tiers` parameter list.

We also change the semantics of the existing `fee` field of `Tx`: instead of charging the user the exact `fee` amount, we treat it as a fee cap, while the actual amount of fee charged is decided dynamically. If the `fee` is smaller than the dynamic one, the transaction won't be included in the current block and ideally should stay in the mempool until the consensus gas price drops. The mempool can eventually prune old transactions.

### Tx Prioritization

Transactions are prioritized based on the tier: the higher the tier, the higher the priority. Within the same tier, follow the default Tendermint order (currently FIFO). Be aware that the mempool tx ordering logic is not part of consensus and can be modified by a malicious validator.
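The fee-cap semantics described above can be sketched as follows (this is an illustrative interpretation, not SDK code: the transaction's `fee` acts as a cap, and the actual charge is derived from the consensus gas price of the transaction's tier):

```python
# Illustrative sketch of the fee-cap semantics: `fee_cap` is the tx's
# declared fee, and the actual charge is gas * consensus gas price for
# the tx's tier. A tx whose cap is below the required amount waits in
# the mempool until the consensus price drops.

def process_fee(fee_cap, gas_wanted, tier_gas_price):
    """Return the fee actually charged, or None if the transaction
    should stay in the mempool for now."""
    required = gas_wanted * tier_gas_price
    if fee_cap < required:
        return None   # not included in the current block
    return required   # charged at the exact consensus price
```

For example, with a consensus gas price of 5 and 100 gas wanted, a fee cap of 1000 results in a charge of 500 (not 1000), while a fee cap of 400 keeps the transaction in the mempool.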
This mechanism can be easily composed with other prioritization mechanisms:

* we can add extra tiers outside of the user's control:
  * Example 1: user can set tier 0, 10 or 20, but the protocol will create tiers 0, 1, 2 ... 29. For example IBC transactions will go to tier `user_tier + 5`: if the user selected tier 1, then the transaction will go to tier 15.
  * Example 2: we can reserve tier 4, 5, ... only for special transaction types. For example, tier 5 is reserved for evidence tx. So if a user submits a `bank.Send` transaction and sets tier 5, it will be delegated to tier 3 (the max tier level available for any transaction).
  * Example 3: we can enforce that all transactions of a specific type will go to a specific tier. For example, tier 100 will be reserved for evidence transactions and all evidence transactions will always go to that tier.

### `min-gas-prices`

Deprecate the current per-validator `min-gas-prices` configuration, since it would be confusing for it to work together with the consensus gas price.

### Adjust For Block Load

For tier 2 and tier 3 transactions, the gas price is adjusted according to previous block load; the logic could be similar to EIP-1559:

```python expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
def adjust_gas_price(gas_price, parent_gas_used, tier):
    if parent_gas_used == tier.parent_gas_target:
        return gas_price
    elif parent_gas_used > tier.parent_gas_target:
        # e.g. gas_price=100, used 15M against a 10M target, change_speed=8:
        # delta = max(100 * 5M // 10M // 8, 1) = 6, so the price rises to 106
        gas_used_delta = parent_gas_used - tier.parent_gas_target
        gas_price_delta = max(gas_price * gas_used_delta // tier.parent_gas_target // tier.change_speed, 1)
        return gas_price + gas_price_delta
    else:
        gas_used_delta = tier.parent_gas_target - parent_gas_used
        gas_price_delta = gas_price * gas_used_delta // tier.parent_gas_target // tier.change_speed
        return gas_price - gas_price_delta
```

### Block Segment Reservation

Ideally we should reserve block segments for each tier, so the lower-tiered transactions won't be completely squeezed out by higher-tier transactions, which
will force users to use a higher tier, degrading the system to a single tier. We need help from Tendermint to implement this.

## Implementation

We can make each tier's gas price strategy fully configurable in protocol parameters, while providing a sensible default one.

Pseudocode in python-like syntax:

```python expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
interface TieredTx:
    def tier(self) -> int:
        pass

def tx_tier(tx):
    if isinstance(tx, TieredTx):
        return tx.tier()
    else:
        # default tier for custom transactions
        return 0
    # NOTE: we can add more rules here per "Tx Prioritization" section

class TierParams:
    'gas price strategy parameters of one tier'
    priority: int           # priority in tendermint mempool
    initial_gas_price: Coin
    parent_gas_target: int
    change_speed: Decimal   # 0 means don't adjust for block load.

class Params:
    'protocol parameters'
    tiers: List[TierParams]

class State:
    'consensus state'
    # total gas used in last block, None when it's the first block
    parent_gas_used: Optional[int]
    # gas prices of last block for all tiers
    gas_prices: List[Coin]

def begin_block():
    'Adjust gas prices'
    for i, tier in enumerate(Params.tiers):
        if State.parent_gas_used is None:
            # initialize gas price for the first block
            State.gas_prices[i] = tier.initial_gas_price
        else:
            # adjust gas price according to gas used in previous block
            State.gas_prices[i] = adjust_gas_price(State.gas_prices[i], State.parent_gas_used, tier)

def mempoolFeeTxHandler_checkTx(ctx, tx):
    # the minimal-gas-price configured by validator, zero in deliver_tx context
    validator_price = ctx.MinGasPrice()
    consensus_price = State.gas_prices[tx_tier(tx)]
    min_price = max(validator_price, consensus_price)

    # zero means infinity for gas price cap
    if tx.gas_price() > 0 and tx.gas_price() < min_price:
        return 'insufficient fees'
    return next_CheckTx(ctx, tx)

def txPriorityHandler_checkTx(ctx, tx):
    res, err = next_CheckTx(ctx, tx)
    # pass priority to tendermint
    res.Priority = Params.tiers[tx_tier(tx)].priority
    return res, err

def end_block():
    'Update block gas used'
    State.parent_gas_used = block_gas_meter.consumed()
```

### DDoS attack protection

To fully saturate the blocks and prevent other transactions from executing, attackers need to use transactions of the highest tier, and the cost would be significantly higher than the default tier. If attackers spam with lower-tier transactions, users can mitigate this by sending higher-tier transactions.

## Consequences

### Backwards Compatibility

* New protocol parameters.
* New consensus states.
* New/changed fields in transaction body.

### Positive

* The default tier keeps the same predictable gas price experience for clients.
* The higher tiers' gas prices can adapt to block load.
* No priority conflict with custom priority based on transaction types, since this proposal only occupies three priority levels.
* Possibility to compose different priority rules with tiers.

### Negative

* Wallets & tools need to be updated to support the new `tier` parameter, and the semantics of the `fee` field change.

### Neutral

## References

* [Link](https://eips.ethereum.org/EIPS/eip-1559)
* [Link](https://iohk.io/en/blog/posts/2021/11/26/network-traffic-and-tiered-pricing/)

# ADR 049: State Sync Hooks

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-049-state-sync-hooks

Jan 19, 2022: Initial Draft Apr 29, 2022: Safer extension snapshotter interface

## Changelog

* Jan 19, 2022: Initial Draft
* Apr 29, 2022: Safer extension snapshotter interface

## Status

Implemented

## Abstract

This ADR outlines a hooks-based mechanism for application modules to provide additional state (outside of the IAVL tree) to be used during state sync.

## Context

New clients use state-sync to download snapshots of module state from peers.
Currently, the snapshot consists of a stream of `SnapshotStoreItem` and `SnapshotIAVLItem`, which means that application modules that define their state outside of the IAVL tree cannot include their state as part of the state-sync process.

Note: even though the module state data is outside of the tree, for determinism we require that the hash of the external data be posted in the IAVL tree.

## Decision

A simple proposal based on our existing implementation is to add two new message types: `SnapshotExtensionMeta` and `SnapshotExtensionPayload`. They are appended to the existing multi-store stream, with `SnapshotExtensionMeta` acting as a delimiter between extensions. As the chunk hashes should be able to ensure data integrity, we don't need a delimiter to mark the end of the snapshot stream.

In addition, we provide the `Snapshotter` and `ExtensionSnapshotter` interfaces for modules to implement snapshotters, which will handle both taking snapshots and restoring them. Each module could have multiple snapshotters, and modules with additional state should implement `ExtensionSnapshotter` as extension snapshotters. When setting up the application, the snapshot `Manager` should call `RegisterExtensions([]ExtensionSnapshotter…)` to register all the extension snapshotters.

```protobuf expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// SnapshotItem is an item contained in a rootmulti.Store snapshot.
// On top of the existing SnapshotStoreItem and SnapshotIAVLItem, we add two new options for the item.
message SnapshotItem {
  // item is the specific type of snapshot item.
  oneof item {
    SnapshotStoreItem store = 1;
    SnapshotIAVLItem iavl = 2 [(gogoproto.customname) = "IAVL"];
    SnapshotExtensionMeta extension = 3;
    SnapshotExtensionPayload extension_payload = 4;
  }
}

// SnapshotExtensionMeta contains metadata about an external snapshotter.
// One module may need multiple snapshotters, so each module may have multiple SnapshotExtensionMeta.
message SnapshotExtensionMeta {
  // the name of the ExtensionSnapshotter, and it is registered to the snapshotter manager when setting up the application
  // name should be unique for each ExtensionSnapshotter as we need to alphabetically order their snapshots to get
  // a deterministic snapshot stream.
  string name = 1;

  // this is used by each ExtensionSnapshotter to decide the format of payloads included in the SnapshotExtensionPayload message
  // it is used within the snapshotter/namespace, not a global one for all modules
  uint32 format = 2;
}

// SnapshotExtensionPayload contains payloads of an external snapshotter.
message SnapshotExtensionPayload {
  bytes payload = 1;
}
```

When we create a snapshot stream, the `multistore` snapshot is always placed at the beginning of the binary stream, and other extension snapshots are alphabetically ordered by the name of the corresponding `ExtensionSnapshotter`. The snapshot stream would look as follows:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// multi-store snapshot
{ SnapshotStoreItem | SnapshotIAVLItem, ... }

// extension1 snapshot
SnapshotExtensionMeta
{ SnapshotExtensionPayload, ... }

// extension2 snapshot
SnapshotExtensionMeta
{ SnapshotExtensionPayload, ... }
```

We add an `extensions` field to the snapshot `Manager` for extension snapshotters. The `multistore` snapshotter is a special one and it doesn't need a name because it is always placed at the beginning of the binary stream.
```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type Manager struct {
  store      *Store
  multistore types.Snapshotter
  extensions map[string]types.ExtensionSnapshotter

  mtx                sync.Mutex
  operation          operation
  chRestore          chan<- io.ReadCloser
  chRestoreDone      <-chan restoreDone
  restoreChunkHashes [][]byte
  restoreChunkIndex  uint32
}
```

For extension snapshotters that implement the `ExtensionSnapshotter` interface, their names should be registered to the snapshot `Manager` by calling `RegisterExtensions` when setting up the application. The snapshotters will handle both taking snapshots and restoration.

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// RegisterExtensions registers extension snapshotters to the manager
func (m *Manager) RegisterExtensions(extensions ...types.ExtensionSnapshotter) error
```

On top of the existing `Snapshotter` interface for the `multistore`, we add an `ExtensionSnapshotter` interface for the extension snapshotters. Three more function signatures: `SnapshotFormat()`, `SupportedFormats()` and `SnapshotName()` are added to `ExtensionSnapshotter`.

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// ExtensionPayloadReader reads extension payloads;
// it returns io.EOF when it reaches either the end of the stream or the extension boundary.
type ExtensionPayloadReader = func() ([]byte, error)

// ExtensionPayloadWriter is a helper to write extension payloads to the underlying stream.
type ExtensionPayloadWriter = func([]byte) error

// ExtensionSnapshotter is an extension Snapshotter that is appended to the snapshot stream.
// ExtensionSnapshotter has a unique name and manages its own internal formats.
type ExtensionSnapshotter interface {
  // SnapshotName returns the name of the snapshotter; it should be unique in the manager.
  SnapshotName() string

  // SnapshotFormat returns the default format used to take a snapshot.
  SnapshotFormat() uint32

  // SupportedFormats returns a list of formats it can restore from.
  SupportedFormats() []uint32

  // SnapshotExtension writes extension payloads into the underlying protobuf stream.
  SnapshotExtension(height uint64, payloadWriter ExtensionPayloadWriter) error

  // RestoreExtension restores an extension state snapshot;
  // the payload reader returns `io.EOF` when it reaches the extension boundary.
  RestoreExtension(height uint64, format uint32, payloadReader ExtensionPayloadReader) error
}
```

## Consequences

As a result of this implementation, we are able to create binary chunk stream snapshots for the state that we maintain outside of the IAVL tree, CosmWasm blobs for example. New clients are able to fetch snapshots of state for all modules that have implemented the corresponding interface from peer nodes.

### Backwards Compatibility

This ADR introduces new proto message types, adds an `extensions` field to the snapshot `Manager`, and adds a new `ExtensionSnapshotter` interface, so this is not backwards compatible if extensions are used. But for applications that do not have state data outside of the IAVL tree for any module, the snapshot stream is backwards-compatible.

### Positive

* State maintained outside of the IAVL tree, like CosmWasm blobs, can be snapshotted by implementing extension snapshotters, and fetched by new clients via state-sync.

### Negative

### Neutral

* All modules that maintain state outside of the IAVL tree need to implement `ExtensionSnapshotter`, and the snapshot `Manager` needs to call `RegisterExtensions` when setting up the application.

## Further Discussions

While an ADR is in the DRAFT or PROPOSED stage, this section should contain a summary of issues to be solved in future iterations (usually referencing comments from a pull-request discussion). Later, this section can optionally list ideas or improvements the author or reviewers found during the analysis of this ADR.
## Test Cases \[optional]

Test cases for an implementation are mandatory for ADRs that are affecting consensus changes. Other ADRs can choose to include links to test cases if applicable.

## References

* [Link](https://github.com/cosmos/cosmos-sdk/pull/10961)
* [Link](https://github.com/cosmos/cosmos-sdk/issues/7340)

# ADR 050: SIGN_MODE_TEXTUAL

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-050-sign-mode-textual

## Changelog

* Dec 06, 2021: Initial Draft.
* Feb 07, 2022: Draft read and concept-ACKed by the Ledger team.
* May 16, 2022: Change status to Accepted.
* Aug 11, 2022: Require signing over tx raw bytes.
* Sep 07, 2022: Add custom `Msg`-renderers.
* Sep 18, 2022: Structured format instead of lines of text.
* Nov 23, 2022: Specify CBOR encoding.
* Dec 01, 2022: Link to examples in separate JSON file.
* Dec 06, 2022: Re-ordering of envelope screens.
* Dec 14, 2022: Mention exceptions for invertibility.
* Jan 23, 2023: Switch Screen.Text to Title+Content.
* Mar 07, 2023: Change SignDoc from array to struct containing array.
* Mar 20, 2023: Introduce a spec version initialized to 0.

## Status

Accepted. Implementation started. Small value renderers details still need to be polished.

Spec version: 0.

## Abstract

This ADR specifies SIGN\_MODE\_TEXTUAL, a new string-based sign mode that is targeted at signing with hardware devices.

## Context

Protobuf-based SIGN\_MODE\_DIRECT was introduced in [ADR-020](/sdk/v0.50/build/architecture/adr-020-protobuf-transaction-encoding) and is intended to replace SIGN\_MODE\_LEGACY\_AMINO\_JSON in most situations, such as mobile wallets and CLI keyrings. However, the [Ledger](https://www.ledger.com/) hardware wallet is still using SIGN\_MODE\_LEGACY\_AMINO\_JSON for displaying the sign bytes to the user. Hardware wallets cannot transition to SIGN\_MODE\_DIRECT as:

* SIGN\_MODE\_DIRECT is binary-based and thus not suitable for display to end-users.
Technically, hardware wallets could simply display the sign bytes to the user, but this would be considered blind signing, and is a security concern.
* hardware wallets cannot decode the protobuf sign bytes due to memory constraints, as the Protobuf definitions would need to be embedded on the hardware device.

In an effort to remove Amino from the SDK, a new sign mode needs to be created for hardware devices. [Initial discussions](https://github.com/cosmos/cosmos-sdk/issues/6513) propose a text-based sign mode, which this ADR formally specifies.

## Decision

In SIGN\_MODE\_TEXTUAL, a transaction is rendered into a textual representation, which is then sent to a secure device or subsystem for the user to review and sign. Unlike `SIGN_MODE_DIRECT`, the transmitted data can be simply decoded into legible text even on devices with limited processing and display.

The textual representation is a sequence of *screens*. Each screen is meant to be displayed in its entirety (if possible) even on a small device like a Ledger. A screen is roughly equivalent to a short line of text. Large screens can be displayed in several pieces, much as long lines of text are wrapped, so no hard guidance is given, though 40 characters is a good target. A screen is used to display a single key/value pair for scalar values (or composite values with a compact notation, such as `Coins`) or to introduce or conclude a larger grouping.

The text can contain the full range of Unicode code points, including control characters and nul. The device is responsible for deciding how to display characters it cannot render natively. See [annex 2](/sdk/v0.50/build/architecture/adr-050-sign-mode-textual-annex2) for guidance.

Screens have a non-negative indentation level to signal composite or nested structures. Indentation level zero is the top level. Indentation is displayed via some device-specific mechanism.
Message quotation notation is an appropriate model, such as leading `>` characters or vertical bars on more capable displays.

Some screens are marked as *expert* screens, meant to be displayed only if the viewer chooses to opt in for the extra detail. Expert screens are meant for information that is rarely useful, or that needs to be present only for signature integrity (see below).

### Invertible Rendering

We require that the rendering of the transaction be invertible: there must be a parsing function such that for every transaction, when rendered to the textual representation, parsing that representation yields a proto message equivalent to the original under proto equality.

Note that this inverse function does not need to perform correct parsing or error signaling for the whole domain of textual data. It is merely required that the range of valid transactions be invertible under the composition of rendering and parsing.

Note that the existence of an inverse function ensures that the rendered text contains the full information of the original transaction, not a hash or subset.

We make an exception to invertibility for data which are too large to meaningfully display, such as byte strings longer than 32 bytes. We may then selectively render them with a cryptographically-strong hash. In these cases, it is still computationally infeasible to find a different transaction which has the same rendering. However, we must ensure that the hash computation is simple enough to be reliably executed independently, so at least the hash is itself reasonably verifiable when the raw byte string is not.

### Chain State

The rendering function (and parsing function) may depend on the current chain state. This is useful for reading parameters, such as coin display metadata, or for reading user-specific preferences such as language or address aliases. Note that if the observed state changes between signature generation and the transaction's inclusion in a block, the delivery-time rendering might differ.
If so, the signature will be invalid and the transaction will be rejected.

### Signature and Security

For security, transaction signatures should have three properties:

1. Given the transaction, signatures, and chain state, it must be possible to validate that the signatures match the transaction, i.e. to verify that the signers must have known their respective secret keys.
2. It must be computationally infeasible to find a substantially different transaction for which the given signatures are valid, given the same chain state.
3. The user should be able to give informed consent to the signed data via a simple, secure device with limited display capabilities.

The correctness and security of `SIGN_MODE_TEXTUAL` are guaranteed by demonstrating an inverse function from the rendering to transaction protos. This means that it is impossible for a different protocol buffer message to render to the same text.

### Transaction Hash Malleability

When client software forms a transaction, the "raw" transaction (`TxRaw`) is serialized as a proto and a hash of the resulting byte sequence is computed. This is the `TxHash`, and it is used by various services to track the submitted transaction through its lifecycle. Various misbehavior is possible if one can generate a modified transaction with a different TxHash but for which the signature still checks out.

SIGN\_MODE\_TEXTUAL prevents this transaction malleability by including the TxHash as an expert screen in the rendering.
### SignDoc

The SignDoc for `SIGN_MODE_TEXTUAL` is formed from a data structure like:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type Screen struct {
	Title   string // possibly size-limited; 64 characters advised
	Content string // possibly size-limited; 255 characters advised
	Indent  uint8  // size-limited to something small like 16 or 32
	Expert  bool
}

type SignDocTextual struct {
	Screens []Screen
}
```

We do not plan to use protobuf serialization to form the sequence of bytes that will be transmitted and signed, in order to keep the decoder simple. We will use [CBOR](https://cbor.io) ([RFC 8949](https://www.rfc-editor.org/rfc/rfc8949.html)) instead. The encoding is defined by the following CDDL ([RFC 8610](https://www.rfc-editor.org/rfc/rfc8610)):

```
;;; CDDL (RFC 8610) Specification of SignDoc for SIGN_MODE_TEXTUAL.
;;; Must be encoded using CBOR deterministic encoding (RFC 8949, section 4.2.1).

;; A Textual document is a struct containing one field: an array of screens.
sign_doc = {
  screens_key: [* screen],
}

;; The key is an integer to keep the encoding small.
screens_key = 1

;; A screen consists of a title, a content string, an indentation, and the
;; expert flag, represented as an integer-keyed map. All entries are optional
;; and MUST be omitted from the encoding if empty, zero, or false.
;; Text defaults to the empty string, indent defaults to zero,
;; and expert defaults to false.
screen = {
  ? title_key: tstr,
  ? content_key: tstr,
  ? indent_key: uint,
  ? expert_key: bool,
}

;; Keys are small integers to keep the encoding small.
title_key = 1
content_key = 2
indent_key = 3
expert_key = 4
```

Defining the sign\_doc directly as an array of screens has also been considered. However, given the possibility of future iterations of this specification, a single-keyed struct has been chosen over the former proposal, as structs allow for easier backwards compatibility.
## Details

In the examples that follow, screens will be shown as lines of text, indentation is indicated with a leading '>', and expert screens are marked with a leading `*`.

### Encoding of the Transaction Envelope

We define the "transaction envelope" as all data in a transaction that is not in the `TxBody.Messages` field. The transaction envelope includes the fee, signer infos, and memo, but doesn't include `Msg`s. `//` denotes comments and is not shown on the Ledger device.

```protobuf expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
Chain ID: <string>
Account number: <uint64>
Sequence: <uint64>
Address: <string>
*Public Key: <Any>
This transaction has <int> Message(s)                 // Pluralize "Message" only when int>1.
> Message (<int>/<int>): <Any>                        // See value renderers for Any rendering.
End of Message
Memo: <string>                                        // Skipped if no memo set.
Fee: <coins>                                          // See value renderers for coins rendering.
*Fee payer: <string>                                  // Skipped if no fee_payer set.
*Fee granter: <string>                                // Skipped if no fee_granter set.
Tip: <coins>                                          // Skipped if no tip.
Tipper: <string>
*Gas Limit: <uint64>
*Timeout Height: <uint64>                             // Skipped if no timeout_height set.
*Other signer: <int> SignerInfo                       // Skipped if the transaction only has 1 signer.
*> Other signer (<int>/<int>): <SignerInfo>
*End of other signers
*Extension options: <int> Any                         // Skipped if no body extension options.
*> Extension options (<int>/<int>): <Any>
*End of extension options
*Non critical extension options: <int> Any            // Skipped if no body non critical extension options.
*> Non critical extension options (<int>/<int>): <Any>
*End of Non critical extension options
*Hash of raw bytes: <hex_string>                      // Hex encoding of bytes defined, to prevent tx hash malleability.
```

### Encoding of the Transaction Body

The transaction body is the `Tx.TxBody.Messages` field, which is an array of `Any`s, where each `Any` packs a `sdk.Msg`. Since `sdk.Msg`s are widely used, they have a slightly different encoding than the usual array of `Any`s (Protobuf: `repeated google.protobuf.Any`) described in Annex 1.

```
This transaction has <int> message(s):                // Optional 's' for "message" if there's >1 sdk.Msg.
// For each Msg, print the following 2 lines:
Msg (<int>/<int>): <string>                           // E.g. Msg (1/2): bank v1beta1 send coins
<value rendering of the Msg>
End of transaction messages
```

#### Example

Given the following Protobuf message:

```protobuf expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
message Grant {
  google.protobuf.Any       authorization = 1 [(cosmos_proto.accepts_interface) = "cosmos.authz.v1beta1.Authorization"];
  google.protobuf.Timestamp expiration    = 2 [(gogoproto.stdtime) = true, (gogoproto.nullable) = false];
}

message MsgGrant {
  option (cosmos.msg.v1.signer) = "granter";

  string granter = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
  string grantee = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
}
```

and a transaction containing 1 such `sdk.Msg`, we get the following encoding:

```
This transaction has 1 message:
Msg (1/1): authz v1beta1 grant
Granter: cosmos1abc...def
Grantee: cosmos1ghi...jkl
End of transaction messages
```

### Custom `Msg` Renderers

Application developers may choose not to follow the default renderer value output for their own `Msg`s. In this case, they can implement their own custom `Msg` renderer. This is similar to [EIP4430](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-4430.md), where the smart contract developer chooses the description string to be shown to the end user.

This is done by setting the `cosmos.msg.textual.v1.expert_custom_renderer` Protobuf option to a non-empty string. This option CAN ONLY be set on a Protobuf message representing a transaction message object (i.e. implementing the `sdk.Msg` interface).

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
message MsgFooBar {
  // Optional comments to describe in human-readable language the formatting
  // rules of the custom renderer.
  option (cosmos.msg.textual.v1.expert_custom_renderer) = "<algo_name>";

  // proto fields
}
```

When this option is set on a `Msg`, a registered function will transform the `Msg` into an array of one or more strings, which MAY use the key/value format (described in point #3) with the expert field prefix (described in point #5) and arbitrary indentation (point #6). These strings MAY be rendered from a `Msg` field using a default value renderer, or they may be generated from several fields using custom logic.

The `<algo_name>` is a string convention chosen by the application developer and is used to identify the custom `Msg` renderer. For example, the documentation or specification of this custom algorithm can reference this identifier. This identifier CAN have a versioned suffix (e.g. `_v1`) to adapt for future changes (which would be consensus-breaking). We also recommend adding Protobuf comments to describe in human language the custom logic used.

Moreover, the renderer must provide 2 functions: one for formatting from Protobuf to string, and one for parsing string to Protobuf. These 2 functions are provided by the application developer. To satisfy point #1, the parse function MUST be the inverse of the formatting function. This property will not be checked by the SDK at runtime. However, we strongly recommend the application developer to include a comprehensive test suite in their app repo to test invertibility, so as not to introduce security bugs.

### Require signing over the `TxBody` and `AuthInfo` raw bytes

Recall that the transaction bytes merkleized on chain are the Protobuf binary serialization of [TxRaw](https://buf.build/cosmos/cosmos-sdk/sdk/v0.50/main:cosmos.tx.v1beta1#cosmos.tx.v1beta1.TxRaw), which contains the `body_bytes` and `auth_info_bytes`. Moreover, the transaction hash is defined as the SHA256 hash of the `TxRaw` bytes.
We require that the user signs over these bytes in SIGN\_MODE\_TEXTUAL, more specifically over the following string:

```
*Hash of raw bytes: <HEX(sha256(len(body_bytes) ++ body_bytes ++ len(auth_info_bytes) ++ auth_info_bytes))>
```

where:

* `++` denotes concatenation,
* `HEX` is the hexadecimal representation of the bytes, all in capital letters, no `0x` prefix,
* and `len()` is encoded as a Big-Endian uint64.

This is to prevent transaction hash malleability. Point #1 about invertibility ensures that transaction `body` and `auth_info` values are not malleable, but the transaction hash might still be malleable with point #1 only, because the SIGN\_MODE\_TEXTUAL strings don't follow the byte ordering defined in `body_bytes` and `auth_info_bytes`.

Without this hash, a malicious validator or exchange could intercept a transaction, modify its transaction hash *after* the user signed it using SIGN\_MODE\_TEXTUAL (by tweaking the byte ordering inside `body_bytes` or `auth_info_bytes`), and then submit it to Tendermint.

By including this hash in the SIGN\_MODE\_TEXTUAL signing payload, we keep the same level of guarantees as [SIGN\_MODE\_DIRECT](/sdk/v0.50/build/architecture/adr-020-protobuf-transaction-encoding).

These bytes are only shown in expert mode, hence the leading `*`.

## Updates to the current specification

The current specification is not set in stone, and future iterations are to be expected. We distinguish two categories of updates to this specification:

1. Updates that require changes of the hardware device embedded application.
2. Updates that only modify the envelope and the value renderers.

Updates in the 1st category include changes of the `Screen` struct or its corresponding CBOR encoding. This type of update requires a modification of the hardware signer application, to be able to decode and parse the new types. Backwards compatibility must also be guaranteed, so that the new hardware application works with existing versions of the SDK.
These updates require the coordination of multiple parties: SDK developers, hardware application developers (currently: Zondax), and client-side developers (e.g. CosmJS). Furthermore, a new submission of the hardware device application may be necessary, which, depending on the vendor, can take some time. As such, we recommend avoiding this type of update as much as possible.

Updates in the 2nd category include changes to any of the value renderers or to the transaction envelope. For example, the ordering of fields in the envelope can be swapped, or the timestamp formatting can be modified. Since SIGN\_MODE\_TEXTUAL sends `Screen`s to the hardware device, changes of this type do not need a hardware wallet application update. They are however state-machine-breaking, and must be documented as such. They require the coordination of SDK developers with client-side developers (e.g. CosmJS), so that the updates are released on both sides close to each other in time.

We define a spec version, which is an integer that must be incremented on each update of either category. This spec version will be exposed by the SDK's implementation, and can be communicated to clients. For example, SDK v0.50 might use the spec version 1, and SDK v0.51 might use 2; thanks to this versioning, clients can know how to craft SIGN\_MODE\_TEXTUAL transactions based on the target SDK version.

The current spec version is defined in the "Status" section, at the top of this document. It is initialized to `0` to allow flexibility in choosing how to define future versions, as it would allow adding a field either in the SignDoc Go struct or in Protobuf in a backwards-compatible way.

## Additional Formatting by the Hardware Device

See [annex 2](/sdk/v0.50/build/architecture/adr-050-sign-mode-textual-annex2).

## Examples

1. A minimal MsgSend: [see transaction](https://github.com/cosmos/cosmos-sdk/blob/094abcd393379acbbd043996024d66cd65246fb1/tx/textual/internal/testdata/e2e.json#L2-L70).
2.
A transaction with a bit of everything: [see transaction](https://github.com/cosmos/cosmos-sdk/blob/094abcd393379acbbd043996024d66cd65246fb1/tx/textual/internal/testdata/e2e.json#L71-L270).

The examples above are stored in a JSON file with the following fields:

* `proto`: the representation of the transaction in ProtoJSON,
* `screens`: the transaction rendered into SIGN\_MODE\_TEXTUAL screens,
* `cbor`: the sign bytes of the transaction, which is the CBOR encoding of the screens.

## Consequences

### Backwards Compatibility

SIGN\_MODE\_TEXTUAL is purely additive, and doesn't break backwards compatibility with other sign modes.

### Positive

* Human-friendly way of signing in hardware devices.
* Once SIGN\_MODE\_TEXTUAL is shipped, SIGN\_MODE\_LEGACY\_AMINO\_JSON can be deprecated and removed. In the longer term, once the ecosystem has fully migrated, Amino can be removed entirely.

### Negative

* Some fields are still encoded in non-human-readable ways, such as public keys in hexadecimal.
* A new Ledger app needs to be released; its timeline is still unclear.

### Neutral

* If the transaction is complex, the string array can be arbitrarily long, and some users might just skip some screens and blind sign.

## Further Discussions

* Some details on value renderers need to be polished, see [Annex 1](/sdk/v0.50/build/architecture/adr-050-sign-mode-textual-annex1).
* Are Ledger apps able to support both SIGN\_MODE\_LEGACY\_AMINO\_JSON and SIGN\_MODE\_TEXTUAL at the same time?
* Open question: should we add a Protobuf field option to allow app developers to overwrite the textual representation of certain Protobuf fields and messages? This would be similar to Ethereum's [EIP4430](https://github.com/ethereum/EIPs/pull/4430), where the contract developer decides on the textual representation.
* Internationalization.
## References * [Annex 1](/sdk/v0.50/build/architecture/adr-050-sign-mode-textual-annex1) * Initial discussion: [Link](https://github.com/cosmos/cosmos-sdk/issues/6513) * Living document used in the working group: [Link](https://hackmd.io/fsZAO-TfT0CKmLDtfMcKeA?both) * Working group meeting notes: [Link](https://hackmd.io/7RkGfv_rQAaZzEigUYhcXw) * Ethereum's "Described Transactions" [Link](https://github.com/ethereum/EIPs/pull/4430) # ADR 050: SIGN_MODE_TEXTUAL: Annex 1 Value Renderers Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-050-sign-mode-textual-annex1 ## Changelog * Dec 06, 2021: Initial Draft * Feb 07, 2022: Draft read and concept-ACKed by the Ledger team. * Dec 01, 2022: Remove `Object: ` prefix on Any header screen. * Dec 13, 2022: Sign over bytes hash when bytes length > 32. * Mar 27, 2023: Update `Any` value renderer to omit message header screen. ## Status Accepted. Implementation started. Small value renderers details still need to be polished. ## Abstract This Annex describes value renderers, which are used for displaying Protobuf values in a human-friendly way using a string array. ## Value Renderers Value Renderers describe how values of different Protobuf types should be encoded as a string array. Value renderers can be formalized as a set of bijective functions `func renderT(value T) []string`, where `T` is one of the below Protobuf types for which this spec is defined. ### Protobuf `number` * Applies to: * protobuf numeric integer types (`int{32,64}`, `uint{32,64}`, `sint{32,64}`, `fixed{32,64}`, `sfixed{32,64}`) * strings whose `customtype` is `github.com/cosmos/cosmos-sdk/types.Int` or `github.com/cosmos/cosmos-sdk/types.Dec` * bytes whose `customtype` is `github.com/cosmos/cosmos-sdk/types.Int` or `github.com/cosmos/cosmos-sdk/types.Dec` * Trailing decimal zeroes are always removed * Formatting with `'`s for every three integral digits. * Usage of `.` to denote the decimal delimiter. 
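The number rendering rules above can be sketched in Go as follows. This is a simplified illustration (not the SDK's value renderer): it takes a plain decimal string, trims trailing decimal zeroes, and groups integral digits in threes with `'`; negative numbers and validation are out of scope, and the function name `renderNumber` is an assumption of this sketch.

```go
package main

import (
	"fmt"
	"strings"
)

// renderNumber sketches the number value renderer: trailing decimal zeroes
// are removed, integral digits are grouped in threes with an apostrophe,
// and `.` is the decimal delimiter. Input is a non-negative decimal string.
func renderNumber(s string) string {
	intPart, fracPart := s, ""
	if i := strings.IndexByte(s, '.'); i >= 0 {
		intPart, fracPart = s[:i], strings.TrimRight(s[i+1:], "0")
	}
	// Group integral digits in threes, from the right.
	var groups []string
	for len(intPart) > 3 {
		groups = append([]string{intPart[len(intPart)-3:]}, groups...)
		intPart = intPart[:len(intPart)-3]
	}
	groups = append([]string{intPart}, groups...)
	out := strings.Join(groups, "'")
	if fracPart != "" {
		out += "." + fracPart
	}
	return out
}

func main() {
	fmt.Println(renderNumber("1000"))       // 1'000
	fmt.Println(renderNumber("1000000.00")) // 1'000'000
	fmt.Println(renderNumber("1000000.10")) // 1'000'000.1
}
```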
#### Examples

* `1000` (uint64) -> `1'000`
* `"1000000.00"` (string representing a Dec) -> `1'000'000`
* `"1000000.10"` (string representing a Dec) -> `1'000'000.1`

### `coin`

* Applies to `cosmos.base.v1beta1.Coin`.
* Denoms are converted to `display` denoms using `Metadata` (if available). **This requires a state query**. The definition of `Metadata` can be found in the [bank protobuf definition](https://buf.build/cosmos/cosmos-sdk/docs/main:cosmos.bank.v1beta1#cosmos.bank.v1beta1.Metadata). If the `display` field is empty or nil, then we do not perform any denom conversion.
* Amounts are converted to `display` denom amounts and rendered as `number`s above.
* We do not change the capitalization of the denom. In practice, `display` denoms are stored in lowercase in state (e.g. `10 atom`), however they are often shown in UPPERCASE in everyday life (e.g. `10 ATOM`). Value renderers keep the case used in state, but we may recommend that chains change the denom metadata to uppercase for better user display.
* One space between the denom and amount (e.g. `10 atom`).
* In the future, IBC denoms could be converted to DIDs/IIDs, if we can find a robust way of doing this (e.g. `cosmos:cosmos:hub:bank:denom:atom`).

#### Examples

* `1000000000uatom` -> `["1'000 atom"]`, because atom is the metadata's display denom.

### `coins`

* An array of `coin` is displayed as the concatenation of each `coin` encoded as the specification above, joined together with the delimiter `", "` (a comma and a space, no quotes around).
* The list of coins is ordered by unicode code point of the display denom: `A-Z` \< `a-z`. For example, the string `aAbBcC` would be sorted `ABCabc`.
* If the coins list has 0 items, it is rendered as `zero`.

### Example

* `["3cosm", "2000000uatom"]` -> `2 atom, 3 COSM` (assuming the display denoms are `atom` and `COSM`)
* `["10atom", "20Acoin"]` -> `20 Acoin, 10 atom` (assuming the display denoms are `atom` and `Acoin`)
* `[]` -> `zero`

### `repeated`

* Applies to all `repeated` fields, except `cosmos.tx.v1beta1.TxBody#Messages`, which has a particular encoding (see [ADR-050](/sdk/v0.50/build/architecture/adr-050-sign-mode-textual)).
* A repeated type has the following template:

```
<field_name>: <int> <field_kind>
<field_name> (<index>/<int>): <value rendered 1st line>
<optional value rendered in the next lines>
<field_name> (<index>/<int>): <value rendered 1st line>
<optional value rendered in the next lines>
End of <field_name>.
```

where:

* `field_name` is the Protobuf field name of the repeated field
* `field_kind`:
  * if the type of the repeated field is a message, `field_kind` is the message name
  * if the type of the repeated field is an enum, `field_kind` is the enum name
  * in any other case, `field_kind` is the protobuf primitive type (e.g. "string" or "bytes")
* `int` is the length of the array
* `index` is the one-based index of the repeated field

#### Examples

Given the proto definition:

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
message AllowedMsgAllowance {
  repeated string allowed_messages = 1;
}
```

and initializing with:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
x := AllowedMsgAllowance{
	AllowedMessages: []string{"cosmos.bank.v1beta1.MsgSend", "cosmos.gov.v1.MsgVote"},
}
```

we have the following value-rendered encoding:

```
Allowed messages: 2 strings
Allowed messages (1/2): cosmos.bank.v1beta1.MsgSend
Allowed messages (2/2): cosmos.gov.v1.MsgVote
End of Allowed messages
```

### `message`

* Applies to all Protobuf messages that do not have a custom encoding.
* Field names follow [sentence case](https://en.wiktionary.org/wiki/sentence_case):
  * replace each `_` with a space,
  * capitalize the first letter of the sentence.
* Field names are ordered by their Protobuf field number.
* The screen title is the field name, and the screen content is the value.
* Nesting:
  * if a field contains a nested message, we value-render the underlying message using the template:

```
<field_name>: <1st line of value-rendered message>
> <lines 2-n of value-rendered message>        // Notice the `>` prefix.
```

  * the `>` character is used to denote nesting. For each additional level of nesting, add `>`.

#### Examples

Given the following Protobuf messages:

```protobuf expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
enum VoteOption {
  VOTE_OPTION_UNSPECIFIED = 0;
  VOTE_OPTION_YES = 1;
  VOTE_OPTION_ABSTAIN = 2;
  VOTE_OPTION_NO = 3;
  VOTE_OPTION_NO_WITH_VETO = 4;
}

message WeightedVoteOption {
  VoteOption option = 1;
  string     weight = 2 [(cosmos_proto.scalar) = "cosmos.Dec"];
}

message Vote {
  uint64 proposal_id = 1;
  string voter       = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
  reserved 3;
  repeated WeightedVoteOption options = 4;
}
```

we get the following encoding for the `Vote` message:

```
Vote object
> Proposal id: 4
> Voter: cosmos1abc...def
> Options: 2 WeightedVoteOptions
> Options (1/2): WeightedVoteOption object
>> Option: VOTE_OPTION_YES
>> Weight: 0.7
> Options (2/2): WeightedVoteOption object
>> Option: VOTE_OPTION_NO
>> Weight: 0.3
> End of Options
```

### Enums

* Show the enum variant name as a string.

#### Examples

See the example above with `message Vote{}`.

### `google.protobuf.Any`

* Applies to `google.protobuf.Any`.
* Rendered as:

```
<type_url>
> <value rendered underlying message>
```

There is however one exception: when the underlying message is a Protobuf message that does not have a custom encoding, then the message header screen is omitted, and one level of indentation is removed.
Messages that have a custom encoding, including `google.protobuf.Timestamp`, `google.protobuf.Duration`, `google.protobuf.Any`, `cosmos.base.v1beta1.Coin`, and messages that have an app-defined custom encoding, will preserve their header and indentation level.

#### Examples

The message header screen is stripped, and one level of indentation is removed:

```
/cosmos.gov.v1.Vote
> Proposal id: 4
> Voter: cosmos1abc...def
> Options: 2 WeightedVoteOptions
> Options (1/2): WeightedVoteOption object
>> Option: Yes
>> Weight: 0.7
> Options (2/2): WeightedVoteOption object
>> Option: No
>> Weight: 0.3
> End of Options
```

Message with custom encoding:

```
/cosmos.base.v1beta1.Coin
> 10uatom
```

### `google.protobuf.Timestamp`

Rendered using [RFC 3339](https://www.rfc-editor.org/rfc/rfc3339) (a simplification of ISO 8601), which is the current recommendation for portable time values. The rendering always uses "Z" (UTC) as the timezone. It uses only the necessary fractional digits of a second, omitting the fractional part entirely if the timestamp has no fractional seconds. (The resulting timestamps are not automatically sortable by standard lexicographic order, but we favor the legibility of the shorter string.)

#### Examples

The timestamp with 1136214245 seconds and 700000000 nanoseconds is rendered as `2006-01-02T15:04:05.7Z`. The timestamp with 1136214245 seconds and zero nanoseconds is rendered as `2006-01-02T15:04:05Z`.

### `google.protobuf.Duration`

The duration proto expresses a raw number of seconds and nanoseconds. This will be rendered as longer time units of days, hours, and minutes, plus any remaining seconds, in that order. Leading and trailing zero-quantity units will be omitted, but all units in between nonzero units will be shown, e.g. `3 days, 0 hours, 0 minutes, 5 seconds`.

Even longer time units such as months or years are imprecise. Weeks are precise, but not commonly used - `91 days` is more immediately legible than `13 weeks`.
Although `days` can be problematic, e.g. noon to noon on subsequent days can be 23 or 25 hours depending on daylight savings transitions, there is significant advantage in using strict 24-hour days over using only hours (e.g. `91 days` vs `2184 hours`).

When nanoseconds are nonzero, they will be shown as fractional seconds, with only the minimum number of digits, e.g. `0.5 seconds`.

A duration of exactly zero is shown as `0 seconds`.

Units will be given as singular (no trailing `s`) when the quantity is exactly one, and will be shown in plural otherwise.

Negative durations will be indicated with a leading minus sign (`-`).

Examples:

* `1 day`
* `30 days`
* `-1 day, 12 hours`
* `3 hours, 0 minutes, 53.025 seconds`

### bytes

* Bytes of length shorter than or equal to 35 are rendered in hexadecimal, all capital letters, without the `0x` prefix.
* Bytes of length greater than 35 are hashed using SHA256. The rendered text is `SHA-256=`, followed by the 32-byte hash, in hexadecimal, all capital letters, without the `0x` prefix.
* The hexadecimal string is finally separated into groups of 4 digits, with a space `' '` as separator. If the bytes length is odd, the 2 remaining hexadecimal characters are at the end.

The number 35 was chosen because it is the longest length where the hashed-and-prefixed representation is longer than the original data directly formatted, using the 3 rules above. More specifically:

* a 35-byte array will have 70 hex characters, plus 17 space characters, resulting in 87 characters.
* byte arrays starting from length 36 will be hashed to 32 bytes, which is 64 hex characters plus 15 spaces, and with the `SHA-256=` prefix, it takes 87 characters.

Also, secp256k1 public keys have length 33, so their Textual representation is not their hashed value, which we would like to avoid.

Note: Data longer than 35 bytes are not rendered in a way that can be inverted.
See ADR-050's [section about invertible rendering](/sdk/v0.50/build/architecture/adr-050-sign-mode-textual#invertible-rendering) for a discussion.

#### Examples

Inputs are displayed as byte arrays.

* `[0]`: `00`
* `[0,1,2]`: `0001 02`
* `[0,1,2,..,34]`: `0001 0203 0405 0607 0809 0A0B 0C0D 0E0F 1011 1213 1415 1617 1819 1A1B 1C1D 1E1F 2021 22`
* `[0,1,2,..,35]`: `SHA-256=5D7E 2D9B 1DCB C85E 7C89 0036 A2CF 2F9F E7B6 6554 F2DF 08CE C6AA 9C0A 25C9 9C21`

### address bytes

We currently use `string` types in protobuf for addresses, so this may not be needed. However, if any address bytes are used in SIGN\_MODE\_TEXTUAL, they should be rendered with bech32 formatting.

### strings

Strings are rendered as-is.

### Default Values

* Default Protobuf values for each field are skipped.

#### Example

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
message TestData {
  string signer = 1;
  string metadata = 2;
}
```

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
myTestData := TestData{
	Signer: "cosmos1abc",
}
```

We get the following encoding for the `TestData` message:

```
TestData object
> Signer: cosmos1abc
```

### bool

Boolean values are rendered as `True` or `False`.

### \[ABANDONED] Custom `msg_title` instead of Msg `type_url`

*This paragraph is in the Annex for informational purposes only, and will be removed in a future update of the ADR.*

* All protobuf messages to be used with `SIGN_MODE_TEXTUAL` CAN have a short title associated with them, usable in format strings whenever the type URL is explicitly referenced, via the `cosmos.msg.v1.textual.msg_title` Protobuf message option.
* If this option is not specified for a Msg, then the Protobuf fully qualified name will be used.
```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
message MsgSend {
  option (cosmos.msg.v1.textual.msg_title) = "bank send coins";
}
```

* They MUST be unique per message, per chain.

#### Examples

* `cosmos.gov.v1.MsgVote` -> `governance v1 vote`

#### Best Practices

We recommend using this option only for `Msg`s whose Protobuf fully qualified name can be hard to understand. As such, the two examples above (`MsgSend` and `MsgVote`) are not good candidates for `msg_title`. We still allow `msg_title` for chains that might have `Msg`s with complex or non-obvious names.

In those cases, we recommend dropping the version (e.g. `v1`) in the string if there's only one version of the module on chain. This way, the bijective mapping can figure out which message each string corresponds to.

If multiple Protobuf versions of the same module exist on the same chain, we recommend keeping the first `msg_title` without the version, and the second `msg_title` with the version (e.g. `v2`):

* `mychain.mymodule.v1.MsgDo` -> `mymodule do something`
* `mychain.mymodule.v2.MsgDo` -> `mymodule v2 do something`

# ADR 050: SIGN_MODE_TEXTUAL: Annex 2

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-050-sign-mode-textual-annex2

## Changelog

* Oct 3, 2022: Initial Draft

## Status

DRAFT

## Abstract

This annex provides normative guidance on how devices should render a `SIGN_MODE_TEXTUAL` document.

## Context

`SIGN_MODE_TEXTUAL` allows a legible version of a transaction to be signed on a hardware security device, such as a Ledger. Early versions of the design rendered transactions directly to lines of ASCII text, but this proved awkward due to its in-band signaling and the need to display Unicode text within the transaction.
## Decision

`SIGN_MODE_TEXTUAL` renders to an abstract representation, leaving it up to device-specific software how to present this representation given the capabilities, limitations, and conventions of the device.

We offer the following normative guidance:

1. The presentation should be as legible as possible to the user, given the capabilities of the device. If legibility could be sacrificed for other properties, we would instead recommend just using some other signing mode. Legibility should focus on the common case; it is okay for unusual cases to be less legible.

2. The presentation should be invertible if possible without substantial sacrifice of legibility. Any change to the rendered data should result in a visible change to the presentation. This extends the integrity of the signing to the user-visible presentation.

3. The presentation should follow the normal conventions of the device, without sacrificing legibility or invertibility.

As an illustration of these principles, here is an example algorithm for presentation on a device which can display a single 80-character line of printable ASCII characters:

* The presentation is broken into lines, and each line is presented in sequence, with user controls for going forward or backward a line.
* Expert mode screens are only presented if the device is in expert mode.
* Each line of the screen starts with a number of `>` characters equal to the screen's indentation level, followed by a `+` character if this isn't the first line of the screen, followed by a space if either a `>` or a `+` has been emitted, or if this header is followed by a `>`, `+`, or space.
* If the line ends with whitespace or an `@` character, an additional `@` character is appended to the line.
* The following ASCII control characters or backslash (`\`) are converted to a backslash followed by a letter code, in the manner of string literals in many languages:

  * a: U+0007 alert or bell
  * b: U+0008 backspace
  * f: U+000C form feed
  * n: U+000A line feed
  * r: U+000D carriage return
  * t: U+0009 horizontal tab
  * v: U+000B vertical tab
  * `\`: U+005C backslash

* All other ASCII control characters, plus non-ASCII Unicode code points, are shown as either:

  * `\u` followed by 4 uppercase hex characters for code points in the basic multilingual plane (BMP).
  * `\U` followed by 8 uppercase hex characters for other code points.

* The screen will be broken into multiple lines to fit the 80-character limit, considering the above transformations, in a way that attempts to minimize the number of lines generated. Expanded control or Unicode characters are never split across lines.

Example output:

```
An introductory line.
key1: 123456
key2: a string that ends in whitespace @
key3: a string that ends in a single at sign - @@
>tricky key4<: note the leading space in the presentation
introducing an aggregate
> key5: false
> key6: a very long line of text, please co\u00F6perate and break into
>+ multiple lines.
> Can we do further nesting?
>> You bet we can!
```

The inverse mapping gives us the only input which could have generated this output (JSON notation for string data):

```
Indent  Text
------  ----
0       "An introductory line."
0       "key1: 123456"
0       "key2: a string that ends in whitespace "
0       "key3: a string that ends in a single at sign - @"
0       ">tricky key4<: note the leading space in the presentation"
0       "introducing an aggregate"
1       "key5: false"
1       "key6: a very long line of text, please coöperate and break into multiple lines."
1       "Can we do further nesting?"
2       "You bet we can!"
```

# ADR 053: Go Module Refactoring

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-053-go-module-refactoring

2022-04-27: First Draft

## Changelog

* 2022-04-27: First Draft

## Status

PROPOSED

## Abstract

The current SDK is built as a single monolithic go module. This ADR describes how we refactor the SDK into smaller independently versioned go modules for ease of maintenance.

## Context

Go modules impose certain requirements on software projects with respect to stable version numbers (anything above 0.x) in that [any API breaking changes necessitate a major version](https://go.dev/doc/modules/release-workflow#breaking) increase which technically creates a new go module (with a v2, v3, etc. suffix). [Keeping modules API compatible](https://go.dev/blog/module-compatibility) in this way requires a fair amount of thought and discipline.

The Cosmos SDK is a fairly large project which originated before go modules came into existence and has always been under a v0.x release even though it has been used in production for years now, not because it isn't production quality software, but rather because the API compatibility guarantees required by go modules are fairly complex to adhere to with such a large project. Up to now, it has generally been deemed more important to be able to break the API if needed rather than require all users to update all package import paths to accommodate breaking changes causing v2, v3, etc. releases. This is in addition to the other complexities related to protobuf generated code that will be addressed in a separate ADR.

Nevertheless, the desire for semantic versioning has been [strong in the community](https://github.com/cosmos/cosmos-sdk/discussions/10162) and the single go module release process has made it very hard to release small changes to isolated features in a timely manner.
Release cycles often exceed six months which means small improvements done in a day or two get bottle-necked by everything else in the monolithic release cycle.

## Decision

To improve the current situation, the SDK is being refactored into multiple go modules within the current repository. There has been a [fair amount of debate](https://github.com/cosmos/cosmos-sdk/discussions/10582#discussioncomment-1813377) as to how to do this, with some developers arguing for larger vs smaller module scopes. There are pros and cons to both approaches (which will be discussed below in the [Consequences](#consequences) section), but the approach being adopted is the following:

* a go module should generally be scoped to a specific coherent set of functionality (such as math, errors, store, etc.)
* when code is removed from the core SDK and moved to a new module path, every effort should be made to avoid API breaking changes in the existing code using aliases and wrapper types (as done in [Link](https://github.com/cosmos/cosmos-sdk/pull/10779) and [Link](https://github.com/cosmos/cosmos-sdk/pull/11788))
* new go modules should be moved to a standalone domain (`cosmossdk.io`) before being tagged as `v1.0.0` to accommodate the possibility that they may be better served by a standalone repository in the future
* all go modules should follow the guidelines in [Link](https://go.dev/blog/module-compatibility) before `v1.0.0` is tagged and should make use of `internal` packages to limit the exposed API surface
* the new go module's API may deviate from the existing code where there are clear improvements to be made or to remove legacy dependencies (for instance on amino or gogo proto), as long as the old package attempts to avoid API breakage with aliases and wrappers
* care should be taken when simply trying to turn an existing package into a new go module: [Link](https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository).
In general, it seems safer to just create a new module path (appending v2, v3, etc. if necessary), rather than trying to make an old package a new module.

## Consequences

### Backwards Compatibility

If the above guidelines are followed to use aliases or wrapper types in existing APIs that point back to the new go modules, there should be no or very limited breaking changes to existing APIs.

### Positive

* standalone pieces of software will reach `v1.0.0` sooner
* new features to specific functionality will be released sooner

### Negative

* there will be more go module versions to update in the SDK itself and per-project, although most of these will hopefully be indirect

### Neutral

## Further Discussions

Further discussions are occurring primarily in [Link](https://github.com/cosmos/cosmos-sdk/discussions/10582) and within the Cosmos SDK Framework Working Group.

## References

* [Link](https://go.dev/doc/modules/release-workflow)
* [Link](https://go.dev/blog/module-compatibility)
* [Link](https://github.com/cosmos/cosmos-sdk/discussions/10162)
* [Link](https://github.com/cosmos/cosmos-sdk/discussions/10582)
* [Link](https://github.com/cosmos/cosmos-sdk/pull/10779)
* [Link](https://github.com/cosmos/cosmos-sdk/pull/11788)

# ADR 054: Semver Compatible SDK Modules

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-054-semver-compatible-modules

2022-04-27: First draft

## Changelog

* 2022-04-27: First draft

## Status

DRAFT

## Abstract

In order to move the Cosmos SDK to a system of decoupled semantically versioned modules which can be composed in different combinations (ex. staking v3 with bank v1 and distribution v2), we need to reassess how we organize the API surface of modules to avoid problems with go semantic import versioning and circular dependencies. This ADR explores various approaches we can take to addressing these issues.
## Context

There has been [a fair amount of desire](https://github.com/cosmos/cosmos-sdk/discussions/10162) in the community for semantic versioning in the SDK and there has been significant movement to splitting SDK modules into [standalone go modules](https://github.com/cosmos/cosmos-sdk/issues/11899). Both of these will ideally allow the ecosystem to move faster because we won't be waiting for all dependencies to update synchronously. For instance, we could have 3 versions of the core SDK compatible with the latest 2 releases of CosmWasm as well as 4 different versions of staking. This sort of setup would allow early adopters to aggressively integrate new versions, while allowing more conservative users to be selective about which versions they're ready for.

In order to achieve this, we need to solve the following problems:

1. because of the way [go semantic import versioning](https://research.swtch.com/vgo-import) (SIV) works, moving to SIV naively will actually make it harder to achieve these goals
2. circular dependencies between modules need to be broken to actually release many modules in the SDK independently
3. pernicious minor version incompatibilities introduced through correctly [evolving protobuf schemas](https://developers.google.com/protocol-buffers/docs/proto3#updating) without correct [unknown field filtering](/sdk/v0.50/build/architecture/adr-020-protobuf-transaction-encoding#unknown-field-filtering)

Note that all the following discussion assumes that the proto file versioning and state machine versioning of a module are distinct in that:

* proto files are maintained in a non-breaking way (using something like [buf breaking](https://docs.buf.build/breaking/overview) to ensure all changes are backwards compatible)
* proto file versions get bumped much less frequently, i.e.
we might maintain `cosmos.bank.v1` through many versions of the bank module state machine
* state machine breaking changes are more common and ideally this is what we'd want to semantically version with go modules, ex. `x/bank/v2`, `x/bank/v3`, etc.

### Problem 1: Semantic Import Versioning Compatibility

Consider we have a module `foo` which defines the following `MsgDoSomething` and that we've released its state machine in go module `example.com/foo`:

```protobuf
package foo.v1;

message MsgDoSomething {
  string sender = 1;
  uint64 amount = 2;
}

service Msg {
  rpc DoSomething(MsgDoSomething) returns (MsgDoSomethingResponse);
}
```

Now consider that we make a revision to this module and add a new `condition` field to `MsgDoSomething` and also add a new validation rule on `amount` requiring it to be non-zero, and that following go semantic versioning we release the next state machine version of `foo` as `example.com/foo/v2`.

```protobuf
// Revision 1
package foo.v1;

message MsgDoSomething {
  string sender = 1;

  // amount must be a non-zero integer.
  uint64 amount = 2;

  // condition is an optional condition on doing the thing.
  //
  // Since: Revision 1
  Condition condition = 3;
}
```

Approaching this naively, we would generate the protobuf types for the initial version of `foo` in `example.com/foo/types` and we would generate the protobuf types for the second version in `example.com/foo/v2/types`.
Now let's say we have a module `bar` which talks to `foo` using this keeper interface which `foo` provides:

```go
type FooKeeper interface {
	DoSomething(MsgDoSomething) error
}
```

#### Scenario A: Backward Compatibility: Newer Foo, Older Bar

Imagine we have a chain which uses both `foo` and `bar` and wants to upgrade to `foo/v2`, but the `bar` module has not upgraded to `foo/v2`. In this case, the chain will not be able to upgrade to `foo/v2` until `bar` has upgraded its references to `example.com/foo/types.MsgDoSomething` to `example.com/foo/v2/types.MsgDoSomething`. Even if `bar`'s usage of `MsgDoSomething` has not changed at all, the upgrade will be impossible without this change because `example.com/foo/types.MsgDoSomething` and `example.com/foo/v2/types.MsgDoSomething` are fundamentally different incompatible structs in the go type system.

#### Scenario B: Forward Compatibility: Older Foo, Newer Bar

Now let's consider the reverse scenario, where `bar` upgrades to `foo/v2` by changing the `MsgDoSomething` reference to `example.com/foo/v2/types.MsgDoSomething` and releases that as `bar/v2` with some other changes that a chain wants. The chain, however, has decided that it thinks the changes in `foo/v2` are too risky and that it'd prefer to stay on the initial version of `foo`.

In this scenario, it is impossible to upgrade to `bar/v2` without upgrading to `foo/v2` even if `bar/v2` would have worked 100% fine with `foo` other than changing the import path to `MsgDoSomething` (meaning that `bar/v2` doesn't actually use any new features of `foo/v2`).

Now because of the way go semantic import versioning works, we are locked into either using `foo` and `bar` OR `foo/v2` and `bar/v2`. We cannot have `foo` + `bar/v2` OR `foo/v2` + `bar`. The go type system doesn't allow this even if both versions of these modules are otherwise compatible with each other.
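The lock-in shows up concretely at the interface boundary: a keeper compiled against one generation of the message type can never satisfy an interface written in terms of the other. A minimal sketch, with hypothetical stand-in types:

```go
package main

import "fmt"

// Hypothetical stand-ins for the generated types in foo/types and
// foo/v2/types (illustrative names, not actual SDK types).
type MsgV1 struct{ Sender string }
type MsgV2 struct{ Sender string }

// The keeper interface bar compiled against, in terms of the v1 type.
type FooKeeper interface {
	DoSomething(MsgV1) error
}

// A keeper built against foo/v2: same method name, but the parameter
// type comes from the v2 import path.
type fooV2Keeper struct{}

func (fooV2Keeper) DoSomething(MsgV2) error { return nil }

func main() {
	var k any = fooV2Keeper{}

	// The v2 keeper does not satisfy bar's v1 interface, even though the
	// two message types are otherwise identical.
	_, ok := k.(FooKeeper)
	fmt.Println(ok) // false
}
```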
#### Naive Mitigation

A naive approach to fixing this would be to not regenerate the protobuf types in `example.com/foo/v2/types` but instead just update `example.com/foo/types` to reflect the changes needed for `v2` (adding `condition` and requiring `amount` to be non-zero). Then we could release a patch of `example.com/foo/types` with this update and use that for `foo/v2`. But this change is state machine breaking for `v1`. It requires changing the `ValidateBasic` method to reject the case where `amount` is zero, and it adds the `condition` field which should be rejected based on [ADR 020 unknown field filtering](/sdk/v0.50/build/architecture/adr-020-protobuf-transaction-encoding#unknown-field-filtering). So adding these changes as a patch on `v1` is actually incorrect based on semantic versioning. Chains that want to stay on `v1` of `foo` should not be importing these changes because they are incorrect for `v1`.

### Problem 2: Circular dependencies

None of the above approaches allow `foo` and `bar` to be separate modules if for some reason `foo` and `bar` depend on each other in different ways. For instance, we can't have `foo` import `bar/types` while `bar` imports `foo/types`.

We have several cases of circular module dependencies in the SDK (ex. staking, distribution and slashing) that are legitimate from a state machine perspective. Without separating the API types out somehow, there would be no way to independently semantically version these modules without some other mitigation.

### Problem 3: Handling Minor Version Incompatibilities

Imagine that we solve the first two problems but now have a scenario where `bar/v2` wants the option to use `MsgDoSomething.condition` which only `foo/v2` supports. If `bar/v2` works with `foo` `v1` and sets `condition` to some non-nil value, then `foo` will silently ignore this field resulting in a silent and possibly dangerous logic error.
If `bar/v2` were able to dynamically check whether `foo` was on `v1` or `v2`, it could choose to only use `condition` when `foo/v2` is available. Even if `bar/v2` were able to perform this check, however, how do we know that it is always performing the check properly? Without some sort of framework-level [unknown field filtering](/sdk/v0.50/build/architecture/adr-020-protobuf-transaction-encoding#unknown-field-filtering), it is hard to know whether these pernicious hard to detect bugs are getting into our app and a client-server layer such as [ADR 033: Inter-Module Communication](/sdk/v0.50/build/architecture/adr-033-protobuf-inter-module-comm) may be needed to do this.

## Solutions

### Approach A) Separate API and State Machine Modules

One solution (first proposed in [Link](https://github.com/cosmos/cosmos-sdk/discussions/10582)) is to isolate all protobuf generated code into a separate module from the state machine module. This would mean that we could have state machine go modules `foo` and `foo/v2` which could use a types or API go module say `foo/api`. This `foo/api` go module would be perpetually on `v1.x` and only accept non-breaking changes. This would then allow other modules to be compatible with either `foo` or `foo/v2` as long as the inter-module API only depends on the types in `foo/api`. It would also allow modules `foo` and `bar` to depend on each other in that both of them could depend on `foo/api` and `bar/api` without `foo` directly depending on `bar` and vice versa.

This is similar to the naive mitigation described above except that it separates the types into separate go modules which in and of itself could be used to break circular module dependencies. It has the same problems as the naive solution, otherwise, which we could rectify by:

1. removing all state machine breaking code from the API module (ex. `ValidateBasic` and any other interface methods)
2.
embedding the correct file descriptors for unknown field filtering in the binary

#### Migrate all interface methods on API types to handlers

To solve 1), we need to remove all interface implementations from generated types and instead use a handler approach which essentially means that given a type `X`, we have some sort of resolver which allows us to resolve interface implementations for that type (ex. `sdk.Msg` or `authz.Authorization`).

For example:

```go
func (k Keeper) DoSomething(msg MsgDoSomething) error {
	var validateBasicHandler ValidateBasicHandler
	err := k.resolver.Resolve(&validateBasicHandler, msg)
	if err != nil {
		return err
	}

	err = validateBasicHandler.ValidateBasic()
	...
}
```

In the case of some methods on `sdk.Msg`, we could replace them with declarative annotations. For instance, `GetSigners` can already be replaced by the protobuf annotation `cosmos.msg.v1.signer`. In the future, we may consider some sort of protobuf validation framework (like [Link](https://github.com/bufbuild/protoc-gen-validate) but more Cosmos-specific) to replace `ValidateBasic`.

#### Pinned FileDescriptor's

To solve 2), state machine modules must be able to specify what the version of the protobuf files was that they were built against. For instance if the API module for `foo` upgrades to `foo/v2`, the original `foo` module still needs a copy of the original protobuf files it was built with so that ADR 020 unknown field filtering will reject `MsgDoSomething` when `condition` is set.

The simplest way to do this may be to embed the protobuf `FileDescriptor`s into the module itself so that these `FileDescriptor`s are used at runtime rather than the ones that are built into the `foo/api` which may be different.
Using [buf build](https://docs.buf.build/build/usage#output-format), [go embed](https://pkg.go.dev/embed), and a build script we can probably come up with a solution for embedding `FileDescriptor`s into modules that is fairly straightforward.

#### Potential limitations to generated code

One challenge with this approach is that it places heavy restrictions on what can go in API modules and requires that most of this is state machine breaking. All or most of the code in the API module would be generated from protobuf files, so we can probably control this with how code generation is done, but it is a risk to be aware of.

For instance, we do code generation for the ORM that in the future could contain optimizations that are state machine breaking. We would either need to ensure very carefully that the optimizations aren't actually state machine breaking in generated code or separate this generated code out from the API module into the state machine module. Both of these mitigations are potentially viable but the API module approach does require an extra level of care to avoid these sorts of issues.

#### Minor Version Incompatibilities

This approach in and of itself does little to address any potential minor version incompatibilities and the requisite [unknown field filtering](/sdk/v0.50/build/architecture/adr-020-protobuf-transaction-encoding#unknown-field-filtering). Likely some sort of client-server routing layer which does this check such as [ADR 033: Inter-Module communication](/sdk/v0.50/build/architecture/adr-033-protobuf-inter-module-comm) is required to make sure that this is done properly. We could then allow modules to perform a runtime check given a `MsgClient`, ex:

```go
func (k Keeper) CallFoo() error {
	if k.interModuleClient.MinorRevision(k.fooMsgClient) >= 2 {
		k.fooMsgClient.DoSomething(&MsgDoSomething{Condition: ...})
	} else {
		...
	}
}
```

To do the unknown field filtering itself, the ADR 033 router would need to use the [protoreflect API](https://pkg.go.dev/google.golang.org/protobuf/reflect/protoreflect) to ensure that no fields unknown to the receiving module are set. This could result in an undesirable performance hit depending on how complex this logic is.

### Approach B) Changes to Generated Code

An alternate approach to solving the versioning problem is to change how protobuf code is generated and move modules mostly or completely in the direction of inter-module communication as described in [ADR 033](/sdk/v0.50/build/architecture/adr-033-protobuf-inter-module-comm). In this paradigm, a module could generate all the types it needs internally - including the API types of other modules - and talk to other modules via a client-server boundary. For instance, if `bar` needs to talk to `foo`, it could generate its own version of `MsgDoSomething` as `bar/internal/foo/v1.MsgDoSomething` and just pass this to the inter-module router which would somehow convert it to the version which foo needs (ex. `foo/internal.MsgDoSomething`).

Currently, two generated structs for the same protobuf type cannot exist in the same go binary without special build flags (see [Link](https://developers.google.com/protocol-buffers/docs/reference/go/faq#fix-namespace-conflict)). A relatively simple mitigation to this issue would be to set up the protobuf code to not register protobuf types globally if they are generated in an `internal/` package. This will require modules to register their types manually with the app-level protobuf registry; this is similar to what modules already do with the `InterfaceRegistry` and amino codec.

If modules *only* do ADR 033 message passing then a naive and non-performant solution for converting `bar/internal/foo/v1.MsgDoSomething` to `foo/internal.MsgDoSomething` would be marshaling and unmarshaling in the ADR 033 router.
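The naive router-side conversion amounts to a wire-format round trip between two structurally identical but distinct types. A stdlib sketch (JSON standing in for the proto wire format, names hypothetical):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// bar's internal copy and foo's internal copy of the "same" message:
// distinct Go types generated in different internal/ packages.
type barFooMsg struct {
	Sender string `json:"sender"`
	Amount uint64 `json:"amount"`
}

type fooMsg struct {
	Sender string `json:"sender"`
	Amount uint64 `json:"amount"`
}

// routeConvert sketches the naive conversion the router would perform:
// marshal the caller's struct, then unmarshal into the receiver's.
func routeConvert(in any, out any) error {
	raw, err := json.Marshal(in)
	if err != nil {
		return err
	}
	return json.Unmarshal(raw, out)
}

func main() {
	var dst fooMsg
	if err := routeConvert(barFooMsg{Sender: "alice", Amount: 7}, &dst); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", dst)
}
```

The round trip is correct but costs an allocation and a full encode/decode per inter-module call, which motivates the zero-copy alternative below.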
This would break down if we needed to expose protobuf types in `Keeper` interfaces because the whole point is to try to keep these types `internal/` so that we don't end up with all the import version incompatibilities we've described above. However, because of the issue with minor version incompatibilities and the need for [unknown field filtering](/sdk/v0.50/build/architecture/adr-020-protobuf-transaction-encoding#unknown-field-filtering), sticking with the `Keeper` paradigm instead of ADR 033 may be unviable to begin with.

A more performant solution (that could maybe be adapted to work with `Keeper` interfaces) would be to only expose getters and setters for generated types and internally store data in memory buffers which could be passed from one implementation to another in a zero-copy way.

For example, imagine this protobuf API with only getters and setters is exposed for `MsgSend`:

```go
type MsgSend interface {
	proto.Message
	GetFromAddress() string
	GetToAddress() string
	GetAmount() []v1beta1.Coin
	SetFromAddress(string)
	SetToAddress(string)
	SetAmount([]v1beta1.Coin)
}

func NewMsgSend() MsgSend {
	return &msgSendImpl{memoryBuffers: ...}
}
```

Under the hood, `MsgSend` could be implemented based on some raw memory buffer in the same way that [Cap'n Proto](https://capnproto.org) and [FlatBuffers](https://google.github.io/flatbuffers/) are, so that we could convert between one version of `MsgSend` and another without serialization (i.e. zero-copy).

This approach would have the added benefits of allowing zero-copy message passing to modules written in other languages such as Rust and accessed through a VM or FFI. It could also make unknown field filtering in inter-module communication simpler if we require that all new fields are added in sequential order, ex. just checking that no field `> 5` is set.
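A toy version of the idea can be sketched with the stdlib, using a map keyed by field number as a stand-in for the raw memory buffer (real zero-copy implementations would use flat byte buffers; all names here are illustrative):

```go
package main

import "fmt"

// buffer stands in for a raw memory buffer, keyed by proto field number.
type buffer map[int]any

// MsgSend exposes only getters and setters; the data itself lives in
// the shared backing store.
type MsgSend struct{ buf buffer }

func NewMsgSend() MsgSend { return MsgSend{buf: buffer{}} }

func (m MsgSend) GetFromAddress() string  { s, _ := m.buf[1].(string); return s }
func (m MsgSend) SetFromAddress(s string) { m.buf[1] = s }
func (m MsgSend) GetToAddress() string    { s, _ := m.buf[2].(string); return s }
func (m MsgSend) SetToAddress(s string)   { m.buf[2] = s }

// maxSetField supports the simple unknown field check mentioned above:
// a receiver on an older revision can reject any message whose highest
// set field number exceeds what it understands.
func (m MsgSend) maxSetField() int {
	max := 0
	for k := range m.buf {
		if k > max {
			max = k
		}
	}
	return max
}

func main() {
	m := NewMsgSend()
	m.SetFromAddress("alice")
	m.SetToAddress("bob")

	// "Zero-copy" handoff: another wrapper over the same backing store,
	// no marshal/unmarshal round trip required.
	other := MsgSend{buf: m.buf}
	fmt.Println(other.GetFromAddress(), other.maxSetField())
}
```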
Also, we wouldn't have any issues with state machine breaking code on generated types because all the generated code used in the state machine would actually live in the state machine module itself. Depending on how interface types and protobuf `Any`s are used in other languages, however, it may still be desirable to take the handler approach described in approach A. Either way, types implementing interfaces would still need to be registered with an `InterfaceRegistry` as they are now because there would be no way to retrieve them via the global registry.

In order to simplify access to other modules using ADR 033, a public API module (maybe even one [remotely generated by Buf](https://docs.buf.build/bsr/remote-generation/go)) could be used by client modules instead of requiring them to generate all client types internally.

The big downsides of this approach are that it requires big changes to how people use protobuf types and would be a substantial rewrite of the protobuf code generator. This new generated code, however, could still be made compatible with the [`google.golang.org/protobuf/reflect/protoreflect`](https://pkg.go.dev/google.golang.org/protobuf/reflect/protoreflect) API in order to work with all standard golang protobuf tooling.

It is possible that the naive approach of marshaling/unmarshaling in the ADR 033 router is an acceptable intermediate solution if the changes to the code generator are seen as too complex. However, since all modules would likely need to migrate to ADR 033 anyway with this approach, it might be better to do this all at once.

### Approach C) Don't address these issues

If the above solutions are seen as too complex, we can also decide not to do anything explicit to enable better module version compatibility and to break circular dependencies. In this case, when developers are confronted with the issues described above they can require dependencies to update in sync (what we do now) or attempt some ad-hoc potentially hacky solution.
One approach is to ditch go semantic import versioning (SIV) altogether. Some people have commented that go's SIV (i.e. changing the import path to `foo/v2`, `foo/v3`, etc.) is too restrictive and that it should be optional. The golang maintainers disagree and only officially support semantic import versioning. We could, however, take the contrarian perspective and get more flexibility by using 0.x-based versioning basically forever.

Module version compatibility could then be achieved using go.mod replace directives to pin dependencies to specific compatible 0.x versions. For instance, if we knew `foo` 0.2 and 0.3 were both compatible with `bar` 0.3 and 0.4, we could use replace directives in our go.mod to stick to the versions of `foo` and `bar` we want. This would work as long as the authors of `foo` and `bar` avoid incompatible breaking changes between these modules.

Or, if developers choose to use semantic import versioning, they can attempt the naive solution described above and would also need to use special tags and replace directives to make sure that modules are pinned to the correct versions.

Note, however, that all of these ad-hoc approaches would be vulnerable to the minor version compatibility issues described above unless [unknown field filtering](/sdk/v0.50/build/architecture/adr-020-protobuf-transaction-encoding#unknown-field-filtering) is properly addressed.

### Approach D) Avoid protobuf generated code in public APIs

An alternative approach would be to avoid protobuf generated code in public module APIs. This would help avoid the discrepancy between state machine versions and client API versions at the module to module boundaries. It would mean that we wouldn't do inter-module message passing based on ADR 033, but rather stick to the existing keeper approach and take it one step further by avoiding any protobuf generated code in the keeper interface methods.
Using this approach, our `foo.Keeper.DoSomething` method wouldn't have the generated `MsgDoSomething` struct (which comes from the protobuf API), but instead positional parameters. Then in order for `foo/v2` to support the `foo/v1` keeper it would simply need to implement both the v1 and v2 keeper APIs. The `DoSomething` method in v2 could have the additional `condition` parameter, but this wouldn't be present in v1 at all so there would be no danger of a client accidentally setting this when it isn't available.

So this approach would avoid the challenge around minor version incompatibilities because the existing module keeper API would not get new fields when they are added to protobuf files.

Taking this approach, however, would likely require making all protobuf generated code internal in order to prevent it from leaking into the keeper API. This means we would still need to modify the protobuf code generator to not register `internal/` code with the global registry, and we would still need to manually register protobuf `FileDescriptor`s (this is probably true in all scenarios). It may, however, be possible to avoid needing to refactor interface methods on generated types to handlers.

Also, this approach doesn't address what would be done in scenarios where modules still want to use the message router. Either way, we probably still want a way to safely pass messages from one module to another even if it's just for use cases like `x/gov`, `x/authz`, CosmWasm, etc. That would still require most of the things outlined in approach (B), although we could advise modules to prefer keepers for communicating with other modules.

The biggest downside of this approach is probably that it requires a strict refactoring of keeper interfaces to avoid generated code leaking into the API. This may result in cases where we need to duplicate types that are already defined in proto files and then write methods for converting between the golang and protobuf version.
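The v1/v2 keeper layering described above might look like the following sketch. Since Go has no method overloading, a v2 implementation would serve the v1 surface through a small adapter (all names hypothetical):

```go
package main

import "fmt"

// Sketch of approach (D): keeper interfaces take positional parameters
// rather than generated structs.
type FooKeeperV1 interface {
	DoSomething(sender string, amount uint64) error
}

type FooKeeperV2 interface {
	DoSomething(sender string, amount uint64, condition string) error
}

type fooV2 struct{}

// The v2 method carries the new condition parameter and the new
// non-zero-amount validation rule.
func (fooV2) DoSomething(sender string, amount uint64, condition string) error {
	if amount == 0 {
		return fmt.Errorf("amount must be non-zero")
	}
	return nil
}

// v1Adapter exposes the v1 keeper API on top of the v2 implementation;
// v1 callers can never set condition because it isn't in their API.
type v1Adapter struct{ k fooV2 }

func (a v1Adapter) DoSomething(sender string, amount uint64) error {
	return a.k.DoSomething(sender, amount, "")
}

func main() {
	var _ FooKeeperV2 = fooV2{}
	var v1 FooKeeperV1 = v1Adapter{k: fooV2{}}
	fmt.Println(v1.DoSomething("alice", 3)) // <nil>
}
```

The adapter type is exactly the kind of hand-written boilerplate this approach entails.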
This may end up in a lot of unnecessary boilerplate and that may discourage modules from actually adopting it and achieving effective version compatibility. Approaches (A) and (B), although heavy handed initially, aim to provide a system which once adopted more or less gives the developer version compatibility for free with minimal boilerplate. Approach (D) may not be able to provide such a straightforward system since it requires a golang API to be defined alongside a protobuf API in a way that requires duplication and differing sets of design principles (protobuf APIs encourage additive changes while golang APIs would forbid it).

Other downsides to this approach are:

* no clear roadmap to supporting modules in other languages like Rust
* doesn't get us any closer to proper object capability security (one of the goals of ADR 033)
* ADR 033 needs to be done properly anyway for the set of use cases which do need it

## Decision

The latest **DRAFT** proposal is:

1. we are aligned on adopting [ADR 033](/sdk/v0.50/build/architecture/adr-033-protobuf-inter-module-comm) not just as an addition to the framework, but as a core replacement to the keeper paradigm entirely.
2. the ADR 033 inter-module router will accommodate any variation of approach (A) or (B) given the following rules:
   a. if the client type is the same as the server type then pass it directly through,
   b. if both client and server use the zero-copy generated code wrappers (which still need to be defined), then pass the memory buffers from one wrapper to the other, or
   c. marshal/unmarshal types between client and server.

This approach will allow for both maximal correctness and enable a clear path to supporting modules in other languages, possibly executed within a WASM VM.
### Minor API Revisions

To declare minor API revisions of proto files, we propose the following guidelines (which were already documented in [cosmos.app.v1alpha module options](https://github.com/cosmos/cosmos-sdk/blob/v0.50.10/proto/cosmos/app/v1alpha1/module.proto)):

* proto packages which are revised from their initial version (considered revision `0`) should include a `package` comment in some .proto file containing the text `Revision N` at the start of a comment line where `N` is the current revision number.
* all fields, messages, etc. added in a version beyond the initial revision should add a comment at the start of a comment line of the form `Since: Revision N` where `N` is the non-zero revision it was added.

It is advised that there is a 1:1 correspondence between a state machine module and versioned set of proto files which are versioned either as a buf module, a go API module, or both. If the buf schema registry is used, the version of this buf module should always be `1.N` where `N` corresponds to the package revision. Patch releases should be used when only documentation comments are updated.

It is okay to include proto packages named `v2`, `v3`, etc. in this same `1.N` versioned buf module (ex. `cosmos.bank.v2`) as long as all these proto packages consist of a single API intended to be served by a single SDK module.
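Tooling consuming this convention would recover an element's revision from its leading comment. A minimal sketch of that parsing step (the helper name and treatment of unmarked elements as revision 0 are assumptions, not part of the proposal text):

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// sinceRe matches a comment line of the form "Since: Revision N",
// per the convention described above.
var sinceRe = regexp.MustCompile(`(?m)^\s*Since: Revision (\d+)\s*$`)

// elementRevision returns the revision at which a proto element was
// added; elements without the marker are treated as revision 0.
func elementRevision(comment string) uint64 {
	m := sinceRe.FindStringSubmatch(comment)
	if m == nil {
		return 0
	}
	n, _ := strconv.ParseUint(m[1], 10, 64)
	return n
}

func main() {
	c := "condition is an optional condition on doing the thing.\n\nSince: Revision 1"
	fmt.Println(elementRevision(c))                      // 1
	fmt.Println(elementRevision("sender is the sender")) // 0
}
```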
### Introspecting Minor API Revisions

In order for modules to introspect the minor API revision of peer modules, we propose adding the following method to `cosmossdk.io/core/intermodule.Client`:

```go
ServiceRevision(ctx context.Context, serviceName string) uint64
```

Modules could call this using the service name statically generated by the go grpc code generator:

```go
intermoduleClient.ServiceRevision(ctx, bankv1beta1.Msg_ServiceDesc.ServiceName)
```

In the future, we may decide to extend the code generator used for protobuf services to add a field to client types which does this check more concisely, ex:

```go
package bankv1beta1

type MsgClient interface {
	Send(context.Context, MsgSend) (MsgSendResponse, error)
	ServiceRevision(context.Context) uint64
}
```

### Unknown Field Filtering

To correctly perform [unknown field filtering](/sdk/v0.50/build/architecture/adr-020-protobuf-transaction-encoding#unknown-field-filtering), the inter-module router can do one of the following:

* use the `protoreflect` API for messages which support that
* for gogo proto messages, marshal and use the existing `codec/unknownproto` code
* for zero-copy messages, do a simple check on the highest set field number (assuming we can require that fields are added consecutively in increasing order)

### `FileDescriptor` Registration

Because a single go binary may contain different versions of the same generated protobuf code, we cannot rely on the global protobuf registry to contain the correct `FileDescriptor`s. Because `appconfig` module configuration is itself written in protobuf, we would like to load the `FileDescriptor`s for a module before loading a module itself.
So we will provide ways to register `FileDescriptor`s at module registration time before instantiation. We propose the following `cosmossdk.io/core/appmodule.Option` constructors for the various cases of how `FileDescriptor`s may be packaged:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package appmodule

// this can be used when we are using google.golang.org/protobuf compatible generated code
// Ex:
//   ProtoFiles(bankv1beta1.File_cosmos_bank_v1beta1_module_proto)
func ProtoFiles(file []protoreflect.FileDescriptor) Option {}

// this can be used when we are using gogo proto generated code.
func GzippedProtoFiles(file [][]byte) Option {}

// this can be used when we are using buf build to generate a pinned file descriptor
func ProtoImage(protoImage []byte) Option {}
```

This approach allows us to support several ways protobuf files might be generated:

* proto files generated internally to a module (use `ProtoFiles`)
* the API module approach with pinned file descriptors (use `ProtoImage`)
* gogo proto (use `GzippedProtoFiles`)

### Module Dependency Declaration

One risk of ADR 033 is that dependencies are called at runtime which are not present in the loaded set of SDK modules. We also want modules to have a way to define a minimum dependency API revision that they require. Therefore, all modules should declare their set of dependencies upfront. These dependencies could be defined when a module is instantiated, but ideally we know what the dependencies are before instantiation and can statically look at an app config and determine whether the set of modules is compatible. For example, if `bar` requires `foo` revision `>= 1`, then we should be able to know this when creating an app config with two versions of `bar` and `foo`. We propose defining these dependencies in the proto options of the module config object itself.
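To make the upfront dependency check concrete, here is a minimal stdlib sketch. The `moduleDep` type and `checkDeps` function are hypothetical stand-ins for information that would really come from the module config's proto options:

```go
package main

import "fmt"

// moduleDep is a hypothetical representation of a dependency declared in a
// module's config proto options: the service it calls and the minimum API
// revision it requires.
type moduleDep struct {
	Service     string
	MinRevision uint64
}

// checkDeps statically validates an app config: every declared dependency
// must be provided by some loaded module at a sufficient revision.
func checkDeps(provided map[string]uint64, deps []moduleDep) error {
	for _, d := range deps {
		rev, ok := provided[d.Service]
		if !ok {
			return fmt.Errorf("missing dependency: %s", d.Service)
		}
		if rev < d.MinRevision {
			return fmt.Errorf("%s revision %d < required %d", d.Service, rev, d.MinRevision)
		}
	}
	return nil
}

func main() {
	// bar requires foo revision >= 1, as in the example above.
	provided := map[string]uint64{"cosmos.foo.v1.Msg": 1}
	fmt.Println(checkDeps(provided, []moduleDep{{Service: "cosmos.foo.v1.Msg", MinRevision: 1}}))
	fmt.Println(checkDeps(provided, []moduleDep{{Service: "cosmos.foo.v1.Msg", MinRevision: 2}}))
}
```

Such a check could run when assembling an app config, before any module is instantiated.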
### Interface Registration

We will also need to define how interface methods are defined on types that are serialized as `google.protobuf.Any`'s. In light of the desire to support modules in other languages, we may want to think of solutions that will accommodate other languages such as plugins described briefly in [ADR 033](/sdk/v0.50/build/architecture/adr-033-protobuf-inter-module-comm#internal-methods).

### Testing

In order to ensure that modules are indeed compatible with multiple versions of their dependencies, we plan to provide specialized unit and integration testing infrastructure that automatically tests multiple versions of dependencies.

#### Unit Testing

Unit tests should be conducted inside SDK modules by mocking their dependencies. In a full ADR 033 scenario, this means that all interaction with other modules is done via the inter-module router, so mocking dependencies means mocking their msg and query server implementations. We will provide both a test runner and fixture to make this streamlined. The key thing the test runner should do to test compatibility is to test all combinations of dependency API revisions. This can be done by taking the file descriptors for the dependencies, parsing their comments to determine the revisions at which various elements were added, and then creating synthetic file descriptors for each revision by subtracting elements that were added later.
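The "subtracting elements" step can be sketched in a few lines; `field` and `fieldsAtRevision` below are hypothetical simplifications of real descriptor manipulation:

```go
package main

import "fmt"

// A sketch of deriving "synthetic" descriptors per revision: starting from
// the latest field set, drop every element whose "Since: Revision N" marker
// is newer than the revision being tested.
type field struct {
	Name  string
	Since uint64 // 0 for fields present in the initial revision
}

func fieldsAtRevision(all []field, rev uint64) []field {
	var out []field
	for _, f := range all {
		if f.Since <= rev {
			out = append(out, f)
		}
	}
	return out
}

func main() {
	msg := []field{{"from_address", 0}, {"to_address", 0}, {"condition", 1}}
	fmt.Println(fieldsAtRevision(msg, 0)) // revision 0: "condition" is subtracted
	fmt.Println(fieldsAtRevision(msg, 1))
}
```

The test runner would build one such synthetic descriptor set per revision and run the module's tests against each.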
Here is a proposed API for the unit test runner and fixture: ```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} package moduletesting import ( "context" "testing" "cosmossdk.io/core/intermodule" "cosmossdk.io/depinject" "google.golang.org/grpc" "google.golang.org/protobuf/proto" "google.golang.org/protobuf/reflect/protodesc" ) type TestFixture interface { context.Context intermodule.Client // for making calls to the module we're testing BeginBlock() EndBlock() } type UnitTestFixture interface { TestFixture grpc.ServiceRegistrar // for registering mock service implementations } type UnitTestConfig struct { ModuleConfig proto.Message // the module's config object DepinjectConfig depinject.Config // optional additional depinject config options DependencyFileDescriptors []protodesc.FileDescriptorProto // optional dependency file descriptors to use instead of the global registry } // Run runs the test function for all combinations of dependency API revisions. func (cfg UnitTestConfig) Run(t *testing.T, f func(t *testing.T, f UnitTestFixture)) { // ... 
}
```

Here is an example for testing bar calling foo which takes advantage of conditional service revisions in the expected mock arguments:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func TestBar(t *testing.T) {
	UnitTestConfig{
		ModuleConfig: &foomodulev1.Module{},
	}.Run(t, func(t *testing.T, f moduletesting.UnitTestFixture) {
		ctrl := gomock.NewController(t)
		mockFooMsgServer := footestutil.NewMockMsgServer(ctrl)
		foov1.RegisterMsgServer(f, mockFooMsgServer)
		barMsgClient := barv1.NewMsgClient(f)
		if f.ServiceRevision(foov1.Msg_ServiceDesc.ServiceName) >= 1 {
			mockFooMsgServer.EXPECT().DoSomething(gomock.Any(), &foov1.MsgDoSomething{
				...,
				Condition: ..., // condition is expected in revision >= 1
			}).Return(&foov1.MsgDoSomethingResponse{}, nil)
		} else {
			mockFooMsgServer.EXPECT().DoSomething(gomock.Any(), &foov1.MsgDoSomething{...}).
				Return(&foov1.MsgDoSomethingResponse{}, nil)
		}
		res, err := barMsgClient.CallFoo(f, &MsgCallFoo{})
		...
	})
}
```

The unit test runner would make sure that no dependency mocks return arguments which are invalid for the service revision being tested, to ensure that modules don't incorrectly depend on functionality not present in a given revision.

#### Integration Testing

An integration test runner and fixture would also be provided which, instead of using mocks, would test actual module dependencies in various combinations. Here is the proposed API:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type IntegrationTestFixture interface {
	TestFixture
}

type IntegrationTestConfig struct {
	ModuleConfig     proto.Message              // the module's config object
	DependencyMatrix map[string][]proto.Message // all the dependent module configs
}

// Run runs the test function for all combinations of dependency modules.
func (cfg IntegrationTestConfig) Run(t *testing.T, f func(t *testing.T, f IntegrationTestFixture)) {
	// ...
}
```

And here is an example with foo and bar:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func TestBarIntegration(t *testing.T) {
	IntegrationTestConfig{
		ModuleConfig: &barmodulev1.Module{},
		DependencyMatrix: map[string][]proto.Message{
			"runtime": []proto.Message{ // test against two versions of runtime
				&runtimev1.Module{},
				&runtimev2.Module{},
			},
			"foo": []proto.Message{ // test against three versions of foo
				&foomodulev1.Module{},
				&foomodulev2.Module{},
				&foomodulev3.Module{},
			},
		},
	}.Run(t, func(t *testing.T, f moduletesting.IntegrationTestFixture) {
		barMsgClient := barv1.NewMsgClient(f)
		res, err := barMsgClient.CallFoo(f, &MsgCallFoo{})
		...
	})
}
```

Unlike unit tests, integration tests actually pull in other module dependencies. So that modules can be written without direct dependencies on other modules, and because golang has no concept of development dependencies, integration tests should be written in separate go modules, ex. `example.com/bar/v2/test`. Because this paradigm uses go semantic versioning, it is possible to build a single go module which imports 3 versions of foo and 2 versions of runtime and can test these all together in the six possible combinations of dependencies.

## Consequences

### Backwards Compatibility

Modules which migrate fully to ADR 033 will not be compatible with existing modules which use the keeper paradigm. As a temporary workaround we may create some wrapper types that emulate the current keeper interface to minimize the migration overhead.
### Positive

* we will be able to deliver interoperable semantically versioned modules which should dramatically increase the ability of the Cosmos SDK ecosystem to iterate on new features
* it will be possible to write Cosmos SDK modules in other languages in the near future

### Negative

* all modules will need to be refactored somewhat dramatically

### Neutral

* the `cosmossdk.io/core/appconfig` framework will play a more central role in terms of how modules are defined; this is likely generally a good thing but does mean additional changes for users wanting to stick to the pre-depinject way of wiring up modules
* `depinject` is somewhat less needed or maybe even obviated because of the full ADR 033 approach. If we adopt the core API proposed in [Link](https://github.com/cosmos/cosmos-sdk/pull/12239), then a module would probably always instantiate itself with a method `ProvideModule(appmodule.Service) (appmodule.AppModule, error)`. There is no complex wiring of keeper dependencies in this scenario and dependency injection may not have as much of a (or any) use case.

## Further Discussions

The decision described above is considered in draft mode and is pending final buy-in from the team and key stakeholders.
Key outstanding discussions if we do adopt that direction are:

* how do module clients introspect dependency module API revisions
* how do modules determine a minor dependency module API revision requirement
* how do modules appropriately test compatibility with different dependency versions
* how to register and resolve interface implementations
* how do modules register their protobuf file descriptors depending on the approach they take to generated code (the API module approach may still be viable as a supported strategy and would need pinned file descriptors)

## References

* [Link](https://github.com/cosmos/cosmos-sdk/discussions/10162)
* [Link](https://github.com/cosmos/cosmos-sdk/discussions/10582)
* [Link](https://github.com/cosmos/cosmos-sdk/discussions/10368)
* [Link](https://github.com/cosmos/cosmos-sdk/pull/11340)
* [Link](https://github.com/cosmos/cosmos-sdk/issues/11899)
* [ADR 020](/sdk/v0.50/build/architecture/adr-020-protobuf-transaction-encoding)
* [ADR 033](/sdk/v0.50/build/architecture/adr-033-protobuf-inter-module-comm)

# ADR 055: ORM

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-055-orm

## Changelog

* 2022-04-27: First draft

## Status

ACCEPTED Implemented

## Abstract

In order to make it easier for developers to build Cosmos SDK modules and for clients to query, index and verify proofs against state data, we have implemented an ORM (object-relational mapping) layer for the Cosmos SDK.

## Context

Historically modules in the Cosmos SDK have always used the key-value store directly and created various handwritten functions for managing key formats as well as constructing secondary indexes. This consumes a significant amount of time when building a module and is error-prone. Because key formats are non-standard, sometimes poorly documented, and subject to change, it is hard for clients to generically index, query and verify merkle proofs against state data.
The first known instance of an "ORM" in the Cosmos ecosystem was in [weave](https://github.com/iov-one/weave/tree/master/orm). A later version was built for [regen-ledger](https://github.com/regen-network/regen-ledger/tree/157181f955823149e1825263a317ad8e16096da4/orm) for use in the group module and later [ported to the SDK](https://github.com/cosmos/cosmos-sdk/tree/35d3312c3be306591fcba39892223f1244c8d108/x/group/internal/orm) just for that purpose.

While these earlier designs made it significantly easier to write state machines, they still required a lot of manual configuration, didn't expose state format directly to clients, and were limited in their support of different types of index keys, composite keys, and range queries.

Discussions about the design continued in [Link](https://github.com/cosmos/cosmos-sdk/discussions/9156) and more sophisticated proofs of concept were created in [Link](https://github.com/allinbits/cosmos-sdk-poc/tree/master/runtime/orm) and [Link](https://github.com/cosmos/cosmos-sdk/pull/10454).

## Decision

These prior efforts culminated in the creation of the Cosmos SDK `orm` go module, which uses protobuf annotations for specifying ORM table definitions. This ORM is based on the new `google.golang.org/protobuf/reflect/protoreflect` API and supports:

* sorted indexes for all simple protobuf types (except `bytes`, `enum`, `float`, `double`) as well as `Timestamp` and `Duration`
* unsorted `bytes` and `enum` indexes
* composite primary and secondary keys
* unique indexes
* auto-incrementing `uint64` primary keys
* complex prefix and range queries
* paginated queries
* complete logical decoding of KV-store data

Almost all the information needed to decode state directly is specified in .proto files. Each table definition specifies an ID which is unique per .proto file, and each index within a table is unique within that table.
Clients then only need to know the name of a module and the prefix ORM data for a specific .proto file within that module in order to decode state data directly. This additional information will be exposed directly through app configs which will be explained in a future ADR related to app wiring. The ORM makes optimizations around storage space by not repeating values in the primary key in the key value when storing primary key records. For example, if the object `{"a":0,"b":1}` has the primary key `a`, it will be stored in the key value store as `Key: '0', Value: {"b":1}` (with more efficient protobuf binary encoding). Also, the generated code from [Link](https://github.com/cosmos/cosmos-proto) does optimizations around the `google.golang.org/protobuf/reflect/protoreflect` API to improve performance. A code generator is included with the ORM which creates type safe wrappers around the ORM's dynamic `Table` implementation and is the recommended way for modules to use the ORM. The ORM tests provide a simplified bank module demonstration which illustrates: * [ORM proto options](https://github.com/cosmos/cosmos-sdk/blob/0d846ae2f0424b2eb640f6679a703b52d407813d/orm/internal/testpb/bank.proto) * [Generated Code](https://github.com/cosmos/cosmos-sdk/blob/0d846ae2f0424b2eb640f6679a703b52d407813d/orm/internal/testpb/bank.cosmos_orm.go) * [Example Usage in a Module Keeper](https://github.com/cosmos/cosmos-sdk/blob/0d846ae2f0424b2eb640f6679a703b52d407813d/orm/model/ormdb/module_test.go) ## Consequences ### Backwards Compatibility State machine code that adopts the ORM will need migrations as the state layout is generally backwards incompatible. These state machines will also need to migrate to [Link](https://github.com/cosmos/cosmos-proto) at least for state data. 
### Positive

* easier to build modules
* easier to add secondary indexes to state
* possible to write a generic indexer for ORM state
* easier to write clients that do state proofs
* possible to automatically write query layers rather than needing to manually implement gRPC queries

### Negative

* worse performance than handwritten keys (for now). See [Further Discussions](#further-discussions) for potential improvements

### Neutral

## Further Discussions

Further discussions will happen within the Cosmos SDK Framework Working Group. Current planned and ongoing work includes:

* automatically generate client-facing query layer
* client-side query libraries that transparently verify light client proofs
* index ORM data to SQL databases
* improve performance by:
  * optimizing existing reflection based code to avoid unnecessary gets when doing deletes & updates of simple tables
  * more sophisticated code generation such as making fast path reflection even faster (avoiding `switch` statements), or even fully generating code that equals handwritten performance

## References

* [Link](https://github.com/iov-one/weave/tree/master/orm)
* [Link](https://github.com/regen-network/regen-ledger/tree/157181f955823149e1825263a317ad8e16096da4/orm)
* [Link](https://github.com/cosmos/cosmos-sdk/tree/35d3312c3be306591fcba39892223f1244c8d108/x/group/internal/orm)
* [Link](https://github.com/cosmos/cosmos-sdk/discussions/9156)
* [Link](https://github.com/allinbits/cosmos-sdk-poc/tree/master/runtime/orm)
* [Link](https://github.com/cosmos/cosmos-sdk/pull/10454)

# ADR 057: App Wiring

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-057-app-wiring

## Changelog

* 2022-05-04: Initial Draft
* 2022-08-19: Updates

## Status

PROPOSED Implemented

## Abstract

In order to make it easier to build Cosmos SDK modules and apps, we propose a new app wiring system based on dependency injection and declarative app configurations to replace the current `app.go` code.

## Context

A number of factors have made the SDK and SDK apps in their current state hard to maintain. A symptom of the current state of complexity is [`simapp/app.go`](https://github.com/cosmos/cosmos-sdk/blob/c3edbb22cab8678c35e21fe0253919996b780c01/simapp/app.go) which contains almost 100 lines of imports and is otherwise over 600 lines of mostly boilerplate code that is generally copied to each new project. (Not to mention the additional boilerplate which gets copied in `simapp/simd`.)

The large amount of boilerplate needed to bootstrap an app has made it hard to release independently versioned go modules for Cosmos SDK modules as described in [ADR 053: Go Module Refactoring](/sdk/latest/reference/architecture/adr-053-go-module-refactoring). In addition to being very verbose and repetitive, `app.go` also exposes a large surface area for breaking changes, as most modules instantiate themselves with positional parameters which forces breaking changes anytime a new parameter (even an optional one) is needed.
Several attempts were made to improve the current situation, including [ADR 033: Internal-Module Communication](/sdk/latest/reference/architecture/adr-033-protobuf-inter-module-comm) and [a proof-of-concept of a new SDK](https://github.com/allinbits/cosmos-sdk-poc). The discussions around these designs led to the current solution described here.

## Decision

In order to improve the current situation, a new "app wiring" paradigm has been designed to replace `app.go` which involves:

* declarative configuration of the modules in an app which can be serialized to JSON or YAML
* a dependency-injection (DI) framework for instantiating apps from that configuration

### Dependency Injection

When examining the code in `app.go`, most of the code simply instantiates modules with dependencies provided either by the framework (such as store keys) or by other modules (such as keepers). It is generally pretty obvious given the context what the correct dependencies actually should be, so dependency injection is an obvious solution. Rather than making developers manually resolve dependencies, a module will tell the DI container what dependency it needs and the container will figure out how to provide it.

We explored several existing DI solutions in golang and felt that the reflection-based approach in [uber/dig](https://github.com/uber-go/dig) was closest to what we needed but not quite there.
Assessing what we needed for the SDK, we designed and built the Cosmos SDK [depinject module](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/depinject), which has the following features: * dependency resolution and provision through functional constructors, ex: `func(need SomeDep) (AnotherDep, error)` * dependency injection `In` and `Out` structs which support `optional` dependencies * grouped-dependencies (many-per-container) through the `ManyPerContainerType` tag interface * module-scoped dependencies via `ModuleKey`s (where each module gets a unique dependency) * one-per-module dependencies through the `OnePerModuleType` tag interface * sophisticated debugging information and container visualization via GraphViz Here are some examples of how these would be used in an SDK module: * `StoreKey` could be a module-scoped dependency which is unique per module * a module's `AppModule` instance (or the equivalent) could be a `OnePerModuleType` * CLI commands could be provided with `ManyPerContainerType`s Note that even though dependency resolution is dynamic and based on reflection, which could be considered a pitfall of this approach, the entire dependency graph should be resolved immediately on app startup and only gets resolved once (except in the case of dynamic config reloading which is a separate topic). This means that if there are any errors in the dependency graph, they will get reported immediately on startup so this approach is only slightly worse than fully static resolution in terms of error reporting and much better in terms of code complexity. ### Declarative App Config In order to compose modules into an app, a declarative app configuration will be used. 
This configuration is based off of protobuf and its basic structure is very simple: ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} package cosmos.app.v1; message Config { repeated ModuleConfig modules = 1; } message ModuleConfig { string name = 1; google.protobuf.Any config = 2; } ``` (See also [Link](https://github.com/cosmos/cosmos-sdk/blob/6e18f582bf69e3926a1e22a6de3c35ea327aadce/proto/cosmos/app/v1alpha1/config.proto)) The configuration for every module is itself a protobuf message and modules will be identified and loaded based on the protobuf type URL of their config object (ex. `cosmos.bank.module.v1.Module`). Modules are given a unique short `name` to share resources across different versions of the same module which might have a different protobuf package versions (ex. `cosmos.bank.module.v2.Module`). All module config objects should define the `cosmos.app.v1alpha1.module` descriptor option which will provide additional useful metadata for the framework and which can also be indexed in module registries. An example app config in YAML might look like this: ```yaml expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} modules: - name: baseapp config: "@type": cosmos.baseapp.module.v1.Module begin_blockers: [staking, auth, bank] end_blockers: [bank, auth, staking] init_genesis: [bank, auth, staking] - name: auth config: "@type": cosmos.auth.module.v1.Module bech32_prefix: "foo" - name: bank config: "@type": cosmos.bank.module.v1.Module - name: staking config: "@type": cosmos.staking.module.v1.Module ``` In the above example, there is a hypothetical `baseapp` module which contains the information around ordering of begin blockers, end blockers, and init genesis. 
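Because each module's config is an `Any` identified by its `@type` URL, a config loader must first discover the required type URLs from the raw document before any proto decoding can happen. Here is a minimal stdlib sketch of that first pass over the JSON form of a config like the one above; the `rawConfig` type and `moduleTypeURLs` function are hypothetical, not the SDK's loader:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// rawConfig mirrors just enough of cosmos.app Config to walk module entries
// without interpreting their typed contents.
type rawConfig struct {
	Modules []struct {
		Name   string                     `json:"name"`
		Config map[string]json.RawMessage `json:"config"`
	} `json:"modules"`
}

// moduleTypeURLs collects the "@type" URL of every module config without
// doing proto JSON decoding.
func moduleTypeURLs(cfg []byte) ([]string, error) {
	var raw rawConfig
	if err := json.Unmarshal(cfg, &raw); err != nil {
		return nil, err
	}
	var urls []string
	for _, m := range raw.Modules {
		var url string
		if err := json.Unmarshal(m.Config["@type"], &url); err != nil {
			return nil, fmt.Errorf("module %q: %w", m.Name, err)
		}
		urls = append(urls, url)
	}
	return urls, nil
}

func main() {
	cfg := []byte(`{"modules":[
	  {"name":"auth","config":{"@type":"cosmos.auth.module.v1.Module","bech32_prefix":"foo"}},
	  {"name":"bank","config":{"@type":"cosmos.bank.module.v1.Module"}}]}`)
	urls, err := moduleTypeURLs(cfg)
	fmt.Println(urls, err)
}
```

The collected URLs identify which modules must contribute file descriptors before the config can be decoded as proto JSON.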
Rather than lifting these concerns up to the module config layer, they are themselves handled by a module which could allow a convenient way of swapping out different versions of baseapp (for instance to target different versions of tendermint), without needing to change the rest of the config. The `baseapp` module would then provide to the server framework (which sort of sits outside the ABCI app) an instance of `abci.Application`. In this model, an app is *modules all the way down* and the dependency injection/app config layer is very much protocol-agnostic and can adapt to even major breaking changes at the protocol layer. ### Module & Protobuf Registration In order for the two components of dependency injection and declarative configuration to work together as described, we need a way for modules to actually register themselves and provide dependencies to the container. One additional complexity that needs to be handled at this layer is protobuf registry initialization. Recall that in both the current SDK `codec` and the proposed [ADR 054: Protobuf Semver Compatible Codegen](https://github.com/cosmos/cosmos-sdk/pull/11802), protobuf types need to be explicitly registered. Given that the app config itself is based on protobuf and uses protobuf `Any` types, protobuf registration needs to happen before the app config itself can be decoded. Because we don't know which protobuf `Any` types will be needed a priori and modules themselves define those types, we need to decode the app config in separate phases: 1. parse app config JSON/YAML as raw JSON and collect required module type URLs (without doing proto JSON decoding) 2. build a [protobuf type registry](https://pkg.go.dev/google.golang.org/protobuf@v1.28.0/reflect/protoregistry) based on file descriptors and types provided by each required module 3. 
decode the app config as proto JSON using the protobuf type registry.

Because in [ADR 054: Protobuf Semver Compatible Codegen](https://github.com/cosmos/cosmos-sdk/pull/11802), each module might use `internal` generated code which is not registered with the global protobuf registry, this code should provide an alternate way to register protobuf types with a type registry. In the same way that `.pb.go` files currently have a `var File_foo_proto protoreflect.FileDescriptor` for the file `foo.proto`, generated code should have a new member `var Types_foo_proto TypeInfo` where `TypeInfo` is an interface or struct with all the necessary info to register both the protobuf generated types and file descriptor.

So a module must provide dependency injection providers and protobuf types, and takes as input its module config object which uniquely identifies the module based on its type URL.

With this in mind, we define a global module registry which allows module implementations to register themselves with the following API:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Register registers a module with the provided type name (ex. cosmos.bank.module.v1.Module)
// and the provided options.
func Register(configTypeName protoreflect.FullName, option ...Option) { ... }

type Option { /* private methods */ }

// Provide registers dependency injection provider functions which work with the
// cosmos-sdk container module. These functions can also accept an additional
// parameter for the module's config object.
func Provide(providers ...interface{}) Option { ... }

// Types registers protobuf TypeInfo's with the protobuf registry.
func Types(types ...TypeInfo) Option { ...
}
```

Ex:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func init() {
	appmodule.Register("cosmos.bank.module.v1.Module",
		appmodule.Types(
			types.Types_tx_proto,
			types.Types_query_proto,
			types.Types_types_proto,
		),
		appmodule.Provide(
			ProvideBankModule,
		),
	)
}

type Inputs struct {
	container.In

	AuthKeeper auth.Keeper
	DB         ormdb.ModuleDB
}

type Outputs struct {
	Keeper    bank.Keeper
	AppModule appmodule.AppModule
}

func ProvideBankModule(config *bankmodulev1.Module, Inputs) (Outputs, error) { ... }
```

Note that in this model, a module configuration object *cannot* register different dependency providers at runtime based on the configuration. This is intentional because it allows us to know globally which modules provide which dependencies, and it will also allow us to do code generation of the whole app initialization. This can help us figure out issues with missing dependencies in an app config if the needed modules are loaded at runtime. In cases where required modules are not loaded at runtime, it may be possible to guide users to the correct module through a global Cosmos SDK module registry.

The `*appmodule.Handler` type referenced above is a replacement for the legacy `AppModule` framework, and is described in [ADR 063: Core Module API](/sdk/latest/reference/architecture/adr-063-core-module-api).

### New `app.go`

With this setup, `app.go` might now look something like this:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package main

import (
	// Each go package which registers a module must be imported just for side-effects
	// so that module implementations are registered.
	_ "github.com/cosmos/cosmos-sdk/x/auth/module"
	_ "github.com/cosmos/cosmos-sdk/x/bank/module"
	_ "github.com/cosmos/cosmos-sdk/x/staking/module"

	"github.com/cosmos/cosmos-sdk/core/app"
)

//go:embed app.yaml
var appConfigYAML []byte

func main() {
	app.Run(app.LoadYAML(appConfigYAML))
}
```

### Application to existing SDK modules

So far we have described a system which is largely agnostic to the specifics of the SDK such as store keys, `AppModule`, `BaseApp`, etc. Improvements to these parts of the framework that integrate with the general app wiring framework defined here are described in [ADR 063: Core Module API](/sdk/latest/reference/architecture/adr-063-core-module-api).

### Registration of Inter-Module Hooks

Some modules define a hooks interface (ex. `StakingHooks`) which allows one module to call back into another module when certain events happen. With the app wiring framework, these hooks interfaces can be defined as `OnePerModuleType`s, and the module which consumes these hooks can collect them as a map of module name to hook type (ex. `map[string]FooHooks`). Ex:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func init() {
	appmodule.Register(
		&foomodulev1.Module{},
		appmodule.Invoke(InvokeSetFooHooks),
		...
	)
}

func InvokeSetFooHooks(
	keeper *keeper.Keeper,
	fooHooks map[string]FooHooks,
) error {
	keys := maps.Keys(fooHooks)
	sort.Strings(keys)
	for _, k := range keys {
		keeper.AddFooHooks(fooHooks[k])
	}
	return nil
}
```

Optionally, the module consuming hooks can allow apps to define an order for calling these hooks based on module name in its config object.

An alternative way of registering hooks via reflection was considered, where all keeper types are inspected to see if they implement the hook interface by the modules exposing hooks.
This has the downsides of:

* needing to expose all the keepers of all modules to the module providing hooks,
* not allowing for encapsulating hooks on a different type which doesn't expose all keeper methods,
* making it harder to know statically which modules expose hooks or are checking for them.

With the approach proposed here, hooks registration will be obviously observable in `app.go` if `depinject` codegen (described below) is used.

### Code Generation

The `depinject` framework will optionally allow the app configuration and dependency injection wiring to be code generated. This will allow:

* dependency injection wiring to be inspected as regular go code just like the existing `app.go`,
* dependency injection to be opt-in, with manual wiring 100% still possible.

Code generation requires that all providers and invokers and their parameters are exported and in non-internal packages.

### Module Semantic Versioning

When we start creating semantically versioned SDK modules that are in standalone go modules, a state machine breaking change to a module should be handled as follows:

* the semantic major version should be incremented, and
* a new semantically versioned module config protobuf type should be created.

For instance, if we have the SDK module for bank in the go module `github.com/cosmos/cosmos-sdk/x/bank` with the module config type `cosmos.bank.module.v1.Module`, and we want to make a state machine breaking change to the module, we would:

* create a new go module `github.com/cosmos/cosmos-sdk/x/bank/v2`,
* with the module config protobuf type `cosmos.bank.module.v2.Module`.

This *does not* mean that we need to increment the protobuf API version for bank. Both modules can support `cosmos.bank.v1`, but `github.com/cosmos/cosmos-sdk/x/bank/v2` will be a separate go module with a separate module config type. This practice will eventually allow us to use appconfig to load new versions of a module via a configuration change.
Effectively, there should be a 1:1 correspondence between a semantically versioned go module and a versioned module config protobuf type, and major version bumps should occur whenever state machine breaking changes are made to a module.

NOTE: SDK modules that are standalone go modules *should not* adopt semantic versioning until the concerns described in [ADR 054: Module Semantic Versioning](/sdk/latest/reference/architecture/adr-054-semver-compatible-modules) are addressed. The short-term solution for this issue was left somewhat unresolved. However, the easiest tactic is likely to use a standalone API go module and follow the guidelines described in this comment: [Link](https://github.com/cosmos/cosmos-sdk/pull/11802#issuecomment-1406815181). For the time being, it is recommended that Cosmos SDK modules continue to follow tried and true [0-based versioning](https://0ver.org) until an officially recommended solution is provided. This section of the ADR will be updated when that happens; for now, it should be considered a design recommendation for future adoption of semantic versioning.

## Consequences

### Backwards Compatibility

Modules which work with the new app wiring system do not need to drop their existing `AppModule` and `NewKeeper` registration paradigms. These two methods can live side-by-side for as long as is needed.

### Positive

* wiring up new apps will be simpler, more succinct and less error-prone
* it will be easier to develop and test standalone SDK modules without needing to replicate all of simapp
* it may be possible to dynamically load modules and upgrade chains without needing to do a coordinated stop and binary upgrade using this mechanism
* easier plugin integration
* dependency injection framework provides more automated reasoning about dependencies in the project, with a graph visualization.
### Negative

* it may be confusing when a dependency is missing, although error messages, the GraphViz visualization, and global module registration may help with that

### Neutral

* it will require work and education

## Further Discussions

The protobuf type registration system described in this ADR has not been implemented and may need to be reconsidered in light of code generation. It may be better to do this type registration with a DI provider.

## References

* [Link](https://github.com/cosmos/cosmos-sdk/blob/c3edbb22cab8678c35e21fe0253919996b780c01/simapp/app.go)
* [Link](https://github.com/allinbits/cosmos-sdk-poc)
* [Link](https://github.com/uber-go/dig)
* [Link](https://github.com/google/wire)
* [Link](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/container)
* [Link](https://github.com/cosmos/cosmos-sdk/pull/11802)
* [ADR 063: Core Module API](/sdk/latest/reference/architecture/adr-063-core-module-api)

# ADR 058: Auto-Generated CLI

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-058-auto-generated-cli

## Changelog

* 2022-05-04: Initial Draft

## Status

ACCEPTED Partially Implemented

## Abstract

In order to make it easier for developers to write Cosmos SDK modules, we provide infrastructure which automatically generates CLI commands based on protobuf definitions.

## Context

Current Cosmos SDK modules generally implement a CLI command for every transaction and every query supported by the module. These are handwritten for each command and essentially amount to providing some CLI flags or positional arguments for specific fields in protobuf messages. In order to make sure CLI commands are correctly implemented as well as to make sure that the application works in end-to-end scenarios, we do integration tests using CLI commands. While these tests are valuable on some level, they can be hard to write and maintain, and run slowly.
[Some teams have contemplated](https://github.com/regen-network/regen-ledger/issues/1041) moving away from CLI-style integration tests (which are really end-to-end tests) towards narrower integration tests which exercise `MsgClient` and `QueryClient` directly. This might involve replacing the current end-to-end CLI tests with unit tests, as there still needs to be some way to test these CLI commands for full quality assurance.

## Decision

To make module development simpler, we provide infrastructure - in the new [`client/v2`](https://github.com/cosmos/cosmos-sdk/tree/main/client/v2) go module - for automatically generating CLI commands based on protobuf definitions to either replace or complement handwritten CLI commands. This will mean that when developing a module, it will be possible to skip both writing and testing CLI commands as that can all be taken care of by the framework.

The basic design for automatically generating CLI commands is to:

* create one CLI command for each `rpc` method in a protobuf `Query` or `Msg` service
* create a CLI flag for each field in the `rpc` request type
* for `query` commands call gRPC and print the response as protobuf JSON or YAML (via the `-o`/`--output` flag)
* for `tx` commands, create a transaction and apply common transaction flags

In order to make the auto-generated CLI as easy to use as (or easier than) handwritten CLI, we need to do custom handling of specific protobuf field types so that the input format is easy for humans:

* `Coin`, `Coins`, `DecCoin`, and `DecCoins` should be input using the existing format (i.e. `1000uatom`)
* it should be possible to specify an address using either the bech32 address string or a named key in the keyring
* `Timestamp` and `Duration` should accept strings like `2001-01-01T00:00:00Z` and `1h3m` respectively
* pagination should be handled with flags like `--page-limit`, `--page-offset`, etc.
* it should be possible to customize any other protobuf type either via its message name or a `cosmos_proto.scalar` annotation

At a basic level it should be possible to generate a command for a single `rpc` method as well as all the commands for a whole protobuf `service` definition. It should be possible to mix and match auto-generated and handwritten commands.

## Consequences

### Backwards Compatibility

Existing modules can mix and match auto-generated and handwritten CLI commands, so it is up to them as to whether they make breaking changes by replacing handwritten commands with slightly different auto-generated ones. For now the SDK will maintain the existing set of CLI commands for backwards compatibility but new commands will use this functionality.

### Positive

* module developers will not need to write CLI commands
* module developers will not need to test CLI commands
* [lens](https://github.com/strangelove-ventures/lens) may benefit from this

### Negative

### Neutral

## Further Discussions

We would like to be able to customize:

* short and long usage strings for commands
* aliases for flags (ex. `-a` for `--amount`)
* which fields are positional parameters rather than flags

It is an [open discussion](https://github.com/cosmos/cosmos-sdk/pull/11725#issuecomment-1108676129) as to whether these customization options should live in:

* the .proto files themselves,
* separate config files (ex. YAML), or
* directly in code

Providing the options in .proto files would allow a dynamic client to automatically generate CLI commands on the fly. However, that may pollute the .proto files themselves with information that is only relevant for a small subset of users.
## References

* [Link](https://github.com/regen-network/regen-ledger/issues/1041)
* [Link](https://github.com/cosmos/cosmos-sdk/tree/main/client/v2)
* [Link](https://github.com/cosmos/cosmos-sdk/pull/11725#issuecomment-1108676129)

# ADR 059: Test Scopes

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-059-test-scopes

## Changelog

* 2022-08-02: Initial Draft
* 2023-03-02: Add precision for integration tests
* 2023-03-23: Add precision for E2E tests

## Status

PROPOSED Partially Implemented

## Abstract

Recent work in the SDK aimed at breaking apart the monolithic root go module has highlighted shortcomings and inconsistencies in our testing paradigm. This ADR clarifies a common language for talking about test scopes and proposes an ideal state of tests at each scope.

## Context

[ADR-053: Go Module Refactoring](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-053-go-module-refactoring.md) expresses our desire for an SDK composed of many independently versioned Go modules, and [ADR-057: App Wiring](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-057-app-wiring.md) offers a methodology for breaking apart inter-module dependencies through the use of dependency injection. As described in [EPIC: Separate all SDK modules into standalone go modules](https://github.com/cosmos/cosmos-sdk/issues/11899), module dependencies are particularly complected in the test phase, where simapp is used as the key test fixture in setting up and running tests. It is clear that the successful completion of Phases 3 and 4 in that EPIC require the resolution of this dependency problem.
In [EPIC: Unit Testing of Modules via Mocks](https://github.com/cosmos/cosmos-sdk/issues/12398) it was thought this Gordian knot could be unwound by mocking all dependencies in the test phase for each module, but seeing how these refactors were complete rewrites of test suites, discussions began around the fate of the existing integration tests. One perspective is that they ought to be thrown out, another is that integration tests have some utility of their own and a place in the SDK's testing story.

Another point of confusion has been the current state of CLI test suites, [x/auth](https://github.com/cosmos/cosmos-sdk/blob/0f7e56c6f9102cda0ca9aba5b6f091dbca976b5a/x/auth/client/testutil/suite.go#L44-L49) for example. In code these are called integration tests, but in reality they function as end to end tests by starting up a tendermint node and full application. [EPIC: Rewrite and simplify CLI tests](https://github.com/cosmos/cosmos-sdk/issues/12696) identifies the ideal state of CLI tests using mocks, but does not address the place end to end tests may have in the SDK.

From here we identify three scopes of testing, **unit**, **integration**, and **e2e** (end to end), and seek to define the boundaries of each, their shortcomings (real and imposed), and their ideal state in the SDK.

### Unit tests

Unit tests exercise the code contained in a single module (e.g. `/x/bank`) or package (e.g. `/client`) in isolation from the rest of the code base. Within this we identify two levels of unit tests, *illustrative* and *journey*. The definitions below lean heavily on [The BDD Books - Formulation](https://leanpub.com/bddbooks-formulation) section 1.3.

*Illustrative* tests exercise an atomic part of a module in isolation - in this case we might do fixture setup/mocking of other parts of the module.

Tests which exercise a whole module's function with dependencies mocked are *journeys*. These are almost like integration tests in that they exercise many things together but still use mocks.
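A minimal sketch of an illustrative mocked unit test follows; the `AccountGetter` interface and `CanSend` rule are hypothetical stand-ins for a real keeper dependency and the code under test, not actual SDK types:

```go
package main

import "fmt"

// AccountGetter is a hypothetical stand-in for a keeper dependency
// (e.g. x/auth's AccountKeeper as seen from x/bank).
type AccountGetter interface {
	HasAccount(addr string) bool
}

// mockAccounts satisfies AccountGetter with canned data; no module wiring needed.
type mockAccounts map[string]bool

func (m mockAccounts) HasAccount(addr string) bool { return m[addr] }

// CanSend is the unit under test: a single behavioral rule exercised in isolation.
func CanSend(ak AccountGetter, from string, amount int64) bool {
	return amount > 0 && ak.HasAccount(from)
}

func main() {
	ak := mockAccounts{"cosmos1alice": true}
	// Illustrative cases: one behavioral rule demonstrated per assertion.
	fmt.Println(CanSend(ak, "cosmos1alice", 10)) // true
	fmt.Println(CanSend(ak, "cosmos1bob", 10))   // false: unknown account
	fmt.Println(CanSend(ak, "cosmos1alice", 0))  // false: non-positive amount
}
```

A journey test would instead drive a whole module flow against such mocks, trading some isolation for broader coverage.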
Example 1: journey vs illustrative tests - depinject's BDD style tests show how we can rapidly build up many illustrative cases demonstrating behavioral rules without [very much code](https://github.com/cosmos/cosmos-sdk/blob/main/depinject/binding_test.go) while maintaining high-level readability.

Example 2: [depinject table driven tests](https://github.com/cosmos/cosmos-sdk/blob/main/depinject/provider_desc_test.go)

Example 3: [Bank keeper tests](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/x/bank/keeper/keeper_test.go#L94-L105) - A mock implementation of `AccountKeeper` is supplied to the keeper constructor.

#### Limitations

Certain modules are tightly coupled beyond the test phase. A recent dependency report for `bank -> auth` found 274 total usages of `auth` in `bank`, 50 of which are in production code and 224 in test. This tight coupling may suggest that either the modules should be merged, or refactoring is required to abstract references to the core types tying the modules together. It could also indicate that these modules should be tested together in integration tests beyond mocked unit tests.

In some cases setting up a test case for a module with many mocked dependencies can be quite cumbersome, and the resulting test may only show that the mocking framework works as expected rather than working as a functional test of interdependent module behavior.

### Integration tests

Integration tests define and exercise relationships between an arbitrary number of modules and/or application subsystems. Wiring for integration tests is provided by `depinject`, and some [helper code](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/testutil/sims/app_helpers.go#L95) starts up a running application. A section of the running application may then be tested.
Certain inputs during different phases of the application life cycle are expected to produce invariant outputs without too much concern for component internals. This type of black box testing has a larger scope than unit testing.

Example 1: [client/grpc\_query\_test/TestGRPCQuery](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/client/grpc_query_test.go#L111-L129) - This test is misplaced in `/client`, but tests the life cycle of (at least) `runtime` and `bank` as they progress through startup, genesis and query time. It also exercises the fitness of the client and query server without putting bytes on the wire through the use of [QueryServiceTestHelper](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/baseapp/grpcrouter_helpers.go#L31).

Example 2: `x/evidence` Keeper integration tests - Starts up an application composed of [8 modules](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/x/evidence/testutil/app.yaml#L1) with [5 keepers](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/x/evidence/keeper/keeper_test.go#L101-L106) used in the integration test suite. One test in the suite exercises [HandleEquivocationEvidence](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/x/evidence/keeper/infraction_test.go#L42) which contains many interactions with the staking keeper.

Example 3: Integration suite app configurations may also be specified in Go (not YAML as above), statically or [dynamically](https://github.com/cosmos/cosmos-sdk/blob/8c23f6f957d1c0bedd314806d1ac65bea59b084c/tests/integration/bank/keeper/keeper_test.go#L129-L134).

#### Limitations

Setting up a particular input state may be more challenging since the application is starting from a zero state. Some of this may be addressed by good test fixture abstractions with testing of their own.
Tests may also be more brittle, and larger refactors could impact application initialization in unexpected ways with harder to understand errors. This could also be seen as a benefit, and indeed the SDK's current integration tests were helpful in tracking down logic errors during earlier stages of app-wiring refactors.

### Simulations

Simulations (also called generative testing) are a special case of integration tests where deterministically random module operations are executed against a running simapp, building blocks on the chain until a specified height is reached. No *specific* assertions are made for the state transitions resulting from module operations, but any error will halt and fail the simulation. Since `crisis` is included in simapp and the simulation runs EndBlockers at the end of each block, any module invariant violations will also fail the simulation.

Modules must implement [AppModuleSimulation.WeightedOperations](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/types/module/simulation.go#L31) to define their simulation operations. Note that not all modules implement this, which may indicate a gap in current simulation test coverage.

Modules not returning simulation operations:

* `auth`
* `evidence`
* `mint`
* `params`

A separate binary, [runsim](https://github.com/cosmos/tools/tree/master/cmd/runsim), is responsible for kicking off some of these tests and managing their life cycle.

#### Limitations

* A success may take a long time to run, 7-10 minutes per simulation in CI.
* Timeouts sometimes occur on apparent successes without any indication why.
* Useful error messages are not provided on failure from CI, requiring a developer to run the simulation locally to reproduce.

### E2E tests

End to end tests exercise the entire system as we understand it in as close an approximation to a production environment as is practical.
Presently these tests are located at [tests/e2e](https://github.com/cosmos/cosmos-sdk/tree/main/tests/e2e) and rely on [testutil/network](https://github.com/cosmos/cosmos-sdk/tree/main/testutil/network) to start up an in-process Tendermint node. An application should be built as minimally as possible to exercise the desired functionality. The SDK uses an application with only the required modules for the tests. Application developers are advised to use their own application for e2e tests.

#### Limitations

In general the limitations of end to end tests are orchestration and compute cost. Scaffolding is required to start up and run a prod-like environment, and this process takes much longer to start and run than unit or integration tests. Global locks present in Tendermint code cause stateful starting/stopping to sometimes hang or fail intermittently when run in a CI environment. The scope of e2e tests has been complected with command line interface testing.

## Decision

We accept these test scopes and identify the following decision points for each.

| Scope       | App Type            | Mocks? |
| ----------- | ------------------- | ------ |
| Unit        | None                | Yes    |
| Integration | integration helpers | Some   |
| Simulation  | minimal app         | No     |
| E2E         | minimal app         | No     |

The decision above is valid for the SDK. Application developers should test their application with their full application instead of the minimal app.

### Unit Tests

All modules must have mocked unit test coverage. Illustrative tests should outnumber journeys in unit tests. Unit tests should outnumber integration tests. Unit tests must not introduce additional dependencies beyond those already present in production code. When module unit test introduction as per [EPIC: Unit testing of modules via mocks](https://github.com/cosmos/cosmos-sdk/issues/12398) results in a near complete rewrite of an integration test suite, the test suite should be retained and moved to `/tests/integration`.
We accept the resulting test logic duplication but recommend improving the unit test suite through the addition of illustrative tests.

### Integration Tests

All integration tests shall be located in `/tests/integration`, even those which do not introduce extra module dependencies. To help limit scope and complexity, it is recommended to use the smallest possible number of modules in application startup, i.e. don't depend on simapp. Integration tests should outnumber e2e tests.

### Simulations

Simulations shall use a minimal application (usually via app wiring). They are located under `/x/{moduleName}/simulation`.

### E2E Tests

Existing e2e tests shall be migrated to integration tests by removing the dependency on the test network and in-process Tendermint node to ensure we do not lose test coverage. The e2e test runner shall transition from in-process Tendermint to a runner powered by Docker via [dockertest](https://github.com/ory/dockertest). E2E tests exercising a full network upgrade shall be written. The CLI testing aspect of existing e2e tests shall be rewritten using the network mocking demonstrated in [PR#12706](https://github.com/cosmos/cosmos-sdk/pull/12706).

## Consequences

### Positive

* test coverage is increased
* test organization is improved
* reduced dependency graph size in modules
* simapp removed as a dependency from modules
* inter-module dependencies introduced in test code are removed
* reduced CI run time after transitioning away from in-process Tendermint

### Negative

* some test logic duplication between unit and integration tests during transition
* tests written using dockertest may have a somewhat worse developer experience

### Neutral

* some discovery required for e2e transition to dockertest

## Further Discussions

It may be useful if test suites could be run in integration mode (with mocked tendermint) or with e2e fixtures (with real tendermint and many nodes).
Integration fixtures could be used for quicker runs; e2e fixtures could be used for more battle hardening.

A PoC for unit tests demonstrating BDD in `x/gov` was completed in PR [#12847](https://github.com/cosmos/cosmos-sdk/pull/12847) \[Rejected]. Observing that a strength of BDD specifications is their readability, and a con is the cognitive load while writing and maintaining them, current consensus is to reserve BDD use for places in the SDK where complex rules and module interactions are demonstrated. More straightforward or low level test cases will continue to rely on go table tests.

Levels of network mocking in integration and e2e tests are still being worked on and formalized.

# ADR 60: ABCI 1.0 Integration (Phase I)

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-060-abci-1.0

## Changelog

* 2022-08-10: Initial Draft (@alexanderbez, @tac0turtle)
* Nov 12, 2022: Update `PrepareProposal` and `ProcessProposal` semantics per the initial implementation [PR](https://github.com/cosmos/cosmos-sdk/pull/13453) (@alexanderbez)

## Status

ACCEPTED

## Abstract

This ADR describes the initial adoption of [ABCI 1.0](https://github.com/tendermint/tendermint/blob/master/spec/abci%2B%2B/README.md), the next evolution of ABCI, within the Cosmos SDK. ABCI 1.0 aims to provide application developers with more flexibility and control over application and consensus semantics, e.g. in-application mempools, in-process oracles, and order-book style matching engines.

## Context

Tendermint will release ABCI 1.0. Notably, at the time of this writing, Tendermint is releasing v0.37.0 which will include `PrepareProposal` and `ProcessProposal`.
The `PrepareProposal` ABCI method is concerned with a block proposer requesting the application to evaluate a series of transactions to be included in the next block, defined as a slice of `TxRecord` objects. The application can either accept, reject, or completely ignore some or all of these transactions. This is an important consideration to make as the application can essentially define and control its own mempool, allowing it to define sophisticated transaction priority and filtering mechanisms, by completely ignoring the `TxRecords` Tendermint sends it, favoring its own transactions. This essentially means that the Tendermint mempool acts more like a gossip data structure.

The second ABCI method, `ProcessProposal`, is used to process the block proposer's proposal as defined by `PrepareProposal`. It is important to note the following with respect to `ProcessProposal`:

* Execution of `ProcessProposal` must be deterministic.
* There must be coherence between `PrepareProposal` and `ProcessProposal`. In other words, for any two correct processes *p* and *q*, if *q*'s Tendermint calls `RequestProcessProposal` on *uₚ* (the proposal prepared by *p*), *q*'s application returns ACCEPT in `ResponseProcessProposal`.

It is important to note that in ABCI 1.0 integration, the application is NOT responsible for locking semantics -- Tendermint will still be responsible for that. In the future, however, the application will be responsible for locking, which allows for parallel execution possibilities.

## Decision

We will integrate ABCI 1.0, which will be introduced in Tendermint v0.37.0, in the next major release of the Cosmos SDK. We will integrate ABCI 1.0 methods on the `BaseApp` type. We describe the implementations of the two methods individually below.

Prior to describing the implementation of the two new methods, it is important to note that the existing ABCI methods, `CheckTx`, `DeliverTx`, etc, still exist and serve the same functions as they do now.
### `PrepareProposal`

Prior to evaluating the decision for how to implement `PrepareProposal`, it is important to note that `CheckTx` will still be executed and will be responsible for evaluating transaction validity as it does now, with one very important *additive* distinction.

When executing transactions in `CheckTx`, the application will now add valid transactions, i.e. passing the AnteHandler, to its own mempool data structure. In order to provide a flexible approach to meet the varying needs of application developers, we will define both a mempool interface and a data structure utilizing Golang generics, allowing developers to focus only on transaction ordering. Developers requiring absolute full control can implement their own custom mempool implementation.

We define the general mempool interface as follows (subject to change):

```go
type Mempool interface {
	// Insert attempts to insert a Tx into the app-side mempool returning
	// an error upon failure.
	Insert(sdk.Context, sdk.Tx) error

	// Select returns an Iterator over the app-side mempool. If txs are specified,
	// then they shall be incorporated into the Iterator. The Iterator must be
	// closed by the caller.
	Select(sdk.Context, [][]byte) Iterator

	// CountTx returns the number of transactions currently in the mempool.
	CountTx() int

	// Remove attempts to remove a transaction from the mempool, returning an error
	// upon failure.
	Remove(sdk.Tx) error
}

// Iterator defines an app-side mempool iterator interface that is as minimal as
// possible. The order of iteration is determined by the app-side mempool
// implementation.
type Iterator interface {
	// Next returns the next transaction from the mempool. If there are no more
	// transactions, it returns nil.
	Next() Iterator

	// Tx returns the transaction at the current position of the iterator.
	Tx() sdk.Tx
}
```

We will define an implementation of `Mempool`, defined by `nonceMempool`, that will cover most basic application use-cases. Namely, it will prioritize transactions by transaction sender, allowing for multiple transactions from the same sender. The default app-side mempool implementation, `nonceMempool`, will operate on a single skip list data structure. Specifically, transactions with the lowest nonce globally are prioritized. Transactions with the same nonce are prioritized by sender address.

```go
type nonceMempool struct {
	txQueue *huandu.SkipList
}
```

Previous discussions \[1] have come to the agreement that Tendermint will perform a request to the application, via `RequestPrepareProposal`, with a certain number of transactions reaped from Tendermint's local mempool. The exact number of transactions reaped will be determined by a local operator configuration. This is referred to as the "one-shot approach" seen in discussions.

When Tendermint reaps transactions from the local mempool and sends them to the application via `RequestPrepareProposal`, the application will have to evaluate the transactions. Specifically, it will need to inform Tendermint whether it should reject and/or include each transaction. Note, the application can even *replace* transactions entirely with other transactions.

When evaluating transactions from `RequestPrepareProposal`, the application will ignore *ALL* transactions sent to it in the request and instead reap up to `RequestPrepareProposal.max_tx_bytes` from its own mempool.

Since an application can technically insert or inject transactions on `Insert` during `CheckTx` execution, it is recommended that applications ensure transaction validity when reaping transactions during `PrepareProposal`. However, what validity exactly means is entirely determined by the application.
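The reaping behavior described above — ignore what Tendermint sends, fill the proposal from the app-side mempool up to `max_tx_bytes`, and re-check validity while reaping — can be sketched as follows. The `tx` type and `reap` function are placeholders for illustration, not the SDK's actual `PrepareProposal` code:

```go
package main

import "fmt"

// tx is a placeholder for an sdk.Tx plus its encoded bytes and validity status.
type tx struct {
	bytes []byte
	valid bool
}

// reap walks the app-side mempool in priority order, selecting valid
// transactions until adding another would exceed maxTxBytes.
func reap(mempool []tx, maxTxBytes int) [][]byte {
	var proposal [][]byte
	total := 0
	for _, t := range mempool {
		if !t.valid {
			continue // re-check validity when reaping, as recommended above
		}
		if total+len(t.bytes) > maxTxBytes {
			break
		}
		total += len(t.bytes)
		proposal = append(proposal, t.bytes)
	}
	return proposal
}

func main() {
	pool := []tx{
		{bytes: []byte("tx-a"), valid: true},
		{bytes: []byte("bad"), valid: false},
		{bytes: []byte("tx-b"), valid: true},
		{bytes: []byte("tx-c-too-big"), valid: true},
	}
	// With an 8-byte budget, only tx-a and tx-b fit.
	fmt.Println(len(reap(pool, 8))) // 2
}
```

A real implementation would iterate the `Mempool` via its `Select` iterator rather than a slice, but the size-budget loop is the same shape.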
The Cosmos SDK will provide a default `PrepareProposal` implementation that simply selects up to `MaxBytes` *valid* transactions. However, applications can override this default implementation with their own implementation and set that on `BaseApp` via `SetPrepareProposal`.

### `ProcessProposal`

The `ProcessProposal` ABCI method is relatively straightforward. It is responsible for ensuring validity of the proposed block containing transactions that were selected from the `PrepareProposal` step. However, how an application determines validity of a proposed block depends on the application and its varying use cases. For most applications, simply calling the `AnteHandler` chain would suffice, but there could easily be other applications that need more control over the validation process of the proposed block, such as ensuring txs are in a certain order or that certain transactions are included. While this theoretically could be achieved with a custom `AnteHandler` implementation, it's not the cleanest UX or the most efficient solution.

Instead, we will define an additional ABCI interface method on the existing `Application` interface, similar to the existing ABCI methods such as `BeginBlock` or `EndBlock`. This new interface method will be defined as follows:

```go
ProcessProposal(sdk.Context, abci.RequestProcessProposal) error
```

Note, we must call `ProcessProposal` with a new internal branched state on the `Context` argument, as we cannot simply use the existing `checkState` because `BaseApp` already has a modified `checkState` at this point. So when executing `ProcessProposal`, we create a similar branched state, `processProposalState`, off of `deliverState`. Note, the `processProposalState` is never committed and is completely discarded after `ProcessProposal` finishes execution.
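The branch-and-discard semantics of `processProposalState` can be illustrated with a toy cache-wrapped store; this is placeholder code, not `BaseApp`'s actual state machinery:

```go
package main

import "fmt"

// store is a placeholder for committed KV state (e.g. deliverState's store).
type store map[string]string

// branch wraps a parent store: reads fall through, writes stay local.
type branch struct {
	parent store
	writes map[string]string
}

func newBranch(parent store) *branch {
	return &branch{parent: parent, writes: map[string]string{}}
}

func (b *branch) Get(k string) string {
	if v, ok := b.writes[k]; ok {
		return v
	}
	return b.parent[k]
}

func (b *branch) Set(k, v string) { b.writes[k] = v }

func main() {
	deliverState := store{"balance/alice": "100"}

	// ProcessProposal executes against a branch off deliverState...
	pps := newBranch(deliverState)
	pps.Set("balance/alice", "90")
	fmt.Println(pps.Get("balance/alice")) // 90

	// ...and the branch is simply dropped afterwards: nothing was committed.
	fmt.Println(deliverState["balance/alice"]) // 100
}
```

Discarding the branch is just letting it go out of scope; the parent state is untouched regardless of what `ProcessProposal` wrote.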
The Cosmos SDK will provide a default implementation of `ProcessProposal` in which all transactions are validated using the `CheckTx` flow, i.e. the AnteHandler, and will always return ACCEPT unless any transaction cannot be decoded.

### `DeliverTx`

Since transactions are not truly removed from the app-side mempool during `PrepareProposal` - `ProcessProposal` can fail or take multiple rounds and we do not want to lose transactions - we finally remove the transaction from the app-side mempool during `DeliverTx`, since at this phase the transactions are being included in the proposed block.

Alternatively, we can treat transactions as truly removed during the reaping phase in `PrepareProposal` and add them back to the app-side mempool in case `ProcessProposal` fails.

## Consequences

### Backwards Compatibility

ABCI 1.0 is naturally not backwards compatible with prior versions of the Cosmos SDK and Tendermint. For example, sending `RequestPrepareProposal` to an application that does not speak ABCI 1.0 will naturally fail. However, in the first phase of the integration, the existing ABCI methods as we know them today will still exist and function as they currently do.

### Positive

* Applications now have full control over transaction ordering and priority.
* Lays the groundwork for the full integration of ABCI 1.0, which will unlock more app-side use cases around block construction and integration with the Tendermint consensus engine.

### Negative

* Requires that the "mempool", as a general data structure that collects and stores uncommitted transactions, will be duplicated between both Tendermint and the Cosmos SDK.
* Additional requests between Tendermint and the Cosmos SDK in the context of block execution. Albeit, the overhead should be negligible.
* Not backwards compatible with previous versions of Tendermint and the Cosmos SDK.
## Further Discussions

It is possible to design the app-side implementation of the `Mempool[T MempoolTx]` in many different ways using different data structures and implementations, all of which have different tradeoffs. The proposed solution keeps things simple and covers cases that would be required for most basic applications. There are tradeoffs that can be made to improve performance of reaping and inserting into the provided mempool implementation.

## References

* [Link](https://github.com/tendermint/tendermint/blob/master/spec/abci%2B%2B/README.md)
* \[1] [Link](https://github.com/tendermint/tendermint/issues/7750#issuecomment-1076806155)
* \[2] [Link](https://github.com/tendermint/tendermint/issues/7750#issuecomment-1075717151)

# ADR 061: Liquid Staking

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-061-liquid-staking

## Changelog

* 2022-09-10: Initial Draft (@zmanian)

## Status

ACCEPTED

## Abstract

Add a semi-fungible liquid staking primitive to the default Cosmos SDK staking module. This upgrades proof of stake to enable safe designs with lower overall monetary issuance and integration with numerous liquid staking protocols like Stride, Persistence, Quicksilver, Lido etc.

## Context

The original release of the Cosmos Hub featured the implementation of a groundbreaking proof of stake mechanism featuring delegation, slashing, in protocol reward distribution and adaptive issuance. This design was state of the art for 2016 and has been deployed without major changes by many L1 blockchains. As both Proof of Stake and blockchain use cases have matured, this design has aged poorly and should no longer be considered a good baseline Proof of Stake issuance. In the world of application specific blockchains, there cannot be a one size fits all blockchain, but the Cosmos SDK does endeavour to provide a good baseline implementation and one that is suitable for the Cosmos Hub.
The most important deficiency of the legacy staking design is that it composes poorly with on-chain protocols for trading, lending, and derivatives that are referred to collectively as DeFi. The legacy staking implementation starves these applications of liquidity by increasing the risk-free rate adaptively. It basically makes DeFi and staking security somewhat incompatible.

The Osmosis team has adopted the idea of Superfluid and Interfluid staking, where assets that are participating in DeFi applications can also be used in proof of stake. This requires tight integration with an enshrined set of DeFi applications and thus is unsuitable for the Cosmos SDK.

It's also important to note that Interchain Accounts are available in the default IBC implementation and can be used to [rehypothecate](https://www.investopedia.com/terms/h/hypothecation.asp#toc-what-is-rehypothecation) delegations. Thus liquid staking is already possible and these changes merely improve the UX of liquid staking. Centralized exchanges also rehypothecate staked assets, posing challenges for decentralization. This ADR takes the position that adoption of in-protocol liquid staking is the preferable outcome and provides new levers to incentivize decentralization of stake.

These changes to the staking module have been in development for more than a year and have seen substantial interest from industry participants who plan to build staking UX. The internal economics team at Informal has also done a review of the impacts of these changes, and this review led to the development of the exempt delegation system. This system provides governance with a tuneable parameter for modulating the risks of the principal-agent problem, called the exemption factor.

## Decision

We implement the semi-fungible liquid staking system and exemption factor system within the Cosmos SDK.
Though registered as fungible assets, these tokenized shares have extremely limited fungibility: they are fungible only within the specific delegation record that was created when the shares were tokenized. These assets can be used for OTC trades, but composability with DeFi is limited. The primary expected use case is improving the user experience of liquid staking providers.

A new governance parameter is introduced that defines the ratio of exempt to issued tokenized shares. This is called the exemption factor. A larger exemption factor allows more tokenized shares to be issued for a smaller amount of exempt delegations. If governance is comfortable with how the liquid staking market is evolving, it makes sense to increase this value.

Min self-delegation is removed from the staking system, with the expectation that it will be replaced by the exempt delegations system. The exempt delegation system allows multiple accounts to demonstrate economic alignment with the validator operator as team members, partners etc. without co-mingling funds. Delegation exemption will likely be required to grow a validator's business under widespread adoption of liquid staking once governance has adjusted the exemption factor.

When shares are tokenized, the underlying shares are transferred to a module account and rewards go to the module account for the TokenizedShareRecord. There is no longer a mechanism to override the validator's vote for TokenizedShares.

### `MsgTokenizeShares`

The MsgTokenizeShares message is used to tokenize delegated tokens. This message can be executed by any delegator who has a positive amount of delegation; after execution, the specified amount of delegation disappears from the account and share tokens are provided. Share tokens are denominated in the validator and record id of the underlying delegation.

A user may tokenize some or all of their delegation. They will receive shares with the denom of `cosmosvaloper1xxxx/5` where 5 is the record id for the validator operator.
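The `<validator-operator-address>/<record-id>` denom format shown above can be illustrated with a small helper. The function names here are hypothetical, not part of the staking module's API:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// shareDenom builds the denom for tokenized shares from the validator
// operator address and the tokenize share record id, e.g. "cosmosvaloper1xxxx/5".
func shareDenom(valOperAddr string, recordID uint64) string {
	return fmt.Sprintf("%s/%d", valOperAddr, recordID)
}

// parseShareDenom splits a share denom back into its validator operator
// address and record id components.
func parseShareDenom(denom string) (string, uint64, error) {
	parts := strings.SplitN(denom, "/", 2)
	if len(parts) != 2 {
		return "", 0, fmt.Errorf("invalid share denom: %s", denom)
	}
	id, err := strconv.ParseUint(parts[1], 10, 64)
	if err != nil {
		return "", 0, fmt.Errorf("invalid record id in denom %s: %w", denom, err)
	}
	return parts[0], id, nil
}

func main() {
	d := shareDenom("cosmosvaloper1xxxx", 5)
	fmt.Println(d)
	val, id, _ := parseShareDenom(d)
	fmt.Println(val, id)
}
```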
MsgTokenizeShares fails if the account is a VestingAccount. Users will have to move vested tokens to a new account and endure the unbonding period. We view this as an acceptable tradeoff vs. the complex bookkeeping required to track vested tokens.

The total amount of outstanding tokenized shares for the validator is checked against the sum of exempt delegations multiplied by the exemption factor. If the amount of tokenized shares would exceed this limit, execution fails.

MsgTokenizeSharesResponse provides the number of tokens generated and their denom.

### `MsgRedeemTokensforShares`

The MsgRedeemTokensforShares message is used to redeem the delegation from share tokens. This message can be executed by any user who owns share tokens. After execution, the delegation reappears under the user's account.

### `MsgTransferTokenizeShareRecord`

The MsgTransferTokenizeShareRecord message is used to transfer the ownership of rewards generated from the tokenized amount of delegation. The tokenize share record is created when a user tokenizes their delegation and deleted when the full amount of share tokens is redeemed. This is designed to work with liquid staking designs that do not redeem the tokenized shares and may instead want to keep the shares tokenized.

### `MsgExemptDelegation`

The MsgExemptDelegation message is used to exempt a delegation to a validator. If the exemption factor is greater than 0, this will allow more delegation shares to be issued from the validator. This design allows the chain to force an amount of self-delegation by validators participating in liquid staking schemes.

## Consequences

### Backwards Compatibility

By setting the exemption factor to zero, this module works like legacy staking. The only substantial change is the removal of min-self-bond, and without any tokenized shares there is no incentive to exempt delegation.

### Positive

This approach should enable integration with liquid staking providers and improved user experience.
It provides a pathway to security under non-exponential issuance policies in the baseline staking module.

# ADR 062: Collections, a simplified storage layer for cosmos-sdk modules.

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-062-collections-state-layer

## Changelog

* 30/11/2022: PROPOSED

## Status

PROPOSED - Implemented

## Abstract

We propose a simplified module storage layer which leverages golang generics to allow module developers to handle module storage in a simple and straightforward manner, whilst offering safety, extensibility and standardisation.

## Context

Module developers are forced into manually implementing storage functionalities in their modules; those functionalities include but are not limited to:

* Defining key to bytes formats.
* Defining value to bytes formats.
* Defining secondary indexes.
* Defining query methods that expose storage to the outside.
* Defining local methods to deal with storage writing.
* Dealing with genesis imports and exports.
* Writing tests for all the above.

This brings in a lot of problems:

* It blocks developers from focusing on the most important part: writing business logic.
* Key to bytes formats are complex and their definition is error-prone, for example:
  * how do I format time to bytes in such a way that the bytes are sorted?
  * how do I ensure I don't have namespace collisions when dealing with secondary indexes?
* The lack of standardisation makes life hard for clients, and the problem is exacerbated when it comes to providing proofs for objects present in state. Clients are forced to maintain a list of object paths to gather proofs.

### Current Solution: ORM

The current SDK-proposed solution to this problem is [ORM](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-055-orm.md). While ORM offers a lot of good functionality aimed at solving these specific problems, it has some downsides:

* It requires migrations.
* It uses the newest protobuf golang API, whilst the SDK still mainly uses gogoproto.
* Integrating ORM into a module would require the developer to deal with two different golang frameworks (golang protobuf + gogoproto) representing the same API objects.
* It has a high learning curve, even for simple storage layers, as it requires developers to have knowledge around protobuf options, custom cosmos-sdk storage extensions, and tooling download. Even then, they still need to learn the code-generated API.

### CosmWasm Solution: cw-storage-plus

The collections API takes inspiration from [cw-storage-plus](https://docs.cosmwasm.com/docs/1.0/smart-contracts/state/cw-plus/), which has demonstrated to be a powerful tool for dealing with storage in CosmWasm contracts. It's simple, does not require extra tooling, and makes it easy to deal with complex storage structures (indexes, snapshots, etc.). The API is straightforward and explicit.

## Decision

We propose to port the `collections` API, whose implementation lives in [NibiruChain/collections](https://github.com/NibiruChain/collections), to cosmos-sdk.

Collections implements five different storage handler types:

* `Map`: which deals with simple `key=>object` mappings.
* `KeySet`: which acts as a `Set` and only retains keys and no object (use case: allow-lists).
* `Item`: which always contains only one object (use case: Params).
* `Sequence`: which implements a simple always-increasing number (use case: Nonces).
* `IndexedMap`: builds on top of `Map` and `KeySet` and allows creating relationships between objects and objects' secondary keys.

All the collection APIs build on top of the simple `Map` type.

Collections is fully generic, meaning that anything can be used as `Key` and `Value`. It can be a protobuf object or not. Collections types, in fact, delegate the duty of serialisation of keys and values to a secondary collections API component called `ValueEncoders` and `KeyEncoders`.
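A rough sketch of how a generic `Map` with pluggable key and value encoders might look. This is a simplified illustration under assumed signatures; the real API in NibiruChain/collections differs in detail, and the in-memory store here stands in for a module's KVStore:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// KeyEncoder turns keys into bytes whose byte-wise order matches the
// logical key order (required for correct range iteration).
type KeyEncoder[K any] interface {
	Encode(key K) []byte
}

// ValueEncoder turns values into bytes and back.
type ValueEncoder[V any] interface {
	Encode(value V) []byte
	Decode(b []byte) V
}

// Uint64Key encodes uint64 keys big-endian so lexicographic byte order
// equals numeric order.
type Uint64Key struct{}

func (Uint64Key) Encode(k uint64) []byte {
	b := make([]byte, 8)
	binary.BigEndian.PutUint64(b, k)
	return b
}

// StringValue is a trivial value encoder for strings.
type StringValue struct{}

func (StringValue) Encode(v string) []byte { return []byte(v) }
func (StringValue) Decode(b []byte) string { return string(b) }

// Map is a simplified key=>object collection. Each Map gets its own
// prefix so different collections never collide in the same store.
type Map[K, V any] struct {
	prefix []byte
	kc     KeyEncoder[K]
	vc     ValueEncoder[V]
	store  map[string][]byte
}

func NewMap[K, V any](prefix byte, kc KeyEncoder[K], vc ValueEncoder[V]) Map[K, V] {
	return Map[K, V]{prefix: []byte{prefix}, kc: kc, vc: vc, store: map[string][]byte{}}
}

func (m Map[K, V]) Set(k K, v V) {
	m.store[string(append(m.prefix, m.kc.Encode(k)...))] = m.vc.Encode(v)
}

func (m Map[K, V]) Get(k K) (V, bool) {
	var zero V
	b, ok := m.store[string(append(m.prefix, m.kc.Encode(k)...))]
	if !ok {
		return zero, false
	}
	return m.vc.Decode(b), true
}

func main() {
	names := NewMap[uint64, string](0x1, Uint64Key{}, StringValue{})
	names.Set(7, "alice")
	v, ok := names.Get(7)
	fmt.Println(v, ok)
}
```

The module developer only picks encoders and a prefix; key formatting, namespacing, and serialisation are handled once by the library.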
`ValueEncoders` take care of converting a value to bytes (relevant only for `Map`), and offer a plug-and-play layer which allows us to change how we encode objects, which is relevant for swapping serialisation frameworks and enhancing performance. `Collections` already comes with default `ValueEncoders`, specifically for: protobuf objects, special SDK types (sdk.Int, sdk.Dec).

`KeyEncoders` take care of converting keys to bytes. `collections` already comes with some default `KeyEncoders` for some primitive golang types (uint64, string, time.Time, ...) and some widely used sdk types (sdk.Acc/Val/ConsAddress, sdk.Int/Dec, ...). These default implementations also offer safety around proper lexicographic ordering and namespace collisions.

Examples of the collections API can be found here:

* introduction: [Link](https://github.com/NibiruChain/collections/tree/main/examples)
* usage in nibiru: [x/oracle](https://github.com/NibiruChain/nibiru/blob/master/x/oracle/keeper/keeper.go#L32), [x/perp](https://github.com/NibiruChain/nibiru/blob/master/x/perp/keeper/keeper.go#L31)
* cosmos-sdk's x/staking migrated: [Link](https://github.com/testinginprod/cosmos-sdk/pull/22)

## Consequences

### Backwards Compatibility

The design of `ValueEncoders` and `KeyEncoders` allows modules to retain the same `byte(key)=>byte(value)` mappings, making the upgrade to the new storage layer non-state-breaking.

### Positive

* ADR aimed at removing code from the SDK rather than adding it. Migrating just `x/staking` to collections would yield a net decrease in LOC (even considering the addition of collections itself).
* Simplifies and standardises storage layers across modules in the SDK.
* Does not require dealing with protobuf.
* It's pure golang code.
* Generalisation over `KeyEncoders` and `ValueEncoders` allows us to not tie ourselves to a data serialisation framework.
* `KeyEncoders` and `ValueEncoders` can be extended to provide schema reflection.
### Negative

* Golang generics are not as battle-tested as other Golang features, despite being used in production right now.
* Collection types instantiation needs to be improved.

## Further Discussions

* Automatic genesis import/export (not implemented because of API breakage)
* Schema reflection

## References

# ADR 063: Core Module API

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-063-core-module-api

## Changelog

* 2022-08-18 First Draft
* 2022-12-08 First Draft
* 2023-01-24 Updates

## Status

ACCEPTED Partially Implemented

## Abstract

A new core API is proposed as a way to develop cosmos-sdk applications that will eventually replace the existing `AppModule` and `sdk.Context` frameworks with a set of core services and extension interfaces. This core API aims to:

* be simpler,
* more extensible,
* more stable than the current framework,
* enable deterministic events and queries,
* support event listeners, and
* support [ADR 033: Protobuf-based Inter-Module Communication](/sdk/latest/reference/architecture/adr-033-protobuf-inter-module-comm) clients.

## Context

Historically, modules have exposed their functionality to the framework via the `AppModule` and `AppModuleBasic` interfaces, which have the following shortcomings:

* both `AppModule` and `AppModuleBasic` need to be defined and registered, which is counter-intuitive
* apps need to implement the full interfaces, even parts they don't need (although there are workarounds for this)
* interface methods depend heavily on unstable third-party dependencies, in particular Comet
* legacy required methods have littered these interfaces for far too long

In order to interact with the state machine, modules have needed to do a combination of these things:

* get store keys from the app
* call methods on `sdk.Context`, which contains more or less the full set of capabilities available to modules
By isolating all the state machine functionality into `sdk.Context`, the set of functionalities available to modules is tightly coupled to this type. If there are changes to upstream dependencies (such as Comet) or new functionalities are desired (such as alternate store types), the changes inevitably impact `sdk.Context` and all consumers of it (basically all modules). Also, all modules now receive `context.Context` and need to convert these to `sdk.Context`s with a non-ergonomic unwrapping function.

Any breaking changes to these interfaces, such as ones imposed by third-party dependencies like Comet, have the side effect of forcing all modules in the ecosystem to update in lock-step. This means it is almost impossible to have a version of a module which can be run with 2 or 3 different versions of the SDK, or 2 or 3 different versions of another module. This lock-step coupling slows down overall development within the ecosystem and causes updates to components to be delayed longer than they would be if things were more stable and loosely coupled.
## Decision

The `core` API is a set of core services and extension interfaces that modules can rely on to interact with the state machine and expose their functionality to it, designed in a principled way such that:

* tight coupling of dependencies and unrelated functionalities is minimized or eliminated
* APIs can have long-term stability guarantees
* the SDK framework is extensible in a safe and straightforward way

The design principles of the core API are as follows:

* everything that a module wants to interact with in the state machine is a service
* all services coordinate state via `context.Context` and don't try to recreate the "bag of variables" approach of `sdk.Context`
* all independent services are isolated in independent packages with minimal APIs and minimal dependencies
* the core API should be minimalistic and designed for long-term support (LTS)
* a "runtime" module will implement all the "core services" defined by the core API and can handle all module functionalities exposed by core extension interfaces
* other non-core and/or non-LTS services can be exposed by specific versions of runtime modules or other modules following the same design principles; this includes functionality that interacts with specific non-stable versions of third-party dependencies such as Comet
* the core API doesn't implement *any* functionality, it just defines types
* go stable API compatibility guidelines are followed: [Link](https://go.dev/blog/module-compatibility)

A "runtime" module is any module which implements the core functionality of composing an ABCI app, which is currently handled by `BaseApp` and the `ModuleManager`. Runtime modules which implement the core API are *intentionally* separate from the core API in order to enable more parallel versions and forks of the runtime module than is possible with the SDK's current tightly coupled `BaseApp` design, while still allowing for a high degree of composability and compatibility.
Modules which are built only against the core API don't need to know anything about which version of runtime, `BaseApp` or Comet is in use in order to be compatible. Modules from the core mainline SDK could be easily composed with a forked version of runtime with this pattern.

This design is intended to enable matrices of compatible dependency versions. Ideally, a given version of any module is compatible with multiple versions of the runtime module and other compatible modules. This will allow dependencies to be selectively updated based on battle-testing. More conservative projects may want to update some dependencies more slowly than faster-moving projects.

### Core Services

The following "core services" are defined by the core API. All valid runtime module implementations should provide implementations of these services to modules via both [dependency injection](/sdk/latest/reference/architecture/adr-057-app-wiring) and manual wiring. The individual services described below are all bundled in a convenient `appmodule.Service` "bundle service" so that for simplicity modules can declare a dependency on a single service.

#### Store Services

Store services will be defined in the `cosmossdk.io/core/store` package.

The generic `store.KVStore` interface is the same as the current SDK `KVStore` interface. Store keys have been refactored into store services which, instead of expecting the context to know about stores, invert the pattern and allow retrieving a store from a generic context.
There are three store services for the three types of currently supported stores - regular kv-store, memory, and transient:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type KVStoreService interface {
    OpenKVStore(context.Context) KVStore
}

type MemoryStoreService interface {
    OpenMemoryStore(context.Context) KVStore
}

type TransientStoreService interface {
    OpenTransientStore(context.Context) KVStore
}
```

Modules can use these services like this:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (k msgServer) Send(ctx context.Context, msg *types.MsgSend) (*types.MsgSendResponse, error) {
    store := k.kvStoreSvc.OpenKVStore(ctx)
    // read and write state via store ...
}
```

Just as with the current runtime module implementation, modules will not need to explicitly name these store keys; rather, the runtime module will choose an appropriate name for them and modules just need to request the type of store they need in their dependency injection (or manual) constructors.

#### Event Service

The event `Service` will be defined in the `cosmossdk.io/core/event` package.

The event `Service` allows modules to emit typed and legacy untyped events:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package event

type Service interface {
    // EmitProtoEvent emits events represented as a protobuf message (as described in ADR 032).
    //
    // Callers SHOULD assume that these events may be included in consensus. These events
    // MUST be emitted deterministically and adding, removing or changing these events SHOULD
    // be considered state-machine breaking.
    EmitProtoEvent(ctx context.Context, event protoiface.MessageV1) error

    // EmitKVEvent emits an event based on an event and kv-pair attributes.
    //
    // These events will not be part of consensus and adding, removing or changing these events is
    // not a state-machine breaking change.
    EmitKVEvent(ctx context.Context, eventType string, attrs ...KVEventAttribute) error

    // EmitProtoEventNonConsensus emits events represented as a protobuf message (as described in ADR 032), without
    // including it in blockchain consensus.
    //
    // These events will not be part of consensus and adding, removing or changing events is
    // not a state-machine breaking change.
    EmitProtoEventNonConsensus(ctx context.Context, event protoiface.MessageV1) error
}
```

Typed events emitted with `EmitProtoEvent` should be assumed to be part of blockchain consensus (whether they are part of the block or app hash is left to the runtime to specify).

Events emitted by `EmitKVEvent` and `EmitProtoEventNonConsensus` are not considered to be part of consensus and cannot be observed by other modules. If there is a client-side need to add events in patch releases, these methods can be used.

#### Logger

A logger (`cosmossdk.io/log`) must be supplied using `depinject`, and will be made available for modules to use via `depinject.In`. Modules using it should follow the current pattern in the SDK by adding the module name before using it.

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type ModuleInputs struct {
    depinject.In

    Logger log.Logger
}

func ProvideModule(in ModuleInputs) ModuleOutputs {
    keeper := keeper.NewKeeper(
        in.Logger,
    )
}

func NewKeeper(logger log.Logger) Keeper {
    return Keeper{
        logger: logger.With(log.ModuleKey, "x/"+types.ModuleName),
    }
}
```

### Core `AppModule` extension interfaces

Modules will provide their core services to the runtime module via extension interfaces built on top of the `cosmossdk.io/core/appmodule.AppModule` tag interface.
This tag interface requires only two empty methods, which allow `depinject` to identify implementors as `depinject.OnePerModule` types and as app module implementations:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type AppModule interface {
    depinject.OnePerModuleType

    // IsAppModule is a dummy method to tag a struct as implementing an AppModule.
    IsAppModule()
}
```

Other core extension interfaces will be defined in `cosmossdk.io/core` and should be supported by valid runtime implementations.

#### `MsgServer` and `QueryServer` registration

`MsgServer` and `QueryServer` registration is done by implementing the `HasServices` extension interface:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type HasServices interface {
    AppModule

    RegisterServices(grpc.ServiceRegistrar)
}
```

Because of the `cosmos.msg.v1.service` protobuf option, required for `Msg` services, the same `ServiceRegistrar` can be used to register both `Msg` and query services.

#### Genesis

The genesis `Handler` functions - `DefaultGenesis`, `ValidateGenesis`, `InitGenesis` and `ExportGenesis` - are specified against the `GenesisSource` and `GenesisTarget` interfaces, which abstract over genesis sources that may be a single JSON object or collections of JSON objects that can be efficiently streamed.

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// GenesisSource is a source for genesis data in JSON format. It may abstract over a
// single JSON object or separate files for each field in a JSON object that can
// be streamed over. Modules should open a separate io.ReadCloser for each field that
// is required. When fields represent arrays they can efficiently be streamed
// over. If there is no data for a field, this function should return nil, nil. It is
// important that the caller closes the reader when done with it.
type GenesisSource = func(field string) (io.ReadCloser, error)

// GenesisTarget is a target for writing genesis data in JSON format. It may
// abstract over a single JSON object or JSON in separate files that can be
// streamed over. Modules should open a separate io.WriteCloser for each field
// and should prefer writing fields as arrays when possible to support efficient
// iteration. It is important that the caller closes the writer AND checks the error
// when done with it. It is expected that a stream of JSON data is written
// to the writer.
type GenesisTarget = func(field string) (io.WriteCloser, error)
```

All genesis objects for a given module are expected to conform to the semantics of a JSON object. Each field in the JSON object should be read and written separately to support streaming genesis. The [ORM](/sdk/latest/reference/architecture/adr-055-orm) and [collections](/sdk/latest/reference/architecture/adr-062-collections-state-layer) both support streaming genesis, and modules using these frameworks generally do not need to write any manual genesis code.

To support genesis, modules should implement the `HasGenesis` extension interface:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type HasGenesis interface {
    AppModule

    // DefaultGenesis writes the default genesis for this module to the target.
    DefaultGenesis(GenesisTarget) error

    // ValidateGenesis validates the genesis data read from the source.
    ValidateGenesis(GenesisSource) error

    // InitGenesis initializes module state from the genesis source.
    InitGenesis(context.Context, GenesisSource) error

    // ExportGenesis exports module state to the genesis target.
    ExportGenesis(context.Context, GenesisTarget) error
}
```

#### Pre Blockers

Modules that have functionality that runs before BeginBlock should implement the `HasPreBlocker` interface:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type HasPreBlocker interface {
    AppModule

    PreBlock(context.Context) error
}
```

#### Begin and End Blockers

Modules that have functionality that runs before transactions (begin blockers) or after transactions (end blockers) should implement the `HasBeginBlocker` and/or `HasEndBlocker` interfaces:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type HasBeginBlocker interface {
    AppModule

    BeginBlock(context.Context) error
}

type HasEndBlocker interface {
    AppModule

    EndBlock(context.Context) error
}
```

The `BeginBlock` and `EndBlock` methods will take a `context.Context`, because:

* most modules don't need Comet information other than `BlockInfo`, so we can eliminate dependencies on specific Comet versions
* for the few modules that need Comet block headers and/or return validator updates, specific versions of the runtime module will provide specific functionality for interacting with the specific version(s) of Comet supported

In order for `BeginBlock`, `EndBlock` and `InitGenesis` to send back validator updates and retrieve full Comet block headers, the runtime module for a specific version of Comet could provide services like this:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type ValidatorUpdateService interface {
    SetValidatorUpdates(context.Context, []abci.ValidatorUpdate)
}
```

Header Service defines a way to get header information about a block.
This information is generalized for all implementations:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type Service interface {
    GetHeaderInfo(context.Context) Info
}

type Info struct {
    Height  int64     // Height returns the height of the block
    Hash    []byte    // Hash returns the hash of the block header
    Time    time.Time // Time returns the time of the block
    ChainID string    // ChainID returns the chain ID of the block
}
```

Comet Service provides a way to get Comet-specific information:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type Service interface {
    GetCometInfo(context.Context) Info
}

type CometInfo struct {
    Evidence []abci.Misbehavior // Evidence returns the misbehavior of the block
    // ValidatorsHash returns the hash of the validators
    // For Comet, it is the hash of the next validators
    ValidatorsHash    []byte
    ProposerAddress   []byte          // ProposerAddress returns the address of the block proposer
    DecidedLastCommit abci.CommitInfo // DecidedLastCommit returns the last commit info
}
```

If a user would like to provide a module with other information, they would need to implement another service like:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type RollKit interface {
    ...
}
```

We know these types will change at the Comet level, and also that a very limited set of modules actually need this functionality, so they are intentionally kept out of core to keep core limited to the necessary, minimal set of stable APIs.

#### Remaining Parts of AppModule

The current `AppModule` framework handles a number of additional concerns which aren't addressed by this core API.
These include:

* gas
* block headers
* upgrades
* registration of gogo proto and amino interface types
* cobra query and tx commands
* gRPC gateway
* crisis module invariants
* simulations

Additional `AppModule` extension interfaces, either inside or outside of core, will need to be specified to handle these concerns.

In the case of gogo proto and amino interfaces, the registration of these generally should happen as early as possible during initialization, and in [ADR 057: App Wiring](/sdk/latest/reference/architecture/adr-057-app-wiring), protobuf type registration happens before dependency injection (although this could alternatively be done in dedicated DI providers).

gRPC gateway registration should probably be handled by the runtime module, but the core API shouldn't depend on gRPC gateway types because 1) we are already using an older version and 2) it's possible the framework can do this registration automatically in the future. So for now, the runtime module should probably provide some sort of specific type for doing this registration, ex:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type GrpcGatewayInfo struct {
    Handlers []GrpcGatewayHandler
}

type GrpcGatewayHandler func(ctx context.Context, mux *runtime.ServeMux, client QueryClient) error
```

which modules can return in a provider:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func ProvideGrpcGateway() GrpcGatewayInfo {
    return GrpcGatewayInfo{
        Handlers: []GrpcGatewayHandler{
            types.RegisterQueryHandlerClient,
        },
    }
}
```

Crisis module invariants and simulations are subject to potential redesign and should be managed with types defined in the crisis and simulation modules respectively.

Extension interfaces for CLI commands will be provided via the `cosmossdk.io/client/v2` module and its [autocli](/sdk/latest/reference/architecture/adr-058-auto-generated-cli) framework.
#### Example Usage

Here is an example of setting up a hypothetical `foo` v2 module which uses the [ORM](/sdk/latest/reference/architecture/adr-055-orm) for its state management and genesis.

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type Keeper struct {
    db     orm.ModuleDB
    evtSvc event.Service
}

func (k Keeper) RegisterServices(r grpc.ServiceRegistrar) {
    foov1.RegisterMsgServer(r, k)
    foov1.RegisterQueryServer(r, k)
}

func (k Keeper) BeginBlock(context.Context) error {
    return nil
}

func ProvideApp(config *foomodulev2.Module, evtSvc event.Service, db orm.ModuleDB) (Keeper, appmodule.AppModule) {
    k := Keeper{db: db, evtSvc: evtSvc}
    return k, k
}
```

### Runtime Compatibility Version

The `core` module will define a static integer var, `cosmossdk.io/core.RuntimeCompatibilityVersion`, which is a minor version indicator of the core module that is accessible at runtime. Correct runtime module implementations should check this compatibility version and return an error if the current `RuntimeCompatibilityVersion` is higher than the version of the core API that this runtime version can support. When new features are added to the `core` module API that runtime modules are required to support, this version should be incremented.

### Runtime Modules

The initial `runtime` module will simply be created within the existing `github.com/cosmos/cosmos-sdk` go module under the `runtime` package. This module will be a small wrapper around the existing `BaseApp`, `sdk.Context` and module manager, and will follow the Cosmos SDK's existing [0-based versioning](https://0ver.org). To move to semantic versioning as well as runtime modularity, new officially supported runtime modules will be created under the `cosmossdk.io/runtime` prefix. For each supported consensus engine, a semantically-versioned go module should be created with a runtime implementation for that consensus engine.
For example:

* `cosmossdk.io/runtime/comet`
* `cosmossdk.io/runtime/comet/v2`
* `cosmossdk.io/runtime/rollkit`
* etc.

These runtime modules should attempt to be semantically versioned even if the underlying consensus engine is not. Also, because a runtime module is also a first-class Cosmos SDK module, it should have a protobuf module config type. A new semantically versioned module config type should be created for each of these runtime modules such that there is a 1:1 correspondence between the go module and module config type. The same practice should be followed for every semantically versioned Cosmos SDK module as described in [ADR 057: App Wiring](/sdk/latest/reference/architecture/adr-057-app-wiring).

Currently, `github.com/cosmos/cosmos-sdk/runtime` uses the protobuf config type `cosmos.app.runtime.v1alpha1.Module`. When we have a standalone v1 comet runtime, we should use a dedicated protobuf module config type such as `cosmos.runtime.comet.v1.Module`. When we release v2 of the comet runtime (`cosmossdk.io/runtime/comet/v2`) we should have a corresponding `cosmos.runtime.comet.v2.Module` protobuf type.

In order to make it easier to support different consensus engines that support the same core module functionality as described in this ADR, a common go module should be created with shared runtime components. The easiest runtime components to share initially are probably the message/query router, inter-module client, service registrar, and event router. This common runtime module should be created initially as the `cosmossdk.io/runtime/common` go module.

When this new architecture has been implemented, the main dependency for a Cosmos SDK module would be `cosmossdk.io/core` and that module should be able to be used with any supported consensus engine (to the extent that it does not explicitly depend on consensus engine specific functionality such as Comet's block headers).
An app developer would then be able to choose which consensus engine they want to use by importing the corresponding runtime module. The current `BaseApp` would be refactored into the `cosmossdk.io/runtime/comet` module, the router infrastructure in `baseapp/` would be refactored into `cosmossdk.io/runtime/common` and support ADR 033, and eventually a dependency on `github.com/cosmos/cosmos-sdk` would no longer be required. In short, modules would depend primarily on `cosmossdk.io/core`, and each `cosmossdk.io/runtime/{consensus-engine}` would implement the `cosmossdk.io/core` functionality for that consensus engine.

One additional piece that would need to be resolved as part of this architecture is how runtimes relate to the server. Likely it would make sense to modularize the current server architecture so that it can be used with any runtime, even one based on a consensus engine besides Comet. This means that eventually the Comet runtime would need to encapsulate the logic for starting Comet and the ABCI app.

### Testing

A mock implementation of all services should be provided in core to allow for unit testing of modules without needing to depend on any particular version of runtime. Mock services should allow tests to observe service behavior or provide a non-production implementation - for instance, memory stores can be used to mock stores.

For integration testing, a mock runtime implementation should be provided that allows composing different app modules together for testing without a dependency on runtime or Comet.

## Consequences

### Backwards Compatibility

Early versions of runtime modules should aim to support, as much as possible, modules built with the existing `AppModule`/`sdk.Context` framework. As the core API is more widely adopted, later runtime versions may choose to drop support and only support the core API plus any runtime module specific APIs (like specific versions of Comet).
The core module itself should strive to remain at the go semantic version `v1` as long as possible and follow design principles that allow for strong long-term support (LTS).

Older versions of the SDK can support modules built against core with adaptors that wrap core `AppModule` implementations in implementations of `AppModule` that conform to that version of the SDK's semantics, as well as by providing service implementations by wrapping `sdk.Context`.

### Positive

* better API encapsulation and separation of concerns
* more stable APIs
* more framework extensibility
* deterministic events and queries
* event listeners
* inter-module msg and query execution support
* more explicit support for forking and merging of module versions (including runtime)

### Negative

### Neutral

* modules will need to be refactored to use this API
* some replacements for `AppModule` functionality still need to be defined in follow-ups (type registration, commands, invariants, simulations) and this will take additional design work

## Further Discussions

* gas
* block headers
* upgrades
* registration of gogo proto and amino interface types
* cobra query and tx commands
* gRPC gateway
* crisis module invariants
* simulations

## References

* [ADR 033: Protobuf-based Inter-Module Communication](/sdk/latest/reference/architecture/adr-033-protobuf-inter-module-comm)
* [ADR 057: App Wiring](/sdk/latest/reference/architecture/adr-057-app-wiring)
* [ADR 055: ORM](/sdk/latest/reference/architecture/adr-055-orm)
* [ADR 028: Public Key Addresses](/sdk/latest/reference/architecture/adr-028-public-key-addresses)
* [Keeping Your Modules Compatible](https://go.dev/blog/module-compatibility)

# ADR 64: ABCI 2.0 Integration (Phase II)

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-064-abci-2.0

## Changelog

* 2023-01-17: Initial Draft (@alexanderbez)
* 2023-04-06: Add upgrading section (@alexanderbez)
* 2023-04-10: Simplify vote extension state persistence
(@alexanderbez)
* 2023-07-07: Revise vote extension state persistence (@alexanderbez)
* 2023-08-24: Revise vote extension power calculations and staking interface (@davidterpay)

## Status

ACCEPTED

## Abstract

This ADR outlines the continuation of the efforts to implement ABCI++ in the Cosmos SDK outlined in [ADR 060: ABCI 1.0 (Phase I)](/sdk/latest/reference/architecture/adr-060-abci-1.0). Specifically, this ADR outlines the design and implementation of ABCI 2.0, which includes `ExtendVote`, `VerifyVoteExtension` and `FinalizeBlock`.

## Context

ABCI 2.0 continues the promised updates from ABCI++, specifically three additional ABCI methods that the application can implement in order to gain further control, insight and customization of the consensus process, unlocking many novel use-cases that were previously not possible. We describe these three new methods below:

### `ExtendVote`

This method allows each validator process to extend the pre-commit phase of the CometBFT consensus process. Specifically, it allows the application to perform custom business logic that extends the pre-commit vote and supplies additional data as part of the vote, although they are signed separately by the same key.

The data, called vote extension, will be broadcast and received together with the vote it is extending, and will be made available to the application in the next height. Specifically, the proposer of the next block will receive the vote extensions in `RequestPrepareProposal.local_last_commit.votes`.

If the application does not have vote extension information to provide, it returns a 0-length byte array as its vote extension.

**NOTE**:

* Although each validator process submits its own vote extension, ONLY the *proposer* of the *next* block will receive all the vote extensions included as part of the pre-commit phase of the previous block.
This means only the proposer will implicitly have access to all the vote extensions, via `RequestPrepareProposal`, and not all vote extensions may be included, since a validator does not have to wait for all pre-commits, only 2/3.
* The pre-commit vote is signed independently from the vote extension.

### `VerifyVoteExtension`

This method allows validators to validate the vote extension data attached to each pre-commit message it receives. If the validation fails, the whole pre-commit message will be deemed invalid and ignored by CometBFT.

CometBFT uses `VerifyVoteExtension` when validating a pre-commit vote. Specifically, for a pre-commit, CometBFT will:

* Reject the message if it doesn't contain a signed vote AND a signed vote extension
* Reject the message if the vote's signature OR the vote extension's signature fails to verify
* Reject the message if `VerifyVoteExtension` was rejected by the app

Otherwise, CometBFT will accept the pre-commit message.

Note, this has important consequences on liveness, i.e., if vote extensions repeatedly cannot be verified by correct validators, CometBFT may not be able to finalize a block even if sufficiently many (+2/3) validators send pre-commit votes for that block. Thus, `VerifyVoteExtension` should be used with special care. CometBFT recommends that an application that detects an invalid vote extension SHOULD accept it in `ResponseVerifyVoteExtension` and ignore it in its own logic.

### `FinalizeBlock`

This method delivers a decided block to the application. The application must execute the transactions in the block deterministically and update its state accordingly. Cryptographic commitments to the block and transaction results, returned via the corresponding parameters in `ResponseFinalizeBlock`, are included in the header of the next block. CometBFT calls it when a new block is decided.
In other words, `FinalizeBlock` encapsulates the current ABCI execution flow of `BeginBlock`, one or more `DeliverTx`, and `EndBlock` into a single ABCI method. CometBFT will no longer execute requests for these legacy methods and will instead simply call `FinalizeBlock`.

## Decision

We will discuss changes to the Cosmos SDK to implement ABCI 2.0 in two distinct phases, `VoteExtensions` and `FinalizeBlock`.

### `VoteExtensions`

Similarly to `PrepareProposal` and `ProcessProposal`, we propose to introduce two new handlers that an application can implement in order to provide and verify vote extensions. We propose the following new handlers for applications to implement:

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type ExtendVoteHandler func(sdk.Context, abci.RequestExtendVote) abci.ResponseExtendVote

type VerifyVoteExtensionHandler func(sdk.Context, abci.RequestVerifyVoteExtension) abci.ResponseVerifyVoteExtension
```

An ephemeral context and state will be supplied to both handlers. The context will contain relevant metadata such as the block height and block hash. The state will be a cached version of the committed state of the application and will be discarded after the execution of the handler. This means that both handlers get a fresh state view and no changes made to it will be written.

If an application decides to implement `ExtendVoteHandler`, it must return a non-nil `ResponseExtendVote.VoteExtension`.

Recall, an implementation of `ExtendVoteHandler` does NOT need to be deterministic; however, given a set of vote extensions, `VerifyVoteExtensionHandler` must be deterministic, otherwise the chain may suffer from liveness faults.
In addition, recall CometBFT proceeds in rounds for each height, so if a decision cannot be made about a block proposal at a given height, CometBFT will proceed to the next round and thus will execute `ExtendVote` and `VerifyVoteExtension` again for the new round for each validator until 2/3 valid pre-commits can be obtained.

Given the broad scope of potential implementations and use-cases of vote extensions, and how to verify them, most applications should choose to implement the handlers through a single handler type, which can have any number of dependencies injected such as keepers. In addition, this handler type could contain some notion of volatile vote extension state management which would assist in vote extension verification. This state management could be ephemeral or could be some form of on-disk persistence.

Example:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// VoteExtensionHandler implements an Oracle vote extension handler.
type VoteExtensionHandler struct {
	cdc   Codec
	mk    MyKeeper
	state VoteExtState // This could be a map or a DB connection object
}

// ExtendVoteHandler can do something with h.mk and possibly h.state to create
// a vote extension, such as fetching a series of prices for supported assets.
func (h VoteExtensionHandler) ExtendVoteHandler(ctx sdk.Context, req abci.RequestExtendVote) abci.ResponseExtendVote {
	prices := GetPrices(ctx, h.mk.Assets())

	bz, err := EncodePrices(h.cdc, prices)
	if err != nil {
		panic(fmt.Errorf("failed to encode prices for vote extension: %w", err))
	}

	// store our vote extension at the given height
	//
	// NOTE: Vote extensions can be overridden since we can timeout in a round.
	SetPrices(h.state, req, bz)

	return abci.ResponseExtendVote{VoteExtension: bz}
}

// VerifyVoteExtensionHandler can do something with h.state and req to verify
// the req.VoteExtension field, such as ensuring the provided oracle prices are
// within some valid range of our prices.
func (h VoteExtensionHandler) VerifyVoteExtensionHandler(ctx sdk.Context, req abci.RequestVerifyVoteExtension) abci.ResponseVerifyVoteExtension {
	prices, err := DecodePrices(h.cdc, req.VoteExtension)
	if err != nil {
		log("failed to decode vote extension", "err", err)
		return abci.ResponseVerifyVoteExtension{Status: REJECT}
	}

	if err := ValidatePrices(h.state, req, prices); err != nil {
		log("failed to validate vote extension", "prices", prices, "err", err)
		return abci.ResponseVerifyVoteExtension{Status: REJECT}
	}

	// store updated vote extensions at the given height
	//
	// NOTE: Vote extensions can be overridden since we can timeout in a round.
	SetPrices(h.state, req, req.VoteExtension)

	return abci.ResponseVerifyVoteExtension{Status: ACCEPT}
}
```

#### Vote Extension Propagation & Verification

As mentioned previously, vote extensions for height `H` are only made available to the proposer at height `H+1` during `PrepareProposal`. However, in order to make vote extensions useful, all validators should have access to the agreed upon vote extensions at height `H` during `H+1`.

Since CometBFT includes all the vote extension signatures in `RequestPrepareProposal`, we propose that the proposing validator manually "inject" the vote extensions along with their respective signatures via a special transaction, `VoteExtsTx`, into the block proposal during `PrepareProposal`. The `VoteExtsTx` will be populated with a single `ExtendedCommitInfo` object which is received directly from `RequestPrepareProposal`. For convention, the `VoteExtsTx` transaction should be the first transaction in the block proposal, although chains can implement their own preferences.
For safety purposes, we also propose that the proposer itself verify all the vote extension signatures it receives in `RequestPrepareProposal`.

A validator, upon a `RequestProcessProposal`, will receive the injected `VoteExtsTx` which includes the vote extensions along with their signatures. If no such transaction exists, the validator MUST REJECT the proposal.

When a validator inspects a `VoteExtsTx`, it will evaluate each `SignedVoteExtension`. For each signed vote extension, the validator will generate the signed bytes and verify the signature. At least 2/3 valid signatures, based on voting power, must be received in order for the block proposal to be valid, otherwise the validator MUST REJECT the proposal.

In order to have the ability to validate signatures, `BaseApp` must have access to the `x/staking` module, since this module stores an index from consensus address to public key. However, we will avoid a direct dependency on `x/staking` and rely on an interface instead. In addition, the Cosmos SDK will expose a default signature verification method which applications can use:

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type ValidatorStore interface {
	GetPubKeyByConsAddr(context.Context, sdk.ConsAddress) (cmtprotocrypto.PublicKey, error)
}

// ValidateVoteExtensions is a function that an application can execute in
// ProcessProposal to verify vote extension signatures.
func (app *BaseApp) ValidateVoteExtensions(ctx sdk.Context, currentHeight int64, extCommit abci.ExtendedCommitInfo) error {
	var votingPower, totalVotingPower int64

	for _, vote := range extCommit.Votes {
		totalVotingPower += vote.Validator.Power

		if !vote.SignedLastBlock || len(vote.VoteExtension) == 0 {
			continue
		}

		valConsAddr := sdk.ConsAddress(vote.Validator.Address)

		pubKeyProto, err := app.valStore.GetPubKeyByConsAddr(ctx, valConsAddr)
		if err != nil {
			return fmt.Errorf("failed to get public key for validator %s: %w", valConsAddr, err)
		}

		if len(vote.ExtensionSignature) == 0 {
			return fmt.Errorf("received a non-empty vote extension with empty signature for validator %s", valConsAddr)
		}

		cmtPubKey, err := cryptoenc.PubKeyFromProto(pubKeyProto)
		if err != nil {
			return fmt.Errorf("failed to convert validator %X public key: %w", valConsAddr, err)
		}

		cve := cmtproto.CanonicalVoteExtension{
			Extension: vote.VoteExtension,
			Height:    currentHeight - 1, // the vote extension was signed in the previous height
			Round:     int64(extCommit.Round),
			ChainId:   app.GetChainID(),
		}

		extSignBytes, err := cosmosio.MarshalDelimited(&cve)
		if err != nil {
			return fmt.Errorf("failed to encode CanonicalVoteExtension: %w", err)
		}

		if !cmtPubKey.VerifySignature(extSignBytes, vote.ExtensionSignature) {
			return errors.New("received vote with invalid signature")
		}

		votingPower += vote.Validator.Power
	}

	// require at least 2/3 of the total voting power; multiply instead of
	// dividing to avoid integer division truncation
	if votingPower*3 < totalVotingPower*2 {
		return errors.New("not enough voting power for the vote extensions")
	}

	return nil
}
```

Once at least 2/3 signatures, by voting power, are received and verified, the validator can use the vote extensions to derive additional data or come to some decision based on the vote extensions.

> NOTE: It is very important to state, that neither the vote propagation technique
> nor the vote extension verification mechanism described above is required for
> applications to implement.
> In other words, a proposer is not required to verify
> and propagate vote extensions along with their signatures, nor are proposers
> required to verify those signatures. An application can implement its own
> PKI mechanism and use that to sign and verify vote extensions.

#### Vote Extension Persistence

In certain contexts, it may be useful or necessary for applications to persist data derived from vote extensions. In order to facilitate this use case, we propose to allow app developers to define a pre-Blocker hook which will be called at the very beginning of `FinalizeBlock`, i.e. before `BeginBlock` (see below).

Note, we cannot allow applications to directly write to the application state during `ProcessProposal` because during replay, CometBFT will NOT call `ProcessProposal`, which would result in an incomplete state view.

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (a MyApp) PreBlocker(ctx sdk.Context, req *abci.RequestFinalizeBlock) error {
	voteExts := GetVoteExtensions(ctx, req.Txs)

	// Process and perform some compute on vote extensions, storing any resulting
	// state.
	if err := a.processVoteExtensions(ctx, voteExts); err != nil {
		return err
	}

	return nil
}
```

### `FinalizeBlock`

The existing ABCI methods `BeginBlock`, `DeliverTx`, and `EndBlock` have existed since the dawn of ABCI-based applications. Thus, applications, tooling, and developers have grown used to these methods and their use-cases. Specifically, `BeginBlock` and `EndBlock` have grown to be pretty integral and powerful within ABCI-based applications. E.g. an application might want to run distribution and inflation related operations prior to executing transactions and then have staking related changes happen after executing all transactions.

We propose to keep `BeginBlock` and `EndBlock` within the SDK's core module interfaces only so application developers can continue to build against existing execution flows.
However, we will remove `BeginBlock`, `DeliverTx` and `EndBlock` from the SDK's `BaseApp` implementation and thus the ABCI surface area. What will then exist is a single `FinalizeBlock` execution flow. Specifically, in `FinalizeBlock` we will execute the application's `BeginBlock`, followed by execution of all the transactions, finally followed by execution of the application's `EndBlock`.

Note, we will still keep the existing transaction execution mechanics within `BaseApp`, but all notions of `DeliverTx` will be removed, i.e. `deliverState` will be replaced with `finalizeState`, which will be committed on `Commit`.

However, there are current parameters and fields that exist in the existing `BeginBlock` and `EndBlock` ABCI types, such as votes that are used in distribution and byzantine validators used in evidence handling. These parameters exist in the `FinalizeBlock` request type, and will need to be passed to the application's implementations of `BeginBlock` and `EndBlock`. This means the Cosmos SDK's core module interfaces will need to be updated to reflect these parameters. The easiest and most straightforward way to achieve this is to just pass `RequestFinalizeBlock` to `BeginBlock` and `EndBlock`. Alternatively, we can create dedicated proxy types in the SDK that reflect these legacy ABCI types, e.g. `LegacyBeginBlockRequest` and `LegacyEndBlockRequest`. Or, we can come up with new types and names altogether.

```go expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (app *BaseApp) FinalizeBlock(req abci.RequestFinalizeBlock) (*abci.ResponseFinalizeBlock, error) {
	ctx := ...

	if app.preBlocker != nil {
		ctx := app.finalizeBlockState.ctx
		rsp, err := app.preBlocker(ctx, req)
		if err != nil {
			return nil, err
		}
		if rsp.ConsensusParamsChanged {
			app.finalizeBlockState.ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx))
		}
	}

	beginBlockResp, err := app.beginBlock(req)
	if err != nil {
		return nil, err
	}
	appendBlockEventAttr(beginBlockResp.Events, "begin_block")

	txExecResults := make([]abci.ExecTxResult, 0, len(req.Txs))
	for _, tx := range req.Txs {
		result := app.runTx(runTxModeFinalize, tx)
		txExecResults = append(txExecResults, result)
	}

	endBlockResp, err := app.endBlock(app.finalizeBlockState.ctx)
	if err != nil {
		return nil, err
	}
	appendBlockEventAttr(endBlockResp.Events, "end_block")

	return &abci.ResponseFinalizeBlock{
		TxResults:             txExecResults,
		Events:                joinEvents(beginBlockResp.Events, endBlockResp.Events),
		ValidatorUpdates:      endBlockResp.ValidatorUpdates,
		ConsensusParamUpdates: endBlockResp.ConsensusParamUpdates,
		AppHash:               nil,
	}, nil
}
```

#### Events

Many tools, indexers and ecosystem libraries rely on the existence of `BeginBlock` and `EndBlock` events. Since CometBFT now only exposes `FinalizeBlockEvents`, we find that it will still be useful for these clients and tools to query for and rely on existing events, especially since applications will still define `BeginBlock` and `EndBlock` implementations.

In order to facilitate existing event functionality, we propose that all `BeginBlock` and `EndBlock` events have a dedicated `EventAttribute` with `key=block` and `value=begin_block|end_block`. The `EventAttribute` will be appended to each event in both `BeginBlock` and `EndBlock` events.

### Upgrading

CometBFT defines a consensus parameter, [`VoteExtensionsEnableHeight`](https://github.com/cometbft/cometbft/blob/v0.38.0-alpha.1/spec/abci/abci%2B%2B_app_requirements.md#abciparamsvoteextensionsenableheight), which specifies the height at which vote extensions are enabled and **required**.
If the value is set to zero, which is the default, then vote extensions are disabled and an application is not required to implement and use vote extensions.

However, if the value `H` is positive, at all heights greater than the configured height `H` vote extensions must be present (even if empty). When the configured height `H` is reached, `PrepareProposal` will not include vote extensions yet, but `ExtendVote` and `VerifyVoteExtension` will be called. Then, when reaching height `H+1`, `PrepareProposal` will include the vote extensions from height `H`.

It is very important to note, for all heights after H:

* Vote extensions CANNOT be disabled
* They are mandatory, i.e. all pre-commit messages sent MUST have an extension attached (even if empty)

When an application updates to the Cosmos SDK version with CometBFT v0.38 support, in the upgrade handler it must set the consensus parameter `VoteExtensionsEnableHeight` to the correct value. E.g. if an application is set to perform an upgrade at height `H`, then the value of `VoteExtensionsEnableHeight` should be set to any value `>=H+1`. This means that at the upgrade height, `H`, vote extensions will not be enabled yet, but at height `H+1` they will be enabled.

## Consequences

### Backwards Compatibility

ABCI 2.0 is naturally not backwards compatible with prior versions of the Cosmos SDK and CometBFT. For example, a node that sends `RequestFinalizeBlock` to an application that does not speak ABCI 2.0 will naturally fail. In addition, `BeginBlock`, `DeliverTx` and `EndBlock` will be removed from the application ABCI interfaces, and the inputs and outputs of the module interfaces will be modified accordingly.

### Positive

* `BeginBlock` and `EndBlock` semantics remain, so burden on application developers should be limited.
* Less communication overhead as multiple ABCI requests are condensed into a single request.
* Sets the groundwork for optimistic execution.
* Vote extensions allow for an entirely new set of application primitives to be developed, such as in-process price oracles and encrypted mempools. ### Negative * Some existing Cosmos SDK core APIs may need to be modified and thus broken. * Signature verification in `ProcessProposal` of 100+ vote extension signatures will add significant performance overhead to `ProcessProposal`. Granted, the signature verification process can happen concurrently using an error group with `GOMAXPROCS` goroutines. ### Neutral * Having to manually "inject" vote extensions into the block proposal during `PrepareProposal` is an awkward approach and takes up block space unnecessarily. * The requirement of `ResetProcessProposalState` can create a footgun for application developers if they're not careful, but this is necessary in order for applications to be able to commit state from vote extension computation. ## Further Discussions Future discussions include design and implementation of ABCI 3.0, which is a continuation of ABCI++ and the general discussion of optimistic execution. ## References * [ADR 060: ABCI 1.0 (Phase I)](/sdk/latest/reference/architecture/adr-060-abci-1.0) # ADR-065: Store V2 Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-065-store-v2 Feb 14, 2023: Initial Draft (@alexanderbez) ## Changelog * Feb 14, 2023: Initial Draft (@alexanderbez) ## Status DRAFT ## Abstract The storage and state primitives that Cosmos SDK based applications have used have by and large not changed since the launch of the inaugural Cosmos Hub. The demands and needs of Cosmos SDK based applications, from both developer and client UX perspectives, have evolved and outgrown the ecosystem since these primitives were first introduced. Over time as these applications have gained significant adoption, many critical shortcomings and flaws have been exposed in the state and storage primitives of the Cosmos SDK. 
In order to keep up with the evolving demands and needs of both clients and developers, a major overhaul to these primitives is necessary.

## Context

The Cosmos SDK provides application developers with various storage primitives for dealing with application state. Specifically, each module contains its own merkle commitment data structure -- an IAVL tree. In this data structure, a module can store and retrieve key-value pairs along with Merkle commitments, i.e. proofs, to those key-value pairs indicating that they do or do not exist in the global application state. This data structure is the base layer `KVStore`.

In addition, the SDK provides abstractions on top of this Merkle data structure. Namely, a root multi-store (RMS) is a collection of each module's `KVStore`. Through the RMS, the application can serve queries and provide proofs to clients, in addition to providing a module access to its own unique `KVStore` through the use of `StoreKey`, which is an OCAP primitive.

There are further layers of abstraction that sit between the RMS and the underlying IAVL `KVStore`. A `GasKVStore` is responsible for tracking gas IO consumption for state machine reads and writes. A `CacheKVStore` is responsible for providing a way to cache reads and buffer writes to make state transitions atomic, e.g. transaction execution or governance proposal execution.

There are a few critical drawbacks to these layers of abstraction and the overall design of storage in the Cosmos SDK:

* Since each module has its own IAVL `KVStore`, commitments are not [atomic](https://github.com/cosmos/cosmos-sdk/issues/14625)
  * Note, we can still allow modules to have their own IAVL `KVStore`, but the IAVL library will need to support the ability to pass a DB instance as an argument to various IAVL APIs.
* Since IAVL is responsible for both state storage and commitment, running an archive node becomes increasingly expensive as disk space grows exponentially.
* As the size of a network increases, various performance bottlenecks start to emerge in many areas such as query performance, network upgrades, state migrations, and general application performance.
* Developer UX is poor as it does not allow application developers to experiment with different types of approaches to storage and commitments, along with the complications of many layers of abstractions referenced above.

See the [Storage Discussion](https://github.com/cosmos/cosmos-sdk/discussions/13545) for more information.

## Alternatives

There was a previous attempt to refactor the storage layer described in [ADR-040](/sdk/v0.50/build/architecture/adr-040-storage-and-smt-state-commitments). However, this approach mainly stems from the shortcomings of IAVL and the various performance issues around it. While there was a (partial) implementation of [ADR-040](/sdk/v0.50/build/architecture/adr-040-storage-and-smt-state-commitments), it was never adopted for a variety of reasons, such as the reliance on using an SMT, which was more in a research phase, and some design choices that couldn't be fully agreed upon, such as the snapshotting mechanism that would result in massive state bloat.

## Decision

We propose to build upon some of the great ideas introduced in [ADR-040](/sdk/v0.50/build/architecture/adr-040-storage-and-smt-state-commitments), while being a bit more flexible with the underlying implementations and overall less intrusive. Specifically, we propose to:

* Separate the concerns of state commitment (**SC**), needed for consensus, and state storage (**SS**), needed for state machine and clients.
* Reduce layers of abstractions necessary between the RMS and underlying stores.
* Provide atomic module store commitments by providing a batch database object to core IAVL APIs.
* Reduce complexities in the `CacheKVStore` implementation while also improving performance\[3].
Furthermore, we will keep IAVL as the backing [commitment](https://cryptography.fandom.com/wiki/Commitment_scheme) store for the time being. While we might not fully settle on the use of IAVL in the long term, we do not have strong empirical evidence to suggest a better alternative. Given that the SDK provides interfaces for stores, it should be sufficient to change the backing commitment store in the future should evidence arise to warrant a better alternative. However, there is promising work being done to IAVL that should result in significant performance improvement \[1,2].

### Separating SS and SC

Separating SS and SC will allow us to optimize against primary use cases and access patterns to state. Specifically, the SS layer will be responsible for direct access to data in the form of (key, value) pairs, whereas the SC layer (IAVL) will be responsible for committing to data and providing Merkle proofs.

Note, the underlying physical storage database will be the same between both the SS and SC layers. So to avoid collisions between (key, value) pairs, both layers will be namespaced.

#### State Commitment (SC)

Given that the existing solution today acts as both SS and SC, we can simply repurpose it to act solely as the SC layer without any significant changes to access patterns or behavior. In other words, the entire collection of existing IAVL-backed module `KVStore`s will act as the SC layer. However, in order for the SC layer to remain lightweight and not duplicate a majority of the data held in the SS layer, we encourage node operators to keep tight pruning strategies.

#### State Storage (SS)

In the RMS, we will expose a *single* `KVStore` backed by the same physical database that backs the SC layer. This `KVStore` will be explicitly namespaced to avoid collisions and will act as the primary storage for (key, value) pairs.
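The namespacing of the shared physical database can be sketched as follows. The `ss/` and `sc/` prefixes and the map-backed database are illustrative stand-ins for whatever scheme the implementation ultimately chooses:

```go
package main

import "fmt"

// physicalDB stands in for the single physical database shared by both
// layers; in practice this would be RocksDB or a cosmos-db backend.
type physicalDB map[string][]byte

// ssKey and scKey namespace a logical key into the state storage and
// state commitment layers respectively, so the two layers never collide.
func ssKey(key []byte) string { return "ss/" + string(key) }
func scKey(key []byte) string { return "sc/" + string(key) }

func main() {
	db := physicalDB{}

	// The same logical key can live in both layers without clobbering:
	// raw (key, value) data in SS, a Merkle tree node in SC.
	db[ssKey([]byte("acc/alice"))] = []byte("raw value")
	db[scKey([]byte("acc/alice"))] = []byte("tree node")

	fmt.Println(len(db))
	fmt.Println(string(db[ssKey([]byte("acc/alice"))]))
}
```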
We will most likely continue to use `cosmos-db`, or some local interface, to allow for flexibility and iteration over preferred physical storage backends as research and benchmarking continue. However, we propose to hardcode the use of RocksDB as the primary physical storage backend.

Since the SS layer will be implemented as a `KVStore`, it will support the following functionality:

* Range queries
* CRUD operations
* Historical queries and versioning
* Pruning

The RMS will keep track of all buffered writes using a dedicated and internal `MemoryListener` for each `StoreKey`. For each block height, upon `Commit`, the SS layer will write all buffered (key, value) pairs under a [RocksDB user-defined timestamp](https://github.com/facebook/rocksdb/wiki/User-defined-Timestamp-%28Experimental%29) column family using the block height as the timestamp, which is an unsigned integer. This will allow a client to fetch (key, value) pairs at historical and current heights, and it makes iteration and range queries relatively performant since the timestamp is the key suffix.

Note, we choose not to take the more general approach of allowing any embedded key/value database, such as LevelDB or PebbleDB, with height-prefixed keys to version state, because most of these databases use variable-length keys, which would make actions like iteration and range queries less performant.

Since operators might want pruning strategies to differ between SS and SC, e.g. having a very tight pruning strategy in SC while having a looser pruning strategy for SS, we propose to introduce an additional pruning configuration, with parameters identical to what exists in the SDK today, and allow operators to control the pruning strategy of the SS layer independently of the SC layer. Note, the SC pruning strategy must be congruent with the operator's state sync configuration.
This is to allow state sync snapshots to execute successfully; otherwise, a snapshot could be triggered at a height that is not available in SC.

#### State Sync

The state sync process should be largely unaffected by the separation of the SC and SS layers. However, if a node syncs via state sync, the SS layer of the node will not have the state-synced height available, since the IAVL import process is not set up in a way that easily allows direct key/value insertion. A modification of the IAVL import process would be necessary to make the state sync height available. Note, this is not problematic for the state machine itself because, when a query is made, the RMS will automatically direct the query correctly (see [Queries](#queries)).

#### Queries

To consolidate query routing between the SC and SS layers, we propose a notion of a "query router" constructed in the RMS. This query router will be supplied to each `KVStore` implementation and will route queries to either the SC layer or the SS layer based on a few parameters. If `prove: true`, the query must be routed to the SC layer. Otherwise, if the query height is available in the SS layer, the query will be served from the SS layer. Otherwise, we fall back on the SC layer. If no height is provided, the SS layer will assume the latest height. The SS layer will store a reverse index to look up `LatestVersion -> timestamp(version)`, which is set on `Commit`.

#### Proofs

Since the SS layer is naturally a storage layer only, without any commitments to (key, value) pairs, it cannot provide Merkle proofs to clients during queries. Since the pruning strategy of the SC layer is configured by the operator, the RMS can route the query to the SC layer if the version exists and `prove: true`. Otherwise, the query will fall back to the SS layer without a proof.
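The routing rules above can be condensed into a small decision function. The types and names below are hypothetical sketches for illustration, not the proposed query router API:

```go
package main

import "fmt"

// Layer identifies which store layer serves a query.
type Layer string

const (
	SC Layer = "SC" // state commitment (IAVL); can produce Merkle proofs
	SS Layer = "SS" // state storage; fast key/value reads, no proofs
)

// route mirrors the rules above: proofs force SC; otherwise SS serves the
// height if it has it; otherwise we fall back to SC. hasSSHeight stands in
// for the SS layer's version lookup, and latestSS for its reverse index.
func route(prove bool, height, latestSS uint64, hasSSHeight func(uint64) bool) Layer {
	if prove {
		return SC
	}
	if height == 0 { // no height provided: SS assumes the latest height
		height = latestSS
	}
	if hasSSHeight(height) {
		return SS
	}
	return SC
}

func main() {
	has := func(h uint64) bool { return h >= 90 } // pretend SS pruned below height 90
	fmt.Println(route(true, 100, 120, has))  // SC
	fmt.Println(route(false, 100, 120, has)) // SS
	fmt.Println(route(false, 50, 120, has))  // SC (pruned from SS)
}
```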
We could explore the idea of using state snapshots to rebuild an in-memory IAVL tree in real time against a version closest to the one provided in the query. However, it is not clear what the performance implications of this approach would be.

### Atomic Commitment

We propose to modify the existing IAVL APIs to accept a batch DB object instead of relying on an internal batch object in `nodeDB`. Since each underlying IAVL `KVStore` shares the same DB in the SC layer, this will allow commits to be atomic.

Specifically, we propose to:

* Remove the `dbm.Batch` field from `nodeDB`
* Update the `SaveVersion` method of the `MutableTree` IAVL type to accept a batch object
* Update the `Commit` method of the `CommitKVStore` interface to accept a batch object
* Create a batch object in the RMS during `Commit` and pass this object to each `KVStore`
* Write the database batch after all stores have committed successfully

Note, this will require IAVL to be updated to neither rely on nor assume any batch being present during `SaveVersion`.

## Consequences

As a result of the new store v2 package, we should expect to see improved performance for queries and transactions due to the separation of concerns. We should also expect to see improved developer UX around experimentation with commitment schemes and storage backends for further performance, in addition to a reduced amount of abstraction around KVStores, making operations such as caching and state branching more intuitive. However, due to the proposed design, there are drawbacks around providing state proofs for historical queries.

### Backwards Compatibility

This ADR proposes changes to the storage implementation in the Cosmos SDK through an entirely new package. Interfaces may be borrowed and extended from existing types in `store`, but no existing implementations or interfaces will be broken or modified.
### Positive

* Improved performance of independent SS and SC layers
* Reduced layers of abstraction, making storage primitives easier to understand
* Atomic commitments for SC
* Redesign of storage types and interfaces will allow for greater experimentation, such as different physical storage backends and different commitment schemes for different application modules

### Negative

* Providing proofs for historical state is challenging

### Neutral

* Keeping IAVL as the primary commitment data structure, although drastic performance improvements are being made

## Further Discussions

### Module Storage Control

Many modules store secondary indexes that are typically used solely to support client queries but are not actually needed for the state machine's state transitions. This means that these indexes technically have no reason to exist in the SC layer at all, as they take up unnecessary space. It is worth exploring what an API would look like for allowing modules to indicate which (key, value) pairs they want persisted in the SC layer (implicitly including the SS layer as well), as opposed to persisting the (key, value) pair only in the SS layer.

### Historical State Proofs

It is not clear what the importance or demand is within the community for providing commitment proofs for historical state. While solutions can be devised, such as rebuilding trees on the fly based on state snapshots, it is not clear what the performance implications of such solutions are.

### Physical DB Backends

This ADR proposes usage of RocksDB to utilize user-defined timestamps as a versioning mechanism. However, other physical DB backends are available that may offer alternative ways to implement versioning while also providing performance improvements over RocksDB. E.g., PebbleDB supports MVCC timestamps as well, but we will need to explore how PebbleDB handles compaction and state growth over time.
## References

* \[1] [Link](https://github.com/cosmos/iavl/pull/676)
* \[2] [Link](https://github.com/cosmos/iavl/pull/664)
* \[3] [Link](https://github.com/cosmos/cosmos-sdk/issues/14990)

# ADR 068: Preblock

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-068-preblock

## Changelog

* Sept 13, 2023: Initial Draft

## Status

DRAFT

## Abstract

Introduce `PreBlock`, which runs before the `BeginBlocker` of all modules. It may modify consensus parameters, and those changes are visible to the state machine logic that follows.

## Context

When upgrading to SDK 0.47, the storage format for consensus parameters changed. In the migration block, however, `ctx.ConsensusParams()` is always `nil`, because the new code fails to load the old format. The parameters are supposed to be migrated by the `x/upgrade` module first, but unfortunately that migration happens in the `BeginBlocker` handler, which runs after the `ctx` is initialized. When we tried to solve this, we found that the `x/upgrade` module cannot modify the context to make the consensus parameters visible to the other modules: the context is passed by value, and the SDK team wants to keep it that way, since that is good for isolation between modules.

## Alternatives

The first alternative solution introduced a `MigrateModuleManager`, which currently includes only the `x/upgrade` module; baseapp would run its `BeginBlocker` before those of the other modules and reload the context's consensus parameters in between.

## Decision

We suggest a new lifecycle method.

### `PreBlocker`

There are two semantics around the new lifecycle method:

* It runs before the `BeginBlocker` of all modules
* It can modify consensus parameters in storage, and signal the caller through the return value.
When it returns `ConsensusParamsChanged=true`, the caller must refresh the consensus parameters in the finalize context:

```
app.finalizeBlockState.ctx = app.finalizeBlockState.ctx.WithConsensusParams(app.GetConsensusParams())
```

The new ctx must be passed to all the other lifecycle methods.

## Consequences

### Backwards Compatibility

### Positive

### Negative

### Neutral

## Further Discussions

## Test Cases

## References

* \[1] [Link](https://github.com/cosmos/cosmos-sdk/issues/16494)
* \[2] [Link](https://github.com/cosmos/cosmos-sdk/pull/16583)
* \[3] [Link](https://github.com/cosmos/cosmos-sdk/pull/17421)
* \[4] [Link](https://github.com/cosmos/cosmos-sdk/pull/17713)

# ADR 070: Unordered Transactions

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-070-unordered-account

## Changelog

* Dec 4, 2023: Initial Draft (@yihuang, @tac0turtle, @alexanderbez)
* Jan 30, 2024: Include section on deterministic transaction encoding
* Mar 18, 2025: Revise implementation to use Cosmos SDK KV Store and require unique timeouts per-address (@technicallyty)
* Apr 25, 2025: Add note about rejecting unordered txs with sequence values.

## Status

ACCEPTED Not Implemented

## Abstract

We propose a way to do replay-attack protection without enforcing the order of transactions and without requiring the use of monotonically increasing sequences. Instead, we propose the use of a time-based, ephemeral sequence.

## Context

Account sequence values serve to prevent replay attacks and to ensure transactions from the same sender are included in blocks and executed in sequential order. Unfortunately, this makes it difficult to reliably send many concurrent transactions from the same sender. Victims of such limitations include IBC relayers and crypto exchanges.

## Decision

We propose adding a boolean field `unordered` and a `google.protobuf.Timestamp` field `timeout_timestamp` to the transaction body.
Unordered transactions will bypass the traditional account sequence rules and instead follow the rules described below, without impacting traditional ordered transactions, which will follow the same sequence rules as before.

We will introduce new storage of time-based, ephemeral unordered sequences using the SDK's existing KV Store library. Specifically, we will leverage the existing `x/auth` KV store to store the unordered sequences. When an unordered transaction is included in a block, a concatenation of the `timeout_timestamp` and the sender's address bytes will be recorded to state (e.g. `542939323/`). In cases of multi-party signing, one entry per signer will be recorded to state. New transactions will be checked against the state to prevent duplicate submissions.

To prevent the state from growing indefinitely, we propose the following:

* Define an upper bound for the value of `timeout_timestamp` (e.g. 10 minutes).
* Add a `PreBlocker` method to `x/auth` that removes state entries with a `timeout_timestamp` earlier than the current block time.

### Transaction Format

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
message TxBody {
  ...
  bool unordered = 4;
  google.protobuf.Timestamp timeout_timestamp = 5;
}
```

### Replay Protection

We facilitate replay protection by storing the unordered sequence in the Cosmos SDK KV store. Upon transaction ingress, we check whether the transaction's unordered sequence already exists in state, or whether the TTL value is stale, i.e. before the current block time. If so, we reject it. Otherwise, we add the unordered sequence to the state. This section of the state belongs to the `x/auth` module.

The state is evaluated during `x/auth`'s `PreBlocker`. All transactions with an unordered sequence earlier than the current block time will be deleted.
```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (am AppModule) PreBlock(ctx context.Context) (appmodule.ResponsePreBlock, error) {
	err := am.accountKeeper.RemoveExpired(sdk.UnwrapSDKContext(ctx))
	if err != nil {
		return nil, err
	}
	return &sdk.ResponsePreBlock{
		ConsensusParamsChanged: false,
	}, nil
}
```

```golang expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package keeper

import (
	sdk "github.com/cosmos/cosmos-sdk/types"

	"cosmossdk.io/collections"
	"cosmossdk.io/core/store"
)

var (
	// just arbitrarily picking some upper bound number.
	unorderedSequencePrefix = collections.NewPrefix(90)
)

type AccountKeeper struct {
	// ...
	unorderedSequences collections.KeySet[collections.Pair[uint64, []byte]]
}

func (m *AccountKeeper) Contains(ctx sdk.Context, sender []byte, timestamp uint64) (bool, error) {
	return m.unorderedSequences.Has(ctx, collections.Join(timestamp, sender))
}

func (m *AccountKeeper) Add(ctx sdk.Context, sender []byte, timestamp uint64) error {
	return m.unorderedSequences.Set(ctx, collections.Join(timestamp, sender))
}

func (m *AccountKeeper) RemoveExpired(ctx sdk.Context) error {
	blkTime := ctx.BlockTime().UnixNano()
	it, err := m.unorderedSequences.Iterate(ctx, collections.NewPrefixUntilPairRange[uint64, []byte](uint64(blkTime)))
	if err != nil {
		return err
	}
	defer it.Close()

	keys, err := it.Keys()
	if err != nil {
		return err
	}

	for _, key := range keys {
		if err := m.unorderedSequences.Remove(ctx, key); err != nil {
			return err
		}
	}

	return nil
}
```

### AnteHandler Decorator

To facilitate bypassing nonce verification, we must modify the existing `IncrementSequenceDecorator` AnteHandler decorator to skip nonce verification when the transaction is marked as unordered.
```golang theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
func (isd IncrementSequenceDecorator) AnteHandle(ctx sdk.Context, tx sdk.Tx, simulate bool, next sdk.AnteHandler) (sdk.Context, error) {
	if tx.UnOrdered() {
		return next(ctx, tx, simulate)
	}

	// ...
}
```

We also introduce a new decorator to perform the unordered transaction verification.

```golang expandable theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
package ante

import (
	"slices"
	"strings"
	"time"

	sdk "github.com/cosmos/cosmos-sdk/types"
	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
	authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper"
	authsigning "github.com/cosmos/cosmos-sdk/x/auth/signing"

	errorsmod "cosmossdk.io/errors"
)

var _ sdk.AnteDecorator = (*UnorderedTxDecorator)(nil)

// UnorderedTxDecorator defines an AnteHandler decorator that is responsible for
// checking if a transaction is intended to be unordered and, if so, evaluates
// the transaction accordingly. An unordered transaction will bypass having its
// nonce incremented, which allows fire-and-forget transaction broadcasting,
// removing the necessity of ordering on the sender-side.
//
// The transaction sender must ensure that unordered=true and a timeout_timestamp
// is appropriately set. The AnteHandler will check that the transaction is not
// a duplicate and will evict it from state when the timeout is reached.
//
// The UnorderedTxDecorator should be placed as early as possible in the AnteHandler
// chain to ensure that during DeliverTx, the transaction is added to the unordered sequence state.
type UnorderedTxDecorator struct {
	// maxTimeoutDuration defines the maximum TTL a transaction can define.
	maxTimeoutDuration time.Duration
	txManager          authkeeper.UnorderedTxManager
}

func NewUnorderedTxDecorator(
	utxm authkeeper.UnorderedTxManager,
) *UnorderedTxDecorator {
	return &UnorderedTxDecorator{
		maxTimeoutDuration: 10 * time.Minute,
		txManager:          utxm,
	}
}

func (d *UnorderedTxDecorator) AnteHandle(
	ctx sdk.Context,
	tx sdk.Tx,
	_ bool,
	next sdk.AnteHandler,
) (sdk.Context, error) {
	if err := d.ValidateTx(ctx, tx); err != nil {
		return ctx, err
	}
	return next(ctx, tx, false)
}

func (d *UnorderedTxDecorator) ValidateTx(ctx sdk.Context, tx sdk.Tx) error {
	unorderedTx, ok := tx.(sdk.TxWithUnordered)
	if !ok || !unorderedTx.GetUnordered() {
		// If the transaction does not implement unordered capabilities or has the
		// unordered value as false, we bypass.
		return nil
	}

	blockTime := ctx.BlockTime()
	timeoutTimestamp := unorderedTx.GetTimeoutTimeStamp()
	if timeoutTimestamp.IsZero() || timeoutTimestamp.Unix() == 0 {
		return errorsmod.Wrap(
			sdkerrors.ErrInvalidRequest,
			"unordered transaction must have timeout_timestamp set",
		)
	}
	if timeoutTimestamp.Before(blockTime) {
		return errorsmod.Wrap(
			sdkerrors.ErrInvalidRequest,
			"unordered transaction has a timeout_timestamp that has already passed",
		)
	}
	if timeoutTimestamp.After(blockTime.Add(d.maxTimeoutDuration)) {
		return errorsmod.Wrapf(
			sdkerrors.ErrInvalidRequest,
			"unordered tx ttl exceeds %s",
			d.maxTimeoutDuration.String(),
		)
	}

	execMode := ctx.ExecMode()
	if execMode == sdk.ExecModeSimulate {
		return nil
	}

	signerAddrs, err := getSigners(tx)
	if err != nil {
		return err
	}

	for _, signer := range signerAddrs {
		contains, err := d.txManager.Contains(ctx, signer, uint64(unorderedTx.GetTimeoutTimeStamp().Unix()))
		if err != nil {
			return errorsmod.Wrap(
				sdkerrors.ErrIO,
				"failed to check contains",
			)
		}
		if contains {
			return errorsmod.Wrapf(
				sdkerrors.ErrInvalidRequest,
				"tx is duplicated for signer %x",
				signer,
			)
		}

		if err := d.txManager.Add(ctx, signer, uint64(unorderedTx.GetTimeoutTimeStamp().Unix())); err != nil {
			return errorsmod.Wrap(
				sdkerrors.ErrIO,
				"failed to add unordered sequence to state",
			)
		}
	}

	return nil
}

func getSigners(tx sdk.Tx) ([][]byte, error) {
	sigTx, ok := tx.(authsigning.SigVerifiableTx)
	if !ok {
		return nil, errorsmod.Wrap(sdkerrors.ErrTxDecode, "invalid tx type")
	}
	return sigTx.GetSigners()
}
```

### Unordered Sequences

Unordered sequences provide a simple, straightforward mechanism to protect against both transaction malleability and transaction duplication. It is important to note that the unordered sequence must still be unique. However, the value is not required to be strictly increasing as with regular sequences, and the order in which the node receives the transactions no longer matters.

Clients can build unordered transactions similarly to the code below. Note the per-transaction nanosecond offset: each transaction from the same signer needs a unique timeout timestamp.

```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
for i, tx := range txs {
	tx.SetUnordered(true)
	tx.SetTimeoutTimestamp(time.Now().Add(time.Duration(i+1) * time.Nanosecond))
}
```

We will reject transactions that have both a sequence and an unordered timeout set. We do this to avoid assuming the intent of the user.

### State Management

The storage of unordered sequences will be facilitated using the Cosmos SDK's KV Store service.

## Note On Previous Design Iteration

The previous iteration of unordered transactions used an ad-hoc state-management system that posed severe risks and a vector for duplicated tx processing. It relied on graceful app closure, which would flush the current state of the unordered sequence mapping. If 2/3 of the network crashed and the graceful closure did not trigger, the system would lose track of all sequences in the mapping, allowing those transactions to be replayed. The implementation proposed in the updated version of this ADR solves this by writing directly to the Cosmos KV Store.
While this is less performant, for the initial implementation we opted for the safer path and postponed performance optimizations until we have more data on real-world impacts and a more battle-tested approach to optimization.

Additionally, the previous iteration relied on hashes to create what we call an "unordered sequence." There are known issues with transaction malleability in Cosmos SDK signing modes. This ADR avoids that problem by enforcing single-use unordered nonces, instead of deriving nonces from bytes in the transaction.

## Consequences

### Positive

* Support unordered transaction inclusion, enabling the ability to "fire and forget" many transactions at once.

### Negative

* Requires additional storage overhead.
* The requirement of unique timestamps per transaction causes a small amount of additional overhead for clients. Clients must ensure each transaction's timeout timestamp is different. However, nanosecond differentials suffice.
* Usage of the Cosmos SDK KV store is slower in comparison to using a non-merklized store or ad-hoc methods, and block times may slow down as a result.

## References

* [Link](https://github.com/cosmos/cosmos-sdk/issues/13009)

# Cosmos SDK Transaction Malleability Risk Review and Recommendations

Source: https://docs.cosmos.network/sdk/latest/reference/architecture/adr-076-tx-malleability

## Changelog

* 2025-03-10: Initial draft (@aaronc)

## Status

PROPOSED: Not Implemented

## Abstract

Several encoding and sign-mode related issues have historically made it possible for Cosmos SDK transactions to be re-encoded in such a way as to change their hash (and in rare cases, their meaning) without invalidating the signature. This document details these cases, their potential risks, and the extent to which they have been addressed, and provides recommendations for future improvements.
## Review

One naive assumption about Cosmos SDK transactions is that hashing the raw bytes of a submitted transaction creates a safe unique identifier for the transaction. In reality, there are multiple ways in which transactions could be manipulated to create different transaction bytes (and as a result different hashes) that still pass signature verification.

This document attempts to enumerate the various potential transaction "malleability" risks that we have identified and the extent to which they have or have not been addressed in various sign modes. We also identify vulnerabilities that could be introduced if developers make changes in the future without careful consideration of the complexities involved with transaction encoding, sign modes, and signatures.

### Risks Associated with Malleability

The malleability of transactions poses the following potential risks to end users:

* unsigned data could get added to transactions and be processed by state machines
* clients often rely on transaction hashes for checking transaction status, but whether or not submitted transaction hashes match processed transaction hashes depends primarily on good network actors rather than fundamental protocol guarantees
* transactions could potentially get executed more than once (faulty replay protection)

If a client generates a transaction, keeps a record of its hash, and then attempts to query nodes to check the transaction's status, this process may falsely conclude that the transaction had not been processed if an intermediary processor decoded and re-encoded the transaction with different encoding rules (either maliciously or unintentionally). As long as no malleability is present in the signature bytes themselves, clients *should* query transactions by signature instead of hash. Not being cognizant of this risk may lead clients to submit the same transaction multiple times if they believe that earlier transactions had failed or gotten lost in processing.
This could be an attack vector against users if wallets primarily query transactions by hash.

If the state machine were to rely on transaction hashes as a replay mechanism itself, this would be faulty and would not provide the intended replay protection. Instead, the state machine should rely on deterministic representations of transactions rather than the raw encoding, or on other nonces, if it wants to provide replay protection that doesn't rely on a monotonically increasing account sequence number.

### Sources of Malleability

#### Non-deterministic Protobuf Encoding

Cosmos SDK transactions are encoded using protobuf binary encoding when they are submitted to the network. Protobuf binary is not inherently a deterministic encoding, meaning that the same logical payload could have several valid byte representations. In a basic sense, this means that protobuf in general can be decoded and re-encoded to produce a different byte stream (and thus a different hash) without changing the logical meaning of the bytes.

[ADR 027: Deterministic Protobuf Serialization](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-027-deterministic-protobuf-serialization.md) describes in detail what needs to be done to produce what we consider a "canonical", deterministic protobuf serialization. Briefly, the following sources of malleability at the encoding level have been identified and are addressed by this specification:

* fields can be emitted in any order
* default field values can be included or omitted, and this doesn't change meaning unless `optional` is used
* `repeated` fields of scalars may use packed or "regular" encoding
* `varint`s can include extra ignored bits
* extra fields may be added and are usually simply ignored by decoders ([ADR 020](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-020-protobuf-transaction-encoding.md#unknown-field-filtering) specifies that in general such extra fields should cause messages and transactions to be rejected)

When using `SIGN_MODE_DIRECT`, none of the above malleabilities will be tolerated because:

* signatures of messages and extensions must be done over the raw encoded bytes of those fields
* the outer tx envelope (`TxRaw`) must follow ADR 027 rules or be rejected

Transactions signed with `SIGN_MODE_LEGACY_AMINO_JSON`, however, have no way of protecting against the above malleabilities because what is signed is a JSON representation of the logical contents of the transaction. These logical contents could have any number of valid protobuf binary encodings, so in general there are no guarantees regarding transaction hash with Amino JSON signing.

In addition to being aware of the general non-determinism of protobuf binary, developers need to pay special attention to make sure that unknown protobuf fields get rejected when developing new capabilities related to protobuf transactions. The protobuf serialization format was designed with the assumption that unknown data known to encoders could safely be ignored by decoders. This assumption may have been fairly safe within the walled garden of Google's centralized infrastructure. However, in distributed blockchain systems, this assumption is generally unsafe. If a newer client encodes a protobuf message with data intended for a newer server, it is not safe for an older server to simply ignore and discard instructions that it does not understand. These instructions could include critical information that the transaction signer is relying upon, and just assuming that it is unimportant is not safe.
[ADR 020](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-020-protobuf-transaction-encoding.md#unknown-field-filtering) specifies some provisions for "non-critical" fields which can safely be ignored by older servers. In practice, I have not seen any valid usages of this. It is something in the design that maintainers should be aware of, but it may not be necessary or even 100% safe.

#### Non-deterministic Value Encoding

In addition to the non-determinism present in protobuf binary itself, some protobuf field data is encoded using a micro-format which itself may not be deterministic. Consider, for instance, integer or decimal encoding. Some decoders may allow for the presence of leading or trailing zeros without changing the logical meaning, e.g. `00100` vs `100` or `100.00` vs `100`. So if a sign mode encodes numbers deterministically, but decoders accept multiple representations, a user may sign over the value `100` while `0100` gets encoded. This would be possible with Amino JSON to the extent that the integer decoder accepts leading zeros. I believe the current `Int` implementation will reject this; however, it is probably possible to encode an octal or hexadecimal representation in the transaction whereas the user signs over a decimal integer.

#### Signature Encoding

Signatures themselves are encoded using a micro-format specific to the signature algorithm being used, and sometimes these micro-formats allow for non-determinism (multiple valid bytes for the same signature). Most of the signature algorithms supported by the SDK should reject non-canonical bytes in their current implementations. However, the `Multisignature` protobuf type uses normal protobuf encoding, and there is no check as to whether the decoded bytes followed canonical ADR 027 rules. Therefore, multisig transactions can have malleability in their signatures.
Any new or custom signature algorithm must make sure that it rejects non-canonical bytes; otherwise, even with `SIGN_MODE_DIRECT`, there can be transaction hash malleability from re-encoding signatures with a non-canonical representation.

#### Fields not covered by Amino JSON

Another area that needs to be addressed carefully is the discrepancy between `AminoSignDoc` (see [`aminojson.proto`](https://github.com/cosmos/cosmos-sdk/blob/v0.50.10/x/tx/signing/aminojson/internal/aminojsonpb/aminojson.proto)) used for `SIGN_MODE_LEGACY_AMINO_JSON` and the actual contents of `TxBody` and `AuthInfo` (see [`tx.proto`](https://github.com/cosmos/cosmos-sdk/blob/v0.50.10/proto/cosmos/tx/v1beta1/tx.proto)). If fields get added to `TxBody` or `AuthInfo`, they must either have a corresponding representation in `AminoSignDoc`, or Amino JSON signatures must be rejected when those new fields are set. Ensuring this is a highly manual process, and developers could easily make the mistake of updating `TxBody` or `AuthInfo` without paying any attention to the implementation of `GetSignBytes` for Amino JSON. This is a critical vulnerability in which unsigned content can get into the transaction while signature verification still passes.

## Sign Mode Summary and Recommendations

The sign modes officially supported by the SDK are `SIGN_MODE_DIRECT`, `SIGN_MODE_TEXTUAL`, `SIGN_MODE_DIRECT_AUX`, and `SIGN_MODE_LEGACY_AMINO_JSON`. `SIGN_MODE_LEGACY_AMINO_JSON` is commonly used by wallets and is currently the only sign mode supported on Nano Ledger hardware devices (although `SIGN_MODE_TEXTUAL` was designed to also support hardware devices). `SIGN_MODE_DIRECT` is the simplest sign mode and its usage is also fairly common. `SIGN_MODE_DIRECT_AUX` is a variant of `SIGN_MODE_DIRECT` that can be used by auxiliary signers in a multi-signer transaction by those signers who are not paying gas.
`SIGN_MODE_TEXTUAL` was intended as a replacement for `SIGN_MODE_LEGACY_AMINO_JSON`, but as far as we know it has not been adopted by any clients yet and thus is not in active use.

All known malleability concerns have been addressed in the current implementation of `SIGN_MODE_DIRECT`. The only known malleability that could occur with a transaction signed with `SIGN_MODE_DIRECT` would need to be in the signature bytes themselves. Since signatures are not signed over, it is impossible for any sign mode to address this directly; instead, signature algorithms need to take care to reject any non-canonically encoded signature bytes to prevent malleability. For the known malleability of the `Multisignature` type, we should make sure that any valid signatures were encoded following canonical ADR 027 rules when doing signature verification.

`SIGN_MODE_DIRECT_AUX` provides the same level of safety as `SIGN_MODE_DIRECT` because

* the raw encoded `TxBody` bytes are signed over in `SignDocDirectAux`, and
* a transaction using `SIGN_MODE_DIRECT_AUX` still requires the primary signer to sign the transaction with `SIGN_MODE_DIRECT`

`SIGN_MODE_TEXTUAL` also provides the same level of safety as `SIGN_MODE_DIRECT` because the hash of the raw encoded `TxBody` and `AuthInfo` bytes are signed over.

Unfortunately, the vast majority of unaddressed malleability risks affect `SIGN_MODE_LEGACY_AMINO_JSON`, and this sign mode is still commonly used.
It is recommended that the following improvements be made to Amino JSON signing: * hashes of `TxBody` and `AuthInfo` should be added to `AminoSignDoc` so that encoding-level malleability is addressed * when constructing `AminoSignDoc`, the [protoreflect](https://pkg.go.dev/google.golang.org/protobuf/reflect/protoreflect) API should be used to ensure that no fields have been set in `TxBody` or `AuthInfo` which lack a mapping in `AminoSignDoc` * fields present in `TxBody` or `AuthInfo` that are not present in `AminoSignDoc` (such as extension options) should be added to `AminoSignDoc` if possible ## Testing To test that transactions are resistant to malleability, we can develop a test suite that runs against all sign modes and attempts to manipulate transaction bytes in the following ways: * changing protobuf encoding by * reordering fields * setting default values * adding extra bits to varints, or * setting new unknown fields * modifying integer and decimal values encoded as strings with leading or trailing zeros Whenever any of these manipulations is done, we should observe that the sign doc bytes for the sign mode being tested also change, meaning that the corresponding signatures will also have to change. In the case of Amino JSON, we should also develop tests to ensure that signing fails if any `TxBody` or `AuthInfo` field not supported by `AminoSignDoc` is set. In the general case of transaction decoding, we should have unit tests to ensure that * any `TxRaw` bytes which do not follow ADR 027 canonical encoding cause decoding to fail, and * any top-level transaction elements including `TxBody`, `AuthInfo`, public keys, and messages which have unknown fields set cause the transaction to be rejected (this ensures that ADR 020 unknown field filtering is properly applied) For each supported signature algorithm, there should also be unit tests to ensure that non-canonically encoded signatures are rejected.
## References * [ADR 027: Deterministic Protobuf Serialization](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-027-deterministic-protobuf-serialization.md) * [ADR 020](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-020-protobuf-transaction-encoding.md#unknown-field-filtering) * [`aminojson.proto`](https://github.com/cosmos/cosmos-sdk/blob/v0.50.10/x/tx/signing/aminojson/internal/aminojsonpb/aminojson.proto) * [`tx.proto`](https://github.com/cosmos/cosmos-sdk/blob/v0.50.10/proto/cosmos/tx/v1beta1/tx.proto) # RFC Creation Process Source: https://docs.cosmos.network/sdk/latest/reference/rfc/PROCESS 1. Copy the `rfc-template.md` file. Use the following filename pattern: `rfc-next_number-title.md` 2. Create a draft Pull Request if you want early feedback. 3. Make sure the context and the solution are clear and well documented. 4. Add an entry to the list in the [README](/sdk/v0.50/build/rfc/README) file. 5. Create a Pull Request to propose the new RFC. ## What is an RFC? An RFC is a sort of async whiteboarding session. It is meant to replace the need for a distributed team to come together to make a decision. Currently, the Cosmos SDK team and contributors are distributed around the world. The team conducts working groups for synchronous discussion, and an RFC can be used to capture the discussion for a wider audience to better understand the changes that are coming to the software. The main distinction the Cosmos SDK draws between RFCs and ADRs is that an RFC is used to reach consensus and circulate information about a potential change or feature, while an ADR is used when there is already consensus on the feature or change and extensive discussion is not needed. An ADR articulates the changes and requires less communication. ## RFC life cycle RFC creation is an **iterative** process.
An RFC is meant as a distributed collaboration session; it may gather many comments and usually arises when there is no working group or synchronous communication. 1. Proposals could start with a new GitHub Issue, be the result of existing Issues, or arise from a discussion. 2. An RFC doesn't have to arrive on `main` with an *accepted* status in a single PR. If the motivation is clear and the solution is sound, we SHOULD be able to merge it and keep a *proposed* status. It's preferable to take an iterative approach rather than leave long Pull Requests unmerged. 3. If a *proposed* RFC is merged, then it should clearly document outstanding issues either in the RFC document notes or in a GitHub Issue. 4. The PR SHOULD always be merged. In the case of a faulty RFC, we still prefer to merge it with a *rejected* status. The only time an RFC SHOULD NOT be merged is if the author abandons it. 5. Merged RFCs SHOULD NOT be pruned. 6. If there is consensus and enough feedback, the RFC can be accepted. > Note: An RFC is written when there is no working group or team session on the problem. RFCs are meant as a distributed whiteboarding session. If there is a working group on the proposal, there is no need for an RFC, as synchronous whiteboarding is already taking place. ### RFC status Status has two components: ```text {CONSENSUS STATUS} ``` #### Consensus Status ```text DRAFT -> PROPOSED -> LAST CALL yyyy-mm-dd -> ACCEPTED | REJECTED -> SUPERSEDED by ADR-xxx \ | \ | v v ABANDONED ``` * `DRAFT`: \[optional] an ADR which is work in progress and not yet ready for general review. This is used to present early work and get early feedback via a Draft Pull Request. * `PROPOSED`: an ADR covering a full solution architecture that is still under review; project stakeholders haven't reached agreement yet.
* `LAST CALL yyyy-mm-dd`: \[optional] a clear notification that we are close to accepting the proposal. Changing a status to `LAST CALL` means that social consensus (of Cosmos SDK maintainers) has been reached, and we still want to give the community time to react or analyze. * `ACCEPTED`: an ADR that represents a currently implemented or to-be-implemented architecture design. * `REJECTED`: an ADR can go from PROPOSED or ACCEPTED to REJECTED if project stakeholders decide so. * `SUPERSEDED by ADR-xxx`: an ADR which has been superseded by a new ADR. * `ABANDONED`: the ADR is no longer pursued by the original authors. ## Language used in RFC * The background/goal should be written in the present tense. * Avoid using the first person. # Requests for Comments Source: https://docs.cosmos.network/sdk/latest/reference/rfc/README A Request for Comments (RFC) is a record of discussion on an open-ended topic related to the design and implementation of the Cosmos SDK, for which no immediate decision is required. The purpose of an RFC is to serve as a historical record of a high-level discussion that might otherwise only be recorded in an ad-hoc way (for example, via gists or Google docs) that is difficult for someone to discover after the fact. An RFC *may* give rise to more specific architectural *decisions* for the Cosmos SDK, but those decisions must be recorded separately in [Architecture Decision Records (ADR)](/sdk/latest/reference/architecture/README). As a rule of thumb, if you can articulate a specific question that needs to be answered, write an ADR. If you need to explore the topic and get input from others to know what questions need to be answered, an RFC may be appropriate.
## RFC Content An RFC should provide: * A **changelog**, documenting when and how the RFC has changed. * An **abstract**, briefly summarizing the topic so the reader can quickly tell whether it is relevant to their interest. * Any **background** a reader will need to understand and participate in the substance of the discussion (links to other documents are fine here). * The **discussion**, the primary content of the document. The [rfc-template.md](/sdk/v0.50/build/rfc/rfc-template) file includes placeholders for these sections. ## Table of Contents * [RFC-001: Tx Validation](/sdk/v0.50/build/rfc/rfc-001-tx-validation) # RFC 001: Transaction Validation Source: https://docs.cosmos.network/sdk/latest/reference/rfc/rfc-001-tx-validation 2023-03-12: Proposed ## Changelog * 2023-03-12: Proposed ## Background Transaction validation is crucial to a functioning state machine. Within the Cosmos SDK there are two validation flows: one outside the message server and one within it. The flow outside the message server is the `ValidateBasic` function. It is called in the antehandler on both `CheckTx` and `DeliverTx`. There is overhead and sometimes duplication of validation between these two flows. This extra validation provides an additional check before entering the mempool. With the deprecation of [`GetSigners`](https://github.com/cosmos/cosmos-sdk/issues/11275) we have the option to remove [sdk.Msg](https://github.com/cosmos/cosmos-sdk/blob/16a5404f8e00ddcf8857c8a55dca2f7c109c29bc/types/tx_msg.go#L16) and the `ValidateBasic` function. With the separation of CometBFT and Cosmos SDK, there is a lack of control over which transactions get broadcast and included in a block. The extra validation in the antehandler is meant to help in this case. In most cases the transaction is, or should be, simulated against a node for validation. With this flow, transactions will be treated the same.
## Proposal The acceptance of this RFC would move validation within `ValidateBasic` to the message server in modules, and update tutorials and docs to remove mention of using `ValidateBasic` in favour of handling all validation for a message where it is executed. We can and will still support the `ValidateBasic` function for users and provide an extension interface of the function once `sdk.Msg` is deprecated. > Note: This is how messages are handled in VMs like Ethereum and CosmWasm. ### Consequences The consequence of updating the transaction flow is that transactions which may previously have failed the `ValidateBasic` flow will now be included in a block and fees charged. # RFC Template Source: https://docs.cosmos.network/sdk/latest/reference/rfc/rfc-template ## Changelog * `{date}`: `{changelog}` ## Background > The next section is the "Background" section. This section should be at least two paragraphs and can take up to a whole > page in some cases. The guiding goal of the background section is: as a newcomer to this project (new employee, team > transfer), can I read the background section and follow any links to get the full context of why this change is > necessary? > > If you can't show a random engineer the background section and have them acquire nearly full context on the necessity > for the RFC, then the background section is not full enough. To help achieve this, link to prior RFCs, discussions, and > more here as necessary to provide context so you don't have to simply repeat yourself. ## Proposal > The next required section is "Proposal" or "Goal". Given the background above, this section proposes a solution. > This should be an overview of the "how" for the solution, but for details further sections will be used. ## Abandoned Ideas (Optional) > As RFCs evolve, it is common that there are ideas that are abandoned.
Rather than simply deleting them from the > document, you should try to organize them into sections that make it clear they're abandoned while explaining why they > were abandoned. > > When sharing your RFC with others or having someone look back on your RFC in the future, it is common to walk the same > path and fall into the same pitfalls that we've since matured from. Abandoned ideas are a way to recognize that path > and explain the pitfalls and why they were abandoned. ## Decision > This section describes alternative designs to the chosen design. This section > is important and if an ADR does not have any alternatives then it should be > considered that the ADR was not thought through. ## Consequences (optional) > This section describes the resulting context, after applying the decision. All > consequences should be listed here, not just the "positive" ones. A particular > decision may have positive, negative, and neutral consequences, but all of them > affect the team and project in the future. ### Backwards Compatibility > All ADRs that introduce backwards incompatibilities must include a section > describing these incompatibilities and their severity. The ADR must explain > how the author proposes to deal with these incompatibilities. ADR submissions > without a sufficient backwards compatibility treatise may be rejected outright. ### Positive > `{positive consequences}` ### Negative > `{negative consequences}` ### Neutral > `{neutral consequences}` ### References > Links to external materials needed to follow the discussion may be added here. > > In addition, if the discussion in a request for comments leads to any design > decisions, it may be helpful to add links to the ADR documents here after the > discussion has settled. ## Discussion > This section contains the core of the discussion. 
> > There is no fixed format for this section, but ideally changes to this > section should be updated before merging to reflect any discussion that took > place on the PR that made those changes. # Specifications Source: https://docs.cosmos.network/sdk/latest/reference/spec/README This directory contains specifications for the modules of the Cosmos SDK as well as Interchain Standards (ICS) and other specifications. Cosmos SDK applications hold their state in a Merkle store. Updates to the store may be made during transactions and at the beginning and end of every block. ## Cosmos SDK specifications * [Store](/sdk/v0.50/learn/advanced/store) - The core Merkle store that holds the state. * [Bech32](/sdk/v0.50/build/spec/addresses/bech32) - Address format for Cosmos SDK applications. ## Modules specifications Go to the [module directory](/sdk/latest/modules/modules). ## CometBFT For details on the underlying blockchain and p2p protocols, see the [CometBFT specification](https://github.com/cometbft/cometbft/tree/main/spec). # Specification of Modules Source: https://docs.cosmos.network/sdk/latest/reference/spec/SPEC_MODULE This file outlines the common structure for specifications within this directory. ## Tense For consistency, specs should be written in passive present tense. ## Pseudo-Code Generally, pseudo-code should be minimized throughout the spec. Often, simple bulleted lists describing a function's operations are sufficient and should be considered preferable. In certain instances, due to the complex nature of the functionality being described, pseudo-code may be the most suitable form of specification.
In these cases use of pseudo-code is permissible, but it should be presented in a concise manner, ideally restricted to only the complex element as part of a larger description. ## Common Layout The following generalized `README` structure should be used to break down specifications for modules. The following list is nonbinding and all sections are optional. * `# {Module Name}` - overview of the module * `## Concepts` - describe specialized concepts and definitions used throughout the spec * `## State` - specify and describe structures expected to be marshaled into the store, and their keys * `## State Transitions` - standard state transition operations triggered by hooks, messages, etc. * `## Messages` - specify message structure(s) and expected state machine behavior(s) * `## Begin Block` - specify any begin-block operations * `## End Block` - specify any end-block operations * `## Hooks` - describe available hooks to be called by/from this module * `## Events` - list and describe event tags used * `## Client` - list and describe CLI commands and gRPC and REST endpoints * `## Params` - list all module parameters, their types (in JSON) and examples * `## Future Improvements` - describe future improvements of this module * `## Tests` - acceptance tests * `## Appendix` - supplementary details referenced elsewhere within the spec ### Notation for key-value mapping Within `## State`, the notation `->` should be used to describe key-to-value mapping: ```text key -> value ``` To represent byte concatenation, the `|` may be used.
In addition, encoding type may be specified, for example: ```text 0x00 | addressBytes | address2Bytes -> amino(value_object) ``` Additionally, index mappings may be specified by mapping to the `nil` value, for example: ```text 0x01 | address2Bytes | addressBytes -> nil ``` # What is an SDK standard? Source: https://docs.cosmos.network/sdk/latest/reference/spec/SPEC_STANDARD An SDK standard is a design document describing a particular protocol, standard, or feature expected to be used by the Cosmos SDK. An SDK standard should list the desired properties of the standard, explain the design rationale, and provide a concise but comprehensive technical specification. The primary author is responsible for pushing the proposal through the standardization process, soliciting input and support from the community, and communicating with relevant stakeholders to ensure (social) consensus. ## Sections An SDK standard consists of: * a synopsis, * overview and basic concepts, * technical specification, * history log, and * copyright notice. All top-level sections are required. References should be included inline as links, or tabulated at the bottom of the section if necessary. Included subsections should be listed in the order specified below. ### Table Of Contents Provide a table of contents at the top of the file to help readers. ### Synopsis The document should include a brief (\~200 word) synopsis providing a high-level description of and rationale for the specification. ### Overview and basic concepts This section should include a motivation subsection and a definition subsection if required: * *Motivation* - A rationale for the existence of the proposed feature, or the proposed changes to an existing feature.
* *Definitions* - A list of new terms or concepts used in the document or required to understand it. ### System model and properties This section should include an assumptions subsection if any, the mandatory properties subsection, and a dependencies subsection. Note that the first two subsections are tightly coupled: how to enforce a property will depend directly on the assumptions made. This subsection is important to capture the interactions of the specified feature with the "rest of the world," i.e., with other features of the ecosystem. * *Assumptions* - A list of any assumptions made by the feature designer. It should capture which features are used by the feature under specification, and what we expect from them. * *Properties* - A list of the desired properties or characteristics of the feature specified, and the expected effects or failures when the properties are violated. Where relevant, it can also include a list of properties that the feature does not guarantee. * *Dependencies* - A list of the features that use the feature under specification, and how. ### Technical specification This is the main section of the document, and should contain protocol documentation, design rationale, required references, and technical details where appropriate. The section may have any or all of the following subsections, as appropriate to the particular specification. The API subsection is especially encouraged when appropriate. * *API* - A detailed description of the feature's API. * *Technical Details* - All technical details including syntax, diagrams, semantics, protocols, data structures, algorithms, and pseudocode as appropriate. The technical specification should be detailed enough that separate correct implementations of the specification, written without knowledge of each other, are compatible. * *Backwards Compatibility* - A discussion of compatibility (or lack thereof) with previous feature or protocol versions. * *Known Issues* - A list of known issues.
This subsection is especially important for specifications of features already in use. * *Example Implementation* - A concrete example implementation or description of an expected implementation to serve as the primary reference for implementers. ### History A specification should include a history section, listing any inspiring documents and a plaintext log of significant changes. See an example history section [below](#history-1). ### Copyright A specification should include a copyright section waiving rights via [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0). ## Formatting ### General Specifications must be written in GitHub-flavored Markdown. For a GitHub-flavored Markdown cheat sheet, see [here](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet). For a local Markdown renderer, see [here](https://github.com/joeyespo/grip). ### Language Specifications should be written in Simple English, avoiding obscure terminology and unnecessary jargon. For excellent examples of Simple English, please see the [Simple English Wikipedia](https://simple.wikipedia.org/wiki/Main_Page). The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in specifications are to be interpreted as described in [RFC 2119](https://tools.ietf.org/html/rfc2119). ### Pseudocode Pseudocode in specifications should be language-agnostic and formatted in a simple imperative standard, with line numbers, variables, simple conditional blocks, for loops, and English fragments where necessary to explain further functionality such as scheduling timeouts. LaTeX images should be avoided because they are challenging to review in diff form. Pseudocode for structs can be written in a simple language like TypeScript or Golang, as interfaces.
Example Golang pseudocode struct: ```go type CacheKVStore interface { cache: map[Key]Value parent: KVStore deleted: Key } ``` Pseudocode for algorithms should be written in simple Golang, as functions. Example pseudocode algorithm: ```go func get(store CacheKVStore, key Key) Value { value = store.cache.get(key) if (value != null) { return value } else { value = store.parent.get(key) store.cache.set(key, value) return value } } ``` ## History This specification was significantly inspired by and derived from IBC's [ICS](https://github.com/cosmos/ibc/blob/main/spec/ics-001-ics-standard/README.md), which was in turn derived from Ethereum's [EIP 1](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1.md). Nov 24, 2022 - Initial draft finished and submitted as a PR ## Copyright All content herein is licensed under [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0). # Security Audits Source: https://docs.cosmos.network/sdk/latest/security/audits Security audits and transparency reports for Cosmos Stack components This page is auto-generated from the [cosmos/security](https://github.com/cosmos/security) repository. **Last synced:** Apr 10, 2026 | [View all audits](https://github.com/cosmos/security/tree/main/audits) Cosmos Labs maintains a comprehensive security program for all Cosmos Stack components. This page provides links to third-party security audits and transparency reports.
## Cosmos EVM * [Sherlock 2025 07 28 Final](https://github.com/cosmos/security/blob/main/audits/evm/sherlock_2025_07_28_final.pdf) ## Cosmos Hub (Gaia) * [2022 Liquid Staking Oak](https://github.com/cosmos/security/blob/main/audits/gaia/2022-liquid-staking-oak.pdf) ## Interchain Security (ICS) * [Informal Ics 2023](https://github.com/cosmos/security/blob/main/audits/ics/informal-ics-2023.pdf) ## Ledger **ledger/** * [2023 Zondax](https://github.com/cosmos/security/blob/main/audits/ledger/ledger/2023-zondax.pdf) * [2026 Zondax](https://github.com/cosmos/security/blob/main/audits/ledger/ledger/2026-zondax.pdf) ## Cosmos SDK * [Cosmos Sdk 2019 Final](https://github.com/cosmos/security/blob/main/audits/sdk/cosmos_sdk_2019_final.pdf) * [Cosmos Sdk V53 Audit Final](https://github.com/cosmos/security/blob/main/audits/sdk/cosmos_sdk_v53_audit_final.pdf) * [Group Module Audit](https://github.com/cosmos/security/blob/main/audits/sdk/group_module_audit.pdf) ## Transparency Reports * [Transparency Report 2023 2024](https://github.com/cosmos/security/blob/main/reports/transparency_report_2023_2024.pdf) ## Additional Resources * [Security and Maintenance Policy](/sdk/latest/security/security-policy) - Release and maintenance policy * [Bug Bounty Program](/sdk/latest/security/bug-bounty) - Report vulnerabilities and earn rewards * [cosmos/security Repository](https://github.com/cosmos/security) - Complete security documentation # Bug Bounty Program Source: https://docs.cosmos.network/sdk/latest/security/bug-bounty Security and maintenance policy documentation for the Cosmos Stack This content is sourced from the official [Cosmos Security](https://github.com/cosmos/security) repository. **Last sync:** Apr 10, 2026 | [View source](https://github.com/cosmos/security/blob/main/SECURITY.md) ## Introduction Cosmos Labs is committed to maintaining the security of the Cosmos Stack and supporting responsible vulnerability disclosure. 
We operate a bug bounty program to incentivize security researchers to identify and report security issues. This document defines the process for reporting vulnerabilities, describes the bug bounty program, and outlines Cosmos Labs’ approach to patching and public disclosure. *** ## Reporting a Vulnerability **Private Disclosure Required** Security vulnerabilities affecting the Cosmos ecosystem—including the Cosmos SDK, CometBFT, IBC, and other core components—must be reported privately through the channels listed below. * **Preferred:** Submit reports through the [Cosmos HackerOne Bug Bounty Program](https://hackerone.com/cosmos). * If HackerOne submission is not possible, reports may be sent to `security@cosmoslabs.io` with sufficient technical detail, including impact and reproduction steps. > Reports submitted via email are *not eligible* for bounty rewards. > Only reports submitted through HackerOne qualify for bounties. Public disclosure of vulnerabilities (including GitHub issues, blog posts, or social media) is prohibited until Cosmos Labs has remediated the issue and explicitly authorized disclosure. Disclosure timelines may be coordinated with the reporter. Submission of a report constitutes agreement to participate in **coordinated vulnerability disclosure**, allowing time for development, testing, and deployment of a fix prior to public release of details. *** ## Bug Bounty Program Overview Cosmos Labs operates a bug bounty program through **HackerOne**. Eligible reports are rewarded based on severity, impact, and quality. **In Scope:** Core Cosmos Stack components, including the Cosmos SDK, CometBFT, IBC, Cosmos EVM, and other critical infrastructure components. The authoritative scope definition, severity classifications, and reward ranges are maintained on the Cosmos [HackerOne program page](https://hackerone.com/cosmos). The program is governed by **Safe Harbor** provisions for good-faith research. 
The HackerOne page defines the applicable **Coordinated Vulnerability Disclosure Policy** and **Safe Harbor terms**. > In the event of conflict, the HackerOne policy supersedes all other > documentation. *** ## Vulnerability Severity Levels Reported vulnerabilities are assigned a severity classification that determines handling priority and disclosure timing. | **Level** | **Description** | **Examples** | | --- | --- | --- | | **Critical** | Permanent and irrecoverable loss of funds. | Direct fund loss, unauthorized and unlimited token minting, irreversible theft of funds. | | **High** | Severe impact affecting many nodes or users; often remotely exploitable. | Remote crash or chain halt vulnerabilities. | | **Medium** | Limited or conditional impact; exploitation may require specific conditions. | Node halt requiring elevated permissions. | | **Low** | Minor impact or impractical exploitation scenarios. | Slow block propagation, limited denial-of-service. | These classifications follow industry standards and inform response urgency and disclosure policy. Additional details are available in the [Classification Matrix](https://github.com/cosmos/security/blob/main/resources/CLASSIFICATION_MATRIX.md). *** ## Silent Patch and Disclosure Process Cosmos Labs follows a **silent patch** model for most security vulnerabilities. Issues are addressed privately and remediated prior to public disclosure.
This approach aligns with practices used by other major protocols, such as **Ethereum's Geth** (see [https://geth.ethereum.org/docs/developers/geth-developer/disclosures](https://geth.ethereum.org/docs/developers/geth-developer/disclosures)), **Bitcoin Core** (see [https://bitcoincore.org/en/security-advisories/](https://bitcoincore.org/en/security-advisories/)), and **Zcash** (see [https://z.cash/technology/security-advisories/](https://z.cash/technology/security-advisories/)). Premature disclosure can place unpatched networks at risk. Silent remediation allows operators time to upgrade before vulnerability details become public. Vulnerabilities classified as **Critical** are handled on a case-by-case basis. When an issue presents an immediate or network-wide risk, Cosmos Labs will initiate emergency mitigations, private fix distribution, or coordinated upgrades before any public disclosure occurs. If Cosmos Labs determines that a vulnerability with **network-wide impact** (such as a chain halt or consensus failure) is already being actively exploited, or that attacker awareness is confirmed prior to a scheduled release, the issue is escalated and handled as **Critical** for response and disclosure purposes, regardless of its original classification. ### Fix Distribution * Fixes are delivered through patch or minor releases. * Release notes may omit explicit references to security implications. * Validators and node operators may be notified privately to upgrade. * For critical vulnerabilities, fixes may be distributed privately to key operators or require emergency network upgrades. 
### Disclosure Timeline | **Severity** | **Disclosure Timing** | **Details** | | ---------------- | --------------------------------------------------------------------------- | --------------------------------------------------------------------- | | **Low / Medium** | Approximately four weeks after public release of the fix | Full advisory published with impact and remediation details. | | **High** | After the affected version reaches **End-of-Life (EOL)** (\~1 year typical) | Disclosure delayed to reduce exploitation risk. | | **Critical** | Case-by-case (At minimum after EOL) | Disclosure only when deemed safe; details may be limited or withheld. | *** ## Transparency and Post-Disclosure After expiration of the disclosure embargo, Cosmos Labs publishes a **Security Advisory** (via GitHub advisories or official blog posts) containing: * Vulnerability description * Affected versions * Severity classification * Remediation guidance * Reporter attribution (unless anonymity is requested) All advisories remain publicly available. This delayed disclosure model balances ecosystem safety with long-term transparency. *** Cosmos Labs acknowledges and appreciates the contributions of security researchers, auditors, and white-hat hackers who strengthen the Cosmos ecosystem. *** ### References * [Bitcoin Core Security Advisories](https://bitcoincore.org/en/security-advisories/) * [Go Ethereum Vulnerability Disclosure](https://ethereumpow.github.io/go-ethereum/docs/vulnerabilities/vulnerabilities) * [Bitcoin Core Security Disclosure Policy Announcement](https://bitexes.com/blog/124272) # Security and Maintenance Policy Source: https://docs.cosmos.network/sdk/latest/security/security-policy Security and maintenance policy documentation for the Cosmos Stack This content is sourced from the official [Cosmos Security](https://github.com/cosmos/security) repository. 
**Last sync:** Apr 10, 2026 | [View source](https://github.com/cosmos/security/blob/main/POLICY.md) ## Overview This policy defines how Cosmos Labs manages maintenance and support for the core Cosmos Stack components: * **CometBFT** * **Cosmos SDK** * **Cosmos EVM** * **Inter-Blockchain Communication Protocol (IBC)** This release process aims to provide clarity and predictability to both developers using the Stack and the Cosmos Labs engineering team. Developers should know exactly which software combinations are supported and should be used in production. At the same time, the Cosmos Labs team can coordinate fixes, security patches, and upgrades across a smaller set of well-defined release families, allowing for faster response times and more predictable maintenance. To achieve this, we are introducing the concept of **Release Families**, curated sets of component versions of the Stack. Each family is fully tested for compatibility, stability, and long-term support. Maintenance and bug fixes are provided only for active families. *** ## Release Families A **Release Family** is defined as a specific combination of component versions. Each release family is maintained for **one (1) year from the date it is introduced**. As a general policy, Cosmos Labs intends to introduce **two release families per year**, which typically results in two active families at any given time. However, because each family is supported for one full year, there may be temporary periods where more than two families are active simultaneously. In the event that a new release family is not introduced on schedule, the support window for the most recent family will extend beyond one year until a successor family is formally released. We will not allow a gap in supported release families. 
### Current Family 1 *(older, maintained under 1-year lifecycle policy)* * CometBFT **v0.38.x** * Cosmos SDK **v0.50.x** * IBC **v8.x** ### Current Family 2 *(newer, active)* * CometBFT **v0.38.x** * Cosmos SDK **v0.53.x** * IBC **v10.x** * Cosmos EVM **v0.5.x** *** ## Future Family Evolution When a new family is introduced, it begins its own one-year maintenance window. Because families are supported for one year from release, the retirement of an older family is determined by its lifecycle timeline rather than strictly by the introduction of a new family. In practice, given our target cadence of two families per year, this will typically result in two active families at a time. ### Planned Future Family * CometBFT **v0.39.x** * Cosmos SDK **v0.54.x** * IBC **v11.x** * Cosmos EVM **v0.6.x** Upon introduction of this family, it will enter its one-year maintenance window. The retirement date of older families will be determined based on their original release date and lifecycle timeline. *** ## What Is Supported * **Bug Fixes:** Critical security and stability issues are patched for all active families. * **Compatibility:** All components within a family are guaranteed to work together. * **Lifecycle:** Each family is supported for one year from release, subject to extension if a successor family has not yet been introduced. * **Retirement:** Once a family reaches the end of its lifecycle, it no longer receives patches or compatibility updates. * **Upgradability:** We guarantee an upgrade path from one release family to the next adjacent family in the form of clear guides, compatibility guarantees, and tooling for assistance. *** ## Security Fix Process Please read our [security policy](https://github.com/cosmos/security/blob/main/SECURITY.md) for a detailed breakdown of how bugs and vulnerabilities are to be handled for the Cosmos Stack. 
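The family definitions above lend themselves to a simple programmatic check of whether a concrete component version falls inside a family's supported series. A minimal Go sketch — the `ReleaseFamily` type and `Supports` helper are illustrative only, not part of any official Cosmos tooling:

```go
package main

import (
	"fmt"
	"strings"
)

// ReleaseFamily pins the component versions that are tested together.
// Illustrative only — not an actual Cosmos Labs artifact.
type ReleaseFamily struct {
	Name       string
	Components map[string]string // component -> supported series, e.g. "v0.53.x"
}

// matches reports whether a concrete version (e.g. "v0.53.4") falls
// within a supported series (e.g. "v0.53.x").
func matches(series, version string) bool {
	prefix := strings.TrimSuffix(series, "x") // "v0.53.x" -> "v0.53."
	return strings.HasPrefix(version, prefix)
}

// Supports reports whether the component/version pair is part of the family.
func (f ReleaseFamily) Supports(component, version string) bool {
	series, ok := f.Components[component]
	return ok && matches(series, version)
}

func main() {
	// Family 2 as listed above.
	family2 := ReleaseFamily{
		Name: "Family 2",
		Components: map[string]string{
			"cometbft":   "v0.38.x",
			"cosmos-sdk": "v0.53.x",
			"ibc-go":     "v10.x",
			"cosmos-evm": "v0.5.x",
		},
	}
	fmt.Println(family2.Supports("cosmos-sdk", "v0.53.4")) // true
	fmt.Println(family2.Supports("cosmos-sdk", "v0.50.9")) // false
}
```

A real compatibility check would also account for each family's one-year maintenance window, which this sketch omits.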
***

## Addendum: End of Life (EOL) Notices

The following releases were not covered under this Release Family policy, as it was not yet in place at the time of their lifecycle. However, they have had sufficient maintenance windows and will formally reach **End of Life (EOL) at the end of Q1 2026** with the release of our new Release Family:

* **CometBFT v0.37.x and lower**
* **ibc-go v7.x and lower**
* **Cosmos SDK v0.50.x and lower**

After the end of Q1 2026, these versions will no longer receive maintenance, security patches, or support from Cosmos Labs.

Additionally:

* **CometBFT v1.x** will not be supported. That release has been fully retracted and is not part of any supported release family.

We strongly encourage all teams running affected versions to plan their upgrades accordingly.

# v0.54 Release Notes

Source: https://docs.cosmos.network/sdk/latest/upgrade/release

What's new in the latest Cosmos SDK release, including performance improvements, new features, and removals.

If you are upgrading to v0.54, see the [upgrade guide](/sdk/latest/upgrade/upgrade). For a full list of changes, see the [changelog](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/CHANGELOG.md).

## Overview

This release introduces order-of-magnitude improvements to network stability and throughput. In testing, we were able to support a sustained 1K TPS on a variety of network configurations with no degradation in block time, whereas previously block production would have slowed or halted almost immediately beyond 200 TPS. This is made possible through two critical performance improvements targeting different layers of the stack:

* **Parallel transactions (BlockSTM)**: When applied to blocks containing fully parallelizable transactions, BlockSTM shows between 5x and 10x improvements in execution time depending on the available CPUs, the size of the blocks, and the types of transactions being run.
We have modified the underlying implementations of Cosmos bank sends and EVM native sends to ensure they are parallelizable, so you will benefit from speed-ups of these transactions immediately. It is possible to do the same for other common kinds of Cosmos transactions (e.g. governance, staking, auth), but we haven't optimized them yet. Custom transaction types and EVM smart contracts may similarly require implementation modifications to benefit from parallelization. See our guide [here](/sdk/latest/experimental/blockstm) for more information.

* **Enhanced Networking (LibP2P):** The libp2p-based reactor implementation outperforms Comet's existing p2p implementation on latency benchmarks across a variety of workloads, reducing p99 latency by a factor of 100, and up to 1000 in some cases. libp2p is an industry standard for peer-to-peer data exchange. Under the hood, it leverages QUIC, a modern low-latency UDP-based communication protocol.

At this time, libp2p is intended for use in centrally managed Cosmos networks, as peer exchange and upgradeability from Comet's networking stack are not supported yet. Please reach out if you are interested in testing libp2p in devnet or testnet environments and potentially contributing to these improvements. We want to work closely with teams to gather feedback. See the [LibP2P guide](/cometbft/latest/docs/experimental/lib-p2p) for more information.

## Additional Features

1. **AdaptiveSync** helps nodes catch up when they fall behind by letting consensus and blocksync work simultaneously. During traffic spikes or short block times, this keeps nodes progressing with the network while preserving normal consensus safety and finality behavior. It is especially valuable for RPC-heavy nodes. See the [block sync guide](/cometbft/latest/docs/core/block-sync#adaptivesync) for more information.

2.
**Log/v2** supports the transition of the Cosmos SDK's observability to OpenTelemetry, enabling automatic trace correlation across all log output (shown via the logged keys `trace_id`, `span_id`, and `trace_flags` when a span is present in the `ctx`). This is powered by four new required contextual logging methods on the `Logger` interface (`InfoContext`, `WarnContext`, etc.). Additionally, a new `MultiLogger` allows fanning out to multiple logging backends simultaneously, which the server now uses automatically when OpenTelemetry is configured. See the [logging guide](/sdk/latest/guides/testing/log) and [telemetry guide](/sdk/latest/guides/testing/telemetry) for more information.

3. **IBC General Message Passing (GMP)**: General Message Passing in IBC enables calling arbitrary smart contracts on remote networks. Unlike Interchain Accounts, the caller does not need to own an account on the destination chain (though GMP is general enough to support this usage pattern). Instead, GMP directly calls contracts on the destination chain. This makes it especially useful for implementing mint/burn bridges (see [below](#upcoming-features-available-soon-in-minor-releases) for more details).

## Enterprise Features

The following features are released as part of [Cosmos Enterprise](/enterprise/overview):

1. The **Groups module** enables on-chain multisig and collective decision-making for any set of accounts. Groups are formed with weighted members and one or more configurable decision policies that define how proposals pass. Members submit proposals containing arbitrary SDK messages and vote on them, and any account can trigger execution once a proposal is accepted. Two built-in decision policies are included: threshold (absolute weighted vote count) and percentage (proportion of YES votes), each with configurable voting and minimum execution periods. The decision policy interface supports custom extensions.
See the [Groups module docs](/enterprise/components/group/overview) for more information. 2. The **POA module** provides an admin-managed validator set as a drop-in replacement for the staking, distribution, and slashing modules. Purpose-built for institutional deployments run by a known set of operators, it offers a streamlined validator lifecycle with no native token required. Fee distribution to validators and full governance compatibility are included out of the box. See the [POA module docs](/enterprise/components/poa/overview) for more information. ## Upcoming Features (Available soon in Minor Releases) 1. **Krakatoa mempool (Cosmos EVM only)**: This mempool significantly improves transaction throughput and network stability by making the comet mempool stateless and introducing two new concurrent ABCI methods for transaction processing (`reapTxs` and `insertTx`). The upshot is that transaction processing is more concurrent and more lightweight, resulting in performance and stability gains. This will be available for Cosmos EVM chains at the end of April. 2. **Interchain Fungible Token Standard (IFT):** This is a more modern and flexible approach to token transfers in IBC compared to ICS20 that enables mint/burn based bridging. IFT decouples the contract or module that mints a token from the IBC channel. Importantly, this allows token issuers to establish canonical, owned deployments of their tokens on any networks they choose and manage cross-chain mints/burns with IBC, rather than using “wrapped” tokens that they cannot control. It also allows a single token to support fungibility over multiple IBC paths and to upgrade/change the IBC connection in the background without worrying about the “token path” changing. This is coming shortly to ibc-go, ibc-solidity, and ibc-sol. 3. **IBC support for any EVM network:** IBC functionality will extend directly to any EVM network as a collection of Solidity contracts that implement IBC Eureka. 
This will enable direct IBC connectivity without requiring any modifications to the EVM chain. This means Ethereum, Base, Arbitrum, Optimism, and other EVM networks can participate directly in IBC transfers. Combined with IFT, token issuers can manage canonical token deployments across Cosmos and any number of EVM chains from a single source of truth.

4. **IBC support for Solana:** Similar to EVM support, IBC connectivity will extend to Solana with a native program implementation. This will allow Solana to participate directly in IBC transfers with Cosmos and EVM chains, enabling cross-ecosystem token movement without wrapped tokens or intermediary chains.

5. **IBC v2 relayer:** A standalone, production-ready, request-driven relayer service for the IBC v2 protocol. This relayer will support interoperating between a Cosmos-based chain and major EVM networks (Ethereum, Base, Optimism, Arbitrum, Polygon, and more). Operators submit a source transaction hash and can track each packet's status in real time, from submission through relay completion, with full retry and failure recovery handled automatically.

## Removals

The following features have been removed from this release family:

* **ibc-apps/async-icq:** We have never had official support for the ibc-apps/async-icq middleware; we are simply stating this explicitly. We will not be updating it as a part of this release or going forward, and we will not be testing its compatibility with IBC-go v11.0.0.

* **ibc-apps/pfm (packet forwarding middleware):** We have never had official support for PFM, but historically we did update it and make a best effort to ensure compatibility with IBC during previous release cycles. We will not be doing that as a part of this release or going forward. Instead, we are upstreaming PFM into IBC-Go to streamline our support. We will guarantee equivalent functionality and APIs as part of this migration.
The upstreamed version will be available for you to migrate to in IBC-go v11.1.0, which we are planning to release towards the end of April 2026.

* **ibc-apps/rate-limits:** We have never had official support for the ibc-apps/rate-limits middleware, but historically we did update it and make a best effort to ensure compatibility with IBC during previous release cycles. We will not be doing that as a part of this release or going forward. Instead, we are upstreaming the rate-limiting middleware into IBC-Go to streamline our support. We will guarantee equivalent functionality and APIs as part of this migration. The upstreamed version will be available for you to migrate to in IBC-go v11.2.0, which we are planning to release in the first weeks of May 2026.

* **ibc-apps/ibc-hooks:** We have never had official support for the ibc-apps/ibc-hooks middleware, but historically we did update it and make a best effort to ensure compatibility with IBC during previous release cycles. We will not be doing that as a part of this release or going forward. Instead, we are introducing and will maintain a new `callbacks` middleware that enables calling CosmWasm contracts (like ibc-hooks) as well as Cosmos modules and EVM contracts when processing ICS20 packets. We are working to ensure the upcoming wasmd release will enable CosmWasm contracts to adopt this without changing contract interfaces.

# Network Manager

Source: https://docs.cosmos.network/enterprise/components/network-manager

Unified deployment, orchestration, and lifecycle management for production Cosmos networks.

The **Cosmos Network Manager** is a unified platform for deploying and operating production-grade Cosmos-based blockchain networks. It provides the infrastructure and tooling required to **provision, orchestrate, scale, and secure** a distributed ledger in environments with strict reliability, security, and auditability requirements.

Operating a distributed ledger is fundamentally different from operating traditional infrastructure.
* **Genesis creation** requires isolated transaction generation, aggregation, and redistribution so every node starts with an identical state. * **Network upgrades** require all validators to halt at the same block height, upgrade binaries, and restart in a coordinated sequence—capabilities that blockchain protocols do not natively provide. * Validators must remain isolated from external traffic while preserving low-latency peer communication. * In regulated environments, all changes must flow through **controlled, auditable workflows**. The Cosmos Network Manager addresses these challenges through two tightly integrated components: * **Infrastructure-as-Code (IaC) tooling**, which provisions foundational infrastructure and ledger-specific resources * **Fleet Manager**, which programmatically orchestrates node lifecycle operations across the network | Challenge | How The Cosmos Network Manager Addresses It | | :-------------------- | :----------------------------------------------------------------------------------------------- | | Coordination overhead | Fleet Manager automates genesis creation, coordinated upgrades, and lifecycle operations via API | | Network performance | Configurable topology with validator isolation, sentry nodes, and optimized peer settings | | Scaling | Decoupled IaC and orchestration layers enable independent horizontal and vertical scaling | | Security | Air-gapped deployment, CI/CD-enforced changes, strict network policies, encrypted storage | *** ## What The Cosmos Network Manager Provides ### Deterministic Network Operations The Cosmos Network Manager replaces manual, error-prone node operations with **repeatable, programmatic workflows**. High-level API endpoints coordinate low-level actions across all nodes and return a single, authoritative result, enabling safe network initialization, upgrades, recovery procedures, and deterministic redeployment when required. 
### Infrastructure Automation CLI-based IaC tooling provisions and configures: * Kubernetes-based compute * Persistent storage and relational databases * Networking primitives and load balancers * Ledger node infrastructure and auxiliary services * **IBC relaying and attestation services**, including light client configuration, key management, and monitoring Hardware specifications are configurable and support both **horizontal and vertical scaling** without disrupting network operations. ### Secure, Auditable Control Plane All network and infrastructure changes are designed to flow through **CI/CD-enforced workflows**, ensuring authenticated execution, full auditability, and alignment with enterprise security requirements. The platform supports **air-gapped deployment**, strict network policies, and encrypted storage by default. *** ## Architecture Overview The Cosmos Network Manager is composed of two core layers that remain decoupled but interoperable: the [Infrastructure-as-Code (IaC) tooling](#infrastructure-as-code-tooling) and the [Fleet Manager](#fleet-manager). An overview of the architecture of the Cosmos Network Manager ### Infrastructure-as-Code Tooling The IaC tooling provisions a Kubernetes environment alongside **relational database services (RDS), blob storage, and network primitives**. An **observability stack**, managed via ArgoCD, is deployed into the same environment. The tooling encapsulates all dependencies required to provision a ledger instance with minimal configuration, enabling teams to focus on application and protocol development rather than infrastructure management. It also allows individual engineers to spin up development and testing environments that mirror production topology. In addition to core ledger infrastructure, the IaC tooling provisions **IBC relaying and attestation services** required for interchain connectivity. 
This includes deployment and configuration of relayer processes, light client and IBC smart contract setup, optional remote signing via managed key services, and integration with the observability stack. Access to relaying infrastructure is governed through role-based access control and Kubernetes network policies. Components deployed by the IaC tooling ### Fleet Manager The **Fleet Manager** is responsible for **starting, operating, upgrading, and scaling** the ledger. It addresses the coordination challenges inherent to distributed ledger infrastructure through a **controller–agent architecture** that enables programmatic control over all nodes. The Fleet Manager can be deployed as a Kubernetes service or as a standalone component, depending on operational requirements. #### Controller–Agent Architecture * The Fleet Manager acts as the controller * Ledger nodes run lightweight agent software and must be explicitly registered Each agent consists of: * **Node Manager**, which controls the underlying Cosmos binary and tracks node state * **RPC Server**, which receives validated instructions from the Fleet Manager Agents rely exclusively on Cosmos SDK command-line utilities and have no outbound network access, acting only on instructions received via authenticated RPC calls. This design avoids SSH-based access patterns, reduces operational risk, and eliminates single points of failure. Node identity and operational credentials are managed as part of the Fleet Manager lifecycle. Validator and operator keys can be provisioned during initialization using managed key services, enabling remote signing and eliminating direct key material exposure on hosts. Key generation, rotation, and recovery workflows are designed to integrate with enterprise security controls and audit requirements. 
An overview of the node orchestration functionality of the Fleet Manager *** ## Network Lifecycle Operations The Cosmos Network Manager provides composable but decoupled methods for infrastructure provisioning and node orchestration. ### Network Initialization * Registers provisioned hardware with the Fleet Manager * Aggregates validator inputs to generate a single canonical genesis * Distributes genesis and configuration artifacts * Starts all nodes deterministically with identical initial state ### Coordinated Upgrades * Stops all nodes at a predefined block height * Distributes new binaries and configuration * Restarts nodes in a controlled sequence with safe rollback behavior These workflows integrate directly into CI/CD pipelines, ensuring all changes occur through authenticated, auditable processes. ### Development and Debugging In non-production environments, engineers can: * Provision local or ephemeral ledger deployments * View node status and stream logs * Export genesis and state for debugging and testing The Network Manager supports **multiple isolated ledger deployments**, enabling strict separation between development, test, staging, and production environments while reusing the same operational workflows and tooling. *** ## Performance and Scalability Performance and scalability are achieved through a design that is **topology-aware, security-preserving, and operationally consistent**. ### Topology-Aware Consensus Validators communicate directly over the CometBFT peer-to-peer network. The Fleet Manager initializes validators with a full validator address set marked as private peers, reducing peer-exchange overhead and preserving low-latency block propagation. Validators are isolated behind **sentry node architectures**, which relay blocks to RPC nodes without exposing validator endpoints. 
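For reference, the validator-isolation topology described above maps onto standard CometBFT `config.toml` settings. This is a generic sketch of the sentry pattern with placeholder node IDs, not the Network Manager's actual generated configuration:

```toml
# Illustrative sentry-node settings (CometBFT config.toml, [p2p] section).
[p2p]
# Keep peer exchange on so the sentry participates in the public network.
pex = true

# Never gossip the validator's address to other peers.
private_peer_ids = "<validator-node-id>"

# Always keep the connection to the validator, even under peer limits.
unconditional_peer_ids = "<validator-node-id>"

# On the validator itself, the inverse applies:
#   pex = false
#   persistent_peers = "<sentry-node-ids>"
```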
### Horizontal Scaling Query capacity scales independently from consensus by provisioning additional RPC nodes via IaC tooling and registering them with the Fleet Manager. This enables throughput increases without validator disruption. ### Vertical Scaling CPU, memory, and storage resources are tunable via Kubernetes node groups. When scaling nodes under Fleet Manager control, agents automatically restart nodes with updated configurations through CI/CD-driven workflows. *** ## Security and Enterprise Guarantees Cosmos Network Manager is included as part of **Cosmos Enterprise** and benefits from enterprise-grade security and lifecycle commitments: * Air-gapped and restricted-network deployment support * Encrypted storage and managed secrets * Fine-grained access control using IAM, RBAC, and network policies * Long-term support branches and upgrade guidance * Incident response coordination and defined SLAs The Network Manager supports disaster recovery workflows through managed backups, state exports, and deterministic redeployment of ledger infrastructure using Infrastructure-as-Code. *** ## Who The Cosmos Network Manager Is For The Cosmos Network Manager is designed for: * Production Cosmos-based L1 networks * Enterprise and institutional operators * Teams with strict security, compliance, or audit requirements * Organizations integrating blockchain infrastructure into existing CI/CD and governance frameworks *** ## Availability The Cosmos Network Manager is available as part of the Cosmos Enterprise subscription. Contact **[institutions@cosmoslabs.io](mailto:institutions@cosmoslabs.io)** for more information tailored to your needs and to discuss deployment models and service levels. # Enterprise Overview Source: https://docs.cosmos.network/enterprise/overview Enterprise-grade modules, infrastructure, and support for production Cosmos networks. 
Cosmos Enterprise is a comprehensive enterprise subscription designed for teams operating **production-grade Cosmos-based blockchain networks**. It combines hardened protocol modules, on-premises and managed infrastructure components, and direct access to the engineers building the Cosmos technology stack. Cosmos Enterprise is built for organizations that require **reliability, security, and operational confidence** as they scale critical blockchain infrastructure in production environments. *** ## What Cosmos Enterprise Provides ### Enterprise-Grade Software & Infrastructure * Access to a growing library of **hardened, institutional-grade Cosmos modules** * Production-ready infrastructure components designed for **high availability and operational resilience** * Components built and maintained by the teams responsible for core Cosmos protocol development ### Operational & Engineering Support * Enterprise-ready support with defined SLAs * Access to senior support engineers and escalation paths to core protocol engineers * Guidance across the full lifecycle of your network, from launch to long-term operation Learn more about [**Cosmos Enterprise Professional Services**](/enterprise/professional-services). ### Security & Reliability * Increased security investment for enterprise modules, including **elevated bug bounty incentives** under the Cosmos bug bounty program * Proactively hardened implementations and ongoing review focused on **risk reduction in production environments** * Ongoing guidance on upgrades, configuration, and incident response to **maintain long-term operational resilience** Learn more about [**Cosmos Enterprise Security & Long-Term Stability**](/enterprise/security). 
### Strategic Partnership * Direct feedback loops into the Cosmos technology roadmap * Early access to new enterprise components * Opportunity to influence feature development based on real-world production needs *** ## Enterprise Components Cosmos Enterprise includes a set of modular enterprise components that can be adopted independently or together, depending on your network’s architecture and operational requirements. Unified monitoring, orchestration, and lifecycle management for validators and node infrastructure. Secure, highly-performant interchain infrastructure, including IBC relayers and attestors. Proof of Authority consensus and governance framework for permissioned and enterprise networks. On-chain policy enforcement and role-based control mechanisms for accounts, contracts, and funds. ### Coming Soon * **Privacy Module ("Cosmos Confidential")** — Confidential transactions and privacy-preserving execution for sensitive enterprise use cases * Additional enterprise modules driven by customer demand and production requirements *** ## Who Cosmos Enterprise Is For Cosmos Enterprise is designed for: * Layer-1 Cosmos-based blockchain networks operating in production * Enterprise and institutional teams deploying regulated or high-value applications * Organizations that require long-term support, predictable operations, and protocol-level expertise *** ## Get Started Cosmos Enterprise subscriptions are tailored based on your network architecture, operational maturity, compliance requirements, and desired service level. Contact **[institutions@cosmoslabs.io](mailto:institutions@cosmoslabs.io)** to discuss how Cosmos Enterprise can support your production network. # Professional Services Source: https://docs.cosmos.network/enterprise/professional-services Enterprise-grade support, advisory, and delivery services for production Cosmos networks. Operating a production-grade blockchain network involves far more than deploying software. 
It requires deep protocol expertise, operational rigor, proactive monitoring, and close collaboration between engineering, infrastructure, and governance stakeholders. **Cosmos Enterprise Professional Services** is a core part of the Cosmos Enterprise subscription, providing enterprise-grade support, advisory, and delivery services designed to help teams launch, operate, and evolve secure, reliable Cosmos-based networks with confidence. *** ## What Cosmos Enterprise Professional Services Includes ### Dedicated Support & Communication * **Dedicated Slack channel** with a named primary point of contact * **Escalation paths** to senior support engineers and core protocol engineers * **Guaranteed response-time SLAs** for asynchronous support * **Scheduled live support calls** for real-time issue resolution ### Architecture & Design Advisory * **Architecture review sessions** covering chain design, validator topology, governance configuration, and interchain interoperability * **Best-practice guidance** for upgrades, parameter tuning, and production hardening * **Pre-launch readiness reviews** to identify operational or security risks before mainnet deployment ### Engineering Support * **Pull request and upgrade reviews** with defined completion SLAs * **Guidance on protocol upgrades**, dependency changes, and version compatibility * **Hands-on support** for critical incidents, migrations, or emergency fixes ### Operational Excellence * **Operational runbook guidance** for validators, relayers, and critical infrastructure * **Incident response coordination** during outages, chain halts, or security events * **Post-incident reviews** with remediation recommendations For details on security processes, audits, and long-term support guarantees, see [**Cosmos Enterprise Security & Long-Term Stability**](/enterprise/security). 
***

## Engagement Models

Cosmos Enterprise Professional Services can be delivered through:

* **Ongoing retainer-based support**
* **Time-bound advisory engagements**
* **Launch-focused or upgrade-focused service packages**

Service scope, availability, and SLAs are tailored based on your operational requirements and risk profile. Your dedicated Cosmos Enterprise point of contact will walk you through all available service options and help design the package that best suits your needs.

***

## Discuss a Professional Services Package

Cosmos Enterprise Professional Services packages are scoped based on your network's architecture, operational requirements, and risk profile.

Contact **[institutions@cosmoslabs.io](mailto:institutions@cosmoslabs.io)** to discuss your goals and determine the appropriate level of support.

# Security & Long-Term Stability

Source: https://docs.cosmos.network/enterprise/security

Security assurance, compliance readiness, and long-term stability guarantees for production Cosmos networks.

Security and long-term stability are foundational requirements for production blockchain networks. As part of the Cosmos Enterprise subscription, Cosmos Labs provides structured security assurance, disciplined release practices, and long-term support commitments designed to meet the expectations of enterprise and institutional operators.

***

## Security & Compliance

Cosmos Enterprise includes a comprehensive security program focused on proactive risk reduction, transparent disclosure, and operational readiness.
* **Independent security audits** of enterprise modules and long-term support releases, conducted by reputable third-party firms, with audit reports made available to Cosmos Enterprise subscribers * **Coordinated vulnerability disclosure** processes that ensure impacted parties are notified promptly and responsibly, alongside access to patches with hands-on remediation guidance and support * **Defined security SLAs**, including time-to-notification and time-to-patch targets for supported components ### Bug Bounty Program Coverage Enterprise modules included in Cosmos Enterprise receive **increased security investment** through the Cosmos bug bounty program. * **Elevated bug bounty incentives** are applied to production-critical enterprise modules to encourage proactive, responsible vulnerability discovery * Scope definitions and reward levels reflect the **operational importance and potential impact** of enterprise components * Findings are handled through **coordinated vulnerability disclosure** to support timely remediation and responsible communication ### Compliance Documentation Cosmos Enterprise provides **compliance-ready security documentation**, including: * Audit reports and executive summaries * Security questionnaires and attestations * Incident response and escalation procedures These materials are designed to support internal risk reviews, partner due diligence, and regulatory or compliance workflows. *** ## Long-Term Stability & Lifecycle Support Cosmos Enterprise emphasizes long-term operational stability through structured release management and backward-compatibility commitments. 
* **Long-term support (LTS) branches** for major versions of enterprise components * A strong **backward compatibility commitment**; when breaking changes are unavoidable, Cosmos Labs provides migration guidance and hands-on support * Enterprise components are **upgraded, validated, and tested** as part of the core Cosmos SDK release lifecycle * **Comprehensive end-to-end test suites** covering common production usage patterns and upgrade scenarios This approach enables teams to plan upgrades predictably, minimize operational risk, and operate production networks with confidence over multi-year horizons. *** ## Shared Responsibility Model Cosmos Enterprise security guarantees apply to supported enterprise modules and infrastructure components. Secure deployment, validator operations, and application-layer security remain a shared responsibility between Cosmos Labs and the network operator. Details regarding scope, SLAs, and supported configurations are defined as part of each Cosmos Enterprise subscription package. *** ## Learn More About Cosmos Enterprise Security To learn more about Cosmos Enterprise security assurances, audit coverage, or long-term support commitments, contact **[institutions@cosmoslabs.io](mailto:institutions@cosmoslabs.io)**. # API Reference Source: https://docs.cosmos.network/enterprise/components/group/api Complete API reference for Group module queries and messages # Group Module API Reference ## Overview The Group module provides a comprehensive API for managing on-chain multisig groups and collective decision-making. **Package:** `cosmos.group.v1` **Go Import:** `github.com/cosmos/cosmos-sdk/enterprise/group/x/group` *** ## Data Types ### GroupInfo Represents a group on-chain. 
```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} message GroupInfo { uint64 id = 1; string admin = 2; bytes metadata = 3; uint64 version = 4; string total_weight = 5; google.protobuf.Timestamp created_at = 6; } ``` **Fields:** * `id` (uint64): Unique group identifier, auto-assigned on creation * `admin` (string): Cosmos SDK address of the group administrator * `metadata` (bytes): Optional group metadata * `version` (uint64): Incremented on every group update; used to detect stale proposals * `total_weight` (string): Sum of all member weights * `created_at` (Timestamp): Block time when the group was created *** ### GroupMember Represents a member's relationship to a group. ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} message GroupMember { uint64 group_id = 1; Member member = 2; } message Member { string address = 1; string weight = 2; bytes metadata = 3; google.protobuf.Timestamp added_at = 4; } ``` **Fields:** * `address` (string): Cosmos SDK address of the member * `weight` (string): Voting weight. Set to `"0"` to remove a member. * `metadata` (bytes): Optional member metadata * `added_at` (Timestamp): Block time when the member was added *** ### GroupPolicyInfo Represents a group policy account. 
```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} message GroupPolicyInfo { string address = 1; uint64 group_id = 2; string admin = 3; bytes metadata = 4; uint64 version = 5; google.protobuf.Any decision_policy = 6; google.protobuf.Timestamp created_at = 7; } ``` **Fields:** * `address` (string): The group policy's account address (auto-generated) * `group_id` (uint64): The group this policy is associated with * `admin` (string): Address with authority to update the policy * `decision_policy` (Any): The policy's decision logic (threshold or percentage) * `version` (uint64): Incremented on every update; used to detect aborted proposals *** ### Proposal Represents an on-chain proposal submitted to a group policy. ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} message Proposal { uint64 id = 1; string group_policy_address = 2; bytes metadata = 3; repeated string proposers = 4; google.protobuf.Timestamp submit_time = 5; uint64 group_version = 6; uint64 group_policy_version = 7; ProposalStatus status = 8; TallyResult final_tally_result = 9; google.protobuf.Timestamp voting_period_end = 10; ProposalExecutorResult executor_result = 11; repeated google.protobuf.Any messages = 12; string title = 13; string summary = 14; } ``` **ProposalStatus values:** * `PROPOSAL_STATUS_SUBMITTED` - Open for voting * `PROPOSAL_STATUS_ACCEPTED` - Passed; ready for execution * `PROPOSAL_STATUS_REJECTED` - Failed tally * `PROPOSAL_STATUS_ABORTED` - Group or policy updated during voting * `PROPOSAL_STATUS_WITHDRAWN` - Withdrawn by proposer or policy admin **ProposalExecutorResult values:** * `PROPOSAL_EXECUTOR_RESULT_NOT_RUN` * `PROPOSAL_EXECUTOR_RESULT_SUCCESS` * `PROPOSAL_EXECUTOR_RESULT_FAILURE` *** ### TallyResult The accumulated vote counts for a proposal. 
```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} message TallyResult { string yes_count = 1; string abstain_count = 2; string no_count = 3; string no_with_veto_count = 4; } ``` *** ## Query API The Query service provides read-only access to Group module state. ### GroupInfo Get information about a group by ID. **gRPC:** `cosmos.group.v1.Query/GroupInfo` **REST:** `GET /cosmos/group/v1/groups/{group_id}` **CLI:** ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd q group group-info [group-id] ``` **Example:** ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd q group group-info 1 ``` *** ### GroupPolicyInfo Get information about a group policy account. **gRPC:** `cosmos.group.v1.Query/GroupPolicyInfo` **REST:** `GET /cosmos/group/v1/group_policies/{address}` **CLI:** ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd q group group-policy-info [group-policy-account] ``` *** ### GroupMembers List all members of a group. **gRPC:** `cosmos.group.v1.Query/GroupMembers` **REST:** `GET /cosmos/group/v1/groups/{group_id}/members` **CLI:** ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd q group group-members [group-id] ``` *** ### GroupsByAdmin List all groups administered by a given address. **gRPC:** `cosmos.group.v1.Query/GroupsByAdmin` **REST:** `GET /cosmos/group/v1/groups/by_admin/{admin}` **CLI:** ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd q group groups-by-admin [admin] ``` *** ### GroupPoliciesByGroup List all group policies associated with a group. 
**gRPC:** `cosmos.group.v1.Query/GroupPoliciesByGroup` **REST:** `GET /cosmos/group/v1/groups/{group_id}/group_policies` **CLI:** ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd q group group-policies-by-group [group-id] ``` *** ### GroupPoliciesByAdmin List all group policies administered by a given address. **gRPC:** `cosmos.group.v1.Query/GroupPoliciesByAdmin` **REST:** `GET /cosmos/group/v1/group_policies/by_admin/{admin}` **CLI:** ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd q group group-policies-by-admin [admin] ``` *** ### Proposal Get a proposal by ID. **gRPC:** `cosmos.group.v1.Query/Proposal` **REST:** `GET /cosmos/group/v1/proposals/{proposal_id}` **CLI:** ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd q group proposal [proposal-id] ``` *** ### ProposalsByGroupPolicy List all proposals for a given group policy account. **gRPC:** `cosmos.group.v1.Query/ProposalsByGroupPolicy` **REST:** `GET /cosmos/group/v1/proposals/by_group_policy/{address}` **CLI:** ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd q group proposals-by-group-policy [group-policy-account] ``` *** ### VoteByProposalVoter Get a specific vote on a proposal. **gRPC:** `cosmos.group.v1.Query/VoteByProposalVoter` **REST:** `GET /cosmos/group/v1/votes/{proposal_id}/{voter}` **CLI:** ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd q group vote [proposal-id] [voter] ``` *** ### VotesByProposal List all votes on a proposal. 
**gRPC:** `cosmos.group.v1.Query/VotesByProposal` **REST:** `GET /cosmos/group/v1/votes/by_proposal/{proposal_id}` **CLI:** ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd q group votes-by-proposal [proposal-id] ``` *** ### TallyResult Get the current tally for a proposal. **gRPC:** `cosmos.group.v1.Query/TallyResult` **REST:** `GET /cosmos/group/v1/proposals/{proposal_id}/tally` **CLI:** ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd q group tally-result [proposal-id] ``` **Example Response:** ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "tally": { "yes_count": "2", "abstain_count": "0", "no_count": "1", "no_with_veto_count": "0" } } ``` *** ### Groups List all groups on chain. **gRPC:** `cosmos.group.v1.Query/Groups` **REST:** `GET /cosmos/group/v1/groups` **CLI:** ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd q group groups ``` *** ## Transaction Messages (Msg Service) ### CreateGroup Create a new group with an admin and initial members. **Msg:** `MsgCreateGroup` **CLI:** ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group create-group [admin] [metadata] [members-json-file] ``` **Members JSON format:** ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "members": [ { "address": "cosmos1...", "weight": "1", "metadata": "member description" } ] } ``` **Authorization:** Any address can create a group. **Failure conditions:** * Metadata length exceeds `MaxMetadataLen` * Members have invalid addresses, duplicate entries, or zero weight *** ### UpdateGroupMembers Add, remove, or reweight members in a group. 
**Msg:** `MsgUpdateGroupMembers` **CLI:** ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group update-group-members [admin] [group-id] [members-json-file] ``` **Note:** Set a member's weight to `"0"` to remove them from the group. **Authorization:** Must be signed by the group admin. **Failure conditions:** * Signer is not the group admin * Any associated group policy's `Validate()` method fails against the updated member set *** ### UpdateGroupAdmin Transfer group administration to a new address. **Msg:** `MsgUpdateGroupAdmin` **CLI:** ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group update-group-admin [admin] [group-id] [new-admin] ``` **Authorization:** Must be signed by the current group admin. *** ### UpdateGroupMetadata Update a group's metadata. **Msg:** `MsgUpdateGroupMetadata` **CLI:** ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group update-group-metadata [admin] [group-id] [metadata] ``` **Authorization:** Must be signed by the group admin. *** ### CreateGroupPolicy Create a new group policy account with a decision policy. 
**Msg:** `MsgCreateGroupPolicy` **CLI:** ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group create-group-policy [admin] [group-id] [metadata] [decision-policy-json] ``` **Threshold policy example:** ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "@type": "/cosmos.group.v1.ThresholdDecisionPolicy", "threshold": "2", "windows": { "voting_period": "24h", "min_execution_period": "0s" } } ``` **Percentage policy example:** ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "@type": "/cosmos.group.v1.PercentageDecisionPolicy", "percentage": "0.5", "windows": { "voting_period": "48h", "min_execution_period": "0s" } } ``` **Authorization:** Must be signed by the group admin. **Failure conditions:** * Signer is not the group admin * Metadata length exceeds `MaxMetadataLen` * Decision policy's `Validate()` method fails against the group *** ### CreateGroupWithPolicy Create a group and a group policy in a single transaction. **Msg:** `MsgCreateGroupWithPolicy` **CLI:** ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group create-group-with-policy [admin] [group-metadata] [group-policy-metadata] [members-json-file] [decision-policy-json] ``` Set `--group-policy-as-admin` to make the group policy account the group admin (enabling a self-governed group). *** ### UpdateGroupPolicyAdmin Transfer group policy administration to a new address. **Msg:** `MsgUpdateGroupPolicyAdmin` **CLI:** ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group update-group-policy-admin [admin] [group-policy-account] [new-admin] ``` **Authorization:** Must be signed by the group policy admin. *** ### UpdateGroupPolicyDecisionPolicy Update the decision policy for a group policy account. 
**Msg:** `MsgUpdateGroupPolicyDecisionPolicy` **CLI:** ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group update-group-policy-decision-policy [admin] [group-policy-account] [decision-policy-json] ``` **Authorization:** Must be signed by the group policy admin. **Note:** Updating the decision policy aborts any in-flight proposals for that policy. *** ### UpdateGroupPolicyMetadata Update a group policy's metadata. **Msg:** `MsgUpdateGroupPolicyMetadata` **CLI:** ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group update-group-policy-metadata [admin] [group-policy-account] [metadata] ``` **Authorization:** Must be signed by the group policy admin. *** ### SubmitProposal Submit a proposal to a group policy account. **Msg:** `MsgSubmitProposal` **CLI:** ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group submit-proposal [proposal-json-file] \ --from proposer \ --keyring-backend test ``` **Proposal JSON format:** ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "group_policy_address": "cosmos1...", "proposers": ["cosmos1..."], "metadata": "proposal description", "title": "My Proposal", "summary": "A brief description of the proposal", "messages": [ { "@type": "/cosmos.bank.v1beta1.MsgSend", "from_address": "cosmos1...", "to_address": "cosmos1...", "amount": [{"denom": "uatom", "amount": "1000"}] } ], "exec": 0 } ``` Set `"exec": 1` (`EXEC_TRY`) to attempt immediate execution. When using `EXEC_TRY`, proposers are automatically counted as yes votes. **Authorization:** Must be signed by at least one group member. **Failure conditions:** * Metadata, title, or summary length exceeds `MaxMetadataLen` * Proposer is not a group member *** ### WithdrawProposal Withdraw a pending proposal. 
**Msg:** `MsgWithdrawProposal` **CLI:** ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group withdraw-proposal [proposal-id] [group-policy-admin-or-proposer] ``` **Authorization:** Must be signed by a proposer or the group policy admin. **Failure conditions:** * Signer is neither a proposer nor the group policy admin * Proposal is already closed or aborted *** ### Vote Cast a vote on an open proposal. **Msg:** `MsgVote` **CLI:** ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group vote [proposal-id] [voter] [vote-option] [metadata] ``` **Vote options:** * `VOTE_OPTION_YES` * `VOTE_OPTION_NO` * `VOTE_OPTION_ABSTAIN` * `VOTE_OPTION_NO_WITH_VETO` Set `--exec 1` to attempt immediate execution after voting. **Authorization:** Must be signed by a group member. **Failure conditions:** * Metadata length exceeds `MaxMetadataLen` * Proposal is no longer in the voting period *** ### Exec Execute an accepted proposal. **Msg:** `MsgExec` **CLI:** ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group exec [proposal-id] \ --from executor \ --keyring-backend test ``` **Authorization:** Any address can execute an accepted proposal. **Notes:** * Proposal must be in `ACCEPTED` status * Execution must occur before `MaxExecutionPeriod` after the voting period ends * A failed execution (`PROPOSAL_EXECUTOR_RESULT_FAILURE`) can be retried until expiry *** ### LeaveGroup Remove yourself from a group. **Msg:** `MsgLeaveGroup` **CLI:** ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd tx group leave-group [member-address] [group-id] ``` **Authorization:** Must be signed by the member leaving. 
**Failure conditions:** * Signer is not a group member * Any associated group policy's `Validate()` method fails against the updated member set *** ## Events The Group module emits the following events: | Event Type | Key | Value | | ---------------------------------------- | --------------------------------------- | --------------------------- | | `cosmos.group.v1.EventCreateGroup` | `group_id` | `{groupId}` | | `cosmos.group.v1.EventUpdateGroup` | `group_id` | `{groupId}` | | `cosmos.group.v1.EventCreateGroupPolicy` | `address` | `{groupPolicyAddress}` | | `cosmos.group.v1.EventUpdateGroupPolicy` | `address` | `{groupPolicyAddress}` | | `cosmos.group.v1.EventCreateProposal` | `proposal_id` | `{proposalId}` | | `cosmos.group.v1.EventWithdrawProposal` | `proposal_id` | `{proposalId}` | | `cosmos.group.v1.EventVote` | `proposal_id` | `{proposalId}` | | `cosmos.group.v1.EventExec` | `proposal_id`, `logs` | `{proposalId}`, `{logs}` | | `cosmos.group.v1.EventLeaveGroup` | `group_id`, `address` | `{groupId}`, `{address}` | | `cosmos.group.v1.EventProposalPruned` | `proposal_id`, `status`, `tally_result` | pruning details | *** ## REST API Endpoints | Method | Endpoint | Description | | ------ | ------------------------------------------------------ | --------------------------- | | GET | `/cosmos/group/v1/groups/{group_id}` | Get group info | | GET | `/cosmos/group/v1/groups/by_admin/{admin}` | List groups by admin | | GET | `/cosmos/group/v1/groups` | List all groups | | GET | `/cosmos/group/v1/groups/{group_id}/members` | List group members | | GET | `/cosmos/group/v1/group_policies/{address}` | Get group policy info | | GET | `/cosmos/group/v1/groups/{group_id}/group_policies` | List policies for a group | | GET | `/cosmos/group/v1/group_policies/by_admin/{admin}` | List policies by admin | | GET | `/cosmos/group/v1/proposals/{proposal_id}` | Get proposal | | GET | `/cosmos/group/v1/proposals/by_group_policy/{address}` | List proposals for a policy | | GET | 
`/cosmos/group/v1/proposals/{proposal_id}/tally` | Get tally result | | GET | `/cosmos/group/v1/votes/{proposal_id}/{voter}` | Get a specific vote | | GET | `/cosmos/group/v1/votes/by_proposal/{proposal_id}` | List votes for a proposal | *** ## Common Use Cases ### 1. Create a 2-of-3 Multisig Group ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} # Create the group with 3 members of equal weight simd tx group create-group cosmos1admin "" members.json --from admin # members.json { "members": [ {"address": "cosmos1alice...", "weight": "1"}, {"address": "cosmos1bob...", "weight": "1"}, {"address": "cosmos1carol...", "weight": "1"} ] } # Create a policy requiring 2 of 3 yes votes simd tx group create-group-policy cosmos1admin 1 "" policy.json --from admin # policy.json (threshold = 2) { "@type": "/cosmos.group.v1.ThresholdDecisionPolicy", "threshold": "2", "windows": {"voting_period": "72h", "min_execution_period": "0s"} } ``` ### 2. Submit and Execute a Proposal ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} # Alice submits a proposal simd tx group submit-proposal proposal.json --from alice # Bob and Carol vote yes simd tx group vote 1 cosmos1bob YES "" --from bob simd tx group vote 1 cosmos1carol YES "" --from carol # Anyone executes the accepted proposal simd tx group exec 1 --from alice ``` ### 3. Self-Governing Group (Policy as Admin) ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} # Create group with policy as its own admin simd tx group create-group-with-policy cosmos1admin "" "" members.json policy.json \ --group-policy-as-admin \ --from admin ``` ### 4. 
Multiple Policies for Different Actions ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} # Low-threshold policy for routine actions (1-of-3) simd tx group create-group-policy cosmos1admin 1 "routine" low_policy.json --from admin # High-threshold policy for critical actions (3-of-3) simd tx group create-group-policy cosmos1admin 1 "critical" high_policy.json --from admin ``` *** # Architecture Source: https://docs.cosmos.network/enterprise/components/group/architecture System architecture, core concepts, and module integration details for the Group module # Group Module Architecture ## Overview The Group module enables collective decision-making through a proposal-and-vote system. Groups are collections of accounts with associated voting weights. Each group can have one or more policy accounts, each with its own decision policy governing how proposals are accepted or rejected. You can think of it as a dynamic multi-signature account. ## Architecture Diagram *Figure: Group Module Architecture, showing the Group module's actor model, data structures, and proposal lifecycle, from submission through voting to execution.* ## Core Concepts ### Group A group is an aggregation of accounts with associated voting weights. It is not itself an account and does not hold a balance. A group has an **administrator** who can add, remove, and update members. Key points: * The administrator does not need to be a member of the group * A group policy account can itself be the administrator of a group, enabling self-governed groups * Members have weights that determine their relative voting power within proposals ### Group Policy A group policy is an account associated with a group and a decision policy. Group policies are abstracted from groups so that a single group can have **multiple decision policies** for different types of actions. 
This separation keeps membership consistent across policies while allowing different authorization thresholds for different operations. The recommended pattern is: 1. Create a **master group policy** for a given group 2. Create additional group policies with different decision policies for specific action types 3. Delegate permissions from the master account to sub-accounts using the `x/authz` module ### Decision Policy A decision policy is the mechanism by which group members vote on proposals and the rules that determine whether a proposal passes based on its tally outcome. All decision policies have: * **Minimum Execution Period**: The minimum time after submission before a proposal can be executed. Can be set to `0` to allow immediate execution. * **Maximum Voting Window**: The maximum time after submission during which members can vote. The chain developer also defines an **app-wide maximum execution period** — the window after a proposal's voting period ends during which execution is permitted. #### Threshold Decision Policy A threshold decision policy defines a minimum total weight of yes votes required for a proposal to pass. Abstain and veto votes are treated as no votes. ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "@type": "/cosmos.group.v1.ThresholdDecisionPolicy", "threshold": "2", "windows": { "voting_period": "24h", "min_execution_period": "0s" } } ``` #### Percentage Decision Policy A percentage decision policy defines acceptance as a minimum percentage of total group weight voting yes. This policy is better suited for groups with dynamic membership, since the percentage threshold remains meaningful as member weights change. 
```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "@type": "/cosmos.group.v1.PercentageDecisionPolicy", "percentage": "0.5", "windows": { "voting_period": "24h", "min_execution_period": "0s" } } ``` #### Custom Decision Policies Chain developers can implement custom decision policies by implementing the `DecisionPolicy` interface. This enables encoding arbitrary acceptance logic into a group policy. ### Proposal Any group member can submit a proposal to a group policy account. A proposal consists of: * A list of messages to execute if the proposal is accepted * Optional metadata, title, and summary * An optional `Exec` field to attempt immediate execution on submission #### Voting Members vote with one of four options: * `VOTE_OPTION_YES` * `VOTE_OPTION_NO` * `VOTE_OPTION_ABSTAIN` * `VOTE_OPTION_NO_WITH_VETO` The voting window opens immediately on proposal submission and closes at the time defined by the group policy's decision policy. #### Tallying Tallying occurs when either: 1. A `Msg/Exec`, `Msg/SubmitProposal` (with `EXEC_TRY`), or `Msg/Vote` (with `EXEC_TRY`) triggers an execution attempt 2. The proposal's voting period end is reached during `EndBlock` If the tally passes the decision policy's rules, the proposal is marked `PROPOSAL_STATUS_ACCEPTED`. Otherwise it is marked `PROPOSAL_STATUS_REJECTED`. No further voting is permitted after tallying. #### Executing Proposals Accepted proposals must be executed before `MaxExecutionPeriod` after the voting period ends. Any account (not just group members) can submit a `Msg/Exec` transaction to execute an accepted proposal. When `Exec` is set to `EXEC_TRY` on a submit or vote message, the chain attempts immediate execution. If the proposal doesn't yet pass, it remains open for further votes. #### Withdrawn and Aborted Proposals * **Withdrawn**: Any proposer or the group policy admin can withdraw a proposal before the voting period ends. 
Withdrawn proposals cannot be executed. A proposal is withdrawn via `MsgWithdrawProposal`, which takes an `address` (either a proposer or the group policy admin) and the `proposal_id` of the proposal to withdraw. * **Aborted**: If the group or group policy is updated during the voting period, the proposal is automatically marked as `PROPOSAL_STATUS_ABORTED` since the rules it was created under no longer apply. ### Pruning Proposals and votes are automatically pruned to prevent unbounded state growth. **Votes are pruned:** * After a successful tally triggered by `Msg/Exec` or a submit/vote with `EXEC_TRY` * On `EndBlock` immediately after the proposal's voting period ends (including aborted and withdrawn proposals) **Proposals are pruned:** * On `EndBlock` when a withdrawn or aborted proposal's voting period ends * After a successful proposal execution * On `EndBlock` after `voting_period_end + max_execution_period` has passed # Overview Source: https://docs.cosmos.network/enterprise/components/group/overview On-Chain Multisig Accounts and Collective Decision-Making The Group module is a Cosmos SDK module that enables on-chain multisig accounts and collective decision-making through configurable voting policies. Any set of accounts can form a named group, attach one or more decision policies to it, and collectively authorize the execution of arbitrary messages through a proposal-and-vote workflow. Unlike chain-wide governance proposals, group proposals are scoped to a specific group policy account — enabling organizations, DAOs, and consortiums to manage their on-chain operations with flexible, programmable authorization rules. The Group module is designed for networks that require: 1. **Multi-Party Authorization:** Groups aggregate accounts with weighted voting power, enabling multiple parties to collectively authorize on-chain actions without relying on a single key. 2. 
**Flexible Decision Policies:** Each group can have multiple policy accounts with independent threshold or percentage-based rules, allowing different authorization requirements for different types of actions. 3. **Permissioned Execution:** Proposals are only executed when they meet the policy's acceptance criteria, ensuring on-chain actions reflect genuine collective agreement. 4. **DAO and Consortium Support:** Ideal for coordinating on-chain operations across organizations, multisig signers, and governance participants. ## Source Code The source code for the Group module can be found [here](https://github.com/cosmos/cosmos-sdk/tree/main/enterprise/group). ## Available Documentation This section contains detailed documentation for the Group module. * **[API Reference](/enterprise/components/group/api)** - Complete API reference for queries and messages * **[Architecture](/enterprise/components/group/architecture)** - System architecture, core concepts, and module integration details ## Availability The Group module is commercially licensed and available as part of the Cosmos Enterprise subscription. Contact [institutions@cosmoslabs.io](mailto:institutions@cosmoslabs.io) to learn more about how you can use the Group module for your chain. # Attestor Source: https://docs.cosmos.network/enterprise/components/interoperability/attestor Lightweight, blockchain-agnostic attestation service for IBC cross-chain communication. The IBC Attestor is a lightweight, blockchain-agnostic attestation service that provides cryptographically signed attestations of blockchain state for IBC cross-chain communication. IBC Attestors publish attestations on demand and are stateless: consumers of the service must send requests to the service's gRPC server to receive attestations. 
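Because the service is stateless, every attestation is produced on demand from current chain state. The sketch below models that request/response shape in Python; the real service exposes this over gRPC, and all class and method names here are hypothetical, not the service's actual API:

```python
# Illustrative model of the stateless, on-demand attestation flow.
# The real attestor serves these requests over gRPC; the names below
# (StateAttestationRequest, AttestorService, ...) are hypothetical.
from dataclasses import dataclass


@dataclass
class StateAttestationRequest:
    chain_id: str
    height: int  # must not exceed the chain's finalized height


@dataclass
class Attestation:
    attested_data: bytes  # encoded block height and timestamp
    signature: bytes      # signature over attested_data


class AttestorService:
    """Stateless: nothing is cached between calls. Each request reads
    chain state through an adapter and signs the result on demand."""

    def __init__(self, read_block_time, sign):
        self.read_block_time = read_block_time  # height -> block timestamp
        self.sign = sign                        # payload bytes -> signature bytes

    def state_attestation(self, req: StateAttestationRequest) -> Attestation:
        timestamp = self.read_block_time(req.height)
        payload = f"{req.chain_id}:{req.height}:{timestamp}".encode()
        return Attestation(attested_data=payload, signature=self.sign(payload))


# Stand-in chain view and signer for illustration only.
svc = AttestorService(read_block_time=lambda h: 1_700_000_000 + h,
                      sign=lambda data: b"\x00" * 65)
att = svc.state_attestation(StateAttestationRequest("hub-1", height=100))
print(att.attested_data)  # b'hub-1:100:1700000100'
```

The point of the sketch is the lifecycle, not the encoding: the consumer supplies the query, and the attestor derives and signs the payload fresh on every call, holding no state between requests.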
### Key Features * Multi-chain support via pluggable adapter pattern (EVM, Solana, Cosmos) * Flexible signing (local keystore or remote HSM/KMS) * gRPC API for attestation requests * Concurrent packet validation ## Attestation Types IBC attestors support two kinds of attestations: * **State attestations** — A given chain has a block of height `x` where the block was produced at time `y` * **Packet attestations** — A given chain's state explicitly does or does not contain packet commitment `z` Both types use the same underlying [Attestation](https://github.com/cosmos/ibc-attestor/blob/main/proto/ibc_attestor/attestation.proto) proto type, but the `attested_data` field differs: * State attestations hold the height and timestamp of a block * Packet attestations contain the packets provided in the initial request and the height at which the commitments were found ## Security Model Within the context of IBC relaying, attestors are an off-chain trusted service. Trust is established with on-chain components via two mechanisms: * Securely managed secp256k1 signing keys used by attestors to create attestations. The public parts of the keys must be registered with an on-chain light client. * An aggregation layer during relaying that asserts that a configurable m-of-n quorum of signatures attests to the same state. 
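The m-of-n aggregation rule above can be sketched as follows. This is a minimal illustration of the quorum check, not the actual light-client code, and the `Attestation` and `check_quorum` names are hypothetical:

```python
# Hypothetical sketch of the m-of-n aggregation rule: the quorum passes
# only when at least m distinct *registered* signers attested to
# byte-identical state. Names are illustrative, not the real API.
from collections import Counter
from dataclasses import dataclass


@dataclass(frozen=True)
class Attestation:
    signer: str           # attestor public key (assumed recovered from the signature)
    attested_data: bytes  # ABI-encoded state (height/timestamp or packet commitments)


def check_quorum(attestations, registered, m):
    # Keep at most one attestation per registered signer; unregistered
    # signers and duplicate submissions add no weight.
    by_signer = {a.signer: a.attested_data
                 for a in attestations if a.signer in registered}
    # The quorum must agree on the *same* attested state, so tally per payload.
    tally = Counter(by_signer.values())
    return any(votes >= m for votes in tally.values())


registered = {"key-a", "key-b", "key-c"}
atts = [
    Attestation("key-a", b"height=100"),
    Attestation("key-b", b"height=100"),
    Attestation("key-c", b"height=99"),   # disagrees; does not count toward quorum
    Attestation("key-x", b"height=100"),  # not registered; ignored
]
print(check_quorum(atts, registered, m=2))  # True: key-a and key-b agree
print(check_quorum(atts, registered, m=3))  # False: only two registered signers agree
```

Note that disagreement is not split evenly: only signatures over an identical payload combine, which is why the on-chain light client can accept the quorum's state without reconciling conflicting attestations.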
**Trust assumptions per attestor instance:** * RPC endpoints provide accurate data * Private key is kept secure **Security guarantees per attestor instance:** * Packet commitments must be valid before signing: packet must match computed value; ack must exist on chain; receipt must be absent (zero) * Signatures are cryptographically sound and recoverable * Heights in gRPC queries must not exceed the configured finalization height ## Architecture ### Component Structure *Figure: Attestor Component Structure.* ## Operating an Attestor Instance ### CLI and Configuration When running the attestor, you must specify: * What kind of chain (`--chain-type`) will be attested to * How the signer key (`--signer-type`) will be provided Each chain and signer type has its own configuration parameters, captured under separate sections in the configuration TOML. See the [example EVM attestor config](https://github.com/cosmos/ibc-attestor/blob/main/apps/ibc-attestor/server.dev.toml) for reference. ### Chain Adapters To add support for new kinds of chains, implement the `AttestationAdapter` and `AdapterBuilder` [interfaces](https://github.com/cosmos/ibc-attestor/blob/main/apps/ibc-attestor/src/adapter/mod.rs): * **`AttestationAdapter`** — Responsible for retrieving on-chain state and ensuring it can be parsed as a valid height and timestamp for a `StateAttestation`, or a valid 32-byte commitment path for an IBC Packet. * **`AdapterBuilder`** — Enables per-chain configurations needed by the `AttestationAdapter` implementation. The CLI must also be extended to support any new chain types. ### Signing The IBC attestor supports two signing modes: **local** and **remote**. The remote signer is based on the Cosmos Labs remote signer, which uses AWS KMS for key rotation. The attestor signing algorithm: 1. Retrieve relevant chain/packet state via the chain adapter 2. Encode the data using ABI format to facilitate EVM parsing 3. 
Send the encoded message to the signer, which first hashes and then signs the data as an ECDSA 65-byte recoverable signature (`r||s||v`) Any new signer implementations **must guarantee**: * Arbitrary ABI-encoded data is hashed before signing * The signature is in the ECDSA 65-byte recoverable signature format (`r||s||v`) ## Observability The IBC attestor uses a logging middleware to time requests, set trace IDs, and add structured fields to traces. These fields include: * Adapter kind * Signer kind * Requested height (where applicable) * Number of packets (where applicable) * Packet commitment kind (where applicable) ### Logging * Logs are emitted in JSON format * Errors must be logged at occurrence to simplify line number tracing * Info logs are reserved for middleware and startup operations * Debug logs capture adapter and attestation creation operations ### Tracing * OpenTelemetry-compatible spans * Minimal request time tracking # Deployment Overview Source: https://docs.cosmos.network/enterprise/components/interoperability/deployment End-to-end IBC v2 deployment for mint/burn transfers between Cosmos and EVM chains. This document explains how an IBC v2 deployment works end-to-end to support mint/burn transfers between Cosmos and EVM chains. ## System Architecture IBC system diagram **Legend** * Blue = on-chain contracts (EVM) * Purple = IBC / SDK modules * Orange = off-chain infrastructure * Dashed arrows = proofs / verification ## Components ### On-Chain **Cosmos Modules** * **Core IBC Modules** — Core IBC stack including the ICS 26 Router, ICS 26 Application Callbacks, and ICS 27 GMP. * Specifications: [cosmos/ibc](https://github.com/cosmos/ibc/tree/main/spec/IBC_V2) * Implementations: [cosmos/ibc-go](https://github.com/cosmos/ibc-go/tree/main/modules) * **Attestor Light Client** — An attestor-based IBC light client that verifies IBC packets using quorum-signed ECDSA attestations from a fixed set of trusted signers, implemented in Go. 
* Specification: [cosmos/ibc ics-026-application-callbacks](https://github.com/cosmos/ibc/tree/main/spec/IBC_V2/core/ics-026-application-callbacks) * Implementation: [cosmos/ibc-go attestor light client](https://github.com/cosmos/ibc-go/tree/main/modules/light-clients/attestations) * **Token Factory** — Chain-dependent module that handles core asset logic and is configured with the IBC stack to initiate outgoing and/or process incoming IBC packets. **EVM Contracts** * **Core IBC Contracts** — Core IBC contracts including the ICS 26 Router and ICS 27 GMP + Callbacks contracts. * Specifications: [cosmos/ibc](https://github.com/cosmos/ibc/tree/main/spec/IBC_V2) * Implementations: [cosmos/solidity-ibc-eureka](https://github.com/cosmos/solidity-ibc-eureka/tree/main/contracts) * **Attestor Light Client** — An attestor-based IBC light client that verifies IBC packets using quorum-signed ECDSA attestations from a fixed set of trusted signers, implemented in Solidity. * Specification: [cosmos/solidity-ibc-eureka design](https://github.com/cosmos/solidity-ibc-eureka/blob/main/contracts/light-clients/attestation/IBC_ATTESTOR_DESIGN.md) * Implementation: [cosmos/solidity-ibc-eureka attestation](https://github.com/cosmos/solidity-ibc-eureka/tree/main/contracts/light-clients/attestation) * **Interchain Fungible Token (IFT)** — A set of rules and interfaces for creating and managing fungible tokens that can be transferred across different blockchain networks using ICS-27 GMP. * Implementation: [cosmos/solidity-ibc-eureka IFTBase.sol](https://github.com/cosmos/solidity-ibc-eureka/blob/815ddaf194c81f2098c88afb9d73108687bb48eb/contracts/utils/IFTBaseUpgradeable.sol) ### Off-Chain * **Attestation Service** — A lightweight, blockchain-agnostic attestation service that provides cryptographically signed attestations of blockchain state for IBC cross-chain communication. See the [Attestor](/enterprise/components/interoperability/attestor) documentation for details. 
* Implementation: [cosmos/ibc-attestor](https://github.com/cosmos/ibc-attestor) (Rust service in `apps/ibc-attestor/`) * **Proof API** — A gRPC server that can be queried by a client to get the data needed to generate transaction(s) for relaying IBC packets. * Implementation: [cosmos/solidity-ibc-eureka proof-api](https://github.com/cosmos/solidity-ibc-eureka/tree/main/programs/relayer) * **Relayer** — A standalone, production-ready, request-driven relayer service for the IBC v2 Protocol. See the [Relayer](/enterprise/components/interoperability/relayer) documentation for details. ## Example IBC Transfer Flows ### Cosmos to EVM 1. The user or client submits a transaction on the Cosmos source chain containing a burn/transfer message to the Token Factory module. * The Token Factory module calls the IBC GMP module to make a GMP call to the mint function on the EVM destination chain's IFT contract. * The GMP module calls the core IBC module to send a packet with the GMP payload to the ICS 26 Router on the EVM destination chain. * The IBC modules emit the relevant packet information as an event. * The Attestor continuously monitors blocks for relevant IBC events, parses a valid IBC transfer packet, and generates a signed attestation with associated blockchain state. 2. The client submits a request to the relayer service to relay the IBC transfer packet. * The relayer requests data necessary to submit the IBC transaction and proof on the destination chain from the Proof API. * The Proof API queries each configured Attestor, aggregates signed attestations until the quorum threshold is reached, generates the IBC RecvPacket data, and responds to the relayer. 3. The relayer submits the IBC RecvPacket transaction to the EVM destination chain. * The ICS 26 Router contract parses the packet and executes core validation logic (sequencing, timeouts, etc.), then routes it to the relevant light client contract. * The light client contract validates the IBC packet. 
* The ICS 26 Router routes the validated packet to the GMP contract, which executes the IFT contract's mint function. * The IFT contract mints and transfers the token to the destination address specified in the GMP payload. ### EVM to Cosmos 1. The user or client submits a transaction on the EVM source chain calling `iftTransfer` on the IFT contract. * The IFT contract burns the tokens from the sender and calls the ICS 27 GMP contract with the necessary information for an IBC mint/burn transfer packet. * The GMP contract calls the IBC Router contract to send a GMP call to mint tokens on the Cosmos destination chain. * The IBC contracts emit the relevant packet information as an event. * The Attestor generates a signed attestation of the packet and associated blockchain state. 2. The client submits a request to the relayer service to relay the IBC transfer packet. * The relayer requests data from the Proof API, which aggregates attestations and generates IBC RecvPacket data. 3. The relayer submits the IBC RecvPacket transaction to the Cosmos destination chain. * The core IBC modules parse the packet and execute core validation logic (sequencing, timeouts, etc.), then route it to the relevant light client module. * The light client module validates the IBC packet. * The IBC Core modules route the validated packet to the GMP application module. * The GMP module calls the Token Factory module to mint tokens to the destination address specified in the GMP payload. # Relayer Source: https://docs.cosmos.network/enterprise/components/interoperability/relayer Production-ready, request-driven relayer service for the IBC v2 Protocol. IBC v2 Relayer is a standalone, production-ready, request-driven relayer service for the IBC v2 Protocol. The relayer supports interoperating between a Cosmos-based chain and major EVM networks (Ethereum, Base, Optimism, Arbitrum, Polygon, and more). 
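Clients typically poll the relayer's `Status` endpoint (defined under "API Interface" on this page) and reduce the per-packet states it returns to a single transfer-level status. A minimal sketch of one reasonable reduction; the reduction rule itself is an assumption for illustration, not part of the relayer API, though the state names mirror the `TransferState` enum:

```python
# State names mirror the relayer's TransferState enum.
UNKNOWN, PENDING, COMPLETE, FAILED = (
    "TRANSFER_STATE_UNKNOWN",
    "TRANSFER_STATE_PENDING",
    "TRANSFER_STATE_COMPLETE",
    "TRANSFER_STATE_FAILED",
)

def overall_state(packet_states):
    """One failed packet fails the transfer; all complete means complete;
    anything else is still pending."""
    if not packet_states:
        return UNKNOWN
    if FAILED in packet_states:
        return FAILED
    if all(s == COMPLETE for s in packet_states):
        return COMPLETE
    return PENDING
```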
The core relayer service has been used in production since 2023 via Skip Go, and is now modularized to be run on-prem by clients who cannot leverage the Skip Go managed service.

## Relaying Sequence

*Diagram: relaying sequence*

## Supported Features

* Compatible with all major EVM chains (Ethereum, Base, Optimism, Arbitrum, Polygon, and more)
* Request-driven design for configurable, on-demand relaying
* Transaction failure retry support, covering:
  * Re-orgs
  * Out-of-gas
  * Inadequate gas price
  * Tx network propagation fails to reach the leader
  * Tx invalid by the network, but valid by the submitting node
* Transaction Tracking API
* Remote signing support
* Concurrent packet intake and processing
* Configurable packet delivery latency via batching
* Ability to blacklist addresses (e.g. OFAC)
* Transaction cost tracking

## Getting Started

### Prerequisites

* Go 1.24+
* Docker and Docker Compose
* A running [proof API](https://github.com/cosmos/solidity-ibc-eureka/tree/main/programs/relayer) service
* RPC endpoints for the chains you want to relay between

### Local Development

1. Start Postgres and run migrations:

   ```bash
   docker-compose up -d
   ```

2. Create a local config file (see [Configuration Reference](#configuration-reference) below).

3. Create a local keys file (see [Local Signing](#local-signing) below).

4. Run the relayer:

   ```bash
   make relayer-local
   ```

The relayer will start:

* A gRPC API server on the address configured in `relayer_api.address`
* A Prometheus metrics server on the configured address
* A relay dispatcher polling for new transfers

### CLI Flags

| Flag               | Default                     | Description                         |
| ------------------ | --------------------------- | ----------------------------------- |
| `--config`         | `./config/local/config.yml` | Path to relayer config file         |
| `--ibcv2-relaying` | `true`                      | Enable/disable the relay dispatcher |

## Database Migrations

Database migrations must be run before starting the relayer; the relayer expects the database schema to already exist.

**Local Development:**

```bash
docker-compose up -d
```

This starts PostgreSQL and runs migrations automatically.

**Using the migrate CLI:**

```bash
# Install: https://github.com/golang-migrate/migrate
migrate -path ./db/migrations -database "postgres://relayer:relayer@localhost:5432/relayer?sslmode=disable" up
```

**Using the relayer migrations container:**

```bash
docker run --rm --network host /relayer-migrate: \
  -database "postgres://relayer:relayer@localhost:5432/relayer?sslmode=disable" \
  up
```

## Design

*Diagram: relayer design*

The relayer has three main components: the gRPC server which clients use to interact with the relayer, a Postgres database, and the core relayer. The gRPC server populates the database with packets, which the core relayer monitors and updates as it progresses in relaying those packets.

*Diagram: relaying pipeline*

The relayer is designed as a pipeline composed of a set of asynchronously running processors.
Transfers pass through the processors sequentially. Some pipeline steps process transfers individually, while others process transfers in batches.

## API Interface

The relayer serves a gRPC server which clients use to specify which packets to relay and to track packet relaying progress.

```proto
service RelayerApiService {
  // Relay is used to specify a source tx hash for packets the relayer should relay.
  // The relayer will identify all packets created by the transaction and attempt to relay them all.
  rpc Relay(RelayRequest) returns (RelayResponse) {}

  // The status endpoint is used to track the progress of packet relaying.
  // It takes a transaction hash and returns the status of any relevant packets the relayer is aware of.
  // The transaction must first have been passed to the relay endpoint.
  rpc Status(StatusRequest) returns (StatusResponse) {}
}

message StatusRequest {
  string tx_hash = 1;
  string chain_id = 2;
}

enum TransferState {
  TRANSFER_STATE_UNKNOWN = 0;
  TRANSFER_STATE_PENDING = 1;
  TRANSFER_STATE_COMPLETE = 2;
  TRANSFER_STATE_FAILED = 3;
}

message TransactionInfo {
  string tx_hash = 1;
  string chain_id = 2;
}

message PacketStatus {
  TransferState state = 1;
  uint64 sequence_number = 2;
  string source_client_id = 3;
  TransactionInfo send_tx = 4;
  TransactionInfo recv_tx = 5;
  TransactionInfo ack_tx = 6;
  TransactionInfo timeout_tx = 7;
}

message StatusResponse {
  repeated PacketStatus packet_statuses = 1;
}

message RelayRequest {
  string tx_hash = 1;
  string chain_id = 2;
}

message RelayResponse {}
```

## Observability

| Type   | Name                                    | Description |
| ------ | --------------------------------------- | ----------- |
| Metric | Relayer api request count               | Paginated by method and response code |
| Metric | Relayer api request latency             | Paginated by method |
| Metric | Transfer count                          | Paginated by source, destination chain, and transfer state |
| Metric | Relayer gas balance                     | Paginated by chain and gas token |
| Metric | Relayer gas balance state               | A gauge where each value represents a gas balance state. 0 = ok, 1 = warning, 2 = critical. Paginated by chain |
| Metric | External request count                  | Paginated by endpoint, method and response code |
| Metric | External request latency                | Paginated by endpoint and method |
| Metric | Transactions submitted counter          | Paginated by node response success status and chain |
| Metric | Transaction retry counter               | Paginated by source and destination chain |
| Metric | Transactions confirmed counter          | Paginated by execution success and chain |
| Metric | Transaction gas cost counter            | Paginated by chain |
| Metric | Relay latency                           | Time between send tx and ack/timeout tx. Paginated by source and destination chain |
| Metric | Detected client update required counter | Paginated by chain |
| Metric | Client updated counter                  | Paginated by chain |
| Metric | Excessive relay latency counter         | Incremented anytime a transfer is pending longer than a configured threshold. Paginated by source and destination chain |
| Alert  | Excessive relay latency                 | Should alert whenever the excessive relay latency counter increases |
| Alert  | Excessive gas usage                     | Should alert whenever the gas cost counter increases faster than some threshold |
| Alert  | Low gas balance                         | Should alert whenever the relayer gas balance state metric is in the warning or critical state |

## Configuration Reference

The relayer is configured via a YAML file. Below is the complete configuration schema with all available options.
### Full Example

```yaml
postgres:
  hostname: localhost
  port: "5432"
  database: relayer

metrics:
  prometheus_address: "0.0.0.0:8888"

relayer_api:
  address: "0.0.0.0:9000"

ibcv2_proof_api:
  grpc_address: "localhost:50051"
  grpc_tls_enabled: false

signing:
  # Local signing — set keys_path to use local key file
  keys_path: "./config/local/ibcv2keys.json"
  # Remote signing — set grpc_address to use remote signer (takes precedence over keys_path)
  # grpc_address: "localhost:50052"
  # cosmos_wallet_key: "cosmos-wallet-id"
  # evm_wallet_key: "evm-wallet-id"
  # svm_wallet_key: "svm-wallet-id"

coingecko:
  base_url: "https://pro-api.coingecko.com/api/v3"
  api_key: "your-api-key"
  requests_per_minute: 30
  cache_refresh_interval: 5m

chains:
  cosmoshub:
    chain_name: "cosmoshub"
    chain_id: "cosmoshub-4"
    type: "cosmos"
    environment: "mainnet"
    gas_token_symbol: "ATOM"
    gas_token_coingecko_id: "cosmos"
    gas_token_decimals: 6
    supported_bridges:
      - ibcv2
    cosmos:
      ibcv2_tx_fee_denom: "uatom"
      ibcv2_tx_fee_amount: 5000
      rpc: "https://cosmos-rpc.example.com"
      rpc_basic_auth_var: "COSMOS_RPC_AUTH"
      grpc: "cosmos-grpc.example.com:9090"
      grpc_tls_enabled: true
      address_prefix: "cosmos"
    ibcv2:
      counterparty_chains:
        "08-wasm-0": "1" # client ID on cosmoshub → ethereum chain ID
      finality_offset: 10
      recv_batch_size: 50
      recv_batch_timeout: 10s
      recv_batch_concurrency: 3
      ack_batch_size: 50
      ack_batch_timeout: 10s
      ack_batch_concurrency: 3
      timeout_batch_size: 50
      timeout_batch_timeout: 10s
      timeout_batch_concurrency: 3
      should_relay_success_acks: true
      should_relay_error_acks: true
    signer_gas_alert_thresholds:
      ibcv2:
        warning_threshold: "5000000" # in smallest denom units (uatom)
        critical_threshold: "1000000"
  ethereum:
    chain_name: "ethereum"
    chain_id: "1"
    type: "evm"
    environment: "mainnet"
    gas_token_symbol: "ETH"
    gas_token_coingecko_id: "ethereum"
    gas_token_decimals: 18
    supported_bridges:
      - ibcv2
    evm:
      rpc: "https://eth-mainnet.g.alchemy.com/v2/your-key"
      rpc_basic_auth_var: "ETH_RPC_AUTH"
      contracts:
        ics_26_router_address: "0x..."
        ics_20_transfer_address: "0x..."
      gas_fee_cap_multiplier: 1.5
      gas_tip_cap_multiplier: 1.2
    ibcv2:
      counterparty_chains:
        "tendermint-0": "cosmoshub-4" # client ID on ethereum → cosmoshub chain ID
      recv_batch_size: 100
      recv_batch_timeout: 10s
      recv_batch_concurrency: 3
      ack_batch_size: 100
      ack_batch_timeout: 10s
      ack_batch_concurrency: 3
      timeout_batch_size: 100
      timeout_batch_timeout: 10s
      timeout_batch_concurrency: 3
      should_relay_success_acks: true
      should_relay_error_acks: true
    signer_gas_alert_thresholds:
      ibcv2:
        warning_threshold: "1000000000000000000" # 1 ETH
        critical_threshold: "500000000000000000" # 0.5 ETH
```

### Section Reference

#### `postgres`

| Field      | Type   | Description   |
| ---------- | ------ | ------------- |
| `hostname` | string | Postgres host |
| `port`     | string | Postgres port |
| `database` | string | Database name |

Database credentials are read from environment variables `POSTGRES_USER` and `POSTGRES_PASSWORD` (default: `relayer`/`relayer`).

#### `metrics`

| Field                | Type   | Description                                               |
| -------------------- | ------ | --------------------------------------------------------- |
| `prometheus_address` | string | Address to serve Prometheus metrics (e.g. `0.0.0.0:8888`) |

#### `relayer_api`

| Field     | Type   | Description                                                        |
| --------- | ------ | ------------------------------------------------------------------ |
| `address` | string | Address for the gRPC API server to listen on (e.g. `0.0.0.0:9000`) |

#### `ibcv2_proof_api`

Connection to the proof API service that generates relay transactions.

| Field              | Type   | Description                             |
| ------------------ | ------ | --------------------------------------- |
| `grpc_address`     | string | gRPC address of the proof API           |
| `grpc_tls_enabled` | bool   | Enable TLS for the proof API connection |

#### `signing`

Signing configuration.
The mode is inferred from which fields are set: * If `grpc_address` is set → remote signing (ignores `keys_path`) * Else if `keys_path` is set → local signing from key file * Else → fatal error at startup | Field | Type | Description | | ------------------- | ------ | ------------------------------------------------------------------------------------ | | `keys_path` | string | Path to local signing keys JSON file | | `grpc_address` | string | gRPC address of the remote signer service. If set, takes precedence over `keys_path` | | `cosmos_wallet_key` | string | Wallet ID for Cosmos chain signing (remote signer only) | | `evm_wallet_key` | string | Wallet ID for EVM chain signing (remote signer only) | | `svm_wallet_key` | string | Wallet ID for Solana chain signing (remote signer only) | #### `coingecko` (optional) Used for tracking transaction gas costs in USD. If omitted, gas cost tracking is disabled. | Field | Type | Description | | ------------------------ | -------- | ---------------------------------- | | `base_url` | string | Coingecko API base URL | | `api_key` | string | API key | | `requests_per_minute` | int | Rate limit | | `cache_refresh_interval` | duration | How often to refresh cached prices | #### `chains.` Each entry under `chains` defines a chain the relayer can interact with. | Field | Type | Description | | ------------------------ | --------- | ----------------------------------------------------- | | `chain_name` | string | Human-readable chain name. Used primarily in metrics. 
| | `chain_id` | string | Chain identifier (numeric for EVM, string for Cosmos) | | `type` | string | `cosmos`, `evm`, or `svm` | | `environment` | string | `mainnet` or `testnet` | | `gas_token_symbol` | string | Gas token ticker symbol | | `gas_token_coingecko_id` | string | Coingecko ID for gas cost tracking (optional) | | `gas_token_decimals` | uint8 | Decimal places for the gas token | | `supported_bridges` | \[]string | List of bridge types (currently only `ibcv2`) | #### `chains..cosmos` Required when `type: cosmos`. | Field | Type | Description | | --------------------- | ------- | --------------------------------------------------------------------------- | | `gas_price` | float64 | Gas price for fee estimation. Mutually exclusive with `ibcv2_tx_fee_amount` | | `ibcv2_tx_fee_denom` | string | Fee denom for ibcv2 txs (required if `ibcv2_tx_fee_amount` is set) | | `ibcv2_tx_fee_amount` | uint64 | Fixed fee amount for ibcv2 txs. Mutually exclusive with `gas_price` | | `rpc` | string | Tendermint RPC endpoint | | `rpc_basic_auth_var` | string | Environment variable name containing basic auth credentials for RPC | | `grpc` | string | gRPC endpoint | | `grpc_tls_enabled` | bool | Enable TLS for gRPC | | `address_prefix` | string | Bech32 address prefix (e.g. `cosmos`, `osmo`) | #### `chains..evm` Required when `type: evm`. 
| Field | Type | Description | | ----------------------------------- | ------- | ------------------------------------------------------------------- | | `rpc` | string | Ethereum JSON-RPC endpoint | | `rpc_basic_auth_var` | string | Environment variable name containing basic auth credentials for RPC | | `contracts.ics_26_router_address` | string | ICS26 Router contract address | | `contracts.ics_20_transfer_address` | string | ICS20 Transfer contract address | | `gas_fee_cap_multiplier` | float64 | Multiplier applied to the estimated gas fee cap (optional) | | `gas_tip_cap_multiplier` | float64 | Multiplier applied to the estimated gas tip cap (optional) | #### `chains..ibcv2` IBC v2 relay configuration for this chain. | Field | Type | Description | | --------------------------- | ------------------ | ----------------------------------------------------------------------------------------------------------------- | | `counterparty_chains` | map\[string]string | Maps client IDs on this chain to their counterparty chain IDs. Only connections listed here will be relayed | | `finality_offset` | uint64 | Number of blocks to wait after a tx before considering it finalized. 
If omitted, uses the chain's native finality | | `recv_batch_size` | int | Max packets to accumulate before flushing a recv batch | | `recv_batch_timeout` | duration | Max time to wait for recv packets to accumulate before flushing | | `recv_batch_concurrency` | int | Max concurrent recv batches being processed | | `ack_batch_size` | int | Max packets to accumulate before flushing an ack batch | | `ack_batch_timeout` | duration | Max time to wait for ack packets to accumulate before flushing | | `ack_batch_concurrency` | int | Max concurrent ack batches being processed | | `timeout_batch_size` | int | Max packets to accumulate before flushing a timeout batch | | `timeout_batch_timeout` | duration | Max time to wait for timeout packets to accumulate before flushing | | `timeout_batch_concurrency` | int | Max concurrent timeout batches being processed | | `should_relay_success_acks` | bool | Whether to relay acknowledgements for successful packet deliveries | | `should_relay_error_acks` | bool | Whether to relay acknowledgements for failed packet deliveries | #### `chains..signer_gas_alert_thresholds` | Field | Type | Description | | -------------------------- | ------ | -------------------------------------------------------------------------- | | `ibcv2.warning_threshold` | string | Gas balance (in smallest denom) at which the metric reports warning state | | `ibcv2.critical_threshold` | string | Gas balance (in smallest denom) at which the metric reports critical state | ## Signing The relayer supports two signing modes, configured via the `signing` block in the YAML config. The mode is inferred from which fields are populated: * If `grpc_address` is set → **remote signing** (ignores `keys_path`) * Else if `keys_path` is set → **local signing** from key file * Else → fatal error at startup ### Local Signing Set `signing.keys_path` to point to a JSON file containing private keys. 
The format is a map of chain IDs to key objects: ```yaml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} signing: keys_path: "./config/local/ibcv2keys.json" ``` ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "1": { "private_key": "0xabc123..." }, "cosmoshub-4": { "private_key": "abc123..." } } ``` For EVM chains, the private key is a hex-encoded ECDSA private key. For Cosmos chains, it is a hex-encoded secp256k1 private key. ### Remote Signing For production deployments, the relayer can delegate signing to an external gRPC service. This keeps private keys isolated from the relayer process. ```yaml theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} signing: grpc_address: "signer.internal:50052" cosmos_wallet_key: "my-cosmos-wallet" evm_wallet_key: "my-evm-wallet" svm_wallet_key: "my-svm-wallet" ``` The remote signer connection uses the `SERVICE_ACCOUNT_TOKEN` environment variable as a bearer token in gRPC metadata for authenticating requests to the signing service. 
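The mode-selection precedence described at the start of this section can be sketched as follows; this is an illustration, with dictionary keys matching the fields of the `signing` YAML block:

```python
def resolve_signing_mode(signing_cfg: dict) -> str:
    """Mirror the documented precedence: remote signing wins over local."""
    if signing_cfg.get("grpc_address"):
        return "remote"  # keys_path, if also set, is ignored
    if signing_cfg.get("keys_path"):
        return "local"
    # Neither field set: the relayer treats this as a fatal startup error
    raise ValueError("signing: set grpc_address (remote) or keys_path (local)")
```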
The remote signer must implement the following gRPC service: ```proto theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} service SignerService { rpc GetChains(GetChainsRequest) returns (GetChainsResponse) {} rpc GetWallet(GetWalletRequest) returns (GetWalletResponse) {} rpc GetWallets(GetWalletsRequest) returns (GetWalletsResponse) {} rpc Sign(SignRequest) returns (SignResponse) {} } ``` The `Sign` RPC accepts transaction payloads for EVM, Cosmos, and Solana chains and returns the appropriate signature format: * **EVM**: Accepts serialized tx bytes + chain ID, returns `(r, s, v)` signature components * **Cosmos**: Accepts sign doc bytes, returns a raw signature * **Solana**: Accepts a base64-encoded transaction, returns a raw signature ## Upcoming Features * Solana support ## Unsupported Features * Charging end users fees to relay IBC transactions * Relaying IBC v1 packets # API Reference Source: https://docs.cosmos.network/enterprise/components/poa/api Complete API reference for PoA module gRPC queries and transactions # PoA Module API Documentation ## Overview The Proof of Authority (PoA) module provides a governance mechanism for managing validators in a Cosmos SDK blockchain. Unlike traditional Proof of Stake, PoA allows a designated admin to control validator set membership and voting power distribution. **Package:** `cosmos.poa.v1` **Go Import:** `github.com/cosmos/cosmos-sdk/enterprise/poa/types` *** ## Core Concepts * **Admin Control:** A single admin address has exclusive authority to manage validators and module parameters * **Validator Management:** Create validators, update voting power, and manage the active validator set * **Fee Distribution:** Validators accumulate fees that can be withdrawn by their operators * **Dynamic Updates:** Changes to the validator set are applied without stopping the chain *** ## Data Types ### Validator Represents a validator in the PoA system. 
```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} message Validator { google.protobuf.Any pub_key = 1; int64 power = 2; ValidatorMetadata metadata = 3; repeated cosmos.base.v1beta1.DecCoin allocated_fees = 4; } ``` **Fields:** * `pub_key` (Any): The validator's consensus public key (typically `/cosmos.crypto.ed25519.PubKey`) * `power` (int64): Voting power for this validator (use `0` to remove a validator) * `metadata` (ValidatorMetadata): Additional validator information * `allocated_fees` (DecCoin\[]): Accumulated fees allocated to this validator **Example:** ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "pub_key": { "@type": "/cosmos.crypto.ed25519.PubKey", "key": "YUzyiqZzKN8BmLbl75gdXfbxQ2QtSYpPSwA85bZ3xuE=" }, "power": "10000", "metadata": { "moniker": "validator-1", "description": "First validator node", "operator_address": "cosmos1x0mm8rws8lm46xay3zyyznzr6lvu5um3kht0x7" }, "allocated_fees": [] } ``` ### ValidatorMetadata Metadata information about a validator. ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} message ValidatorMetadata { string moniker = 3; string description = 4; string operator_address = 5; } ``` **Fields:** * `moniker` (string): Human-readable name for the validator * `description` (string): Optional description of the validator * `operator_address` (string): Cosmos SDK address that operates this validator ### Params Module parameters. ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} message Params { string admin = 1; } ``` **Fields:** * `admin` (string): Cosmos SDK address with administrative privileges ### ValidatorFees Represents fee allocations for a validator operator. 
```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} message ValidatorFees { repeated cosmos.base.v1beta1.DecCoin fees = 1; } ``` **Fields:** * `fees` (DecCoin\[]): List of coins representing allocated fees *** ## Query API The Query service provides read-only access to PoA module state. ### Params Get module parameters. **gRPC:** `cosmos.poa.v1.Query/Params` **REST:** `GET /cosmos/poa/v1/params` **Request:** ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} message QueryParamsRequest {} ``` **Response:** ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} message QueryParamsResponse { Params params = 1; } ``` **CLI:** ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd q poa params ``` **Example Response:** ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "params": { "admin": "cosmos1x0mm8rws8lm46xay3zyyznzr6lvu5um3kht0x7" } } ``` *** ### Validator Query a single validator by address. **gRPC:** `cosmos.poa.v1.Query/Validator` **REST:** `GET /cosmos/poa/v1/validator/{address}` **Request:** ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} message QueryValidatorRequest { string address = 1; // Consensus or operator address } ``` **Response:** ```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} message QueryValidatorResponse { Validator validator = 1; } ``` **CLI:** ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} simd q poa validator
[address]
```

**Notes:**

* `address` can be either a consensus address or operator address

***

### Validators

List all validators in the system.

**gRPC:** `cosmos.poa.v1.Query/Validators`

**REST:** `GET /cosmos/poa/v1/validators`

**Request:**

```protobuf
message QueryValidatorsRequest {
  cosmos.base.query.v1beta1.PageRequest pagination = 2;
}
```

**Response:**

```protobuf
message QueryValidatorsResponse {
  repeated Validator validators = 1;
  cosmos.base.query.v1beta1.PageResponse pagination = 2;
}
```

**CLI:**

```bash
simd q poa validators
```

**Notes:**

* Results are always returned in descending order by voting power
* Supports pagination for large validator sets

**Example Response:**

```json
{
  "validators": [
    {
      "pub_key": {
        "@type": "/cosmos.crypto.ed25519.PubKey",
        "key": "YUzyiqZzKN8BmLbl75gdXfbxQ2QtSYpPSwA85bZ3xuE="
      },
      "power": "10000",
      "metadata": {
        "moniker": "validator-1",
        "operator_address": "cosmos1..."
      }
    }
  ]
}
```

***

### WithdrawableFees

Query fees available for withdrawal by a validator operator.
**gRPC:** `cosmos.poa.v1.Query/WithdrawableFees`

**REST:** `GET /cosmos/poa/v1/allocated_fees/{operator_address}`

**Request:**

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
message QueryWithdrawableFeesRequest {
  string operator_address = 1;
}
```

**Response:**

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
message QueryWithdrawableFeesResponse {
  ValidatorFees fees = 1;
}
```

**CLI:**

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd q poa allocated-fees [operator-address]
```

**Example Response:**

```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "fees": {
    "fees": [
      {
        "denom": "token",
        "amount": "1000.500000000000000000"
      }
    ]
  }
}
```

***

### TotalPower

Get the total voting power across all validators.

**gRPC:** `cosmos.poa.v1.Query/TotalPower`

**REST:** `GET /cosmos/poa/v1/total_power`

**Request:**

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
message QueryTotalPowerRequest {}
```

**Response:**

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
message QueryTotalPowerResponse {
  int64 total_power = 1;
}
```

**CLI:**

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd q poa total-power
```

**Example Response:**

```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "total_power": "50000"
}
```

***

## Transaction Messages (Msg Service)

The Msg service handles state-changing operations.

### UpdateParams

Update module parameters (admin only).
**gRPC:** `cosmos.poa.v1.Msg/UpdateParams`

**Message:**

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
message MsgUpdateParams {
  Params params = 1;
  string admin = 2; // Signer must be current admin
}
```

**Response:**

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
message MsgUpdateParamsResponse {}
```

**CLI:**

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx poa update-params \
  --admin <new-admin-address> \
  --from <admin-key> \
  --keyring-backend test \
  --chain-id <chain-id> \
  -y
```

**Authorization:** Only the current admin can execute this transaction.

***

### CreateValidator

Create a new validator with zero voting power (operator initiates, admin must activate).

**gRPC:** `cosmos.poa.v1.Msg/CreateValidator`

**Message:**

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
message MsgCreateValidator {
  google.protobuf.Any pub_key = 1;
  string moniker = 2;
  string description = 3;
  string operator_address = 4; // Signer
}
```

**Response:**

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
message MsgCreateValidatorResponse {}
```

**CLI:**

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx poa create-validator \
  --pubkey <consensus-pubkey> \
  --moniker "my-validator" \
  --description "Validator description" \
  --from <operator-key> \
  --keyring-backend test \
  --chain-id <chain-id> \
  -y
```

**Authorization:** Any account can create a validator, but it starts with power=0.

**Notes:**

* The validator will not participate in consensus until the admin updates its power to a non-zero value
* Public key must be a valid consensus public key (typically Ed25519)

***

### UpdateValidators

Update validator set (admin only). This is the primary mechanism for managing validators.
**gRPC:** `cosmos.poa.v1.Msg/UpdateValidators`

**Message:**

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
message MsgUpdateValidators {
  repeated Validator validators = 1;
  string admin = 2; // Signer must be admin
}
```

**Response:**

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
message MsgUpdateValidatorsResponse {}
```

**CLI (inline):**

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx poa update-validators \
  --validator '{
    "pub_key": {
      "@type": "/cosmos.crypto.ed25519.PubKey",
      "key": "YUzyiqZzKN8BmLbl75gdXfbxQ2QtSYpPSwA85bZ3xuE="
    },
    "power": 10000
  }' \
  --validator '{
    "pub_key": {
      "@type": "/cosmos.crypto.ed25519.PubKey",
      "key": "lSR1GEByJtzgiuCevrWgcyBWjhQXjycsuzzIdf56Oa4="
    },
    "power": 0
  }' \
  --from account \
  --keyring-backend test \
  --chain-id <chain-id> \
  -y
```

**CLI (from file):**

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx poa update-validators validators.json \
  --from account \
  --keyring-backend test \
  --chain-id <chain-id> \
  -y
```

**File Format (validators.json):**

```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
[
  {
    "pub_key": {
      "@type": "/cosmos.crypto.ed25519.PubKey",
      "key": "YUzyiqZzKN8BmLbl75gdXfbxQ2QtSYpPSwA85bZ3xuE="
    },
    "power": 10000,
    "metadata": {
      "moniker": "validator-1",
      "description": "First validator",
      "operator_address": "cosmos1x0mm8rws8lm46xay3zyyznzr6lvu5um3kht0x7"
    }
  },
  {
    "pub_key": {
      "@type": "/cosmos.crypto.ed25519.PubKey",
      "key": "lSR1GEByJtzgiuCevrWgcyBWjhQXjycsuzzIdf56Oa4="
    },
    "power": 0
  }
]
```

**Authorization:** Only the admin can execute this transaction.
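A `validators.json` batch file can be sanity-checked offline before broadcasting. The sketch below is a hypothetical Python pre-flight check (not part of the module) that mirrors the documented validation rules: a valid base64-encoded Ed25519 public key, non-negative power, and no duplicate operator addresses.

```python theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
import base64

def preflight(validators):
    """Return a list of human-readable problems found in a batch update."""
    errors = []
    seen_operators = set()
    for i, v in enumerate(validators):
        key = v.get("pub_key", {}).get("key", "")
        try:
            raw = base64.b64decode(key, validate=True)
            if len(raw) != 32:  # Ed25519 public keys are 32 bytes
                errors.append(f"validator {i}: key is not a 32-byte Ed25519 key")
        except Exception:
            errors.append(f"validator {i}: key is not valid base64")
        if v.get("power", 0) < 0:
            errors.append(f"validator {i}: negative power")
        op = v.get("metadata", {}).get("operator_address")
        if op is not None:
            if op in seen_operators:
                errors.append(f"validator {i}: duplicate operator {op}")
            seen_operators.add(op)
    return errors

# Same shape as the validators.json example above
vals = [
    {"pub_key": {"@type": "/cosmos.crypto.ed25519.PubKey",
                 "key": "YUzyiqZzKN8BmLbl75gdXfbxQ2QtSYpPSwA85bZ3xuE="},
     "power": 10000,
     "metadata": {"operator_address": "cosmos1x0mm8rws8lm46xay3zyyznzr6lvu5um3kht0x7"}},
]
assert preflight(vals) == []
```

This only catches obvious mistakes locally; the module performs its own authoritative validation on-chain.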
**Notes:**

* Can update multiple validators in a single transaction
* Setting `power: 0` removes a validator from the active set
* Changes propagate to CometBFT consensus in the next block
* Missing fields in metadata are preserved from existing state

***

### WithdrawFees

Withdraw accumulated fees to the operator's account.

**gRPC:** `cosmos.poa.v1.Msg/WithdrawFees`

**Message:**

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
message MsgWithdrawFees {
  string operator = 1; // Signer
}
```

**Response:**

```protobuf theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
message MsgWithdrawFeesResponse {}
```

**CLI:**

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx poa withdraw-fees \
  --from <operator-key> \
  --keyring-backend test \
  --chain-id <chain-id> \
  -y
```

**Authorization:** Must be signed by the validator's operator address.

**Notes:**

* Transfers all accumulated fees to the operator's account
* Fees are denominated in the chain's native token(s)

***

## Common Use Cases

### 1. Query Current Admin

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd q poa params
```

### 2. List All Active Validators

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd q poa validators
```

### 3.
Add a New Validator

**Step 1:** Operator creates the validator:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx poa create-validator \
  --pubkey <consensus-pubkey> \
  --moniker "new-validator" \
  --from operator-account \
  --keyring-backend test \
  -y
```

**Step 2:** Admin activates with voting power:

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx poa update-validators \
  --validator '{
    "pub_key": {"@type": "/cosmos.crypto.ed25519.PubKey", "key": "..."},
    "power": 10000
  }' \
  --from admin \
  --keyring-backend test \
  -y
```

### 4. Change Validator Voting Power

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx poa update-validators \
  --validator '{
    "pub_key": {"@type": "/cosmos.crypto.ed25519.PubKey", "key": "..."},
    "power": 20000
  }' \
  --from admin \
  --keyring-backend test \
  -y
```

### 5. Remove a Validator

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx poa update-validators \
  --validator '{
    "pub_key": {"@type": "/cosmos.crypto.ed25519.PubKey", "key": "..."},
    "power": 0
  }' \
  --from admin \
  --keyring-backend test \
  -y
```

### 6. Withdraw Validator Fees

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx poa withdraw-fees \
  --from validator-operator \
  --keyring-backend test \
  -y
```

### 7. Transfer Admin Rights

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
simd tx poa update-params \
  --admin cosmos1newadminaddress... \
  --from current-admin \
  --keyring-backend test \
  -y
```

***

## REST API Endpoints

All query endpoints are available via REST:

| Method | Endpoint                                           | Description            |
| ------ | -------------------------------------------------- | ---------------------- |
| GET    | `/cosmos/poa/v1/params`                            | Get module parameters  |
| GET    | `/cosmos/poa/v1/validator/{address}`               | Get single validator   |
| GET    | `/cosmos/poa/v1/validators`                        | List all validators    |
| GET    | `/cosmos/poa/v1/allocated_fees/{operator_address}` | Get withdrawable fees  |
| GET    | `/cosmos/poa/v1/total_power`                       | Get total voting power |

**Example REST Query:**

```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
curl http://localhost:1317/cosmos/poa/v1/validators
```

***

## Error Handling

Common error scenarios:

### Unauthorized Admin Action

**Error:** Transaction rejected
**Cause:** Non-admin attempted to call admin-only function
**Solution:** Ensure transaction is signed by the admin account

### Invalid Public Key

**Error:** Invalid validator public key
**Cause:** Malformed or wrong type of public key
**Solution:** Use Ed25519 public key in correct format

### Validator Not Found

**Error:** Validator does not exist
**Cause:** Querying non-existent validator
**Solution:** Verify validator address/public key

### Insufficient Fees

**Error:** No fees to withdraw
**Cause:** Validator has no accumulated fees
**Solution:** Wait for fees to accumulate from block rewards

***

## Integration Examples

### JavaScript/TypeScript (CosmJS)

```typescript theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
import { QueryClient, SigningStargateClient } from "@cosmjs/stargate";
import { Tendermint37Client } from "@cosmjs/tendermint-rpc";

// Query validators over ABCI. QueryValidatorsRequest/QueryValidatorsResponse
// are assumed to be generated from the PoA proto files (e.g. with ts-proto).
const tmClient = await Tendermint37Client.connect("http://localhost:26657");
const queryClient = QueryClient.withExtensions(tmClient);
const { value } = await queryClient.queryAbci(
  "/cosmos.poa.v1.Query/Validators",
  QueryValidatorsRequest.encode({}).finish()
);
const response = QueryValidatorsResponse.decode(value);

// Update validators (requires signing)
const signingClient = await SigningStargateClient.connectWithSigner(
"http://localhost:26657", wallet ); const msg = { typeUrl: "/cosmos.poa.v1.MsgUpdateValidators", value: { validators: [{ pubKey: { typeUrl: "/cosmos.crypto.ed25519.PubKey", value: ... }, power: 10000, metadata: { moniker: "validator-1", operatorAddress: "cosmos1..." } }], admin: "cosmos1adminaddress..." } }; const result = await signingClient.signAndBroadcast( adminAddress, [msg], "auto" ); ``` ### Python (cosmpy) ```python theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} from cosmpy.aerial.client import LedgerClient, NetworkConfig from cosmpy.aerial.wallet import LocalWallet # Create client client = LedgerClient(NetworkConfig.fetchai_mainnet()) # Query validators response = client.query_contract( "cosmos.poa.v1.Query/Validators", {} ) print(response) ``` ### Go ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} import ( "context" poatypes "github.com/cosmos/cosmos-sdk/enterprise/poa/types" "google.golang.org/grpc" ) // Query client conn, _ := grpc.Dial("localhost:9090", grpc.WithInsecure()) queryClient := poatypes.NewQueryClient(conn) // Get validators resp, err := queryClient.Validators(context.Background(), &poatypes.QueryValidatorsRequest{}) if err != nil { panic(err) } for _, val := range resp.Validators { fmt.Printf("Validator: %s, Power: %d\n", val.Metadata.Moniker, val.Power) } ``` *** ## Security Considerations 1. **Admin Key Security:** The admin private key has complete control over the validator set. Use hardware wallets or secure key management systems. 2. **Validator Public Keys:** Ensure validator public keys are correctly generated and stored securely. 3. **Power Distribution:** Consider the security implications of power concentration. Avoid giving a single validator >67% of total power. 4. **Operator Separation:** Use separate accounts for operator and admin roles to limit exposure. 5. 
**Fee Withdrawal:** Operators should regularly withdraw fees to prevent accumulation in the module.

***

## Appendix

### Public Key Formats

Ed25519 public keys should be base64-encoded:

```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "@type": "/cosmos.crypto.ed25519.PubKey",
  "key": "YUzyiqZzKN8BmLbl75gdXfbxQ2QtSYpPSwA85bZ3xuE="
}
```

### Address Formats

* **Operator Address:** Standard Cosmos SDK bech32 address (e.g., `cosmos1...`)
* **Consensus Address:** Can be derived from public key or use operator address for queries

### Power Units

* Voting power is represented as `int64`
* Total power affects block signing requirements (typically need >2/3 of total power for consensus)
* Zero power effectively removes a validator from the active set

# Architecture

Source: https://docs.cosmos.network/enterprise/components/poa/architecture

System architecture and module integration details for the PoA module

# PoA Module Architecture

## Overview

The Proof of Authority (PoA) module is a Cosmos SDK module that implements a permissioned consensus mechanism where a designated admin controls the validator set. Unlike traditional Proof of Stake systems, PoA validators are explicitly authorized and managed by an administrative authority rather than being selected based on staked tokens.
## Table of Contents

* [Architecture](#architecture)
  * [SDK Integration Points](#sdk-integration-points)
  * [Architectural Decisions](#architectural-decisions)
* [Admin Control Flow](#admin-control-flow)
  * [Setting Admin Authority](#setting-admin-authority)
  * [Managing Validator Set](#managing-validator-set)
  * [Updating Parameters](#updating-parameters)
* [Validator Lifecycle](#validator-lifecycle)
  * [Validator Registration](#validator-registration)
  * [Gaining Consensus Power](#gaining-consensus-power)
  * [Removing Validators](#removing-validators)
* [Fee Distribution](#fee-distribution) → See [distribution.md](/enterprise/components/poa/distribution)
* [Governance](#governance) → See [governance.md](/enterprise/components/poa/governance)
* [Technical Implementation](#technical-implementation)
  * [Storage Design](#storage-design)
  * [ABCI Integration](#abci-integration)
* [Dependencies](#dependencies)
* [Security Considerations](#security-considerations)

## Architecture

### SDK Integration Points

The PoA module plugs into the Cosmos SDK as a replacement for the standard staking module, providing an alternative consensus mechanism:

*(Diagram: the PoA module's integration with the Cosmos SDK modules x/auth, x/bank, and x/gov, the fee\_collector account, and the CometBFT consensus engine.)*

**Key Integration Points:**

1. **Replaces [x/staking](/sdk/v0.53/build/modules/staking/README)**: PoA provides validator management without token delegation or bonding
2. **Integrates with [x/gov](/sdk/v0.53/build/modules/gov/README)**: Custom governance hooks ensure only active validators can participate, and a tally function override assigns vote weight according to validator power ([details](#governance))
3. **Uses [x/auth](/sdk/v0.53/build/modules/auth/auth) & [x/bank](/sdk/v0.53/build/modules/bank/README)**: Standard account and token management for fee distribution ([details](#fee-distribution))
4.
**ABCI Lifecycle**: Implements `EndBlocker` to communicate validator updates to CometBFT ([details](#abci-integration))

### Architectural Decisions

**Admin-Controlled Validator Set**

Unlike proof-of-stake, where validators are determined by token weight, PoA uses a single admin address to authorize validators. This design choice:

* Enables permissioned networks with known validator identities
* Removes the token-bonding requirement from validator participation
* Centralizes trust in the admin address (see [Security Considerations](#security-considerations))

**Custom Fee Distribution**

Rather than using the standard [x/distribution](/sdk/v0.53/build/modules/distribution/README) module, PoA implements its own fee mechanism:

* Fees are routed to the PoA module account via a custom ante handler (see [Fee Routing Setup](/enterprise/components/poa/distribution#fee-routing-setup) for complete details)
* Fees are allocated proportionally to validator power (not delegated stake)
* Validators withdraw fees on-demand
* See [Fee Distribution](#fee-distribution) for complete details

**Governance Without Staking**

Standard SDK [governance](/sdk/v0.53/build/modules/gov/README) uses bonded tokens for voting weight.
PoA replaces this with validator power: * Only active validators (power > 0) can submit, deposit, or vote on proposals * Voting weight determined by validator power, not token holdings * Prevents non-validator governance participation * See [Governance](#governance) for implementation details **Storage Design Philosophy** The module uses `cosmossdk.io/collections` with a composite key structure: * Primary key: `(power, consensus_address)` enables efficient power-sorted iteration * Secondary indexes on consensus and operator addresses for fast lookups * Requires re-keying when power changes, but eliminates need for separate sorting * See [Storage Design](#storage-design) for technical details ## Admin Control Flow ### Setting Admin Authority The PoA module is controlled by a single admin address configured at genesis. This admin has exclusive authority to: * Update validator power (grant/revoke consensus participation) * Modify module parameters * Batch update the entire validator set The admin could be set to any authority that has an address. This includes a group from x/groups, the governance module account, and multisigs. **Location**: Admin address stored in [`x/poa/types/keys.go:10`](https://github.com/cosmos/cosmos-sdk/blob/main/enterprise/poa/x/poa/types/keys.go#L10) (params prefix) Only the admin can update itself with a parameter change. ### Managing Validator Set **MsgUpdateValidators** ([`x/poa/keeper/msg_server.go:72`](https://github.com/cosmos/cosmos-sdk/blob/main/enterprise/poa/x/poa/keeper/msg_server.go#L72)) The admin can batch update validators through a single transaction: 1. **Authentication**: Transaction must be signed by the admin address 2. **Validation**: Each validator update is validated for: * Valid public key * Non-negative power * Valid metadata (operator address, moniker, description) * No duplicate operator addresses 3. 
**Power Changes**: Any power change triggers: * Fee checkpoint (allocates pending fees before power changes) * Total power recalculation * ABCI validator update queue 4. **Consensus Update**: Changes take effect at the end of the current block ### Updating Parameters **MsgUpdateParams** ([`x/poa/keeper/msg_server.go:26`](https://github.com/cosmos/cosmos-sdk/blob/main/enterprise/poa/x/poa/keeper/msg_server.go#L26)) The admin can update module parameters (currently only the admin address itself). This requires: * Transaction signed by current admin * Validation of new parameters ## Validator Lifecycle ### Validator Registration **MsgCreateValidator** ([`x/poa/keeper/msg_server.go:45`](https://github.com/cosmos/cosmos-sdk/blob/main/enterprise/poa/x/poa/keeper/msg_server.go#L45)) **Permissionless Creation**: Any address can register as a validator candidate: 1. **Submit Registration**: Provide public key and metadata * **PubKey**: Ed25519 * **Operator Address**: Account that will receive fees and manage the validator * **Moniker**: Human-readable name (max 256 chars) * **Description**: Additional details (max 256 chars) 2. **Initial State**: Created validators have **power = 0** until the admin updates it via `MsgUpdateValidators` * Not participating in consensus * Not earning fees * Cannot vote in governance **Location**: [`x/poa/keeper/validator.go:95`](https://github.com/cosmos/cosmos-sdk/blob/main/enterprise/poa/x/poa/keeper/validator.go#L95) ### Gaining Consensus Power Validators can only gain consensus power through admin action: 1. **Admin Updates Power**: Via [`MsgUpdateValidators`](#managing-validator-set) 2. **Power > 0**: Validator becomes active 3. **ABCI Update**: CometBFT adds validator to active set at next block 4. **Fee Eligibility**: Validator starts accumulating fees proportionally 5. 
**Governance Rights**: Validator can submit proposals, deposit, and vote **Power Mechanics**: * Power is an integer representing voting weight * Higher power = more consensus influence and fee share * Power can be adjusted up or down by admin * Setting power = 0 removes validator from consensus without deleting **Location**: [`x/poa/keeper/validator.go:19`](https://github.com/cosmos/cosmos-sdk/blob/main/enterprise/poa/x/poa/keeper/validator.go#L19) ### Removing Validators **Soft Removal** (Removing power): * Admin sets validator power to 0 * Validator remains registered but inactive * Can be reactivated by admin later * Validator entry is preserved in the map of validators ## Fee Distribution The PoA module implements a custom checkpoint-based fee distribution system that allocates block fees proportionally to validator power. **Key Features**: * Fees accumulate in [the PoA module account](/enterprise/components/poa/distribution#fee-routing-setup) * Allocated proportionally to validator power at checkpoints * Checkpoints triggered by power changes or withdrawals * Validators withdraw accumulated fees on-demand * Uses DecCoins for precision to prevent dust accumulation **Why Checkpointing?**: Allows for efficient, lazy distribution rather than actively moving funds every block. **See [Fee Distribution Documentation](./distribution.md)** for complete details. **Location**: [`x/poa/keeper/distribution.go`](https://github.com/cosmos/cosmos-sdk/blob/main/enterprise/poa/x/poa/keeper/distribution.go) ## Governance The PoA module restricts governance participation to active validators only, using validator power as voting weight instead of bonded tokens. 
**Key Features**: * Uses existing x/gov module * Only active validators (power > 0) can submit, deposit, or vote on proposals * Voting weight equals validator power * Custom tally function replaces standard governance tallying * Admin indirectly controls governance through power distribution **Power-Based Voting**: Each validator's vote is weighted by their validator power set in the x/poa module. **See [Governance Documentation](./governance.md)** for complete details. **Location**: [`x/poa/keeper/governance.go`](https://github.com/cosmos/cosmos-sdk/blob/main/enterprise/poa/x/poa/keeper/governance.go) and [`x/poa/keeper/hooks.go`](https://github.com/cosmos/cosmos-sdk/blob/main/enterprise/poa/x/poa/keeper/hooks.go) ## Technical Implementation ### Storage Design **Collections Schema** ([`x/poa/types/keys.go`](https://github.com/cosmos/cosmos-sdk/blob/main/enterprise/poa/x/poa/types/keys.go)) The module uses `cosmossdk.io/collections` for type-safe state management: | Prefix | Collection | Key Type | Value Type | Purpose | | ------ | ------------------------ | ----------------- | ----------------- | ------------------------------------- | | 0 | `params` | - | `Params` | Admin address and module config | | 1 | `validators` | `(int64, string)` | `Validator` | Primary map, sorted by power | | 2 | `validator_by_consensus` | `string` | `(int64, string)` | Index: consensus addr → composite key | | 3 | `validator_by_operator` | `string` | `(int64, string)` | Index: operator addr → composite key | | 4 | `total_power` | - | `int64` | Sum of all validator power | | 5 | `total_allocated` | - | `ValidatorFees` | Sum of allocated fees | **Location**: [`x/poa/keeper/keeper.go:16`](https://github.com/cosmos/cosmos-sdk/blob/main/enterprise/poa/x/poa/keeper/keeper.go#L16) ### ABCI Integration **EndBlocker** ([`x/poa/keeper/abci.go:9`](https://github.com/cosmos/cosmos-sdk/blob/main/enterprise/poa/x/poa/keeper/abci.go#L9)) The module integrates with CometBFT consensus through ABCI: 1. 
**Power Changes**: When validator power changes, create `ValidatorUpdate` 2. **Queue Updates**: Store updates in memory queue 3. **EndBlock**: At end of block, return all queued updates 4. **CometBFT Processing**: Consensus engine applies updates for next block 5. **Clear Queue**: After returning, clear the queue **ValidatorUpdate Format**: ``` ValidatorUpdate { PubKey: PublicKey // Consensus public key Power: int64 // New power (0 = remove) } ``` **Location**: [`x/poa/module.go:128`](https://github.com/cosmos/cosmos-sdk/blob/main/enterprise/poa/x/poa/module.go#L128) ## Security Considerations 1. **Single Point of Control**: * Admin address controls entire validator set 2. **Validator Registration**: * Anyone can register as validator candidate * Only admin can grant consensus power 3. **Total Power Invariant**: * Total power must remain > 0 * Prevents zero-power chain halts * Validated on every power adjustment via a checkpoint trigger 4. **Governance Restrictions**: * Only active validators (power > 0) can participate * Prevents governance spam from unauthorized users * Ensures governance represents actual consensus participants 5. **Validator Indexing**: * Unique consensus address prevents duplicate validators * Unique operator address prevents fee confusion # Fee Distribution Source: https://docs.cosmos.network/enterprise/components/poa/distribution Fee distribution mechanics and algorithms in the PoA module # Fee Distribution ## Overview The PoA module implements a custom fee distribution mechanism based on validator power. Unlike the standard Cosmos SDK x/distribution module, PoA uses a checkpoint-based system to allocate fees proportionally to validators without automatic distribution. ## How Fees Accumulate Fees flow through the PoA system differently than standard Cosmos SDK: 1. 
**Block Fees**: Transaction fees collected in each block go to the `fee_collector` module account by default, or to the PoA module account if configured (see [Fee Routing Setup](#fee-routing-setup))

2. **Checkpoint System**: Allocated fees are updated for validators when:
   * Any validator power changes
   * Any validator withdraws fees

**Why Checkpointing?**: Ensures fair distribution when power changes. If power changes mid-period, fees are allocated based on the old power distribution before the change takes effect.

**Location**: [`x/poa/keeper/distribution.go:18`](https://github.com/cosmos/cosmos-sdk/blob/main/enterprise/poa/x/poa/keeper/distribution.go#L18)

## Distribution Algorithm

### Checkpoint-Based Allocation

The PoA module uses a checkpoint system to allocate fees fairly when validator power changes. Rather than distributing fees actively at every block, allocation happens lazily at discrete checkpoints.

**Checkpoint Triggers**:

* Any validator power change (via `MsgUpdateValidators`)
* Any fee withdrawal (via `MsgWithdrawFees`)

**Unallocated Fees Calculation**:

At checkpoint time $t$, calculate unallocated fees:

$$
U_t = B_{collector}(t) - A_{total}(t)
$$

Where:

* $U_t$ = unallocated fees at checkpoint $t$
* $B_{collector}(t)$ = current balance in the PoA module account
* $A_{total}(t) = \sum_{i=1}^{n} F_i(t)$ = sum of all previously allocated fees across all validators (0 if no checkpoints have been done)

**Proportional Share Allocation**:

For each active validator $i$ (where $P_i(t) > 0$), allocate a share proportional to their power:

$$
S_i(t) = U_t \times \frac{P_i(t)}{P_{total}(t)}
$$

Where:

* $S_i(t)$ = share allocated to validator $i$ at checkpoint $t$
* $P_i(t)$ = voting power of validator $i$ at checkpoint $t$
* $P_{total}(t) = \sum_{j=1}^{n} P_j(t)$ = sum of all validator powers

**Accumulated Fees Update**:

After allocation, update each validator's accumulated fees:

$$
F_i(t+1) = F_i(t) + S_i(t)
$$

Where:

* $F_i(t)$ = validator $i$'s accumulated
fees before checkpoint * $F_i(t+1)$ = validator $i$'s accumulated fees after checkpoint * $S_i(t)$ = share allocated in this checkpoint **Total Allocated Tracking**: Update the global allocated tracker: $$ A_{total}(t+1) = A_{total}(t) + U_t $$ After this checkpoint, $A_{total}(t+1) = B_{collector}(t)$ (all fees are now allocated). ### Example Checkpoint Sequence **Initial State** (before checkpoint): * PoA module account balance: $B_{collector} = 1000$ tokens * Total allocated: $A_{total} = 400$ tokens (from previous checkpoints) * Validator A: $P_A = 50$, $F_A = 200$ tokens allocated * Validator B: $P_B = 50$, $F_B = 200$ tokens allocated * Total power: $P_{total} = 100$ **Admin Action**: Admin submits `MsgUpdateValidators` to change power distribution to 30/70 **Checkpoint Triggered** (before power change takes effect): 1. Calculate unallocated: $U = 1000 - 400 = 600$ tokens 2. Allocate shares based on **current power** (50/50): * Validator A: $S_A = 600 \times \frac{50}{100} = 300$ tokens * Validator B: $S_B = 600 \times \frac{50}{100} = 300$ tokens 3. Update accumulated fees: * Validator A: $F_A = 200 + 300 = 500$ tokens * Validator B: $F_B = 200 + 300 = 500$ tokens 4. Update total allocated: $A_{total} = 400 + 600 = 1000$ tokens **After Checkpoint** - Power Change Applied: * Validator A: $P_A = 30$ (new power for future blocks) * Validator B: $P_B = 70$ (new power for future blocks) * All 1000 tokens now allocated ($A_{total} = B_{collector}$) * Each validator has updated $F_i$ available for withdrawal **Why This Matters**: Validator A earned 300 tokens (50% share) based on their power during the period when those fees were collected. After the checkpoint, their power drops to 30%, so future fees will be split 30/70. Checkpointing ensures validators are rewarded based on the work they actually performed. **Precision**: Uses `DecCoins` (decimal coins) to prevent rounding dust accumulation. 
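The checkpoint sequence above can be replayed numerically. The following is an illustrative Python sketch, with exact `Fraction` arithmetic standing in for the module's `DecCoins`; it is not the module's actual code:

```python theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
from fractions import Fraction

def checkpoint(balance, total_allocated, powers, allocated):
    """Allocate unallocated fees proportionally to current power (one checkpoint)."""
    unallocated = balance - total_allocated               # U_t = B_collector - A_total
    total_power = sum(powers.values())                    # P_total
    for val, power in powers.items():
        share = unallocated * Fraction(power, total_power)  # S_i = U_t * P_i / P_total
        allocated[val] += share                           # F_i(t+1) = F_i(t) + S_i
    return total_allocated + unallocated                  # A_total now equals balance

# Initial state from the example: balance 1000, 400 already allocated, 50/50 power
powers = {"A": 50, "B": 50}
allocated = {"A": Fraction(200), "B": Fraction(200)}
total_allocated = checkpoint(Fraction(1000), Fraction(400), powers, allocated)

assert allocated == {"A": Fraction(500), "B": Fraction(500)}  # 600 split 300/300
assert total_allocated == Fraction(1000)                      # all fees allocated

# Only after the checkpoint does the admin's new 30/70 split apply to future fees
powers = {"A": 30, "B": 70}
```

Running the checkpoint before applying the new powers is the key property: the 600 unallocated tokens are split by the 50/50 weights in force while they were earned.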
Each validator tracks fractional amounts that are too small to withdraw. ## Withdrawing Fees **MsgWithdrawFees** ([`x/poa/keeper/msg_server.go:91`](https://github.com/cosmos/cosmos-sdk/blob/main/enterprise/poa/x/poa/keeper/msg_server.go#L91)) Any validator operator can withdraw accumulated fees: 1. **Submit Withdrawal**: Signed by operator address 2. **Checkpoint**: System checkpoints all validators first (allocates any pending fees) 3. **Truncate**: Decimal coins truncated to whole coins 4. **Transfer**: Coins transferred from the PoA module account to operator address 5. **Update Tracking**: Total allocated decreases by withdrawn amount 6. **Remainder**: Decimal remainder stays in validator's allocated balance **Example**: ``` Validator has: 100.7543 utokens allocated Withdrawal: 100 utokens transferred to operator Remainder: 0.7543 utokens remain allocated (less than least significant utoken digit) ``` **Location**: [`x/poa/keeper/distribution.go:106`](https://github.com/cosmos/cosmos-sdk/blob/main/enterprise/poa/x/poa/keeper/distribution.go#L106) ## Withdrawal Formula When validator $i$ withdraws fees: $$ W_i = \lfloor F_i \rfloor $$ $$ F_i' = F_i - W_i $$ $$ A_{total}' = A_{total} - W_i $$ Where: * $W_i$ = amount withdrawn (truncated to integer coins) * $F_i$ = validator's allocated fees before withdrawal * $F_i'$ = validator's allocated fees after withdrawal (decimal remainder) * $\lfloor F_i \rfloor$ = floor function (truncate decimals) * $A_{total}'$ = updated total allocated across all validators ## Fee Routing Setup PoA has its own module account for collecting fees. Enabling the PoA module account is recommended to keep fee accounting isolated and accurate. If not enabled, fees are deposited into the standard `fee_collector` account by default. To enable the PoA module account, two wiring changes are required: ### 1. 
Register the PoA Module Account Register `poatypes.ModuleName` in the `maccPerms` map passed to `authkeeper.NewAccountKeeper`: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} app.AccountKeeper = authkeeper.NewAccountKeeper( appCodec, runtime.NewKVStoreService(storeKeys[authtypes.StoreKey]), authtypes.ProtoBaseAccount, map[string][]string{ authtypes.FeeCollectorName: nil, govtypes.ModuleName: {authtypes.Burner, authtypes.Staking}, poatypes.ModuleName: nil, // register PoA module account }, // ... ) ``` **Source**: [`simapp/app.go`](https://github.com/cosmos/cosmos-sdk/blob/7bc1b146d437d834d971f415924104188203c96f/enterprise/poa/simapp/app.go#L191) ### 2. Configure the Ante Handler Use `WithFeeRecipientModule` on `NewDeductFeeDecorator` to route fees to the PoA module account: ```go {9} theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} anteDecorators := []sdk.AnteDecorator{ ante.NewSetUpContextDecorator(), ante.NewExtensionOptionsDecorator(options.ExtensionOptionChecker), ante.NewValidateBasicDecorator(), ante.NewTxTimeoutHeightDecorator(), ante.NewValidateMemoDecorator(options.AccountKeeper), ante.NewConsumeGasForTxSizeDecorator(options.AccountKeeper), ante.NewDeductFeeDecorator(options.AccountKeeper, options.BankKeeper, options.FeegrantKeeper, options.TxFeeChecker). 
WithFeeRecipientModule(poatypes.ModuleName), // redirect fees to PoA module account ante.NewSetPubKeyDecorator(options.AccountKeeper), ante.NewValidateSigCountDecorator(options.AccountKeeper), ante.NewSigGasConsumeDecorator(options.AccountKeeper, options.SigGasConsumer), ante.NewSigVerificationDecorator(options.AccountKeeper, options.SignModeHandler, options.SigVerifyOptions...), ante.NewIncrementSequenceDecorator(options.AccountKeeper), } ``` **Source**: [`simapp/ante.go`](https://github.com/cosmos/cosmos-sdk/blob/7bc1b146d437d834d971f415924104188203c96f/enterprise/poa/simapp/ante.go#L49) `WithFeeRecipientModule` is backwards compatible — omitting it defaults to the standard `fee_collector` behavior. ## Security Considerations 1. **Decimal Precision**: * Uses DecCoins to prevent dust accumulation * Validators track fractional amounts * Remainders preserved across withdrawals * Prevents rounding errors from accumulating # Governance Integration Source: https://docs.cosmos.network/enterprise/components/poa/governance Governance integration and power-based voting in the PoA module # Governance ## Overview The PoA module integrates with Cosmos SDK governance to restrict participation to authorized validators only. Unlike standard governance that uses bonded tokens for voting weight, PoA governance uses validator power as the basis for voting. ## Validator-Only Governance The PoA module restricts governance participation to authorized validators only through governance hooks. **Governance Hooks** ([`x/poa/keeper/hooks.go`](https://github.com/cosmos/cosmos-sdk/blob/main/enterprise/poa/x/poa/keeper/hooks.go)) The module implements `govtypes.GovHooks`: 1. **AfterProposalSubmission**: Only authorized validators can submit proposals 2. **AfterProposalDeposit**: Only authorized validators can deposit on proposals 3. 
**AfterProposalVote**: Only authorized validators can vote

**Authorized Validator Definition**:

* Registered in PoA module
* Power > 0
* Has valid operator address

**Rejected Actions**:

* If a non-validator attempts a governance action → transaction fails
* If a validator has power = 0 → transaction fails
* Error: "voter X is not an active PoA validator"

**Location**: [`x/poa/keeper/governance.go:92`](https://github.com/cosmos/cosmos-sdk/blob/main/enterprise/poa/x/poa/keeper/governance.go#L92)

## Voting Power

**Custom Vote Tallying** ([`x/poa/keeper/governance.go:18`](https://github.com/cosmos/cosmos-sdk/blob/main/enterprise/poa/x/poa/keeper/governance.go#L18)). An example of the wiring can be found in the [SimApp](https://github.com/cosmos/cosmos-sdk/blob/main/enterprise/poa/simapp/app.go#L197-214).

Standard governance uses staked tokens as voting weight. PoA governance uses validator power:

1. **Vote Collection**: System iterates all votes on a proposal
2. **Validator Check**: For each vote, verify the voter is an active PoA validator
3. **Weight Calculation**: Use the validator's power as voting weight
4. **Weighted Options**: Supports split votes, exactly like x/staking in traditional PoS governance (e.g., 70% Yes, 30% Abstain)
5.
**Tally Results**: Sum weighted votes by option ### Vote Tallying Algorithm **Voting Power Formula**: $$ V_i = P_i $$ Where: * $V_i$ = voting power of validator $i$ * $P_i$ = validator power (consensus weight) **Weighted Vote Calculation**: For a validator casting a split vote across multiple options: $$ W_{i,o} = V_i \times w_{i,o} $$ Where: * $W_{i,o}$ = vote weight from validator $i$ for option $o$ * $w_{i,o}$ = weight assigned to option $o$ by validator $i$ (where $\sum_{o} w_{i,o} = 1$) **Total Tally per Option**: $$ T_o = \sum_{i \in voters} W_{i,o} $$ Where: * $T_o$ = total votes for option $o$ * Sum over all validators who voted ### Example **Validator A**: $P_A = 100$, votes 100% Yes * $W_{A,Yes} = 100 \times 1.0 = 100$ **Validator B**: $P_B = 50$, votes 60% Yes, 40% No * $W_{B,Yes} = 50 \times 0.6 = 30$ * $W_{B,No} = 50 \times 0.4 = 20$ **Totals**: * $T_{Yes} = 130$ * $T_{No} = 20$ ## Proposal Lifecycle ### 1. Proposal Submission **[MsgSubmitProposal](https://github.com/cosmos/cosmos-sdk/blob/main/proto/cosmos/gov/v1/tx.proto#L54-L65)** (standard x/gov module) When a proposal is submitted: 1. Standard governance validates the proposal content 2. `AfterProposalSubmission` hook is called 3. PoA module checks if proposer is authorized validator: * Look up proposer by operator address * Verify validator exists and has $P > 0$ * If not active, reject with error 4. If valid, proposal enters deposit period **Restriction**: Only authorized validators can submit proposals, preventing spam from non-consensus participants. ### 2. Deposit Period **[MsgDeposit](https://github.com/cosmos/cosmos-sdk/blob/main/proto/cosmos/gov/v1/tx.proto#L90-L98)** (standard x/gov module) When a deposit is made: 1. Standard governance processes the deposit 2. `AfterProposalDeposit` hook is called 3. PoA module checks if depositor is authorized validator 4. 
If deposit threshold reached, proposal moves to voting period **Restriction**: Only authorized validators can deposit, ensuring only consensus participants can advance proposals. ### 3. Voting Period **[MsgVote](https://github.com/cosmos/cosmos-sdk/blob/main/proto/cosmos/gov/v1/tx.proto#L100-L108)** or **[MsgVoteWeighted](https://github.com/cosmos/cosmos-sdk/blob/main/proto/cosmos/gov/v1/tx.proto#L110-L118)** (standard x/gov module) When a vote is cast: 1. Standard governance records the vote 2. `AfterProposalVote` hook is called 3. PoA module validates voter is authorized validator 4. If invalid, transaction fails **Vote Options**: * `Yes`: Support the proposal * `No`: Oppose the proposal * `NoWithVeto`: Oppose and veto (can burn deposits if threshold met) * `Abstain`: Participate in quorum without taking a position **Weighted Voting**: Validators can split their vote across multiple options, with weights summing to 1. ### 4. Vote Tallying At the end of the voting period, the [custom tally function](#vote-tallying-algorithm) is called: **NewPoACalculateVoteResultsAndVotingPowerFn** ([`x/poa/keeper/governance.go:18`](https://github.com/cosmos/cosmos-sdk/blob/main/enterprise/poa/x/poa/keeper/governance.go#L18)) 1. Iterate all votes on the proposal 2. For each vote, look up the validator by voter address 3. If validator is not active ($P \leq 0$), skip the vote 4. Otherwise, use validator power as voting weight 5. For weighted votes, distribute power across options 6. Sum all weighted votes by option 7. Apply standard governance thresholds: * Quorum: Minimum participation percentage * Threshold: Minimum "Yes" percentage to pass * Veto: Maximum "NoWithVeto" percentage before rejection **Result**: Proposal passes, fails, or is vetoed based on power-weighted votes. 
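The tally and threshold rules can be illustrated with a small numeric sketch. This is TypeScript for illustration only (the module itself is written in Go); the type shapes, parameter names, and the `decideProposal` function are assumptions for this example, not the module's actual API:

```typescript
// Illustrative sketch: apply standard governance thresholds to a
// power-weighted PoA tally. All names here are hypothetical.
type Tally = { yes: number; no: number; noWithVeto: number; abstain: number };
type TallyParams = { quorum: number; threshold: number; vetoThreshold: number };

function decideProposal(
  tally: Tally,
  totalPower: number, // total power of all authorized validators
  params: TallyParams,
): "passed" | "rejected" | "vetoed" {
  const voted = tally.yes + tally.no + tally.noWithVeto + tally.abstain;
  if (voted === 0 || totalPower === 0) return "rejected";
  // Quorum: participation measured against total validator power, not bonded tokens
  if (voted / totalPower < params.quorum) return "rejected";
  // Veto: NoWithVeto share of all votes cast
  if (tally.noWithVeto / voted > params.vetoThreshold) return "vetoed";
  // Threshold: Yes share of non-abstain votes
  const nonAbstain = tally.yes + tally.no + tally.noWithVeto;
  if (nonAbstain > 0 && tally.yes / nonAbstain > params.threshold) return "passed";
  return "rejected";
}

// Worked numbers: 58 Yes, 12 No, 30 Abstain over 100 total power,
// with typical parameter values.
const outcome = decideProposal(
  { yes: 58, no: 12, noWithVeto: 0, abstain: 30 },
  100,
  { quorum: 0.334, threshold: 0.5, vetoThreshold: 0.334 },
);
console.log(outcome); // "passed" (quorum 100%, Yes = 58/70 ≈ 82.9%)
```

The ordering matters: quorum is checked first, then veto, then the Yes threshold, which mirrors how standard governance evaluates results.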
## Implementation Details ### Governance Hooks **Location**: [`x/poa/keeper/hooks.go`](https://github.com/cosmos/cosmos-sdk/blob/main/enterprise/poa/x/poa/keeper/hooks.go) The module implements the `govtypes.GovHooks` interface: ``` type GovHooks interface { AfterProposalSubmission(ctx, proposalID, depositorAddr) AfterProposalDeposit(ctx, proposalID, depositorAddr) AfterProposalVote(ctx, proposalID, voterAddr) // ... other hooks } ``` Each hook implementation: 1. Extracts the operator address from the context 2. Looks up the validator by operator address 3. Checks if validator exists and has power > 0 4. Returns error if validation fails ### Custom Tally Function **Location**: [`x/poa/keeper/governance.go:18`](https://github.com/cosmos/cosmos-sdk/blob/main/enterprise/poa/x/poa/keeper/governance.go#L18) The tally function replaces the standard governance tally: ```go theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} func NewPoACalculateVoteResultsAndVotingPowerFn(keeper) TallyFn { return func(ctx, proposal) (totalVotingPower, results) { // Iterate votes for vote in votes(proposal) { validator = keeper.GetValidatorByOperator(vote.voter) if validator == nil || validator.Power <= 0 { continue // Skip non-authorized validators } // Add validator power to total totalVotingPower += validator.Power // Apply vote weights for option, weight in vote.options { results[option] += validator.Power * weight } } return totalVotingPower, results } } ``` ## Governance Parameters The standard governance module parameters still apply: * **MinDeposit**: Minimum tokens required to enter voting period * **MaxDepositPeriod**: Time limit for reaching minimum deposit * **VotingPeriod**: Duration of the voting period * **Quorum**: Minimum participation rate (fraction of total power) * **Threshold**: Minimum "Yes" rate to pass (fraction of non-abstain votes) * **VetoThreshold**: Maximum "NoWithVeto" rate before rejection **Key Difference**: Quorum is 
calculated as a percentage of total validator power, not total bonded tokens. ## Security Considerations 1. **Validator Exclusivity**: * Only authorized validators (power > 0) can participate * Prevents sybil attacks through unauthorized validator spam * Ensures governance represents actual consensus participants 2. **Power-Based Voting**: * Voting weight tied to consensus power * Admin controls power distribution, thus controls governance indirectly 3. **Admin Governance Control**: * Admin can change validator power at any time * Admin can effectively control governance by adjusting power * Consider multi-sig admin or governance-controlled admin changes 4. **Proposal Spam Prevention**: * Restricting submissions to authorized validators reduces spam * Deposit requirements still apply * Validators have reputational stake in proposal quality ## Comparison to Standard Governance | Aspect | Standard Cosmos Governance | PoA Governance | | ------------------ | --------------------------------------- | ------------------------------------- | | Who can vote | Token holders (delegators + validators) | Authorized validators only | | Voting weight | Bonded tokens | Validator power | | Who can propose | Anyone with min deposit | Authorized validators only | | Who can deposit | Anyone | Authorized validators only | | Vote tallying | Sum of bonded tokens | Sum of validator power | | Quorum calculation | % of bonded tokens | % of total validator power | | Admin control | No direct control | Admin controls power → controls votes | ## Example Governance Flow **Scenario**: Validator A wants to propose a parameter change 1. **Submit Proposal**: * Validator A (power = 40) submits `MsgSubmitProposal` * Hook verifies A is authorized validator * Proposal enters deposit period 2. **Reach Deposit**: * Validator B (power = 30) deposits * Validator C (power = 30) deposits * Deposit threshold reached → voting period starts 3. 
**Voting**: * Validator A: 100% Yes (40 power → 40 Yes votes) * Validator B: 60% Yes, 40% No (30 power → 18 Yes, 12 No) * Validator C: 100% Abstain (30 power → 30 Abstain) * Total power: 100 (all authorized validators) 4. **Tally**: * Total voting power: 100 (all voted) * Quorum: 100/100 = 100% ✓ (assuming 33% quorum) * Results: 58 Yes, 12 No, 30 Abstain (out of 70 non-abstain) * Threshold: 58/70 = 82.9% Yes ✓ (assuming 50% threshold) * **Proposal passes** # Overview Source: https://docs.cosmos.network/enterprise/components/poa/overview Enterprise-Ready Network Security and Operations The Proof of Authority (PoA) module is a Cosmos SDK module that enables permissioned consensus for networks requiring controlled participation. A designated administrative authority manages the validator set directly, ensuring that only approved operators participate in block production and governance. Unlike traditional Proof-of-Stake systems, validator membership is not determined by token staking. Validators are explicitly authorized, updated, and removed through on-chain administrative actions, enabling predictable operations and compliance-aligned governance. The PoA module is designed for networks that require: 1. **Permissioned Operators:** A configurable administrative authority defines validators and governance participants to meet organizational security, compliance, or consortium requirements. 2. **Instant Validator Updates:** Add, remove, or replace validators, adjust relative validator weights, and rotate keys in a single atomic on-chain action. 3. **Token-Free Operation:** Launch, operate, and govern a network without issuing or managing a native token. 4. **Future-Proof Architecture:** Seamlessly transition to Proof-of-Stake and introduce a token when needed. 
## The best available option for Proof of Authority | Characteristic | Alternatives | Cosmos PoA Module | | --------------------------------------------- | ------------ | ----------------- | | Compatibility with Cosmos SDK v0.53+ | ✗ | ✓ | | Support for token-free operation | ✗ | ✓ | | Flexible governance authority | ✗ | ✓ | | Programmable penalties (jailing, slashing) | ✗ | ✓ | | Included in Cosmos bug bounty program | ✗ | ✓ | | Ongoing development by Cosmos core developers | ✗ | ✓ | ## Source Code The source code for the Proof of Authority module can be found [here](https://github.com/cosmos/cosmos-sdk/tree/main/enterprise/poa). ## Available Documentation This directory contains detailed documentation for the Proof of Authority module. * **[API Reference](/enterprise/components/poa/api)** - Complete API reference for gRPC queries and transactions * **[Architecture](/enterprise/components/poa/architecture)** - System architecture and module integration details * **[Distribution](/enterprise/components/poa/distribution)** - Fee distribution mechanics and algorithms * **[Governance](/enterprise/components/poa/governance)** - Governance integration and power-based voting ## Availability The Proof of Authority module is commercially licensed and available as part of the Cosmos Enterprise subscription. Contact [institutions@cosmoslabs.io](mailto:institutions@cosmoslabs.io) to learn more about how you can use the Proof of Authority module for your chain. # Balances, Gas and Transaction Fee Utilities Source: https://docs.cosmos.network/skip-go/client/balance-gas-and-fee-tooling This page details the utility functions for token balances, gas calculations, and transaction fees in Skip Go. ## Getting token balances To query token balances, you can use the `balances` function or via the [REST API](../api-reference/prod/info/post-v2infobalances). 
You have the option to specify a set of token denoms to query or leave the array empty to fetch balances for all denoms associated with an address.

* **When no denoms are specified**: The response will include only the denoms for which you have a balance.
* **When denoms are specified**: The response will include balances for all the specified denoms. If you have no balance for a given denom, it will be included with a balance of zero. If there is an error fetching a given denom (e.g. the chain is down), the response will include an error message for that denom.

The balance query is currently compatible with all Skip Go-supported assets, excluding cw20 assets, across SVM, EVM, and Cosmos chains.

```ts TypeScript (Client) theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
const userBalances = await balances({
  chains: {
    "noble-1": {
      address: noble.address, // noble1...
      denoms: ["uusdc"]
    },
    "osmosis-1": {
      address: osmosis.address, // osmo1...
      denoms: [] // Fetch all denoms for address
    }
  },
  apiUrl: "https://api.skip.build"
});
```

```JSON JSON (REST API) theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// POST /v2/info/balances
{
  "chains": {
    "137": {
      "address": "0x24a9267cE9e0a8F4467B584FDDa12baf1Df772B5",
      "denoms": [
        "polygon-native",
        "0x3c499c542cEF5E3811e1192ce70d8cC03d5c3359"
      ]
    },
    "osmosis-1": {
      "address": "osmo12xufazw43lanl8dkvf3l7y9zzm8n3zswftw2yc",
      "denoms": [] // Fetch all denoms for address
    }
  }
}
```

## Getting info about gas and fees

**Video Overview**

Here's a [video overview](https://www.loom.com/share/063e96e126d2422bb621b5b0ecf9be2c) of our gas and transaction fee tooling.

These functions are useful for getting information about our default gas and fee values or for estimating the fee for a particular transaction (e.g. so you can build a working MAX button).

### `getRecommendedGasPrice`

This returns the gas price (i.e.
the price of a single unit of gas) the API recommends you use for transactions on a particular chain.

```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
async getRecommendedGasPrice({chainId, apiUrl, apiKey}: {chainId: string, apiUrl?: string, apiKey?: string}) -> GasPrice
```

`GasPrice` is a [cosmjs](https://cosmos.github.io/cosmjs/latest/stargate/classes/GasPrice.html) type giving the recommended fee denom and recommended price amount (fee/gas):

```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type GasPrice = {
  denom: string;
  amount: Decimal
}
```

### `getFeeInfoForChain`

This will return high, medium, and low gas prices for a particular chain, given by `chainId`, along with the default fee denom as a `FeeAsset` object:

```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
async getFeeInfoForChain({chainId, apiUrl, apiKey}: {chainId: string, apiUrl?: string, apiKey?: string}) -> FeeAsset
```

```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type FeeAsset = {
  denom: string;
  gasPrice: GasPriceInfo;
};

type GasPriceInfo = {
  low: string;
  average: string;
  high: string;
};
```

An undefined response indicates that the API cannot find up-to-date gas price information for the chain.

## Settings on `ExecuteRouteOptions` for customizing how gas & fees are set on transactions

### `ExecuteRouteOptions.getGasPrice`

This field in `ExecuteRouteOptions` allows you to override our default gas price on a per-chain basis for any transactions created in the router (e.g.
in `executeRoute`):

`getGasPrice?: (chainId: string) => Promise<GasPrice>;`

The argument is a function that takes in a chain ID and returns a gas price for that chain as a `GasPrice` object from CosmJS:

```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
type GasPrice = {
  denom: string;
  amount: Decimal
}
```

If you provide a function that only returns a price for a subset of chains, the router will use its default price in cases where yours is missing. If it can't find a default price for a chain, it will error.

### `ExecuteRouteOptions.gasAmountMultiplier`

This field in `ExecuteRouteOptions` allows you to override the gas multiplier used by default in the SDK. The default value is 1.5. Increasing this value provides higher confidence that transactions will not run out of gas while executing, but increases the fee for the end user.

The gas multiplier increases a transaction's `gasAmount` multiplicatively. To get a final gas amount, the router:

* Simulates a transaction to get an initial `gasAmount`
* Multiplies the gas consumed in the simulation by `gasAmountMultiplier`

# Getting Started Source: https://docs.cosmos.network/skip-go/client/getting-started @skip-go/client is a TypeScript library that streamlines interaction with the Skip Go API, enabling cross-chain swaps and transfers across multiple ecosystems.
Install the library using npm or yarn: ```Shell npm theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} npm install @skip-go/client ``` ```Shell yarn theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} yarn add @skip-go/client ``` If you're using `yarn` (or another package manager that doesn't install peer dependencies by default) you may need to install these peer dependencies as well: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} yarn add viem @solana/web3.js ``` To start integrating with the Skip Go API, you no longer initialize a `SkipClient` instance. Instead, you configure the library once and then import and use individual functions directly. ### Initialization Options The library can be initialized using `setClientOptions` or `setApiOptions`. Both functions accept `apiUrl` and `apiKey`. * **`setClientOptions(options)`:** Use this if you plan to use `executeRoute`. It configures your API credentials and lets you provide chain-specific settings like endpoints, Amino types, and registry types. * **`setApiOptions(options)`:** Use this if you primarily need to configure API interaction (`apiUrl`, `apiKey`) or set up affiliate fees (`chainIdsToAffiliates`). This option does not configure `endpointOptions`, `aminoTypes`, or `registryTypes`. You typically call one of these functions once at application startup. ```ts Import and Initialize theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} import { setClientOptions, setApiOptions, chains, assets, route, executeRoute, // ... other functions you need } from "@skip-go/client"; // Example: Initialize for executeRoute usage setClientOptions({ apiUrl: "YOUR_API_URL", // Optional: defaults to Skip API apiKey: "YOUR_API_KEY", // Optional: required for certain features endpointOptions: { /* ... */ }, // ... 
other options like aminoTypes, registryTypes, cacheDurationMs }); // Example: Initialize for direct API calls (simpler, if not using executeRoute) setApiOptions({ apiUrl: "YOUR_API_URL", // Optional: defaults to Skip API apiKey: "YOUR_API_KEY", // Optional: required for certain features }); // Now you can call functions directly, e.g.: // const supportedChains = await chains(); ``` ### Configuration Parameters Below are the common configuration parameters. Refer to the specific `options` type for `setClientOptions` or `setApiOptions` for full details. * `apiUrl?: string`: Override the default API URL. Can be passed to `setClientOptions` or `setApiOptions`, or directly to individual API functions if neither initialization function is called. * `apiKey?: string`: Your Skip API key. Can be passed to `setClientOptions` or `setApiOptions`, or directly to individual API functions if neither initialization function is called. Required for certain features. * `endpointOptions?: EndpointOptions`: Provide RPC and REST endpoints for specific chains (used by `setClientOptions`). * `aminoTypes?: AminoConverters`: Additional amino types for message encoding (used by `setClientOptions`). * `registryTypes?: Iterable<[string, GeneratedType]>`: Additional registry types (used by `setClientOptions`). * `cacheDurationMs?: number`: Duration in milliseconds to cache responses for functions like `chains` and `assets` (used by `setClientOptions`). To execute transactions, you need to set up signers for the ecosystems you plan to interact with. Below are examples for Cosmos SDK, EVM, and Solana (SVM). Note that for EVM and SVM, you'll need to install additional libraries. 
### Signer Setup ```ts Cosmos Signer theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // For Cosmos transactions, we'll use Keplr wallet from the window object const getCosmosSigner = async (chainId: string) => { const key = await window.keplr?.getKey(chainId); if (!key) throw new Error("Keplr not installed or chain not added"); return key.isNanoLedger ? window.keplr?.getOfflineSignerOnlyAmino(chainId) : window.keplr?.getOfflineSigner(chainId); }; ``` ```ts EVM Signer theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // For EVM transactions, we'll use MetaMask and viem // npm install viem import { createWalletClient, custom, Account } from "viem"; import { mainnet } from 'viem/chains'; const getEvmSigner = async (chainId: string) => { const ethereum = window.ethereum; if (!ethereum) throw new Error("MetaMask not installed"); const accounts = await ethereum.request({ method: 'eth_requestAccounts' }) as Account[]; const account = accounts?.[0] if (!account) throw new Error('No accounts found'); const client = createWalletClient({ account, chain: mainnet, transport: custom(window.ethereum), }); return client; } ``` ```ts Svm Signer theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // For Solana transactions, we'll use the Phantom wallet adapter // npm install @solana/wallet-adapter-phantom import { PhantomWalletAdapter } from '@solana/wallet-adapter-phantom'; const getSvmSigner = async () => { const phantom = new PhantomWalletAdapter(); await phantom.connect(); return phantom; }; ``` With the library initialized, you can query balances, supported chains and assets using the imported functions. 
### Query Examples

```ts Supported Chains theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
import { chains } from "@skip-go/client";

// returns a Chain[] of all supported Cosmos mainnet chains
const cosmosChains = await chains();

// include EVM and SVM chains
const allChains = await chains({
  includeEvm: true,
  includeSvm: true,
});

// only show testnet chains
const testnetChains = await chains({ onlyTestnets: true });
```

```ts Supported Assets theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
import { assets } from "@skip-go/client";

// returns a record of assets keyed by chain ID
const allAssets = await assets({
  includeEvmAssets: true,
  includeSvmAssets: true,
});

// get assets filtered by chain ID
const cosmosHubAssets = await assets({
  chainId: 'cosmoshub-4',
  includeCw20Assets: true,
});

// only get assets for specific chains
const specificAssets = await assets({
  chainIds: ['osmosis-1', '1'], // Osmosis and Ethereum
});
```

```ts Token Balances theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
import { balances } from "@skip-go/client";

// Define the request structure
interface BalancesRequest {
  chainId: string;
  address: string;
  denoms?: string[]; // Optional: specify denoms, otherwise fetches all
}

const balanceRequests: BalancesRequest[] = [
  {
    chainId: "137", // Polygon
    address: "0x24a9267cE9e0a8F4467B584FDDa12baf1Df772B5",
    denoms: [
      "polygon-native", // Matic
      "0x3c499c542cEF5E3811e1192ce70d8cC03d5c3359" // USDC
    ]
  },
  {
    chainId: "osmosis-1",
    address: "osmo12xufazw43lanl8dkvf3l7y9zzm8n3zswftw2yc",
    denoms: ["uosmo"]
  }
];

// returns balances keyed by chain ID (see the BalanceResponse type)
const userBalances = await balances({ requests: balanceRequests });
```

Once you've selected your source and destination chains and tokens, you can generate a route and get a quote using the
`route` function. See it in context [here](https://github.com/skip-mev/skip-go-example/blob/d68ec668ebaa230325ad31658b547bd27c42ac49/pages/index.tsx#L46). ### Route Examples ```ts Swap ATOM for OSMO Example theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} import { route } from "@skip-go/client"; const routeResult = await route({ amountIn: "1000000", // Desired amount in smallest denomination (e.g., uatom) sourceAssetDenom: "uatom", sourceAssetChainId: "cosmoshub-4", destAssetDenom: "uosmo", destAssetChainId: "osmosis-1", cumulativeAffiliateFeeBps: '0', }); ``` ```ts Swap ETH for TIA Example theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} import { route } from "@skip-go/client"; const routeResult = await route({ amountOut: "1000000", // Desired amount out sourceAssetDenom: "ethereum-native", sourceAssetChainId: "1", // Ethereum mainnet chain ID destAssetDenom: "utia", destAssetChainId: "celestia", smartRelay: true, smartSwapOptions: { splitRoutes: true, evmSwaps: true }, }); ``` ```ts Transfer USDC from Solana to Noble Example theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} import { route } from "@skip-go/client"; const routeResult = await route({ sourceAssetDenom: "EPjFWdd5AufqSSqeM2qN1xzybapC8G4wEGGkZwyTDt1v", sourceAssetChainId: "solana", destAssetDenom: "uusdc", destAssetChainId: "noble-1", amountIn: "1000000", smartRelay: true }); ``` Read more about [affiliate fees](../general/affiliate-fees), [Smart Relay](../general/smart-relay) and [EVM Swaps](../advanced-swapping/smart-swap-options#feature-evm-swaps). After generating a route, you need to provide user addresses for the required chains. The `route.requiredChainAddresses` array lists the chain IDs for which addresses are needed. **Only use addresses your user can sign for.** Funds could get stuck in any address you provide, including intermediate chains in certain failure conditions. 
Ensure your user can sign for each address you provide. See [Cross-chain Failure Cases](../advanced-transfer/handling-cross-chain-failure-cases) for more details. We recommend storing the user's addresses and creating a function like [`getAddress`](https://github.com/skip-mev/skip-go-example/blob/c55d9208bb46fbf1a4934000e7ec4196d8ccdca4/pages/index.tsx#L99) that retrieves the address based on the chain ID. ```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // Assuming 'routeResult' holds the object from the route() call in Step 5 // get user addresses for each requiredChainAddress to execute the route const userAddresses = await Promise.all( routeResult.requiredChainAddresses.map(async (chainId) => ({ chainId, address: await getAddress(chainId), })) ); ``` Once you have a route, you can execute it in a single function call by passing in the route, the user addresses for at least the chains the route includes, and optional callback functions. This also registers the transaction for tracking. 
```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} await executeRoute({ route: routeResult, userAddresses, getCosmosSigner, getEvmSigner, getSvmSigner, onTransactionCompleted: async ({ txHash, chainId, status}) => { console.log( `Route completed on chain ${chainId} with tx hash: ${txHash} & status: ${status?.state}` ); }, onTransactionBroadcast: async ({ txHash, chainId }) => { console.log(`Transaction broadcasted on ${chainId} with tx hash: ${txHash}`); }, onTransactionTracked: async ({ txHash, chainId, explorerLink }) => { console.log(`Transaction tracked for ${chainId} with tx hash: ${txHash}, explorer: ${explorerLink}`); }, onTransactionSigned: async ({ chainId }) => { console.log(`Transaction signed for ${chainId}`); }, onValidateGasBalance: async (validation) => { if (validation.status === "error") { console.warn(`Insufficient gas balance or gas validation error on chain ${validation.chainId} (Tx Index: ${validation.txIndex}).`); } }, onApproveAllowance: async (approvalInfo) => { console.log(`ERC20 allowance ${approvalInfo.status} for token ${approvalInfo.allowance?.tokenContract} on chain ${approvalInfo.allowance?.chainId}`); } }); ``` For routes that consist of multiple transactions, `executeRoute` will monitor each transaction until it completes, then generate the transaction for the next step and prompt the user to sign it using the appropriate signer. Alternatively, you can handle message generation, signing, and submission manually using the individual functions: * `messages`: Generate transaction messages. * `messagesDirect`: A convenience function that combines the functionality of `/route` and `/msgs` into a single call. It returns the minimal number of messages required to execute a multi-chain swap or transfer. * `broadcastTx`: Broadcast transactions to the network. * `submitTransaction`: Submit and track transactions. Refer to the API documentation for details on these lower-level functions. 
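If you take the manual path, you typically also poll the cross-chain status yourself until it reaches a terminal state. Below is a minimal, library-agnostic polling sketch; the injected `fetchStatus` callback stands in for the client's status query, and the terminal-state names are assumptions based on the states shown elsewhere in these docs, so check the actual `TxStatusResponse` type before relying on them.

```typescript
// Hypothetical polling helper: repeatedly query status until a terminal
// state is reached or attempts are exhausted. `fetchStatus` is injected so
// the sketch stays self-contained; in practice you would pass the client's
// status-query function here.
type StatusFn = (args: { txHash: string; chainId: string }) => Promise<{ state?: string } | undefined>;

// Assumed terminal states; verify against the TxStatusResponse type.
const TERMINAL_STATES = new Set([
  "STATE_COMPLETED_SUCCESS",
  "STATE_COMPLETED_ERROR",
  "STATE_ABANDONED",
]);

async function waitForCompletion(
  fetchStatus: StatusFn,
  txHash: string,
  chainId: string,
  intervalMs = 5_000,
  maxAttempts = 60,
): Promise<string> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await fetchStatus({ txHash, chainId });
    if (status?.state && TERMINAL_STATES.has(status.state)) {
      return status.state; // done: success, error, or abandoned
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs)); // wait before retrying
  }
  throw new Error(`tx ${txHash} on ${chainId} did not reach a terminal state`);
}
```

A wrapper like this is only needed on the manual path; `executeRoute` already waits for each transaction to complete internally.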
After a transaction is registered for tracking (either via `executeRoute`, `submitTransaction`, or `trackTransaction`), you can poll for its status:

* **Check Status:** `transactionStatus` - Takes a `txHash` and `chainId` and returns the current cross-chain status.

```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Example of checking transaction status:
// Assuming you have txHash and chainId from a previous step:

// const statusResult = await transactionStatus({ txHash: "your_tx_hash", chainId: "your_chain_id" });
// console.log("Transaction State:", statusResult.state);
// console.log("Full Status Response:", statusResult);

// Possible states include: STATE_COMPLETED_SUCCESS, STATE_COMPLETED_ERROR, STATE_ABANDONED, STATE_PENDING_CONFIRMATION, STATE_PENDING_EXECUTION, etc.
// Refer to the TxStatusResponse type or API documentation for a complete list of states.
```

Remember, if you use `executeRoute` (Step 7), it automatically handles the transaction lifecycle, including waiting for completion. The manual tracking functions (`submitTransaction`, `trackTransaction`, `transactionStatus`) are primarily for scenarios where you are not using `executeRoute` for full execution (e.g., if you use `submitTransaction` directly) or if you need more granular control over the tracking and status polling process.

**Have questions or feedback? Help us get better!** Join [our Discord](https://discord.com/invite/interchain) and select the "Skip Go Developer" role to share your questions and feedback.

# Setting Affiliate Fees Source: https://docs.cosmos.network/skip-go/general/affiliate-fees This page covers how integrators can earn affiliate fees on swaps.

### Overview

Many teams use Skip Go as a source of revenue for their project by charging fees on swaps. (Charging fees on transfers will be possible in the future!).
We refer to these fees throughout the product and documentation as "affiliate fees". Skip Go's affiliate fee functionality is simple but flexible, supporting a large variety of bespoke fee collection scenarios:

* Set your desired fee level on each swap (so you can offer lower fees to your most loyal users)
* Set the account that receives the fee on each swap (so you can account for funds easily & separate revenue into different tranches)
* Divide the fee up in customizable proportions among different accounts (so you can create referral/affiliate revenue sharing programs with partners and KOLs)

### How Affiliate Fees Work

1. **At this time, affiliate fees can only be collected on swaps**. We do not support collecting affiliate fees on routes that consist only of transfers (e.g. CCTP transfers, IBC transfers, etc.), even when they are multi-hop transfers. Please contact us if charging fees on transfers is important to you.
2. **Affiliate fees are collected on the chain where the last swap takes place**: Skip Go aggregates over swap venues (DEXes, orderbooks, liquid staking protocols, etc.) on many different chains, and some routes even contain multiple swaps. For each individual cross-chain or single-chain swap where you collect a fee, the fee is applied on the last swap and sent to an address you specify on the chain where the last swap takes place.
3. **Affiliate fees are collected/denominated in the output token of each swap**: For example, if a user swaps OSMO to ATOM, your fee collection address will earn a fee in ATOM.
4. **Affiliate fees are calculated using the minimum output amount, which is set based on our estimated execution price after accounting for the user's slippage tolerance**: For example, consider an ATOM to OSMO swap where the min amount out is 10 uosmo and the cumulative fees are 1000 bps, or 10%. If the swap successfully executes, the affiliate fee will be 1 uosmo.
It will be 1 uosmo regardless of whether the user actually gets 10, 11, or 12 uosmo out of the swap.

### How to Use Affiliate Fees

There are two simple steps involved in using affiliate fees:

1. **Incorporate the fee into the quote**: You need to request the route & quote with the total fee amount (in basis points) you will collect, so Skip Go can deduct this automatically from the estimated `amount_out` it returns to the user. This ensures the quote you show the user already accounts for the fee, and they won't receive an unexpectedly low amount.
2. **Set the address(es) to receive the fee**: You also need to tell Skip Go the exact address(es) to send the fee revenue to. You need to pass a list of addresses and specify a fee amount (in basis points) for each to collect.

### Incorporating Fees with `/route` and `/msgs`

When executing swaps using the `/route` and `/msgs` endpoints, you can incorporate affiliate fees by specifying the total fee during the `/route` request and detailing the fee recipients during the `/msgs` request. Below is a comprehensive guide on how to implement this correctly.

1. **Set Total Fee in `/route` Request**

   In your `/route` request, include the `cumulative_affiliate_fee_bps` parameter to specify the total fee you wish to collect, expressed in basis points (bps).

   * **Definition**: 1% fee = 100 basis points.
   * **Example**: To collect a **0.75%** fee, set `cumulative_affiliate_fee_bps` to `"75"`.

   ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
   {
     "cumulative_affiliate_fee_bps": "75",
     // ...other parameters
   }
   ```

   If you're using `@skip-go/client`, use camelCase: `cumulativeAffiliateFeeBps`.

2. **Identify Swap Chain**

   After the `/route` request, use the `swap_venue.chain_id` field in the response to determine which chain the swap will occur on. You'll need this information to provide valid recipient addresses in the next step.

3.
**Specify Fee Recipients in `/msgs` Request**

   In your `/msgs` request, define the `chainIdsToAffiliates` object to allocate fees to specific addresses on the relevant chains.

##### Structure:

```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "chainIdsToAffiliates": {
    "<CHAIN_ID>": {
      "affiliates": [
        {
          "basisPointsFee": "<FEE_IN_BPS>",
          "address": "<FEE_RECIPIENT_ADDRESS>"
        },
        // ...additional affiliates
      ]
    },
    // ...additional chains
  }
}
```

##### Example:

```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "chainIdsToAffiliates": {
    "noble-1": {
      "affiliates": [
        {
          "basisPointsFee": "100", // 1% fee
          "address": "noble1..."
        },
        {
          "basisPointsFee": "100", // 1% fee
          "address": "noble2..."
        }
      ]
    },
    "osmosis-1": {
      "affiliates": [
        {
          "basisPointsFee": "200", // 2% fee
          "address": "osmo1..."
        }
      ]
    }
  }
}
```

**Notes:**

* The **sum** of `basisPointsFee` values across all affiliates **on the swap chain** must equal the `cumulative_affiliate_fee_bps` set in the `/route` request.
* All addresses must be **valid on the chain where the swap will take place**. Invalid addresses will result in a `400` error.
* If using `@skip-go/client`, remember to use camelCase (e.g., `basisPointsFee`) in the config.

### Incorporating Fees with `/msgs_direct`

We recommend using `/route` and `/msgs` over `/msgs_direct` due to the added complexity when handling fees with `/msgs_direct`. When using the `/msgs_direct` endpoint, you need to specify affiliate fees for **every possible chain** the swap might occur on, since the swap chain is determined during the request.

### Steps:

1. **Define `chainIdsToAffiliates` for All Potential Swap Chains**

   * Use the `chainIdsToAffiliates` object to map each potential `chain_id` to its corresponding affiliates.
   * For each `chain_id`, provide a list of `affiliates`, each with:
     * `basisPointsFee`: The fee amount in basis points (bps).
     * `address`: The recipient's address on that chain.

2.
**Include Entries for Every Possible Chain** * Retrieve all potential swap chains by querying the `/v2/fungible/swap_venues` endpoint. * Include an entry in `chainIdsToAffiliates` for each `chain_id` from the list. 3. **Ensure Fee Consistency Across Chains** * The **sum** of `basisPointsFee` values for affiliates on each chain must be **equal** across all chains. * This consistency is necessary because the fee amount used in the swap must be the same, regardless of which chain the swap occurs on. * If the fee sums differ between chains, the request will return an error. *** ### Example Request: ```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "chainIdsToAffiliates": { "noble-1": { "affiliates": [ { "basisPointsFee": "100", // 1% fee "address": "noble1..." }, { "basisPointsFee": "100", // 1% fee "address": "noble2..." } ] }, "osmosis-1": { "affiliates": [ { "basisPointsFee": "200", // 2% fee "address": "osmo1..." } ] }, // Include entries for all other potential chains }, // ...other parameters } ``` **Notes:** * In the example above, the total fee for each chain is **200 bps (2%)**. * Ensure that all addresses are **valid** on their respective chains. **Have questions or feedback? Help us get better!** Join [our Discord](https://discord.com/invite/interchain) and select the "Skip Go Developer" role to share your questions and feedback. # Requesting & Using API Keys Source: https://docs.cosmos.network/skip-go/general/api-keys ## Summary Authentication and authorization for the Skip Go API are managed via API keys. This document covers: 1. Why you should use an API key 2. How to get your key set up 3. How to use the key in your requests **API Keys have replaced `client_id`** Historically, we used the `client_id` passed as a request parameter to identify and authenticate integrators. This system has been fully deprecated. If you're currently using `client_id`, you should transition to using an API key. 
(We're making this transition because `client_id` didn't abide by best practices for security, and we could only attach limited functionality to it, since it was passed in the request body instead of a header.) ## Benefits of using an API key Technically, you can access most of the basic functionality of the Skip Go API without an API key. But there are numerous benefits to authenticating yourself with a key: * **No rate limit**: Integrators that do not pass a valid API key in their requests will be subject to a restrictive global rate limit, shared with all other unauthenticated users. * **Improved fee revenue share pricing**: Unauthenticated integrators will be subject to a 25% revenue share on their fee revenue by default. Authenticated integrators who use API keys will be subject to a cheaper 20% revenue share by default. * **Access to privileged features:** Integrators who authenticate with an API key will receive access to premium features that we cannot offer to the general public (e.g. Gas estimation APIs, universal balance query APIs, etc...) * **Metrics on your volume and revenue:** Authenticated integrators will receive access to monthly statistics regarding their total swap and transfer volume and the amount of fee revenue they've earned. They will also receive annual transaction data for taxes. ## How to get an API Key ### 1. Request an API Key Open a support ticket on our [Discord](https://discord.com/invite/interchain) and tell our customer support that you'd like an API key. Please provide the following information in your request to help us get to know your project: 1. Your name (or pseudo-anon name) and contact info (ideally Telegram, but possibly Email, Signal, etc...) 2. Your project name 3. A brief, 1-2 sentence description of your project The customer support team member at Skip will establish an official channel of communication between Skip and your project (e.g. an email thread or a telegram group etc...). ### 2. 
Store the API Key Securely

**You should store the API key immediately when you create it. We do not store your raw API key on our servers for security reasons, so we will not be able to recover it for you if you lose it.**

It is important to keep your API key private. Anyone with your API key can make requests to the Skip Go API as you, getting access to your rate limit and privileged features, and affecting your revenue and volume statistics.

## How to use an API key

### Via REST API

You should pass your API key in every call to the Skip Go API using the `authorization` HTTP header. For example:

```
curl -X 'POST' \
  'https://api.skip.build/v2/fungible/route' \
  -H 'accept: application/json' \
  -H 'authorization: <YOUR_API_KEY>' \
  -H 'Content-Type: application/json' \
  -d '{
  "amount_in": "1000000",
  "source_asset_denom": "uusdc",
  "source_asset_chain_id": "axelar-dojo-1",
  "dest_asset_denom": "uatom",
  "dest_asset_chain_id": "cosmoshub-4",
  "cumulative_affiliate_fee_bps": "0",
  "allow_multi_tx": true
}'
```

### Via `@skip-go/client`

For users of the `@skip-go/client` TypeScript package (v1.0.0+), you can configure your API key using either `setApiOptions` or `setClientOptions` at initialization. The library will automatically include it in the `authorization` header of all requests. For example:

```typescript theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
import { setApiOptions, setClientOptions } from "@skip-go/client";

// Option 1: For basic API calls
setApiOptions({
  apiKey: "<YOUR_API_KEY>",
});

// Option 2: For executeRoute functionality
setClientOptions({
  apiKey: "<YOUR_API_KEY>",
  // ... other options like endpointOptions, aminoTypes, etc.
});
```

Note: The `SkipClient` class has been removed in v1.0.0. Instead, you import and use individual functions directly after setting the API options. Also note that `apiURL` has been renamed to `apiUrl` to follow camelCase conventions.
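Whichever way you call the API, the key travels as the raw value of the `authorization` header, with no `Bearer` prefix, alongside the usual JSON content headers. A tiny illustrative helper (not part of any Skip library; the header names mirror the curl example above):

```typescript
// Illustrative helper (not a Skip Go library function): builds the common
// headers for a direct REST call to the Skip Go API. The raw key is the
// header value -- there is no "Bearer " prefix in the examples above.
function skipGoHeaders(apiKey?: string): Record<string, string> {
  const headers: Record<string, string> = {
    accept: "application/json",
    "Content-Type": "application/json",
  };
  if (apiKey) {
    headers["authorization"] = apiKey;
  }
  return headers;
}
```

You would then pass `skipGoHeaders(process.env.SKIP_API_KEY)` as the `headers` option of a `fetch` call to `https://api.skip.build/v2/fungible/route`, keeping the key itself server-side.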
### Setup a Proxy to Receive Skip Go API Requests and Add the API Key

To keep your API key secure and private, we recommend proxying API requests from your frontend through your own backend, where you can add your API key to the header before forwarding the request to the Skip Go API. The snippets below show how to use Next.js/Vercel for this kind of proxying. It only takes a moment to set up.

```typescript theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// This handler runs server-side in Vercel and receives requests from the frontend
// sent to APP_URL/api/skip
import type { NextApiRequest } from 'next';
import { PageConfig } from 'next';

import { API_URL } from '@/constants/api';

export const config: PageConfig = {
  api: {
    externalResolver: true,
    bodyParser: false,
  },
  runtime: 'edge',
};

export default async function handler(req: NextApiRequest) {
  try {
    const splitter = '/api/skip/';
    const [...args] = req.url!.split(splitter).pop()!.split('/');
    const uri = [API_URL, ...args].join('/');
    const headers = new Headers();
    if (process.env.SKIP_API_KEY) {
      headers.set('authorization', process.env.SKIP_API_KEY);
    }
    return fetch(uri, {
      body: req.body,
      method: req.method,
      headers,
    });
  } catch (error) {
    const data = JSON.stringify({ error });
    return new Response(data, { status: 500 });
  }
}
```

```typescript theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// This config maps requests to APP_URL/api/skip to the handler we just defined
rewrites: async () => [
  {
    source: "/api/skip/(.*)",
    destination: "/api/skip/handler",
  },
],
// other config...
```

```typescript theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// This configures your client to make requests to your proxy service instead of
// the standard Skip Go API backend directly
import { setApiOptions, setClientOptions } from '@skip-go/client';

const appUrl =
  process.env.NEXT_PUBLIC_VERCEL_ENV === 'preview' ||
  process.env.NEXT_PUBLIC_VERCEL_ENV === 'staging'
    ? typeof window !== 'undefined'
      ? `https://${window.location.hostname}`
      : process.env.NEXT_PUBLIC_VERCEL_URL
    : 'https://<YOUR_PRODUCTION_DOMAIN>';

// Option 1: For basic API calls
setApiOptions({
  // you don't need to pass apiKey since you already have it in your proxy handler
  apiUrl: `${appUrl}/api/skip`,
});

// Option 2: For executeRoute functionality
setClientOptions({
  // you don't need to pass apiKey since you already have it in your proxy handler
  apiUrl: `${appUrl}/api/skip`,
  // ... other options if needed
});
```

```
// These are environment variables you set in Vercel
// to store your API key securely in the backend
SKIP_API_KEY=<YOUR_API_KEY>
```

## How to Request Volume & Revenue Statistics

Just return to your official communication channel with Skip (probably a Telegram channel) and request the data. We can share monthly reports. Eventually, we will create a customer portal with dashboards, so you'll have access to all the data you need in a self-service manner.

**Have questions or feedback? Help us get better!** Join [our Discord](https://discord.com/invite/interchain) and select the "Skip Go Developer" role to share your questions and feedback.

# Skip Explorer Integration

Source: https://docs.cosmos.network/skip-go/general/explorer-integration

Guide to integrating Skip Explorer v2 for transaction visualization and tracking in your applications.

## Overview

Skip Explorer v2 provides a user-friendly interface for visualizing cross-chain transactions and tracking their progress. This guide covers basic explorer usage and how integrators can leverage it to enhance their user experience.
## Basic Explorer Usage Users can view transaction details by navigating to [explorer.skip.build](https://explorer.skip.build) and entering: * **Transaction Hash**: Any transaction hash from a supported chain * **Chain ID**: The chain where the transaction occurred The explorer will automatically detect and display: * Transaction status and progress * Cross-chain hops and bridges used * Asset transfers and swaps * Real-time updates as transactions progress ## Integration for Custom Frontends If you're building a custom frontend and want to provide users with rich transaction visualization, you can generate explorer links that include comprehensive route data. ### Basic Explorer Links For simple transaction tracking, redirect users to the explorer with URL parameters: * `tx_hash`: Comma-separated list of transaction hashes * `chain_id`: The initial source chain ID * `is_testnet`: Optional boolean parameter for testnet transactions **Example:** ``` https://explorer.skip.build/?tx_hash=ABC123,DEF456&chain_id=osmosis-1&is_testnet=true ``` This approach works well when you have transaction hashes but limited route context. ### Advanced Rich Data Integration For a superior user experience with complete transaction context, encode your route data as base64 and pass it via the `data` parameter. This enables the explorer to display detailed multi-hop transaction flows, user addresses, and complete route information. 
**Example:** ``` https://explorer.skip.build/?data=eyJyb3V0ZSI6ey4uLn0sInVzZXJBZGRyZXNzZXMiOnsuLi59fQ== ``` #### Required Data Structure The base64-encoded data must contain a JSON object with this structure: ```typescript theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { route: SimpleRoute, // Route from /v2/fungible/route API response userAddresses: UserAddress[], // User wallet addresses for each chain transactionDetails: TransactionDetails[] // Transaction details from /v2/tx/status } ``` #### Implementation ```typescript theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} import { SimpleRoute, UserAddress, TransactionDetails } from '@skip-go/client'; // Collect your data from Skip Go APIs const routeData = { route: simpleRoute, // From your /v2/fungible/route call userAddresses: userAddresses, // Your user's addresses per chain transactionDetails: txDetails // From your /v2/tx/status polling }; // Encode for the explorer const jsonString = JSON.stringify(routeData); const base64Encoded = btoa(jsonString); // Browser-compatible base64 encoding // Generate the rich explorer URL const explorerUrl = `https://explorer.skip.build/?data=${base64Encoded}`; // Redirect user or open in new tab window.open(explorerUrl, '_blank'); ``` ## Benefits for Your Users ### Basic Integration Benefits * **Quick Access**: Users can easily view transaction details without leaving your app * **Multi-Chain Support**: Works across all supported chains and bridges * **Real-time Updates**: Live transaction status and progress tracking ### Advanced Integration Benefits * **Complete Transaction Context**: Users see the full route, not just individual transactions * **Multi-Hop Visualization**: Clear view of complex transfers across chains and bridges * **Address Mapping**: Explorer knows which addresses belong to the user * **Real-time Status**: Current transaction state integrated with visual progress * 
**Shareable Links**: Users can bookmark or share complete transaction context ## Best Practices ### When to Use Basic vs Advanced Integration **Use Basic Integration when:** * You only have transaction hashes available * Users initiated transactions outside your application * You want minimal integration effort **Use Advanced Integration when:** * You have complete route information from Skip Go APIs * Users initiated transactions through your application * You want to provide the richest user experience ## Common Use Cases ### Transaction Confirmation Pages After users submit a transaction, redirect them to the explorer with full context: ```typescript theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // After transaction submission const explorerUrl = generateExplorerLink(routeData); window.location.href = explorerUrl; ``` ### Transaction History Provide explorer links for each historical transaction: ```typescript theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // In transaction history component {transactions.map(tx => (
  <div key={tx.hash}>
    <span>{tx.amount} {tx.asset}</span>
    <a href={generateExplorerLink(tx)} target="_blank" rel="noopener noreferrer">View Details</a>
  </div>
))}
```

### Support and Debugging

Help users troubleshoot failed transactions by providing detailed explorer views:

```typescript theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// For failed transactions
const supportUrl = generateExplorerLink(failedTransaction);
// Share this URL with support team or user
```

**Best Practice**: Always use the advanced base64 approach when you have access to complete route information from your Skip Go integration. This provides the richest user experience in the explorer.

# FAQ

Source: https://docs.cosmos.network/skip-go/general/faq

### General / Background

#### How can I get help using Skip Go?

You can reach us easily at [our discord](https://discord.gg/interchain) or in [our developer support channel on Telegram](https://t.me/+3y5biSyZRPIwZWIx).

#### Who uses Skip Go?

Many teams use Skip Go to make it easier for them to build multi-chain flows, including:

* Every major interchain wallet (Keplr, Leap, IBC wallet, Terra Station, all the metamask snap teams)
* Osmosis asset deposit page
* Many major interchain dapps (e.g. Stargaze, Stride, Astroport, Cosmos Millions)
* Many interchain defi frontends (e.g. Coinhall, fortytwo.money)
* TrustWallet
* Our own frontend ([https://go.skip.build](https://go.skip.build))
* ...and many more

#### How many chains does the Skip Go API support?

More than 70. We support:

* Almost every IBC-enabled chain
* Every EVM chain Axelar supports
* Every EVM chain and rollup Hyperlane supports
* Every chain CCTP supports

...with more coming soon!

#### Which bridges and message passing protocols does Skip Go support?

* IBC
* Hyperlane
* Axelar
* CCTP
* Neutron's native bridge
* Stargate
* Layerzero
* Eureka
* GoFast

...with more coming soon!

#### What DEXes does Skip Go support?
* Osmosis
* Astroport across Terra, Neutron, Sei, and Injective
* White Whale across Terra, Migaloo, and Chihuahua
* Uniswap v2/v3
* Aerodrome
* Velodrome
* Tower
* Elys
* Dojoswap

...with many more coming soon

#### What is the maximum transfer limit for CCTP?

CCTP transfers have a maximum limit of 1,000,000 USDC per transfer.

#### How do I get the Skip Go API to support my chain?

Please see the [Chain Support Requirements](/skip-go/support-requirements/chain-support-requirements) document to ensure your chain meets the necessary requirements and submit the chain support request form linked in that doc.

#### How do I get the Skip Go API to support my token?

Please complete the necessary steps detailed in the [Token & Route Support Requirements](/skip-go/support-requirements/token-support-requirements) doc.

#### How do I get the Skip Go API to support a route for a particular token to a remote chain?

Please complete the necessary steps detailed in the [Token & Route Support Requirements](/skip-go/support-requirements/token-support-requirements) doc for the destination chain and token in question.

#### How do I get the Skip Go API to route swaps to my swap venue or DEX?

Please see the [Swap Venue Requirements](/skip-go/support-requirements/swap-venue-requirements) page to ensure your DEX meets the necessary requirements, then submit the swap venue request form linked in that doc.

#### How long does it take to build an application with Skip Go?

5-30 minutes.

#### Why do I keep getting rate limited?

You need to request an API key from the core team to use Skip Go without limitations. Pass your API key in the `authorization` HTTP header of all requests.

#### Does the Skip Go API cost anything?

No, not at this time. We offer unlimited usage at zero cost. In the future we will introduce two pricing plans:

1.
**Fee revenue sharing, unlimited usage**: For customers who use the Skip Go API's fee functionality to charge their end-users fees on swaps, we will collect a portion of the fee revenue that you earn. Higher volume users will unlock lower fee tiers.
2. **API Usage Pricing**: For customers who do not charge fees, we will offer monthly and annual plans that allow usage of the API up to some number of requests per month. We will offer several pricing tiers, where higher tiers support higher rate limits and more monthly requests (similar to coingecko, infura, etc.)

#### I have questions, comments, or concerns about Skip Go. How can I get in touch with you?

Join [our Discord](https://discord.gg/interchain) and select the "Skip Go Developer" role to share your questions and feedback.

### Refunds and other financial policies

#### Does Skip ever refund users for swaps or transfers with "bad prices"?

**No.** Users are responsible for the transactions they sign in all cases:

1. **If our smart contracts behave as expected (i.e. amount\_out exceeds min\_amount\_out signed by the user or amount\_in is less than max\_amount\_in signed), Skip will not offer users refunds, slippage rebates, or any other form of compensation for any reason**. Skip does not accept or bear any financial, legal, or practical responsibility for lost funds in the event a user or integrator believes a particular route is inadequate, bad, suboptimal, low liquidity, or otherwise not performant enough.
2. **If the smart contracts Skip Go depends on do not behave as expected, users who encounter and suffer bugs may report them in exchange for a bug bounty payment**. See the section on our bug bounty policy below.

Integrators are solely and completely responsible for building UIs that maximally protect users from harming themselves by swapping or transferring at unfavorable prices.
To help facilitate this goal, we provide:

* Detailed information about quote quality, on-chain liquidity, fees, and price estimates in our API response objects (e.g. estimates of the USD value of amount in and amount out, on-chain price impact, etc.)
* Extensive guidance about how to build S.A.F.E. interfaces in our docs: [SAFE Swapping: How to protect users from harming themselves](/skip-go/advanced-swapping/safe-swapping-how-to-protect-users-from-harming-themselves)
* Hands-on interface design feedback based on years of working with dozens of top interfaces

#### Does Skip provide refunds to users if an interface created a transaction that called a Skip contract incorrectly?

**No**

1. If an integrator or end user misuses one of our contracts intentionally or by accident (e.g. calls the contract with incorrect call data, sends tokens to a contract not designed to receive tokens), Skip bears no financial, legal, or operational responsibility for funds that may be lost as a result.
2. Skip and its affiliates do not have the ability to upgrade contracts to receive or "unstick" lost funds even if we wanted to. We do not own or control the contracts in Skip Go any more than you do.

#### Are there any circumstances under which Skip would offer a refund to an end user for any reason?

**No**

Skip accepts no legal, financial, or operational responsibility for the applications the integrators of the Skip Go API create using its data and services. Skip also accepts no legal, financial, or operational responsibility for the positive or negative financial outcomes of the swaps, transfers, and other transactions that end users may create in these interfaces.

Even in the event a user lost funds as a result of a bug, Skip does not claim any legal, financial, or practical liability for lost funds. In these cases, users may report the bug to Skip's bug bounty program.

### Bug Bounty Policy

#### Does Skip Go offer bug bounties?
Skip may provide a financial bug bounty of up to \$25,000 as compensation to users who report a reproducible catastrophic failure of a smart contract or other piece of software that the Skip team developed. Examples of catastrophic failures include: * Receiving fewer tokens in the output of a swap than specified and signed over in the min\_amount\_out field of the transaction calldata * Total loss of funds resulting from a transaction where the call data is correctly formed -- in other words, where the contract call data matches the specification provided by the Skip team and generated by the Skip Go API product The size of the bug payment is determined at Skip's sole discretion. It may depend on a variety of factors, including: quality, accuracy, and timeliness of the report & severity/exploitability of the bug. Skip is not legally obligated to provide any payment amount. In the event a user lost funds as a result of a bug, Skip does not claim any legal, financial, or practical liability for lost funds. The size of the bug bounty will not depend directly or indirectly on the amount of funds the user lost. The bug bounty does not constitute a refund under any definition. It is a reward for identifying and characterizing gross failures in intended behavior of the Skip software. #### How do I report a bug to the bug bounty program? Please get in touch with our team on [Discord](https://discord.gg/interchain) if you believe you have a bug to report. ### Technical Support #### Is go.skip.build open source? Where can I find the code? * Yes! The code for go.skip.build -- our multi-chain DEX aggregator + swapping/transferring interface is open source * You can find the code at [https://github.com/skip-mev/skip-go-app](https://github.com/skip-mev/skip-go-app) -- Feel free to use it to guide your own integration #### What's the default IBC-Transfer timeout on the messages returned by `/fungible/msgs` and `/fungible/msgs_direct`? 
* 5 minutes

#### What technologies does Skip Go use under the hood?

* IBC and...
  * `ibc hooks` and `ibc callbacks`: Enables the Skip Go swap contracts to be executed as callbacks of IBC transfers, which enables constructing transactions that can transfer tokens to a chain with a swap venue, perform a swap, and transfer them out -- without multiple signing events / transactions.
    * Skip Go will likely not support DEXes on any chains that do not have `ibc-hooks` or equivalent functionality
  * `Packet-forward-middleware` (PFM): Enables incoming IBC transfers from one chain to atomically initiate outgoing transfers to other chains. This allows the chain with PFM to function as the intermediate chain on a multi-hop route. This is especially valuable for chains that issue assets for use throughout the interchain (e.g. Stride + stATOM, Noble + USDC, Axelar + axlUSDC)
  * (This [article we wrote](https://ideas.skip.build/t/how-to-give-ibc-superpowers/81) goes into more detail about both technologies and how to adopt them on your chain)
* [Axelar](https://axelar.network/): A cross-chain messaging protocol that supports a wide variety of EVM chains and connects to Cosmos
* [CCTP](https://www.circle.com/en/cross-chain-transfer-protocol): A cross-chain messaging protocol built by Circle (the issuer of USDC) to move USDC between chains without trusting any parties other than Circle (which USDC users already trust implicitly)
* [Hyperlane](https://www.hyperlane.xyz/): A cross-chain messaging protocol that allows developers to deploy permissionlessly and choose their own security module
* [Go Fast](https://skip-protocol.notion.site/EXT-Skip-Go-Fast-b30bc47ecc114871bc856184633b504b): A decentralized bridging protocol that enables faster-than-finality cross-chain actions across EVM and Cosmos
* [Eureka](../eureka/eureka-overview): IBC v2 that enables seamless interoperability between the Cosmos and Ethereum ecosystems.
Eureka uses Succinct's SP1 zero-knowledge proofs for gas-efficient verification of Cosmos state on Ethereum
* [Stargate](../advanced-transfer/experimental-features#stargate-%E2%80%9Cstargate%E2%80%9D): Support for routing over the Stargate V2 bridge on EVM chains incl. Sei EVM
* [Layerzero](../advanced-transfer/experimental-features): An omnichain messaging protocol that sends data and instructions between different blockchains

#### How do affiliate fees work?

* Affiliate fees are fees developers using Skip Go charge their end-users on swaps. We support multiple parties taking and sharing fees. For example, if a defi frontend uses the widget a wallet team builds and distributes, the makers of the widget and the users of the widget can separately charge affiliate fees.
* Fees are calculated in basis points or "bps". One basis point equals one one-hundredth of a percent, so 100 bps = 1% fee.
* **Affiliate fees are calculated based on the minimum amount out and are taken/denominated in the output token.** For example, consider an ATOM to OSMO swap where the min amount out is 10 uosmo and the cumulative fees are 1000 bps, or 10%. If the swap successfully executes, the affiliate fee will be 1 uosmo. It will be 1 uosmo regardless of whether the user actually gets 10, 11, or 12 uosmo out of the swap.
* **When performing a swap with an exact amount in, affiliate fees are accounted for in the `amount_out` returned by `/route`**. More plainly, the Skip Go API subtracts off expected affiliate fees prior to the `amount_out` calculation, so that it represents an accurate estimate of what the user will receive at the end of the swap, net of fees. To be exact, the user will probably receive more than the amount out because the actual fee is not known at the time of this calculation / estimate. It's not known until later when slippage is set. So the user will end up paying `slippage_percent`\*`amount_out` less than the API predicts.
This is good for the user because the estimated amount out will likely be lower than the actual amount they receive, offering a buffer that protects the user from the effects of slippage. #### Why does `/assets_from_source` only return assets that are reachable via transfers but not swaps? We're considering adding support for swappable destinations in `/assets_from_source`, but we're not prioritizing it because almost every asset is reachable via swapping from every other asset. So for 99.99% of use cases, this endpoint would just return a massive amount of fairly uninformative data if we added destinations reachable via swaps. #### Why does the Skip Go API sometimes say a route doesn't exist when it should (e.g. transferring ATOM to Stargaze)? There are two common reasons the Skip Go API might be missing a route that a user thinks should exist: 1. **No one has ever used it:** By default, the Skip Go API does not recommend channels that have never been used or denoms that it has not seen. For example, if no one has transferred Neutron to Stargaze, the Skip Go API will not recommend a route to do so -- even if a direct connection exists between the two chains. 2. **Expired clients:** Frequently, existing connections between chains expire when they're not used for a period of time that exceeds the "light client trusting period". The Skip Go API indexes all chains every couple of hours to identify these cases and prevent users from accidentally attempting transfers. *A common gotcha:* `/assets_from_source` only returns assets reachable in a single transaction by default. If you'd like to have access to routes that require multiple transactions, set the `allow_multi_tx` flag to `true` in the input. #### How long does relaying an IBC packet take?
It depends on many factors, including how many relayers are covering a particular channel, block times, the time-to-finality of the source chain, whether relayers are live, how many packets relayers are waiting to batch together, and much more. **In short, it can range from several seconds to minutes -- or, in the worst case, never.** After a timeout window, a packet won't be valid on the destination chain when it gets relayed. This timeout is set to 5 minutes for all packets created via the Skip Go API. It's important to understand what happens to user tokens in the event of timeouts. You can read about that in [Cross-chain Failure Cases](../advanced-transfer/handling-cross-chain-failure-cases). For now, we recommend showing a small warning or disclaimer to users on your application, similar to the following: > This swap contains at least one IBC transfer. > > IBC transfers usually take 10-30 seconds, depending on block times + how quickly relayers ferry packets. But relayers frequently crash or fail to relay packets for hours or days on some chains (especially chains with low IBC volume). > > At this time, \[OUR APPLICATION] does not relay packets itself, so your swap/transfer may hang in an incomplete state. If this happens, your funds are stuck, not lost. They will be returned to you once a relayer comes back online and informs the source chain that the packet has timed out. Timeouts are set to 5 minutes, but relayers may take longer to come online and process the timeout. #### Why is the time to relay a packet so variable? And why can IBC relaying be unreliable today? IBC relayers do not receive payments for relaying user packets or for relaying light client updates. In fact, they have to pay gas for every packet they relay. That means relaying is strictly a charitable, money-losing operation with fixed infrastructure costs from running nodes + the relayer process, as well as variable costs from user gas.
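The 5-minute timeout window described above can be expressed as an IBC timeout timestamp, which transfer messages carry in nanoseconds since the Unix epoch. The helper below is a hypothetical illustration (not part of Skip Go or any library) of the arithmetic involved:

```typescript
// Hypothetical helper: compute a timeout timestamp 5 minutes from now,
// in nanoseconds since the Unix epoch (the unit IBC transfer messages use).
const TIMEOUT_MINUTES = 5;

function makeTimeoutTimestampNanos(nowMs: number = Date.now()): bigint {
  // Milliseconds -> nanoseconds, offset by the 5-minute timeout window.
  return BigInt(nowMs + TIMEOUT_MINUTES * 60 * 1000) * 1_000_000n;
}
```

If the packet is not received on the destination chain before this instant, it can only time out; the timeout proof is what eventually returns the user's funds on the source chain.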
#### How long does it take to transfer between the Cosmos and EVM ecosystems? In general, transfers take as long as it takes for the source chain to "finalize". That means: * Transfers originating on EVM chains will take 30 minutes or more. The reason is that most EVM chains take a long time (30+ minutes) to finalize, which means the bridges we rely on do not register the tokens on the source chain as "locked" in the bridge for at least that long. * Transfers originating on Cosmos chains will usually finish in less than 30 seconds (barring IBC relayer failures) because Cosmos chains have "fast" or "single-slot" finality. Almost as soon as the block that contains your transfer gets created, it's included irreversibly in the chain. #### If Skip doesn't charge fees, how come some simple transfers seem to have a fee taken out of them? This happens because many of the bridges (which Skip relies on) take fees to pay for gas and the cost of operating their protocol.

```ts
import { TxRaw } from "cosmjs-types/cosmos/tx/v1beta1/tx";

const txRaw = await signAmino(
  client,
  signer as OfflineAminoSigner,
  msgJSON.sender,
  [
    {
      typeUrl: multiHopMsg.msg_type_url,
      value: msg.value,
    },
  ],
  {
    amount: [coin(0, feeInfo.denom)],
    gas: `${simulatedGas * 1.2}`,
  },
  "",
  {
    accountNumber: acc?.accountNumber ?? 0,
    sequence: acc?.sequence ?? 0,
    chainId: multiHopMsg.chain_id,
  }
);

const txBytes = TxRaw.encode(txRaw).finish();
tx = await client.broadcastTx(txBytes, undefined, undefined);
```

```ts
async function signAmino(
  client: SigningStargateClient,
  signer: OfflineAminoSigner,
  signerAddress: string,
  messages: readonly EncodeObject[],
  fee: StdFee,
  memo: string,
  { accountNumber, sequence, chainId }: SignerData
) {
  const aminoTypes = new AminoTypes(createDefaultAminoConverters());
  const accountFromSigner = (await signer.getAccounts()).find(
    (account) => account.address === signerAddress
  );
  if (!accountFromSigner) {
    throw new Error("Failed to retrieve account from signer");
  }
  const pubkey = encodePubkey(encodeSecp256k1Pubkey(accountFromSigner.pubkey));
  const signMode = SignMode.SIGN_MODE_LEGACY_AMINO_JSON;
  const msgs = messages.map((msg) => aminoTypes.toAmino(msg));
  msgs[0].value.memo = messages[0].value.memo;
  const signDoc = makeSignDoc(msgs, fee, chainId, memo, accountNumber, sequence);
  const { signature, signed } = await signer.signAmino(signerAddress, signDoc);
  const signedTxBody = {
    messages: signed.msgs.map((msg) => aminoTypes.fromAmino(msg)),
    memo: signed.memo,
  };
  signedTxBody.messages[0].value.memo = messages[0].value.memo;
  const signedTxBodyEncodeObject: TxBodyEncodeObject = {
    typeUrl: "/cosmos.tx.v1beta1.TxBody",
    value: signedTxBody,
  };
  const signedTxBodyBytes = client.registry.encode(signedTxBodyEncodeObject);
  const signedGasLimit = Int53.fromString(signed.fee.gas).toNumber();
  const signedSequence = Int53.fromString(signed.sequence).toNumber();
  const signedAuthInfoBytes = makeAuthInfoBytes(
    [{ pubkey, sequence: signedSequence }],
    signed.fee.amount,
    signedGasLimit,
    signed.fee.granter,
    signed.fee.payer,
    signMode
  );
  return TxRaw.fromPartial({
    bodyBytes: signedTxBodyBytes,
    authInfoBytes: signedAuthInfoBytes,
    signatures: [fromBase64(signature.signature)],
  });
}
```

#### How should I set gas prices for Cosmos
transactions? We recommend setting Cosmos chain gas prices using the chainapsis keplr-chain-registry: [https://github.com/chainapsis/keplr-chain-registry](https://github.com/chainapsis/keplr-chain-registry). In our experience, this registry overestimates gas prices somewhat -- but this leads to a very good UX in Cosmos because: * Transactions almost always make it on chain * Gas prices are very cheap -- so overestimation is not costly Soon, the Skip Go API will surface recommended gas prices too, and we will release a client-side library that abstracts away this area of concern. #### How should I set the gas amount for Cosmos transactions? We recommend using the [Gas and Fee Tooling](/skip-go/client/balance-gas-and-fee-tooling) available in the [Skip Go Client TypeScript Package](https://www.npmjs.com/package/@skip-go/client). #### What does it mean when the docs say an endpoint is in "ALPHA" or "BETA"? This means we may make breaking changes to it at any time. ### What are the Skip Go app environments? The Skip Go app is available in three environments: mainnet, testnet, and dev. The mainnet/production user-facing app runs at [https://go.skip.build](https://go.skip.build). It connects to mainnet chains and uses the production API. This is the version that most users interact with. The testnet app runs at [https://testnet.go.skip.build](https://testnet.go.skip.build). It connects to testnet chains but still uses the production API. This setup is useful when you want to test on testnet without changing API behavior. The dev app runs at [https://dev.go.skip.build](https://dev.go.skip.build). It connects to mainnet chains but uses a development version of the API that hasn't been released to production yet. #### What does `cumulative_fee_bps` in `/route` mean? This is where you specify the fee amount in bps (aka "bips").
(1 bp = 1/100th of a percent; 100 bps = 1%) By specifying it in `/route`, the Skip Go API can adjust the quote that it gives back to you, so that it shows the output token estimate net of fees. This ensures the amount the end user gets out of the swap always aligns with their expectations. The parameter uses the word "cumulative" (i.e. summing over a set) because the API supports multiple partners charging affiliate fees. This parameter should be set to the sum of all of those component fees. (For example, if a widget provider charges a 5 bp fee, and the frontend developer integrating that widget charges a 10 bp fee, `cumulative_fee_bps=15`) ### Misc #### Is Skip doing anything to make IBC relaying more reliable? * In the short term, we are adding real-time data about relayer and channel health to the Skip Go API -- so we can accurately predict how long it will take to relay a packet over a given channel -- along with packet tracking and relaying. * In the medium term, we're going to enable multi-chain relaying through the Skip Go API. This will be a paid service. * In the long term, we're working to build better incentives for relaying, so relayers don't need to run as charities. (Relayers do not receive fees or payment of any kind today and subsidize gas for users cross-chain) # Getting Fee Info Source: https://docs.cosmos.network/skip-go/general/fee-info Understand how Skip Go handles user-facing fees ## Background This doc describes functionality in Skip Go for accessing standardized information about the various fees that a user can incur while swapping or transferring. Common fees include: * Affiliate fees (e.g.
fees you charge the user for swapping on your frontend) * Bridge and relayer fees * Smart Relayer fees * Gas fees ### How Bridging Fees Are Applied The method for applying bridging fees depends on the source chain of the transfer: * **EVM-Source Transfers**: When a bridge transfer originates from an EVM-based chain (e.g., Ethereum), and the fee is paid in the native token of that chain (like ETH), the fee is typically an *additional charge* on top of the amount being transferred. This approach is common among underlying bridge providers, and we maintain this behavior for consistency across all EVM-source transfers, whether for ERC20 tokens or native assets. * **Cosmos-Source Transfers**: For transfers originating from Cosmos-based chains, bridging fees are *deducted directly* from the transferred asset amount. Consequently, the amount received on the destination chain will reflect this deduction. ## Understanding incurred fees from all sources: `estimated_fees` `estimated_fees` in the response of `/route` and `/msgs_direct` provides a `Fee` array where each entry gives information about a particular fee. 
Each `Fee` object in the array will have the following attributes: * `fee_type`: Enum giving the kind of fee, one of `SMART_RELAY`, `SWAP`, `BRIDGE`, `RELAY`, `GAS` * `bridge_id`: If the fee is a relayer fee (`RELAY`) or a bridge fee (`BRIDGE`), this gives the ID of the bridge charging the fee (or the bridge where the relayer is providing a service) * `amount`: The amount of the fee (expressed in the token the fee will be charged in) * `usd_amount`: Estimated USD value of the fee * `origin_asset`: An `Asset` object containing useful metadata for displaying and identifying the token in which the fee is denominated * `chain_id`: Chain ID where the fee will be collected * `denom`: Denom of the token in which the fee is denominated * `tx_index`: The zero-indexed identifier of the transaction where the user will incur this fee (For multi-transaction routes, fees may be incurred after the first transaction) * `operation_index`: The zero-indexed entry in the operations array where the user will incur this fee **Included Fees & Current Limitations** The `estimated_fees` array consolidates certain fees for easier display. Currently, it includes: * `FeeType: SMART_RELAY` for CCTP bridge transfers. * `FeeType: BRIDGE` fees for the following specific bridges: `AXELAR`, `HYPERLANE`, and `GO_FAST`. Please note that fees for other bridges (`IBC`, `Stargate`, `LayerZero`, etc.) and other fee types like `SWAP` or `GAS` are not yet included here; the steps where these fees occur are detailed in the `operations` array. We plan to expand the coverage of `estimated_fees` in the future. **Have questions or feedback? Help us get better!** Join [our Discord](https://discord.com/invite/interchain) and select the "Skip Go Developer" role to share your questions and feedback.
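As a sketch of how a frontend might consume the `estimated_fees` array described above, the snippet below sums `usd_amount` across entries for a "total fees" display. The `EstimatedFee` interface is an illustrative subset of the documented fields, not an official type from the Skip Go packages:

```typescript
// Illustrative subset of the `Fee` object fields documented above.
interface EstimatedFee {
  fee_type: "SMART_RELAY" | "SWAP" | "BRIDGE" | "RELAY" | "GAS";
  amount: string;      // fee amount, in the fee token's base denom
  usd_amount: string;  // estimated USD value of the fee
  chain_id: string;    // chain where the fee is collected
  denom: string;       // denom the fee is charged in
}

// Sum the estimated USD value of all fees for a simple total-fees display.
function totalFeesUsd(fees: EstimatedFee[]): number {
  return fees.reduce((sum, fee) => sum + Number(fee.usd_amount), 0);
}
```

A fuller integration would also group fees by `fee_type` or `chain_id` for per-step display, since `tx_index` and `operation_index` tell you where each fee is incurred.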
# Introduction Source: https://docs.cosmos.network/skip-go/general/getting-started This page explains what the Skip Go API is, gives examples of applications built with it, and provides guidance on standard ways to use it. ## 👋 Introduction Welcome to the Skip Go API docs! **Skip Go API is an end-to-end interoperability platform that enables developers to create seamless cross-chain experiences for their end users with a variety of underlying DEXes and cross-chain messaging protocols, including IBC, CCTP, Hyperlane, Eureka, Stargate, Go Fast, and Axelar. We're on a mission to make all interop dead easy for developers and their users!** Unlike most aggregators, Skip Go API is based around the idea of composing many underlying bridging and swapping operations to create multi-hop routes that can connect any two chains and tokens in a single transaction. The goal is to help developers build interfaces where users can teleport any token to any chain from wherever they might be. We've designed it so that even developers who are completely new to interoperability and have never worked with any of the bridges or DEXes we use can build applications and frontends that feel magical and offer: * Any-to-any cross-chain swaps with built-in cross-chain DEX aggregation under the hood (e.g. Swap OSMO on Neutron for ARCH on Archway in a single transaction) * Onboarding to a sovereign chain from an origin chain or token in any ecosystem (e.g. Onboard to Sei from ETH on Blast) * Unified bridge-and-act flows (e.g. Transfer and buy an NFT in a single transaction) * Multi-hop cross-chain transfers with automatic denom & path recommendations for any asset and chain (e.g. Get the most liquid version of ATOM on Terra2) * Composite bridging paths that use multiple underlying bridges for different stages of the path (e.g.
Transfer USDC from Base to Injective over CCTP and IBC behind the scenes) * Real-time cross-chain workflow tracking, along with gas + completion timing estimates * Protection from common cross-chain UX failures (e.g. bridging to a chain where the end user doesn't have gas) ... and much more Skip Go API includes several different endpoint types which allow developers to choose their level of abstraction and build a wide variety of cross-chain applications. These include: * **High-level endpoints** that return fully-formed messages and transactions * **Low-level endpoints** that return detailed pathing data about all the "operations" that make up a route * **Utility endpoints** for multi-chain transaction + packet submission, relaying, tracking, and simulation **What does it cost?** The Skip Go API is free to use and integrate with. For integrators who charge fees on swaps using our affiliate fee functionality, we collect a share of those fees. You can learn more about [pricing and fees in our FAQ](/skip-go/general/faq#does-the-skip-go-api-cost-anything%3F). You will have very limited access if you do not have an API key from us. Please join [our Discord](https://discord.com/invite/interchain) and request one. ## 3 Basic Ways to Use the Skip Go API There are 3 ways to leverage Skip Go API to build cross-chain swapping & transferring experiences, depending on whether you're optimizing for control or for integration speed. Use REST endpoints for low-level control over your interactions. Use the TypeScript library to abstract the complexity of making HTTP calls and access related helper functionality. Embed the Skip Go Widget in your frontend in a single line of code to launch with no developer effort. ## Learn More * [DeepWiki for skip-go](https://deepwiki.com/skip-mev/skip-go): Explore the codebase and ask questions using AI. 
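For the REST option described above, a minimal request sketch follows. The base URL, endpoint path, and exact body schema here are assumptions for illustration (field names follow the parameter names used elsewhere in these docs); consult the API reference for the authoritative shape:

```typescript
// Hypothetical sketch of a POST body for the route endpoint.
// Denoms, chain IDs, and field names are illustrative.
function buildRouteRequest(amountIn: string) {
  return {
    amount_in: amountIn,
    source_asset_denom: "uatom",
    source_asset_chain_id: "cosmoshub-4",
    dest_asset_denom: "uosmo",
    dest_asset_chain_id: "osmosis-1",
    cumulative_fee_bps: "0",   // sum of all affiliate fees, in bps
    allow_multi_tx: true,      // permit routes requiring multiple txs
  };
}

// Assumed production base URL; check the docs for your environment.
async function getRoute(amountIn: string) {
  const res = await fetch("https://api.skip.build/v2/fungible/route", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(buildRouteRequest(amountIn)),
  });
  if (!res.ok) throw new Error(`route request failed: ${res.status}`);
  return res.json();
}
```

The TypeScript client package wraps calls like this (plus signing and tracking helpers), which is why it is the recommended path for most integrators.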
# Transaction Tracking Source: https://docs.cosmos.network/skip-go/general/multi-chain-realtime-transaction-and-packet-tracking This document covers the tooling provided in Skip Go for tracking transaction status across multiple chains and bridge hops. ## Background The `/v2/tx` endpoints include unified APIs that broadcast transactions and provide insight into the status of transactions and their subsequent cross-chain actions. You can use the `/v2/tx` endpoints to track the progress of a cross-chain transfer or swap in realtime -- no matter how many chains or bridges the transfer touches. (For more advanced use cases, the endpoints support tracking multiple distinct transfers initiated in a single transaction). You can also use it to identify failure cases (e.g. a swap that fails due to slippage, an IBC transfer that fails due to inactive relayers) and determine where the user's tokens will become available. For example, if one of your end users initiates a swap that begins with ATOM on Neutron and concludes with ATOM on Osmosis, you can use the lifecycle tracking to report when the ATOM moves from Neutron to Osmosis. ## Basics At a high level, transaction tracking works in two stages: 1. Inform our tracking engine that you want to track a particular transaction by submitting it to the chain via `/v2/tx/submit`, or by submitting it to your own node RPC and then calling `/v2/tx/track` 2. Query the transaction status at some regular interval to get updates on its progress **Tracking Multiple Independent Routes** You can use the endpoint to track multiple transfers initiated in a single transaction, where the status of transfer `i` is given by entry `i` in the `transfers` array. For example, you could transfer ATOM to the Cosmos Hub and OSMO to Osmosis in a single transaction and track them with `transfers[0]` and `transfers[1]` respectively. For a single transfer that has multiple hops (e.g.
transferring ATOM to the Cosmos Hub then to Osmosis), you can track the status of each hop using the entries of `transfers[i].transfer_sequence`. The status endpoint provides a mixture of high-level fields to keep track of the basic progress of your route and low-level fields to provide high visibility into the individual transactions that make up a single transfer. ### Important high-level `/v2/tx/status` fields Each entry in the `transfers` array corresponds to a single route that may contain many steps, and each entry in `transfer_sequence` will contain very detailed information about each step in that route. But there are a few high-level fields in each `transfers` entry that you'll find useful no matter what your route does or what bridges it involves: * `state`: The basic status of your route. **This lets you report the basic state of the route to the user (e.g. in progress, failed, etc...)** * `STATE_SUBMITTED`: Indicates the transaction has been accepted for tracking but no on-chain evidence has been found yet. * `STATE_ABANDONED`: Tracking has stopped after 30 minutes without progress. There is likely a relayer outage, an undetected out-of-gas error, or some other problem. * `STATE_PENDING`: The route is in progress and no errors have occurred yet * `STATE_PENDING_ERROR`: The route is in progress and an error has occurred somewhere, but the error is currently propagating, so the user doesn't have their tokens returned yet. (This state will only occur for protocols that require an *acknowledgement* on the source chain to process an error. IBC only at this time) * `STATE_COMPLETED_SUCCESS`: The route has completed successfully and the user has their tokens on the destination (indicated by `transfer_asset_release`) * `STATE_COMPLETED_ERROR`: The route errored somewhere and the user has their tokens unlocked in one of their wallets. Their tokens are either on the source chain, an intermediate chain, or the destination chain but in the wrong asset.
(`transfer_asset_release` indicates where the tokens are) * `next_blocking_transfer`: Gives the index of the entry in `transfer_sequence` that corresponds to the currently propagating transfer that is immediately blocking the release of the user's tokens -- `next_blocking_transfer.transfer_sequence_index` (it could be propagating forward, or it could be propagating an error ack backward). **This lets you tell the user exactly which operation is pending at a given time** * `transfer_asset_release`: Info about where the user's tokens will be released when the route completes. This populates on `STATE_PENDING_ERROR`, `STATE_COMPLETED_SUCCESS`, or `STATE_COMPLETED_ERROR`. **This lets you tell the user where to recover their funds in the event of a success or failure** *(If you want to better understand how to predict this or where funds might end up, see [Cross-chain Failure Cases](../advanced-transfer/handling-cross-chain-failure-cases))* * `transfer_asset_release.released`: Boolean giving whether the funds are currently available (if the state is `STATE_PENDING_ERROR`, this will be `false`) * `transfer_asset_release.chain_id`: Chain where the assets are released or will be released * `transfer_asset_release.denom`: Denom of the tokens the user will have ### Detailed Info: Using `transfer_sequence` The `transfer_sequence` array consists of `TransferEvent` objects, which give detailed information about an individual transfer operation.
The object acts as a wrapper around one details object for each bridge we support: * `CCTPTransferInfo` * `IBCTransferInfo` * `AxelarTransferInfo` * `HyperlaneTransferInfo` * `GoFastTransferInfo` * `StargateTransferInfo` Each one contains slightly different data and statuses corresponding to the details of its bridge, but they all contain some standard info: * `from_chain_id` * `to_chain_id` * Transactions and block explorer links for all of the significant events in the transfer (send tx, receive tx, and sometimes acknowledge tx) * A status field giving the status of the transfer, which can vary based on bridge #### IBC Transfer Data The `state` field in the `IBCTransferInfo` entries in the `transfer_sequence` array has the following meanings: * `TRANSFER_UNKNOWN` - The transfer state is unknown * `TRANSFER_PENDING` - The send packet for the transfer has been committed and the transfer is pending * `TRANSFER_PENDING_ERROR` - There has been a problem with the transfer (e.g. the packet has timed out) but the user doesn't have their funds unlocked yet because the error is still propagating * `TRANSFER_RECEIVED` - The transfer packet has been received by the destination chain.
It can still fail and revert if it is part of a multi-hop PFM transfer * `TRANSFER_SUCCESS` - The transfer has been successfully completed and will not revert * `TRANSFER_FAILURE` - The transfer has failed The `packet_txs` contain transaction hashes, chain IDs, and block explorer links for up to 4 transactions: * `send_tx`: The packet being sent from the source chain * `receive_tx`: The packet being received on the destination chain * `timeout_tx`: The packet being timed out on the source chain * `acknowledge_tx`: The successful or failed acknowledgement of the packet on the source chain #### Axelar Transfer Data When one of the transfers is an Axelar transfer, the `transfer_sequence` array will give an `axelar_transfer` (`AxelarTransferInfo`), instead of an `ibc_transfer`, which contains different data because: * The Skip Go API may utilize send\_token or contract\_call\_with\_token (two underlying Axelar protocols) depending on which is cheaper and which is required to execute the user's intent * Axelar does not have a notion of packets or acks, like IBC does * Axelar provides a nice high-level UI (Axelarscan) to track the status of their transfers More precise details about all the fields are below: * `type`: an enum of `AXELAR_TRANSFER_SEND_TOKEN` and `AXELAR_TRANSFER_CONTRACT_CALL_WITH_TOKEN`, which indicates whether the Axelar transfer is a [Send Token](https://docs.axelar.dev/dev/send-tokens/overview) or a [Contract Call With Token](https://docs.axelar.dev/dev/general-message-passing/gmp-tokens-with-messages) transfer respectively.
* `axelar_scan_link`: Gives the link to Axelar's bridge explorer (which can help track down and unstick transactions) * `state`: Indicates the current state of the Axelar transfer using the following values: * `AXELAR_TRANSFER_UNKNOWN` - The transfer state is unknown * `AXELAR_TRANSFER_PENDING_CONFIRMATION` - The transfer has been initiated but is pending confirmation by the Axelar network * `AXELAR_TRANSFER_PENDING_RECEIPT` - The transfer has been confirmed by the Axelar network and is pending receipt at the destination * `AXELAR_TRANSFER_SUCCESS` - The transfer has been completed successfully and assets have been received at the destination * `AXELAR_TRANSFER_FAILURE` - The transfer has failed * `txs`: The field schema depends on the `type` of the transfer * If `type` is `AXELAR_TRANSFER_SEND_TOKEN`, there are 3 txs: * `send_tx` (initiating the transfer) * `confirm_tx` (confirming the transfer on Axelar) * `execute_tx` (executing the transfer on the destination) * If `type` is `AXELAR_TRANSFER_CONTRACT_CALL_WITH_TOKEN`: * `send_tx` (initiating the transfer) * `gas_paid_tx` (paying for the relayer gas on the source chain) * `approve_tx` (approving the transaction on Axelar - only exists when the destination chain is an EVM chain) * `confirm_tx` (confirming the transfer on Axelar - only exists when the destination chain is a Cosmos chain) * `execute_tx` (executing the transfer on the destination) #### CCTP Transfer Data When one of the transfers is a CCTP transfer, the `transfer_sequence` array will give a `cctp_transfer` (`CCTPTransferInfo`), instead of an `ibc_transfer`, which contains different data because: * CCTP works by Circle attesting to & signing off on transfers * There's no notion of an acknowledgement in CCTP More precise details about the different/new fields are below: * `state` gives the status of the CCTP transfer: * `CCTP_TRANSFER_UNKNOWN` - Unknown error * `CCTP_TRANSFER_SENT` - The burn transaction on the source chain has executed *
`CCTP_TRANSFER_PENDING_CONFIRMATION` - The CCTP transfer is pending confirmation by the CCTP attestation API * `CCTP_TRANSFER_CONFIRMED` - The CCTP transfer has been confirmed by the CCTP attestation API but not yet received on the destination chain * `CCTP_TRANSFER_RECEIVED` - The CCTP transfer has been received at the destination chain * `txs` contains the chain IDs, block explorer links, and hashes for two transactions: * `send_tx`: The transaction that submitted the CCTP burn action on the source chain to initiate the transfer * `receive_tx`: The transaction on the destination chain where the user receives their funds **Note:** CCTP transfers have a maximum limit of 1,000,000 USDC per transaction. #### Hyperlane Transfer Data When one of the transfers is a Hyperlane transfer, the `transfer_sequence` array will give a `hyperlane_transfer` (`HyperlaneTransferInfo`), instead of an `ibc_transfer`, which contains different data because: * Hyperlane is a very flexible protocol where the notion of "approving/verifying" the transfer is undefined / up to the bridge developer to implement * There's no notion of an acknowledgement in Hyperlane More precise details about the different/new fields are below: * `state` gives the status of the Hyperlane transfer: * `HYPERLANE_TRANSFER_UNKNOWN` - Unknown error * `HYPERLANE_TRANSFER_SENT` - The Hyperlane transfer transaction on the source chain has executed * `HYPERLANE_TRANSFER_FAILED` - The Hyperlane transfer failed * `HYPERLANE_TRANSFER_RECEIVED` - The Hyperlane transfer has been received at the destination chain * `txs` contains the chain IDs, block explorer links, and hashes for two transactions: * `send_tx`: The transaction that initiated the Hyperlane transfer on the source chain * `receive_tx`: The transaction on the destination chain where the user receives their funds #### OPInit Transfer Data When one of the transfers is an OPInit transfer, the `transfer_sequence` array will give an `op_init_transfer`
(`OPInitTransferInfo`), instead of an `ibc_transfer`, which contains different data because: * The OPInit bridge is the Initia ecosystem's native bridging solution facilitating transfers between Initia and the Minitias. * There's no notion of an acknowledgement in the OPInit bridge More precise details about the different/new fields are below: * `state` gives the status of the OPInit transfer: * `OPINIT_TRANSFER_UNKNOWN` - Unknown error * `OPINIT_TRANSFER_SENT` - The deposit transaction on the source chain has executed * `OPINIT_TRANSFER_RECEIVED` - The OPInit transfer has been received at the destination chain * `txs` contains the chain IDs, block explorer links, and hashes for two transactions: * `send_tx`: The transaction that submitted the OPInit deposit action on the source chain to initiate the transfer * `receive_tx`: The transaction on the destination chain where the user receives their funds #### Go Fast Transfer Data When one of the transfers is a `GoFastTransfer`, the `transfer_sequence` array will include a `go_fast_transfer` (`GoFastTransferInfo`). This field includes specific information about user-initiated intents and solver fulfillments, which require specific data fields to track the transfer process. Below are detailed explanations of the different fields and their purposes: * `from_chain_id`: The chain ID where the transfer originates (source chain). * `to_chain_id`: The chain ID where the assets are being sent (destination chain). * `state`: Indicates the current status of the transfer. Possible values are: * `GO_FAST_TRANSFER_UNKNOWN`: An unknown error has occurred. * `GO_FAST_TRANSFER_SENT`: The user's intent has been successfully submitted on the source chain. * `GO_FAST_POST_ACTION_FAILED`: The transfer's post-intent action failed. For example, a swap on the destination chain failed due to slippage. * `GO_FAST_TRANSFER_TIMEOUT`: The transfer did not complete within the expected time frame.
* `GO_FAST_TRANSFER_FILLED`: The transfer was successfully fulfilled on the destination chain. * `GO_FAST_TRANSFER_REFUNDED`: The user's assets have been refunded on the source chain. * `txs`: Contains transaction details related to the Go Fast transfer: * `order_submitted_tx`: The transaction where the user called initiateIntent on the source chain. * `order_filled_tx`: The transaction where the solver called fulfill on the destination chain. * `order_refunded_tx`: The transaction where the user received a refund on the source chain, if applicable. * `order_timeout_tx`: The transaction indicating a timeout occurred in the transfer process. * `error_message`: A message describing the error that occurred during the transfer, if applicable. When tracking a Go Fast transfer, you can use the `GoFastTransferInfo` to monitor the progress and status of your asset transfer between chains. For instance, if the state is `GO_FAST_TRANSFER_FILLED`, you know that the transfer was successful and your assets should be available on the destination chain. If the state is `GO_FAST_TRANSFER_TIMEOUT`, you can check the `order_timeout_tx` for details on the timeout event. #### Stargate Transfer Data When one of the transfers is a `StargateTransfer`, the `transfer_sequence` array will include a `stargate_transfer` (`StargateTransferInfo`). This provides detailed information about a cross-chain asset transfer powered by Stargate, a popular cross-chain bridging protocol. Below are detailed explanations of the fields and their purposes: * `from_chain_id`: The chain ID where the transfer originates (source chain). * `to_chain_id`: The chain ID where the assets are being sent (destination chain). * `state`: Indicates the current status of the Stargate transfer. Possible values are: * `STARGATE_TRANSFER_UNKNOWN`: An unknown error has occurred or the state cannot be determined.
  * `STARGATE_TRANSFER_SENT`: The transfer has been successfully initiated on the source chain (i.e., the assets have left the source chain and are in transit).
  * `STARGATE_TRANSFER_RECEIVED`: The transfer has been successfully completed on the destination chain (i.e., the assets are now available at the recipient address on the destination chain).
  * `STARGATE_TRANSFER_FAILED`: The transfer encountered an error during bridging and did not complete as intended.
* `txs`: Contains transaction details related to the Stargate transfer:
  * `send_tx`: The transaction on the source chain that initiated the Stargate transfer.
  * `receive_tx`: The transaction on the destination chain where the assets were received.
  * `error_tx`: A transaction (if any) related to the failure of the transfer.

When monitoring a Stargate transfer, you can use `StargateTransferInfo` to confirm that your assets have safely bridged between chains or to identify if and where a problem has occurred.

The Go Fast Protocol involves interactions with solvers who fulfill transfer intents. The additional transaction fields help provide transparency and traceability throughout the transfer process, ensuring users can track each step and identify any issues that may arise.

## Detailed Example of Typical Usage

This section walks through an example of how a developer would use the API to track the progress of a route that may include multiple transfers. In this particular example, we'll cover a simple 2-hop IBC transfer from Axelar to the Cosmos Hub through Osmosis.

*Usage is similar for tracking Axelar transfers or transfers that include multiple hops over distinct bridges, but the data structures change slightly depending on which underlying bridge is being used.*

### 1. Call `/tx/submit` to broadcast a transaction

Post a signed user transaction (that you can form using the `/fungible` endpoints) to the `/submit` endpoint.
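As a sketch, building the `/v2/tx/submit` request might look like the helper below. The helper itself, the base URL, and the body field names `tx` (base64-encoded signed transaction) and `chain_id` are assumptions for illustration; confirm the exact schema against the `/tx` API reference.

```typescript
// Sketch only: build the POST payload for /v2/tx/submit.
// The field names `tx` and `chain_id` are assumptions here;
// consult the /tx API reference for the authoritative schema.
function buildSubmitRequest(
  apiBase: string,
  signedTxBase64: string,
  chainId: string
): { url: string; body: { tx: string; chain_id: string } } {
  return {
    url: `${apiBase}/v2/tx/submit`,
    body: { tx: signedTxBase64, chain_id: chainId },
  };
}

// Usage: POST `body` as JSON to `url`, then poll /v2/tx/status with the
// `tx_hash` returned in the response. The tx string below is a placeholder.
const req = buildSubmitRequest(
  "https://api.skip.build",
  "CpoBCpcB...",
  "axelar-dojo-1"
);
```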
The Skip Go API will handle the broadcasting of this transaction and asynchronously begin tracking the status of this transaction and any subsequent transfers. A successful response looks like the following: ```JSON theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "tx_hash": "AAEA76709215A808AF6D7FC2B8FBB8746BC1F196E46FFAE84B79C6F6CD0A79C9" } ``` It indicates that the transaction was accepted by the Skip Go API and its status can be tracked using the returned `tx_hash`. The transaction is broadcast using `BROADCAST_MODE_SYNC` and in the event that a transaction is rejected by the node, the `/submit` endpoint will return a 400 response along with the failure reason as shown below: ```JSON theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "code": 3, "message": "insufficient fees; got: 0uosmo which converts to 0uosmo. required: 2000uosmo: insufficient fee", "details": [] } ``` **Tracking a transaction that was not broadcast using `/submit`** If a transaction was not broadcast through the `/submit` endpoint and has already landed on chain, the `/track` endpoint can be used to initiate tracking of the transaction's progress. ### 2. Call `/status` to query the status of the transaction and IBC transfer progress Skip Go API continually indexes chain state to determine the state of the transaction and the subsequent IBC transfer progress. This information can be queried using the `/status` endpoint. It will initially yield a response that looks like the following: * There's a top-level `transfers` field, which gives an array where each entry corresponds to a **single sequence of transfers**. This does not mean there's one entry in the `transfers` field for every bridging operation. In general, one `transfer` could consist of an arbitrarily long sequence of swaps and transfers over potentially multiple bridges. 
`transfers` is an array because one transaction can initiate potentially several distinct and independent transfers (e.g. transferring OSMO to Osmosis and ATOM to the Hub) in the same tx.

* The `state` field will give `STATE_SUBMITTED`, indicating the transaction has been accepted for tracking by the Skip Go API but no events have been indexed yet:

```JSON theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "transfers": [
    {
      "state": "STATE_SUBMITTED",
      "transfer_sequence": [],
      "next_blocking_transfer": null,
      "transfer_asset_release": null,
      "error": null
    }
  ]
}
```

Once indexing for the transaction has begun, the `state` will change to `STATE_PENDING`.

* The status of any transfers along the transfer sequence will be returned in the `transfer_sequence` field as shown in the example response below.
  * The entries in the `transfer_sequence` correspond to transfers and will be represented by different objects depending on which bridge is being used (e.g. Axelar or IBC).
* The `next_blocking_transfer` field gives some information about the next blocking transfer in the `transfer_sequence` field.
  * The `transfer_sequence_index` indicates which transfer in the `transfer_sequence` field is blocking progress.
* The `transfer_asset_release` field will be populated with information about the asset release as it becomes known.
  * The `chain_id` and `denom` fields indicate the location and asset being released.
  * The `released` field indicates whether the assets are accessible.

The `transfer_asset_release` field may become populated in advance of asset release if it can be determined with certainty where the eventual release will be. This will happen, for example, in a transfer sequence that is a PFM-enabled sequence of IBC transfers when one hop fails due to packet timeout or an acknowledgement failure. The transfer sequence will revert and the `transfer_asset_release` field will indicate that the assets will be released on the initial chain.
```JSON theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "transfers": [ { "state": "STATE_PENDING", "transfer_sequence": [ { "ibc_transfer": { "from_chain_id": "axelar_dojo-1", "to_chain_id": "osmosis-1", "state": "TRANSFER_PENDING", "packet": { "send_tx": { "chain_id": "axelar-dojo-1", "tx_hash": "AAEA76709215A808AF6D7FC2B8FBB8746BC1F196E46FFAE84B79C6F6CD0A79C9", "explorer_link": "https://www.mintscan.io/axelar/transactions/AAEA76709215A808AF6D7FC2B8FBB8746BC1F196E46FFAE84B79C6F6CD0A79C9" }, "receive_tx": null, "acknowledge_tx": null, "timeout_tx": null, "error": null } } } ], "next_blocking_transfer": { "transfer_sequence_index": 0 }, "transfer_asset_release": null, "error": null } ] } ``` The transfer assets will be released before all expected acknowledgements have been indexed. When the transfer sequence has reached this state, the `status` will be updated to `STATE_RECEIVED` as shown in the example response below. Note that `transfer_asset_release` now indicates the chain ID of the chain where the assets are released and the denomination of the released assets. 
```JSON theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "transfers": [
    {
      "state": "STATE_RECEIVED",
      "transfer_sequence": [
        {
          "ibc_transfer": {
            "from_chain_id": "axelar_dojo-1",
            "to_chain_id": "osmosis-1",
            "state": "TRANSFER_PENDING",
            "packet": {
              "send_tx": {
                "chain_id": "axelar-dojo-1",
                "tx_hash": "AAEA76709215A808AF6D7FC2B8FBB8746BC1F196E46FFAE84B79C6F6CD0A79C9",
                "explorer_link": "https://www.mintscan.io/axelar/transactions/AAEA76709215A808AF6D7FC2B8FBB8746BC1F196E46FFAE84B79C6F6CD0A79C9"
              },
              "receive_tx": {
                "chain_id": "osmosis-1",
                "tx_hash": "082A6C8024998EC277C2B90BFDDB323CCA506C24A6730C658B9B6DC653198E3D",
                "explorer_link": "https://www.mintscan.io/osmosis/transactions/082A6C8024998EC277C2B90BFDDB323CCA506C24A6730C658B9B6DC653198E3D"
              },
              "acknowledge_tx": null,
              "timeout_tx": null,
              "error": null
            }
          }
        },
        {
          "ibc_transfer": {
            "from_chain_id": "osmosis-1",
            "to_chain_id": "cosmoshub-4",
            "state": "TRANSFER_SUCCESS",
            "packet": {
              "send_tx": {
                "chain_id": "osmosis-1",
                "tx_hash": "082A6C8024998EC277C2B90BFDDB323CCA506C24A6730C658B9B6DC653198E3D",
                "explorer_link": "https://www.mintscan.io/osmosis/transactions/082A6C8024998EC277C2B90BFDDB323CCA506C24A6730C658B9B6DC653198E3D"
              },
              "receive_tx": {
                "chain_id": "cosmoshub-4",
                "tx_hash": "913E2542EBFEF2E885C19DD9C4F8ECB6ADAFFE59D60BB108FAD94FBABF9C5671",
                "explorer_link": "https://www.mintscan.io/cosmos/transactions/913E2542EBFEF2E885C19DD9C4F8ECB6ADAFFE59D60BB108FAD94FBABF9C5671"
              },
              "acknowledge_tx": null,
              "timeout_tx": null,
              "error": null
            }
          }
        }
      ],
      "next_blocking_transfer": null,
      "transfer_asset_release": {
        "chain_id": "cosmoshub-4",
        "denom": "uatom",
        "released": true
      },
      "error": null
    }
  ]
}
```

Once it has been determined that all packets along the transfer sequence have either been acknowledged or timed out, `state` will be updated to `STATE_COMPLETED_SUCCESS` as shown in the example response below. Note that `next_blocking_transfer` is now null since the transfer is complete.
```JSON theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} { "transfers": [ { "state": "STATE_COMPLETED_SUCCESS", "transfer_sequence": [ { "ibc_transfer": { "from_chain_id": "axelar_dojo-1", "to_chain_id": "osmosis-1", "state": "TRANSFER_SUCCESS", "packet": { "send_tx": { "chain_id": "axelar-dojo-1", "tx_hash": "AAEA76709215A808AF6D7FC2B8FBB8746BC1F196E46FFAE84B79C6F6CD0A79C9", "explorer_link": "https://www.mintscan.io/axelar/transactions/AAEA76709215A808AF6D7FC2B8FBB8746BC1F196E46FFAE84B79C6F6CD0A79C9" }, "receive_tx": { "chain_id": "osmosis-1", "tx_hash": "082A6C8024998EC277C2B90BFDDB323CCA506C24A6730C658B9B6DC653198E3D", "explorer_link": "https://www.mintscan.io/osmosis/transactions/082A6C8024998EC277C2B90BFDDB323CCA506C24A6730C658B9B6DC653198E3D" }, "acknowledge_tx": { "chain_id": "axelar-dojo-1", "tx_hash": "C9A36F94A5B2CA9C7ABF20402561E46FD8B80EBAC4F0D5B7C01F978E34285CCA", "explorer_link": "https://www.mintscan.io/axelar/transactions/C9A36F94A5B2CA9C7ABF20402561E46FD8B80EBAC4F0D5B7C01F978E34285CCA" }, "timeout_tx": null, "error": null } } }, { "ibc_transfer": { "from_chain_id": "osmosis-1", "to_chain_id": "cosmoshub-4", "state": "TRANSFER_SUCCESS", "packet": { "send_tx": { "chain_id": "osmosis-1", "tx_hash": "082A6C8024998EC277C2B90BFDDB323CCA506C24A6730C658B9B6DC653198E3D", "explorer_link": "https://www.mintscan.io/osmosis/transactions/082A6C8024998EC277C2B90BFDDB323CCA506C24A6730C658B9B6DC653198E3D" }, "receive_tx": { "chain_id": "cosmoshub-4", "tx_hash": "913E2542EBFEF2E885C19DD9C4F8ECB6ADAFFE59D60BB108FAD94FBABF9C5671", "explorer_link": "https://www.mintscan.io/cosmos/transactions/913E2542EBFEF2E885C19DD9C4F8ECB6ADAFFE59D60BB108FAD94FBABF9C5671" }, "acknowledge_tx": { "chain_id": "osmosis-1", "tx_hash": "1EDB2886E6FD59D6B9C096FBADB1A52585745694F4DFEE3A3CD3FF0153307EBC", "explorer_link": "https://www.mintscan.io/osmosis/transactions/1EDB2886E6FD59D6B9C096FBADB1A52585745694F4DFEE3A3CD3FF0153307EBC" }, "timeout_tx": 
null,
              "error": null
            }
          }
        }
      ],
      "next_blocking_transfer": null,
      "transfer_asset_release": {
        "chain_id": "cosmoshub-4",
        "denom": "uatom",
        "released": true
      },
      "error": null
    }
  ]
}
```

Any packet acknowledgement errors will be surfaced in the `error` field for the relevant packet as follows:

```JSON theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "transfers": [
    {
      "state": "STATE_COMPLETED_ERROR",
      "transfer_sequence": [
        {
          "ibc_transfer": {
            "from_chain_id": "osmosis-1",
            "to_chain_id": "cosmoshub-4",
            "state": "TRANSFER_FAILED",
            "packet": {
              "send_tx": {
                "chain_id": "osmosis-1",
                "tx_hash": "112714A8144019161CAAA8317016505A9A1DDF5DA7B146320A640814DDFA41C0",
                "explorer_link": "https://www.mintscan.io/osmosis/transactions/112714A8144019161CAAA8317016505A9A1DDF5DA7B146320A640814DDFA41C0"
              },
              "receive_tx": {
                "chain_id": "cosmoshub-4",
                "tx_hash": "E7FB2152D8EA58D7F377D6E8DC4172C99791346214387B65676A723FCFC7C980",
                "explorer_link": "https://www.mintscan.io/cosmos/transactions/E7FB2152D8EA58D7F377D6E8DC4172C99791346214387B65676A723FCFC7C980"
              },
              "acknowledge_tx": {
                "chain_id": "osmosis-1",
                "tx_hash": "8C9C1FA55E73CD03F04813B51C697C1D98E326E1C71AB568A2D23BF8AEAFFEC7",
                "explorer_link": "https://www.mintscan.io/osmosis/transactions/8C9C1FA55E73CD03F04813B51C697C1D98E326E1C71AB568A2D23BF8AEAFFEC7"
              },
              "timeout_tx": null,
              "error": {
                "code": 1,
                "message": "ABCI code: 1: error handling packet: see events for details"
              }
            }
          }
        }
      ],
      "next_blocking_transfer": null,
      "transfer_asset_release": {
        "chain_id": "osmosis-1",
        "denom": "uosmo",
        "released": true
      },
      "error": null
    }
  ]
}
```

Any execution errors for the initial transaction will be surfaced in the `error` field at the top level of the response as follows:

```JSON theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "transfers": [
    {
      "state": "STATE_COMPLETED_ERROR",
      "transfer_sequence": [],
      "next_blocking_transfer": null,
      "transfer_asset_release": null,
      "error": {
        "code": 11,
        "message": "out of gas in location: Loading CosmWasm module: sudo; gasWanted: 200000, gasUsed: 259553: out of gas"
      }
    }
  ]
}
```

# Overview & Common Usage Patterns

Source: https://docs.cosmos.network/skip-go/general/overview-and-typical-usage

## Summary

This doc provides a high-level overview of the different kinds of methods available in the Skip Go API & describes how integrators typically use them in combination to create a cross-chain swapping frontend like [go.skip.build](https://go.skip.build). See the [API reference](/skip-go/api-reference/prod) for more information on each endpoint.

## Overview of Methods

### `/info` methods: Functionality for Retrieving General Purpose Info

The `/info` endpoints & their corresponding functions in `@skip-go/client` provide general purpose metadata about the chains and bridges that the Skip Go API supports. The most important `info` methods include:

* `/v2/info/chains` (`chains()`): Get info about all supported chains, including capabilities, address type, logo, etc...
* `/v2/info/bridges` (`bridges()`): Get basic info about all supported bridges, including logo and name

### `/fungible` methods: Functionality for fungible token swaps and transfers

The `/v2/fungible` endpoints & their corresponding functions in `@skip-go/client` provide APIs for cross-chain swaps and transfers of fungible tokens. In the background, the API provides automatic DEX aggregation, bridge routing, denom recommendation, and relaying for all actions. The most important `fungible` methods include:

* `/v2/fungible/route` (`route()`): Get a swap/transfer route and quote between a pair of tokens & chains. You can customize this request significantly to only consider particular DEXes or bridges, to add route splitting for better prices, and much more.
* `/v2/fungible/msgs` (`messages()`): Generates the transaction data for the transaction(s) the user must sign to execute a given route
* `/v2/fungible/msgs_direct` (`messagesDirect()`): Generates a route, quote, and associated transaction data at the same time
* `/v2/fungible/venues` (`venues()`): Get metadata for all supported swapping venues (DEXes, liquid staking protocols, etc...), including name and logo.

(There are many other methods providing more specific functionality for power users. See the API docs for more details.)

### `/tx` methods: Functionality for Tracking In-Flight Transactions

The `/v2/tx` endpoints & their corresponding functions in `@skip-go/client` provide functionality for submitting transactions and tracking the status of cross-chain transactions with a unified interface across all underlying hops & bridge types. The most important `tx` methods include:

* `/v2/tx/submit` (`submitTransaction()`): Submits a transaction on chain through Skip's nodes and registers the transaction for tracking with the Skip Go API *(Recommended especially for Solana and other high-congestion networks where successfully submitting a transaction can be tricky)*
* `/v2/tx/track` (`trackTransaction()`): Registers a transaction for tracking with the Skip Go API (Often used instead of `/submit` when an integrator has their own chain nodes for submitting)
* `/v2/tx/status` (`transactionStatus()`): Get the current status of a multi-hop transaction

[`/tx` API reference](../api-reference/prod/transaction/post-v2txsubmit)

## Typical Usage in Cross-chain Swapping Frontend

On a cross-chain swapping and transferring frontend, integrators typically:

1. Use the `info` methods to populate the list of potential starting & ending chains & assets
2. Use `/fungible/route` (`route()`) to get a quote when the user selects all their chains & tokens and inputs one of their amounts
3. Use `/fungible/msgs` (`messages()`) to get a transaction for the user to sign after they've locked in the route & begun the transaction creation process
4. Use `/tx/track` (`trackTransaction()`) to register the transaction for tracking (or `/tx/submit` to register and submit it on-chain)
5. Use `/tx/status` (`transactionStatus()`) to get the real-time status of the transaction as it progresses across bridges and/or chains.

# Post-Route Actions

Source: https://docs.cosmos.network/skip-go/general/post-route-actions

How to specify actions to perform after a route of transfers/swaps is completed

Use the `post_route_handler` parameter of the `/v2/fungible/msgs` endpoint to define actions that will be executed on the destination chain after a route transfer or swap is completed. These actions are executed within the same transaction as the original swap or transfer.

This handler allows developers to build omni-chain and omni-token workflows where users can swap, liquid stake, deposit, buy an NFT, or take any other action starting from any chain or any token in Cosmos -- all in a single transaction.

This parameter currently supports:

1. CosmWasm contract calls on the destination chain
2. `autopilot` support for liquid staking interactions on Stride

### Background Info

All `post_route` actions must have the following characteristics:

* **Permissionless:** Skip Go can only support permissionless actions because the underlying protocols (e.g. IBC hooks, IBC callbacks, packet-forward-middleware) derive the addresses they use to call contracts based on the origin of the transfer. Skip Go supports both `ibc-hooks` and `ibc-callbacks` depending on which module the destination chain has implemented (they are incompatible approaches), automatically generating the appropriate message payloads for each. This means one user originating on two different chains or starting with two different tokens will eventually call the final contract / module with different addresses.
  You can only reliably permission actions that you know will 1) always originate on the same chain and 2) always take the same path to the destination chain. In general, we recommend not making this assumption unless you are an interoperability expert.

* **Single-token input:** The underlying IBC transfer protocol (ICS-20) doesn't support transfers of more than 1 token denom in a single transfer message, so we can only send 1 token denom to the final contract or module at a time. This means the contract or module in the `post_route_handler` must not require multiple token denoms sent to it simultaneously. For example, a classic LP action where the user must provide tokens on both sides of the pool simultaneously would not work.

**Use authority delegation with a local address for permissionless actions that enable permissioned follow-ups**

Commonly, the first interaction with a contract is permissionless, but it awards the end user some kind of permissioned authority to perform follow-on actions (e.g. staking enables unstaking + collecting rewards; depositing enables withdrawing and earning yield) or receipt tokens (e.g. LPing produces LP receipt tokens). As a cross-chain caller, you should generally avoid contracts that implicitly delegate these authorities or give receipt tokens to the caller, because the caller will depend on the path the user has taken over IBC. Instead, you should look for contracts that support explicit authority delegation -- i.e. contracts that explicitly assign permissions to an address in the calldata that may be different than the caller and the address sending the tokens. Examples of this pattern are:

* Astroport’s `receiver` parameter in the `provide_liquidity` message
* Mars’ `on_behalf_of` parameter in the `deposit` message
* Astroport’s `to` parameter in the `swap` message

We recommend setting these authority delegation parameters to the user's local address on the destination chain, so they can perform future actions locally.
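As a sketch of this pattern, the helper below builds a `wasm_msg` post-route handler whose inner Astroport-style `swap` message sets `to` to the user's local address. The helper is not part of `@skip-go/client`, and the contract and user addresses are illustrative placeholders.

```typescript
// Illustrative helper (not part of @skip-go/client): build a `wasm_msg`
// post_route_handler whose inner message delegates authority to the
// user's local address via Astroport's `to` field.
type WasmPostRouteHandler = {
  wasm_msg: { contract_address: string; msg: string };
};

function buildWasmPostRouteHandler(
  contractAddress: string,
  innerMsg: object
): WasmPostRouteHandler {
  // Note: `msg` must be a JSON *string*, not a nested object.
  return {
    wasm_msg: {
      contract_address: contractAddress,
      msg: JSON.stringify(innerMsg),
    },
  };
}

// Placeholder addresses for illustration only.
const handler = buildWasmPostRouteHandler("neutron1contract...", {
  swap: {
    offer_asset: {
      info: { native_token: { denom: "ibc/F082...E349" } },
      amount: "10000",
    },
    // Authority delegated to the user's local address on the destination chain.
    to: "neutron1localuseraddress...",
  },
});
```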
### CosmWasm

To call a CosmWasm contract on the destination chain, the following requirements must be satisfied:

1. The destination chain supports CosmWasm & either `ibc-hooks` or `ibc-callbacks`.
2. The chain in the route immediately before the destination chain supports IBC memos as well as `packet-forward-middleware`.

To specify a CosmWasm contract call on the destination chain, pass a `wasm_msg` as the `post_route_handler` in the `/v2/fungible/msgs` call with:

* `contract_address`: The target contract address
* `msg`: JSON string of the message to pass to the contract

In addition, set the destination address in the `address_list` to the address of the contract.

For example, this is a request for a transfer of USDC from Axelar to Neutron, with a post-route handler that swaps the USDC using an Astroport pool on Neutron:

```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "source_asset_denom": "uusdc",
  "source_asset_chain_id": "axelar-dojo-1",
  "dest_asset_denom": "ibc/F082B65C88E4B6D5EF1DB243CDA1D331D002759E938A0F5CD3FFDC5D53B3E349",
  "dest_asset_chain_id": "neutron-1",
  "amount_in": "1000000",
  "amount_out": "1000000",
  "address_list": [
    "axelar1x8ad0zyw52mvndh7hlnafrg0gt284ga7u3rez0",
    "neutron1l3gtxnwjuy65rzk63k352d52ad0f2sh89kgrqwczgt56jc8nmc3qh5kag3"
  ],
  "operations": [
    {
      "transfer": {
        "port": "transfer",
        "channel": "channel-78",
        "chain_id": "axelar-dojo-1",
        "pfm_enabled": false,
        "dest_denom": "ibc/F082B65C88E4B6D5EF1DB243CDA1D331D002759E938A0F5CD3FFDC5D53B3E349",
        "supports_memo": true
      }
    }
  ],
  "post_route_handler": {
    "wasm_msg": {
      "contract_address": "neutron1l3gtxnwjuy65rzk63k352d52ad0f2sh89kgrqwczgt56jc8nmc3qh5kag3",
      "msg": "{\"swap\":{\"offer_asset\":{\"info\":{\"native_token\":{\"denom\":\"ibc/F082B65C88E4B6D5EF1DB243CDA1D331D002759E938A0F5CD3FFDC5D53B3E349\"}},\"amount\":\"10000\"},\"to\":\"neutron1x8ad0zyw52mvndh7hlnafrg0gt284ga7uqunnf\"}}"
    }
  }
}
```

Note that the last address provided in the
`address_list` is the address of the pool contract on Neutron, rather than a user address. The message returned from this request uses `ibc-hooks` on Neutron to perform the CosmWasm contract call atomically with the IBC transfer.

### Autopilot

To use an Autopilot post-route action, the following requirements must be satisfied:

1. The destination chain supports the `autopilot` module. Currently, this means the destination chain must be `stride-1`.
2. The chain in the route immediately before the destination chain supports IBC memos as well as `packet-forward-middleware`.

To specify an Autopilot action on the destination chain, pass an `autopilot_msg` as the `post_route_handler` in the `/v2/fungible/msgs` call with:

* `receiver`: Set to the address on behalf of which you're performing the action
* `action`: An enum giving the action that you wish to execute
  * This may be one of `LIQUID_STAKE` (for liquid staking an asset) or `CLAIM` (for updating airdrop claim addresses).

For example, this is a request for a transfer of ATOM from the Cosmos Hub to Stride, with a post-route handler that atomically liquid stakes the transferred ATOM on Stride, sending stATOM to the specified receiver on Stride:

```json theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "source_asset_denom": "uatom",
  "source_asset_chain_id": "cosmoshub-4",
  "dest_asset_denom": "ibc/27394FB092D2ECCD56123C74F36E4C1F926001CEADA9CA97EA622B25F41E5EB2",
  "dest_asset_chain_id": "stride-1",
  "amount_in": "1000000",
  "amount_out": "1000000",
  "address_list": [
    "cosmos1x8ad0zyw52mvndh7hlnafrg0gt284ga7cl43fw",
    "stride1x8ad0zyw52mvndh7hlnafrg0gt284ga7m54daz"
  ],
  "operations": [
    {
      "transfer": {
        "port": "transfer",
        "channel": "channel-391",
        "chain_id": "cosmoshub-4",
        "pfm_enabled": true,
        "dest_denom": "ibc/27394FB092D2ECCD56123C74F36E4C1F926001CEADA9CA97EA622B25F41E5EB2",
        "supports_memo": true
      }
    }
  ],
  "post_route_handler": {
    "autopilot_msg": {
      "receiver":
"stride1x8ad0zyw52mvndh7hlnafrg0gt284ga7m54daz",
      "action": "LIQUID_STAKE"
    }
  }
}
```

**Have questions or feedback? Help us get better!**

Join [our Discord](https://discord.com/invite/interchain) and select the "Skip Go Developer" role to share your questions and feedback.

# Quickstart Guide

Source: https://docs.cosmos.network/skip-go/general/quickstart-guide

This guide walks you through the process of setting up and using the Skip Go Client to perform a cross-chain swap from USDC on Noble to TIA on Celestia. For a more in-depth guide, check out our [detailed walkthrough](../client/getting-started), which covers both Solana and EVM transactions. You can also explore more about [EVM Transactions](/skip-go/advanced-transfer/evm-transactions) or dive into the specifics of [SVM Transactions](/skip-go/advanced-transfer/svm-transaction-details).

## Prerequisites

* Browser environment setup with **Keplr** installed (`create-next-app` is recommended)
* **Node.js** and **npm**

Install the library using npm or yarn:

```Shell npm theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
npm install @skip-go/client
```

```Shell yarn theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
yarn add @skip-go/client
```

To start integrating with the Skip Go API, configure the library once using `setClientOptions` or `setApiOptions`. After configuration, you import and call functions like `route` and `executeRoute` directly.

```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
import {
  setClientOptions,
  setApiOptions,
  route,
  executeRoute,
  // ...other helpers you might use
} from "@skip-go/client";

// Example initialization for executeRoute usage
setClientOptions({
  apiUrl: "YOUR_API_URL", // optional: defaults to Skip API
  apiKey: "YOUR_API_KEY", // optional: required for some features
  // endpointOptions, aminoTypes, registryTypes, etc.
}); // If only calling API functions directly, you can instead use: setApiOptions({ apiUrl: "YOUR_API_URL", apiKey: "YOUR_API_KEY" }); ``` Now, we can use the `route` function to request a quote and route to swap USDC on Noble to TIA on Celestia. ```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} const routeResult = await route({ sourceAssetDenom: 'uusdc', sourceAssetChainID: 'noble-1', destAssetDenom: 'utia', destAssetChainID: 'celestia', amountIn: '1000000', // 1 uusdc smartRelay: true }); ``` **Understanding the Route Response** The route response contains important information about the swap process: * **`amountOut`**: The estimated amount the user will receive after the swap, net of all fees and price impact * **`requiredChainAddresses`**: Chain IDs where you need to provide user addresses when generating the transaction * **`operations`**: Steps involved in moving from the source to the destination token For more details, see the [/route](/skip-go/api-reference/prod/fungible/post-v2fungibleroute) endpoint reference. After generating a route, you need to provide user addresses for the required chains. The `routeResult.requiredChainAddresses` array lists the chain IDs for which addresses are needed. **Only use addresses your user can sign for.** Funds could get stuck in any address you provide, including intermediate chains in certain failure conditions. Ensure your user can sign for each address you provide. See [Cross-chain Failure Cases](../advanced-transfer/handling-cross-chain-failure-cases) for more details. We recommend storing the user's addresses and creating a function like `getAddress` that retrieves the address based on the chain ID (see an example [here](https://github.com/skip-mev/skip-go-example/blob/c55d9208bb46fbf1a4934000e7ec4196d8ccdca4/pages/index.tsx#L99)). 
```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} // get user addresses for each requiredChainAddress to execute the route const userAddresses = await Promise.all( routeResult.requiredChainAddresses.map(async (chainID) => ({ chainID, address: await getAddress(chainID), })) ); ``` **Never attempt to derive an address on one chain from an address on another chain** Whenever you need a user address, please request it from the corresponding wallet or signer. Do not attempt to use bech32 cross-chain derivation. If you attempt to derive an address on one chain from an address on another chain, you may derive an address that the user cannot actually sign for if the two chains have different address-derivation processes. For example, if you derive a Cosmos address from an Ethereum address, you will get an address that the user cannot sign for and thus risk lost tokens. Once you have a route, you can execute it in a single function call by passing in the route, the user addresses for at least the chains the route includes, and optional callback functions. This also registers the transaction for tracking. ```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} await executeRoute({ route: routeResult, userAddresses, getCosmosSigner: (chainId) => {}, getEvmSigner: (chainId) => {}, getSvmSigner: () => {}, // Executes after all of the operations triggered by a user's signature complete. onTransactionCompleted: async (chainID, txHash, status) => { console.log( `Route completed with tx hash: ${txHash} & status: ${status.state}` ); }, onTransactionBroadcast: async ({ txHash, chainID }) => { console.log(`Transaction broadcasted with tx hash: ${txHash}`); }, onTransactionTracked: async ({ txHash, chainID }) => { console.log(`Transaction tracked with tx hash: ${txHash}`); }, }); ``` Once the transaction is complete, you'll have new TIA in your Celestia address! 
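If you poll `/v2/tx/status` yourself rather than relying on the `executeRoute` callbacks, a minimal classifier over the lifecycle states shown in the tracking examples in these docs (`STATE_SUBMITTED`, `STATE_PENDING`, `STATE_RECEIVED`, `STATE_COMPLETED_SUCCESS`, `STATE_COMPLETED_ERROR`) might look like this sketch; the helper names are illustrative, not part of `@skip-go/client`:

```typescript
// Sketch: classify the top-level `state` of a /v2/tx/status transfer entry.
// State names are taken from the /status response examples in these docs.
type LifecycleState =
  | "STATE_SUBMITTED"
  | "STATE_PENDING"
  | "STATE_RECEIVED"
  | "STATE_COMPLETED_SUCCESS"
  | "STATE_COMPLETED_ERROR";

interface TransferStatus {
  state: LifecycleState;
  transfer_asset_release: {
    chain_id: string;
    denom: string;
    released: boolean;
  } | null;
}

// Terminal states: polling can stop once every transfer reaches one of these.
function isTerminal(state: LifecycleState): boolean {
  return (
    state === "STATE_COMPLETED_SUCCESS" || state === "STATE_COMPLETED_ERROR"
  );
}

// Whether funds are already accessible, per `transfer_asset_release.released`
// (this can be true before all acknowledgements are indexed).
function assetsReleased(t: TransferStatus): boolean {
  return t.transfer_asset_release?.released === true;
}
```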
# Smart Relay

Source: https://docs.cosmos.network/skip-go/general/smart-relay

This page covers Smart Relay -- Skip Go API's universal cross-chain data & token delivery service

This document introduces Skip Go API’s Smart Relay functionality — the fastest & most reliable way to get your users where they’re going over any bridge supported by Skip Go. Smart Relay is an alternative to public bridge relayers. It's designed to enable users to access more swap and transfer routes in fewer transactions with greater speed and reliability.

**We strongly advise all integrators who want to offer top-notch user experiences to activate Smart Relay.**

**If you do not use Smart Relay, your performance will suffer:**

* Many routes will not be available (e.g. bridging Solana to Base, or Solana to any Cosmos chain)
* Many routes will require more transactions (e.g. bridging USDC from Ethereum to Osmosis will require 2 transactions, instead of 1)
* Transfers will get stuck more frequently

You turn it on simply by setting `smart_relay=true` in `/msgs_direct` or `/route`.

This document covers:

* What Smart Relay is
* The bridges Smart Relay supports today (& what's next)
* How to use Smart Relay
* The factors that affect the price of Smart Relay

**What is relaying?**

In general, relaying refers to the act of executing a cross-chain message/action by constructing the cross-chain proofs and submitting the burn/mint/wrap/ack proof transactions to the destination & source chains that are required to deliver the message. All bridges and general message passing protocols (IBC, Axelar, Wormhole, CCTP, etc…) have some notion of relaying, but sometimes it goes by different names.

## Background

Smart Relay is an **intent-aware**, **user-centric**, **universal** relayer with better performance guarantees than public relayers for all bridges and routes we support. We offer it at a small cost to the end user and no cost to the developer/integrator.
In short, Smart Relay helps users get to more destinations, in fewer transactions, and at faster speeds than via public relayers — no matter how many bridges or chains stand between them and their destination. Smart Relay is a huge improvement over existing relaying systems, made possible by intelligent integration with Skip Go's routing capabilities:

* **Intent-aware**: Traditional relayers are unaware of what the user is trying to accomplish beyond the first hop of a transfer, but transfers are usually part of a broader sequence of swaps, transfers, and actions. Smart Relay has the context of the user's end-to-end cross-chain intent, and it can use this information to optimize its execution plan.
  * For example, Smart Relay can reduce the number of transactions a user must sign in a route by automatically triggering the second bridge in a route when delivering the packet for the first bridge (e.g. triggering IBC from CCTP).
  * It can also use this information to prepare the user's destination address to receive the transfer (e.g. dust it with gas even if there's no way to perform an atomic swap, initialize a token account on Solana, etc.).
* **User-centric**: Traditional relayers are focused on specific "channels" or "bridges," simply transferring all packets for a single bridge route from one chain to another. Once that task is complete, they consider their job done. In contrast, Smart Relay prioritizes the user, not the bridge. Instead of clearing packets on one bridge at a time, it transfers all packets associated with a specific user across multiple bridges or hops, ensuring the user reaches their destination efficiently.
  * It also offers a deeply simplified payment experience designed for multi-hop: users pay once on the source chain, in the token they're transferring, and receive high-quality relaying at every leg of the route thereafter.
* **Universal**: Traditional relayers only support 1 or 2 chains or ecosystems at a time for a single bridge, making cross-ecosystem transfers fraught. Many routes have no relayers, or only spotty coverage. Smart Relay was designed to support all ecosystems and all bridges from the start. It already supports the EVM, Solana/SVM, Cosmos, and modular ecosystems -- with more chains, bridges, and routes routinely added.

The cost of Smart Relay is determined dynamically based on the factors covered below. The gas prices and bridge fees involved in a route are the principal determinants of that cost.

## State of Smart Relay

Today, Smart Relay supports:

* CCTP

We are currently building out support for:

* IBC
* Hyperlane
* Axelar

For the bridges that Smart Relay does not support today, Skip Go uses public or enshrined relayers — at whatever cost they're typically made available to users. (These are free for IBC and carry some fee for the others.) All costs users will incur are always made transparent in the relevant endpoint responses.

## How to Use Smart Relay

### How to activate Smart Relay

1. On `/route` or `/msgs_direct`, pass `smart_relay=true`
2. If using `/msgs`, ensure that you are passing the `smart_relay_fee_quote` object provided in the `cctp_transfer` operation from the `/route` response into your `/msgs` request.
   * If you're using the @skip-go/client library, version 0.8.0 and above supports this automatically. If you're integrating the API directly and decoding the `/route` response operations before passing them back into `/msgs`, simply ensure you're decoding this new field properly and passing it back through!
3. Sign the transaction as usual, and submit the signed transaction to the Skip Go API via `/track` or `/submit` as normal
   * **NOTE: We HIGHLY recommend using `/submit` to submit Smart Relay transactions on chain to avoid issues incurred by submitting a transaction on chain but not sending it to the `/track` endpoint.**

That's it!
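As a sketch of step 1, a `/route` request body with Smart Relay enabled can be assembled like this. The snake_case field names follow the shapes shown in these docs, but the `RouteRequest` type, the helper, and the asset/chain values are illustrative assumptions, not the API's full request schema:

```typescript
// Hypothetical, trimmed-down shape of a /route request body.
type RouteRequest = {
  amount_in: string;
  source_asset_denom: string;
  source_asset_chain_id: string;
  dest_asset_denom: string;
  dest_asset_chain_id: string;
  smart_relay: boolean;
};

// Helper that forces smart_relay on, per step 1 above (illustrative).
function buildRouteRequest(
  partial: Omit<RouteRequest, "smart_relay">
): RouteRequest {
  return { ...partial, smart_relay: true };
}

const body = buildRouteRequest({
  amount_in: "1000000",
  source_asset_denom: "uusdc",
  source_asset_chain_id: "noble-1",
  dest_asset_denom: "uosmo",
  dest_asset_chain_id: "osmosis-1",
});

console.log(body.smart_relay); // true
```

The same body would then be POSTed to `/route` (or `/msgs_direct`) as usual.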
Smart Relay will take care of executing your cross-chain actions across all bridges it currently supports.

### How to determine what Smart Relay will cost the user

In the response to `/route`, `/msgs_direct`, and `/msgs`, the cost of Smart Relay will appear in the `estimated_fees` array with `fee_type` set to `SMART_RELAY`. See [Getting Fee Info](./fee-info) for more info about `estimated_fees`.

For multi-tx routes, the user may pay up to 1 Smart Relay fee per transaction. The fee for each transaction pays for all Smart Relay operations in that particular transaction. This prevents Smart Relay from accepting payment prematurely for operations in the later transactions, since those transactions may never get signed or executed.

You can use the `tx_index` attribute on the `estimated_fees` entries to identify which Smart Relay fee corresponds to which transaction in the route (e.g. `tx_index=0` indicates the fee for the first transaction in the route).

### What Determines the Cost of Smart Relaying

Smart Relay incurs a user cost because it involves actually submitting transactions to various chains and incurring transaction fees as a result.

* **Operations**: The cost of relaying a route depends on the operations in the route, since these affect the amount of gas Smart Relay consumes (e.g. routes that include swaps will require higher gas amounts & involve more expensive relaying)
* **The cost of gas:** Most networks have dynamic fee markets, where the price of gas increases during periods of high network load. Smart Relay takes this into account when generating a quote
* **Token Exchange Rates:** The token the user pays their fee in and the token Smart Relay pays gas fees in may differ, so exchange rates affect the price the end user experiences. (e.g.
If the user pays in OSMO for a route that terminates on Ethereum mainnet, Smart Relay will need to pay fees in ETH, so the amount of OSMO the user pays will depend on the OSMO/ETH spot price.)

### How to properly use the Smart Relay Fee Quote

Skip Go dynamically calculates the Smart Relay fee to be paid based on the underlying costs in real time. Although you should use the information in the `estimated_fees` array for display purposes, `cctp_transfer` includes a `smart_relay_fee_quote` that provides the information needed to use the dynamic relaying fee system properly. Specifically, the `smart_relay_fee_quote` object contains the smart relay fee to be paid and when the quote expires (after which the quoted amount may no longer be valid and the transaction may not be successfully relayed).

If you're using the `/msgs` endpoint, ensure that you are passing the `smart_relay_fee_quote` object provided in the `cctp_transfer` operation from the `/route` response into your `/msgs` request. This is necessary to ensure the transaction generated by the API matches the fee quoted in the `/route` request. If the quote is not passed back into the `/msgs` request, a new quote will be generated in the `/msgs` call that may differ from what was quoted previously in the `/route` request.

Version `0.8.0` and above of the `@skip-go/client` library supports this automatically. If you're integrating the API directly and decoding the `/route` response operations before passing them back into `/msgs`, simply ensure you're decoding this new field properly and passing it back through!
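To make the fee and quote handling concrete, here is a hedged sketch of reading the Smart Relay fee for a given transaction out of `estimated_fees` and checking whether a fee quote is still fresh before signing. The types are hand-written from the snake_case shapes shown in these docs (not imported from the client library), and the sample `fees` array is fabricated example data:

```typescript
// Hand-written, trimmed shapes mirroring the docs' snake_case fields.
type EstimatedFee = {
  fee_type: string; // e.g. "SMART_RELAY"
  amount: string;
  tx_index: number; // which transaction in the route this fee belongs to
};

type FeeQuote = {
  fee_amount: string;
  fee_denom: string;
  expiration: string; // RFC 3339 timestamp, e.g. "2024-08-30T05:28:05Z"
};

// At most one Smart Relay fee should apply per transaction in a multi-tx route.
function smartRelayFeeForTx(
  fees: EstimatedFee[],
  txIndex: number
): EstimatedFee | undefined {
  return fees.find((f) => f.fee_type === "SMART_RELAY" && f.tx_index === txIndex);
}

// If the quote has expired, re-request /route to get a fresh one before signing.
function quoteIsFresh(quote: FeeQuote, now: Date = new Date()): boolean {
  return now < new Date(quote.expiration);
}

// Fabricated example data for a 2-transaction route:
const fees: EstimatedFee[] = [
  { fee_type: "SMART_RELAY", amount: "100000", tx_index: 0 },
  { fee_type: "SMART_RELAY", amount: "50000", tx_index: 1 },
];

console.log(smartRelayFeeForTx(fees, 0)?.amount); // "100000"
```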
See the `SmartRelayFeeQuote` below:

```ts TypeScript theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
export type SmartRelayFeeQuote = {
  feeAmount: string;
  feeDenom: string;
  feePaymentAddress: string;
  relayerAddress: string;
  expiration: Date;
}
```

```json JSON theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  "smart_relay_fee_quote": {
    "fee_amount": "100000", // string
    "fee_denom": "uusdc", // string
    "fee_payment_address": "0x123", // string
    "relayer_address": "0x123", // string
    "expiration": "2024-08-30T05:28:05Z" // string
  }
}
```

# Supported Ecosystems

Source: https://docs.cosmos.network/skip-go/general/supported-ecosystems-and-bridges

**Activate Smart Relay (`smart_relay=true`) for global coverage of all chains, bridges, and routes**

There are many routes that will return a `RouteNotFound` error if `smart_relay=false`, because no public relayers support them. This includes all routes to and from Solana, as well as CCTP routes to and from Polygon, Base, and several other L2s. As a result, if you want your users to be able to transfer their tokens over all routes, we recommend using [Smart Relay](/skip-go/general/smart-relay).

Skip Go API is a universal interoperability platform. It can be used to swap and transfer tokens within and between all major ecosystems in crypto over many major bridges. We're constantly adding new bridges, new chains, new routes on already-supported bridges, and new multi-hop routes by composing multiple bridges together.

This document gives a high-level overview of the different ecosystems and bridges Skip Go API supports, including how to onboard assets into each ecosystem, when different bridges are used, and what DEXes are supported.
For each major ecosystem, this document identifies:

* The bridges over which Skip Go API can route users into and out of the ecosystem
* The bridges over which Skip Go API can route users within the ecosystem
* The DEXes where Skip Go API can swap within the ecosystem

The very short summary is that our Cosmos ecosystem support is most mature, followed by Ethereum, followed by Solana.

## Cosmos Support Details

**Onboard and offboard via**:

* Axelar (Move major tokens to Ethereum, Polygon, Avalanche, and ETH L2s)
* CCTP (Move USDC to all major Ethereum L2s, Polygon, Avalanche, and Solana)
  * Maximum transfer limit: 1,000,000 USDC per transaction
* Go Fast (Move assets at faster-than-finality speeds from EVM to Cosmos)
  * Currently, Go Fast supports the following source chains: Ethereum Mainnet, Arbitrum, Avalanche, Base, Optimism, Polygon.
* Eureka (IBC v2, enabling seamless interoperability between the Cosmos and Ethereum ecosystems)

**Move within the ecosystem via:**

* IBC

**Swap within the ecosystem on:**

* Osmosis
* Astroport (on 3+ chains)
* InitiaDEX
* Astrovault (on Archway)
* Dojoswap (on Injective)
* Helix (on Injective)

## Ethereum Support Details

**Never attempt to derive Cosmos addresses from Ethereum addresses (or vice versa)**

Whenever you need an address on a chain for a user, please request it from the corresponding wallet. If you attempt to derive an address on one chain from an address on another chain, you may derive an address that the user cannot actually sign for if the two chains have different address-derivation processes. For example, if you derive a Cosmos address from an Ethereum address, you will get an address that the user cannot sign for or use. So don't ever try to derive one address from another -- even for intermediate addresses. (Intermediate addresses may be used for fund recovery if there is some kind of failure on an intermediate chain.
Failures could include a timeout, a swap exceeding slippage tolerance, or any other unexpected execution path. We assume users can sign for intermediate addresses, even if they shouldn't have to.)

**Onboard and offboard via:**

* CCTP (Move USDC to/from Cosmos and Solana)
  * Maximum transfer limit: 1,000,000 USDC per transaction
* Axelar (Move ETH, MATIC, ARB, and other major tokens to/from Cosmos)
* Go Fast (Move assets at faster-than-finality speeds)
* Eureka (Seamless interoperability between the Cosmos and Ethereum ecosystems)

**Move within ecosystem via:**

* Axelar (Move major assets among all major ETH L2s, Polygon, and Avalanche)

**Skip Go API supports swapping on DEXes within the Ethereum ecosystem**

* Uniswap v2/v3 (on Ethereum, Optimism, Binance, Blast, Base, Avalanche, Polygon, Celo)
* Velodrome (Optimism)
* Aerodrome (Base)

## Solana Support Details

**Onboard and offboard via:**

* CCTP (Move USDC to/from Cosmos, Ethereum/L2s, and Solana)
  * Maximum transfer limit: 1,000,000 USDC per transfer

**Move within ecosystem via:**

* N/A -- Solana is a single chain

**Skip Go API does not support swapping on any DEXes within the Solana ecosystem at this time**

# Transaction Support

Source: https://docs.cosmos.network/skip-go/general/transaction-support

## **Overview**

* This guide outlines how to troubleshoot and escalate issues related to Skip Go transactions.
* Use the Skip Go Explorer when troubleshooting.
* Transaction types supported: CCTP, Eureka, LayerZero, Stargate, OPInit, Axelar, IBC, Go Fast.
* Tools: [Explorer](https://explorer.skip.build/), Skip API

***

## **Step 1: Check Status in our Explorer**

* Use our [Explorer](https://explorer.skip.build/)
* If you click on a transaction in your history on [go.skip.build](https://go.skip.build/), the Explorer will open with the transaction details preloaded.

*Figure: Explorer opened from a history link*
* If you do not have the transaction in your history, please locate the source transaction hash and enter it into our [Explorer](https://explorer.skip.build/)
* Required inputs:
  * Initial transaction hash
  * Chain name

*Figure: Explorer input screen*
***

## **Step 2: Transaction status**

The Explorer reports a transaction state such as `state_completed_success`, `state_pending`, or `state_abandoned`. If the transaction or the swap failed, the Explorer will tell you where your funds have been released and in what asset.

*Figure: Explorer successful transaction*
* If there is no state given, the explorer will ask to reindex/track your transaction:

*Figure: Explorer asking to index*
* After indexing, the status of your transaction will be updated. Example of successful, failed and pending states:

*Figure: Explorer after indexing result*
* The explorer will show where your funds have been released on chain:

*Figure: Funds released info*
* If your IBC transaction is pending, it may take up to 24 hours to clear.

* If you cannot find where your funds were released, refunded, or received, please go to step 3.

***

## **Expanded Transfer Data & Statuses**

Skip Go supports multiple transfer mechanisms. Use the following references to understand and debug each type of transaction.

***

### **IBC Transfers**

Learn more: [IBC Transfer Details](/skip-go/general/multi-chain-realtime-transaction-and-packet-tracking#ibc-transfer-data)

* **States:**
  * `TRANSFER_PENDING`: Sent, waiting to be received. May take up to 24 hours to be received on chain.
  * `TRANSFER_RECEIVED`: Received, may still revert if multi-hop.
  * `TRANSFER_SUCCESS`: Completed successfully.
  * `TRANSFER_FAILURE`: Transfer failed.
  * `TRANSFER_PENDING_ERROR`: Error occurred, refund pending.
* **Transactions:**
  * `send_tx`, `receive_tx`, `timeout_tx`, `acknowledge_tx`

***

### **Axelar Transfers**

Learn more: [Axelar Transfer Details](/skip-go/general/multi-chain-realtime-transaction-and-packet-tracking#axelar-transfer-data)

* **States:**
  * `AXELAR_TRANSFER_PENDING_CONFIRMATION`: Waiting for Axelar confirmation.
  * `AXELAR_TRANSFER_PENDING_RECEIPT`: Confirmed, awaiting receipt.
  * `AXELAR_TRANSFER_SUCCESS`: Assets received.
  * `AXELAR_TRANSFER_FAILURE`: Transfer failed.
* **Transactions:**
  * `send_tx`, `confirm_tx`, `execute_tx`
  * May also include: `gas_paid_tx`, `approve_tx`

***

### **CCTP Transfers**

Learn more: [CCTP Transfer Details](/skip-go/general/multi-chain-realtime-transaction-and-packet-tracking#cctp-transfer-data)

* **States:**
  * `CCTP_TRANSFER_SENT`: Burn submitted.
  * `CCTP_TRANSFER_PENDING_CONFIRMATION`: Pending confirmation by the CCTP attestation API.
  * `CCTP_TRANSFER_CONFIRMED`: Confirmed by Circle.
  * `CCTP_TRANSFER_RECEIVED`: Received on destination chain.
* **Transactions:**
  * `send_tx`, `receive_tx`

***

### **Hyperlane Transfers**

Learn more: [Hyperlane Transfer Details](/skip-go/general/multi-chain-realtime-transaction-and-packet-tracking#hyperlane-transfer-data)

* **States:**
  * `HYPERLANE_TRANSFER_SENT`: The transfer transaction on the source chain has executed.
  * `HYPERLANE_TRANSFER_RECEIVED`: The transfer has been received on the destination chain.
  * `HYPERLANE_TRANSFER_FAILED`: The transfer failed to complete.
  * `PACKET_ERROR_TIMEOUT`: The packet timed out.
  * `PACKET_ERROR_ACKNOWLEDGEMENT`: Error in packet acknowledgement.
* **Transactions:**
  * `send_tx`, `receive_tx`

***

### **OPInit Transfers**

Learn more: [OPInit Transfer Details](/skip-go/general/multi-chain-realtime-transaction-and-packet-tracking#opinit-transfer-data)

* **States:**
  * `OPINIT_TRANSFER_SENT`: The deposit transaction on the source chain has executed.
  * `OPINIT_TRANSFER_RECEIVED`: Received at the destination chain.
  * `OPINIT_TRANSFER_FAILURE`: The transfer has failed.
  * `PACKET_ERROR_TIMEOUT`: The packet timed out.
  * `PACKET_ERROR_ACKNOWLEDGEMENT`: Error in packet acknowledgement.
* **Transactions:**
  * `send_tx`, `receive_tx`

***

### **Go Fast Transfers**

Learn more: [Go Fast Transfer Details](/skip-go/general/multi-chain-realtime-transaction-and-packet-tracking#go-fast-transfer-data)

* **States:**
  * `GO_FAST_TRANSFER_SENT`: Intent submitted.
  * `GO_FAST_TRANSFER_FILLED`: Success.
* `GO_FAST_TRANSFER_TIMEOUT`, `REFUNDED`, `POST_ACTION_FAILED`
* **Transactions:**
  * `order_submitted_tx`, `order_filled_tx`, `order_timeout_tx`, `order_refunded_tx`

***

### **Stargate Transfers**

Learn more: [Stargate Transfer Details](/skip-go/general/multi-chain-realtime-transaction-and-packet-tracking#stargate-transfer-data)

* **States:**
  * `STARGATE_TRANSFER_SENT`, `RECEIVED`, `FAILED`, `UNKNOWN`
* **Transactions:**
  * `send_tx`, `receive_tx`, `error_tx`

***

### **Eureka Transfers**

Learn more: [Eureka Transfer Details](/skip-go/general/multi-chain-realtime-transaction-and-packet-tracking#ibc-transfer-data)

* **States:**
  * `TRANSFER_SUCCESS`: The Eureka transfer has completed successfully.
  * `TRANSFER_RECEIVED`: The Eureka transfer has been received by the destination chain.
  * `TRANSFER_PENDING`: The Eureka transfer is in progress.
  * `TRANSFER_FAILURE`: The Eureka transfer has failed.
  * `PACKET_ERROR_ACKNOWLEDGEMENT`: Error in packet acknowledgement.
  * `PACKET_ERROR_TIMEOUT`: The packet timed out before reaching the destination.
* **Transactions:** Typically includes `send_tx`, `receive_tx`

***

### **LayerZero Transfers**

* **States:**
  * `LAYER_ZERO_TRANSFER_SENT`: Transfer initiated on the source chain.
  * `LAYER_ZERO_TRANSFER_RECEIVED`: Assets received on the destination chain.
  * `LAYER_ZERO_TRANSFER_FAILED`: Transfer failed.
  * `LAYER_ZERO_TRANSFER_UNKNOWN`: Transfer state could not be determined.
* **Transactions:** Usually `send_tx`, `receive_tx`, and `ack_tx`

***

## **Step 3: Escalate if needed**

* If you are unable to locate your funds, please open a ticket in the [Interchain Discord](https://discord.com/invite/interchain)

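When scripting against these statuses, it can help to collapse the per-bridge state strings listed above into a single outcome for display. Below is a minimal sketch: the groupings mirror the state descriptions in this guide, the helper itself is illustrative (not a Skip Go API), and any unrecognized state is conservatively treated as still pending:

```typescript
// States this guide describes as terminal success.
const SUCCESS_STATES = new Set([
  "TRANSFER_SUCCESS",
  "AXELAR_TRANSFER_SUCCESS",
  "CCTP_TRANSFER_RECEIVED",
  "HYPERLANE_TRANSFER_RECEIVED",
  "OPINIT_TRANSFER_RECEIVED",
  "GO_FAST_TRANSFER_FILLED",
  "LAYER_ZERO_TRANSFER_RECEIVED",
]);

// States this guide describes as terminal failure.
const FAILURE_STATES = new Set([
  "TRANSFER_FAILURE",
  "AXELAR_TRANSFER_FAILURE",
  "HYPERLANE_TRANSFER_FAILED",
  "OPINIT_TRANSFER_FAILURE",
  "LAYER_ZERO_TRANSFER_FAILED",
  "PACKET_ERROR_TIMEOUT",
  "PACKET_ERROR_ACKNOWLEDGEMENT",
]);

type Outcome = "success" | "failure" | "pending";

// Unknown or in-flight states fall through to "pending": keep polling.
function classifyTransferState(state: string): Outcome {
  if (SUCCESS_STATES.has(state)) return "success";
  if (FAILURE_STATES.has(state)) return "failure";
  return "pending";
}
```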
***

# Configuration

Source: https://docs.cosmos.network/skip-go/widget/configuration

This page details your widget configuration options. Tweak it to fit your exact user experience needs!

**Visual Configuration Tool:** Want to see how your configuration options look before implementing them? Try [studio.skip.build](https://studio.skip.build) - our interactive widget builder that lets you customize themes, assets, routes, and more with a live preview.

# Component Props

The `Widget` component accepts the following props.

### `defaultRoute`

Customizes the initial route displayed on widget load. Query supported assets using the Skip Go API [/assets](../api-reference/prod/fungible/get-v2fungibleassets) endpoint. Setting this triggers a route request on render.

```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
defaultRoute?: {
  amountIn?: number;
  amountOut?: number;
  srcChainId?: string;
  srcAssetDenom?: string;
  destChainId?: string;
  destAssetDenom?: string;
};
```

* `amountIn`: Preset input amount for an exact-amount-in request.
* `amountOut`: Preset output amount for an exact-amount-out request. If both are specified, only `amountIn` is used.
* `srcChainId`: Source chain ID.
* `srcAssetDenom`: Source asset denomination.
* `destChainId`: Destination chain ID.
* `destAssetDenom`: Destination asset denomination.

### `routeConfig`

Customizes enabled route types.

```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
routeConfig?: {
  experimentalFeatures?: ['hyperlane', 'cctp', 'stargate', 'eureka'];
  allowMultiTx?: boolean;
  allowUnsafe?: boolean;
  bridges?: ('IBC' | 'AXELAR' | 'CCTP' | 'HYPERLANE' | 'GO_FAST')[];
  swapVenues?: {
    name: string;
    chainId: string;
  }[];
  goFast?: boolean;
  smartSwapOptions?: SmartSwapOptions;
  timeoutSeconds?: string; // Number of seconds for the IBC transfer timeout, defaults to 5 minutes
};
```

* `allowMultiTx`: Allow multi-transaction routes. Default: true.
* `allowUnsafe`: Allow unsafe routes. Default: false. [More info](../advanced-swapping/allow_unsafe-preventing-handling-bad-execution).
* `bridges`: Restrict routing to specific bridges. Default: empty (all bridges).
* `swapVenues`: Restrict routing to specific swap venues. Default: empty (all venues).
* `goFast`: Enable Go Fast transfers. Default: false. [More info](../advanced-transfer/go-fast).
* `experimentalFeatures`: Array of experimental features to enable. Include `'eureka'` to enable routing with Eureka assets.
* `smartSwapOptions`: Advanced swapping features like EVM swaps and split trade routes. [More info](../advanced-swapping/smart-swap-options).

### `batchSignTxs`

Controls whether all transactions in a multi-transaction route should be signed upfront or individually as they are executed.

```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
batchSignTxs?: boolean;
```

* **Default:** `true`
* When enabled, all transactions in the route will be requested for signature at the beginning of the execution. They will then be broadcast one by one in sequence.
* When disabled, each transaction will be signed individually just before it is broadcast.

**Example scenario with `batchSignTxs: true`:**

For a route: Solana → Noble → Cosmos (3 transactions requiring signatures)

* All 3 transactions will be prompted for signature upfront
* After signing, they will be broadcast sequentially: Solana first, then Noble, then Cosmos

**EVM Transaction Limitation:** If an EVM transaction appears as the second or later transaction in a route, batch signing cannot be performed upfront. In such cases, the EVM transaction and any subsequent transactions will need to be signed individually when they are ready to be executed.
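The props above can be assembled in code before being spread onto the `Widget` component. The sketch below uses hand-written, trimmed-down prop types (the real types ship with `@skip-go/widget`); the chain IDs and denoms are illustrative examples:

```typescript
// Trimmed, hand-written shapes of a few Widget props (illustrative only).
type WidgetProps = {
  defaultRoute?: {
    amountIn?: number;
    srcChainId?: string;
    srcAssetDenom?: string;
    destChainId?: string;
    destAssetDenom?: string;
  };
  routeConfig?: {
    allowMultiTx?: boolean;
    bridges?: ("IBC" | "AXELAR" | "CCTP" | "HYPERLANE" | "GO_FAST")[];
    experimentalFeatures?: string[];
  };
  batchSignTxs?: boolean;
};

const props: WidgetProps = {
  defaultRoute: {
    amountIn: 1, // exact-amount-in request
    srcChainId: "cosmoshub-4",
    srcAssetDenom: "uatom",
    destChainId: "osmosis-1",
    destAssetDenom: "uosmo",
  },
  routeConfig: {
    bridges: ["IBC", "CCTP"], // restrict routing to these bridges
    experimentalFeatures: ["eureka"], // opt in to Eureka routing
  },
  batchSignTxs: false, // sign each transaction right before it broadcasts
};
```

In a React app this object would be passed as `<Widget {...props} />`.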
### `filter`

Key-value pairs of chain IDs (and, optionally, specific asset denoms) allowed as source and destination assets.

```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
filter?: {
  source?: Record<string, string[] | undefined>;
  destination?: Record<string, string[] | undefined>;
};
```

Example:

```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  source: {
    // only this chain ID is allowed as a source; any asset on it
    'noble-1': undefined,
  },
  destination: {
    // these assets on this chain ID are allowed
    'cosmoshub-4': ['uatom', 'ibc/2181AAB0218EAC24BC9F86BD1364FBBFA3E6E3FCC25E88E3E68C15DC6E752D86'],
    // these assets on this chain ID are allowed
    'agoric-3': ['ibc/FE98AAD68F02F03565E9FA39A5E627946699B2B07115889ED812D8BA639576A9'],
    // any asset on this chain ID is allowed
    'osmosis-1': undefined,
  }
}
```

### `filterOut`

Opposite of `filter`. Key-value pairs of chain IDs (and, optionally, specific asset denoms) that are not allowed.

```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
filterOut?: {
  source?: Record<string, string[] | undefined>;
  destination?: Record<string, string[] | undefined>;
};
```

Example:

```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
{
  source: {
    // all assets on this chain ID are excluded as a source
    'noble-1': undefined,
  },
  destination: {
    // these assets on this chain ID are not allowed
    'cosmoshub-4': ['uatom', 'ibc/2181AAB0218EAC24BC9F86BD1364FBBFA3E6E3FCC25E88E3E68C15DC6E752D86'],
  }
}
```

### `settings`

Sets defaults for user-customizable settings.

```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
settings?: {
  customGasAmount?: number;
  slippage?: number;
  useUnlimitedApproval?: boolean; // Set allowance amount to max if EVM transaction requires allowance approval
};
```

* `customGasAmount`: Gas amount for CosmosSDK chain transactions. Default: `300_000`.
* `slippage`: Default slippage percentage (0-100) for CosmosSDK chain swaps. Default: `1`.
### `onlyTestnet`

`onlyTestnet`: Boolean to show only testnet data. Default: false (mainnet data only).

### `endpointOptions`

Override default Skip proxied endpoints. Whitelisting required, reach out [here](https://discord.com/invite/interchain).

```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
endpointOptions?: {
  endpoints?: Record<string, { rpc?: string; rest?: string }>;
  getRpcEndpointForChain?: (chainID: string) => Promise<string>;
  getRestEndpointForChain?: (chainID: string) => Promise<string>;
};
```

### `apiUrl`

String to override default Skip Go API proxied endpoints. Whitelisting required, reach out [here](https://discord.com/invite/interchain).

### `brandColor`

Customizes the main highlight color of the widget.

### `borderRadius`

Controls the corner roundness of cards and buttons in the widget.

### `theme`

Advanced widget appearance customization options.

```tsx theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
theme?: {
  brandColor: string;
  borderRadius: {
    main?: string | number;
    selectionButton?: string | number;
    ghostButton?: string | number;
    modalContainer?: string | number;
    rowItem?: string | number;
  };
  primary: {
    background: { normal: string; };
    text: { normal: string; lowContrast: string; ultraLowContrast: string; };
    ghostButtonHover: string;
  };
  secondary: {
    background: { normal: string; transparent: string; hover: string; };
  };
  success: { background: string; text: string; };
  warning: { background: string; text: string; };
  error: { background: string; text: string; };
};
```

### `chainIdsToAffiliates`

Define fees per chain and recipient addresses. The total `basisPointsFee` must be consistent across chains. Addresses must be valid for their respective chains.
```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
chainIdsToAffiliates: {
  'noble-1': {
    affiliates: [{
      basisPointsFee: '100', // 1% fee
      address: 'noble..1', // address to receive fee
    }, {
      basisPointsFee: '100', // 1% fee
      address: 'noble...2', // address to receive fee
    }]
  },
  'osmosis-1': {
    affiliates: [{
      basisPointsFee: '200', // 2% fee
      address: 'osmo...1', // address to receive fee
    }]
  }
}
```

### `enableSentrySessionReplays`

Enables Sentry session replays on the widget to help with troubleshooting errors. Default: false.

### `enableAmplitudeAnalytics`

Enables Amplitude analytics for the widget to improve user experience. Default: false.

### `disableShadowDom`

Disables the shadow DOM. Useful if there are issues with libraries that don't support the shadow DOM, or for enabling server-side rendering. Default: false. (The shadow DOM is enabled by default to avoid styling conflicts/issues.)

### `hideAssetsUnlessWalletTypeConnected`

Filters assets based on connected wallet types (currently only supports Sei Cosmos/EVM). Added in v3.7.3. Default: false.

### `callbacks`

Event handling functions.
```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
onWalletConnected?: (params: {
  walletName?: string;
  chainIdToAddressMap: Record<string, string>;
  address?: string;
}) => void;
onWalletDisconnected?: (params: {
  walletName?: string;
  chainType?: string;
}) => void;
onTransactionSignRequested?: (props: onTransactionSignRequestedProps) => void;
onTransactionBroadcasted?: (params: {
  chainId: string;
  signerAddress?: string;
  txIndex: number;
}) => void;
onTransactionComplete?: (params: {
  txHash: string;
  chainId: string;
  explorerLink?: string;
  sourceAddress: string;
  destinationAddress: string;
  sourceAssetDenom: string;
  sourceAssetChainID: string;
  destAssetDenom: string;
  destAssetChainID: string;
}) => void;
onTransactionFailed?: (params: { error: Error }) => void;
onRouteUpdated?: (props: {
  srcChainId?: string;
  srcAssetDenom?: string;
  destChainId?: string;
  destAssetDenom?: string;
  amountIn?: string;
  amountOut?: string;
  requiredChainAddresses?: string[];
}) => void;
onSourceAndDestinationSwapped?: (props: {
  srcChainId?: string;
  srcAssetDenom?: string;
  destChainId?: string;
  destAssetDenom?: string;
  amountIn?: string;
  amountOut?: string;
}) => void;
onSourceAssetUpdated?: (props: {
  chainId?: string;
  denom?: string;
}) => void;
onDestinationAssetUpdated?: (props: {
  chainId?: string;
  denom?: string;
}) => void;
```

* `onWalletConnected`: Called when a wallet is connected.
* `onWalletDisconnected`: Called when a wallet is disconnected.
* `onTransactionBroadcasted`: Called when a transaction is broadcast. This is called multiple times for multi-transaction routes.
* `onTransactionComplete`: Triggered when a transaction is completed.
* `onTransactionFailed`: Triggered when a transaction fails.
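A sketch of wiring a few of these callbacks follows. The handlers simply record events; the parameter shapes follow the callback types above, trimmed to the fields used, and the simulated call sequence at the end is only illustrative of the order the widget would invoke them in for a single-transaction route:

```typescript
// Collected events, for demonstration; real handlers would update app state.
const events: string[] = [];

const callbacks = {
  onWalletConnected: (params: { walletName?: string; address?: string }) => {
    events.push(`connected:${params.walletName ?? "unknown"}`);
  },
  onTransactionBroadcasted: (params: { chainId: string; txIndex: number }) => {
    // Fires once per transaction in a multi-transaction route.
    events.push(`broadcast:${params.chainId}:${params.txIndex}`);
  },
  onTransactionComplete: (params: { txHash: string; chainId: string }) => {
    events.push(`complete:${params.chainId}`);
  },
  onTransactionFailed: (params: { error: Error }) => {
    events.push(`failed:${params.error.message}`);
  },
};

// Simulate the sequence the widget would emit for a single-tx route:
callbacks.onWalletConnected({ walletName: "keplr" });
callbacks.onTransactionBroadcasted({ chainId: "osmosis-1", txIndex: 0 });
callbacks.onTransactionComplete({ txHash: "ABC123", chainId: "osmosis-1" });
```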
### `connectedAddresses` & `signers`

If your application has already connected to a user's wallet (e.g., via MetaMask for EVM networks, Phantom for Solana, or Keplr for Cosmos), you **must provide both** the `connectedAddresses` and the corresponding signer functions in order to enable the widget's injected wallet functionality. See an implementation example [here](https://github.com/skip-mev/skip-go/tree/staging/examples/nextjs/src/app/injected/page.tsx).

`WalletClient` comes from the [`viem` package](https://viem.sh/docs/clients/wallet). `Adapter` comes from the [`@solana/wallet-adapter-base` package](https://solana.com/developers/cookbook/wallets/connect-wallet-react). And `OfflineSigner` comes from the [`@cosmjs` package](https://github.com/cosmos/cosmjs).

* **Type:** `Record<string, string>` (a map from chain ID to connected address)

**Example:**

```typescript theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
const connectedAddresses: Record<string, string> = {
  "1": "0x123...abc", // Ethereum mainnet address
  "cosmoshub-4": "cosmos1...", // Cosmos Hub address
  "solana": "3n9...xyz", // Solana address
  // ... add more chain IDs and addresses as needed
};
```

### Signer Functions

Each signer function below must be implemented to fully leverage the injected wallet capabilities:

* **`getCosmosSigner(): Promise<OfflineSigner>`** Returns a Cosmos-compatible signer.
* **`getEvmSigner(): Promise<WalletClient>`** Returns an EVM-compatible signer (e.g., from `viem`).
* **`getSvmSigner(): Promise<Adapter>`** Returns a Solana-compatible signer, such as a `PhantomWalletAdapter`.

# FAQ

Source: https://docs.cosmos.network/skip-go/widget/faq

## How do I fix the "Buffer is not defined" error?

If you see a 'Buffer is not defined' error when using @skip-go/widget, it's likely because the widget depends on Node.js modules that aren't available in the browser.
To fix this, you'll need to add polyfills for those modules. Here are some polyfill plugins for common environments:

* [Webpack](https://www.npmjs.com/package/node-polyfill-webpack-plugin)
* [Rollup](https://www.npmjs.com/package/rollup-plugin-polyfill-node)
* [Vite](https://www.npmjs.com/package/vite-plugin-node-polyfills)
* [ESBuild](https://www.npmjs.com/package/esbuild-plugins-node-modules-polyfill)

## Should I put the widget inside a container with a fixed size?

It is recommended to wrap the widget with a container element that has a fixed size. This helps to prevent layout shifting as the widget uses the shadow DOM (which needs to be rendered client-side). We recommend a height of `640px` and a width of `500px`.

## Can I use Safe Multisig wallets with the Skip Go widget?

Yes! Safe wallets (the popular multisig wallet at [https://app.safe.global](https://app.safe.global)) can be used with the Skip Go widget through wallet providers that support importing Safe wallets:

* **Rabby Wallet**: Supports importing and using Safe wallets
* **WalletConnect**: Supports connecting Safe wallets through the WalletConnect protocol

Both options allow you to use your Safe wallet for multi-chain swaps and transfers through the Skip Go widget.

**Important:** When using Safe wallets for cross-chain transactions, ensure that your Safe wallet is deployed on all relevant destination chains before initiating transfers. Safe wallets use deterministic addresses, but the wallet contract must be deployed on each chain where you plan to receive funds.

## Why am I getting a CORS error or blocked by CORS policy?

If you're seeing CORS errors when using the Skip Go widget, it means your domain needs to be whitelisted to access Skip Go's API endpoints. To resolve this issue, please get your domain whitelisted by submitting a support ticket in our [Discord server](https://discord.gg/interchain).
# Gas on Receive Source: https://docs.cosmos.network/skip-go/widget/gas-on-receive Automatically provide users with gas tokens on destination chains during cross-chain swaps Gas on Receive helps users get native gas tokens on destination chains during cross-chain swaps. This prevents users from getting "stuck" with assets they can't use due to lacking gas for future transactions. **Widget**: Auto-detects need, user toggles on/off (v3.14.0+) **Client Library**: Manual setup required (v1.5.0+) ## How It Works When Gas on Receive is enabled, the widget automatically: 1. **Detects insufficient gas balance** on the destination chain 2. **Splits the swap** into two parts: * **Main route**: Your primary swap transaction * **Fee route**: A smaller swap specifically for obtaining gas tokens 3. **Provides native tokens** for gas fees on the destination chain 4. **Displays the gas top-up** amount and status to users ## Supported Destination Chains ### Supported * **Cosmos chains** (e.g., Osmosis, Juno, Stargaze) * **EVM L2 chains** (e.g., Arbitrum, Polygon, Base) ### Not Supported * **Ethereum mainnet** (disabled due to high gas costs) * **Solana** (not currently supported) ## Default Gas Amounts The feature automatically provides gas tokens worth: * **Cosmos chains**: \$0.10 USD equivalent * **EVM L2 chains**: \$2.00 USD equivalent These amounts are designed to cover multiple transactions on the respective chain types. ## Automatic Activation Gas on Receive automatically activates when: * The destination chain is supported * The user's destination address has insufficient gas balance (\< 3x current gas price) * The destination asset is different from the chain's native gas token Users can manually toggle the feature on/off via the widget interface. **Cost Impact**: The gas route uses a small portion of your swap amount (e.g., $0.10-$2.00) which slightly reduces your main swap output. 
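The automatic-activation rules above (supported chain, balance below 3x the current gas price, destination asset differs from the native gas token) can be sketched as a small predicate. Names and shapes here are illustrative, not the actual `@skip-go/widget` internals:

```typescript
// Sketch of the auto-activation check described above (illustrative names).
type GasCheckInput = {
  nativeBalance: bigint;       // user's native-token balance on the destination chain
  gasPrice: bigint;            // current gas price on the destination chain
  destIsNativeGasToken: boolean;
  chainSupported: boolean;     // Cosmos chains and EVM L2s, per the lists above
};

function shouldOfferGasTopUp(input: GasCheckInput): boolean {
  if (!input.chainSupported) return false;
  // Skipped when the user is already receiving the chain's gas token.
  if (input.destIsNativeGasToken) return false;
  // "Insufficient" is defined above as less than 3x the current gas price.
  return input.nativeBalance < 3n * input.gasPrice;
}
```

Users can still override the result via the widget's toggle; this predicate only decides whether the toggle defaults to on.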
## User Interface

The feature appears in the widget as:

* **Toggle switch**: Allows users to enable/disable the feature
* **Gas amount display**: Shows how much gas will be received (e.g., "Enable gas top up - You'll get \$2.00 in ETH")
* **Transaction status**: During execution, shows "Receiving \$2.00 in ETH as gas top-up"
* **Completion status**: After success, displays "Received \$2.00 in ETH as gas top-up"

## Configuration

### Widget Configuration

Gas on Receive requires no configuration - the widget auto-detects when it's needed and shows a toggle switch:

```tsx
import { Widget } from "@skip-go/widget";

function MyApp() {
  // No gas-related props are needed: the widget detects low gas balances
  // on supported destination chains and shows the toggle automatically.
  return <Widget />;
}
```

### Client Library Usage

The client library requires manual setup using `executeMultipleRoutes` (v1.5.0+). Use this when building custom interfaces or when you need more control than the widget provides:

```typescript
import { executeMultipleRoutes, route } from "@skip-go/client";

// Create your main route and a smaller gas route
const mainRoute = await route({
  amountIn: "1000000", // 1 OSMO
  sourceAssetChainId: "osmosis-1",
  sourceAssetDenom: "uosmo",
  destAssetChainId: "42161", // Arbitrum
  destAssetDenom: "0x82aF49447D8a07e3bd95BD0d56f35241523fBab1" // WETH
});

const gasRoute = await route({
  amountIn: "50000", // ~$2 worth for gas
  sourceAssetChainId: "osmosis-1",
  sourceAssetDenom: "uosmo",
  destAssetChainId: "42161",
  destAssetDenom: "0x0000000000000000000000000000000000000000" // Native ETH
});

// Execute both routes together
await executeMultipleRoutes({
  route: { mainRoute, gasRoute },
  userAddresses: {
    mainRoute: [
      { chainId: "osmosis-1", address: "osmo1..." },
      { chainId: "42161", address: "0x..." }
    ],
    gasRoute: [
      { chainId: "osmosis-1", address: "osmo1..." },
      { chainId: "42161", address: "0x..." }
    ]
  },
  slippageTolerancePercent: {
    mainRoute: "1",
    gasRoute: "10",
  },
  // Required signing functions
  getCosmosSigningClient: async (chainId) => {
    // Return your cosmos signing client for the chain
    return yourCosmosWallet.getSigningClient(chainId);
  },
  getEVMSigningClient: async (chainId) => {
    // Return your EVM signing client for the chain
    return yourEvmWallet.getSigningClient(chainId);
  },
  onRouteStatusUpdated: (status) => console.log(status)
});
```

**Tip**: Most developers should use the widget for automatic gas management. Only use the client library approach if you need custom gas amounts or are building a custom interface.

### Manual Gas Route Setup

For more control, you can manually determine when to include gas routes based on user balances:

```typescript
import { balances } from "@skip-go/client";

// Check if user has sufficient gas balance
const userBalances = await balances({
  chains: {
    "42161": { address: "0x..." } // User's Arbitrum address
  }
});

const chainBalances = userBalances.chains?.["42161"]?.denoms;
const nativeTokenBalance = chainBalances?.["0x0000000000000000000000000000000000000000"];

// Simple check: does the user have any native token balance?
const hasEnoughGas = nativeTokenBalance?.amount && nativeTokenBalance.amount !== "0";

if (!hasEnoughGas) {
  // Include gas route in executeMultipleRoutes
  console.log("User needs gas - including gas route");
} else {
  // Execute only main route
  console.log("User has sufficient gas");
}
```

### Custom Gas Amounts (Advanced)

For advanced use cases, you can customize gas amounts by adjusting the `amountIn` when creating gas routes. The default equivalent amounts are:

* **Cosmos chains**: \$0.10 USD
* **EVM L2 chains**: \$2.00 USD

## Error Handling

**If gas route fails**: Your main swap continues normally; you just won't receive the gas tokens. No funds are lost.
**If main swap fails**: You receive the gas tokens you paid for, plus any remaining funds in your original source token. ## Troubleshooting **Feature not appearing?** * Ensure you're using Widget v3.14.0+ or Client Library v1.5.0+ * Check that the destination chain is supported (Cosmos chains, EVM L2s) * Feature auto-disables if user already has sufficient gas or destination asset is the native gas token **Transaction issues?** * Gas top-up failures don't affect your main swap - assets are safely returned * Main swap may succeed even if gas route fails For advanced routing configuration, see [Configuration](/skip-go/widget/configuration). # Getting Started Source: https://docs.cosmos.network/skip-go/widget/getting-started # Overview The Skip Go `Widget` is the easiest way to onboard users and capital from anywhere in the world to your corner of the sovereign web! The widget provides seamless cross-chain bridging, swapping, and transferring functionality in an easy to integrate React or [Web component](./web-component). Want to experiment with the widget before integrating it? Check out our interactive builder: [studio.skip.build](https://studio.skip.build) — experiment visually and generate ready-to-use code for your app! # Useful Links * [Skip Go Repository](https://github.com/skip-mev/skip-go) * [Example Widget Implementation](https://github.com/skip-mev/skip-go/tree/main/examples/nextjs) # Quickstart Guide This guide will walk you through how to integrate the `Widget` into your React app. Starting from scratch? It's recommended to use a React framework like [Next.js](https://nextjs.org/docs/getting-started/installation) or [Create React App](https://create-react-app.dev/docs/getting-started) to get up and running quickly. 
```shell NPM
npm install @skip-go/widget
```

```shell Yarn
yarn add @skip-go/widget
```

If you're using `yarn` (or another package manager that doesn't install peer dependencies by default) you may need to install these peer dependencies as well:

```bash
yarn add @tanstack/react-query viem wagmi
```

Next, use the `Widget` component to render the swap interface:

```typescript
import { Widget } from '@skip-go/widget';

// The wrapping div is an illustrative fixed-size container; a fixed size is
// recommended (see the FAQ) to prevent layout shift while the widget mounts.
const SwapPage = () => {
  return (
    <div style={{ width: '100%', maxWidth: '500px', height: '640px' }}>
      <Widget />
    </div>
  );
};
```

You can use the code generated by [studio.skip.build](https://studio.skip.build) directly in your integration.
Now that you have the widget integrated, it's time to configure the user experience. Find more details on widget configuration [here](./configuration).
If there's any functionality or configurations you'd like to see in the widget, we'd love for you to contribute by opening up an issue or pull request in [the repository](https://github.com/skip-mev/skip-go/tree/main/packages/widget) or by joining [our Discord](https://discord.com/invite/interchain) and giving us a shout! # Migration Guide Source: https://docs.cosmos.network/skip-go/widget/migration-guide ## @skip-go/widget v3.9.0 ### Prop name changes **Before:** ```tsx theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} ``` **After:** ```tsx theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} ``` ## @skip-go/widget v3.0.0 ### 1. Update Dependency (`^3.0.0`) ```shell NPM theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} npm install @skip-go/widget@latest ``` ```shell Yarn theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} yarn add @skip-go/widget@latest ``` If you're using `yarn` (or another package manager that doesn't install peer dependencies by default) you may need to install these peer dependencies as well: ```bash theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} yarn add @tanstack/react-query graz react react-dom viem wagmi ``` ### 2. `theme` Prop Changes #### More Customization Options You can pass either `light`, `dark`, or a custom theme object with granular control over the widget's appearance. 
**Before** (illustrative; previously, `theme` accepted only a full custom theme object):

```tsx
<Widget theme={myCustomTheme} />
```

**After** (the `light` and `dark` presets are now also accepted):

```tsx
<Widget theme="dark" />
```

The custom theme object has the following structure:

```tsx
theme = {
  brandColor: string;
  borderRadius: number;
  primary: {
    background: {
      normal: string;
      transparent: string;
    };
    text: {
      normal: string;
      lowContrast: string;
      ultraLowContrast: string;
    };
    ghostButtonHover: string;
  };
  secondary: {
    background: {
      normal: string;
      transparent: string;
      hover: string;
    };
  };
  success: {
    text: string;
  };
  warning: {
    background: string;
    text: string;
  };
  error: {
    background: string;
    text: string;
  };
};
```

### 3. Prop Spelling Changes

#### `chainID` Renamed to `chainId`

#### `apiURL` Renamed to `apiUrl`

Update all instances of `chainID` to `chainId`, notably in the `defaultRoute` prop.

**Before** (illustrative `defaultRoute` values):

```tsx
<Widget
  defaultRoute={{
    srcChainID: 'osmosis-1',
    destChainID: 'cosmoshub-4',
  }}
/>
```

**After:**

```tsx
<Widget
  defaultRoute={{
    srcChainId: 'osmosis-1',
    destChainId: 'cosmoshub-4',
  }}
/>
```

### 4. Temporarily Disabled Features

The following props will be reintroduced in future versions of `Widget`.

#### a. `connectedWallet` Prop

The `connectedWallet` prop, which allowed passing a custom wallet provider, isn't currently supported.

#### b. `CallbackStore` Callback Props

The `onWalletConnected`, `onWalletDisconnected`, `onTransactionBroadcasted`, `onTransactionComplete`, and `onTransactionFailed` callback props aren't currently supported.

### 5. Removed Features

#### a. `persistWidgetState`

This prop is no longer supported, as the `Widget` persists state by default.

#### b. `toasterProps`

The `toasterProps` prop has been removed because the `Widget` no longer generates notifications.

#### c.
`makeDestinationWallets`

The `makeDestinationWallets` prop has been removed. The `Widget` now automatically generates destination wallets from connected wallets or manual user entry.

By implementing these changes, you can successfully migrate your application to the new `Widget` version. For further assistance, refer to the official documentation or reach out to the [support team](https://discord.com/channels/669268347736686612/1365254948782342175).

# Web Component
Source: https://docs.cosmos.network/skip-go/widget/web-component

For non-React applications, the Skip Go `Widget` is available as a web component.

# Installation

You can import the web component in two ways:

## 1. NPM

```ts
import('@skip-go/widget-web-component');
```

Note: Ensure Node has sufficient memory allocated: `NODE_OPTIONS=--max-old-space-size=32384`. This can be added to npm scripts in `package.json`, a `.env` file, or used when running Node directly.

## 2. Script Tag or CDN (Recommended)

```html
```

## Usage

Props are exactly the same as [`WidgetProps`](./configuration), but you are required to pass them to the element via JavaScript/TypeScript.

```tsx
// Illustrative sketch: assumes the web component registers the custom element
// tag `skip-widget` (check the package README for the exact tag name).
const widget = document.querySelector('skip-widget') as any;
widget.theme = 'dark';
widget.defaultRoute = {
  srcChainId: 'osmosis-1',
  destChainId: 'cosmoshub-4',
};
```

## Performance Considerations

It's recommended to lazy load this component, as it comes pre-bundled with all dependencies, which may impact load times, especially in development environments.

# `allow_unsafe`: Preventing & Handling Bad Execution
Source: https://docs.cosmos.network/skip-go/advanced-swapping/allow_unsafe-preventing-handling-bad-execution

## Introduction

**The `allow_unsafe` parameter in requests to the `/route` & `/msgs_direct` endpoints is designed to protect users from bad trade execution.**

This parameter indicates whether you want to allow the API to return and execute a route even when our routing engine forecasts low or unknown execution quality:

* `allow_unsafe=false` (default): The API will throw an error instead of returning a route when the routing engine forecasts bad execution quality (i.e. > 10% `price_impact` or difference between USD value in and out) or when execution quality can't be determined.
* `allow_unsafe=true`: The API will return a route for a trade even when the routing engine forecasts bad execution quality (i.e. > 10% `price_impact` or difference between USD value in and out) or when execution quality can't be determined. In these cases, the API appends a warning to the response in a `warning` field.

**Make sure you understand execution/quote quality measurements first**

Before reading this doc, you should read our documentation on quote quality: [Understanding Quote Quality Metrics](./understanding-quote-quality-metrics). This provides basic background on the different ways the Skip Go API measures whether a route will likely give a user a bad execution price, namely the difference between the USD value of the input and the output & on-chain price impact.

## `allow_unsafe=false` Behavior

When `allow_unsafe=false`, the endpoint throws an error when execution quality is poor (as measured by price impact or estimated USD value lost) or when execution quality can't be determined (i.e.
neither of these measurements is available).

In particular, if `allow_unsafe=false`, `/route` and `/msgs_direct` return errors when:

* `price_impact > .10` (the swap will move the on-chain price by more than 10%)
* `(usd_amount_in - usd_amount_out) / usd_amount_in > .10` (more than 10% of the value of the input is lost)
* Neither of the above metrics can be computed

Below, we provide examples of the responses in each of these cases.

The price impact is greater than 10% (`BAD_PRICE_ERROR`):

```json
{
  "code": 3,
  "message": "swap execution price in route deviates too far from market price. expected price impact: 98.6915%",
  "details": [
    {
      "@type": "type.googleapis.com/google.rpc.ErrorInfo",
      "reason": "BAD_PRICE_ERROR",
      "domain": "skip.build",
      "metadata": {}
    }
  ]
}
```

The user loses more than 10% of their USD value (`BAD_PRICE_ERROR`):

```json
{
  "code": 3,
  "message": "difference in usd value of route input and output is too large. input usd value: 1000 output usd value: 600",
  "details": [
    {
      "@type": "type.googleapis.com/google.rpc.ErrorInfo",
      "reason": "BAD_PRICE_ERROR",
      "domain": "skip.build",
      "metadata": {}
    }
  ]
}
```

The `price_impact` and the estimated USD value difference cannot be calculated (`LOW_INFO_ERROR`):

```json
{
  "code": 3,
  "message": "unable to determine route safety",
  "details": [
    {
      "@type": "type.googleapis.com/google.rpc.ErrorInfo",
      "reason": "LOW_INFO_ERROR",
      "domain": "skip.build",
      "metadata": {}
    }
  ]
}
```

## `allow_unsafe=true` Behavior

When `allow_unsafe=true`, the endpoints will still return routes even when the routing engine forecasts unknown or poor execution quality (measured by `price_impact` or estimated USD value lost), but they will have a `warning` field appended to them.
The `warning` field is populated exactly when the endpoints would return an error if `allow_unsafe` were `false`, namely:

* `price_impact > .10` (the swap will move the on-chain price by more than 10%)
* `(usd_amount_in - usd_amount_out) / usd_amount_in > .10` (more than 10% of the value of the input is lost)
* Neither of the above metrics can be computed

Below, we provide examples of the responses in each of these cases.

The price impact is greater than 10% (`BAD_PRICE_WARNING`):

```json
"warning": {
  "type": "BAD_PRICE_WARNING",
  "message": "swap execution price in route deviates too far from market price. expected price impact: 98.6826%"
}
```

More than 10% of the USD value of the input is lost in the swap (`BAD_PRICE_WARNING`):

```json
"warning": {
  "type": "BAD_PRICE_WARNING",
  "message": "difference in usd value of route input and output is too large. input usd value: 1000 output usd value: 600"
}
```

The `price_impact` and the estimated USD value difference cannot be calculated (`LOW_INFO_WARNING`):

```json
"warning": {
  "type": "LOW_INFO_WARNING",
  "message": "unable to determine route safety"
}
```

## Best Practices for Protecting Users

**Above all else, we recommend setting `allow_unsafe=false`.**

In addition, we recommend reading our documentation on [safe API integrations](./safe-swapping-how-to-protect-users-from-harming-themselves) to learn about UX/UI practices that can further help prevent users from performing trades they'll immediately regret.

**Have questions or feedback? Help us get better!**

Join [our Discord](https://discord.com/invite/interchain) and select the "Skip Go Developer" role to share your questions and feedback.
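When calling the API with `allow_unsafe=true`, your client still has to decide whether to surface the returned route. The helper below is an illustrative sketch that mirrors the thresholds above on the client side, using the REST response field names shown in this doc (`swap_price_impact_percent` is a percentage, so the `> .10` fraction becomes `> 10`):

```typescript
// Sketch: client-side safety check mirroring the >10% thresholds above.
// The response shape here is a minimal subset; the Warning type is illustrative.
type RouteWarning = { type: string; message: string };

function isRouteSafe(route: {
  swap_price_impact_percent?: string;
  usd_amount_in?: string;
  usd_amount_out?: string;
  warning?: RouteWarning;
}): boolean {
  if (route.warning) return false; // the API already flagged this route
  const impact = route.swap_price_impact_percent
    ? parseFloat(route.swap_price_impact_percent)
    : undefined;
  const usdIn = route.usd_amount_in ? parseFloat(route.usd_amount_in) : undefined;
  const usdOut = route.usd_amount_out ? parseFloat(route.usd_amount_out) : undefined;

  // LOW_INFO case: neither metric is available, so safety can't be determined.
  if (impact === undefined && (usdIn === undefined || usdOut === undefined)) {
    return false;
  }
  // More than 10% price impact.
  if (impact !== undefined && impact > 10) return false;
  // More than 10% of the input's USD value lost.
  if (usdIn !== undefined && usdOut !== undefined && (usdIn - usdOut) / usdIn > 0.10) {
    return false;
  }
  return true;
}
```

Treating an unparseable or missing metric as unsafe keeps the client's behavior aligned with the API's `LOW_INFO` semantics.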
# SAFE Swapping: How to Protect Users from Bad Trades Source: https://docs.cosmos.network/skip-go/advanced-swapping/safe-swapping-how-to-protect-users-from-harming-themselves **Summary** **This doc covers several UI/UX principles we recommend Skip Go API integrators implement to protect users from harming themselves by swapping at bad execution prices. Collectively, we refer to these principles as S.A.F.E.** The document introduces the S.A.F.E framework and provides detailed guidance for how you can use the information provided in the Skip Go API to implement the framework & give your users a worry-free swapping experience. ### Keeping Users Safe on your Application Many users are unfamiliar with the technology behind cross-chain swaps and transfers. As a result they will take actions that aren't in their best interests: 1. Execute swaps & transfers they don't understand at unfavorable prices using money they cannot afford to lose (e.g. Spending \$1000 on a new, illiquid meme coin within 2 hours of launch) 2. Accuse **you** (& Skip) of responsibility for their losses (even if your software & ours worked as expected), demand a refund, and publicly vilify & troll you if you do not give one. To protect both you and the user, we've developed a framework called **S.A.F.E** to help you remember and organize the UX principles that can keep your users safe: 1. **S**hare all available information about expected execution price 2. **A**lert when info indicates an action might be harmful 3. **F**ail transactions that trigger your alerts (i.e. transactions that seem likely to harm users) 4. **E**nforce additional approval stages for users who want to create these transactions anyhow ### S.hare Info You should share as much information about the estimated swap with your users as possible. Fortunately, the `/fungible/route` and `/fungible/msgs_direct` endpoints return a ton of useful information. 
In addition to showing the estimated amount in and out (the obvious ones), we recommend showing:

* **Estimated USD value of the amount in** (`response.usd_amount_in`)
* **Estimated USD value of the amount out** (`response.usd_amount_out`)
* **Price Impact** (`response.swap_price_impact_percent`) -- This measures how much the user's expected execution price differs from the current on-chain spot price at time of execution. A high price impact means the user's swap size is large relative to the available on-chain liquidity they're swapping against, which makes a bad price very likely.
* **Swapping Venue** (available in the `swap_venue` field of the `swap` operation in `response.operations`) -- This tells the user what DEX they're actually performing the underlying swap on, which helps avoid confusion about prices. This can be useful information in the event the API returns an unusual route and routes the user to a DEX they're unfamiliar with / don't want to use, or to a DEX where there's not much liquidity for the token they're swapping (e.g. SEI liquidity on Osmosis is sparse at the time of this writing).
* **Bridge Fee Amounts** (available in the `transfer` and `axelar_transfer` entries in `response.operations` under `fee_asset` and `fee_amount`) -- These represent the fees that bridges take from the user along the route, denominated in the token(s) they're taking. It's important to show these because sometimes bridges take fees unexpectedly (e.g. Noble used to take a 0.10% fee on IBC transfers), and sometimes they take large fees (e.g. during periods of high gas prices, Axelar fees can be as high as \$200).
* **USD value of bridge fee amounts** (available in the `transfer` and `axelar_transfer` entries in `response.operations` under `usd_fee_amount`) -- This gives the user a sense of the actual cost of their fee amounts.
In cases of more complex swaps and transfers, the user might have a hard time making sense of the underlying fee tokens because the fees are charged at an intermediate point in the route.

The quote shown to the user should **always** match the transaction they end up signing. Once you have called `/route` and displayed the quote to the user, a call to `/msgs` is the only way to generate the correct message. (DO NOT call `/msgs_direct` after calling `/route`, since this will regenerate the quote.) Alternatively, you can call `/msgs_direct` to generate both the quote information and the transaction that needs to be signed in one request. Remember that these endpoints are not deterministic: calling either again will generate a different output, and your user will not execute the transaction they think they are executing.

### A.lert users to bad prices

We recommend alerting users in at least the following three scenarios:

1. **High Price Impact** (`swap_price_impact > PRICE_IMPACT_THRESHOLD`): This indicates the user's swap is executing at a considerably worse price than the on-chain spot price -- meaning they're probably getting a worse price than they think they should. It also indicates the size of their trade is large relative to the available on-chain liquidity. We recommend using `PRICE_IMPACT_THRESHOLD = 2.5` in your calculations.
2. **High difference in relative USD value in and out** (`((usd_amount_in - usd_amount_out) / usd_amount_in) * 100 > USD_REL_VALUE_THRESHOLD`): This estimates the underlying value the user will lose instantly as a result of swapping, represented as a percentage of the value of their input. A high value for this figure indicates the user is instantly losing a large percentage of the value of their starting tokens. For example, a value of 50 indicates the user loses 50% of the estimated value of their input. We recommend using `USD_REL_VALUE_THRESHOLD = 2.5`.
3.
**High fees** (`usd_fee_amount / usd_amount_in > FEE_THRESHOLD`): This indicates that the value of fees charged by bridges used in the route amounts to a large percentage of the underlying amount being transferred. If this value is high, the user might want to wait to execute until they're bridging more funds (since bridge fees rarely scale with volume). We recommend setting `FEE_THRESHOLD = .25`.

Loud visual emphasis of the values that exceed safe tolerances is the most effective form of alerting. This includes:

* Bolding unusually high/low quote numbers -- or otherwise making them larger than surrounding text/numbers
* Automatically opening drop downs / detail panes that are usually closed by default to display the alert field
* Highlighting the offending quote number in red, yellow, or some other loud color indicating danger, and/or greying out other numbers

For example, when a swap exceeds our `PRICE_IMPACT_THRESHOLD` on [go.skip.build](https://go.skip.build), we auto-open the drop-down that normally hides price impact and highlight the whole field in red.

### F.ail Transactions when they're likely to cause user harm

We recommend preventing transactions that may significantly harm the user altogether -- even if your user seems to want to complete the transaction. We recommend failing/preventing user transactions in the following scenarios:

1. **Greater than 10% difference in relative USD value in and out** (`((usd_amount_in - usd_amount_out) / usd_amount_in) * 100 > 10`)
2. **Greater than 10% price impact** (`swap_price_impact > 10`)

You can decide how you want to signal this failure to the user, e.g. no route found, some form of low-liquidity message, or something else entirely.

Preventing transactions altogether that violate your safety thresholds is the strongest form of user safety.
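The A.lert and F.ail thresholds recommended above can be combined into a single tiering helper. This is an illustrative sketch using the recommended values (2.5% to alert, 10% to fail); the type and function names are not part of the Skip Go API:

```typescript
// Sketch: map the recommended A.lert / F.ail thresholds to a safety tier.
type Quote = {
  swapPriceImpactPercent?: number; // e.g. 12 means 12%
  usdAmountIn?: number;
  usdAmountOut?: number;
};

type Tier = "ok" | "alert" | "fail";

function safetyTier(q: Quote): Tier {
  const impact = q.swapPriceImpactPercent;
  // Percentage of USD value lost between input and output, when computable.
  const usdLossPct =
    q.usdAmountIn !== undefined && q.usdAmountOut !== undefined && q.usdAmountIn > 0
      ? ((q.usdAmountIn - q.usdAmountOut) / q.usdAmountIn) * 100
      : undefined;
  // F.ail outright above 10% price impact or 10% of USD value lost.
  if ((impact ?? 0) > 10 || (usdLossPct ?? 0) > 10) return "fail";
  // A.lert above the 2.5% thresholds.
  if ((impact ?? 0) > 2.5 || (usdLossPct ?? 0) > 2.5) return "alert";
  return "ok";
}
```

Your UI can then branch on the tier: render normally for `"ok"`, emphasize the offending metric for `"alert"`, and refuse to build the transaction for `"fail"`.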
### E.nforce explicit, additional approval

If you do not want to fail transactions that exceed safety thresholds outright, one viable alternative is to require additional stages of user approval before letting the user sign the transaction. Importantly, this is different from and more disruptive than simply warning the user about some aspect of the quote looking unfavorable. It means putting additional clicks between the user and the swap they want to perform, and having them explicitly agree to performing a swap your UI indicates will have a bad price.

For example, this is go.skip.build's warning screen:

* It's very clear that our expectation is that the swap will harm the user, with the "Bad Trade Warning" across the top
* The page explicitly reminds the user what the problem is -- foregrounding the predicted price impact and forcing them to acknowledge it again
* The "happy path" or "default" path is to go back -- not to finish the swap (notice that the "Go Back" button is highlighted)

We recommend requiring additional steps of user approval in the following cases:

1. **Greater than 5% difference in relative USD value in and out** (`((usd_amount_in - usd_amount_out) / usd_amount_in) * 100 > 5`)
2. **Greater than 5% price impact** (`swap_price_impact > 5`)
3. **Price impact AND relative-USD-value-difference cannot be calculated** (i.e. `swap_price_impact`, `usd_amount_out`, and/or `usd_amount_in` are missing)

#### Choosing the right level of protection: warnings, additional approvals, and outright failures

It's important to think about this tradeoff, because protecting users often directly trades off against making a cleaner, simpler, and more powerful user experience.
For example, excessive warnings might get annoying to users who know they're trading illiquid shitcoins, and additional steps of approval might frustrate pro traders who care deeply about speed.

For any safety metric you might track to determine whether a transaction could harm a user, consider 4 tiers of safety you can implement, from least safe and least disruptive to most safe and most disruptive:

1. **None**: Just let it rip. Don't give the user any heads up. Don't do anything to slow them down or prevent them from trading.
2. **Alert**: Use some visual cue to indicate to the user that they should be wary about the swap.
3. **Enforce additional approval**: Require additional clicks to actually execute the swap and foreground the warning -- so the user needs to approve it explicitly.
4. **Fail**: Just block / fail / prevent transactions that exceed your safety tolerance bounds outright.

Here are some suggestions for navigating this design space:

1. **Set lower trigger thresholds for weaker forms of security and more conservative thresholds for stronger forms of security** (e.g. you could alert users about high price impact at 2.5%, require an additional stage of approval at 10%, and fail the transaction outright at 25%). This approach is nice because it gives users who may be very conservative some indication that they may face danger without getting in their way too much, while still hard-stopping more inexcusable failures that are probably never acceptable to any trader.
2. **Use stronger forms of security when safety tolerances are exceeded for higher-value transactions** (e.g. you could use warnings when price impact is greater than 10% for transactions where the amount in is \$0-100, additional approvals when the amount in is \$1,000-10,000, and block transactions above \$10k outright if price impact is greater than 10%).

**Have questions or feedback?
Help us get better!** Join [our Discord](https://discord.com/invite/interchain) and select the "Skip Go Developer" role to share your questions and feedback. # Smart Swap Source: https://docs.cosmos.network/skip-go/advanced-swapping/smart-swap-options This page introduces the Smart Swap functionality provided by the Skip Go API to improve swap speed, price, and customization. ## Introduction Smart Swap refers to a feature set that improves swap speed, price, and control. It currently allows for: * [External routers](#feature-use-external-routers-to-improve-price-execution) (e.g. Hallswap and Osmosis SQS) for better price execution * [Route splitting](#feature-route-splitting) for better price execution * [EVM swaps](#feature-evm-swaps) If you're using the deprecated `@skip-router` library, you must use version v4.0.0+ to enable Smart Swap. We strongly recommend using the `@skip-go/client` [TypeScript package](https://www.npmjs.com/package/@skip-go/client), which is actively maintained. The rest of this document will show you how to use Smart Swap with the `@skip-go/client` library. The only changes you'll notice between this context and the REST API are naming conventions. # Smart Swap Features Set your Smart Swap settings in your `skipClient` function call or REST API request body. ## Feature: Use External Routers to Improve Price Execution The Skip Go API considers multiple internal and external routers to find the route with the best price execution. Currently supported external swap routers: 1. Skip Go API's in-house Router 2. Hallswap's Dex Aggregator 3. Osmosis's Sidecar Query Service (SQS) (Used in the Osmosis frontend) ### Usage Pass an empty `smartSwapOptions` object into your route request. 
```ts TypeScript (Client)
const route = await skipClient.route({
  smartSwapOptions: {}, // You're not required to activate a particular flag for this feature
  sourceAssetDenom: "uusdc",
  sourceAssetChainID: "noble-1",
  destAssetDenom: "utia",
  destAssetChainID: "celestia",
  amountIn: "1000000", // 1 USDC
  cumulativeAffiliateFeeBPS: "0"
});
```

```JSON JSON (REST API)
// POST /v2/fungible/route
{
  "amount_in": "1000000",
  "source_asset_denom": "uusdc",
  "source_asset_chain_id": "noble-1",
  "dest_asset_denom": "utia",
  "dest_asset_chain_id": "celestia",
  "cumulative_affiliate_fee_bps": "0",
  "allow_multi_tx": true,
  "smart_swap_options": {}
}
```

That's it! Skip Go API will now consider supported external routers and return the best available option.

## Feature: Route Splitting

Route splitting involves dividing a user's trade into multiple parts and swapping them through different pools. This reduces price impact and can increase the user's output compared to using a single route. It works especially well when one or both tokens being swapped are commonly paired with other assets on a DEX (e.g., OSMO on Osmosis).

### Usage

Pass the `splitRoutes` flag in the `smartSwapOptions` object.
```ts TypeScript (Client)
const route = await skipClient.route({
  smartSwapOptions: {
    splitRoutes: true
  }, // smart swap object
  sourceAssetDenom: "uusdc",
  sourceAssetChainID: "noble-1",
  destAssetDenom: "utia",
  destAssetChainID: "celestia",
  amountIn: "1000000", // 1 USDC
  cumulativeAffiliateFeeBPS: "0"
});
```

```JSON JSON (REST API)
// POST /v2/fungible/route
{
  "amount_in": "1000000",
  "source_asset_denom": "uusdc",
  "source_asset_chain_id": "noble-1",
  "dest_asset_denom": "utia",
  "dest_asset_chain_id": "celestia",
  "cumulative_affiliate_fee_bps": "0",
  "allow_multi_tx": true,
  "smart_swap_options": {
    "split_routes": true
  }
}
```

### Response Changes when using Split Routes

We've added a new `swapType` called `SmartSwapExactCoinIn` that's returned in the `routeResponse` and `msgsDirectResponse` when the provided route is a split route. This new `swapType` has fields that allow for multiple routes, across multiple swap venues.

```ts
export type SmartSwapExactCoinIn = {
  swapVenue: SwapVenue;
  swapRoutes: SwapRoute[];
};

export type SwapRoute = {
  swapAmountIn: string;
  denomIn: string;
  swapOperations: SwapOperation[];
};
```

## Feature: EVM Swaps

Smart Swap supports bidirectional EVM swaps: go from any asset on an EVM chain to any asset on a Cosmos chain and back again. With EVM swaps, users can onboard to your IBC-connected chain in 1 transaction from a broad range of EVM assets, including the memecoins retail loves to hold!
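As a quick client-side sanity check for the split-route response shown above, the legs' `swapAmountIn` values should sum to the original `amountIn`. A minimal sketch of that check follows; note the `SwapVenue` fields and the example venue values are illustrative assumptions, not part of the documented schema:

```typescript
// Shapes from the split-route response; the SwapVenue fields are assumed here.
type SwapVenue = { name: string; chainID: string };
type SwapRoute = { swapAmountIn: string; denomIn: string; swapOperations: unknown[] };
type SmartSwapExactCoinIn = { swapVenue: SwapVenue; swapRoutes: SwapRoute[] };

// Sum base-unit input amounts across all split legs, using bigint to avoid
// precision loss on large token amounts.
function totalAmountIn(swap: SmartSwapExactCoinIn): bigint {
  return swap.swapRoutes.reduce((sum, leg) => sum + BigInt(leg.swapAmountIn), 0n);
}

// Example: a 1 USDC swap split 60/40 across two pools (hypothetical values).
const split: SmartSwapExactCoinIn = {
  swapVenue: { name: "example-venue", chainID: "osmosis-1" },
  swapRoutes: [
    { swapAmountIn: "600000", denomIn: "uusdc", swapOperations: [] },
    { swapAmountIn: "400000", denomIn: "uusdc", swapOperations: [] },
  ],
};

console.log(totalAmountIn(split).toString()); // "1000000"
```

If the total ever diverges from the `amountIn` you requested, that's a signal to re-fetch the route before building messages.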
Currently, the API supports EVM swapping on Velodrome (Optimism) & Aerodrome (Base), and swapping on official Uniswap V3 deployments on the following chains:

| Network | Chain ID |
| ------------ | -------- |
| Ethereum | 1 |
| Polygon | 137 |
| Optimism | 10 |
| Arbitrum One | 42161 |
| Base | 8453 |
| BNB Chain | 56 |
| Avalanche | 43114 |
| Blast | 81457 |
| Celo | 42220 |

### Usage

Set the `evmSwaps` flag to true in the `smartSwapOptions` object. If using the deprecated `@skip-router` library, you must be on v5.1.0+ (we strongly recommend migrating to `@skip-go/client` as soon as possible).

```ts TypeScript (Client)
const route = await skipClient.route({
  sourceAssetDenom: "arbitrum-native",
  sourceAssetChainID: "42161",
  destAssetDenom: "ibc/8E27BA2D5493AF5636760E354E46004562C46AB7EC0CC4C1CA14E9E20E2545B5",
  destAssetChainID: "dydx-mainnet-1",
  amountIn: "10000000000000000000",
  cumulativeAffiliateFeeBPS: "0",
  smartRelay: true,
  smartSwapOptions: {
    evmSwaps: true
  },
});
```

```JSON JSON (REST API)
// POST /v2/fungible/route
{
  "amount_in": "10000000000000000000",
  "source_asset_denom": "arbitrum-native",
  "source_asset_chain_id": "42161",
  "dest_asset_denom": "ibc/8E27BA2D5493AF5636760E354E46004562C46AB7EC0CC4C1CA14E9E20E2545B5",
  "dest_asset_chain_id": "dydx-mainnet-1",
  "cumulative_affiliate_fee_bps": "0",
  "allow_multi_tx": true,
  "smart_relay": true,
  "smart_swap_options": {
    "evm_swaps": true
  }
}
```

### How do EVM Swaps Change the `route` Response?

When an EVM swap occurs in a route, a new operation of type `evm_swap` is returned in the array of `operations` in the `v2/route` and `v2/msgs_direct` response.
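Since each operation in the array is an object keyed by its type, clients can pick out the EVM swap legs with a simple key check. A hedged sketch (only a few `evm_swap` fields are spelled out here, and the example operations are illustrative):

```typescript
// Minimal operation shapes: a route operation is an object keyed by its type.
// Only the evm_swap variant is spelled out; amounts and calldata are strings.
type EvmSwapOp = { evm_swap: { amount_in: string; amount_out: string; swap_calldata: string } };
type Operation = EvmSwapOp | Record<string, unknown>;

// Collect the EVM swap legs out of a route's operations array.
function findEvmSwaps(operations: Operation[]): EvmSwapOp[] {
  return operations.filter((op): op is EvmSwapOp => "evm_swap" in op);
}

// Example operations array with one (hypothetical) transfer and one EVM swap.
const operations: Operation[] = [
  { cctp_transfer: {} },
  { evm_swap: { amount_in: "100", amount_out: "99", swap_calldata: "0x" } },
];

console.log(findEvmSwaps(operations).length); // 1
```

The same keyed-object pattern applies to every other operation type in the array, so a switch over the present key is a reasonable way to dispatch decoding.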
If your API use follows the `v2/route` then `v2/msgs` call pattern, this new operation type must be passed to the `v2/msgs` endpoint, so make sure you use the latest [Skip Go Client version](https://www.npmjs.com/package/@skip-go/client) and decode the operation properly. The `evm_swap` operation type is as follows:

```ts TypeScript
export type EvmSwap = {
  inputToken: string;
  amountIn: string;
  swapCalldata: string;
  amountOut: string;
  fromChainID: string;
  denomIn: string;
  denomOut: string;
  swapVenues: SwapVenue[];
}
```

```JSON JSON
{
  "evm_swap": {
    "input_token": "0x", // string (token contract address if an ERC20 token, blank if native)
    "amount_in": "100", // string
    "swap_calldata": "0x", // string
    "amount_out": "123", // string
    "from_chain_id": "1", // string
    "denom_in": "0x", // string
    "denom_out": "0x", // string
    "swap_venues": [] // []swap_venue
  }
}
```

### How does this Change the `/msgs` and `/status` Response?

Nothing new in particular! The `msg_type` used for EVM swaps is the same `evm_tx` type used for all of our EVM transactions. Similarly, there is no new `transfer_event` type; the swap is atomic with the bridging action (Axelar or CCTP), so the same types are used (`axelar_transfer_info` and `cctp_transfer_info` respectively).

**Have questions or feedback? Help us get better!** Join [our Discord](https://discord.com/invite/interchain) and select the "Skip Go Developer" role to share your questions and feedback.
# Understanding Quote Quality Metrics Source: https://docs.cosmos.network/skip-go/advanced-swapping/understanding-quote-quality-metrics This doc covers the various ways route quote quality is measured -- slippage, USD estimates of the amount in and out, and price impact

**Video Summary** [Here's a video](https://www.loom.com/share/5293719872714557b03db41fb5ca590e) that summarizes the content below.

### Different ways to measure quote quality

* `slippage`: This is the maximum allowable difference between what Skip estimates the price of the swap will execute at and the worst price it can execute at. If the price of the swap turns out to be worse than the slippage tolerance allows, the swap will revert entirely. Slippage does not account for the impact of the user's swap itself because this is incorporated in the estimate. You can still get a "bad price" with *low* slippage if you make a big swap against a low-liquidity pool because your swap itself will move the price. As a result, you should think of slippage as tolerance for the difference between actual price and quoted price, not a measure of whether the quoted price is "good".
* `usd_estimate_in` and `usd_estimate_out`: These are estimates of the dollar value of the amount in and amount out. These use Coingecko prices that can be a maximum of 5 minutes stale. These values aren't always available because not all tokens are listed on Coingecko (e.g. some meme coins or new coins won't have feeds). This is really useful for providing a sanity check on whether a price is "good" and flagging to users when the difference between estimated input and output dollar values exceeds some threshold as a percentage of the input amount. (For example, we recommend you flag the user when their input dollar value is $100 and their output is $50. This indicates they're receiving a bad price for some reason.)
* `price_impact`: This measures how much the user's expected execution price differs from the current on-chain spot price at time of execution. This is available whenever the underlying DEX makes it feasible to calculate on-chain spot price. This is especially useful when `usd_estimate_in` or `usd_estimate_out` isn't available. A high price impact means the user's swap size is large relative to the available on-chain liquidity that they're swapping against, and they're moving the price significantly. Like with the USD estimates, we recommend warning users when this exceeds some threshold (e.g. 10%).

### More on `slippage` vs `price_impact`

Some people have asked why both slippage and `price_impact` are necessary. The reason is that they are trying to capture fundamentally different concepts:

* slippage = tolerance to the world / liquidity changing between when the API gives you a quote and when the transaction gets executed (0 slippage = "I want to get exactly the amount out that Skip estimates"). Of course, this could still be a bad price if there's high `price_impact`, or if the difference between `usd_amount_in` and `usd_amount_out` is large. **Slippage is better understood as a tolerance to volatility, rather than an estimate of execution quality.**
* price impact = a measure of how much you're going to move the on-chain price with your trade. Some folks use the word "slippage" to describe price impact. This is intended to capture your tolerance to low liquidity and bad pricing. Fundamentally, you can execute a trade that has low price impact but high slippage if you want to execute a volatile trade against a lot of liquidity.

### Protecting users with SAFE interfaces

If you're wondering how you should use these values to help protect your users from poor execution prices, we have a whole guide written about how to build a safe swapping interface: [SAFE Swapping: How to Protect Users from Bad Trades](./safe-swapping-how-to-protect-users-from-harming-themselves)!
Check it out! **Have questions or feedback? Help us get better!** Join [our Discord](https://discord.com/invite/interchain) and select the "Skip Go Developer" role to share your questions and feedback. # CW20 Tokens & Their Limitations Source: https://docs.cosmos.network/skip-go/advanced-transfer/cw20-swaps Information about performing CW20 swaps This page covers the basics of CW20s and the limitations around performing cross-chain actions with CW20 tokens -- compared to tokenfactory and "native" Cosmos assets (aka Bank Module assets). ### CW20 Token Denom Formatting In API Requests To use CW20 tokens in the Skip Go API, specify the denom as "cw20:" + the token contract address. Example denom for Astro on Terra2: `cw20:terra1nsuqsk6kh58ulczatwev87ttq2z6r3pusulg9r24mfj2fvtzd4uq3exn26` ### Background #### What is a CW20 token? [CW20](https://github.com/CosmWasm/cw-plus/blob/main/packages/cw20/README.md) is the fungible token spec for the CosmWasm (i.e. CW) VM. CosmWasm is the most popular smart contract VM among CosmosSDK blockchains today. At a high-level, CW20 is very similar to ERC20 (the popular EVM fungible token standard). Contracts that comply with the standard implement the following functionalities: * Transferring tokens from one account to another * Sending tokens to a contract along with a message (similar to `callContractWithToken`) * Tracking balances * Delegating balance spending to other accounts and contracts ASTRO (Astroport's governance token) is one CW20 token issued on Terra2. #### How do CW20 tokens interact with IBC? [CW20-ICS20](https://github.com/CosmWasm/cw-plus/tree/v0.6.0-beta1/contracts/cw20-ics20) converter contracts make a CW20 token compatible with the ICS20 token transfer standard, so they can be sent to other chains over normal ICS20 transfer channels. When they arrive on the destination chain, they're indistinguishable from bank module and tokenfactory tokens. 
These converter contracts are the source of much difficulty when attempting to perform cross-chain actions with CW20s: * Different converter contracts implement different versions of the ICS20 standard (e.g. Some don't support memos, which are required for post-transfer contract calls and multi-hop transfers) * On transfer failure, converter contracts just return assets to sender. That means if one of our contracts attempts to send tokens on your behalf unsuccessfully, it will receive the tokens. We can't atomically send them to you. #### How do CW20 tokens compare to "native" (aka bank module) tokens? "Native" tokens are tokens where minting, burning, balances, and transfer functionality are managed by the [bank module](/sdk/v0.53/build/modules/bank/), instead of by contracts. Unlike CW20s, native tokens are directly compatible with ICS20 and IBC modules. One can send a native token to another chain over a transfer channel just using a `MsgTransfer` -- no conversion contracts or anything of the sort required. The downside of native tokens is that they're permissioned and deeply ingrained into the chain's state machine. As a result, issuing a new native token requires a chain upgrade. Issuing a CW20 on the other hand, only requires deploying a new contract (just a transaction). #### How do CW20 tokens compare to "tokenfactory" tokens? Tokenfactory tokens are created with the [tokenfactory](https://docs.osmosis.zone/osmosis-core/modules/tokenfactory/) module. They're designed to have the best of both worlds of CW20 and native tokens: * Like CW20s, they're permissionless and users can create new ones just by submitting transactions -- no need to modify the chain's state machine * Like native tokens, they're directly compatible with IBC out-of-the-box, and the bank module manages their balances + transferring functionality. This combination of traits leads many to see tokenfactory as a strict improvement on CW20 that devs should prefer in the vast majority of cases. 
We strongly agree with this conclusion. Unlike `CW20s`, tokenfactory tokens have no limitations in the cross-chain functionality Skip Go API can offer for them.

### What limitations do CW20 tokens have within the Skip Go API?

At a high level, basically any multi-chain action--in which the token is on the chain where it was issued for one stage of the action--requires multiple transactions. In particular, this means you cannot perform the following actions in 1 transaction:

* IBC transfer after purchasing a cw20 asset. Chain 1 is the origin chain, where the cw20 token can be swapped freely, but it cannot be transferred to another chain in the same transaction.
* Call a contract on a remote chain after purchasing a cw20 asset (e.g. since this requires an IBC transfer under the hood). Chain 1 is the origin chain, where the token can be used freely for post-route actions, but it cannot be used in post-route actions on other chains.
* IBC transfer from a remote chain to the CW20's origin chain, then perform a swap or any other post-route action on that chain. Chain 2 is the origin chain. The token can be transferred back there, but it can't be used or swapped for anything in the same transaction.

In principle, you can use the Skip Go API to construct any of these action sequences across multiple transactions, but it will be more challenging for you and your end users.

**Have questions or feedback? Help us get better!** Join [our Discord](https://discord.com/invite/interchain) and select the "Skip Go Developer" role to share your questions and feedback.

# EVM Transactions Source: https://docs.cosmos.network/skip-go/advanced-transfer/evm-transactions This doc covers how to interact with the EvmTx type returned by the Skip Go API

## Intro

* When a user needs to transfer or swap from an EVM chain (e.g.
Ethereum mainnet, Arbitrum, Optimism, etc...), the Skip Go API will return an `EvmTx` type for the developer to pass to the user for signing
* Unlike CosmosSDK transactions, EVM transactions do not have a notion of messages, so this object doesn't correspond 1-to-1 to a "message", which might be a more familiar notion to Cosmos developers
* This doc is intended for CosmosSDK developers who aren't already familiar with the concepts of transaction construction in the EVM and need to use `EvmTx` to help their users move from/to EVM chains.

## `EvmTx` Data Structure

The `EvmTx` has 5 fields that the developer needs to understand:

* `to`: The address of the smart contract or externally owned account (EOA) with which this transaction interacts, as a hex-string prefixed with 0x (e.g. 0xfc05aD74C6FE2e7046E091D6Ad4F660D2A159762)
* `value`: The amount of `wei` this transaction sends to the contract it's interacting with (1 ETH = 10^18 wei)
* `data`: The calldata this transaction uses to call the smart contract it interacts with, as a hex string. The data bytes will be interpreted according to the application-binary-interface (ABI) of the contract that's being interacted with. If this field is empty, it means the transaction is sending funds to an address, rather than calling a contract.
* `required_erc20_approvals`: The permissions that must be granted to a specific smart contract to spend or transfer a certain amount of their ERC-20 tokens on behalf of the end user. This allows smart contracts to execute expressive flows that may involve moving some amount of the user's ERC-20 tokens
  * Skip Go will always return this field if there are any ERC-20 approvals needed for the route. It is the client's responsibility to check if the user's approval is already at or above the returned approval needed (for example, if the integrator allows for max approvals).
If this field is non-empty and the user does not have the necessary approvals, the approval must be granted, signed, and submitted before the `EvmTx` populated by the other fields in the response can be submitted to the network. Otherwise, it will fail to execute with a permission error.
  * Skip's `ERC20Approval` object has 3 fields that define an approval:
    * `token_contract`: The address of the ERC-20 token on which the approval is granted
    * `spender`: The address of the contract to which the approval will grant spend authority
    * `amount`: The amount of `token_contract` tokens the approval will grant the `spender` to spend
  * Check out EIP-2612 for more information on ERC-20 approvals.
* `chain_id`: This is the same as in the Cosmos context (simply an identifier for the chain), but it's an int instead of a string

For more information on transactions, check out the Ethereum foundation's [docs](https://ethereum.org/en/developers/docs/transactions/)

## Example constructing & signing an EVM Transaction

### 1. Install Signing Library and Skip Library

To enable EVM transactions in your application, first install an EVM developer library. The most popular options are:

* [viem](https://viem.sh/)
* [ethers.js](https://docs.ethers.org/v5/)
* [web3.js](https://web3js.readthedocs.io/en/v1.10.0/)

The code snippets below use viem.

```Shell Shell
npm i viem
npm i @skip-go/client
```

### 2. Initialize the `SkipClient` client with the EVM `WalletClient` object

All 3 libraries mentioned above allow you to create WalletClient "signer" objects that:

* Use an RPC provider under the hood to query the chain for necessary data to create transactions (e.g. nonce, gas price, etc...)
* Expose an API that allows constructing, signing, and broadcasting transactions

You need to set up the `getEVMSigner` function in the `SkipClient` constructor to initialize this signer object for a given EVM chain.
For example, with Viem, we do the following:

```TypeScript TypeScript
import { createWalletClient, custom, extractChain } from 'viem';
import * as chains from 'viem/chains';
import { SkipClient } from '@skip-go/client';

const skipClient = new SkipClient({
  getEVMSigner: async (chainID) => {
    const chain = extractChain({
      chains: Object.values(chains),
      id: parseInt(chainID)
    });
    const evmWalletClient = createWalletClient({
      chain: chain,
      transport: custom(window.ethereum!)
    });
    return evmWalletClient;
  }
});
```

### 3. Request Route using `SkipClient` and get required chains

Next, request your route as normal:

```TypeScript TypeScript
const route = await skipClient.route({
  amountIn: "1000",
  sourceAssetDenom: "0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48",
  sourceAssetChainID: "1",
  destAssetDenom: "0xaf88d065e77c8cC2239327C5EDb3A432268e5831",
  destAssetChainID: "42161",
  smartRelay: true,
  smartSwapOptions: {
    splitRoutes: true
  }
});
```

### 4. Get User Addresses for all Required Chains

Use the route to determine the chains for which you need to supply a user address (the source, destination, and all intermediate chains that require a recovery address for receiving tokens in case of a failure).

```TypeScript TypeScript
let userAddresses = [];
const requiredAddresses = route.requiredChainAddresses;
// iterate over chain IDs for chains that require addresses
for (const chainID of requiredAddresses) {
  // Check that the chain is an EVM chain (EVM chain IDs are numeric)
  if (parseInt(chainID)) {
    // use signer library to get address from wallet
    const chain = extractChain({
      chains: Object.values(chains),
      id: parseInt(chainID)
    });
    const evmWalletClient = createWalletClient({
      chain: chain,
      transport: custom(window.ethereum!)
    });
    const [address] = await evmWalletClient.requestAddresses();
    // add to the address list
    userAddresses.push({ address: address, chainID: chainID });
  } else {
    // handle cosmos and SVM wallets -- not shown
  }
}
```

### 5. Execute the Route using `SkipClient`

Finally, you can use `SkipClient.executeRoute` to prompt the user to sign the approval(s) and transaction, and submit the transaction on chain.

```TypeScript TypeScript
await skipClient.executeRoute({
  route: route,
  userAddresses: userAddresses
});
```

**Have questions or feedback? Help us get better!** Join [our Discord](https://discord.com/invite/interchain) and select the "Skip Go Developer" role to share your questions and feedback.

# Experimental Features Source: https://docs.cosmos.network/skip-go/advanced-transfer/experimental-features This page provides a living record of the features that can be turned on with the experimental_features flag

### Background

The `experimental_features` parameter on `/route` and `/msgs_direct` gives us a consistent mechanism for rolling out new features to the folks who want to adopt them ASAP in an "opt-in" fashion, without threatening to destabilize power users who might not want the new hotness yet. *(Of course, we avoid shipping changes that are technically breaking no matter what. But we've found that even additive changes can break some integrators.)*

We will probably auto-activate most features that we soft-launch in `experimental_features` within a few weeks or months of the initial launch -- especially when the feature makes a strict improvement to end-user UX (e.g. adding a new DEX or bridge).

The `experimental_features` parameter accepts an array of strings, where each string identifies a feature. The rest of this doc describes each experimental feature currently in prod, when the feature will become opt-out (rather than opt-in), and gives the feature's identifier.
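Mechanically, opting in just means adding the array of identifier strings to the request body. As a sketch, a REST route request enabling one feature might look like the following (route fields mirror the earlier route examples; the valid identifier strings are listed in the sections below):

```JSON JSON (REST API)
// POST /v2/fungible/route
{
  "amount_in": "1000000",
  "source_asset_denom": "uusdc",
  "source_asset_chain_id": "noble-1",
  "dest_asset_denom": "utia",
  "dest_asset_chain_id": "celestia",
  "cumulative_affiliate_fee_bps": "0",
  "allow_multi_tx": true,
  "experimental_features": ["stargate"]
}
```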
### Stargate (`"stargate"`)

**Description:** Support for routing over the Stargate V2 bridge on EVM chains, incl. Sei EVM

**Opt-out switch:** Stargate routing will auto-activate on March 1, 2025

**Identifier:** `"stargate"`

### Eureka (`"eureka"`)

**Description:** Support for routing over the IBC Eureka bridge, starting with a connection between Cosmos and Ethereum Mainnet.

**Opt-out switch:** Eureka routing will auto-activate on May 1, 2025

**Identifier:** `"eureka"`

### Layerzero (`"layer_zero"`)

**Description:** Improves cross-chain capabilities to support more assets such as USDT.

**Opt-out switch:** Layerzero routing was auto-activated on **March 1, 2025**.

**Identifier:** `"layer_zero"`

**Have questions or feedback? Help us get better!** Join [our Discord](https://discord.gg/interchain) and select the "Skip Go Developer" role to share your questions and feedback.

# Go Fast Source: https://docs.cosmos.network/skip-go/advanced-transfer/go-fast A brief overview of the Go Fast Transfer system

Read the whitepaper [here](https://skip-protocol.notion.site/EXT-Skip-Go-Fast-b30bc47ecc114871bc856184633b504b). Find integration details for Go Fast [here](../client/advanced-features#use-the-go-fast-transfer-system).

# Overview

Go Fast is a decentralized bridging protocol, built by Skip, designed to enable rapid and secure cross-chain transactions across major blockchain ecosystems such as Ethereum, Cosmos, and Solana. Go Fast accelerates cross-chain actions by up to 25 times, reducing onboarding times from 10+ minutes to seconds.

# How it Works

The Go Fast Protocol lets you quickly move assets and execute smart contracts between two different blockchains: the source chain and the destination chain. Here's a simple breakdown of how it all happens.

To start, you—the user—initiate a transfer by calling the `submitOrder` function on the protocol contract on your current blockchain (source chain).
In this step, you specify the assets, any message you want to send, and the address on the destination chain. This information is then broadcasted as an intent. Permissionless participants called solvers, who watch for these intents, ingest the event emitted from the Go Fast contracts. When they see the intent submitted, they evaluate whether they can fulfill the intent based on their resources on the destination chain and the potential reward for fulfilling it. If a solver agrees to fulfill the intent, they call the `fillOrder` function on the protocol contract deployed on the destination chain. This step transfers the specified assets and processes any additional actions, like executing a contract call with the provided message payload. From your perspective, the assets or messages appear on the destination chain almost instantly, marking the transfer as complete. After fulfilling the transfer, the solver seeks to recover the assets they fronted, plus any earned fees. They do this by calling the `initiateSettlement` function on the destination chain's Go Fast smart contract, listing the intents they fulfilled. The protocol verifies the solver's actions, then sends a secure message back to the source chain through a cross-chain messaging system. A relayer delivers this message to the source chain, where the settle function on the protocol contract verifies the solver's fulfillment of each intent. Once confirmed, the solver receives back the assets they provided and any earned fees on the source chain. # Can I become a solver? Yes! Go Fast is a permissionless protocol, so anybody can run a solver! **Open-Source Reference Solver Implementation:** To help you get started quickly, we've open-sourced a reference implementation of a solver that handles everything—from monitoring events to filling orders, settling transactions, and rebalancing. 
All you need to do is set up the config with the chains you want to solve for, provide node endpoints to use, and customize your capital and rebalancing preferences. Check out the repo [here](https://github.com/skip-mev/skip-go-fast-solver).

**Open-Source Protocol Contracts:** Although we recommend starting with the open-source solver, ambitious solvers are already modifying the reference implementation or developing their own solving systems. If you fall under this category, another useful resource will be our open-source Solidity and CosmWasm protocol contracts to integrate directly. You can find them [here](https://github.com/skip-mev/go-fast-contracts).

If you have any questions about setting up a solver, please don't hesitate to reach out to us!

# What chains are supported today?

Currently, Go Fast supports the following source chains:

1. Ethereum Mainnet
2. Arbitrum
3. Avalanche
4. Base
5. Optimism
6. Polygon

And the following destination chains:

1. Any IBC-connected chain supported by Skip Go

# What are the minimum and maximum transfer sizes for Go Fast?

Below is a table summarizing the minimum and maximum transfer sizes for each chain currently supported by Go Fast.

| Source Chain | Minimum Transfer Size (in USD) | Maximum Transfer Size (in USD) |
| -------------------- | ------------------------------ | ------------------------------ |
| **Ethereum Mainnet** | 20 | 25,000 |
| **Arbitrum** | 10 | 25,000 |
| **Avalanche** | 10 | 25,000 |
| **Base** | 10 | 25,000 |
| **Optimism** | 10 | 25,000 |
| **Polygon** | 10 | 25,000 |

Note: If a user is starting from an asset that is not USDC on the source chain, Skip Go will swap the asset to USDC on the source chain, and the post-swap amount is used to check whether it is within the min/max bounds of Go Fast transfer sizes.

### dYdX Volume Limits

When transferring to dYdX via Go Fast, there is a $100,000 USD maximum transfer size.

# What is the fee model for Go Fast?
Go Fast works by having solvers use their own capital to fill orders as quickly as possible, where the solvers take on the re-org risk of the source chain. The Go Fast protocol compensates solvers by paying them a fee (denoted by the difference in the input and output amount of the user's order).

The fee is composed of three parts: a basis-points fee on the transfer size, a source gas fee, and a destination gas fee. Currently, the basis-points fee on transfer size paid to solvers is 10 basis points across all transfer sizes. As the protocol evolves, the basis-points fee charged is expected to decrease as transfer size increases. The source and destination gas fees are determined dynamically based on the source and destination chains and their current gas costs, optimized to minimize costs while covering settlement and rebalancing for solvers.

# How do I integrate Go Fast into my application?

For instructions on integrating Go Fast using the `@skip-go/client`, see the [Advanced Features guide](../client/advanced-features#use-the-go-fast-transfer-system). If you're using the Widget, refer to the [Widget Configuration](../widget/configuration#routeconfig). Note that enabling Go Fast prioritizes speed over lower fees. For more cost-efficient routes, it's recommended to leave Go Fast disabled.

# Cross-chain Failure Cases Source: https://docs.cosmos.network/skip-go/advanced-transfer/handling-cross-chain-failure-cases This page covers the different ways our cross-chain swaps + transfers might fail to help identify failures and manage user expectations

## Failures during IBC Transfers and Swaps

There are two types of IBC failures that may occur when a user attempts to traverse a swap / transfer route produced by the Skip Go API.

1. **Pre-Swap / swap failures**
   * **What:** These are failures in the sequence of ICS-20 transfers leading up to the swap or a failure in the swap itself (usually due to slippage).
   * **Outcome / What to Expect:** The user's original source tokens are returned to their starting address on the source chain
   * **Common causes:**
     * Inactive relayers on a channel allow a packet to timeout
     * Slippage (the amount out for the swap turns out to be less than the user's specified minimum, i.e. their slippage exceeds their tolerance)
     * The user / frontend provides an invalid recovery address
     * An IBC client on the destination chain has expired
   * **Examples:** Consider a route where the source asset is ATOM on Neutron, the destination asset is STRIDE on Stride, and the swap takes place on Osmosis:
     * The user's tokens transfer from Neutron to the Hub to Osmosis. The swap initiates, but the price of STRIDE has gotten so high that the swap exceeds slippage tolerance and fails. A sequence of error acks is propagated back to the Hub, then Neutron, releasing the user's ATOM to their address on Neutron where they started
     * The user attempts to transfer tokens from Neutron to the Hub, but the packet isn't picked up by a relayer for more than 5 minutes (past the `timeout_timestamp`). When a relayer finally comes online, it relays a timeout message to Neutron, releasing the user's ATOM back to their address on Neutron where they first had it.
   * **For transfer-only routes:** This is the only kind of failure that may happen on a route that only contains transfers. Either the user's tokens will reach their destination chain as intended, or they will wind up with the same tokens, on the same chain where they started.

   In a pre-swap or swap-related error, the user will end up with the same tokens they started with on their initial chain (e.g. ATOM on Neutron in this example)

2. **Post-swap failures:**
   * **Description**: These are failures that occur on the sequence of transfers between the swap venue chain and the user's destination chain, after the user's origin tokens have already been successfully swapped for their desired destination asset.
* **Outcome / What to Expect**: The user's newly purchased destination asset tokens will be transferred to their address on the swap chain. (This is the address passed to `chains_to_addresses` in `/fungible/msgs` for the chain where the swap takes place, which is given by `swap_venue.chain_id` in the response from `/fungible/route`) * **Common failure sources:** * Inactive relayers on a channel allow a packet to timeout * The user / frontend provides an invalid address for the destination chain * An IBC client on the destination chain has expired * **Examples:** Consider a route where the source asset is ATOM on Neutron, the destination asset is STRIDE on Stride, and the swap takes place on Osmosis: * Suppose the swap took place and the transfer to Stride has been initiated, but the Relayer between Osmosis and Stride is down. So the packet’s timeout occurs after 5 minutes. When the Relayer comes back online after 8 minutes, it relays a timeout message to Osmosis, releasing the user’s STRIDE, which gets forwarded to their Osmosis address In a post-swap error, the user will end up with their destination asset tokens in their address on the chain where the swap took place (e.g. STRIDE on Osmosis in this example) ## Axelar Failures Axelar transfers can be tracked on [Axelarscan](https://axelarscan.io/). Often, Axelar transfers are delayed by Axelar's relayer or execution services. If a transaction is taking longer than expected, users can visit Axelarscan, find their transaction, and manually execute the steps needed to get the transfer through. See the [Axelar docs](https://docs.axelar.dev/dev/general-message-passing/recovery) for details on how to use Axelarscan. Internally, the Skip Go API may use Axelar's General Message Passing service to move assets between EVM and Cosmos. There are similar failure modes for Axelar as there are for IBC: 1. **Swap failures** * **What:** Axelar GMP takes user assets from an EVM chain to the swap chain. 
The swap can still fail at this point due to a timeout or slippage. * **Outcome / What to Expect:** The user receives the Axelar-transferred token on the chain where the swap was supposed to take place at their recovery address. (Note this is different from the IBC swap failure case where the user receives the swap token back on the source chain) * **Common failure sources:** * Slow relaying from Axelar causes a timeout, and the swap is not attempted. * Slippage (the amount out for the swap turns out to be less than the user's specified minimum, i.e. their slippage exceeds their tolerance) 2. **Post-swap failures** * Once the swap is executed, Axelar is no longer involved, and the same rules that apply to IBC post-swap failures apply here, so the **Post-swap failures** section above applies. ## CCTP Failures Routes that use CCTP transfers rely on Circle to produce attestations. The Circle attestation service waits for a specified number of on-chain block confirmations before producing an attestation. The number of block confirmations required is specified by Circle in their documentation [here](https://developers.circle.com/stablecoins/docs/required-block-confirmations). If Circle's attestation service experiences an outage, malfunction, or otherwise becomes unresponsive, CCTP transfers will continue to burn assets on the source chain, but will not be able to mint assets on the destination chain. In this case, funds that have been burned to initiate a CCTP transfer will be inaccessible until the Circle attestation service recovers. ## Hyperlane Failures Each Hyperlane token transfer route is secured by an Interchain Security Module (ISM) designated by the deployer of the Hyperlane Warp Route Contracts (the interface to send tokens across chains using Hyperlane). The ISM defines the requirements for a message to be successfully processed on the destination chain. 
The most common ISM is a Multisig ISM where "Validators" of a specific Hyperlane route sign attestations that a specific message on an origin chain is a valid message to be processed on the destination chain. In the case where the set of Validators have not hit the required signature threshold to successfully process a Hyperlane message on the receiving chain, funds will not be accessible by the user on either chain until the threshold is met (once met, funds will be sent to the user on the destination chain). This generalizes to the different requirements for different ISMs. The Hyperlane documentation explains the different types of ISMs in more detail: [https://docs.hyperlane.xyz/docs/reference/ISM/specify-your-ISM](https://docs.hyperlane.xyz/docs/reference/ISM/specify-your-ISM) ## Go Fast Failures If a transfer timeout occurs, meaning a user's intent does not receive a response from solvers within a predefined time frame, the solver initiates a refund process to ensure that users do not lose funds. Here's a breakdown of what happens in the event of a timeout: 1. Intent Expiration: When a user initiates an intent by calling the `submitOrder` function on the source chain, a time limit is specified. Solvers monitor the intent and assess whether they can fulfill it within this period. If no solver fills the intent before the timeout, the refund process begins. 2. Refunds: Once the timeout period is reached without fulfillment, the solver calls a function on the contract to trigger a refund process. This is handled on-chain, and includes any fees initially allocated from the user for solver compensation. **Failures might occur for each transaction in a multi-tx sequence** In the event of a multi-tx route, each transaction may suffer from the kinds of failures noted above. This means it's technically inaccurate to say that tokens will always end up on the initial chain or the chain where the swap takes place. 
More accurately, tokens may end up on each chain where a transaction is initiated or the chain where the swap takes place. For instance, if a pre-swap failure takes place on the second transaction in a sequence, the tokens will end up on the chain that transaction targeted. In our example above, if the transfer from Cosmos Hub to Osmosis required a separate user transaction and the Neutron to Hub leg of the route succeeded in the first transaction, the ATOM tokens would end up in the user's account on the Hub if the swap exceeds maximum slippage. **We're working to make these failures even less common** * In the short term, we're working to add packet tracking + live relayer + client status to the API to help identify when packets get stuck and prevent folks from using channels where they're likely to get stuck in the first place * In the medium term, we are working to add priority multi-hop relaying into the API. * In the long term, we're working to build better incentives for relaying, so relayers don't need to run as charities. (Relayers do not receive fees or payment of any kind today and subsidize gas for users cross-chain) **Have questions or feedback? Help us get better!** Join [our Discord](https://discord.com/invite/interchain) and select the "Skip Go Developer" role to share your questions and feedback. # IBC Token Routing: Problem + Skip Go API Routing Algorithm Source: https://docs.cosmos.network/skip-go/advanced-transfer/ibc-routing-algorithm This page describes the IBC token routing problem and the algorithm Skip Go API uses to select / recommend token denoms and IBC paths **tl;dr** The routing problem: 1. IBC tags assets based on the sequence of IBC channels they have been transferred over, so the same asset transferred over two different paths will have two different denoms 2. Usually, there's only 1 "correct" (i.e. highly liquid) version of each asset on each chain (and frequently there are none) Skip Go API solves this problem by: 1. 
Sending assets to their origin chain 2. Finding the shortest path from the origin chain to the destination chain, and using the most liquid path when there are multiple distinct shortest paths. 3. Staying flexible to handle unusual exceptions ## Routing Problem ### IBC Tokens Get Their Names & Identities from Their Paths IBC transfers data over "channels" that connect two chains. Channels are identified by human-readable port names (e.g. "transfer") and channel IDs (e.g. channel-1). For example, consider a transfer channel between Terra2 and Axelar: *Notice that both chains maintain their own channel IDs for the channel, which might not be the same. As an analogy, you might think of the different chains as cities and the channel as a road connecting them. IBC packets are cars driving across the road* When transferring a fungible token from one chain to another over a channel, the denomination of the token on the destination chain is uniquely and predictably determined by the source denomination + the channel(s) over which the token was transferred.
Specifically, the denomination algorithm is:

```text Naming Algorithm
ibc_denom = 'ibc/' + hash(path + '/' + base_denom)
```

*`hash` is typically the sha256 hash function* Continuing the example from above, the denom of this version of WETH.axl on Terra2 is:

```text axlWETH on Terra2
axlweth_on_terra2_denom = 'ibc/' + hash('transfer/channel-6/weth-wei')
axlweth_on_terra2_denom = 'ibc/BC8A77AFBD872FDC32A348D3FB10CC09277C266CFE52081DE341C7EC6752E674'
```

### So Different Paths Produce Different Tokens Now that you understand that IBC denoms get their names from their paths, you understand the crux of the routing problem: **The same asset transferred to the same destination over two different paths will have different denominations.** Continuing the example from above, WETH.axl transferred directly from Axelar to Terra2 will have a different denom than WETH.axl transferred through Osmosis: To make matters worse, multiple channels can exist between the same two chains (IBC is permissionless, after all), and IBC uses channel identifiers, not chain identifiers, to construct denoms. That means two different versions of the same asset will exist on the destination chain even when tokens are transferred from the same source chain, if they're transferred over two different channels: **Why don't we just consider them equivalent anyway and move on?** Some folks who don’t work with bridges on a regular basis view this path tagging as a bug, or might think we should just consider these different versions of the same asset as fungible anyway. But that's not advisable! The route-based denom construction is a critical security feature because the chain where the token has been transferred to is effectively trusting the validator set of the chain from which the token was transferred.
Applied to the example here, this trust model means using the purple version of WETH.axl implies trusting the Osmosis validator set AND the Axelar validator set, while using the blue version of WETH.axl only requires trusting the Axelar validator set. ### There are many paths between two chains, but usually only 1 useful version of each asset Right now, there are about 70 IBC-enabled chains. At least one channel exists between almost every pair in this set. This dense graph of channels contains a very large number of different paths between almost any two chains, which creates many opportunities for "token winding" or "mis-pathing", where a user sends an asset through a suboptimal path of channels from one chain to another and ends up with an illiquid / unusable version of their token. Mis-pathing almost always produces a practically useless + illiquid version of the token on the destination chain because there's usually only 1 useful version of a given asset on a given destination chain (if that). (There are over 50 versions of ATOM on JUNO!) As a result, we need to be very careful to send the user through the correct sequence of channels. The next section explains our token routing algorithm. ## Routing Algorithm **Insight about routing: The correct route depends on the chains + the asset in question** Notice that the correct route for a particular asset A from a particular chain Chain-1 to another chain Chain-2 depends not only on the channels that exist between Chain-1 and Chain-2, but also on what asset A is. This is because asset A is defined by its path from its origin. Consider the following two cases: * If asset A is native to Chain-1, perhaps it can be routed directly over a channel to Chain-2. This would yield a simple asset given by path of Chain-1->Chain-2 * If asset A originated on another chain (e.g. Chain-3), it's very unlikely that transferring directly over a channel to Chain-2 would give the correct version of asset A on Chain-2.
This would yield a more complex denom given by path of Chain-3->Chain-1->Chain-2, which is probably wrong if Chain-3 and Chain-2 are directly connected. * Instead, the asset should probably be routed back through the channel that connects Chain-1 to Chain-3 first, then sent over the channel to Chain-2. This yields a path of Chain-1->Chain-3->Chain-2, and a final denom given by the path Chain-3->Chain-2 Ultimately, we use a very simple routing algorithm: 1. Route the given asset back to origin chain (i.e. "unwind" it) 2. Check whether any *high-priority* manual overrides exist for the given asset on the given destination chain. If so, recommend the path from the source that produces this *high-priority* version of the asset 3. If no *high priority* manual overrides exist: 1. If at least 1 single-hop path to the destination chain exists (i.e. if origin chain and destination chain are directly connected over IBC), recommend the most liquid direct path. 2. If no direct path exists (or if the client on the direct path is expired, or none of the asset has been transferred over the direct path), do not recommend any asset A few notes about our data collection: * We run nodes for every supported chain to ensure we always have low-latency access to high quality data * We index client + channel status of every channel + client on all the chains we support every couple of hours to ensure we never recommend a path that relayers have abandoned or forgotten about. * We index the liquidity of every token transferred over every channel on every Cosmos chain every few hours to ensure our liquidity data is up to date. 
And we closely monitor anomalous, short-term liquidity movements to prevent attacks. # Interpreting Transaction & Transfer Status Source: https://docs.cosmos.network/skip-go/advanced-transfer/interpreting-transaction-status Learn how to interpret the status of transactions and individual transfer steps from the Skip Go API's /v2/tx/status endpoint to provide accurate feedback to users. The fields and logic discussed in this guide pertain to the JSON response object from the `/v2/tx/status` endpoint of the Skip Go API. Refer to the [API Reference](/skip-go/api-reference/prod/transaction/get-v2txstatus) for detailed schema information. Understanding the status of a cross-chain transaction and its individual steps is crucial for building a clear and reliable user experience. This guide explains how to interpret the relevant fields from the `/v2/tx/status` API response to determine if a transaction (and each of its constituent transfers) is pending, successful, has encountered an error, or has been abandoned. This approach is useful for driving UI elements such as progress indicators, status messages, and error displays in your application. ### Example `/v2/tx/status` Response Below is an example of a JSON response from the `/v2/tx/status` endpoint, illustrating some of the key fields discussed in this guide:

```json
{
  "state": "STATE_COMPLETED_SUCCESS",
  "transfer_sequence": [
    {
      "cctp_transfer": {
        "from_chain_id": "42161",
        "to_chain_id": "8453",
        "state": "CCTP_TRANSFER_RECEIVED",
        "txs": {
          "send_tx": {
            "chain_id": "42161",
            "tx_hash": "0xYOUR_SEND_TRANSACTION_HASH_HERE_...",
            "explorer_link": "https://arbiscan.io/tx/0xYOUR_SEND_TRANSACTION_HASH_HERE_..."
          },
          "receive_tx": {
            "chain_id": "8453",
            "tx_hash": "0xYOUR_RECEIVE_TRANSACTION_HASH_HERE_...",
            "explorer_link": "https://basescan.org/tx/0xYOUR_RECEIVE_TRANSACTION_HASH_HERE_..."
          }
        },
        "src_chain_id": "42161",
        "dst_chain_id": "8453"
      }
    }
  ],
  "next_blocking_transfer": null,
  "transfer_asset_release": {
    "chain_id": "8453",
    "denom": "0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913",
    "amount": "YOUR_EXAMPLE_AMOUNT",
    "released": true
  },
  "error": null
}
```

## Core Concepts for Status Interpretation The logic relies on a few key pieces of information typically available in the `/v2/tx/status` API response object: 1. **The Overall Transaction Status:** This provides a high-level view of the entire multi-step operation. * Look at the top-level `state` field in the response. * Possible values include: * `'STATE_COMPLETED_SUCCESS'`: The entire transaction finished successfully. * `'STATE_COMPLETED_ERROR'`: The transaction finished, but an error occurred. * `'STATE_ABANDONED'`: The transaction was abandoned (e.g., due to timeout or user action). * If the state is not one of these terminal states, it's generally assumed to be pending or in progress. 2. **The Next Blocking Step (or Failure Point):** This indicates which specific transfer in the sequence is currently active, or which one caused a failure or abandonment. * Utilize the `next_blocking_transfer.transfer_sequence_index` field (if `next_blocking_transfer` exists in the response). * This will be an index pointing to an operation within the top-level `transfer_sequence` array. 3. **Categorizing Each Operation in the Sequence:** For each operation within the `transfer_sequence` array of the response, you can determine its individual status: * **Loading/Pending:** * The operation's index matches the `next_blocking_transfer.transfer_sequence_index` (if `next_blocking_transfer` exists). * AND the overall transaction is still in progress (i.e., the top-level `state` is not `STATE_COMPLETED_ERROR` or `STATE_ABANDONED`).
* **Error/Failed/Abandoned:** * The operation's index matches the `next_blocking_transfer.transfer_sequence_index` (when `next_blocking_transfer` exists; it is typically `null` once the overall `state` is terminal, e.g., `STATE_COMPLETED_SUCCESS`, `STATE_COMPLETED_ERROR`, or `STATE_ABANDONED`, since the transaction is no longer actively blocked or has finished). * AND the overall transaction state (the top-level `state` field) is `STATE_COMPLETED_ERROR` or `STATE_ABANDONED`. * Additionally, if the overall transaction state is `STATE_COMPLETED_ERROR` and this is the *last* operation in the sequence, it is also considered to be in an error state. * **Note:** If the overall transaction `state` is `STATE_COMPLETED_ERROR`, it's also crucial to inspect the specific `error` object within each individual transfer step in the `transfer_sequence` (e.g., `step.ibc_transfer.packet_txs.error`, `step.cctp_transfer.error`, etc.). This helps pinpoint the exact leg(s) that encountered issues, as `next_blocking_transfer` may be `null` when the transaction reached a terminal error state rather than getting stuck midway. * **Success:** * If the overall transaction state (top-level `state` field) is `STATE_COMPLETED_SUCCESS`, then all operations in the sequence are considered successful. * If the overall transaction is still in progress, or has failed/been abandoned, any operation *before* the `next_blocking_transfer.transfer_sequence_index` (when `next_blocking_transfer` exists; see the note above) is assumed to have completed successfully. ## Example Implementation Logic (JavaScript) The following JavaScript snippet demonstrates how these concepts can be translated into code to determine the status of each step.
```javascript
// Assume 'transfer' is the main transaction object from our API
// and 'totalSteps' is the length of transfer.transfer_sequence

// 1. Determine overall transaction status from the main transaction object
const isTransactionSuccessful = transfer.state === 'STATE_COMPLETED_SUCCESS';
const isTransactionFailed = transfer.state === 'STATE_COMPLETED_ERROR';
const isTransactionAbandoned = transfer.state === 'STATE_ABANDONED';

// 2. Get the index of the step that is currently blocking progress or has failed
const nextBlockingIndex = transfer.next_blocking_transfer?.transfer_sequence_index;

// Then, when we process each 'step' in the 'transfer.transfer_sequence' at a given 'index':
// (This logic would typically be inside a loop, e.g., transfer.transfer_sequence.forEach((step, index) => { ... }))

// 3. Categorize the current step:

// Is this step currently "pending" (loading)?
const isStepPending =
  index === nextBlockingIndex && !isTransactionFailed && !isTransactionAbandoned;

// Is this step in an "error" state (or part of an abandoned flow)?
// 'totalSteps' would be transfer.transfer_sequence.length
const isStepAbandonedOrFailed =
  (isTransactionAbandoned || isTransactionFailed) &&
  (index === nextBlockingIndex || (index === totalSteps - 1 && isTransactionFailed));

// If 'isTransactionSuccessful' is true, this step is part of an overall successful transaction.
// If a step isn't 'isStepPending' and isn't 'isStepAbandonedOrFailed',
// and its 'index < nextBlockingIndex' (for an ongoing or failed tx), it's also implicitly a success.

// These boolean flags (isStepPending, isStepAbandonedOrFailed, isTransactionSuccessful)
// are then used to drive the UI styling for that specific step (e.g., node color, edge animation, icons).
// For example:
// if (isStepAbandonedOrFailed) { /* show error UI */ }
// else if (isStepPending) { /* show loading UI */ }
// else { /* show success UI (either part of overall success, or completed before a pending/error state) */ }
```

By implementing logic based on these fields and states, you can provide users with accurate and timely feedback on the progress of their cross-chain transactions. ## Understanding Asset Release (`transfer_asset_release`) The `transfer_asset_release` object in the `/v2/tx/status` response provides crucial information about where the user's assets have ultimately landed or are expected to be claimable, especially in scenarios involving swaps or complex routes. Key fields include: * `chain_id`: The chain ID where the assets are located. * `denom`: The denomination (asset identifier) of the released assets. * `amount`: The quantity of the released assets. * `released`: A boolean indicating if the assets have been definitively released to the user (e.g., available in their wallet or claimable). If `false`, it might indicate that assets are still in a contract or awaiting a final step for release. In the event of a cross-chain swap failure, the `transfer_asset_release` field is particularly important for determining the location and state of the user's funds. For a comprehensive understanding of how assets are handled in different failure scenarios (e.g., pre-swap vs. post-swap failures), please refer to our detailed guide on [Handling Cross-Chain Failure Cases](/skip-go/advanced-transfer/handling-cross-chain-failure-cases). # SVM Transactions Source: https://docs.cosmos.network/skip-go/advanced-transfer/svm-transaction-details This document explains how to use Skip Go API and Client TypeScript Package to construct SVM transactions. ## Intro * When a user needs to transfer or swap from an SVM chain (e.g.
Solana), the Skip Go API will return an `SvmTx` type for the developer to pass to the user for signing * This doc is intended for Cosmos SDK and EVM developers who aren't already familiar with the concepts of transaction construction in the SVM and need to use `SvmTx` to help their users move from/to Solana and other SVM chains. * **Due to the difficulty of landing Solana transactions on chain during times of high network congestion, we HIGHLY recommend using the `/submit` endpoint to avoid dealing with complex retry logic and/or multiple RPC providers for submission reliability. Skip Go API's `/submit` endpoint implements best practices for Solana transaction submission for you!** ## Interact with Solana Wallets We recommend using [@solana/wallet-adapter](https://github.com/anza-xyz/wallet-adapter#readme) to interact with Solana wallets and build transactions. It provides a standardized `Adapter` object that wraps all major Solana wallets (e.g. Phantom, Backpack, etc.), as well as visual React components for wallet selection. See [here](https://github.com/anza-xyz/wallet-adapter/blob/master/PACKAGES.md) for all the supported wallets.
## Set up the `SkipClient` to use a Solana wallet All you need to do is set the `getSVMSigner` method in the `SkipClient` options so it returns the `@solana/wallet-adapter` adapter for the user's connected wallet of choice:

```typescript TypeScript
import { useWallet } from "@solana/wallet-adapter-react";
import { PhantomWalletName } from "@solana/wallet-adapter-phantom";
import { SkipClient } from '@skip-go/client';

const { wallets } = useWallet();

const skipClient = new SkipClient({
  getSVMSigner: async (chainID) => {
    const solanaWallet = wallets.find((w) => w.adapter.name === PhantomWalletName);
    return solanaWallet;
  }
});
```

After this point, you can use `route`, `executeRoute`, and the other methods of `SkipClient` as you normally would. The rest of these docs cover the underlying details of the data structures, in case you need them. ## Understand `SvmTx` Data Structure The `SvmTx` has 2 fields that the developer needs to understand: * `chain_id`: The ID of the chain that this transaction should be submitted to * `tx`: The base64-encoded bytes of the transaction you should have the end user sign. ### Info on `SvmTx.tx` This is a fully constructed transaction. You don't need to change it or add anything to it to prepare it for signing. You just need to sign it and have the user submit it on chain within about 1 minute (or the nonce will expire).
For more detail, the transaction already includes: * User's nonce (In Solana, this is actually a recent blockhash by default) * Instructions (Solana's equivalent to messages) * Base transaction fees (Set to the default of 500 lamports per signature) * Priority fees (More info on how we set these below) For more information on transactions, check out Solana's official [docs](https://solana.com/docs/core/transactions) #### Signing `tx` [Skip Go Client](https://www.npmjs.com/package/@skip-go/client) takes care of all of the complexity of signing the transaction that gets returned in `SvmTx.tx`. You just need to have set the `getSVMSigner` method in the `SkipClientOptions` object in the `SkipClient` constructor, then use `executeRoute` or `executeTxs`. #### How Priority Fees are Set Solana "priority fees" affect how likely it is a transaction gets included in a block. Unlike on many other major blockchain networks, Solana's priority fees are evaluated "locally". In other words, the size of the fee is compared to other transactions that access the same pieces of state (e.g. the same DEX pool, the same token contract, etc.): * If the fee is low relative to other transactions that access the same state, the transaction is unlikely to get included. * If it's high relative to these other transactions accessing similar state, it's likely to be included As a result, transactions that touch "congested" or "popular" state will be the most expensive. At this time, we are setting priority fees to match the 90th percentile of priority fees for the "wif" pool state on Jupiter, which we believe is highly congested state. This is a very conservative setting, but even with these "high amounts", fees are still fractions of a cent. **Have questions or feedback? Help us get better!** Join [our Discord](https://discord.com/invite/interchain) and select the "Skip Go Developer" role to share your questions and feedback.
# API Error Codes & Status Messages Source: https://docs.cosmos.network/skip-go/api-reference/error-codes Reference for error codes and status messages returned by the Skip Go API, including transaction, bridge, and packet-specific statuses. This document provides a reference for the various error codes and status messages you may encounter when using Skip Go's APIs. Understanding these codes is essential for handling API responses correctly and providing accurate feedback to users. ## Transaction Status Codes These codes represent the overall status of a transaction being processed or tracked by the Skip Go API.

| Status | Description | Notes |
| --- | --- | --- |
| `STATE_UNKNOWN` | The status of the transaction is unknown. | Should be treated as an **error state**. |
| `STATE_SUBMITTED` | The transaction has been submitted to the blockchain network. | |
| `STATE_PENDING` | The transaction is in progress and waiting to be processed. | |
| `STATE_RECEIVED` | The transaction has been received by the blockchain network. | |
| `STATE_COMPLETED` | (Internal transitional state) | This state is resolved to `STATE_COMPLETED_SUCCESS` or `STATE_COMPLETED_ERROR` before API response. |
| `STATE_ABANDONED` | The transaction was abandoned, often due to timeout (e.g., IBC relayer). | Tracking timed out. Not necessarily a permanent transaction failure; may be retriable. |
| `STATE_COMPLETED_SUCCESS` | The transaction has completed successfully. | This is the **only** status that definitively indicates successful end-to-end transaction completion. |
| `STATE_COMPLETED_ERROR` | The transaction has completed but encountered errors. | |
| `STATE_PENDING_ERROR` | The transaction encountered errors while being processed. | |

## General Error Types These error types provide a broad categorization of issues.

| Error Type | Description |
| --- | --- |
| `STATUS_ERROR_UNKNOWN` | Unknown error type. |
| `STATUS_ERROR_TRANSACTION_EXECUTION` | Error occurred during the execution of the transaction on the blockchain. |
| `STATUS_ERROR_INDEXING` | Error occurred during Skip's internal processing or indexing. |
| `STATUS_ERROR_TRANSFER` | Error related to a specific transfer operation within the transaction. |

## Bridge Protocol Status Codes Different bridge protocols have their own specific status codes that reflect the state of a transfer leg using that bridge. ### IBC Transfer Status

| Status | Description |
| --- | --- |
| `TRANSFER_UNKNOWN` | The status of the IBC transfer is unknown. |
| `TRANSFER_PENDING` | The IBC transfer is in progress. |
| `TRANSFER_RECEIVED` | The IBC transfer has been received by the destination chain. |
| `TRANSFER_SUCCESS` | The IBC transfer has completed successfully. |
| `TRANSFER_FAILURE` | The IBC transfer has failed. |

### Axelar Transfer Status

| Status | Description |
| --- | --- |
| `AXELAR_TRANSFER_UNKNOWN` | The status of the Axelar transfer is unknown. |
| `AXELAR_TRANSFER_PENDING_CONFIRMATION` | The Axelar transfer is waiting for confirmation. |
| `AXELAR_TRANSFER_PENDING_RECEIPT` | The Axelar transfer is waiting for receipt. |
| `AXELAR_TRANSFER_SUCCESS` | The Axelar transfer has completed successfully. |
| `AXELAR_TRANSFER_FAILURE` | The Axelar transfer has failed. |

### CCTP Transfer Status

| Status | Description |
| --- | --- |
| `CCTP_TRANSFER_UNKNOWN` | The status of the CCTP transfer is unknown. |
| `CCTP_TRANSFER_SENT` | The CCTP transfer has been sent. |
| `CCTP_TRANSFER_PENDING_CONFIRMATION` | The CCTP transfer is waiting for confirmation. |
| `CCTP_TRANSFER_CONFIRMED` | The CCTP transfer has been confirmed. |
| `CCTP_TRANSFER_RECEIVED` | The CCTP transfer has been received at the destination. |

### GoFast Transfer Status

| Status | Description |
| --- | --- |
| `GO_FAST_TRANSFER_UNKNOWN` | The status of the GoFast transfer is unknown. |
| `GO_FAST_TRANSFER_SENT` | The GoFast transfer has been sent. |
| `GO_FAST_POST_ACTION_FAILED` | The post-transfer action on the GoFast transfer has failed. |
| `GO_FAST_TRANSFER_TIMEOUT` | The GoFast transfer has timed out. |
| `GO_FAST_TRANSFER_FILLED` | The GoFast transfer has been filled. |
| `GO_FAST_TRANSFER_REFUNDED` | The GoFast transfer has been refunded. |

### LayerZero Transfer Status

| Status | Description |
| --- | --- |
| `LAYER_ZERO_TRANSFER_UNKNOWN` | The status of the LayerZero transfer is unknown. |
| `LAYER_ZERO_TRANSFER_SENT` | The LayerZero transfer has been sent. |
| `LAYER_ZERO_TRANSFER_RECEIVED` | The LayerZero transfer has been received. |
| `LAYER_ZERO_TRANSFER_FAILED` | The LayerZero transfer has failed. |

### Hyperlane Transfer Status

| Status | Description |
| --- | --- |
| `HYPERLANE_TRANSFER_UNKNOWN` | The status of the Hyperlane transfer is unknown. |
| `HYPERLANE_TRANSFER_SENT` | The Hyperlane transfer has been sent. |
| `HYPERLANE_TRANSFER_FAILED` | The Hyperlane transfer has failed. |
| `HYPERLANE_TRANSFER_RECEIVED` | The Hyperlane transfer has been received. |

### OPInit Transfer Status

| Status | Description |
| --- | --- |
| `OPINIT_TRANSFER_UNKNOWN` | The status of the OPInit transfer is unknown.
| | `OPINIT_TRANSFER_SENT` | The OPInit transfer has been sent. | | `OPINIT_TRANSFER_RECEIVED` | The OPInit transfer has been received. | ### Stargate Transfer Status | Status | Description | | ---------------------------- | ----------------------------------------------- | | `STARGATE_TRANSFER_UNKNOWN` | The status of the Stargate transfer is unknown. | | `STARGATE_TRANSFER_SENT` | The Stargate transfer has been sent. | | `STARGATE_TRANSFER_RECEIVED` | The Stargate transfer has been received. | | `STARGATE_TRANSFER_FAILED` | The Stargate transfer has failed. | ### Eureka Transfer Status The Eureka transfer status codes are identical to the IBC transfer status codes. | Status | Description | | ------------------- | --------------------------------------------------------------- | | `TRANSFER_UNKNOWN` | The status of the Eureka transfer is unknown. | | `TRANSFER_PENDING` | The Eureka transfer is in progress. | | `TRANSFER_RECEIVED` | The Eureka transfer has been received by the destination chain. | | `TRANSFER_SUCCESS` | The Eureka transfer has completed successfully. | | `TRANSFER_FAILURE` | The Eureka transfer has failed. | ## Additional Error Types ### Packet Error Types When dealing with IBC packets, you may see these error types: | Error Type | Description | | ------------------------------ | -------------------------------- | | `PACKET_ERROR_UNKNOWN` | Unknown packet error. | | `PACKET_ERROR_ACKNOWLEDGEMENT` | Error in packet acknowledgement. | | `PACKET_ERROR_TIMEOUT` | Packet timed out. | ### Bridge-Specific Error Types | Error Type | Description | | ------------------------------------------ | ------------------------------------------------------- | | `SEND_TOKEN_EXECUTION_ERROR` | Error during Axelar send token execution. | | `CONTRACT_CALL_WITH_TOKEN_EXECUTION_ERROR` | Error during Axelar contract call with token execution. 
| # Get /v2/fungible/assets Source: https://docs.cosmos.network/skip-go/api-reference/prod/fungible/get-v2fungibleassets swagger get /v2/fungible/assets Get supported assets. Optionally limit to assets on a given chain and/or native assets. # Get /v2/fungible/venues Source: https://docs.cosmos.network/skip-go/api-reference/prod/fungible/get-v2fungiblevenues swagger get /v2/fungible/venues Get supported swap venues. # Post /v2/fungible/assets_between_chains Source: https://docs.cosmos.network/skip-go/api-reference/prod/fungible/post-v2fungibleassets_between_chains swagger post /v2/fungible/assets_between_chains Given 2 chain IDs, returns a list of equivalent assets that can be transferred # Post /v2/fungible/assets_from_source Source: https://docs.cosmos.network/skip-go/api-reference/prod/fungible/post-v2fungibleassets_from_source swagger post /v2/fungible/assets_from_source Get assets that can be reached from a source via transfers under different conditions (e.g. single vs multiple txs) # Post /v2/fungible/ibc_origin_assets Source: https://docs.cosmos.network/skip-go/api-reference/prod/fungible/post-v2fungibleibc_origin_assets swagger post /v2/fungible/ibc_origin_assets Get origin assets from a given list of denoms and chain IDs. # Post /v2/fungible/msgs Source: https://docs.cosmos.network/skip-go/api-reference/prod/fungible/post-v2fungiblemsgs swagger post /v2/fungible/msgs This supports cross-chain actions among EVM chains, Cosmos chains, and between them. Returns minimal number of messages required to execute a multi-chain swap or transfer. Input consists of the output of route with additional information required for message construction (e.g. destination addresses for each chain) # Post /v2/fungible/msgs_direct Source: https://docs.cosmos.network/skip-go/api-reference/prod/fungible/post-v2fungiblemsgs_direct swagger post /v2/fungible/msgs_direct This supports cross-chain actions among EVM chains, Cosmos chains, and between them. 
Returns minimal number of messages required to execute a multi-chain swap or transfer. This is a convenience endpoint that combines /route and /msgs into a single call. # Post /v2/fungible/route Source: https://docs.cosmos.network/skip-go/api-reference/prod/fungible/post-v2fungibleroute swagger post /v2/fungible/route This supports cross-chain actions among EVM chains, Cosmos chains, and between them. Returns the sequence of transfers and/or swaps to reach the given destination asset from the given source asset, along with estimated amount out. Commonly called before /msgs to generate route info and quote. # Get /v2/info/bridges Source: https://docs.cosmos.network/skip-go/api-reference/prod/info/get-v2infobridges swagger get /v2/info/bridges Get all supported bridges # Get /v2/info/chains Source: https://docs.cosmos.network/skip-go/api-reference/prod/info/get-v2infochains swagger get /v2/info/chains Get all supported chains along with additional data useful for building applications + frontends that interface with them (e.g. logo URI, IBC capabilities, fee assets, bech32 prefix, etc...) # Post /v2/info/balances Source: https://docs.cosmos.network/skip-go/api-reference/prod/info/post-v2infobalances swagger post /v2/info/balances Get the balances of a given set of assets on a given chain and wallet address. Compatible with all Skip Go-supported assets, excluding CW20 assets, across SVM, EVM, and Cosmos chains. # Get /v2/tx/status Source: https://docs.cosmos.network/skip-go/api-reference/prod/transaction/get-v2txstatus swagger get /v2/tx/status Get the status of the specified transaction and any subsequent IBC or Axelar transfers if routing assets cross chain. The transaction must have previously been submitted to either the /submit or /track endpoints. 
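The `STATE_*` values returned by /v2/tx/status (documented in the error-codes reference above) can be folded into a small client-side helper. The sketch below is an illustration, not part of `@skip-go/client`; the `classifyTxState` name and the `TxOutcome` categories are assumptions. The mapping follows the status table: `STATE_COMPLETED_SUCCESS` is the only definitive success, `STATE_UNKNOWN` is treated as an error, and `STATE_ABANDONED` means tracking timed out and may be retriable.

```typescript
// Coarse outcome buckets for the STATE_* values from /v2/tx/status.
type TxOutcome = "success" | "error" | "retriable" | "in_progress";

function classifyTxState(state: string): TxOutcome {
  switch (state) {
    case "STATE_COMPLETED_SUCCESS":
      // The only status that definitively indicates end-to-end success.
      return "success";
    case "STATE_COMPLETED_ERROR":
    case "STATE_PENDING_ERROR":
    case "STATE_UNKNOWN":
      // STATE_UNKNOWN should be treated as an error state per the table.
      return "error";
    case "STATE_ABANDONED":
      // Tracking timed out; not necessarily a permanent failure.
      return "retriable";
    case "STATE_SUBMITTED":
    case "STATE_PENDING":
    case "STATE_RECEIVED":
      return "in_progress";
    default:
      // Surface unrecognized states rather than silently ignoring them.
      return "error";
  }
}
```

A frontend polling /v2/tx/status can keep polling while the outcome is `"in_progress"`, and only report success on `"success"`.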
# Post /v2/tx/submit Source: https://docs.cosmos.network/skip-go/api-reference/prod/transaction/post-v2txsubmit swagger post /v2/tx/submit Submit a signed, base64-encoded transaction to be broadcast to the specified network. On successful submission, the status of the transaction and any subsequent IBC or Axelar transfers can be queried through the /status endpoint. # Post /v2/tx/track Source: https://docs.cosmos.network/skip-go/api-reference/prod/transaction/post-v2txtrack swagger post /v2/tx/track Requests tracking of a transaction that has already landed on-chain but was not broadcast through the Skip Go API. The status of a tracked transaction, and of any subsequent IBC or Axelar transfers if assets are routed cross-chain, can be queried through the /status endpoint. # Advanced Features Source: https://docs.cosmos.network/skip-go/client/advanced-features This page details advanced features and utilities in the Skip Go client library. ## Adding custom messages before or after route execution The `executeRoute` method now accepts `beforeMsg` and `afterMsg` parameters to allow for the execution of custom Cosmos messages before and/or after the route is executed. This is useful for executing custom messages that are not part of the route definition.

```typescript theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
const msg = JSON.stringify({
  fromAddress: 'cosmos1...', // Replace with sender address
  toAddress: 'cosmos1...', // Replace with recipient address
  amount: [{
    denom: 'uatom', // Replace with the actual denom, e.g., 'uatom' for ATOM
    amount: '1000000' // Replace with the actual amount (in smallest unit, e.g., micro-ATOM)
  }]
});

await executeRoute({
  route,
  userAddresses,
  beforeMsg: {
    msg,
    msgTypeUrl: '/cosmos.bank.v1beta1.MsgSend'
  }
});
```

## Use the Go Fast Transfer system The `route` function accepts a `goFast` parameter to enable Go Fast Transfers. Then pass this route to the `executeRoute` method to execute it.
```typescript theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
import { executeRoute, route } from '@skip-go/client';

// Name the result something other than `route`, so the imported
// function is not shadowed.
const routeResponse = await route({
  goFast: true,
  ...otherParams,
});

await executeRoute({
  route: routeResponse,
  ...otherParams,
});
```

## Add a fee payer for Solana transactions The `executeRoute` method accepts an `svmFeePayer` parameter to specify a fee payer for Solana transactions. This is useful when you want to have a different account pay the transaction fees.

```typescript theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
import { Keypair } from "@solana/web3.js";
import nacl from "tweetnacl";
import bs58 from "bs58";
import { executeRoute, route } from '@skip-go/client';

const routeResponse = await route({
  // ... other parameters
});

await executeRoute({
  route: routeResponse,
  userAddresses,
  svmFeePayer: {
    address: 'FEE_PAYER_ADDRESS', // Replace with the fee payer's Solana address
    signTransaction: async (dataToSign: Buffer) => {
      // Example only: here the fee payer signs with a raw private key
      const privateKey = "FEE_PAYER_PRIVATE_KEY";
      const keypairBytes = bs58.decode(privateKey);
      const keypair = Keypair.fromSecretKey(keypairBytes);
      return nacl.sign.detached(dataToSign, keypair.secretKey);
    },
  },
  ...otherParams,
});
```

# Executing a route Source: https://docs.cosmos.network/skip-go/client/executing-a-route This page documents the executeRoute function, used to execute a token transfer/swap using a route from the /v2/fungible/route API ## Executing a cross-chain route The executeRoute function is used to execute a route, including optional support for simulating transactions, injecting custom Cosmos messages, and tracking transaction status. You must provide a list of user addresses (one per chain in route.requiredChainAddresses), along with the route object returned from route (/v2/fungible/route).
This function handles validation of addresses and gas balances, prepares the route messages, and executes transactions in order across Cosmos, Evm, and Svm chains. ```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} async executeRoute(options: ExecuteRouteOptions): Promise ``` ## Required fields `route` A RouteResponse containing swap or transfer operations. `userAddresses` One user address per chain in the same order as route.requiredChainAddresses. All user addresses must match the chain ids expected in the route, and must be valid for the corresponding chain type (Cosmos, Evm, or Svm). An error will be thrown if the addresses are mismatched or malformed. `getCosmosSigner` Function that takes a chainId and returns a `Promise` `getEvmSigner` Function that takes a chainId and returns a `Promise` `getSvmSigner` Function that returns a `Promise` ## Optional fields `slippageTolerancePercent` Set the maximum slippage tolerance for the route (defaults to "1"). `simulate` Whether to simulate transactions before executing (defaults to true). `batchSimulate` If true, simulate all transactions in a batch before execution; if false, simulate each transaction individually. (defaults to true). `batchSignTxs` If true, all transactions in a multi-transaction route will be signed upfront before broadcasting; if false, each transaction will be signed individually just before broadcasting. (defaults to true). `beforeMsg / afterMsg` Optional Cosmos messages to inject at the beginning or end of the route execution. `useUnlimitedApproval` If true, sets Evm token allowances to MAX\_UINT256. (defaults to false). `bypassApprovalCheck` If true, skips token approval checks on Evm. (defaults to false). `timeoutSeconds` Time in seconds to wait for message preparation before timing out. `getGasPrice` Override the gas price per chain. `getFallbackGasAmount` Fallback gas to use if simulation fails. 
`gasAmountMultiplier` Overrides the default simulation multiplier (default is 1.5). `getCosmosPriorityFeeDenom` Function that takes a chainId and returns a `Promise` for the priority fee denom to use for Cosmos transactions. Example:

```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
await executeRoute({
  // ...executeRoute params
  getCosmosPriorityFeeDenom: async (chainId) => {
    // this will set the priority fee denom to ATOM (its IBC denom) on the noble-1 chain
    if (chainId === "noble-1")
      return "ibc/EF48E6B1A1A19F47ECAEA62F5670C37C0580E86A9E88498B7E393EB6F49F33C0";
    return undefined;
  },
});
```

## Callbacks You can optionally provide the following callbacks: `onTransactionSignRequested` Called when a transaction is ready to be signed in the wallet. `onTransactionBroadcast` Called after each transaction is broadcast. `onTransactionCompleted` Called after each transaction is confirmed. `onTransactionTracked` Called during confirmation polling; useful for progress-tracking UIs. `onValidateGasBalance` Called while gas and balance validation is being performed. `onApproveAllowance` Called when the token allowance transaction is being executed (EVM transactions only).
```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
await executeRoute({
  route,
  userAddresses: [
    { chainId: "osmosis-1", address: osmoAddress },
    { chainId: "ethereum", address: evmAddress },
  ],
  simulate: true,
  batchSignTxs: true,
  slippageTolerancePercent: "0.5",
  beforeMsg: cosmosMsg1,
  afterMsg: cosmosMsg2,
  getCosmosSigner: (chainId) => {},
  getEvmSigner: (chainId) => {},
  getSvmSigner: () => {},
  onTransactionBroadcast: ({ chainId, txHash }) => {
    console.log(`Broadcasted on ${chainId}: ${txHash}`);
  },
  onTransactionCompleted: ({ chainId, txHash, status }) => {
    console.log(`Completed on ${chainId}: ${txHash} (Status: ${status})`);
  },
  onTransactionTracked: ({ chainId, txHash, status }) => {
    console.log(`Tracking ${chainId}: ${txHash} (Status: ${status})`);
  },
  onTransactionSignRequested: ({ chainId, signerAddress }) => {
    console.log(`Sign requested for ${chainId}`, signerAddress);
  },
});
```

# Gas on Receive with Custom Frontends Source: https://docs.cosmos.network/skip-go/client/gas-on-receive Implement Gas on Receive functionality in your custom frontend using the Skip Go Client Library This guide explains how to implement Gas on Receive functionality when building custom frontends with the Skip Go Client Library. Gas on Receive helps users automatically obtain native gas tokens on destination chains during cross-chain swaps. ## Overview Gas on Receive prevents users from getting "stuck" with assets they can't use by automatically providing a small amount of native gas tokens on the destination chain. The client library provides the `getRouteWithGasOnReceive` function that handles all the complexity for you, or you can implement custom logic if you need specific behavior.
## Prerequisites * Skip Go Client Library v1.5.0 or higher * Understanding of the basic `route` and `executeRoute` functions * Access to user wallet signers for multiple chains ### Supported Chains * **Destination**: Cosmos chains and EVM L2s (Solana not supported) * **Source**: Most chains except Ethereum mainnet and Sepolia testnet ## Quick Start: Using getRouteWithGasOnReceive The simplest way to implement Gas on Receive is using the built-in `getRouteWithGasOnReceive` function:

```typescript theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
import {
  route,
  getRouteWithGasOnReceive,
  executeMultipleRoutes,
  executeRoute,
  type RouteRequest,
  type RouteResponse,
  type UserAddress
} from "@skip-go/client";

async function swapWithGasOnReceive() {
  try {
    // Step 1: Define your route request
    const routeRequest = {
      amountIn: "1000000", // 1 OSMO
      sourceAssetChainId: "osmosis-1",
      sourceAssetDenom: "uosmo",
      destAssetChainId: "42161", // Arbitrum
      destAssetDenom: "0x82aF49447D8a07e3bd95BD0d56f35241523fBab1", // WETH
      smartRelay: true
    };

    // Step 2: Get your initial route
    const originalRoute = await route(routeRequest);

    // Step 3: Automatically split into main and gas routes
    const { mainRoute, gasRoute } = await getRouteWithGasOnReceive({
      routeResponse: originalRoute,
      routeRequest
    });

    // Step 4: Get user addresses based on required chains
    // Note: mainRoute and gasRoute may require different chains
    const mainRouteAddresses = mainRoute.requiredChainAddresses.map(chainId => ({
      chainId,
      address: getUserAddressForChain(chainId) // Your function to get user's address
    }));

    const gasRouteAddresses = gasRoute?.requiredChainAddresses.map(chainId => ({
      chainId,
      address: getUserAddressForChain(chainId)
    }));

    // Example helper function:
    // function getUserAddressForChain(chainId: string): string {
    //   const addresses = {
    //     "osmosis-1": "osmo1...",
    //     "42161": "0x..."
    //   };
    //   const address = addresses[chainId];
    //   if (!address) throw new Error(`No address for chain ${chainId}`);
    //   return address;
    // }

    // Step 5: Execute routes
    if (gasRoute && gasRouteAddresses) {
      // Execute both routes together
      await executeMultipleRoutes({
        route: {
          mainRoute,
          feeRoute: gasRoute
        },
        userAddresses: {
          mainRoute: mainRouteAddresses,
          feeRoute: gasRouteAddresses // May be different chains than mainRoute
        },
        slippageTolerancePercent: {
          mainRoute: "1",
          feeRoute: "10" // Higher tolerance for gas route
        },
        getCosmosSigningClient: async (chainId) => {
          return yourCosmosWallet.getSigningClient(chainId);
        },
        getEVMSigningClient: async (chainId) => {
          return yourEvmWallet.getSigningClient(chainId);
        },
        onRouteStatusUpdated: (status) => {
          console.log("Route status:", status);
          // Check if gas route failed
          const gasRouteFailed = status.relatedRoutes?.find(
            r => r.routeKey === "feeRoute" && r.status === "failed"
          );
          if (gasRouteFailed) {
            console.warn("Gas route failed, but main swap continues");
          }
        }
      });
    } else {
      // No gas route needed, execute original route
      await executeRoute({
        route: originalRoute,
        userAddresses: mainRouteAddresses,
        slippageTolerancePercent: "1",
        getCosmosSigningClient: async (chainId) => {
          return yourCosmosWallet.getSigningClient(chainId);
        },
        getEVMSigningClient: async (chainId) => {
          return yourEvmWallet.getSigningClient(chainId);
        }
      });
    }
  } catch (error) {
    console.error("Failed to execute swap:", error);
    // Handle error appropriately
  }
}
```

### How getRouteWithGasOnReceive Works The function automatically: 1. Checks if the destination chain is supported (excludes Solana chains) 2. Checks source chain support (excludes Ethereum mainnet and Sepolia testnet) 3. Verifies the destination asset isn't already a fee asset 4. Calculates appropriate gas amounts: * **Cosmos chains**: Average gas price × 3 (or \$0.10 USD fallback if gas price unavailable) * **EVM L2 chains**: \$2.00 USD worth of native tokens 5. Creates a gas route to obtain native tokens 6. Adjusts the main route amount accordingly 7.
Returns both routes, or the original route as mainRoute if gas route creation fails **Note**: If gas route creation fails for any reason, the function returns the original route as `mainRoute` with `gasRoute` as undefined, allowing your swap to proceed without gas-on-receive. ## Custom Implementation Guide If you need to customize the Gas on Receive behavior beyond what `getRouteWithGasOnReceive` provides: ### Step 1: Check Destination Gas Balance Before initiating a swap, check if the user has sufficient gas on the destination chain:

```typescript theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
import { balances } from "@skip-go/client";

async function checkDestinationGasBalance(
  destinationChainId: string,
  userAddress: string,
  requiredGasAmount?: string
) {
  // Fetch user's balance on destination chain
  const userBalances = await balances({
    chains: {
      [destinationChainId]: { address: userAddress }
    }
  });

  const chainBalances = userBalances.chains?.[destinationChainId]?.denoms;

  // For EVM chains, check native token (address 0x0000...)
  const nativeTokenDenom = getNativeTokenDenom(destinationChainId);
  const nativeBalance = chainBalances?.[nativeTokenDenom];

  // Simple check: does user have any native token?
  if (!nativeBalance?.amount || nativeBalance.amount === "0") {
    return false;
  }

  // Optional: Check against a minimum threshold
  if (requiredGasAmount) {
    return Number(nativeBalance.amount) >= Number(requiredGasAmount);
  }

  return true;
}

function getNativeTokenDenom(chainId: string): string {
  // For EVM chains
  if (isEvmChain(chainId)) {
    return "0x0000000000000000000000000000000000000000";
  }
  // For Cosmos chains, you'll need to fetch the fee assets
  // This varies by chain (e.g., "uosmo" for Osmosis, "uatom" for Cosmos Hub)
  return getCosmosNativeDenom(chainId);
}
```

### Step 2: Calculate Gas Amount Needed

```typescript theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
const GAS_AMOUNTS_USD = {
  cosmos: 0.10, // $0.10 for Cosmos chains
  evm_l2: 2.00, // $2.00 for EVM L2 chains
  evm_mainnet: 0 // Disabled for Ethereum mainnet
};

async function calculateGasAmount(
  sourceAsset: { chainId: string; denom: string },
  destinationChainId: string,
  sourceAssetPriceUsd: number
): Promise {
  const chainType = await getChainType(destinationChainId);

  // Determine USD amount based on chain type
  let usdAmount = 0;
  if (chainType === 'cosmos') {
    usdAmount = GAS_AMOUNTS_USD.cosmos;
  } else if (chainType === 'evm' && destinationChainId !== "1") {
    usdAmount = GAS_AMOUNTS_USD.evm_l2;
  }

  if (usdAmount === 0) {
    throw new Error("Gas on Receive not supported for this chain");
  }

  // Convert USD amount to source asset amount
  const sourceAmount = usdAmount / sourceAssetPriceUsd;

  // Convert to crypto amount (considering decimals)
  const sourceAssetDecimals = await getAssetDecimals(sourceAsset);
  return convertToCryptoAmount(sourceAmount, sourceAssetDecimals);
}
```

### Step 3: Create Routes with Custom Logic

```typescript theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
import { route } from "@skip-go/client";

async function createRoutesManually(
  amountIn: string,
  sourceAsset: { chainId: string; denom: string },
  destAsset: { chainId: string; denom: string },
  gasAmount: string,
  enableGasOnReceive: boolean
) {
  // Calculate adjusted amounts
  const gasRouteAmount = enableGasOnReceive ? gasAmount : "0";
  const mainRouteAmount = (Number(amountIn) - Number(gasRouteAmount)).toString();

  // Create main route with reduced amount
  const mainRoute = await route({
    amountIn: mainRouteAmount,
    sourceAssetChainId: sourceAsset.chainId,
    sourceAssetDenom: sourceAsset.denom,
    destAssetChainId: destAsset.chainId,
    destAssetDenom: destAsset.denom,
    smartRelay: true
  });

  // Create gas route only if enabled
  let gasRoute = null;
  if (enableGasOnReceive) {
    const nativeTokenDenom = getNativeTokenDenom(destAsset.chainId);
    gasRoute = await route({
      amountIn: gasRouteAmount,
      sourceAssetChainId: sourceAsset.chainId,
      sourceAssetDenom: sourceAsset.denom,
      destAssetChainId: destAsset.chainId,
      destAssetDenom: nativeTokenDenom,
      smartRelay: true
    });
  }

  return { mainRoute, gasRoute };
}
```

### Step 4: Execute Routes

```typescript theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
import { executeMultipleRoutes, type RouteResponse, type UserAddress } from "@skip-go/client";

async function executeSwapWithGasOnReceive(
  mainRoute: RouteResponse,
  gasRoute: RouteResponse | null,
  userAddresses: UserAddress[],
  signers: {
    getCosmosSigningClient: (chainId: string) => Promise;
    getEVMSigningClient: (chainId: string) => Promise;
  }
) {
  // Build routes object with consistent naming
  const routes = gasRoute
    ? { mainRoute, feeRoute: gasRoute }
    : { mainRoute };

  // Build user addresses object
  const addresses = gasRoute
    ? { mainRoute: userAddresses, feeRoute: userAddresses }
    : { mainRoute: userAddresses };

  // Build slippage settings
  const slippage = gasRoute ?
    { mainRoute: "1", feeRoute: "10" } // Higher tolerance for gas route
    : { mainRoute: "1" };

  // Execute both routes
  await executeMultipleRoutes({
    route: routes,
    userAddresses: addresses,
    slippageTolerancePercent: slippage,
    getCosmosSigningClient: signers.getCosmosSigningClient,
    getEVMSigningClient: signers.getEVMSigningClient,
    onRouteStatusUpdated: (status) => {
      // Handle route status updates
      console.log("Route status:", status);

      // Check if gas route failed
      const gasRouteFailed = status.relatedRoutes?.find(
        r => r.routeKey === "feeRoute" && r.status === "failed"
      );
      if (gasRouteFailed) {
        console.warn("Gas route failed, but main swap continues");
      }
    }
  });
}
```

## Complete Implementation Example Here's a full example combining all the concepts:

```typescript theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
import {
  route,
  executeRoute,
  executeMultipleRoutes,
  balances,
  assets,
  getRouteWithGasOnReceive,
  type RouteStatus
} from "@skip-go/client";

class GasOnReceiveManager {
  private readonly GAS_AMOUNTS_USD = {
    cosmos: 0.10,
    evm_l2: 2.00
  };

  async shouldEnableGasOnReceive(
    destinationChainId: string,
    destinationAddress: string,
    destinationAssetDenom: string
  ): Promise {
    // Check if chain is supported
    if (!this.isChainSupported(destinationChainId)) {
      return false;
    }

    // Don't enable if destination asset is already a gas token
    if (await this.isGasToken(destinationChainId, destinationAssetDenom)) {
      return false;
    }

    // Check user's gas balance
    const hasGas = await this.checkGasBalance(destinationChainId, destinationAddress);
    return !hasGas;
  }

  private isChainSupported(chainId: string): boolean {
    // Solana chains not supported for destination
    const unsupportedDestChains = ["solana", "solana-devnet"];
    // Ethereum mainnet and Sepolia not supported as source
    const unsupportedSourceChains = ["1", "11155111"];

    // For this example, checking destination support
    return !unsupportedDestChains.includes(chainId);
  }

  private async isGasToken(chainId: string, denom: string): Promise<boolean> {
    const chainAssets = await assets({ chainId });
    const gasTokens = chainAssets.chain?.feeAssets || [];
    return gasTokens.some(token => token.denom === denom);
  }

  private async checkGasBalance(
    chainId: string,
    address: string
  ): Promise {
    const balanceResponse = await balances({
      chains: {
        [chainId]: { address }
      }
    });

    const nativeDenom = await this.getNativeDenom(chainId);
    const balance = balanceResponse?.chains?.[chainId]?.denoms?.[nativeDenom];

    // Check if user has any balance
    return !!balance?.amount && balance.amount !== "0";
  }

  async executeSwapWithGasOnReceive(
    params: {
      amountIn: string;
      sourceAsset: { chainId: string; denom: string };
      destAsset: { chainId: string; denom: string };
      userAddresses: Array<{ chainId: string; address: string }>;
      enableGasOnReceive: boolean;
      signers: any;
    }
  ) {
    const { amountIn, sourceAsset, destAsset, userAddresses, enableGasOnReceive, signers } = params;

    // Get initial route
    const originalRoute = await route({
      amountIn,
      sourceAssetChainId: sourceAsset.chainId,
      sourceAssetDenom: sourceAsset.denom,
      destAssetChainId: destAsset.chainId,
      destAssetDenom: destAsset.denom
    });

    if (enableGasOnReceive) {
      // Use automatic splitting
      const { mainRoute, gasRoute } = await getRouteWithGasOnReceive({
        routeResponse: originalRoute,
        routeRequest: {
          amountIn,
          sourceAssetChainId: sourceAsset.chainId,
          sourceAssetDenom: sourceAsset.denom,
          destAssetChainId: destAsset.chainId,
          destAssetDenom: destAsset.denom
        }
      });

      if (gasRoute) {
        // Execute both routes
        await executeMultipleRoutes({
          route: { mainRoute, feeRoute: gasRoute },
          userAddresses: {
            mainRoute: userAddresses,
            feeRoute: userAddresses
          },
          slippageTolerancePercent: {
            mainRoute: "1",
            feeRoute: "10" // Higher tolerance for gas route
          },
          ...signers,
          onRouteStatusUpdated: this.handleRouteStatus
        });
      } else {
        // Execute just the main route
        await executeRoute({
          route: mainRoute,
          userAddresses,
          slippageTolerancePercent: "1",
          ...signers
        });
      }
    } else {
      // Execute original route without gas
      await executeRoute({
        route:
        originalRoute,
        userAddresses,
        slippageTolerancePercent: "1",
        ...signers
      });
    }
  }

  private handleRouteStatus(status: RouteStatus) {
    if (status.status === "completed") {
      console.log("Swap completed successfully");
    }

    // Check gas route status
    const gasRoute = status.relatedRoutes?.find(r => r.routeKey === "feeRoute");
    if (gasRoute?.status === "failed") {
      console.warn("Gas provision failed, but main swap continues");
    } else if (gasRoute?.status === "completed") {
      console.log("Gas tokens received successfully");
    }
  }
}
```

## UI Considerations When implementing Gas on Receive in your UI: ### Display Gas Information

```typescript theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
function GasOnReceiveToggle({ enabled, gasAmount, gasAssetSymbol, onToggle }: GasOnReceiveProps) {
  return (
    // Markup simplified for illustration; the original elements were lost in extraction.
    <label>
      <input type="checkbox" checked={enabled} onChange={onToggle} />
      Enable gas top up - You'll get {gasAmount} in {gasAssetSymbol}
    </label>
  );
}
```

### Show Status During Execution

```typescript theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Note: wrapping elements simplified to <span> for illustration; the original
// component markup was lost in extraction.
function GasStatus({ status, amount, symbol }: GasStatusProps) {
  switch (status) {
    case 'pending':
      return <span>Receiving {amount} in {symbol}...</span>;
    case 'completed':
      return <span>✓ Received {amount} in {symbol} as gas top-up</span>;
    case 'failed':
      return <span>⚠ Failed to receive gas tokens</span>;
    default:
      return null;
  }
}
```

## Error Handling Handle various failure scenarios gracefully:

```typescript theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
async function handleGasRouteErrors(error: Error, mainRouteStatus: string) {
  // Gas route failures don't affect main swap
  if (mainRouteStatus === 'completed') {
    console.log("Main swap succeeded despite gas route failure");
    // Show warning to user about missing gas
    showWarning("Swap completed but gas tokens were not received");
  }

  // Log for debugging
  console.error("Gas route error:", error);

  // Track in analytics
  trackEvent("gas_route_failed", {
    error: error.message,
    mainRouteStatus
  });
}
```

## Best Practices

1. **Use getRouteWithGasOnReceive**: The automatic function handles edge cases and optimizations
2. **Auto-detection**: Check gas balances and suggest Gas on Receive when needed
3. **User Control**: Always allow users to toggle the feature on/off
4. **Clear Communication**: Show exact amounts and costs transparently
5. **Graceful Degradation**: Main swap should continue even if gas route fails
6. **Higher Slippage**: Use 10% slippage for gas routes (vs 1% for main routes)
7. **Chain Support**: Disable for Ethereum mainnet and Solana
8.
**Amount Limits**: Use recommended amounts ($0.10 for Cosmos, $2.00 for EVM L2s)

## Advanced Configuration ### Custom Gas Amounts

```typescript theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Override default gas amounts
const customGasAmounts = {
  "osmosis-1": "100000", // 0.1 OSMO
  "42161": "0.001",      // 0.001 ETH on Arbitrum
  "137": "2"             // 2 MATIC on Polygon
};

async function getCustomGasAmount(chainId: string): Promise {
  return customGasAmounts[chainId] || getDefaultGasAmount(chainId);
}
```

### Dynamic Pricing

```typescript theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}}
// Adjust gas amount based on current gas prices
async function calculateDynamicGasAmount(chainId: string) {
  const gasPrice = await getGasPrice(chainId);
  const estimatedTxCount = 5; // Assume user needs gas for 5 transactions
  const gasPerTx = 21000;     // Basic transfer gas limit

  const totalGasNeeded = gasPrice * gasPerTx * estimatedTxCount;
  return totalGasNeeded.toString();
}
```

## Comparison with Widget Implementation

| Feature | Widget (Automatic) | Client Library (Manual) |
| --------------------- | ------------------ | ---------------------------------------- |
| Gas balance detection | Automatic | Manual or use `getRouteWithGasOnReceive` |
| Route creation | Automatic | Use `getRouteWithGasOnReceive` or manual |
| Amount calculation | Built-in defaults | Built-in with `getRouteWithGasOnReceive` |
| UI components | Provided | Build your own |
| Error handling | Automatic | Manual implementation |
| Status tracking | Built-in | Via callbacks |

## Summary The Skip Go Client Library provides flexible options for implementing Gas on Receive:

1. **Quick implementation** with `getRouteWithGasOnReceive` for automatic route splitting
2. **Full control** with manual balance checking and route creation
3. **Status tracking** via callbacks in `executeMultipleRoutes`
4.
**Graceful error handling** where gas route failures don't affect main swaps Choose the approach that best fits your application's needs. For most use cases, `getRouteWithGasOnReceive` provides the ideal balance of simplicity and functionality. # Migration Guide Source: https://docs.cosmos.network/skip-go/client/migration-guide Both the Skip Router SDK ([`@skip-router/core`](https://www.npmjs.com/package/@skip-router/core)) and Skip Go Core ([`@skip-go/core`](https://www.npmjs.com/package/@skip-go/core)) are deprecated. Please migrate to Skip Go Client ([`@skip-go/client`](https://www.npmjs.com/package/@skip-go/client)), our actively maintained client package. ## Breaking changes This section details the migration from previous versions to the latest [`@skip-go/client`](https://www.npmjs.com/package/@skip-go/client). ### No More SkipClient Class The `SkipClient` class has been removed. Instead, import and use individual functions directly: ```TypeScript Example import theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} import { assets, assetsBetweenChains, assetsFromSource, recommendAssets, bridges, balances, chains, venues, ibcOriginAssets, route, messages, messagesDirect, submitTransaction, trackTransaction, transactionStatus, executeRoute, setClientOptions, setApiOptions, } from '@skip-go/client'; ``` ### Initialization Changes **If not using `executeRoute`:** * Call `setApiOptions({ apiUrl, apiKey })` once at initialization. * Alternatively, pass `apiUrl` and `apiKey` as arguments to each individual API function call. **If using `executeRoute`:** * Call `setClientOptions()` with the same options object previously passed to the `SkipClient` constructor. * **Exception:** `getCosmosSigner`, `getEVMSigner`, and `getSVMSigner` have been removed from `setClientOptions`. These signer functions are now passed directly to `executeRoute` when needed. 
* **Renamed:** `getEVMSigner` is now `getEvmSigner`, and `getSVMSigner` is now `getSvmSigner`. ```Diff Example migration theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} - const client = new SkipClient(options); + setClientOptions(options); // Only if using executeRoute - client.chains(); + chains(); // Assuming apiUrl/apiKey were set via setApiOptions or passed in ``` ### Build Format Change The library build format has changed from CommonJS (CJS) to `ES Modules (ESM)`. This change enables better tree-shaking, leading to significantly smaller bundle sizes for applications that don't use all the library's features. If you're **not** using `executeRoute`, your final bundle size should decrease dramatically (e.g., from \~5MB to potentially \~7KB for a single API function usage), assuming tree-shaking is enabled in your bundler. ### Axios Removed `axios` is no longer a dependency. All API calls now utilize the standard `window.fetch` API internally. ### CamelCase Update All property names in API responses and configuration objects now strictly adhere to `camelCase`. **Examples:** | Before | After | | -------------- | -------------- | | `chainID` | `chainId` | | `apiURL` | `apiUrl` | | `logoURI` | `logoUri` | | `asset.isCW20` | `asset.isCw20` | ### Named parameter enforcement for API functions Some methods now require named parameters or an options object instead of positional arguments: #### recommendAssets Old: ```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} await client.recommendAssets(request); // OR await client.recommendAssets([request1, request2]); ``` New: ```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} await recommendAssets({ requests: [request1, request2] }); ``` Wrap the array in a `{ requests: [...] }` object. 
#### ibcOriginAssets Old: ```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} await client.ibcOriginAssets(assets); ``` New: ```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} await ibcOriginAssets({ assets: [asset1, asset2] }); ``` Wrap the assets array in a `{ assets: [...] }` object. #### `getFeeInfoForChain` Parameters for `getFeeInfoForChain` should now be passed as an object. Old: ```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} await client.getFeeInfoForChain(chainID); ``` New: ```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} import { getFeeInfoForChain } from '@skip-go/client'; // Assuming setApiOptions was called await getFeeInfoForChain({ chainId: chainID }); // Or, if not using setApiOptions globally for apiUrl/apiKey: // await getFeeInfoForChain({ chainId: chainID, apiUrl: YOUR_API_URL, apiKey: YOUR_API_KEY }); ``` #### `getRecommendedGasPrice` Parameters for `getRecommendedGasPrice` should now be passed as an object. Old: ```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} await client.getRecommendedGasPrice(chainID); ``` New: ```ts theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} import { getRecommendedGasPrice } from '@skip-go/client'; // Assuming setApiOptions was called await getRecommendedGasPrice({ chainId: chainID }); // Or, if not using setApiOptions globally for apiUrl/apiKey: // await getRecommendedGasPrice({ chainId: chainID, apiUrl: YOUR_API_URL, apiKey: YOUR_API_KEY }); ``` ### Removed Internal Functions The following functions that were previously exported are no longer available in v1.0.0. 
These were internal functions that were not intended for direct use by integrators, as they are used internally by `executeRoute`: * `executeTxs` * `executeEvmMsg` (merged with `executeEvmTransaction`) * `executeCosmosMessage` (merged with `executeCosmosTransaction`) * `executeEVMTransaction` * `executeSVMTransaction` * `signCosmosMessageDirect` * `signCosmosMessageAmino` * `getRpcEndpointForChain` * `getRestEndpointForChain` * `validateGasBalances` * `validateEvmGasBalance` * `validateEvmTokenApproval` * `validateSvmGasBalance` * `validateUserAddresses` * `getMainnetAndTestnetChains` * `getMainnetAndTestnetAssets` * `getAccountNumberAndSequence` If your application was using any of these functions directly, consider using `executeRoute` instead, which handles all transaction execution internally. If you have a specific use case that requires access to any of these functions, please open a ticket on our [Discord](https://discord.gg/skip). ### Breaking changes * Removed `clientID` param in `SkipClient` * Added `apiKey` param in `SkipClient` * Added `requiredChainAddresses` in `SkipClient.route` response * Added `smartSwapOptions` in `SkipClient.route` request ```JavaScript Type signature theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} smartSwapOptions: { splitRoutes: boolean } ``` ## Breaking changes * Changed parameter type of `userAddresses` from a map of chainIDs to addresses to an array of `UserAddress` types ```TypeScript Type signature theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} export interface UserAddress { chainID: string; address: string; } ``` ### Breaking changes * Removed `SkipClient.executeMultiChainMessage` method * Renamed `SkipClient.getGasAmountForMessage` method to `SkipClient.getCosmosGasAmountForMessage` * Renamed `SkipClient.getFeeForMessage` to `SkipClient.getCosmosFeeForMessage` * Renamed `MultiChainMsg` type to `CosmosMsg` * Renamed and changed parameters of 
`SkipClient.executeMultiChainMsgs` to `SkipClient.executeTxs` ```Diff Diff theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} const client = new SkipClient({ apiURL: SKIP_API_URL, // ... rest of your configs }); - client.executeMultiChainMsgs({ + client.executeTxs({ ...options - msgs: types.Msg[] + txs: types.Tx[] }) ``` * Param of `SkipClient.executeCosmosMessage` changed from `message:MultiChainMsg` to `messages: CosmosMsg[]` ```Diff Diff theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} const client = new SkipClient({ apiURL: SKIP_API_URL, // ... rest of your configs }); client.executeCosmosMessage({ ...options - message: MultiChainMsg + messages: CosmosMsg[] }) ``` # Contract Addresses Source: https://docs.cosmos.network/skip-go/eureka/contract-addresses Key contract addresses for the IBC Eureka deployment. The contract addresses listed on this page are for the Ethereum mainnet deployment. Please note that these addresses may change. The most up-to-date deployment information can always be found in the [cosmos/eureka-ops GitHub repository](https://github.com/cosmos/eureka-ops/blob/main/deployments/mainnet/1.json). 
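Because these addresses can change, one option is to resolve them at runtime from the deployment file linked above instead of hard-coding them. The sketch below is illustrative, not official tooling: it only assumes the repository's `deployments/<network>/<chainId>.json` file layout, and makes no assumption about the JSON schema inside the file.

```typescript
// Build the raw-content URL for a deployment file in cosmos/eureka-ops.
// Only the deployments/<network>/<chainId>.json path layout is assumed here.
function deploymentUrl(network: string, chainId: number): string {
  return `https://raw.githubusercontent.com/cosmos/eureka-ops/main/deployments/${network}/${chainId}.json`;
}

// At runtime, fetch the file and read addresses from the parsed JSON:
// const deployment = await (await fetch(deploymentUrl("mainnet", 1))).json();
```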
## Security councils * **Eureka Ops Council** (Gnosis Safe): `eth:0x4b46ea82D80825CA5640301f47C035942e6D9A46` * **Eureka Security Council** (Gnosis Safe): `eth:0x7B96CD54aA750EF83ca90eA487e0bA321707559a` ## ICS26 Router * **clientIdCustomizer**: `0x4b46ea82D80825CA5640301f47C035942e6D9A46` * **implementation**: `0x4e9083eC6ed91d6ab6b59EaEcfCd4459F76dCdE1` * **portCustomizer**: `0x4b46ea82D80825CA5640301f47C035942e6D9A46` * **proxy**: `0x3aF134307D5Ee90faa2ba9Cdba14ba66414CF1A7` * **relayers**: * `0xC4C09A23dDBd1fF0f313885265113F83622284C2` * `0xACf94C35456413F9F1FCe1aE8A0412Dd6D6B40D7` * `0x4bc16fEC05B2D9Fa72AdE54BB25A464a20D0D49b` * `0xeB128b9893aDCf2F6C6305160e57b7Beab9A09AF` * **timeLockAdmin**: `0xb3999B2D30dD8c9faEcE5A8a503fAe42b8b1b614` ## ICS20 Transfer * **escrowImplementation**: `0xf24A818d2E276936a7ABDDfaAd9c369a5B9Dcde8` * **ibcERC20Implementation**: `0x337842047368607f458e3D7bb47E676aec1509d9` * **ics26Router**: `0x3aF134307D5Ee90faa2ba9Cdba14ba66414CF1A7` * **implementation**: `0x4658C167824C000eA93D62f15B5c9bb53ee329fE` * **pausers**: * `0x9D91034CF296a02ED54C18A1a2AB9520e5fC135d` * `0xCb5d44d2324D36FfFef0e594d8893c4Fee908d4d` * `0x64ACC525DC35ebca8345fDF6e2A70D012a17740A` * `0x00A8b36491dCc59f1998fa368842709aBdD24eD7` * `0x2FeD70e1Ea7bE86a5C99F3456C0fb70db2801AD2` * **permit2**: `0x000000000022D473030F116dDEE9F6B43aC78BA3` * **proxy**: `0xa348CfE719B63151F228e3C30EB424BA5a983012` * **tokenOperator**: `0x4b46ea82D80825CA5640301f47C035942e6D9A46` * **unpausers**: * `0x7B96CD54aA750EF83ca90eA487e0bA321707559a` ## Light Clients ### Client: cosmoshub-0 * **clientId**: `cosmoshub-0` * **Contract Address**: `0xeA6F72650da80093A1012606Cc7328f5474ed378` ### Client: ledger-mainnet-1 * **clientId**: `ledger-mainnet-1` * **Contract Address**: `0xc9e814bB90B7e43c138F86D5C93Df21817D976Ca` ### Client: client-4 * **clientId**: `client-4` * **Contract Address**: `0x1Ba9912Ab92d8c58e1DEF3f783e4EbE0A516d76E` # Custom ERC20 Integration Source: 
https://docs.cosmos.network/skip-go/eureka/custom-erc20-integration A guide for asset issuers to deploy and register custom ERC20 contracts for their tokens on Ethereum # Custom ERC20 Integration ## Overview In the initial release of [`solidity-ibc-eureka`](https://github.com/cosmos/solidity-ibc-eureka), receiving a non-native token (e.g., ATOM from Cosmos Hub) deploys a default [`IBCERC20`](https://github.com/cosmos/solidity-ibc-eureka/blob/main/contracts/utils/IBCERC20.sol) contract to represent that token on Ethereum. Many teams bridging through the Cosmos Hub, however, want ownership and control over their ERC20 contracts on Ethereum. Since `IBCERC20` is managed by the `ICS20Transfer` contract and isn't customizable, direct ownership isn't possible. To address this, we allow teams to deploy custom ERC20 contracts—provided they implement a simple interface that lets the `ICS20Transfer` contract mint and burn tokens. ## Benefits The benefits of this approach include: Customize metadata and token naming on deployment. Tokens will not initially be named `ibc/transfer/channel-0...` and can be represented with the name they are recognized by. Easier contract verification on Etherscan. Each project can easily verify the ERC20 contract deployed for their project to increase trust by associating the token with the official project domain. This is likely to result in easier CEX listing and generally increased trust in the bridged asset. Complete ownership and control over the ERC20 contract representing your token on Ethereum. 
## Requirements To replace the default `IBCERC20`, your custom ERC20 contract must implement the [`IMintableAndBurnable`](https://github.com/cosmos/solidity-ibc-eureka/blob/main/contracts/interfaces/IMintableAndBurnable.sol) interface: ```solidity theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} interface IMintableAndBurnable { /// @notice Mint new tokens to the Escrow contract /// @dev This function can only be called by an authorized contract (e.g., ICS20) /// @dev This function needs to allow minting tokens to the Escrow contract /// @param mintAddress Address to mint tokens to /// @param amount Amount of tokens to mint function mint(address mintAddress, uint256 amount) external; /// @notice Burn tokens from the Escrow contract /// @dev This function can only be called by an authorized contract (e.g., ICS20) /// @dev This function needs to allow burning of tokens from the Escrow contract /// @param mintAddress Address to burn tokens from /// @param amount Amount of tokens to burn function burn(address mintAddress, uint256 amount) external; } ``` For an example implementation of this interface, you can refer to the [`RefImplIBCERC20.sol`](https://github.com/cosmos/solidity-ibc-eureka/blob/main/test/solidity-ibc/utils/RefImplIBCERC20.sol) contract in the `solidity-ibc-eureka` repository. ### Access Control Requirements These functions must be callable by the proxy of the `ICS20Transfer` contract: * **Mainnet**: `0xa348CfE719B63151F228e3C30EB424BA5a983012` > **Security Note:** > Access to the `mint` and `burn` functions must be strictly limited to the `ICS20Transfer` proxy. Allowing any other address or contract to call these functions could lead to unauthorized token manipulation and compromise the integrity of your token. While token teams may implement additional access controls or rate limits as needed, the `ICS20Transfer` proxy must always retain its ability to perform mint and burn operations. 
> > **Upgradability & Extensibility:** > We may update our interface over time, but we're committed to ensuring backwards compatibility. While making your contract upgradable is not required, doing so allows you to adopt new features or improvements we introduce in the future. ### Upgradability You are not required to deploy an upgradable contract for your custom ERC20. We commit to maintaining the stability of the `IMintableAndBurnable` interface. However, please note that if we extend the interface with new functionality in the future, a non-upgradable contract would not be able to utilize these new features. ## Registering a Custom ERC20 The `ICS20Transfer` contract includes a permissioned method for registering a custom ERC20 via the [`setCustomERC20`](https://github.com/cosmos/solidity-ibc-eureka/blob/bce3a4a0de85697607815e2f7c9d9e2a8a508cd3/contracts/ICS20Transfer.sol#L264C14-L264C28) function. ### Prerequisites Only addresses assigned the `ERC20_CUSTOMIZER_ROLE` can call this function. This role is established by the protocol's security council and administered by the Eureka Ops multi-sig. To request registration of your custom ERC20 contract, [join our Discord](https://discord.com/invite/interchain) and open a support ticket. Additionally, the token's denomination on the Cosmos Hub must be established. The token must either be live on the Hub, or its original denomination and complete IBC path must be known if it originates elsewhere. We require the token to be active on the Cosmos Hub before registration can proceed. ### Critical Timing Requirement `setCustomERC20` must be called **before** the first IBC transfer of the token to the chain where the custom ERC20 is deployed. Once the initial transfer is made, the ERC20 mapping becomes immutable. 
## Getting Started If you're an asset issuer looking to deploy a custom ERC20 contract for your token on Ethereum: 1. Deploy your custom ERC20 contract that implements the `IMintableAndBurnable` interface with proper access controls for the `ICS20Transfer` proxy. For an example, see the [reference implementation](https://github.com/cosmos/solidity-ibc-eureka/blob/main/test/solidity-ibc/utils/RefImplIBCERC20.sol) in the `solidity-ibc-eureka` repository. 2. [Join our Discord](https://discord.com/invite/interchain) and open a support ticket to request registration of your custom ERC20 contract. 3. Verify your contract on Etherscan for greater transparency. For assistance, see the [Etherscan Contract Verification page](https://etherscan.io/verifyContract). 4. Once verified, start bridging your token with complete control over its ERC20 representation. ## Support and Resources Need help with your custom ERC20 integration? Our team is ready to assist: * [Join our Discord](https://discord.com/invite/interchain) and open a support ticket Additional resources: * For technical specifications, visit the [solidity-ibc-eureka repository](https://github.com/cosmos/solidity-ibc-eureka) * View the [reference implementation](https://github.com/cosmos/solidity-ibc-eureka/blob/main/test/solidity-ibc/utils/RefImplIBCERC20.sol) for a sample ERC20 contract * Learn about [Etherscan contract verification](https://etherscan.io/verifyContract) to enhance trust in your token # Overview Source: https://docs.cosmos.network/skip-go/eureka/eureka-overview An overview of IBC Eureka for developers # IBC Eureka: Fast, Cheap, and Seamless Interoperability between Cosmos and Ethereum ## What is IBC Eureka? IBC Eureka is the canonical implementation of IBC v2 that enables seamless interoperability between the Cosmos and Ethereum ecosystems. 
As a subset of the proven Inter-Blockchain Communication (IBC) protocol, Eureka extends the reach of 115+ Cosmos chains to Ethereum (Q1 2025), with support for Layer 2 networks such as Base and Arbitrum, as well as Solana, to be added soon. Connect your Cosmos chain to Ethereum and other EVM chains with minimal cost and battle-tested security. If your chain already uses IBC, is connected to the Cosmos Hub, and has been onboarded to the Skip API, you can immediately connect to Ethereum through Eureka with no new dependencies. ## Getting Started To start using Eureka, you have several options: * Read the [Technical Overview](/skip-go/eureka/eureka-tech-overview) to see how Eureka works. * If you're an asset issuer, the path to integrating your token with Eureka and maintaining control over its representation on Ethereum is by deploying a [Custom ERC20](/skip-go/eureka/custom-erc20-integration). Follow our guide to learn how. * If you're interested in integrating Eureka, contact [Jeremy](https://t.me/NotJeremyLiu) or [Susannah](https://t.me/bigsuse) to walk through the details. * Follow the steps outlined in the [Integration Guide](/skip-go/eureka/integration-guide) to get set up! ## Support and Resources Need help with your Eureka integration? Our team is ready to assist: * [Join our Discord](https://discord.com/invite/interchain) and open a support ticket For technical specifications, visit the [IBC v2 Specification](https://github.com/cosmos/ibc/tree/main/spec/IBC_V2). # Technical Overview Source: https://docs.cosmos.network/skip-go/eureka/eureka-tech-overview Technical details of how IBC Eureka works ## Native IBC Security Model Eureka implements the full IBC light client security model, providing trust-minimized verification of cross-chain transactions: * **Light Client Verification**: Each chain runs a light client of the other chain, enabling cryptographic verification of state transitions. 
On the Ethereum side, we use Succinct's SP1 zero-knowledge proofs for efficient verification * **No Multisig Dependencies**: Unlike many bridge solutions, Eureka doesn't rely on trusted validator sets or multisigs for security * **Permissionless Access**: Any chain with an IBC implementation (classic or v2) can connect to the IBC network and Ethereum * **Minimal Infrastructure Overhead, no ongoing costs**: Relaying, proving, and routing between the Cosmos Hub, Ethereum, and your chain are handled by the smart relayer and paid for by end users. Simply maintain an IBC classic connection to the Cosmos Hub ## Performance and Cost Efficiency ```mermaid theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} graph LR A[Ethereum] --> |~$5 Fast, ~$1 Standard| B[Cosmos Hub] B --> |IBC Eureka| C[Your Cosmos Chain] ``` * **Optimized Gas Consumption**: Transfers from Ethereum to your chain via the Cosmos Hub cost approximately \$5 in fast mode and less than \$1 in standard mode * **Fast Finality**: Assets arrive on destination chains in seconds, regardless of source chain finality times ## Native Asset Representation * **Bank Module Integration**: Received assets live directly in the bank module as native tokens * **No Wrapped Tokens**: Assets are not wrapped or suffixed with bridge-specific identifiers (e.g., no ETH.axl) * **ERC20 Compatibility**: Assets can be easily represented as ERC20s in the future without conversion complexity ## How Eureka Works Eureka connects blockchains through a combination of: 1. **IBC Protocol v2**: The standardized communication layer that defines packet formats and verification logic 2. **Solidity Implementation**: Smart contracts deployed on Ethereum and EVM chains that implement the IBC protocol (Other smart contract chains to come) 3. **Light Clients**: Each chain runs a light client of the other chain to verify state transitions. 
On Ethereum, this uses SP1 zero-knowledge proofs for gas-efficient verification 4. **Relayers**: IBC v2 uses relayers to send messages between chains. We operate the relaying infrastructure for Eureka on your behalf. The IBC protocol guarantees that a packet delivered on the destination chain was definitely sent from the source chain, using cryptographic verification rather than trusted third parties. ## Permissioned Relay The initial rollout of IBC Eureka will use permissioned relayers for additional safety and security guarantees. The IBC light clients will be used in the same way as when IBC is permissionless; the permissioning only means that liveness depends on the permissioned relay set. Permissioning will be removed in the near future. ## Architecture Overview ```mermaid theme={"theme":{"light":"github-light-high-contrast","dark":"github-dark-high-contrast"}} flowchart TD subgraph "Ethereum Network" E[Ethereum] ICS26[ICS26 Contract] ICS20[ICS20 Transfer Contract] LC1[Tendermint Light Client] end subgraph "Cosmos Hub" CH[Cosmos Hub] LC2[Ethereum Light Client] IBCHandler[IBC Handler] Transfer[Transfer Module] end subgraph "Your Cosmos Chain" YC[Your Chain] IBCHandler2[IBC Handler] Transfer2[Transfer Module] end E <--> ICS26 ICS26 <--> ICS20 ICS26 <--> LC1 CH <--> IBCHandler IBCHandler <--> LC2 IBCHandler <--> Transfer YC <--> IBCHandler2 IBCHandler2 <--> Transfer2 Transfer <--> |IBC Connection| Transfer2 LC1 <--> |Relayer| LC2 ``` # Integration Guide Source: https://docs.cosmos.network/skip-go/eureka/integration-guide A guide on how to integrate IBC Eureka for chain developers, asset issuers, and application developers # Types of Integrators **There are three types of integrators of Eureka:** 1. **Chain Developers** - ensuring that your chain is compatible with Eureka and can facilitate the bridging of assets to and from other chains in the Eureka ecosystem. 2. 
**Asset Issuers** - ensuring the assets you care about being bridged over Eureka are properly set up in the protocol and in the Skip Go API for application developers to support easily. 3. **Application Developers** - ensuring your end users have access to Eureka assets and bridging capabilities via the Skip Go API. ## Chain Developers **If you're developing a Cosmos-based blockchain, the easiest way to unlock Eureka assets and bridging capabilities is by opening up an IBC connection to the Cosmos Hub:** * Requires an IBC (classic) connection to the Cosmos Hub * No chain upgrade is needed if you're already using IBC * Users benefit from reduced cost of asset transfers between Eureka-enabled domains through batching * Chains only need to maintain a single relayer to the Cosmos Hub to reach the entire Eureka and IBC network If you are interested in a direct Eureka connection to Ethereum or L2s/Solana coming later this year, please reach out to [Jeremy](https://t.me/NotJeremyLiu) or [Susannah](https://t.me/bigsuse) directly as additional integration work is required. ## Asset Issuers: Bringing Your Token to Ethereum To bridge assets to Ethereum through Eureka while maintaining full control over your token's representation, you must deploy a custom ERC20 contract. This approach ensures you retain ownership of the token's metadata and on-chain behavior. **Requirements:** 1. **Custom ERC20 Implementation** Deploy an ERC20 contract that implements our required interface standards 2. 
**CoinGecko Listing** Maintain a CoinGecko listing to ensure accurate pricing and metadata in our interfaces **Why deploy a custom ERC20?** * **Full Metadata Control** - Set your token's name, symbol, and other details during deployment * **Verified Ownership** - Register the contract under your project's domain on Etherscan * **Permanent Governance** - Maintain irrevocable control over the token's core logic **Ready to get started?** Follow our step-by-step [Custom ERC20 Integration Guide](/skip-go/eureka/custom-erc20-integration) to deploy your contract. ## Application Developers **If you're an application developer looking to give your users access to Eureka assets in your UI or to leverage them within your protocol, integrating into the Eureka ecosystem via Skip Go is super simple!** ### Requesting New Assets If you want to enable bridging of a new asset (e.g., an Ethereum asset) to Cosmos or Eureka-connected chains, you can submit a request by [joining our Discord](https://discord.com/invite/interchain) and opening a support ticket. Our team will review your request and provide guidance on the next steps. ### New Skip Go Integrator If you're brand new to Skip Go, the [Getting Started](/skip-go/general/getting-started) page is the best resource for getting up to speed on the capabilities of our cross-chain developer platform and the various integration options. * For the quickest and easiest integration, you can integrate the [Widget](/skip-go/widget/getting-started) in minutes! For more control over the UI you provide your users, the [Client Library](/skip-go/client/getting-started) is the way to go. * The integration provides a one-click experience for users to transfer assets across the Eureka ecosystem and beyond in a single integration (via Skip Go's aggregation and composability engine). ### Current Skip Go Integrator Ensuring Eureka works with your Skip Go integration is the same easy process as any other bridge! 
To get routes involving Eureka assets, you must add `"eureka"` to the `experimental_features` array in your `/route` and `/msgs_direct` payload. For example: `"experimental_features": ["eureka"]` if you have no other experimental features enabled, or `"experimental_features": ["cctp", "eureka"]` if you already have other features enabled. This requirement applies to all routes involving Eureka assets, including same-chain swaps. Without this flag, Eureka assets will not be included in routing responses. Changes are as follows: 1. A `eureka_transfer` operation type is returned from the `/route` and `/msgs_direct` endpoints 2. A `eureka_transfer` transfer type is returned from the `/status` endpoint in the transfer sequence 3. A `eureka` bridge type is returned from the `/bridges` endpoint 4. To keep Eureka opt-in, integrators must pass `eureka` into the `experimental_features` array in the `/route` and `/msgs_direct` calls to enable Eureka routing **What this looks like for each type of Skip Go integration:** 1. If you're using the Widget, make sure you're updated to version `3.5.0` or above and pass in `eureka` to the `experimentalFeatures` prop. 2. If you're using the Client Library, make sure you're updated to version `0.16.22` or above and pass in `eureka` to the `experimentalFeatures` param. 3. If you're integrated directly with the REST endpoints, you can find the relevant types in the API reference for the [Route Operation](/skip-go/api-reference/prod/fungible/post-v2fungibleroute#response-operations) and for the [Lifecycle Tracking Transfer](/skip-go/api-reference/prod/transaction/get-v2txstatus#response-transfers). # Security Properties Source: https://docs.cosmos.network/skip-go/eureka/security-properties Depending on where it is deployed, IBC Eureka might have different security properties compared to IBC Classic. This is mainly because EVM chains do not have any form of governance, whereas Cosmos chains do. 
To improve protocol and fund safety at launch, IBC Eureka is going to launch in stages, delineated by improved security properties at each stage. ## Launch stage (0) At launch, IBC Eureka is going to be deployed on two blockchains: Ethereum and Cosmos Hub mainnet. On the Cosmos Hub side, the security properties remain the same as in IBC Classic - governance has ultimate control over the chain, light client and channels. On the Ethereum mainnet side, it is different - a security council will have control over contract upgradeability, pausing and light client upgrades. ### Security council The Eureka Security Council is designated as a 5-of-7 council that can take actions such as: * upgrading the `ICS20Transfer`, `ICS26Router`, `IBCERC20` and `Escrow` contracts * migrating light clients in case of freezing due to misbehaviour, expiration or security vulnerabilities/incidents * designating specific canonical names for IBC applications and light clients on Ethereum mainnet The security council cannot take these actions instantly - the actions are timelocked using a standard OpenZeppelin `TimelockController` contract with a minimum delay of three days. The delay gives an opportunity for the Cosmos Hub to halt inbound / outbound transfers in case of a malicious action taken by the Security Council. The security council is composed of individuals associated with well-respected and trusted entities in the Ethereum and Cosmos communities: * Wildcat Finance * Informal * Hypha * ZK Validator * Chorus One * Keplr * Interchain Labs ### Pausing council The pausing council is designated for rapid-response to a security incident. The only actions that the pausing council can take are pausing and unpausing transfers out of the Ethereum-side contracts. The council is composed of a subset of people in the Security Council who are going to be rapidly responding to security incidents related to canonical IBC Eureka deployments. 
The actions of the pausing council are not time-locked to allow for a quick response time. ## Governance stage (1) After the protocol has successfully launched, the next step in the IBC Eureka roadmap is to allow general contract message passing between chains. This will enable canonical EVM Eureka deployments to be controlled by Cosmos Hub governance. As such, the security council will increase the minimum delay of the `TimelockController` to be longer than the time it takes to pass a governance proposal on the Cosmos Hub. This means that the security council will be much closer to becoming obsolete, while allowing the Cosmos Hub to override actions taken by the security council. ## Pausing stage (2) After a trial period of allowing the Cosmos Hub to govern the canonical Eureka deployments, the security council will revoke its rights and controls over canonical deployments, fully allowing the Cosmos Hub to take over its responsibilities. # Skip Go Asset Registry & Overrides Source: https://docs.cosmos.network/skip-go/support-requirements/asset-registry-overrides ## Background The Skip Go API aggregates asset metadata from multiple public registries, including: * [Cosmos Chain Registry](https://github.com/cosmos/chain-registry) * [Osmosis Assetlists](https://github.com/osmosis-labs/assetlists) * [Keplr Chain Registry](https://github.com/chainapsis/keplr-chain-registry) * [Astroport Token Lists](https://github.com/astroport-fi/astroport-token-lists) While this multi-registry approach ensures broad coverage, it can sometimes lead to inconsistencies such as: * **Mismatched logos** between token and chain representations * **Missing or outdated logos** for newer tokens * **Conflicting metadata** when different registries provide different information for the same asset * **Low-quality or incorrect images** from less-maintained registries * **Decimal display issues** ## The Skip Go Asset Registry To resolve these issues and give teams control over how their assets 
appear across Skip Go, you can submit overrides to the **[Skip Go Asset Registry](https://github.com/skip-mev/skip-go-registry)**. ## When to Submit an Override PR You should submit an override to the Skip Go Asset Registry when: 1. **Logo Mismatch**: Your token logo differs from your chain logo when they should be the same 2. **Missing Logos**: Your token or chain has no logo in Skip Go integrations 3. **Incorrect Metadata**: Name, symbol, decimals, or other metadata is incorrect or outdated 4. **Symbol Conflicts**: You need to differentiate between token versions on different chains (e.g., `USDC.e`, `aUSD.planq`) ## Repository Structure The Skip Go Registry follows this directory structure:

```
chains/
├── [chain_id]/          # EVM chains use numeric chain IDs (e.g., 42161 for Arbitrum)
│   ├── chain.json       # Chain configuration
│   ├── assetlist.json   # Asset definitions
│   └── images/          # Asset logos (optional)
└── [chain_name]/        # Cosmos chains use chain names (e.g., cosmoshub-4)
    ├── chain.json       # Chain configuration
    ├── assetlist.json   # Asset definitions
    └── images/          # Asset logos (optional)
```

## How to Submit Custom Assets ### Step 1: Fork the Repository Fork the [Skip Go Registry](https://github.com/skip-mev/skip-go-registry). ### Step 2: Add or Update Assets #### For EVM Chain Assets **ERC-20 Tokens:** For ERC-20 tokens, include only the required fields, unless other metadata (e.g., decimals) is incorrect:

```json
{
  "asset_type": "erc20",
  "erc20_contract_address": "0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
}
```

Additional fields to add when overriding metadata:

```json
{
  "asset_type": "erc20",
  "erc20_contract_address": "0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48",
  "name": "USD Coin",
  "symbol": "USDC",
  "decimals": 6,
  "logo_uri": "https://raw.githubusercontent.com/...",
  "coingecko_id": "usd-coin"
}
```

####
For Cosmos Chain Assets

```json
{
  "asset_type": "cosmos",
  "name": "Osmosis",
  "denom": "uosmo",
  "logo_uri": "https://raw.githubusercontent.com/..."
}
```

#### Symbol Overrides Use `recommended_symbol` to set the display symbol in Skip Go. This is useful for chain-specific symbols (e.g., to differentiate bridged versions):

```json
{
  "asset_type": "erc20",
  "name": "aUSD",
  "symbol": "aUSD",
  "decimals": 18,
  "erc20_contract_address": "0xA2871B267a7d888F830251F6B4D9d3DFf184995a",
  "recommended_symbol": "aUSD.planq"
}
```

Common symbol override patterns: * `USDT.kava` - Chain suffix with dot notation * `USDC.e` - Bridged token designation * `TIA.n` - Network-specific versions ### Step 3: Asset Requirements **Logo Requirements:** * **Format**: PNG or SVG (host on GitHub or a permanent CDN) * **Size**: Minimum 256x256px; maximum 800x800px for PNG (SVG is vector) * **URL**: Use permanent URLs (GitHub raw content URLs) **Required Fields by Asset Type:** **ERC-20 Tokens:** * `asset_type`, `erc20_contract_address` (checksummed) * Optional but recommended: `name`, `symbol`, `decimals`, `logo_uri`, `coingecko_id` **Cosmos Assets:** * `asset_type`, `name`, `symbol`, `denom`, `decimals` * Recommended: `logo_uri`, `coingecko_id` Always verify decimals with the official token contract or chain documentation.
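As a quick sanity check before running the registry's own validator, the required-field rules listed above can be expressed in a few lines of Python. This is a hypothetical helper for local use, not part of the registry tooling:

```python
# Hypothetical pre-submission check for the required fields listed above.
REQUIRED = {
    "erc20": ["asset_type", "erc20_contract_address"],
    "cosmos": ["asset_type", "name", "symbol", "denom", "decimals"],
}

def missing_fields(entry: dict) -> list:
    """Return the required fields absent from an assetlist.json entry."""
    required = REQUIRED.get(entry.get("asset_type"), [])
    return [f for f in required if f not in entry]

usdc = {
    "asset_type": "erc20",
    "erc20_contract_address": "0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48",
}
print(missing_fields(usdc))  # []
print(missing_fields({"asset_type": "cosmos", "name": "Osmosis", "denom": "uosmo"}))
```

The second call reports `symbol` and `decimals` as missing, mirroring the Cosmos asset requirements above.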
### Step 4: Validate Your Changes Run the validation scripts before submitting:

```bash
cd scripts/config-validator
npm install
npm run validate
```

This checks for: * Schema compliance * Required fields * Valid contract addresses * Duplicate assets ### Step 5: Submit a Pull Request In your PR description, include: * **What you're adding/updating**: Token name, chain name, or symbol override * **Why the change is needed**: E.g., "Logo mismatch" or "Missing metadata" * **Verification**: Link to official project documentation confirming accuracy ### Step 6: PR Review and Merge The Cosmos Labs team reviews submissions within **1-3 business days**. Once merged, updates appear in Skip Go integrations within **24 hours**. ## Verifying Your Assets After your PR is merged, verify using the Skip Go API: **Endpoint:** `https://api.skip.build/v2/fungible/assets` **Query Parameters:** * `chain_ids`: Limit response to specific chains * `include_evm_assets=true`: Required to see EVM tokens * `include_cw20_assets=true`: Include CW20 tokens * `native_only`: Restrict to native assets only **Example Response:**

```json
{
  "chain_to_assets_map": {
    "42161": {
      "assets": [{
        "denom": "0x816E21c33fa5F8440EBcDF6e01D39314541BEA72",
        "chain_id": "42161",
        "symbol": "agETH",
        "name": "Kelp Gain",
        "logo_uri": "https://raw.githubusercontent.com/...",
        "decimals": 18,
        "coingecko_id": "kelp-gain",
        "recommended_symbol": "agETH"
      }]
    }
  }
}
```

## Common Issues & Solutions ### My token shows a different logo than my chain **Solution**: Add both token and chain entries to the registry with matching logos. ### My logo isn't updating after PR merge **Solution**: Wait up to 24 hours for cache invalidation. Check using the API endpoint above.
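Checking the endpoint can be scripted with nothing but the standard library. The endpoint and query parameters below are the ones documented in this section; the response literal mirrors the documented example, so no network call is needed to try the parsing:

```python
import json
from urllib.parse import urlencode

# Assemble the verification query from the documented parameters.
params = {"chain_ids": "42161", "include_evm_assets": "true"}
url = "https://api.skip.build/v2/fungible/assets?" + urlencode(params)
print(url)

# Pull the symbols out of a response shaped like the documented example.
response = json.loads("""
{"chain_to_assets_map": {"42161": {"assets": [
    {"denom": "0x816E21c33fa5F8440EBcDF6e01D39314541BEA72",
     "symbol": "agETH", "decimals": 18}]}}}
""")
symbols = [a["symbol"] for a in response["chain_to_assets_map"]["42161"]["assets"]]
print(symbols)  # ['agETH']
```

If your asset's symbol (or `recommended_symbol`) does not appear in the real response, the override has not propagated yet.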
### My token needs a different symbol on different chains **Solution**: Use the `recommended_symbol` field (e.g., `"USDC.e"` for bridged USDC). ### EVM assets not appearing in API **Solution**: Query with the `include_evm_assets=true` parameter. ## Support If you encounter issues: * [Interchain Discord](https://discord.com/invite/interchain) * **GitHub Issues**: [Skip Go Registry Issues](https://github.com/skip-mev/skip-go-registry/issues) ## Related Documentation * [Token & Route Support Requirements](./token-support-requirements): How to add new tokens to Skip Go * [Chain Support Requirements](./chain-support-requirements): How to request chain integration * [Skip Go API Reference](/skip-go/api-reference/prod/fungible/get-v2fungibleassets): View asset metadata via API # Chain Integration Request Source: https://docs.cosmos.network/skip-go/support-requirements/chain-integration-request # Chain Support Requirements Source: https://docs.cosmos.network/skip-go/support-requirements/chain-support-requirements This document describes what new chains need to do to be supported by the Skip Go API ## Background * New chains often want Skip Go API to add support for their chain as a source + destination for tokens because the API powers cross-chain swaps + transfers in all the major cosmos wallets (Leap, Keplr, IBC Wallet, Metamask Snap) and many popular DeFi aggregators and dapp frontends (e.g. Stargaze). As a result, being added to the Skip Go API instantly offers distribution across the interchain * This document covers the basic requirements chains must satisfy and steps their contributors must complete in order for Skip Go API to support them * Once you've met the requirements below, submit an integration request through our [API Chain Integration Request Form](/skip-go/support-requirements/chain-integration-request) **Want help at the beginning of this process?** Getting connected to IBC, Axelar, CCTP, LayerZero or Hyperlane can be hard. Even choosing among them is a challenge.
We're happy to provide guidance and hands-on support to serious teams early in their journey -- even before they've made a choice of interop protocol if helpful! **This guide assumes using IBC for Interop** The rest of this guide assumes you want Skip to support users interacting with your chain primarily over IBC. The Skip Go API supports other bridges and interop protocols in addition to IBC, including Hyperlane, CCTP, and Axelar. If you're using one of these, please get in contact with us on [our Discord](https://discord.com/invite/interchain), and we will help guide you through it to the extent we can. These other interop protocols are less standardized and/or less permissionless than IBC, so the process of adding support for new chains is more bespoke and varies by protocol. We're happy to help where we can, providing guidance, implementation, and introductions where necessary. ## 1. Satisfy the following basic requirements 1. The chain provides clear instructions for permissionlessly running a full node and joining the network. Instructions should commonly include: 1. Link to genesis file 2. Full node binary or instructions for building binary from source code 3. Public peer / seeder nodes 4. Public RPCs 2. Chain metadata is available in a commonly used chain registry (e.g. [https://github.com/cosmos/chain-registry](https://github.com/cosmos/chain-registry)). Metadata should include: 1. Chain name (and optionally "pretty name") 2. Website 3. chain\_id 4. Bech32 prefix 5. slip44 (aka "coin type") 6. Fee information (with denom, low price, average price, and high price) 7. Logo URIs 8. Persistent peer lists 9. Public RPCs *This metadata and the chain registry of choice might differ for EVM chains. Please use your best judgement of what's required* 3. IBC, Axelar, LayerZero, Eureka, and/or Hyperlane support. ## 2.
Configure IBC Machinery *Here we set up IBC clients, channels, and relayers.* **What is a relayer?** Relayers are the off-chain actors that: * Keep IBC light clients up to date (Regular updates are required to prevent "expiration") * Monitor chains for outbound IBC packets, grab them, and send the packet data and packet proof to the destination chain *The easiest way to complete the steps below is to use IBC relayer software ([Hermes](https://github.com/informalsystems/hermes) or [Relayer](https://github.com/cosmos/relayer)). The CLIs of both support channel and client instantiation.* For each chain that you want your chain to have a direct IBC transfer path to, you must complete the following steps to ensure IBC works properly: 1. Create a light client of the remote chain on your chain, and a light client of your chain on the remote chain 2. Create an ICS-20 "transfer" channel between these two clients **Don't create more than 1 channel between your chain and a remote chain** Your chain only needs 1 transfer channel for each chain it should communicate with directly. All tokens from your chain can be transferred to a particular remote chain over the same channel. Additional transfer channels may create confusion and liquidity fragmentation since users will need to pick which channel to transfer over (The Skip Go API automates this choice for the apps and users that use it, but others might not be so lucky) 3. Ensure there is at least 1 reliable relayer covering the channel (who can keep the light clients up to date and ferry packets between the two chains over time) **What if I don't want to run my own relayers?** Get in touch with us on [our Discord](https://discord.gg/interchain). We have great relationships with all the top relayer operators in the Cosmos ecosystem and can put you in touch with them. ## 3. Configure support for each asset minted on your chain For each native asset that you want to ensure users can transfer over the channel: 1.
Transfer a non-zero amount of the token over the channel 2. Confirm that the token successfully gets transferred to the destination chain 3. **Leave the transferred tokens on the destination chain** ## 4. What you'll be asked for when submitting your chain on Discord to get added to the Skip Go API Please include the following: * Team Name: * Team Contact Name: * Team Contact Telegram: * Team Website: * Swapping Venue Name (if different from team name): * Swapping Venue Website (if different from team website): * Swapping Venue Documentation: Chain supported by the Skip API * Use the [`/info/chains` endpoint](../api-reference/prod/info/get-v2infochains) to verify: `/v2/info/chains` * If not listed, follow the [Chain Support Requirements](./chain-support-requirements) * *This is a required prerequisite* CosmWasm Support * Include documentation or notes on any special setup or limitations IBC Support * Provide relevant details, especially if not using standard IBC-go IBC-hooks Support * Indicate if using upstream ibc-hooks or a modified version * [Upstream ibc-hooks](https://github.com/cosmos/ibc-apps/tree/main/modules/ibc-hooks) *** ## **Additional Chain Questions** * Does the chain support **Packet Forward Middleware (PFM)?** * Is **CosmWasm contract deployment permissionless**? If not, what is the approval or governance process? **Why is this required?** Warm starting the channels kicks off Skip's intelligent routing suggestions for folks bridging to and from your chain. We choose routes between chains that ensure users are always receiving the most desirable version of their chosen token on their destination chain. As a part of providing good user experiences for everyone using the API, we don't enable users to bridge assets to new chains where no one has previously bridged that asset. (Oftentimes, for ordinary users, taking an existing token to a chain where it doesn't exist leaves them stuck on that new chain with a useless token).
That's why we need to "warm start" channels -- to enable recommending them as bridging routes. **Have questions or feedback? Help us get better!** Join [our Discord](https://discord.gg/interchain) and select the "Skip Go Developer" role to share your questions and feedback. # Swap Venue Requirements Source: https://docs.cosmos.network/skip-go/support-requirements/swap-venue-requirements This document covers what Skip Go API requires of DEXes to support them as potential swapping venues within the API's cross-chain DEX aggregation functionality. At the end, the document provides instructions for helping the Skip team add your DEX to the API as a swapping venue. ## Background * DEXes often want Skip Go API to add support for their DEX as a swapping venue because the API powers cross-chain swaps + transfers in all the major cosmos and ethereum wallets (MM, Keplr, IBC Wallet, Metamask Snap) and cross-chain DEX aggregation to many popular defi aggregator and dapp frontends (e.g. Stargaze). As a result, being added to the Skip Go API instantly offers distribution across the interchain for your DEX * The Skip Go API’s swapping system is currently built in CosmWasm and can support swapping assets on Cosmos SDK modules (ex: Osmosis Poolmanager) and other CosmWasm contracts (ex: Astroport DEX) that can be queried and executed by Skip Go API’s CosmWasm contracts. The API also supports swapping on EVM DEXes (Uniswap v2/v3, Aerodrome, Velodrome). ### Chain Requirements 1. The chain must already be supported by the Skip Go API 1. Use the `/info/chains` endpoint to query a list of actively supported chains: [/v2/info/chains](/skip-go/api-reference/prod/info/get-v2infochains) 2. If your chain is not already supported, follow the instructions in [Chain Support Requirements](./chain-support-requirements) to request support 3. ***This is a pre-requisite*** 2. CosmWasm Support 3. IBC support 4.
ibc-hooks Support (Check out [our blog post about ibc-hooks](https://ideas.skip.build/t/how-to-give-ibc-superpowers/81)) ### Module / Contract Requirements ### General 1. The module / contract must be able to be called by the Skip Go API’s CosmWasm contracts. For Cosmos SDK modules, this will require the module queries described below to be whitelisted and queryable by CosmWasm contracts ([see Osmosis for an example](https://github.com/osmosis-labs/osmosis/blob/d7eb3b7018cde0557216237c84f063b3915af650/wasmbinding/stargate%5Fwhitelist.go#L169)). #### Execution Messages 1. Supports a “Swap Exact In” method where a user specifies an input asset and path to swap, and the module / contract swaps the given user asset to the user’s desired output asset and sends it to the user ([see Osmosis for a module example](https://github.com/osmosis-labs/osmosis/blob/d7eb3b7018cde0557216237c84f063b3915af650/x/poolmanager/msg%5Fserver.go#L22), or [Astroport for a contract example](https://github.com/astroport-fi/astroport-core/blob/52af83eab04c620ac40019f7cc9cee433d0c601e/contracts/router/src/contract.rs#L74)). 1. Inputs into the swap: 1. An asset (Native cosmos coin or CW20 token, incl. denom and amount) 2. A path (can be a single pool, or multiple pools if designed like a router) 2. Outputs of the swap: 1. An asset 2. NICE TO HAVE (Optional): Supports a “Swap Exact Out” method where a user specifies a desired output asset, a path to swap through to achieve that asset, and a maximum amount of an input asset to swap, and the module / contract swaps in the exact input asset needed to acquire the specified output asset and sends it to the user ([see Osmosis for a module example](https://github.com/osmosis-labs/osmosis/blob/d7eb3b7018cde0557216237c84f063b3915af650/x/poolmanager/msg%5Fserver.go#L48)). ### Query Messages 3.
Exposes a “Swap Exact In Simulation” method where a user can put the inputs that would be used in the “Swap Exact In” execution method, and gets a response from the query that specifies the asset they would receive if executing the method ([see Osmosis for a module example](https://github.com/osmosis-labs/osmosis/blob/d7eb3b7018cde0557216237c84f063b3915af650/x/poolmanager/client/grpc/grpc%5Fquery.go#L113), or [Astroport for a contract example](https://github.com/astroport-fi/astroport-core/blob/52af83eab04c620ac40019f7cc9cee433d0c601e/contracts/router/src/contract.rs#L240)). 4. Exposes a “Swap Exact Out Simulation” method where a user can input the asset desired and a given pool / path, and the query returns the asset required to swap in to receive the output asset desired ([see Osmosis for a module example](https://github.com/osmosis-labs/osmosis/blob/d7eb3b7018cde0557216237c84f063b3915af650/x/poolmanager/client/grpc/grpc%5Fquery.go#L93), or [Astroport for a contract example](https://github.com/astroport-fi/astroport-core/blob/52af83eab04c620ac40019f7cc9cee433d0c601e/contracts/pair/src/contract.rs#L892)). ## Getting Skip to Add Support for Your DEX Once your DEX and its chain meet the requirements listed above, you can submit it for integration by opening a support ticket in our [Discord](https://discord.gg/interchain). 
We may request additional information to help prioritize the integration of your swap venue based on: * **Swap volume** * **Liquidity** * **Quality and clarity of technical documentation** *** ### Required Submission Info #### General Information * Team Name * Contact Name * Contact Telegram * Team Website * Swapping Venue Name (if different from team name) * Swapping Venue Website * Swapping Venue Documentation *** #### Additional DEX Information Please answer the following questions to help us assess integration suitability: * **What kind of venue is it?** *e.g., AMM, CLOB, staking protocol* * **If an AMM, what pricing curves are supported?** * Include documentation * Describe any non-standard logic * **How are fees calculated?** * Pool fees, maker/taker fees, etc. * Include documentation and note any non-standard mechanisms * **Which tokens typically offer better pricing on this venue?** * **Which tokens or token pairs are the most popular?** # Token & Route Support Requirements Source: https://docs.cosmos.network/skip-go/support-requirements/token-support-requirements This document describes the steps you must complete for the Skip Go API to begin providing new routes for users to transfer a token over to various remote chains using IBC. ## Background * New tokens often want Skip Go API to add support for transferring their token to other chains because the API powers cross-chain swaps + transfers in all the major cosmos wallets (Leap, Keplr, IBC Wallet, Metamask Snap) and cross-chain DEX aggregation for many popular defi aggregator and dapp frontends (e.g. Stargaze).
As a result, being added to the Skip Go API instantly offers distribution across the interchain for a new token * This document covers the basic requirements tokens must satisfy and steps their contributors must complete in order for Skip Go API to support transferring them throughout the interchain **Guide assumes using IBC for interop** This guide assumes you're using IBC to transfer your token between chains. The Skip Go API supports other bridges and interop protocols in addition to IBC, including Hyperlane, CCTP, and Axelar. If you're using one of these, please get in contact with us on [our Discord](https://discord.com/invite/interchain), and we will help guide you through it to the extent we can. These other interop protocols are less standardized and/or less permissionless than IBC, so the process of adding support for transferring new tokens over them is more bespoke and varies by protocol. We're happy to help where we can, providing guidance, implementation, and introductions where necessary. ## 1. Satisfy the following basic requirements 1. The chain where the token is issued must already be supported by the Skip Go API 1. Use the `/info/chains` endpoint to query a list of actively supported chains: [/v2/info/chains](/skip-go/api-reference/prod/info/get-v2infochains) 2. If the chain is not already supported, follow the instructions in [Chain Support Requirements](./chain-support-requirements) to request support 3. ***This is a pre-requisite*** 2. The Skip Go API must also support the remote chains to which you wish users to be able to transfer the asset 1. Use the `/info/chains` endpoint to query a list of actively supported chains: [/v2/info/chains](/skip-go/api-reference/prod/info/get-v2infochains) 2. If the chain is not already supported, follow the instructions in [Chain Support Requirements](./chain-support-requirements) to request support 3.
***This is a pre-requisite*** **Note:** We are unable to support IBC tokens from a source chain that is not integrated with Skip Go. Skip Go requires direct integration with the source chain to query chain state and balances, submit and monitor transactions, calculate accurate gas fees, and handle chain-specific token mechanics. 3. Token metadata is available in a commonly used chain registry (e.g. [Cosmos Chain Registry](https://github.com/cosmos/chain-registry)). Metadata should include at least: 1. Denom (programmatic string identifier) 2. Symbol (aka "ticker") 3. Asset name (human readable denom) 4. Display name (aka pretty name) 5. Decimals / exponent 6. Images 7. coingecko\_id (if applicable) 8. Description **Need to customize your token's logo or metadata?** If your token has incorrect logos, conflicting metadata, or branding issues from external registries, you can submit overrides directly to the Skip Go Asset Registry. See [Asset Registry & Overrides](./asset-registry-overrides) for detailed instructions. 4. Ensure IBC relayers are actively monitoring and relaying packets on all channels over which you want users to transfer your token (See [Chain Support Requirements](./chain-support-requirements) for more info on relayers.) ## 2. "Warm Start" your Asset Routes For each destination chain: 1. Pick a channel that you would like to be the canonical channel for transferring the asset to this destination chain 2. Transfer a non-zero amount of the token over the channel 3. Confirm that the token successfully gets transferred to the destination chain 4. **Leave the transferred tokens on the destination chain** **How do I pick a channel for a destination chain?** If you're launching a new chain, you should just pick whatever channel your team has set up. Usually, there's just one highly-trafficked and well-relayed channel between two chains over which all assets are transferred.
(In theory, there can be many because IBC is permissionless, but usually relayers are only monitoring 1 and creating more adds confusion for all parties) If you're launching a new token on a chain that already has a vibrant IBC ecosystem and has already issued tokens that are widely used throughout the interchain (e.g. Osmosis or Neutron), you should probably use the same channel the well-established tokens use, since relayers are most likely to support these ones. To see which channel this is, call the [/v2/fungible/recommend\_assets](/skip-go/api-reference/prod/fungible/get-v2fungibleassets) endpoint with the following values: * `source_denom`: A well-established token on the chain where your asset is issued (e.g. `uatom`) * `source_chain_id`: The `chain_id` of the chain where your asset is issued (e.g. `cosmoshub-4`) * `dest_chain_id`: The `chain_id` of the chain to which you want to be able to transfer your asset (e.g. `osmosis-1`) The channel you want to use is available in the response in `recommendations[0].asset.trace`. **How do I transfer tokens over my chosen channel before Skip Go API supports it?** The easiest way to transfer tokens over a channel before official Skip Go API support is to use Keplr's developer mode. To enable developer mode in the Keplr extension, open the hamburger menu, click on settings, then click advanced, then activate the toggle for "Developer Mode". Once developer mode is active, at the bottom of the main page you should see "Advanced IBC Transfer". Click on this, then follow the instructions for inputting your token and desired channel ID. **Why is this required?** Warm starting the channels kicks off Skip's intelligent routing suggestions for folks bridging to and from your chain. We choose routes between chains that ensure users are always receiving the most desirable version of their chosen token on their destination chain.
As a part of providing good user experiences for everyone using the API, we don't enable users to bridge assets to new chains where no one has previously bridged that asset. (Oftentimes, for ordinary users, taking an existing token to a chain where it doesn't exist leaves them stuck on that new chain with a useless token). That's why we need to "warm start" channels -- to enable recommending them as bridging routes. ## 3. Wait up to 24 hours and verify Skip's intelligent route detection should automatically detect new routes for all assets and chains that meet the above requirements in 4-8 hours. This will not happen immediately. Please ensure you wait the necessary amount of time. After you've let enough time pass, you can verify that Skip Go API supports the new routes you've configured using the [/v2/fungible/recommend\_assets](/skip-go/api-reference/prod/fungible/get-v2fungibleassets) endpoint. For each destination chain you've configured, call this endpoint with the following data: * `source_denom`: Your token * `source_chain_id`: The chain on which your token is issued * `dest_chain_id`: The chain to which you've warm-started an IBC route in the previous step ## Common questions ### I want a CW20 token added to Skip Go, what do I need to do to add it? 1. To add a CW20 token, you should first make sure it's usable (it either has cw20-ics20 converter contracts deployed so the token can be transferred over IBC, or it is source-chain swappable on a swap venue used by Skip Go). 2. Once confirmed usable, you must add the CW20 token's metadata to a registry we support indexing from. The easiest one is likely the [Cosmos Chain Registry](https://github.com/cosmos/chain-registry); you can check out an example of a CW20 token entry in [Archway's assetlist.json](https://github.com/cosmos/chain-registry/blob/master/archway/assetlist.json). 3. Once the PR is merged into a registry, our indexing will add it to the API on our next indexing run (hourly).
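One detail worth knowing when confirming that a warm-start transfer landed: on the destination chain the token shows up under an `ibc/...` denom, which under the standard ICS-20 convention is derived as the uppercase hex SHA-256 hash of the full transfer trace path. A minimal sketch (the trace path below is illustrative, assuming ATOM sent over `channel-0`):

```python
import hashlib

def ibc_voucher_denom(trace_path: str) -> str:
    """ICS-20 voucher denom: 'ibc/' + uppercase hex SHA-256 of the trace path."""
    digest = hashlib.sha256(trace_path.encode()).hexdigest().upper()
    return f"ibc/{digest}"

# e.g. ATOM received on a counterparty chain over channel-0:
print(ibc_voucher_denom("transfer/channel-0/uatom"))
```

Comparing this derived denom with your balance on the destination chain confirms the token arrived over the channel you intended to warm-start, rather than over some other path.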