> /config.json`.
>
> For the AvalancheGo node configuration options, see the AvalancheGo Configuration page.
This document describes all configuration options available for Subnet-EVM.
## Example Configuration
```json
{
"eth-apis": ["eth", "eth-filter", "net", "web3"],
"pruning-enabled": true,
"commit-interval": 4096,
"trie-clean-cache": 512,
"trie-dirty-cache": 512,
"snapshot-cache": 256,
"rpc-gas-cap": 50000000,
"log-level": "info",
"metrics-expensive-enabled": true,
"continuous-profiler-dir": "./profiles",
"state-sync-enabled": false,
"accepted-cache-size": 32
}
```
## Configuration Format
Configuration is provided as a JSON object. All fields are optional unless otherwise specified.
## API Configuration
### Ethereum APIs
| Option | Type | Description | Default |
| ---------- | ---------------- | ------------------------------------------------ | ----------------------------------------------------------------------------------------------------- |
| `eth-apis` | array of strings | List of Ethereum services that should be enabled | `["eth", "eth-filter", "net", "web3", "internal-eth", "internal-blockchain", "internal-transaction"]` |
### Subnet-EVM Specific APIs
| Option | Type | Description | Default |
| ------------------------ | ------ | -------------------------------------------------- | ------- |
| `validators-api-enabled` | bool | Enable the validators API | `true` |
| `admin-api-enabled` | bool | Enable the admin API for administrative operations | `false` |
| `admin-api-dir` | string | Directory for admin API operations | - |
| `warp-api-enabled` | bool | Enable the Warp API for cross-chain messaging | `false` |
### API Limits and Security
| Option | Type | Description | Default |
| ---------------------------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------ |
| `rpc-gas-cap` | uint64 | Maximum gas limit for RPC calls | `50,000,000` |
| `rpc-tx-fee-cap` | float64 | Maximum transaction fee cap in AVAX | `100` |
| `api-max-duration` | duration | Maximum duration for API calls (0 = no limit) | `0` |
| `api-max-blocks-per-request` | int64 | Maximum number of blocks per getLogs request (0 = no limit) | `0` |
| `http-body-limit` | uint64 | Maximum size of HTTP request bodies | - |
| `batch-request-limit` | uint64 | Maximum number of requests that can be batched in an RPC call. For no limit, set either this or `batch-response-max-size` to 0 | `1000` |
| `batch-response-max-size`    | uint64   | Maximum size (in bytes) of response that can be returned from a batched RPC call. For no limit, set either this or `batch-request-limit` to 0                       | `25 MB`      |
### WebSocket Settings
| Option | Type | Description | Default |
| -------------------- | -------- | ------------------------------------------------------------------ | ------- |
| `ws-cpu-refill-rate` | duration | Rate at which WebSocket CPU usage quota is refilled (0 = no limit) | `0` |
| `ws-cpu-max-stored` | duration | Maximum stored WebSocket CPU usage quota (0 = no limit) | `0` |
## Cache Configuration
### Trie Caches
| Option | Type | Description | Default |
| ----------------------------- | ---- | -------------------------------------------------------------------------- | ------- |
| `trie-clean-cache` | int | Size of the trie clean cache in MB | `512` |
| `trie-dirty-cache` | int | Size of the trie dirty cache in MB | `512` |
| `trie-dirty-commit-target` | int | Memory limit to target in the dirty cache before performing a commit in MB | `20` |
| `trie-prefetcher-parallelism` | int | Maximum concurrent disk reads trie prefetcher should perform | `16` |
### Other Caches
| Option | Type | Description | Default |
| ------------------------------ | ---- | ------------------------------------------------------------- | ------- |
| `snapshot-cache` | int | Size of the snapshot disk layer clean cache in MB | `256` |
| `accepted-cache-size` | int | Depth to keep in the accepted headers and logs cache (blocks) | `32` |
| `state-sync-server-trie-cache` | int | Trie cache size for state sync server in MB | `64` |
## Ethereum Settings
### Transaction Processing
| Option | Type | Description | Default |
| ----------------------------- | ----- | ------------------------------------------------------------- | -------------------- |
| `preimages-enabled` | bool | Enable preimage recording | `false` |
| `allow-unfinalized-queries` | bool | Allow queries for unfinalized blocks | `false` |
| `allow-unprotected-txs` | bool | Allow unprotected transactions (without EIP-155) | `false` |
| `allow-unprotected-tx-hashes` | array | List of specific transaction hashes allowed to be unprotected | EIP-1820 registry tx |
| `local-txs-enabled` | bool | Enable treatment of transactions from local accounts as local | `false` |
### Snapshots
| Option | Type | Description | Default |
| ------------------------------- | ---- | --------------------------------------- | ------- |
| `snapshot-wait` | bool | Wait for snapshot generation on startup | `false` |
| `snapshot-verification-enabled` | bool | Enable snapshot verification | `false` |
## Pruning and State Management
### Basic Pruning
| Option | Type | Description | Default |
| ---------------------- | ------ | ---------------------------------------------------------- | ------- |
| `pruning-enabled` | bool | Enable state pruning to save disk space | `true` |
| `commit-interval` | uint64 | Interval at which to persist EVM and atomic tries (blocks) | `4096` |
| `accepted-queue-limit` | int | Maximum blocks to queue before blocking during acceptance | `64` |
### State Reconstruction
| Option | Type | Description | Default |
| ------------------------------------ | ------ | ---------------------------------------------------------------- | ------- |
| `allow-missing-tries` | bool | Suppress warnings about incomplete trie index | `false` |
| `populate-missing-tries` | uint64 | Starting block for re-populating missing tries (null = disabled) | `null` |
| `populate-missing-tries-parallelism` | int | Concurrent readers for re-populating missing tries | `1024` |
### Offline Pruning
| Option | Type | Description | Default |
| ----------------------------------- | ------ | ------------------------------------------- | ------- |
| `offline-pruning-enabled` | bool | Enable offline pruning | `false` |
| `offline-pruning-bloom-filter-size` | uint64 | Bloom filter size for offline pruning in MB | `512` |
| `offline-pruning-data-directory` | string | Directory for offline pruning data | - |
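For example, a one-off offline pruning run could be enabled with a config like the following (the data directory path here is illustrative):

```json
{
  "offline-pruning-enabled": true,
  "offline-pruning-bloom-filter-size": 512,
  "offline-pruning-data-directory": "/data/offline-pruning"
}
```

After the run completes, the option is typically set back to `false` so that subsequent restarts do not trigger pruning again.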
### Historical Data
| Option | Type | Description | Default |
| ------------------------------- | ------ | --------------------------------------------------------------------------------------- | ------- |
| `historical-proof-query-window` | uint64 | Number of blocks before last accepted for proof queries (archive mode only, \~24 hours) | `43200` |
| `state-history`                 | uint64 | Number of most recent states that are accessible on disk (pruning mode only)             | `32`    |
## Transaction Pool Configuration
| Option | Type | Description | Default |
| ----------------------- | -------- | ------------------------------------------------------------------- | ------- |
| `tx-pool-price-limit` | uint64 | Minimum gas price for transaction acceptance | - |
| `tx-pool-price-bump` | uint64 | Minimum price bump percentage for transaction replacement | - |
| `tx-pool-account-slots` | uint64 | Maximum number of executable transaction slots per account | - |
| `tx-pool-global-slots` | uint64 | Maximum number of executable transaction slots for all accounts | - |
| `tx-pool-account-queue` | uint64 | Maximum number of non-executable transaction slots per account | - |
| `tx-pool-global-queue` | uint64 | Maximum number of non-executable transaction slots for all accounts | - |
| `tx-pool-lifetime` | duration | Maximum time transactions can stay in the pool | - |
## Gossip Configuration
### Push Gossip Settings
| Option | Type | Description | Default |
| ---------------------------- | ------- | ------------------------------------------------------------ | ------- |
| `push-gossip-percent-stake` | float64 | Percentage of total stake to push gossip to (range: \[0, 1]) | `0.9` |
| `push-gossip-num-validators` | int | Number of validators to push gossip to | `100` |
| `push-gossip-num-peers` | int | Number of non-validator peers to push gossip to | `0` |
### Regossip Settings
| Option | Type | Description | Default |
| ------------------------------ | ----- | -------------------------------------------- | ------- |
| `push-regossip-num-validators` | int | Number of validators to regossip to | `10` |
| `push-regossip-num-peers` | int | Number of non-validator peers to regossip to | `0` |
| `priority-regossip-addresses` | array | Addresses to prioritize for regossip | - |
### Timing Configuration
| Option | Type | Description | Default |
| ----------------------- | -------- | ------------------------ | ------- |
| `push-gossip-frequency` | duration | Frequency of push gossip | `100ms` |
| `pull-gossip-frequency` | duration | Frequency of pull gossip | `1s` |
| `regossip-frequency` | duration | Frequency of regossip | `30s` |
## Logging and Monitoring
### Logging
| Option | Type | Description | Default |
| ----------------- | ------ | ----------------------------------------------------- | -------- |
| `log-level` | string | Logging level (trace, debug, info, warn, error, crit) | `"info"` |
| `log-json-format` | bool | Use JSON format for logs | `false` |
### Profiling
| Option | Type | Description | Default |
| ------------------------------- | -------- | ----------------------------------------------------------- | ------- |
| `continuous-profiler-dir` | string | Directory for continuous profiler output (empty = disabled) | - |
| `continuous-profiler-frequency` | duration | Frequency to run continuous profiler | `15m` |
| `continuous-profiler-max-files` | int | Maximum number of profiler files to maintain | `5` |
### Metrics
| Option | Type | Description | Default |
| --------------------------- | ---- | -------------------------------------------------------------------- | ------- |
| `metrics-expensive-enabled` | bool | Enable expensive debug-level metrics; this includes Firewood metrics | `true` |
## Security and Access
### Keystore
| Option | Type | Description | Default |
| ---------------------------------- | ------ | -------------------------------------------------------- | ------- |
| `keystore-directory` | string | Directory for keystore files (absolute or relative path) | - |
| `keystore-external-signer` | string | External signer configuration | - |
| `keystore-insecure-unlock-allowed` | bool | Allow insecure account unlocking | `false` |
### Fee Configuration
| Option | Type | Description | Default |
| -------------- | ------ | ------------------------------------------------------------------ | ------- |
| `feeRecipient` | string | Address to send transaction fees to (leave empty if not supported) | - |
## Network and Sync
### Network
| Option | Type | Description | Default |
| ------------------------------ | ----- | ------------------------------------------------------------ | ------- |
| `max-outbound-active-requests` | int64 | Maximum number of outbound active requests for VM2VM network | `16` |
### State Sync
| Option | Type | Description | Default |
| ---------------------------- | ------ | ------------------------------------------------------- | -------- |
| `state-sync-enabled` | bool | Enable state sync | `false` |
| `state-sync-skip-resume` | bool | Force state sync to use highest available summary block | `false` |
| `state-sync-ids` | string | Comma-separated list of state sync IDs | - |
| `state-sync-commit-interval` | uint64 | Commit interval for state sync (blocks) | `16384` |
| `state-sync-min-blocks` | uint64 | Minimum blocks ahead required for state sync | `300000` |
| `state-sync-request-size` | uint16 | Number of key/values to request per state sync request | `1024` |
## Database Configuration
> **WARNING**: `firewood` and `path` schemes are untested in production. Using `path` is strongly discouraged. To use `firewood`, you must also set the following config options:
>
> * `pruning-enabled: true` (enabled by default)
> * `state-sync-enabled: false`
> * `snapshot-cache: 0`
> Failing to set these options will result in errors on VM initialization. Additionally, not all APIs are available; see the relevant API sections of this document for more details.
| Option | Type | Description | Default |
| ------------------------- | ------ | --------------------------------------------------------------------------------------------------- | ------------ |
| `database-type` | string | Type of database to use | `"pebbledb"` |
| `database-path` | string | Path to database directory | - |
| `database-read-only` | bool | Open database in read-only mode | `false` |
| `database-config` | string | Inline database configuration | - |
| `database-config-file` | string | Path to database configuration file | - |
| `use-standalone-database` | bool | Use standalone database instead of shared one | - |
| `inspect-database` | bool | Inspect database on startup | `false` |
| `state-scheme` | string | EXPERIMENTAL: specifies the database scheme to store state data; can be one of `hash` or `firewood` | `hash` |
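Putting the warning above together with the `state-scheme` option, a minimal sketch of a config that opts into the experimental Firewood scheme could look like:

```json
{
  "state-scheme": "firewood",
  "pruning-enabled": true,
  "state-sync-enabled": false,
  "snapshot-cache": 0
}
```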
## Transaction Indexing
| Option | Type | Description | Default |
| --------------------- | ------ | ---------------------------------------------------------------------------------------- | ------- |
| `transaction-history` | uint64 | Maximum number of blocks from head whose transaction indices are reserved (0 = no limit) | - |
| `tx-lookup-limit` | uint64 | **Deprecated** - use `transaction-history` instead | - |
| `skip-tx-indexing` | bool | Skip indexing transactions entirely | `false` |
## Warp Configuration
| Option | Type | Description | Default |
| ------------------------- | ----- | ----------------------------------------------------- | ------- |
| `warp-off-chain-messages` | array | Off-chain messages the node should be willing to sign | - |
| `prune-warp-db-enabled` | bool | Clear warp database on startup | `false` |
## Miscellaneous
| Option | Type | Description | Default |
| -------------------- | ------ | -------------------------------------------------------------------------------------------------------------------------- | ------- |
| `airdrop` | string | Path to airdrop file | - |
| `skip-upgrade-check` | bool | Skip checking that upgrades occur before last accepted block ⚠️ **Warning**: Only use when you understand the implications | `false` |
## Gossip Constants
The following constants are defined for transaction gossip behavior and cannot be configured without a custom build of Subnet-EVM:
| Constant | Type | Description | Value |
| --------------------------------------- | -------- | ------------------------------------------ | -------- |
| Bloom Filter Min Target Elements | int | Minimum target elements for bloom filter | `8,192` |
| Bloom Filter Target False Positive Rate | float | Target false positive rate | `1%` |
| Bloom Filter Reset False Positive Rate | float | Reset false positive rate | `5%` |
| Bloom Filter Churn Multiplier | int | Churn multiplier | `3` |
| Push Gossip Discarded Elements | int | Number of discarded elements | `16,384` |
| Tx Gossip Target Message Size | size | Target message size for transaction gossip | `20 KiB` |
| Tx Gossip Throttling Period | duration | Throttling period | `10s` |
| Tx Gossip Throttling Limit | int | Throttling limit | `2` |
| Tx Gossip Poll Size | int | Poll size | `1` |
## Validation Notes
* Cannot enable `populate-missing-tries` while pruning or offline pruning is enabled
* Cannot run offline pruning while pruning is disabled
* Commit interval must be non-zero when pruning is enabled
* `push-gossip-percent-stake` must be in range `[0, 1]`
* Some settings may require node restart to take effect
# X-Chain Configs
URL: /docs/nodes/chain-configs/x-chain
This page describes the configuration options available for the X-Chain.
In order to specify a config for the X-Chain, a JSON config file should be
placed at `{chain-config-dir}/X/config.json`.
For example if `chain-config-dir` has the default value which is
`$HOME/.avalanchego/configs/chains`, then `config.json` can be placed at
`$HOME/.avalanchego/configs/chains/X/config.json`.
This allows you to specify a config to be passed into the X-Chain. The default
values for this config are:
```json
{
"checksums-enabled": false
}
```
Default values are overridden only if explicitly specified in the config.
The parameters are as follows:
### `checksums-enabled`
*Boolean*
Enables checksums if set to `true`.
# Avalanche L1 Configs
URL: /docs/nodes/configure/avalanche-l1-configs
This page describes the configuration options available for Avalanche L1s.
# Subnet Configs
It is possible to provide parameters for a Subnet. Parameters here apply to all
chains in the specified Subnet.
AvalancheGo looks for files specified with `{subnetID}.json` under
`--subnet-config-dir` as documented
[here](https://build.avax.network/docs/nodes/configure/configs-flags#subnet-configs).
Here is an example of Subnet config file:
```json
{
"validatorOnly": false,
"consensusParameters": {
"k": 25,
"alpha": 18
}
}
```
## Parameters
### Private Subnet
#### `validatorOnly` (bool)
If `true`, this node does not expose Subnet blockchain contents to non-validators
via P2P messages. Defaults to `false`.
Avalanche Subnets are public by default, meaning that every node can sync and
listen to ongoing transactions/blocks in a Subnet, even if it is not validating
that Subnet.
Subnet validators can choose not to publish the contents of blockchains via this
configuration. If a node sets `validatorOnly` to `true`, the node exchanges
messages only with this Subnet's validators. Other peers will not be able to
learn the contents of this Subnet from this node.
:::tip
This is a node-specific configuration. Every validator of this Subnet has to use
this configuration in order to create a full private Subnet.
:::
#### `allowedNodes` (string list)
If `validatorOnly` is `true`, this allows the explicitly specified NodeIDs to
sync the Subnet regardless of their validator status. Defaults to empty.
:::tip
This is a node-specific configuration. Every validator of this Subnet has to use
this configuration in order to properly allow a node in the private Subnet.
:::
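As an illustrative sketch, a validator that keeps its Subnet private but still lets one specific non-validator sync could combine the two options (the NodeID below is the example value used elsewhere in these docs):

```json
{
  "validatorOnly": true,
  "allowedNodes": ["NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg"]
}
```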
### Consensus Parameters
Subnet configs support loading new consensus parameters. The JSON keys differ
from their matching `CLI` keys. These parameters must be grouped under the
`consensusParameters` key. The consensus parameters of a Subnet default to the
same values used for the Primary Network, which are given in [CLI Snow Parameters](https://build.avax.network/docs/nodes/configure/configs-flags#snow-parameters).
| CLI Key                        | JSON Key                |
| :----------------------------- | :---------------------- |
| `--snow-sample-size`           | `k`                     |
| `--snow-quorum-size`           | `alpha`                 |
| `--snow-commit-threshold`      | `beta`                  |
| `--snow-concurrent-repolls`    | `concurrentRepolls`     |
| `--snow-optimal-processing`    | `optimalProcessing`     |
| `--snow-max-processing`        | `maxOutstandingItems`   |
| `--snow-max-time-processing`   | `maxItemProcessingTime` |
| `--snow-avalanche-batch-size`  | `batchSize`             |
| `--snow-avalanche-num-parents` | `parentSize`            |
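For example, a Subnet config that overrides a few of these values (the numbers below are purely illustrative, not recommendations) nests them under the `consensusParameters` key using the JSON names from the table:

```json
{
  "consensusParameters": {
    "k": 25,
    "alpha": 18,
    "beta": 20,
    "concurrentRepolls": 4
  }
}
```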
#### `proposerMinBlockDelay` (duration)
The minimum delay to enforce when building Snowman++ blocks. Defaults to 1 second.
As one of the ways to control network congestion, Snowman++ will only build a
block `proposerMinBlockDelay` after the parent block's timestamp. Some
high-performance custom VMs may find this too strict. This flag allows tuning the
frequency at which blocks are built.
### Gossip Configs
It's possible to define different gossip configurations for each Subnet without
changing the values for the Primary Network. The JSON keys of these
parameters differ from their matching `CLI` keys. These parameters
default to the same values used for the Primary Network. For more information
see [CLI Gossip Configs](https://build.avax.network/docs/nodes/configure/configs-flags#gossiping).
| CLI Key                                                    | JSON Key                                 |
| :--------------------------------------------------------- | :--------------------------------------- |
| `--consensus-accepted-frontier-gossip-validator-size`      | `gossipAcceptedFrontierValidatorSize`    |
| `--consensus-accepted-frontier-gossip-non-validator-size`  | `gossipAcceptedFrontierNonValidatorSize` |
| `--consensus-accepted-frontier-gossip-peer-size`           | `gossipAcceptedFrontierPeerSize`         |
| `--consensus-on-accept-gossip-validator-size`              | `gossipOnAcceptValidatorSize`            |
| `--consensus-on-accept-gossip-non-validator-size`          | `gossipOnAcceptNonValidatorSize`         |
| `--consensus-on-accept-gossip-peer-size`                   | `gossipOnAcceptPeerSize`                 |
# AvalancheGo Config Flags
URL: /docs/nodes/configure/configs-flags
This page lists all available configuration options for AvalancheGo nodes.
# AvalancheGo Configs and Flags
This document lists all available configuration options for AvalancheGo nodes. You can configure your node using either command-line flags or environment variables.
> **Note:** For comparison with the previous documentation format (using individual flag headings), see the [archived version](https://gist.github.com/navillanueva/cdb9c49c411bd89a9480f05a7afbab37).
## Environment Variable Naming Convention
All environment variables follow the pattern: `AVAGO_` + flag name where the flag name is converted to uppercase with hyphens replaced by underscores.
For example:
* Flag: `--api-admin-enabled`
* Environment Variable: `AVAGO_API_ADMIN_ENABLED`
## Example Usage
### Using Command-Line Flags
```bash
avalanchego --network-id=fuji --http-host=0.0.0.0 --log-level=debug
```
### Using Environment Variables
```bash
export AVAGO_NETWORK_ID=fuji
export AVAGO_HTTP_HOST=0.0.0.0
export AVAGO_LOG_LEVEL=debug
avalanchego
```
### Using Config File
Create a JSON config file:
```json
{
"network-id": "fuji",
"http-host": "0.0.0.0",
"log-level": "debug"
}
```
Run with:
```bash
avalanchego --config-file=/path/to/config.json
```
## Configuration Precedence
Configuration sources are applied in the following order (highest to lowest precedence):
1. Command-line flags
2. Environment variables
3. Config file
4. Default values
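For example, if the config file above sets `"log-level": "debug"`, passing a different level on the command line takes precedence:

```bash
# The flag (info) overrides the config file's log-level (debug)
avalanchego --config-file=/path/to/config.json --log-level=info
```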
# Configuration Options
### APIs
Configuration for various APIs exposed by the node.
| Flag | Env Var | Type | Default | Description |
| ----------------------- | --------------------------- | ---- | ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `--api-admin-enabled` | `AVAGO_API_ADMIN_ENABLED` | bool | `false` | If set to `true`, this node will expose the Admin API. See [here](https://build.avax.network/docs/api-reference/admin-api) for more information. |
| `--api-health-enabled` | `AVAGO_API_HEALTH_ENABLED` | bool | `true` | If set to `false`, this node will not expose the Health API. See [here](https://build.avax.network/docs/api-reference/health-api) for more information. |
| `--index-enabled` | `AVAGO_INDEX_ENABLED` | bool | `false` | If set to `true`, this node will enable the indexer and the Index API will be available. See [here](https://build.avax.network/docs/api-reference/index-api) for more information. |
| `--api-info-enabled` | `AVAGO_API_INFO_ENABLED` | bool | `true` | If set to `false`, this node will not expose the Info API. See [here](https://build.avax.network/docs/api-reference/info-api) for more information. |
| `--api-metrics-enabled` | `AVAGO_API_METRICS_ENABLED` | bool | `true` | If set to `false`, this node will not expose the Metrics API. See [here](https://build.avax.network/docs/api-reference/metrics-api) for more information. |
### Avalanche Community Proposals
Support for [Avalanche Community Proposals](https://github.com/avalanche-foundation/ACPs).
| Flag | Env Var | Type | Default | Description |
| --------------- | ------------------- | ------ | ------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `--acp-support` | `AVAGO_ACP_SUPPORT` | \[]int | `[]` | The `--acp-support` flag allows an AvalancheGo node to indicate support for a set of [Avalanche Community Proposals](https://github.com/avalanche-foundation/ACPs). |
| `--acp-object` | `AVAGO_ACP_OBJECT` | \[]int | `[]` | The `--acp-object` flag allows an AvalancheGo node to indicate objection for a set of [Avalanche Community Proposals](https://github.com/avalanche-foundation/ACPs). |
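For example, a node operator could signal support for one proposal and objection to another (the ACP numbers below are purely illustrative):

```bash
avalanchego --acp-support=23 --acp-object=99
```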
### Bootstrapping
Configuration for node bootstrapping process.
| Flag | Env Var | Type | Default | Description |
| ----------------------------------------------- | --------------------------------------------------- | -------- | ----------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `--bootstrap-ancestors-max-containers-sent` | `AVAGO_BOOTSTRAP_ANCESTORS_MAX_CONTAINERS_SENT` | uint | `2000` | Max number of containers in an `Ancestors` message sent by this node. |
| `--bootstrap-ancestors-max-containers-received` | `AVAGO_BOOTSTRAP_ANCESTORS_MAX_CONTAINERS_RECEIVED` | uint | `2000` | This node reads at most this many containers from an incoming `Ancestors` message. |
| `--bootstrap-beacon-connection-timeout` | `AVAGO_BOOTSTRAP_BEACON_CONNECTION_TIMEOUT` | duration | `1m` | Timeout when attempting to connect to bootstrapping beacons. |
| `--bootstrap-ids` | `AVAGO_BOOTSTRAP_IDS` | string | network dependent | Bootstrap IDs is a comma-separated list of validator IDs. These IDs will be used to authenticate bootstrapping peers. An example setting of this field would be `--bootstrap-ids="NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg,NodeID-MFrZFVCXPv5iCn6M9K6XduxGTYp891xXZ"`. The number of given IDs here must be same with number of given `--bootstrap-ips`. The default value depends on the network ID. |
| `--bootstrap-ips` | `AVAGO_BOOTSTRAP_IPS` | string | network dependent | Bootstrap IPs is a comma-separated list of IP:port pairs. These IP Addresses will be used to bootstrap the current Avalanche state. An example setting of this field would be `--bootstrap-ips="127.0.0.1:12345,1.2.3.4:5678"`. The number of given IPs here must be same with number of given `--bootstrap-ids`. The default value depends on the network ID. |
| `--bootstrap-max-time-get-ancestors` | `AVAGO_BOOTSTRAP_MAX_TIME_GET_ANCESTORS` | duration | `50ms` | Max Time to spend fetching a container and its ancestors when responding to a GetAncestors message. |
| `--bootstrap-retry-enabled` | `AVAGO_BOOTSTRAP_RETRY_ENABLED` | bool | `true` | If set to `false`, will not retry bootstrapping if it fails. |
| `--bootstrap-retry-warn-frequency` | `AVAGO_BOOTSTRAP_RETRY_WARN_FREQUENCY` | uint | `50` | Specifies how many times bootstrap should be retried before warning the operator. |
### Chain Configuration
Some blockchains allow the node operator to provide custom configurations for individual blockchains. These custom configurations are broken down into two categories: network upgrades and optional chain configurations. AvalancheGo reads in these configurations from the chain configuration directory and passes them into the VM on initialization.
| Flag | Env Var | Type | Default | Description |
| ------------------------------ | ---------------------------------- | ------ | -------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `--chain-config-dir` | `AVAGO_CHAIN_CONFIG_DIR` | string | `$HOME/.avalanchego/configs/chains` | Specifies the directory that contains chain configs, as described [here](https://build.avax.network/docs/nodes/chain-configs). If this flag is not provided and the default directory does not exist, AvalancheGo will not exit since custom configs are optional. However, if the flag is set, the specified folder must exist, or AvalancheGo will exit with an error. This flag is ignored if `--chain-config-content` is specified. Network upgrades are passed in from the location: `chain-config-dir`/`blockchainID`/`upgrade.*`. The chain configs are passed in from the location `chain-config-dir`/`blockchainID`/`config.*`. See [here](https://build.avax.network/docs/nodes/chain-configs) for more information. |
| `--chain-config-content` | `AVAGO_CHAIN_CONFIG_CONTENT` | string | - | As an alternative to `--chain-config-dir`, chains custom configurations can be loaded altogether from command line via `--chain-config-content` flag. Content must be base64 encoded. Example: First, encode the chain config: `echo -n '{"log-level":"trace"}' \| base64`. This will output something like `eyJsb2ctbGV2ZWwiOiJ0cmFjZSJ9`. Then create the full config JSON and encode it: `echo -n '{"C":{"Config":"eyJsb2ctbGV2ZWwiOiJ0cmFjZSJ9","Upgrade":null}}' \| base64`. Finally run: `avalanchego --chain-config-content "eyJDIjp7IkNvbmZpZyI6ImV5SnNiMmN0YkdWMlpXd2lPaUowY21GalpTSjkiLCJVcGdyYWRlIjpudWxsfX0="` |
| `--chain-aliases-file` | `AVAGO_CHAIN_ALIASES_FILE` | string | `~/.avalanchego/configs/chains/aliases.json` | Path to JSON file that defines aliases for Blockchain IDs. This flag is ignored if `--chain-aliases-file-content` is specified. Example content: `{"q2aTwKuyzgs8pynF7UXBZCU7DejbZbZ6EUyHr3JQzYgwNPUPi": ["DFK"]}`. The above example aliases the Blockchain whose ID is `"q2aTwKuyzgs8pynF7UXBZCU7DejbZbZ6EUyHr3JQzYgwNPUPi"` to `"DFK"`. Chain aliases are added after adding primary network aliases and before any changes to the aliases via the admin API. This means that the first alias included for a Blockchain on a Subnet will be treated as the `"Primary Alias"` instead of the full blockchainID. The Primary Alias is used in all metrics and logs. |
| `--chain-aliases-file-content` | `AVAGO_CHAIN_ALIASES_FILE_CONTENT` | string | - | As an alternative to `--chain-aliases-file`, it allows specifying base64 encoded aliases for Blockchains. |
| `--chain-data-dir` | `AVAGO_CHAIN_DATA_DIR` | string | `$HOME/.avalanchego/chainData` | Chain specific data directory. |
### Config File
| Flag | Env Var | Type | Default | Description |
| ---------------------------- | -------------------------------- | ------ | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `--config-file` | `AVAGO_CONFIG_FILE` | string | - | Path to a JSON file that specifies this node's configuration. Command line arguments will override arguments set in the config file. This flag is ignored if `--config-file-content` is specified. Example JSON config file: `{"log-level": "debug"}`. [Install Script](https://build.avax.network/docs/tooling/avalanche-go-installer) creates the node config file at `~/.avalanchego/configs/node.json`. No default file is created if [AvalancheGo is built from source](https://build.avax.network/docs/nodes/run-a-node/from-source), you would need to create it manually if needed. |
| `--config-file-content` | `AVAGO_CONFIG_FILE_CONTENT` | string | - | As an alternative to `--config-file`, it allows specifying base64 encoded config content. |
| `--config-file-content-type` | `AVAGO_CONFIG_FILE_CONTENT_TYPE` | string | `JSON`  | Specifies the format of the base64 encoded config content. JSON, TOML, and YAML are among the currently supported file formats (see [here](https://github.com/spf13/viper#reading-config-files) for the full list).                                                                                                                                                                                                                                                                                                                                                                           |
### Data Directory
| Flag | Env Var | Type | Default | Description |
| ------------ | ---------------- | ------ | -------------------- | ----------------------------------------------------------------------------------------------------- |
| `--data-dir` | `AVAGO_DATA_DIR` | string | `$HOME/.avalanchego` | Sets the base data directory where default sub-directories will be placed unless otherwise specified. |
### Database
| Flag | Env Var | Type | Default | Description |
| ----------- | --------------- | ------ | ----------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `--db-dir` | `AVAGO_DB_DIR` | string | `$HOME/.avalanchego/db` | Specifies the directory to which the database is persisted. |
| `--db-type` | `AVAGO_DB_TYPE` | string | `leveldb` | Specifies the type of database to use. Must be one of `leveldb`, `memdb`, or `pebbledb`. `memdb` is an in-memory, non-persisted database. Note: `memdb` stores everything in memory. So if you have a 900 GiB LevelDB instance, then using `memdb` you'd need 900 GiB of RAM. `memdb` is useful for fast one-off testing, not for running an actual node (on Fuji or Mainnet). Also note that `memdb` doesn't persist after restart. So any time you restart the node it would start syncing from scratch. |
#### Database Config
| Flag | Env Var | Type | Default | Description |
| -------------------------- | ------------------------------ | ------ | ------- | ----------------------------------------------------------------------------------------------------- |
| `--db-config-file` | `AVAGO_DB_CONFIG_FILE` | string | - | Path to the database config file. Ignored if `--db-config-file-content` is specified. |
| `--db-config-file-content` | `AVAGO_DB_CONFIG_FILE_CONTENT` | string | - | As an alternative to `--db-config-file`, it allows specifying base64 encoded database config content. |
A LevelDB config file must be JSON and may have these keys. Any keys not given will receive the default value. See [here](https://pkg.go.dev/github.com/syndtr/goleveldb/leveldb/opt#Options) for more information.
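As a sketch, and assuming the JSON keys follow the goleveldb `Options` field names linked above (verify the exact spelling against that package), a database config file that enlarges the block cache to 1 GiB and the write buffer to 512 MiB (values in bytes) might look like:

```json
{
  "blockCacheCapacity": 1073741824,
  "writeBuffer": 536870912
}
```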
### File Descriptor Limit
| Flag | Env Var | Type | Default | Description |
| ------------ | ---------------- | ---- | ------- | -------------------------------------------------------------------------------------------------------------------------- |
| `--fd-limit` | `AVAGO_FD_LIMIT` | int  | `32768` | Attempts to raise the process file descriptor limit to at least this value, and errors if the value is above the system max.  |
### Genesis
| Flag | Env Var | Type | Default | Description |
| ------------------------ | ---------------------------- | ------ | ------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `--genesis-file` | `AVAGO_GENESIS_FILE` | string | - | Path to a JSON file containing the genesis data to use. Ignored when running standard networks (Mainnet, Fuji Testnet), or when `--genesis-file-content` is specified. If not given, uses default genesis data. See the documentation for the genesis JSON format [here](https://github.com/ava-labs/avalanchego/blob/master/genesis/README.md) and an example for a local network [here](https://github.com/ava-labs/avalanchego/blob/master/genesis/genesis_local.json). |
| `--genesis-file-content` | `AVAGO_GENESIS_FILE_CONTENT` | string | - | As an alternative to `--genesis-file`, it allows specifying base64 encoded genesis data to use. |
### HTTP Server
| Flag | Env Var | Type | Default | Description |
| ------------------------------ | ---------------------------------- | -------- | ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `--http-allowed-hosts` | `AVAGO_HTTP_ALLOWED_HOSTS` | string | `localhost` | List of acceptable host names in API requests. Provide the wildcard (`'*'`) to accept requests from all hosts. API requests where the `Host` field is empty or an IP address will always be accepted. An API call whose HTTP `Host` field isn't acceptable will receive a 403 error code. |
| `--http-allowed-origins` | `AVAGO_HTTP_ALLOWED_ORIGINS` | string | `*` | Origins to allow on the HTTP port. Example: `"https://*.avax.network https://*.avax-test.network"` |
| `--http-host` | `AVAGO_HTTP_HOST` | string | `127.0.0.1` | The address that HTTP APIs listen on. This means that by default, your node can only handle API calls made from the same machine. To allow API calls from other machines, use `--http-host=`. You can also enter domain names as parameter. |
| `--http-port` | `AVAGO_HTTP_PORT` | int | `9650` | Each node runs an HTTP server that provides the APIs for interacting with the node and the Avalanche network. This argument specifies the port that the HTTP server will listen on. |
| `--http-idle-timeout` | `AVAGO_HTTP_IDLE_TIMEOUT` | duration | `120s` | Maximum duration to wait for the next request when keep-alives are enabled. If `--http-idle-timeout` is zero, the value of `--http-read-timeout` is used. If both are zero, there is no timeout. |
| `--http-read-timeout` | `AVAGO_HTTP_READ_TIMEOUT` | duration | `30s` | Maximum duration for reading the entire request, including the body. A zero or negative value means there will be no timeout. |
| `--http-read-header-timeout` | `AVAGO_HTTP_READ_HEADER_TIMEOUT` | duration | `30s` | Maximum duration to read request headers. The connection's read deadline is reset after reading the headers. If `--http-read-header-timeout` is zero, the value of `--http-read-timeout` is used. If both are zero, there is no timeout. |
| `--http-write-timeout` | `AVAGO_HTTP_WRITE_TIMEOUT` | duration | `30s` | Maximum duration before timing out writes of the response. It is reset whenever a new request's header is read. A zero or negative value means there will be no timeout. |
| `--http-shutdown-timeout` | `AVAGO_HTTP_SHUTDOWN_TIMEOUT` | duration | `10s` | Maximum duration to wait for existing connections to complete during node shutdown. |
| `--http-shutdown-wait` | `AVAGO_HTTP_SHUTDOWN_WAIT` | duration | `0s` | Duration to wait after receiving SIGTERM or SIGINT before initiating shutdown. The `/health` endpoint will return unhealthy during this duration (if the Health API is enabled.) |
| `--http-tls-enabled` | `AVAGO_HTTP_TLS_ENABLED` | boolean | `false` | If set to `true`, this flag will attempt to upgrade the server to use HTTPS. |
| `--http-tls-cert-file` | `AVAGO_HTTP_TLS_CERT_FILE` | string | - | This argument specifies the location of the TLS certificate used by the node for the HTTPS server. This must be specified when `--http-tls-enabled=true`. There is no default value. This flag is ignored if `--http-tls-cert-file-content` is specified. |
| `--http-tls-cert-file-content` | `AVAGO_HTTP_TLS_CERT_FILE_CONTENT` | string | - | As an alternative to `--http-tls-cert-file`, it allows specifying base64 encoded content of the TLS certificate used by the node for the HTTPS server. Note that full certificate content, with the leading and trailing header, must be base64 encoded. This must be specified when `--http-tls-enabled=true`. |
| `--http-tls-key-file` | `AVAGO_HTTP_TLS_KEY_FILE` | string | - | This argument specifies the location of the TLS private key used by the node for the HTTPS server. This must be specified when `--http-tls-enabled=true`. There is no default value. This flag is ignored if `--http-tls-key-file-content` is specified. |
| `--http-tls-key-file-content` | `AVAGO_HTTP_TLS_KEY_FILE_CONTENT` | string | - | As an alternative to `--http-tls-key-file`, it allows specifying base64 encoded content of the TLS private key used by the node for the HTTPS server. Note that full private key content, with the leading and trailing header, must be base64 encoded. This must be specified when `--http-tls-enabled=true`. |
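For example, to serve the node's APIs over HTTPS with a certificate and key stored on disk (the paths below are illustrative):

```bash
avalanchego \
  --http-tls-enabled=true \
  --http-tls-cert-file=/path/to/node.crt \
  --http-tls-key-file=/path/to/node.key
```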
### Logging
| Flag | Env Var | Type | Default | Description |
| ----------------------------------- | --------------------------------------- | ------- | ------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `--log-level=off` | `AVAGO_LOG_LEVEL` | string | `info` | No logs. |
| `--log-level=fatal` | `AVAGO_LOG_LEVEL` | string | `info` | Fatal errors that are not recoverable. |
| `--log-level=error`                 | `AVAGO_LOG_LEVEL`                       | string  | `info`                    | Errors that the node encounters; these errors were recoverable.                                                                                                                              |
| `--log-level=warn`                  | `AVAGO_LOG_LEVEL`                       | string  | `info`                    | Warnings that might be indicative of a spurious byzantine node, or a potential future error.                                                                                                 |
| `--log-level=info` | `AVAGO_LOG_LEVEL` | string | `info` | Useful descriptions of node status updates. |
| `--log-level=trace` | `AVAGO_LOG_LEVEL` | string | `info` | Traces container job results, useful for tracing container IDs and their outcomes. |
| `--log-level=debug` | `AVAGO_LOG_LEVEL` | string | `info` | Useful when attempting to understand possible bugs in the code. |
| `--log-level=verbo` | `AVAGO_LOG_LEVEL` | string | `info` | Tracks extensive amounts of information the node is processing, including message contents and binary dumps of data for extremely low level protocol analysis. |
| `--log-display-level` | `AVAGO_LOG_DISPLAY_LEVEL` | string | value of `--log-level` | The log level determines which events to display to stdout. If left blank, will default to the value provided to `--log-level`. |
| `--log-format=auto` | `AVAGO_LOG_FORMAT` | string | `auto` | Formats terminal-like logs when the output is a terminal. |
| `--log-format=plain` | `AVAGO_LOG_FORMAT` | string | `auto` | Plain text log format. |
| `--log-format=colors` | `AVAGO_LOG_FORMAT` | string | `auto` | Colored log format. |
| `--log-format=json` | `AVAGO_LOG_FORMAT` | string | `auto` | JSON log format. |
| `--log-dir`                         | `AVAGO_LOG_DIR`                         | string  | `$HOME/.avalanchego/logs` | Specifies the directory in which system logs are kept. If you are running the node as a system service (ex. using the installer script) logs will also be stored in `/var/log/syslog`.      |
| `--log-disable-display-plugin-logs` | `AVAGO_LOG_DISABLE_DISPLAY_PLUGIN_LOGS` | boolean | `false` | Disables displaying plugin logs in stdout. |
| `--log-rotater-max-size` | `AVAGO_LOG_ROTATER_MAX_SIZE` | uint | `8` | The maximum file size in megabytes of the log file before it gets rotated. |
| `--log-rotater-max-files` | `AVAGO_LOG_ROTATER_MAX_FILES` | uint | `7` | The maximum number of old log files to retain. 0 means retain all old log files. |
| `--log-rotater-max-age` | `AVAGO_LOG_ROTATER_MAX_AGE` | uint | `0` | The maximum number of days to retain old log files based on the timestamp encoded in their filename. 0 means retain all old log files. |
| `--log-rotater-compress-enabled` | `AVAGO_LOG_ROTATER_COMPRESS_ENABLED` | boolean | `false` | Enables the compression of rotated log files through gzip. |
### Continuous Profiling
You can configure your node to continuously run memory/CPU profiles and save the most recent ones. Continuous memory/CPU profiling is enabled if `--profile-continuous-enabled` is set.
| Flag | Env Var | Type | Default | Description |
| -------------------------------- | ------------------------------------ | -------- | ------------------------------ | ------------------------------------------------------------------------------------------------- |
| `--profile-continuous-enabled` | `AVAGO_PROFILE_CONTINUOUS_ENABLED` | boolean | `false` | Whether the app should continuously produce performance profiles. |
| `--profile-dir` | `AVAGO_PROFILE_DIR` | string | `$HOME/.avalanchego/profiles/` | If profiling enabled, node continuously runs memory/CPU profiles and puts them at this directory. |
| `--profile-continuous-freq` | `AVAGO_PROFILE_CONTINUOUS_FREQ` | duration | `15m` | How often a new CPU/memory profile is created. |
| `--profile-continuous-max-files` | `AVAGO_PROFILE_CONTINUOUS_MAX_FILES` | int | `5` | Maximum number of CPU/memory profiles files to keep. |
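For example, to keep ten hourly profiles in a custom directory (the path below is illustrative):

```bash
avalanchego \
  --profile-continuous-enabled=true \
  --profile-dir=/var/lib/avalanchego/profiles \
  --profile-continuous-freq=1h \
  --profile-continuous-max-files=10
```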
### Network
| Flag | Env Var | Type | Default | Description |
| --------------------------- | ------------------ | ------ | --------- | ------------------------------------------------------------------------------- |
| `--network-id=mainnet` | `AVAGO_NETWORK_ID` | string | `mainnet` | Connect to Mainnet (default). |
| `--network-id=fuji` | `AVAGO_NETWORK_ID` | string | `mainnet` | Connect to the Fuji test-network. |
| `--network-id=testnet` | `AVAGO_NETWORK_ID` | string | `mainnet` | Connect to the current test-network (currently Fuji). |
| `--network-id=local` | `AVAGO_NETWORK_ID` | string | `mainnet` | Connect to a local test-network. |
| `--network-id=network-[id]` | `AVAGO_NETWORK_ID` | string | `mainnet` | Connect to the network with the given ID. `id` must be in the range \[0, 2^32). |
### OpenTelemetry
AvalancheGo supports collecting and exporting [OpenTelemetry](https://opentelemetry.io/) traces. This might be useful for debugging, performance analysis, or monitoring.
| Flag | Env Var | Type | Default | Description |
| ------------------------- | ----------------------------- | ------- | -------------------------------------------------- | ----------------------------------------------------------------------------------- |
| `--tracing-endpoint` | `AVAGO_TRACING_ENDPOINT` | string | `localhost:4317` (gRPC) or `localhost:4318` (HTTP) | The endpoint to export trace data to. Default depends on `--tracing-exporter-type`. |
| `--tracing-exporter-type` | `AVAGO_TRACING_EXPORTER_TYPE` | string  | `disabled`                                         | Type of exporter to use for tracing. Options are `disabled`, `grpc`, `http`.        |
| `--tracing-insecure` | `AVAGO_TRACING_INSECURE` | boolean | `true` | If true, don't use TLS when exporting trace data. |
| `--tracing-sample-rate` | `AVAGO_TRACING_SAMPLE_RATE` | float | `0.1` | The fraction of traces to sample. If >= 1, always sample. If \<= 0, never sample. |
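For example, to export half of all traces to a local OTLP collector over gRPC:

```bash
avalanchego \
  --tracing-exporter-type=grpc \
  --tracing-endpoint=localhost:4317 \
  --tracing-sample-rate=0.5
```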
### Partial Sync Primary Network
| Flag | Env Var | Type | Default | Description |
| -------------------------------- | ------------------------------------ | ------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `--partial-sync-primary-network` | `AVAGO_PARTIAL_SYNC_PRIMARY_NETWORK` | boolean | `false` | Partial sync enables nodes that are not primary network validators to optionally sync only the P-chain on the primary network. Nodes that use this option can still track Subnets. After the Etna upgrade, nodes that use this option can also validate L1s. |
### Public IP
Validators must know one of their public facing IP addresses so they can enable other nodes to connect to them. By default, the node will attempt to perform NAT traversal to get the node's IP according to its router.
| Flag | Env Var | Type | Default | Description |
| ---------------------------------- | -------------------------------------- | -------- | ------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `--public-ip` | `AVAGO_PUBLIC_IP` | string | - | If this argument is provided, the node assumes this is its public IP. When running a local network it may be easiest to set this value to `127.0.0.1`. |
| `--public-ip-resolution-frequency` | `AVAGO_PUBLIC_IP_RESOLUTION_FREQUENCY` | duration | `5m`    | Frequency at which this node resolves/updates its public IP and renews NAT mappings, if applicable.                                                              |
| `--public-ip-resolution-service` | `AVAGO_PUBLIC_IP_RESOLUTION_SERVICE` | string | - | When provided, the node will use that service to periodically resolve/update its public IP. Only acceptable values are `ifconfigCo`, `opendns` or `ifconfigMe`. |
### State Syncing
| Flag | Env Var | Type | Default | Description |
| ------------------ | ---------------------- | ------ | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `--state-sync-ids` | `AVAGO_STATE_SYNC_IDS` | string | - | State sync IDs is a comma-separated list of validator IDs. The specified validators will be contacted to get and authenticate the starting point (state summary) for state sync. An example setting of this field would be `--state-sync-ids="NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg,NodeID-MFrZFVCXPv5iCn6M9K6XduxGTYp891xXZ"`. The number of given IDs here must be same with number of given `--state-sync-ips`. The default value is empty, which results in all validators being sampled. |
| `--state-sync-ips` | `AVAGO_STATE_SYNC_IPS` | string | - | State sync IPs is a comma-separated list of IP:port pairs. These IP Addresses will be contacted to get and authenticate the starting point (state summary) for state sync. An example setting of this field would be `--state-sync-ips="127.0.0.1:12345,1.2.3.4:5678"`. The number of given IPs here must be the same with the number of given `--state-sync-ids`. |
### Staking
| Flag | Env Var | Type | Default | Description |
| --------------------------------- | ------------------------------------- | ------ | --------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `--staking-port` | `AVAGO_STAKING_PORT` | int | `9651` | The port through which the network peers will connect to this node externally. Having this port accessible from the internet is required for correct node operation. |
| `--staking-tls-cert-file` | `AVAGO_STAKING_TLS_CERT_FILE` | string | `$HOME/.avalanchego/staking/staker.crt` | Avalanche uses two-way authenticated TLS connections to securely connect nodes. This argument specifies the location of the TLS certificate used by the node. This flag is ignored if `--staking-tls-cert-file-content` is specified. |
| `--staking-tls-cert-file-content` | `AVAGO_STAKING_TLS_CERT_FILE_CONTENT` | string | - | As an alternative to `--staking-tls-cert-file`, it allows specifying base64 encoded content of the TLS certificate used by the node. Note that full certificate content, with the leading and trailing header, must be base64 encoded. |
| `--staking-tls-key-file` | `AVAGO_STAKING_TLS_KEY_FILE` | string | `$HOME/.avalanchego/staking/staker.key` | Avalanche uses two-way authenticated TLS connections to securely connect nodes. This argument specifies the location of the TLS private key used by the node. This flag is ignored if `--staking-tls-key-file-content` is specified. |
| `--staking-tls-key-file-content` | `AVAGO_STAKING_TLS_KEY_FILE_CONTENT` | string | - | As an alternative to `--staking-tls-key-file`, it allows specifying base64 encoded content of the TLS private key used by the node. Note that full private key content, with the leading and trailing header, must be base64 encoded. |
### Subnets
#### Subnet Tracking
| Flag | Env Var | Type | Default | Description |
| ----------------- | --------------------- | ------ | ------- | -------------------------------------------------------------------------------------------------------------------------------------- |
| `--track-subnets` | `AVAGO_TRACK_SUBNETS` | string | - | Comma separated list of Subnet IDs that this node would track if added to. Defaults to empty (will only validate the Primary Network). |
#### Subnet Configs
It is possible to provide parameters for Subnets. Parameters here apply to all chains in the specified Subnets. Parameters must be specified with a `[subnetID].json` config file under `--subnet-config-dir`. AvalancheGo loads configs for Subnets specified in `--track-subnets` parameter. Full reference for all configuration options for a Subnet can be found in a separate [Subnet Configs](https://build.avax.network/docs/nodes/configure/avalanche-l1-configs) document.
| Flag | Env Var | Type | Default | Description |
| ------------------------- | ----------------------------- | ------ | ------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `--subnet-config-dir` | `AVAGO_SUBNET_CONFIG_DIR` | string | `$HOME/.avalanchego/configs/subnets` | Specifies the directory that contains Subnet configs, as described above. If the flag is set explicitly, the specified folder must exist, or AvalancheGo will exit with an error. This flag is ignored if `--subnet-config-content` is specified. Example: Let's say we have a Subnet with ID `p4jUwqZsA2LuSftroCd3zb4ytH8W99oXKuKVZdsty7eQ3rXD6`. We can create a config file under the default `subnet-config-dir` at `$HOME/.avalanchego/configs/subnets/p4jUwqZsA2LuSftroCd3zb4ytH8W99oXKuKVZdsty7eQ3rXD6.json`. An example config file is: `{"validatorOnly": false, "consensusParameters": {"k": 25, "alpha": 18}}`. By default, none of these directories and/or files exist. You would need to create them manually if needed. |
| `--subnet-config-content` | `AVAGO_SUBNET_CONFIG_CONTENT` | string | - | As an alternative to `--subnet-config-dir`, it allows specifying base64 encoded parameters for a Subnet. |
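Putting the two flags together, a node that tracks the example Subnet above and loads its config from the default directory could be started as:

```bash
avalanchego \
  --track-subnets=p4jUwqZsA2LuSftroCd3zb4ytH8W99oXKuKVZdsty7eQ3rXD6 \
  --subnet-config-dir=$HOME/.avalanchego/configs/subnets
```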
### Version
| Flag | Env Var | Type | Default | Description |
| ----------- | --------------- | ------- | ------- | ---------------------------------------------- |
| `--version` | `AVAGO_VERSION` | boolean | `false` | If this is `true`, print the version and quit. |
# Advanced Configuration Options
⚠️ **Warning**: The following options may affect the correctness of a node. Only power users should change these.
### Gossiping
Consensus gossiping parameters.
| Flag | Env Var | Type | Default | Description |
| --------------------------------------------------------- | ------------------------------------------------------------- | -------- | ------- | ----------------------------------------------------------------------- |
| `--consensus-accepted-frontier-gossip-validator-size` | `AVAGO_CONSENSUS_ACCEPTED_FRONTIER_GOSSIP_VALIDATOR_SIZE` | uint | `0` | Number of validators to gossip to when gossiping accepted frontier. |
| `--consensus-accepted-frontier-gossip-non-validator-size` | `AVAGO_CONSENSUS_ACCEPTED_FRONTIER_GOSSIP_NON_VALIDATOR_SIZE` | uint | `0` | Number of non-validators to gossip to when gossiping accepted frontier. |
| `--consensus-accepted-frontier-gossip-peer-size` | `AVAGO_CONSENSUS_ACCEPTED_FRONTIER_GOSSIP_PEER_SIZE` | uint | `15` | Number of peers to gossip to when gossiping accepted frontier. |
| `--consensus-accepted-frontier-gossip-frequency` | `AVAGO_CONSENSUS_ACCEPTED_FRONTIER_GOSSIP_FREQUENCY` | duration | `10s` | Time between gossiping accepted frontiers. |
| `--consensus-on-accept-gossip-validator-size` | `AVAGO_CONSENSUS_ON_ACCEPT_GOSSIP_VALIDATOR_SIZE` | uint | `0` | Number of validators to gossip to each accepted container to. |
| `--consensus-on-accept-gossip-non-validator-size` | `AVAGO_CONSENSUS_ON_ACCEPT_GOSSIP_NON_VALIDATOR_SIZE` | uint | `0` | Number of non-validators to gossip to each accepted container to. |
| `--consensus-on-accept-gossip-peer-size` | `AVAGO_CONSENSUS_ON_ACCEPT_GOSSIP_PEER_SIZE` | uint | `10` | Number of peers to gossip to each accepted container to. |
### Sybil Protection
Sybil protection configuration. These settings affect how the node participates in consensus.
| Flag | Env Var | Type | Default | Description |
| ------------------------------------ | ---------------------------------------- | ------- | ------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `--sybil-protection-enabled` | `AVAGO_SYBIL_PROTECTION_ENABLED` | boolean | `true` | Avalanche uses Proof of Stake (PoS) as sybil resistance to make it prohibitively expensive to attack the network. If false, sybil resistance is disabled and all peers will be sampled during consensus. Note that this can not be disabled on public networks (`Fuji` and `Mainnet`). Setting this flag to `false` **does not** mean "this node is not a validator." It means that this node will sample all nodes, not just validators. **You should not set this flag to false unless you understand what you are doing.** |
| `--sybil-protection-disabled-weight` | `AVAGO_SYBIL_PROTECTION_DISABLED_WEIGHT` | uint | `100` | Weight to provide to each peer when staking is disabled. |
### Benchlist
Peer benchlisting configuration.
| Flag | Env Var | Type | Default | Description |
| ---------------------------------- | -------------------------------------- | -------- | ------- | --------------------------------------------------------------------------------------------------------- |
| `--benchlist-duration` | `AVAGO_BENCHLIST_DURATION` | duration | `15m` | Maximum amount of time a peer is benchlisted after surpassing `--benchlist-fail-threshold`. |
| `--benchlist-fail-threshold` | `AVAGO_BENCHLIST_FAIL_THRESHOLD` | int | `10` | Number of consecutive failed queries to a node before benching it (assuming all queries to it will fail). |
| `--benchlist-min-failing-duration` | `AVAGO_BENCHLIST_MIN_FAILING_DURATION` | duration | `150s` | Minimum amount of time queries to a peer must be failing before the peer is benched. |
### Consensus Parameters
:::note
Some of these parameters can only be set on a local or private network, not on Fuji Testnet or Mainnet.
:::
| Flag | Env Var | Type | Default | Description |
| ------------------------------ | ---------------------------------- | -------- | ---------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `--consensus-shutdown-timeout` | `AVAGO_CONSENSUS_SHUTDOWN_TIMEOUT` | duration | `5s` | Timeout before killing an unresponsive chain. |
| `--create-asset-tx-fee` | `AVAGO_CREATE_ASSET_TX_FEE` | int | `10000000` | Transaction fee, in nAVAX, for transactions that create new assets. This can only be changed on a local network. |
| `--tx-fee`                     | `AVAGO_TX_FEE`                     | int      | `1000000`  | The required amount of nAVAX to be burned for a transaction to be valid on the X-Chain, and for import/export transactions on the P-Chain. This parameter requires network agreement in its current form. Changing this value from the default should only be done on private or local networks. |
| `--uptime-requirement` | `AVAGO_UPTIME_REQUIREMENT` | float | `0.8` | Fraction of time a validator must be online to receive rewards. This can only be changed on a local network. |
| `--uptime-metric-freq` | `AVAGO_UPTIME_METRIC_FREQ` | duration | `30s` | Frequency of renewing this node's average uptime metric. |
### Staking Parameters
Staking economics configuration.
| Flag | Env Var | Type | Default | Description |
| ------------------------------ | ---------------------------------- | -------- | -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `--min-validator-stake`        | `AVAGO_MIN_VALIDATOR_STAKE`        | int      | network dependent    | The minimum stake, in nAVAX, required to validate the Primary Network. This can only be changed on a local network. Defaults to `2000000000000` (2,000 AVAX) on Mainnet. Defaults to `5000000` (0.005 AVAX) on Testnet. |
| `--max-validator-stake` | `AVAGO_MAX_VALIDATOR_STAKE` | int | network dependent | The maximum stake, in nAVAX, that can be placed on a validator on the primary network. This includes stake provided by both the validator and by delegators to the validator. This can only be changed on a local network. |
| `--min-delegator-stake`        | `AVAGO_MIN_DELEGATOR_STAKE`        | int      | network dependent    | The minimum stake, in nAVAX, that can be delegated to a validator of the Primary Network. Defaults to `25000000000` (25 AVAX) on Mainnet. Defaults to `5000000` (0.005 AVAX) on Testnet. This can only be changed on a local network. |
| `--min-delegation-fee` | `AVAGO_MIN_DELEGATION_FEE` | int | `20000` | The minimum delegation fee that can be charged for delegation on the Primary Network, multiplied by \`10,000\`. Must be in the range \[0, 1000000]. This can only be changed on a local network. |
| `--min-stake-duration` | `AVAGO_MIN_STAKE_DURATION` | duration | `336h` | Minimum staking duration. This can only be changed on a local network. This applies to both delegation and validation periods. |
| `--max-stake-duration` | `AVAGO_MAX_STAKE_DURATION` | duration | `8760h` | The maximum staking duration, in hours. This can only be changed on a local network. |
| `--stake-minting-period` | `AVAGO_STAKE_MINTING_PERIOD` | duration | `8760h` | Consumption period of the staking function, in hours. This can only be changed on a local network. |
| `--stake-max-consumption-rate` | `AVAGO_STAKE_MAX_CONSUMPTION_RATE` | uint | `120000` | The maximum percentage of the consumption rate for the remaining token supply in the minting period, which is 1 year on Mainnet. This can only be changed on a local network. |
| `--stake-min-consumption-rate` | `AVAGO_STAKE_MIN_CONSUMPTION_RATE` | uint | `100000` | The minimum percentage of the consumption rate for the remaining token supply in the minting period, which is 1 year on Mainnet. This can only be changed on a local network. |
| `--stake-supply-cap` | `AVAGO_STAKE_SUPPLY_CAP` | uint | `720000000000000000` | The maximum stake supply, in nAVAX, that can be placed on a validator. This can only be changed on a local network. |
### Snow Consensus
Snow consensus protocol parameters.
| Flag | Env Var | Type | Default | Description |
| ---------------------------- | -------------------------------- | -------- | ------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `--snow-concurrent-repolls`  | `AVAGO_SNOW_CONCURRENT_REPOLLS`  | int      | `4`     | Snow consensus requires repolling transactions that are issued during periods of low network usage. This parameter defines how aggressively the client finalizes these pending transactions. This should only be changed after careful consideration of the tradeoffs of Snow consensus. The value must be at least `1` and at most `--snow-commit-threshold`. |
| `--snow-sample-size` | `AVAGO_SNOW_SAMPLE_SIZE` | int | `20` | Snow consensus defines `k` as the number of validators that are sampled during each network poll. This parameter lets one define the `k` value used for consensus. This should only be changed after careful consideration of the tradeoffs of Snow consensus. The value must be at least `1`. |
| `--snow-quorum-size`         | `AVAGO_SNOW_QUORUM_SIZE`         | int      | `15`    | Snow consensus defines `alpha` as the number of validators that must prefer a transaction during each network poll to increase the confidence in the transaction. This parameter lets us define the `alpha` value used for consensus. This should only be changed after careful consideration of the tradeoffs of Snow consensus. The value must be greater than `k/2`. |
| `--snow-commit-threshold` | `AVAGO_SNOW_COMMIT_THRESHOLD` | int | `20` | Snow consensus defines `beta` as the number of consecutive polls that a container must increase its confidence for it to be accepted. This parameter lets us define the `beta` value used for consensus. This should only be changed after careful consideration of the tradeoffs of Snow consensus. The value must be at least `1`. |
| `--snow-optimal-processing` | `AVAGO_SNOW_OPTIMAL_PROCESSING` | int | `50` | Optimal number of processing items in consensus. The value must be at least `1`. |
| `--snow-max-processing` | `AVAGO_SNOW_MAX_PROCESSING` | int | `1024` | Maximum number of processing items to be considered healthy. Reports unhealthy if more than this number of items are outstanding. The value must be at least `1`. |
| `--snow-max-time-processing` | `AVAGO_SNOW_MAX_TIME_PROCESSING` | duration | `2m` | Maximum amount of time an item should be processing and still be healthy. Reports unhealthy if there is an item processing for longer than this duration. The value must be greater than `0`. |
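As a hedged sketch only (the values below are illustrative, not recommendations), a node on a local test network might lower these thresholds so a handful of validators can finalize quickly, while still respecting the constraints above (`alpha` greater than `k/2`, repolls at most the commit threshold):
```bash
# Illustrative Snow parameters for a small local network only -- never use these on Fuji or Mainnet
./avalanchego \
  --network-id=local \
  --snow-sample-size=2 \
  --snow-quorum-size=2 \
  --snow-commit-threshold=4 \
  --snow-concurrent-repolls=1
```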
### ProposerVM
ProposerVM configuration.
| Flag | Env Var | Type | Default | Description |
| --------------------------------- | ------------------------------------- | -------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `--proposervm-use-current-height` | `AVAGO_PROPOSERVM_USE_CURRENT_HEIGHT` | boolean | `false` | Have the ProposerVM always report the last accepted P-chain block height. |
| `--proposervm-min-block-delay` | `AVAGO_PROPOSERVM_MIN_BLOCK_DELAY` | duration | `1s` | The minimum delay to enforce when building a snowman++ block for the primary network chains and the default minimum delay for subnets. A non-default value is only suggested for non-production nodes. |
### Health Checks
Health monitoring configuration.
| Flag | Env Var | Type | Default | Description |
| ---------------------------------- | -------------------------------------- | -------- | ------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `--health-check-frequency` | `AVAGO_HEALTH_CHECK_FREQUENCY` | duration | `30s` | Health check runs with this frequency. |
| `--health-check-averager-halflife` | `AVAGO_HEALTH_CHECK_AVERAGER_HALFLIFE` | duration | `10s` | Half life of averagers used in health checks (to measure the rate of message failures, for example.) Larger value -> less volatile calculation of averages. |
### Network Configuration
Advanced network settings.
| Flag | Env Var | Type | Default | Description |
| --------------------------------------------------- | ------------------------------------------------------- | -------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `--network-allow-private-ips`                       | `AVAGO_NETWORK_ALLOW_PRIVATE_IPS`                       | boolean  | `true`  | Allows the node to connect to peers with private IPs. |
| `--network-compression-type` | `AVAGO_NETWORK_COMPRESSION_TYPE` | string | `gzip` | The type of compression to use when sending messages to peers. Must be one of \`gzip\`, \`zstd\`, \`none\`. Nodes can handle inbound \`gzip\` compressed messages but by default send \`zstd\` compressed messages. |
| `--network-initial-timeout` | `AVAGO_NETWORK_INITIAL_TIMEOUT` | duration | `5s` | Initial timeout value of the adaptive timeout manager. |
| `--network-initial-reconnect-delay`                 | `AVAGO_NETWORK_INITIAL_RECONNECT_DELAY`                 | duration | `1s`    | Initial delay to wait before attempting to reconnect to a peer. |
| `--network-max-reconnect-delay`                     | `AVAGO_NETWORK_MAX_RECONNECT_DELAY`                     | duration | `1h`    | Maximum delay to wait before attempting to reconnect to a peer. |
| `--network-minimum-timeout` | `AVAGO_NETWORK_MINIMUM_TIMEOUT` | duration | `2s` | Minimum timeout value of the adaptive timeout manager. |
| `--network-maximum-timeout` | `AVAGO_NETWORK_MAXIMUM_TIMEOUT` | duration | `10s` | Maximum timeout value of the adaptive timeout manager. |
| `--network-maximum-inbound-timeout`                 | `AVAGO_NETWORK_MAXIMUM_INBOUND_TIMEOUT`                 | duration | `10s`   | Maximum timeout value of an inbound message. Defines the duration within which an incoming message must be fulfilled. Incoming messages containing a deadline higher than this value will be overridden with this value. |
| `--network-timeout-halflife` | `AVAGO_NETWORK_TIMEOUT_HALFLIFE` | duration | `5m` | Half life used when calculating average network latency. Larger value -> less volatile network latency calculation. |
| `--network-timeout-coefficient` | `AVAGO_NETWORK_TIMEOUT_COEFFICIENT` | float | `2` | Requests to peers will time out after \[network-timeout-coefficient] \* \[average request latency]. |
| `--network-read-handshake-timeout` | `AVAGO_NETWORK_READ_HANDSHAKE_TIMEOUT` | duration | `15s` | Timeout value for reading handshake messages. |
| `--network-ping-timeout` | `AVAGO_NETWORK_PING_TIMEOUT` | duration | `30s` | Timeout value for Ping-Pong with a peer. |
| `--network-ping-frequency` | `AVAGO_NETWORK_PING_FREQUENCY` | duration | `22.5s` | Frequency of pinging other peers. |
| `--network-health-min-conn-peers` | `AVAGO_NETWORK_HEALTH_MIN_CONN_PEERS` | uint | `1` | Node will report unhealthy if connected to less than this many peers. |
| `--network-health-max-time-since-msg-received` | `AVAGO_NETWORK_HEALTH_MAX_TIME_SINCE_MSG_RECEIVED` | duration | `1m` | Node will report unhealthy if it hasn't received a message for this amount of time. |
| `--network-health-max-time-since-msg-sent`          | `AVAGO_NETWORK_HEALTH_MAX_TIME_SINCE_MSG_SENT`          | duration | `1m`    | Node will report unhealthy if it hasn't sent a message for this amount of time. |
| `--network-health-max-portion-send-queue-full` | `AVAGO_NETWORK_HEALTH_MAX_PORTION_SEND_QUEUE_FULL` | float | `0.9` | Node will report unhealthy if its send queue is more than this portion full. Must be in \[0,1]. |
| `--network-health-max-send-fail-rate` | `AVAGO_NETWORK_HEALTH_MAX_SEND_FAIL_RATE` | float | `0.25` | Node will report unhealthy if more than this portion of message sends fail. Must be in \[0,1]. |
| `--network-health-max-outstanding-request-duration` | `AVAGO_NETWORK_HEALTH_MAX_OUTSTANDING_REQUEST_DURATION` | duration | `5m` | Node reports unhealthy if there has been a request outstanding for this duration. |
| `--network-max-clock-difference` | `AVAGO_NETWORK_MAX_CLOCK_DIFFERENCE` | duration | `1m` | Max allowed clock difference value between this node and peers. |
| `--network-require-validator-to-connect` | `AVAGO_NETWORK_REQUIRE_VALIDATOR_TO_CONNECT` | boolean | `false` | If true, this node will only maintain a connection with another node if this node is a validator, the other node is a validator, or the other node is a beacon. |
| `--network-tcp-proxy-enabled` | `AVAGO_NETWORK_TCP_PROXY_ENABLED` | boolean | `false` | Require all P2P connections to be initiated with a TCP proxy header. |
| `--network-tcp-proxy-read-timeout` | `AVAGO_NETWORK_TCP_PROXY_READ_TIMEOUT` | duration | `3s` | Maximum duration to wait for a TCP proxy header. |
| `--network-outbound-connection-timeout` | `AVAGO_NETWORK_OUTBOUND_CONNECTION_TIMEOUT` | duration | `30s` | Timeout while dialing a peer. |
### Message Rate-Limiting
These flags govern rate-limiting of inbound and outbound messages. For more information on rate-limiting and the flags below, see package `throttling` in AvalancheGo.
#### CPU Based Rate-Limiting
Rate-limiting based on how much CPU usage a peer causes.
| Flag | Env Var | Type | Default | Description |
| ------------------------------------------------------- | ----------------------------------------------------------- | -------- | ------------ | ------------------------------------------------------------------------------------------------------------------------------- |
| `--throttler-inbound-cpu-validator-alloc`               | `AVAGO_THROTTLER_INBOUND_CPU_VALIDATOR_ALLOC`               | float    | half of CPUs | Number of CPUs allocated for use by validators. Value should be in range (0, total core count]. |
| `--throttler-inbound-cpu-max-recheck-delay` | `AVAGO_THROTTLER_INBOUND_CPU_MAX_RECHECK_DELAY` | duration | `5s` | In the CPU rate-limiter, check at least this often whether the node's CPU usage has fallen to an acceptable level. |
| `--throttler-inbound-disk-max-recheck-delay` | `AVAGO_THROTTLER_INBOUND_DISK_MAX_RECHECK_DELAY` | duration | `5s` | In the disk-based network throttler, check at least this often whether the node's disk usage has fallen to an acceptable level. |
| `--throttler-inbound-cpu-max-non-validator-usage` | `AVAGO_THROTTLER_INBOUND_CPU_MAX_NON_VALIDATOR_USAGE` | float | 80% of CPUs | Number of CPUs that if fully utilized, will rate limit all non-validators. Value should be in range \[0, total core count]. |
| `--throttler-inbound-cpu-max-non-validator-node-usage` | `AVAGO_THROTTLER_INBOUND_CPU_MAX_NON_VALIDATOR_NODE_USAGE` | float | CPUs / 8 | Maximum number of CPUs that a non-validator can utilize. Value should be in range \[0, total core count]. |
| `--throttler-inbound-disk-validator-alloc` | `AVAGO_THROTTLER_INBOUND_DISK_VALIDATOR_ALLOC` | float | `1000 GiB/s` | Maximum number of disk reads/writes per second to allocate for use by validators. Must be > 0. |
| `--throttler-inbound-disk-max-non-validator-usage` | `AVAGO_THROTTLER_INBOUND_DISK_MAX_NON_VALIDATOR_USAGE` | float | `1000 GiB/s` | Number of disk reads/writes per second that, if fully utilized, will rate limit all non-validators. Must be >= 0. |
| `--throttler-inbound-disk-max-non-validator-node-usage` | `AVAGO_THROTTLER_INBOUND_DISK_MAX_NON_VALIDATOR_NODE_USAGE` | float | `1000 GiB/s` | Maximum number of disk reads/writes per second that a non-validator can utilize. Must be >= 0. |
#### Bandwidth Based Rate-Limiting
Rate-limiting based on the bandwidth a peer uses.
| Flag | Env Var | Type | Default | Description |
| ---------------------------------------------- | -------------------------------------------------- | ---- | ------- | ------------------------------------------------------------------------------------------------------------------ |
| `--throttler-inbound-bandwidth-refill-rate` | `AVAGO_THROTTLER_INBOUND_BANDWIDTH_REFILL_RATE` | uint | `512` | Max average inbound bandwidth usage of a peer, in bytes per second. See interface `throttling.BandwidthThrottler`. |
| `--throttler-inbound-bandwidth-max-burst-size` | `AVAGO_THROTTLER_INBOUND_BANDWIDTH_MAX_BURST_SIZE` | uint | `2 MiB` | Max inbound bandwidth a node can use at once. See interface `throttling.BandwidthThrottler`. |
#### Message Size Based Rate-Limiting
Rate-limiting based on the total size, in bytes, of unprocessed messages.
| Flag | Env Var | Type | Default | Description |
| --------------------------------------------- | ------------------------------------------------- | ---- | -------- | ------------------------------------------------------------------------------------------------------ |
| `--throttler-inbound-at-large-alloc-size` | `AVAGO_THROTTLER_INBOUND_AT_LARGE_ALLOC_SIZE` | uint | `6 MiB` | Size, in bytes, of at-large allocation in the inbound message throttler. |
| `--throttler-inbound-validator-alloc-size` | `AVAGO_THROTTLER_INBOUND_VALIDATOR_ALLOC_SIZE` | uint | `32 MiB` | Size, in bytes, of validator allocation in the inbound message throttler. |
| `--throttler-inbound-node-max-at-large-bytes` | `AVAGO_THROTTLER_INBOUND_NODE_MAX_AT_LARGE_BYTES` | uint | `2 MiB` | Maximum number of bytes a node can take from the at-large allocation of the inbound message throttler. |
#### Message Based Rate-Limiting
Rate-limiting based on the number of unprocessed messages.
| Flag | Env Var | Type | Default | Description |
| ---------------------------------------------- | -------------------------------------------------- | ---- | ------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `--throttler-inbound-node-max-processing-msgs` | `AVAGO_THROTTLER_INBOUND_NODE_MAX_PROCESSING_MSGS` | uint | `1024` | Node will stop reading messages from a peer when it is processing this many messages from the peer. Will resume reading messages from the peer when it is processing less than this many messages. |
#### Outbound Rate-Limiting
Rate-limiting for outbound messages.
| Flag | Env Var | Type | Default | Description |
| ---------------------------------------------- | -------------------------------------------------- | ---- | -------- | ------------------------------------------------------------------------------------------------------- |
| `--throttler-outbound-at-large-alloc-size` | `AVAGO_THROTTLER_OUTBOUND_AT_LARGE_ALLOC_SIZE` | uint | `32 MiB` | Size, in bytes, of at-large allocation in the outbound message throttler. |
| `--throttler-outbound-validator-alloc-size` | `AVAGO_THROTTLER_OUTBOUND_VALIDATOR_ALLOC_SIZE` | uint | `32 MiB` | Size, in bytes, of validator allocation in the outbound message throttler. |
| `--throttler-outbound-node-max-at-large-bytes` | `AVAGO_THROTTLER_OUTBOUND_NODE_MAX_AT_LARGE_BYTES` | uint | `2 MiB` | Maximum number of bytes a node can take from the at-large allocation of the outbound message throttler. |
### Connection Rate-Limiting
| Flag | Env Var | Type | Default | Description |
| ----------------------------------------------------------- | --------------------------------------------------------------- | -------- | ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `--network-inbound-connection-throttling-cooldown` | `AVAGO_NETWORK_INBOUND_CONNECTION_THROTTLING_COOLDOWN` | duration | `10s` | Node will upgrade an inbound connection from a given IP at most once within this duration. If 0 or negative, will not consider recency of last upgrade when deciding whether to upgrade. |
| `--network-inbound-connection-throttling-max-conns-per-sec` | `AVAGO_NETWORK_INBOUND_CONNECTION_THROTTLING_MAX_CONNS_PER_SEC` | uint | `512` | Node will accept at most this many inbound connections per second. |
| `--network-outbound-connection-throttling-rps` | `AVAGO_NETWORK_OUTBOUND_CONNECTION_THROTTLING_RPS` | uint | `50` | Node makes at most this many outgoing peer connection attempts per second. |
### Peer List Gossiping
Nodes gossip peers to each other so that each node can have an up-to-date peer list. A node gossips `--network-peer-list-num-validator-ips` validator IPs to `--network-peer-list-validator-gossip-size` validators, `--network-peer-list-non-validator-gossip-size` non-validators and `--network-peer-list-peers-gossip-size` peers every `--network-peer-list-gossip-frequency`.
| Flag | Env Var | Type | Default | Description |
| ----------------------------------------------- | --------------------------------------------------- | -------- | ------- | ---------------------------------------------------------------------------------------------------- |
| `--network-peer-list-num-validator-ips` | `AVAGO_NETWORK_PEER_LIST_NUM_VALIDATOR_IPS` | int | `15` | Number of validator IPs to gossip to other nodes. |
| `--network-peer-list-validator-gossip-size` | `AVAGO_NETWORK_PEER_LIST_VALIDATOR_GOSSIP_SIZE` | int | `20` | Number of validators that the node will gossip peer list to. |
| `--network-peer-list-non-validator-gossip-size` | `AVAGO_NETWORK_PEER_LIST_NON_VALIDATOR_GOSSIP_SIZE` | int | `0` | Number of non-validators that the node will gossip peer list to. |
| `--network-peer-list-peers-gossip-size`         | `AVAGO_NETWORK_PEER_LIST_PEERS_GOSSIP_SIZE`         | int      | `0`     | Total number of peers (validators and non-validators) that the node will gossip the peer list to. |
| `--network-peer-list-gossip-frequency` | `AVAGO_NETWORK_PEER_LIST_GOSSIP_FREQUENCY` | duration | `1m` | Frequency to gossip peers to other nodes. |
| `--network-peer-read-buffer-size` | `AVAGO_NETWORK_PEER_READ_BUFFER_SIZE` | int | `8 KiB` | Size of the buffer that peer messages are read into (there is one buffer per peer). |
| `--network-peer-write-buffer-size` | `AVAGO_NETWORK_PEER_WRITE_BUFFER_SIZE` | int | `8 KiB` | Size of the buffer that peer messages are written into (there is one buffer per peer). |
### Resource Usage Tracking
| Flag | Env Var | Type | Default | Description |
| --------------------------------------------------------- | ------------------------------------------------------------- | -------- | ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `--meter-vms-enabled` | `AVAGO_METER_VMS_ENABLED` | boolean | `true` | Enable Meter VMs to track VM performance with more granularity. |
| `--system-tracker-frequency` | `AVAGO_SYSTEM_TRACKER_FREQUENCY` | duration | `500ms` | Frequency to check the real system usage of tracked processes. More frequent checks -> usage metrics are more accurate, but more expensive to track. |
| `--system-tracker-processing-halflife` | `AVAGO_SYSTEM_TRACKER_PROCESSING_HALFLIFE` | duration | `15s` | Half life to use for the processing requests tracker. Larger half life -> usage metrics change more slowly. |
| `--system-tracker-cpu-halflife` | `AVAGO_SYSTEM_TRACKER_CPU_HALFLIFE` | duration | `15s` | Half life to use for the CPU tracker. Larger half life -> CPU usage metrics change more slowly. |
| `--system-tracker-disk-halflife` | `AVAGO_SYSTEM_TRACKER_DISK_HALFLIFE` | duration | `1m` | Half life to use for the disk tracker. Larger half life -> disk usage metrics change more slowly. |
| `--system-tracker-disk-required-available-space`          | `AVAGO_SYSTEM_TRACKER_DISK_REQUIRED_AVAILABLE_SPACE`          | uint     | `536870912`  | Minimum number of available bytes on disk, under which the node will shut down. |
| `--system-tracker-disk-warning-threshold-available-space` | `AVAGO_SYSTEM_TRACKER_DISK_WARNING_THRESHOLD_AVAILABLE_SPACE` | uint | `1073741824` | Warning threshold for the number of available bytes on disk, under which the node will be considered unhealthy. Must be >= `--system-tracker-disk-required-available-space`. |
### Plugins
| Flag | Env Var | Type | Default | Description |
| -------------- | ------------------ | ------ | ---------------------------- | -------------------------------------------------------------------------------------- |
| `--plugin-dir` | `AVAGO_PLUGIN_DIR` | string | `$HOME/.avalanchego/plugins` | Sets the directory for [VM plugins](https://build.avax.network/docs/virtual-machines). |
### Virtual Machine (VM) Configs
| Flag | Env Var | Type | Default | Description |
| --------------------------- | ------------------------------- | ------ | ----------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `--vm-aliases-file` | `AVAGO_VM_ALIASES_FILE` | string | `~/.avalanchego/configs/vms/aliases.json` | Path to JSON file that defines aliases for Virtual Machine IDs. This flag is ignored if `--vm-aliases-file-content` is specified. Example content: `{"tGas3T58KzdjLHhBDMnH2TvrddhqTji5iZAMZ3RXs2NLpSnhH": ["timestampvm", "timerpc"]}`. The above example aliases the VM whose ID is `"tGas3T58KzdjLHhBDMnH2TvrddhqTji5iZAMZ3RXs2NLpSnhH"` to `"timestampvm"` and `"timerpc"`. |
| `--vm-aliases-file-content` | `AVAGO_VM_ALIASES_FILE_CONTENT` | string | - | As an alternative to `--vm-aliases-file`, it allows specifying base64 encoded aliases for Virtual Machine IDs. |
### Indexing
| Flag | Env Var | Type | Default | Description |
| -------------------------- | ------------------------------ | ------- | ------- | --------------------------------------------------------------------------------------------------------------------------- |
| `--index-allow-incomplete` | `AVAGO_INDEX_ALLOW_INCOMPLETE` | boolean | `false` | If true, allow running the node in such a way that could cause an index to miss transactions. Ignored if index is disabled. |
### Router
| Flag | Env Var | Type | Default | Description |
| ------------------------------------------ | ---------------------------------------------- | ----- | ------- | ------------------------------------------------------------------------------------------------------------------------------ |
| `--router-health-max-drop-rate` | `AVAGO_ROUTER_HEALTH_MAX_DROP_RATE` | float | `1` | Node reports unhealthy if the router drops more than this portion of messages. |
| `--router-health-max-outstanding-requests` | `AVAGO_ROUTER_HEALTH_MAX_OUTSTANDING_REQUESTS` | uint | `1024` | Node reports unhealthy if there are more than this many outstanding consensus requests (Get, PullQuery, etc.) over all chains. |
## Additional Resources
* [Full documentation](https://build.avax.network/docs/quick-start)
* [Example configurations](https://github.com/ava-labs/avalanchego/tree/master/config)
* [Network upgrade schedules](https://build.avax.network/docs/quick-start/primary-network)
# Backup and Restore
URL: /docs/nodes/maintain/backup-restore
Once you have your node up and running, it's time to prepare for disaster recovery. Should your machine ever have a catastrophic failure due to either hardware or software issues, or even a case of natural disaster, it's best to be prepared for such a situation by making a backup.
When running, a complete node installation along with the database can grow to be multiple gigabytes in size. Having to back up and restore such a large volume of data can be expensive, complicated and time-consuming. Luckily, there is a better way.
Instead of having to back up and restore everything, we need to back up only what is essential, that is, those files that cannot be reconstructed because they are unique to your node. For an AvalancheGo node, the unique files are those that identify your node on the network, in other words, the files that define your NodeID.
Even if your node is a validator on the network and has multiple delegations on it, you don't need to worry about backing up anything else, because the validation and delegation transactions are also stored on the blockchain and will be restored during bootstrapping, along with the rest of the blockchain data.
The installation itself can be easily recreated by installing the node on a new machine, and all the remaining gigabytes of blockchain data can be easily recreated by the process of bootstrapping, which copies the data over from other network peers. However, if you would like to speed up the process, see the [Database Backup and Restore section](#database).
## NodeID[](#nodeid "Direct link to heading")
If more than one running node shares the same NodeID, communications from other nodes in the Avalanche network to that NodeID will be routed randomly to one of them. If the NodeID belongs to a validator, this will dramatically impact the validator's uptime calculation and will very likely disqualify the validator from receiving staking rewards. Please make sure only one node with a given NodeID runs at any one time.
NodeID is a unique identifier that differentiates your node from all the other peers on the network. It's a string formatted like `NodeID-5mb46qkSBj81k9g9e4VFjGGSbaaSLFRzD`. You can look up the technical background of how the NodeID is constructed [here](/docs/api-reference/standards/cryptographic-primitives#tls-addresses). In essence, NodeID is defined by two files:
* `staker.crt`
* `staker.key`
NodePOP is your node's BLS key and proof of possession. Nodes must register a BLS key to act as a validator on the Primary Network. Your node's POP is logged on startup and is accessible via the [getNodeID](/docs/api-reference/info-api#infogetnodeid) API endpoint.
* `publicKey` is the 48 byte hex representation of the BLS key.
* `proofOfPossession` is the 96 byte hex representation of the BLS signature.
NodePOP is defined by the `signer.key` file.
In the default installation, they can be found in the working directory, specifically in `~/.avalanchego/staking/`. All we need to do to recreate the node on another machine is to run a new installation with those same three files.
If `staker.key` and `staker.crt` are removed from a node and the node is restarted, they will be recreated and a new NodeID will be assigned.
If the `signer.key` is regenerated, the node will lose its previous BLS identity, which includes its public key and proof of possession. This change means that the node's former identity on the network will no longer be recognized, affecting its ability to participate in the consensus mechanism as before. Consequently, the node may lose its established reputation and any associated staking rewards.
If you have users defined in the keystore of your node, then you need to back up and restore those as well. [Keystore API](/docs/api-reference/keystore-api) has methods that can be used to export and import user keys. Note that Keystore API is used by developers only and not intended for use in production nodes. If you don't know what a keystore API is and have not used it, you don't need to worry about it.
### Backup[](#backup "Direct link to heading")
To back up your node, we need to store the `staker.crt`, `staker.key` and `signer.key` files somewhere safe and private, preferably on a different computer.
If someone gets a hold of your staker files, they still cannot get to your funds, as they are controlled by the wallet private keys, not by the node. But, they could re-create your node somewhere else, and depending on the circumstances make you lose the staking rewards. So make sure your staker files are secure.
If someone gains access to your `signer.key`, they could potentially sign transactions on behalf of your node, which might disrupt the operations and integrity of your node on the network.
Let's get the files off the machine running the node.
#### From Local Node[](#from-local-node "Direct link to heading")
If you're running the node locally, on your desktop computer, just navigate to where the files are and copy them somewhere safe.
On a default Linux installation, the path to them will be `/home/USERNAME/.avalanchego/staking/`, where `USERNAME` needs to be replaced with the actual username running the node. Select and copy the files from there to a backup location. You don't need to stop the node to do that.
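For example, a minimal sketch assuming the default path and an existing backup directory:
```bash
# Copy the node's identity files from the default staking directory to a backup location
cp ~/.avalanchego/staking/{staker.crt,staker.key,signer.key} /path/to/backup/location/
```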
#### From Remote Node Using `scp`[](#from-remote-node-using-scp "Direct link to heading")
`scp` is a 'secure copy' command line program, available out of the box on Linux and macOS computers. There is also a Windows version, `pscp`, as part of the [PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html) package. If using `pscp`, in the following commands replace each usage of `scp` with `pscp -scp`.
To copy the files from the node, you will need to be able to remotely log into the machine. You can use account password, but the secure and recommended way is to use the SSH keys. The procedure for acquiring and setting up SSH keys is highly dependent on your cloud provider and machine configuration. You can refer to our [Amazon Web Services](/docs/nodes/on-third-party-services/amazon-web-services) and [Microsoft Azure](/docs/nodes/on-third-party-services/microsoft-azure) setup guides for those providers. Other providers will have similar procedures.
When you have means of remote login into the machine, you can copy the files over with the following command:
```bash
scp -r ubuntu@PUBLICIP:/home/ubuntu/.avalanchego/staking ~/avalanche_backup
```
This assumes the username on the machine is `ubuntu`; replace it with the correct username in both places if it is different. Also, replace `PUBLICIP` with the actual public IP of the machine. If `scp` doesn't automatically use your downloaded SSH key, you can point to it manually:
```bash
scp -i /path/to/the/key.pem -r ubuntu@PUBLICIP:/home/ubuntu/.avalanchego/staking ~/avalanche_backup
```
Once executed, this command will create the `avalanche_backup` directory and place the three identity files in it. You need to store them somewhere safe.
### Restore[](#restore "Direct link to heading")
To restore your node from a backup, we need to do the reverse: restore `staker.key`, `staker.crt` and `signer.key` from the backup to the working directory of the new node.
First, we need to do the usual [installation](/docs/nodes/using-install-script/installing-avalanche-go) of the node. This will create a new NodeID, a new BLS key and a new BLS signature, which we need to replace. When the node is installed correctly, log into the machine where the node is running and stop it:
```bash
sudo systemctl stop avalanchego
```
We're ready to restore the node.
#### To Local Node[](#to-local-node "Direct link to heading")
If you're running the node locally, just copy the `staker.key`, `staker.crt` and `signer.key` files from the backup location into the working directory, which on the default Linux installation will be `/home/USERNAME/.avalanchego/staking/`. Replace `USERNAME` with the actual username used to run the node.
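For example, a sketch assuming the default staking path and a backup location like the one used earlier:
```bash
# Copy the backed up identity files back into the node's staking directory
cp /path/to/backup/location/{staker.crt,staker.key,signer.key} ~/.avalanchego/staking/
```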
#### To Remote Node Using `scp`[](#to-remote-node-using-scp "Direct link to heading")
Again, the process is just the reverse operation. Using `scp` we need to copy the `staker.key`, `staker.crt` and `signer.key` files from the backup location into the remote working directory. Assuming the backed up files are located in the directory where the above backup procedure placed them:
```bash
scp ~/avalanche_backup/{staker.*,signer.key} ubuntu@PUBLICIP:/home/ubuntu/.avalanchego/staking
```
Or if you need to specify the path to the SSH key:
```bash
scp -i /path/to/the/key.pem ~/avalanche_backup/{staker.*,signer.key} ubuntu@PUBLICIP:/home/ubuntu/.avalanchego/staking
```
And again, replace `ubuntu` with the correct username if different, and `PUBLICIP` with the actual public IP of the machine running the node, as well as the path to the SSH key if used.
#### Restart the Node and Verify[](#restart-the-node-and-verify "Direct link to heading")
Once the files have been replaced, log into the machine and start the node using:
```bash
sudo systemctl start avalanchego
```
You can now check that the node is restored with the correct NodeID and NodePOP by issuing the [getNodeID](/docs/api-reference/info-api#infogetnodeid) API call in the same console you ran the previous command:
```bash
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"info.getNodeID"
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```
You should see your original NodeID and NodePOP (BLS key and BLS signature).
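An illustrative, truncated response looks roughly like the following (the hex values are placeholders; your node returns its real BLS public key and proof of possession):
```json
{
  "jsonrpc": "2.0",
  "result": {
    "nodeID": "NodeID-5mb46qkSBj81k9g9e4VFjGGSbaaSLFRzD",
    "nodePOP": {
      "publicKey": "0x...",
      "proofOfPossession": "0x..."
    }
  },
  "id": 1
}
```
If the values match the ones your node had before, the restore process is done.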
## Database[](#database "Direct link to heading")
Normally, when starting a new node, you can just bootstrap from scratch. However, there are situations when you may prefer to reuse an existing database (for example, to preserve keystore records or reduce sync time).
This tutorial will walk you through compressing your node's DB and moving it to another computer using `zip` and `scp`.
### Database Backup[](#database-backup "Direct link to heading")
First, make sure to stop AvalancheGo, run:
```bash
sudo systemctl stop avalanchego
```
You must stop the Avalanche node before you back up the database, otherwise data could become corrupted.
Once the node is stopped, you can `zip` the database directory to reduce the size of the backup and speed up the transfer using `scp`:
```bash
zip -r avalanche_db_backup.zip .avalanchego/db
```
*Note: It may take > 30 minutes to zip the node's DB.*
Next, you can transfer the backup to another machine:
```bash
scp -r ubuntu@PUBLICIP:/home/ubuntu/avalanche_db_backup.zip ~/avalanche_db_backup.zip
```
This assumes the username on the machine is `ubuntu`; replace it with the correct username in both places if it is different. Also, replace `PUBLICIP` with the actual public IP of the machine. If `scp` doesn't automatically use your downloaded SSH key, you can point to it manually:
```bash
scp -i /path/to/the/key.pem -r ubuntu@PUBLICIP:/home/ubuntu/avalanche_db_backup.zip ~/avalanche_db_backup.zip
```
Once executed, this command will place the `avalanche_db_backup.zip` file in your home directory.
### Database Restore[](#database-restore "Direct link to heading")
*This tutorial assumes you have already completed "Database Backup" and have a backup at `~/avalanche_db_backup.zip`.*
First, we need to do the usual [installation](/docs/nodes/using-install-script/installing-avalanche-go) of the node. When the node is installed correctly, log into the machine where the node is running and stop it:
```bash
sudo systemctl stop avalanchego
```
You must stop the Avalanche node before you restore the database, otherwise data could become corrupted.
We're ready to restore the database. First, let's move the DB on the existing node (you can remove this old DB later if the restore was successful):
```bash
mv .avalanchego/db .avalanchego/db-old
```
Next, we'll unzip the backup we moved from another node (this will place the unzipped files in `~/.avalanchego/db` when the command is run in the home directory):
```bash
unzip avalanche_db_backup.zip
```
After the database has been restored on a new node, use this command to start the node:
```bash
sudo systemctl start avalanchego
```
The node should now be running from the database on the new instance. To check that everything is in order and that the node is not bootstrapping from scratch (which would indicate a problem), use:
```bash
sudo journalctl -u avalanchego -f
```
The node should be catching up to the network, fetching a small number of blocks (those produced since the node was stopped for the backup) before resuming normal operation.
Once the backup has been restored and is working as expected, the zip can be deleted:
```bash
rm avalanche_db_backup.zip
```
### Database Direct Copy[](#database-direct-copy "Direct link to heading")
You may be in a situation where you don't have enough disk space to create the archive containing the whole database, so you cannot complete the backup process as described previously.
In that case, you can still migrate your database to a new computer, by using a different approach: `direct copy`. Instead of creating the archive, moving the archive and unpacking it, we can do all of that on the fly.
To do so, you will need `ssh` access from the destination machine (where you want the database to end up) to the source machine (where the database currently is). Setting up `ssh` is the same as explained for `scp` earlier in the document.
Same as shown previously, you need to stop the node (on both machines):
```bash
sudo systemctl stop avalanchego
```
You must stop the Avalanche node before you back up the database, otherwise data could become corrupted.
Then, on the destination machine, change to a directory where you would like to put the database files and enter the following command:
```bash
ssh -i /path/to/the/key.pem ubuntu@PUBLICIP 'tar czf - .avalanchego/db' | tar xvzf - -C .
```
Make sure to use the correct path to the key and the correct IP of the source machine. This will compress the database, but instead of writing it to a file, it will pipe it over `ssh` directly to the destination machine, where it will be decompressed and written to disk. The process can take a long time; make sure it completes before continuing.
After copying is done, all you need to do now is move the database to the correct location on the destination machine. Assuming there is a default AvalancheGo node installation, we remove the old database and replace it with the new one:
```bash
# Remove the old database, then move the freshly copied one into place
rm -rf ~/.avalanchego/db
mv .avalanchego/db ~/.avalanchego/db
```
You can now start the node on the destination machine:
```bash
sudo systemctl start avalanchego
```
The node should now be running from the copied database. To check that everything is in order and that the node is not bootstrapping from scratch (which would indicate a problem), use:
```bash
sudo journalctl -u avalanchego -f
```
The node should be catching up to the network, fetching a small number of blocks (those produced since the node was stopped) before resuming normal operation.
## Summary[](#summary "Direct link to heading")
An essential part of securing your node is a backup that enables full and painless restoration of your node. By following this tutorial, you can rest easy knowing that should you ever need to restore your node from scratch, you can do so easily and quickly.
If you have any problems following this tutorial, comments you want to share with us or just want to chat, you can reach us on our [Discord](https://chat.avalabs.org/) server.
# Node Bootstrap
URL: /docs/nodes/maintain/bootstrapping
Node Bootstrap is the process where a node *securely* downloads linear chain blocks to recreate the latest state of the chain locally.
Bootstrap must guarantee that the local state of a node is in sync with the state of other valid nodes. Once bootstrap is completed, a node has the latest state of the chain and can verify new incoming transactions and reach consensus with other nodes, collectively moving the chains forward.
Bootstrapping a node is a multi-step process which requires downloading the chains required by the Primary Network (that is, the C-Chain, P-Chain, and X-Chain), as well as the chains required by any additional Avalanche L1s that the node explicitly tracks.
This document covers the high-level technical details of how bootstrapping works. This document glosses over some specifics, but the [AvalancheGo](https://github.com/ava-labs/avalanchego) codebase is open-source and is available for curious-minded readers to learn more.
## Validators and Where to Find Them[](#validators-and-where-to-find-them "Direct link to heading")
Bootstrapping is all about downloading all previously accepted containers *securely* so a node can have the latest correct state of the chain. A node can't arbitrarily trust any source - a malicious actor could provide malicious blocks, corrupting the bootstrapping node's local state, and making it impossible for the node to correctly validate the network and reach consensus with other correct nodes.
What's the most reliable source of information in the Avalanche ecosystem? It's a *large enough* majority of validators. Therefore, the first step of bootstrapping is finding a sufficient number of validators to download containers from.
The P-Chain is responsible for all platform-level operations, including staking events that modify an Avalanche L1's validator set. Whenever any chain (aside from the P-Chain itself) bootstraps, it requests an up-to-date validator set for that Avalanche L1 (Primary Network is an Avalanche L1 too). Once the Avalanche L1's current validator set is known, the node can securely download containers from these validators to bootstrap the chain.
There is a caveat here: the validator set must be *up-to-date*. If a bootstrapping node's validator set is stale, the node may incorrectly believe that some nodes are still validators when their validation period has already expired. A node might unknowingly end up requesting blocks from non-validators which respond with malicious blocks that aren't safe to download.
**For this reason, every Avalanche node must fully bootstrap the P-chain first before moving on to the other Primary Network chains and other Avalanche L1s to guarantee that their validator sets are up-to-date**.
What about the P-chain? The P-chain can't ever have an up-to-date validator set before completing its bootstrap. To solve this chicken-and-egg situation the Avalanche Foundation maintains a trusted default set of validators called beacons (but users are free to configure their own). Beacon Node-IDs and IP addresses are listed in the [AvalancheGo codebase](https://github.com/ava-labs/avalanchego/blob/master/genesis/bootstrappers.json). Every node has the beacon list available from the start and can reach out to them as soon as it starts.
Validators are the only sources of truth for a blockchain. Validator availability is so key to the bootstrapping process that **bootstrapping is blocked until the node establishes a sufficient number of secure connections to validators**. If the node fails to reach a sufficient number within a given period of time, it shuts down, as no operation can be carried out safely.
## Bootstrapping the Blockchain[](#bootstrapping-the-blockchain "Direct link to heading")
Once a node is able to discover and connect to validator and beacon nodes, it's able to start bootstrapping the blockchain by downloading the individual containers.
One common misconception is that Avalanche blockchains are bootstrapped by retrieving containers starting at genesis and working up to the currently accepted frontier.
Instead, containers are downloaded from the accepted frontier downwards to genesis, and then their corresponding state transitions are executed upwards from genesis to the accepted frontier. The accepted frontier is the last accepted block for linear chains.
Why can't nodes simply download blocks in chronological order, starting from genesis upwards? The reason is efficiency: if nodes downloaded containers upwards they would only get a safety guarantee by polling a majority of validators for every single container. That's a lot of network traffic for a single container, and a node would still need to do that for each container in the chain.
Instead, if a node starts by securely retrieving the accepted frontier from a majority of honest nodes and then recursively fetches the parent containers from the accepted frontier down to genesis, it can cheaply check that containers are correct just by verifying their IDs. Each Avalanche container has the IDs of its parents (one block parent for linear chains) and an ID's integrity can be guaranteed cryptographically.
Let's dive deeper into the two bootstrap phases - frontier retrieval and container execution.
### Frontier Retrieval[](#frontier-retrieval "Direct link to heading")
The current frontier is retrieved by requesting it from validator or beacon nodes. Avalanche bootstrap is designed to be robust - it must be able to make progress even in the presence of slow validators or network failures. This process needs to be fault-tolerant to these types of failures, since bootstrapping may take quite some time to complete and network connections can be unreliable.
Bootstrap starts when a node has connected to a sufficient majority of validator stake: a node is able to start bootstrapping once it has connected to at least 75% of the total validator stake.
Seeders are the first set of peers that a node reaches out to when trying to figure out the current frontier. A subset of seeders is randomly sampled from the validator set. Seeders might be slow and provide a stale frontier, be malicious and return malicious container IDs, but they always provide an initial set of candidate frontiers to work with.
Once a node has received the candidate frontiers from its seeders, it polls **every network validator** to vet the candidate frontiers. It sends the list of candidate frontiers it received from the seeders to each validator, asking whether or not they know about these frontiers. Each validator responds with the subset of candidates it knows about, regardless of how up-to-date or stale the containers are. Each validator returns containers irrespective of their age so that bootstrap works even in the presence of a stale frontier.
Frontier retrieval is completed when at least one of the candidate frontiers is supported by at least 50% of the total validator stake. Multiple candidate frontiers may be supported by a majority of stake, after which the next phase, container fetching, starts.
At any point in these steps a network issue may occur, preventing a node from retrieving or validating frontiers. If this occurs, bootstrap restarts by sampling a new set of seeders and repeating the bootstrapping process, optimistically assuming that the network issue will go away.
### Containers Execution[](#containers-execution "Direct link to heading")
Once a node has at least one valid frontier, it starts downloading parent containers for each frontier. If it's the first time the node is running, it won't know about any containers and will try fetching all parent containers recursively from the accepted frontier down to genesis (unless [state sync](#state-sync) is enabled). If bootstrap has already run previously, some containers are already available locally and the node will stop as soon as it finds a known one.
A node first just fetches and parses containers. Once the chain is complete, the node executes them in chronological order starting from the earliest downloaded container to the accepted frontier. This allows the node to rebuild the full chain state and to eventually be in sync with the rest of the network.
## When Does Bootstrapping Finish?[](#when-does-bootstrapping-finish "Direct link to heading")
You've seen how [bootstrap works](#bootstrapping-the-blockchain) for a single chain. However, a node must bootstrap the chains in the Primary Network as well as the chains in each Avalanche L1 it tracks. This raises the questions: when is each of these chains bootstrapped, and when is a node done bootstrapping?
The P-chain is always the first to bootstrap before any other chain. Once the P-Chain has finished, all other chains start bootstrapping in parallel, connecting to their own validators independently of one another.
A node completes bootstrapping an Avalanche L1 once all of its corresponding chains have completed bootstrapping. Because the Primary Network is a special case of Avalanche L1 that includes the entire network, this applies to it as well as any other manually tracked Avalanche L1s.
Note that Avalanche L1s bootstrap independently of one another - so even if one Avalanche L1 has bootstrapped and is validating new transactions and adding new containers, other Avalanche L1s may still be bootstrapping in parallel.
Within a single Avalanche L1, however, bootstrapping isn't done until its last chain completes. A single chain can effectively stall a node from finishing an Avalanche L1's bootstrap if it has a sufficiently long history or if each operation is complex and time consuming. Even worse, other Avalanche L1 validators are continuously accepting new transactions and adding new containers on top of the previously known frontier, so a node that's slow to bootstrap can continuously fall behind the rest of the network.
Nodes mitigate this by restarting bootstrap for any chain that is blocked waiting for the remaining Avalanche L1 chains to finish bootstrapping. These chains repeat the frontier retrieval and container downloading phases to stay up-to-date with the Avalanche L1's ever-moving current frontier until the slowest chain has completed bootstrapping.
Once this is complete, a node is finally ready to validate the network.
## State Sync[](#state-sync "Direct link to heading")
The full node bootstrap process is long, and gets longer and longer over time as more and more containers are accepted. Nodes need to bootstrap a chain by reconstructing the full chain state locally - but downloading and executing each container isn't the only way to do this.
Starting from [AvalancheGo version 1.7.11](https://github.com/ava-labs/avalanchego/releases/tag/v1.7.11), nodes can use state sync to drastically cut down bootstrapping time on the C-Chain. Instead of executing each block, state sync uses cryptographic techniques to download and verify just the state associated with the current frontier. State-synced nodes can't serve every C-Chain block ever accepted, but they can safely retrieve the full C-Chain state needed to validate in a much shorter time. State sync also fetches the 256 blocks prior to the frontier to support the previous block hash operation code (`BLOCKHASH`).
State sync is currently only available for the C-chain. The P-chain and X-chain currently bootstrap by downloading all blocks. Note that irrespective of the bootstrap method used (including state sync), each chain is still blocked on all other chains in its Avalanche L1 completing their bootstrap before continuing into normal operation.
There is no config to state sync an archival node. If you need all of the historical state, you must not use state sync and must instead configure the node as an archival node.
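As a rough sketch, an archival C-Chain configuration (assuming the default chain config path described in the offline pruning section later in this document) would disable both state sync and pruning:

```bash
# Sketch only: create a C-Chain config that keeps full historical state.
# Adjust the path if you pass a custom --chain-config-dir to avalanchego.
mkdir -p ~/.avalanchego/configs/chains/C
cat > ~/.avalanchego/configs/chains/C/config.json <<'EOF'
{
  "state-sync-enabled": false,
  "pruning-enabled": false
}
EOF
```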
## Conclusions and FAQ[](#conclusions-and-faq "Direct link to heading")
If you got this far, you've hopefully gotten a better idea of what's going on when your node bootstraps. Here's a few frequently asked questions about bootstrapping.
### How Can I Get the ETA for Node Bootstrap?[](#how-can-i-get-the-eta-for-node-bootstrap "Direct link to heading")
Logs provide information about both container downloading and execution for each chain. Here is an example:
```bash
[02-16|17:31:42.950] INFO bootstrap/bootstrapper.go:494 fetching blocks {"numFetchedBlocks": 5000, "numTotalBlocks": 101357, "eta": "2m52s"}
[02-16|17:31:58.110] INFO bootstrap/bootstrapper.go:494 fetching blocks {"numFetchedBlocks": 10000, "numTotalBlocks": 101357, "eta": "3m40s"}
[02-16|17:32:04.554] INFO bootstrap/bootstrapper.go:494 fetching blocks {"numFetchedBlocks": 15000, "numTotalBlocks": 101357, "eta": "2m56s"}
...
[02-16|17:36:52.404] INFO queue/jobs.go:203 executing operations {"numExecuted": 17881, "numToExecute": 101357, "eta": "2m20s"}
[02-16|17:37:22.467] INFO queue/jobs.go:203 executing operations {"numExecuted": 35009, "numToExecute": 101357, "eta": "1m54s"}
[02-16|17:37:52.468] INFO queue/jobs.go:203 executing operations {"numExecuted": 52713, "numToExecute": 101357, "eta": "1m23s"}
```
Similar logs are emitted for the X and C chains, and for any chain in explicitly tracked Avalanche L1s.
### Why Chain Bootstrap ETA Keeps On Changing?[](#why-chain-bootstrap-eta-keeps-on-changing "Direct link to heading")
As you saw in the [bootstrap completion section](#when-does-bootstrapping-finish), an Avalanche L1 like the Primary Network completes once all of its chains finish bootstrapping. Some Avalanche L1 chains may have to wait for the slowest one to finish. They'll restart bootstrapping in the meantime, to make sure they don't fall too far behind the network's accepted frontier.
### What Order Do The Chains Bootstrap?[](#what-order-do-the-chains-bootstrap "Direct link to heading")
The 3 chains will bootstrap in the following order: P-chain, X-chain, C-chain.
### Why Are AvalancheGo APIs Disabled During Bootstrapping?[](#why-are-avalanchego-apis-disabled-during-bootstrapping "Direct link to heading")
AvalancheGo APIs are [explicitly disabled](https://github.com/ava-labs/avalanchego/blob/master/api/server/server.go#L367:L379) during bootstrapping. The reason is that if the node has not fully rebuilt its Avalanche L1s' state, it can't provide accurate information. AvalancheGo APIs are activated once bootstrap completes and the node transitions into its normal operating mode, accepting and validating transactions.
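While waiting, you can check a chain's bootstrap status from the node itself with the `info.isBootstrapped` call (the same call is used in the AWS section later in this document); for example:

```bash
# Check whether the X-Chain has finished bootstrapping on the local node
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id"     :1,
    "method" :"info.isBootstrapped",
    "params": {
        "chain":"X"
    }
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```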
# Enroll in Avalanche Notify
URL: /docs/nodes/maintain/enroll-in-avalanche-notify
To receive email alerts if a validator becomes unresponsive or out-of-date, sign up with the Avalanche Notify tool: [http://notify.avax.network](http://notify.avax.network/).
Avalanche Notify is an active monitoring system that checks a validator's responsiveness each minute.
An email alert is sent if a validator is down for 5 consecutive checks and when a validator recovers (is responsive for 5 checks in a row).
When signing up for email alerts, consider using a new, alias, or auto-forwarding email address to protect your privacy. Otherwise, it will be possible to link your NodeID to your email.
This tool is currently in BETA and validator alerts may erroneously be triggered, not triggered, or delayed. The best way to maximize the likelihood of earning staking rewards is to run redundant monitoring/alerting.
# Monitoring
URL: /docs/nodes/maintain/monitoring
Learn how to monitor an AvalancheGo node.
This tutorial demonstrates how to set up infrastructure to monitor an instance of [AvalancheGo](https://github.com/ava-labs/avalanchego). We will use:
* [Prometheus](https://prometheus.io/) to gather and store data
* [`node_exporter`](https://github.com/prometheus/node_exporter) to get information about the machine
* AvalancheGo's [Metrics API](/docs/api-reference/metrics-api) to get information about the node
* [Grafana](https://grafana.com/) to visualize data on a dashboard
* A set of pre-made [Avalanche dashboards](https://github.com/ava-labs/avalanche-monitoring/tree/main/grafana/dashboards)
## Prerequisites
* A running AvalancheGo node
* Shell access to the machine running the node
* Administrator privileges on the machine
This tutorial assumes you have Ubuntu 20.04 running on your node. Other Linux flavors that use `systemd` for running services and `apt-get` for package management might work but have not been tested. A community member has reported that it works on Debian 10; it might work on other Debian releases as well.
### Caveat: Security
The system as described here **should not** be opened to the public internet. Neither Prometheus nor Grafana as shown here is hardened against unauthorized access. Make sure that both of them are accessible only over a secured proxy, local network, or VPN. Setting that up is beyond the scope of this tutorial, but exercise caution. Bad security practices could lead to attackers gaining control over your node! It is your responsibility to follow proper security practices.
## Monitoring Installer Script[](#monitoring-installer-script "Direct link to heading")
In order to make node monitoring easier to install, we have made a script that does most of the work for you. To download and run the script, log into the machine the node runs on with a user that has administrator privileges and enter the following command:
```bash
wget -nd -m https://raw.githubusercontent.com/ava-labs/avalanche-monitoring/main/grafana/monitoring-installer.sh ;\
chmod 755 monitoring-installer.sh;
```
This will download the script and make it executable.
The script itself is run multiple times with different arguments, each installing a different tool or part of the environment. To make sure it downloaded and set up correctly, begin by running:
```bash
./monitoring-installer.sh --help
```
It should display:
```bash
Usage: ./monitoring-installer.sh [--1|--2|--3|--4|--5|--help]
Options:
--help Shows this message
--1 Step 1: Installs Prometheus
--2 Step 2: Installs Grafana
--3 Step 3: Installs node_exporter
--4 Step 4: Installs AvalancheGo Grafana dashboards
--5 Step 5: (Optional) Installs additional dashboards
Run without any options, script will download and install latest version of AvalancheGo dashboards.
```
Let's get to it.
## Step 1: Set up Prometheus [](#step-1-set-up-prometheus- "Direct link to heading")
Run the script to execute the first step:
```bash
./monitoring-installer.sh --1
```
It should produce output something like this:
```bash
AvalancheGo monitoring installer
--------------------------------
STEP 1: Installing Prometheus
Checking environment...
Found arm64 architecture...
Prometheus install archive found:
https://github.com/prometheus/prometheus/releases/download/v2.31.0/prometheus-2.31.0.linux-arm64.tar.gz
Attempting to download:
https://github.com/prometheus/prometheus/releases/download/v2.31.0/prometheus-2.31.0.linux-arm64.tar.gz
prometheus.tar.gz 100%[=========================================================================================>] 65.11M 123MB/s in 0.5s
2021-11-05 14:16:11 URL:https://github-releases.githubusercontent.com/6838921/a215b0e7-df1f-402b-9541-a3ec9d431f76?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20211105%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20211105T141610Z&X-Amz-Expires=300&X-Amz-Signature=72a8ae4c6b5cea962bb9cad242cb4478082594b484d6a519de58b8241b319d94&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=6838921&response-content-disposition=attachment%3B%20filename%3Dprometheus-2.31.0.linux-arm64.tar.gz&response-content-type=application%2Foctet-stream [68274531/68274531] -> "prometheus.tar.gz" [1]
...
```
You may be prompted to confirm additional package installs; do so if asked. The script run should end with instructions on how to check that Prometheus installed correctly. Let's do that; run:
```bash
sudo systemctl status prometheus
```
It should output something like:
```bash
● prometheus.service - Prometheus
Loaded: loaded (/etc/systemd/system/prometheus.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2021-11-12 11:38:32 UTC; 17min ago
Docs: https://prometheus.io/docs/introduction/overview/
Main PID: 548 (prometheus)
Tasks: 10 (limit: 9300)
Memory: 95.6M
CGroup: /system.slice/prometheus.service
└─548 /usr/local/bin/prometheus --config.file=/etc/prometheus/prometheus.yml --storage.tsdb.path=/var/lib/prometheus --web.console.templates=/etc/prometheus/con>
Nov 12 11:38:33 ip-172-31-36-200 prometheus[548]: ts=2021-11-12T11:38:33.644Z caller=head.go:590 level=info component=tsdb msg="WAL segment loaded" segment=81 maxSegment=84
Nov 12 11:38:33 ip-172-31-36-200 prometheus[548]: ts=2021-11-12T11:38:33.773Z caller=head.go:590 level=info component=tsdb msg="WAL segment loaded" segment=82 maxSegment=84
```
Note the `active (running)` status (press `q` to exit). You can also check the Prometheus web interface, available at `http://your-node-host-ip:9090/`.
You may need to do `sudo ufw allow 9090/tcp` if the firewall is on, and/or adjust the security settings to allow connections to port 9090 if the node is running on a cloud instance. For AWS, you can look it up [here](/docs/nodes/on-third-party-services/amazon-web-services#create-a-security-group). If on public internet, make sure to only allow your IP to connect!
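For example, a sketch of a tighter `ufw` rule that only admits your own address (using `203.0.113.10` as a placeholder) might look like:

```bash
# Placeholder IP: replace 203.0.113.10 with your workstation's public IP address
sudo ufw allow from 203.0.113.10 to any port 9090 proto tcp
```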
If everything is OK, let's move on.
## Step 2: Install Grafana [](#step-2-install-grafana- "Direct link to heading")
Run the script to execute the second step:
```bash
./monitoring-installer.sh --2
```
It should produce output something like this:
```bash
AvalancheGo monitoring installer
--------------------------------
STEP 2: Installing Grafana
OK
deb https://packages.grafana.com/oss/deb stable main
Hit:1 http://us-east-2.ec2.ports.ubuntu.com/ubuntu-ports focal InRelease
Get:2 http://us-east-2.ec2.ports.ubuntu.com/ubuntu-ports focal-updates InRelease [114 kB]
Get:3 http://us-east-2.ec2.ports.ubuntu.com/ubuntu-ports focal-backports InRelease [101 kB]
Hit:4 http://ppa.launchpad.net/longsleep/golang-backports/ubuntu focal InRelease
Get:5 http://ports.ubuntu.com/ubuntu-ports focal-security InRelease [114 kB]
Get:6 https://packages.grafana.com/oss/deb stable InRelease [12.1 kB]
...
```
To make sure it's running properly:
```bash
sudo systemctl status grafana-server
```
which should again show Grafana as `active`. Grafana should now be available at `http://your-node-host-ip:3000/` from your browser. Log in with username: admin, password: admin, and you will be prompted to set up a new, secure password. Do that.
You may need to do `sudo ufw allow 3000/tcp` if the firewall is on, and/or adjust the cloud instance settings to allow connections to port 3000. If on public internet, make sure to only allow your IP to connect!
Prometheus and Grafana are now installed, we're ready for the next step.
## Step 3: Set up `node_exporter` [](#step-3-set-up-node_exporter- "Direct link to heading")
In addition to metrics from AvalancheGo, let's set up monitoring of the machine itself, so we can check CPU, memory, network and disk usage and be aware of any anomalies. For that, we will use `node_exporter`, a Prometheus plugin.
Run the script to execute the third step:
```bash
./monitoring-installer.sh --3
```
The output should look something like this:
```bash
AvalancheGo monitoring installer
--------------------------------
STEP 3: Installing node_exporter
Checking environment...
Found arm64 architecture...
Downloading archive...
https://github.com/prometheus/node_exporter/releases/download/v1.2.2/node_exporter-1.2.2.linux-arm64.tar.gz
node_exporter.tar.gz 100%[=========================================================================================>] 7.91M --.-KB/s in 0.1s
2021-11-05 14:57:25 URL:https://github-releases.githubusercontent.com/9524057/6dc22304-a1f5-419b-b296-906f6dd168dc?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20211105%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20211105T145725Z&X-Amz-Expires=300&X-Amz-Signature=3890e09e58ea9d4180684d9286c9e791b96b0c411d8f8a494f77e99f260bdcbb&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=9524057&response-content-disposition=attachment%3B%20filename%3Dnode_exporter-1.2.2.linux-arm64.tar.gz&response-content-type=application%2Foctet-stream [8296266/8296266] -> "node_exporter.tar.gz" [1]
node_exporter-1.2.2.linux-arm64/LICENSE
```
Again, we check that the service is running correctly:
```bash
sudo systemctl status node_exporter
```
If the service is running, Prometheus, Grafana and `node_exporter` should all work together now. To check, in your browser visit Prometheus web interface on `http://your-node-host-ip:9090/targets`. You should see three targets enabled:
* Prometheus
* AvalancheGo
* `avalanchego-machine`
Make sure that all of them have `State` as `UP`.
If you run your AvalancheGo node with TLS enabled on your API port, you will need to manually edit the `/etc/prometheus/prometheus.yml` file and change the `avalanchego` job to look like this:
```yml
- job_name: "avalanchego"
metrics_path: "/ext/metrics"
scheme: "https"
tls_config:
insecure_skip_verify: true
static_configs:
- targets: ["localhost:9650"]
```
Mind the spacing (leading spaces too)! You will need admin privileges to do that (use `sudo`). Restart Prometheus service afterwards with `sudo systemctl restart prometheus`.
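Before restarting, you can optionally validate the edited file, assuming the `promtool` utility that ships with Prometheus is available on your machine:

```bash
# Validate the Prometheus configuration, then restart the service
promtool check config /etc/prometheus/prometheus.yml
sudo systemctl restart prometheus
```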
All that's left to do now is to provision the data source and install the actual dashboards that will show us the data.
## Step 4: Dashboards [](#step-4-dashboards- "Direct link to heading")
Run the script to install the dashboards:
```bash
./monitoring-installer.sh --4
```
It will produce output something like this:
```bash
AvalancheGo monitoring installer
--------------------------------
Downloading...
Last-modified header missing -- time-stamps turned off.
2021-11-05 14:57:47 URL:https://raw.githubusercontent.com/ava-labs/avalanche-monitoring/master/grafana/dashboards/c_chain.json [50282/50282] -> "c_chain.json" [1]
FINISHED --2021-11-05 14:57:47--
Total wall clock time: 0.2s
Downloaded: 1 files, 49K in 0s (132 MB/s)
Last-modified header missing -- time-stamps turned off.
...
```
This will download the latest versions of the dashboards from GitHub and provision Grafana to load them, as well as defining Prometheus as a data source. It may take up to 30 seconds for the dashboards to show up. In your browser, go to: `http://your-node-host-ip:3000/dashboards`. You should see 7 Avalanche dashboards:

Select 'Avalanche Main Dashboard' by clicking its title. It should load, and look similar to this:

Some graphs may take some time to populate fully, as they need a series of data points in order to render correctly.
You can bookmark the main dashboard as it shows the most important information about the node at a glance. Every dashboard has a link to all the others as the first row, so you can move between them easily.
## Step 5: Additional Dashboards (Optional)[](#step-5-additional-dashboards-optional "Direct link to heading")
Step 4 installs the basic set of dashboards that make sense to have on any node. Step 5 is for installing additional dashboards that may not be useful for every installation.
Currently, there is only one additional dashboard: Avalanche L1s. If your node is running any Avalanche L1s, you may want to add this as well. Do:
```bash
./monitoring-installer.sh --5
```
This will add the Avalanche L1s dashboard. It allows you to monitor operational data for any Avalanche L1 that is synced on the node. There is an Avalanche L1 switcher that allows you to switch between different Avalanche L1s. As there are many Avalanche L1s and not every node will have all of them, by default, it comes populated only with Spaces and WAGMI Avalanche L1s that exist on Fuji testnet:

To configure the dashboard and add any Avalanche L1s that your node is syncing, you will need to edit the dashboard. Select the `dashboard settings` icon (image of a cog) in the upper-right corner of the dashboard display, switch to the `Variables` section, and select the `subnet` variable. It should look something like this:

The variable format is:
```bash
Subnet name : [subnet ID]
```
and the separator between entries is a comma. Entries for Spaces and WAGMI look like:
```bash
Spaces (Fuji) : 2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt, WAGMI (Fuji) : 2AM3vsuLoJdGBGqX2ibE8RGEq4Lg7g4bot6BT1Z7B9dH5corUD
```
After editing the values, press `Update` and then click `Save dashboard` button and confirm. Press the back arrow in the upper left corner to return to the dashboard. New values should now be selectable from the dropdown and data for the selected Avalanche L1 will be shown in the panels.
## Updating[](#updating "Direct link to heading")
Available node metrics are updated constantly; new ones are added and obsolete ones removed, so it is good practice to update the dashboards from time to time, especially if you notice any missing data in panels. Updating the dashboards is easy: just run the script with no arguments, and it will refresh the dashboards with the latest available versions. Allow up to 30s for dashboards to update in Grafana.
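That is, from the directory where you originally downloaded the script:

```bash
./monitoring-installer.sh
```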
If you added the optional extra dashboards (step 5), they will be updated as well.
## Summary[](#summary "Direct link to heading")
Using the script to install node monitoring is easy, and it gives you insight into how your node is behaving and what's going on under the hood. Also, pretty graphs!
If you have feedback on this tutorial, problems with the script or following the steps, send us a message on [Discord](https://chat.avalabs.org/).
# Reduce Disk Usage
URL: /docs/nodes/maintain/reduce-disk-usage
Offline Pruning is ported from `go-ethereum` to reduce the amount of disk space taken up by the TrieDB (storage for the Merkle Forest).
Offline pruning creates a bloom filter and adds all trie nodes in the active state to the bloom filter to mark the data as protected. This ensures that any part of the active state will not be removed during offline pruning.
After generating the bloom filter, offline pruning iterates over the database and searches for trie nodes that are safe to be removed from disk.
A bloom filter is a probabilistic data structure that reports whether an item is definitely not in a set or possibly in a set. Therefore, for each key we iterate, we check if it is in the bloom filter. If the key is definitely not in the bloom filter, then it is not in the active state and we can safely delete it. If the key is possibly in the set, then we skip over it to ensure we do not delete any active state.
During iteration, the underlying database (LevelDB) writes deletion markers, causing a temporary increase in disk usage.
After iterating over the database and deleting any old trie nodes that it can, offline pruning then runs compaction to minimize the DB size after the potentially large number of delete operations.
## Finding the C-Chain Config File[](#finding-the-c-chain-config-file "Direct link to heading")
In order to enable offline pruning, you need to update the C-Chain config file to include the parameters `offline-pruning-enabled` and `offline-pruning-data-directory`.
The default location of the C-Chain config file is `~/.avalanchego/configs/chains/C/config.json`. **Please note that by default, this file does not exist. You would need to create it manually.** You can update the directory for chain configs by passing in the directory of your choice via the CLI argument: `chain-config-dir`. See [this](/docs/nodes/configure/configs-flags) for more info. For example, if you start your node with:
```bash
./build/avalanchego --chain-config-dir=/home/ubuntu/chain-configs
```
The chain config directory will be updated to `/home/ubuntu/chain-configs` and the corresponding C-Chain config file will be: `/home/ubuntu/chain-configs/C/config.json`.
## Running Offline Pruning[](#running-offline-pruning "Direct link to heading")
In order to enable offline pruning, update the C-Chain config file to include the following parameters:
```json
{
"offline-pruning-enabled": true,
"offline-pruning-data-directory": "/home/ubuntu/offline-pruning"
}
```
This will set `/home/ubuntu/offline-pruning` as the directory to be used by the offline pruner. Offline pruning will store the bloom filter in this location, so you must ensure that the path exists.
Now that the C-Chain config file has been updated, you can start your node with the command (no CLI arguments are necessary if using the default chain config directory):
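For example, if you built from source in the default location (see the upgrade section later in this document):

```bash
./build/avalanchego
```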
Once AvalancheGo starts the C-Chain, you can expect to see update logs from the offline pruner:
```bash
INFO [02-09|00:20:15.625] Iterating state snapshot accounts=297,231 slots=6,669,708 elapsed=16.001s eta=1m29.03s
INFO [02-09|00:20:23.626] Iterating state snapshot accounts=401,907 slots=10,698,094 elapsed=24.001s eta=1m32.522s
INFO [02-09|00:20:31.626] Iterating state snapshot accounts=606,544 slots=13,891,948 elapsed=32.002s eta=1m10.927s
INFO [02-09|00:20:39.626] Iterating state snapshot accounts=760,948 slots=18,025,523 elapsed=40.002s eta=1m2.603s
INFO [02-09|00:20:47.626] Iterating state snapshot accounts=886,583 slots=21,769,199 elapsed=48.002s eta=1m8.834s
INFO [02-09|00:20:55.626] Iterating state snapshot accounts=1,046,295 slots=26,120,100 elapsed=56.002s eta=57.401s
INFO [02-09|00:21:03.626] Iterating state snapshot accounts=1,229,257 slots=30,241,391 elapsed=1m4.002s eta=47.674s
INFO [02-09|00:21:11.626] Iterating state snapshot accounts=1,344,091 slots=34,128,835 elapsed=1m12.002s eta=45.185s
INFO [02-09|00:21:19.626] Iterating state snapshot accounts=1,538,009 slots=37,791,218 elapsed=1m20.002s eta=34.59s
INFO [02-09|00:21:27.627] Iterating state snapshot accounts=1,729,564 slots=41,694,303 elapsed=1m28.002s eta=25.006s
INFO [02-09|00:21:35.627] Iterating state snapshot accounts=1,847,617 slots=45,646,011 elapsed=1m36.003s eta=20.052s
INFO [02-09|00:21:43.627] Iterating state snapshot accounts=1,950,875 slots=48,832,722 elapsed=1m44.003s eta=9.299s
INFO [02-09|00:21:47.342] Iterated snapshot accounts=1,950,875 slots=49,667,870 elapsed=1m47.718s
INFO [02-09|00:21:47.351] Writing state bloom to disk name=/home/ubuntu/offline-pruning/statebloom.0xd6fca36db4b60b34330377040ef6566f6033ed8464731cbb06dc35c8401fa38e.bf.gz
INFO [02-09|00:23:04.421] State bloom filter committed name=/home/ubuntu/offline-pruning/statebloom.0xd6fca36db4b60b34330377040ef6566f6033ed8464731cbb06dc35c8401fa38e.bf.gz
```
The bloom filter should be populated and committed to disk after about 5 minutes. At this point, if the node shuts down, it will resume the offline pruning session when it restarts (note: this operation cannot be cancelled).
In order to ensure that users do not mistakenly leave offline pruning enabled for the long term (which could result in an hour of downtime on each restart), we have added a manual protection which requires that after an offline pruning session, the node must be started with offline pruning disabled at least once before it will start with offline pruning enabled again. Therefore, once the bloom filter has been committed to disk, you should update the C-Chain config file to include the following parameters:
```json
{
"offline-pruning-enabled": false,
"offline-pruning-data-directory": "/home/ubuntu/offline-pruning"
}
```
It is important to keep the same data directory in the config file, so that the node knows where to look for the bloom filter on a restart if offline pruning has not finished.
Now if your node restarts, it will be marked as having correctly disabled offline pruning after the run and be allowed to resume normal operation once offline pruning has finished running.
You will see progress logs throughout the offline pruning run which will indicate the session's progress:
```bash
INFO [02-09|00:31:51.920] Pruning state data nodes=40,116,759 size=10.08GiB elapsed=8m47.499s eta=12m50.961s
INFO [02-09|00:31:59.921] Pruning state data nodes=41,659,059 size=10.47GiB elapsed=8m55.499s eta=12m13.822s
INFO [02-09|00:32:07.921] Pruning state data nodes=41,687,047 size=10.48GiB elapsed=9m3.499s eta=12m23.915s
INFO [02-09|00:32:15.921] Pruning state data nodes=41,715,823 size=10.48GiB elapsed=9m11.499s eta=12m33.965s
INFO [02-09|00:32:23.921] Pruning state data nodes=41,744,167 size=10.49GiB elapsed=9m19.500s eta=12m44.004s
INFO [02-09|00:32:31.921] Pruning state data nodes=41,772,613 size=10.50GiB elapsed=9m27.500s eta=12m54.01s
INFO [02-09|00:32:39.921] Pruning state data nodes=41,801,267 size=10.50GiB elapsed=9m35.500s eta=13m3.992s
INFO [02-09|00:32:47.922] Pruning state data nodes=41,829,714 size=10.51GiB elapsed=9m43.500s eta=13m13.951s
INFO [02-09|00:32:55.922] Pruning state data nodes=41,858,400 size=10.52GiB elapsed=9m51.501s eta=13m23.885s
INFO [02-09|00:33:03.923] Pruning state data nodes=41,887,131 size=10.53GiB elapsed=9m59.501s eta=13m33.79s
INFO [02-09|00:33:11.923] Pruning state data nodes=41,915,583 size=10.53GiB elapsed=10m7.502s eta=13m43.678s
INFO [02-09|00:33:19.924] Pruning state data nodes=41,943,891 size=10.54GiB elapsed=10m15.502s eta=13m53.551s
INFO [02-09|00:33:27.924] Pruning state data nodes=41,972,281 size=10.55GiB elapsed=10m23.502s eta=14m3.389s
INFO [02-09|00:33:35.924] Pruning state data nodes=42,001,414 size=10.55GiB elapsed=10m31.503s eta=14m13.192s
INFO [02-09|00:33:43.925] Pruning state data nodes=42,029,987 size=10.56GiB elapsed=10m39.504s eta=14m22.976s
INFO [02-09|00:33:51.925] Pruning state data nodes=42,777,042 size=10.75GiB elapsed=10m47.504s eta=14m7.245s
INFO [02-09|00:34:00.950] Pruning state data nodes=42,865,413 size=10.77GiB elapsed=10m56.529s eta=14m15.927s
INFO [02-09|00:34:08.956] Pruning state data nodes=42,918,719 size=10.79GiB elapsed=11m4.534s eta=14m24.453s
INFO [02-09|00:34:22.816] Pruning state data nodes=42,952,925 size=10.79GiB elapsed=11m18.394s eta=14m41.243s
INFO [02-09|00:34:30.818] Pruning state data nodes=42,998,715 size=10.81GiB elapsed=11m26.397s eta=14m49.961s
INFO [02-09|00:34:38.828] Pruning state data nodes=43,046,476 size=10.82GiB elapsed=11m34.407s eta=14m58.572s
INFO [02-09|00:34:46.893] Pruning state data nodes=43,107,656 size=10.83GiB elapsed=11m42.472s eta=15m6.729s
INFO [02-09|00:34:55.038] Pruning state data nodes=43,168,834 size=10.85GiB elapsed=11m50.616s eta=15m14.934s
INFO [02-09|00:35:03.039] Pruning state data nodes=43,446,900 size=10.92GiB elapsed=11m58.618s eta=15m14.705s
```
When the node completes, it will emit the following log and resume normal operation:
```bash
INFO [02-09|00:42:16.009] Pruning state data nodes=93,649,812 size=23.53GiB elapsed=19m11.588s eta=1m2.658s
INFO [02-09|00:42:24.009] Pruning state data nodes=95,045,956 size=23.89GiB elapsed=19m19.588s eta=45.149s
INFO [02-09|00:42:32.009] Pruning state data nodes=96,429,410 size=24.23GiB elapsed=19m27.588s eta=28.041s
INFO [02-09|00:42:40.009] Pruning state data nodes=97,811,804 size=24.58GiB elapsed=19m35.588s eta=11.204s
INFO [02-09|00:42:45.359] Pruned state data nodes=98,744,430 size=24.82GiB elapsed=19m40.938s
INFO [02-09|00:42:45.360] Compacting database range=0x00-0x10 elapsed="2.157µs"
INFO [02-09|00:43:12.311] Compacting database range=0x10-0x20 elapsed=26.951s
INFO [02-09|00:43:38.763] Compacting database range=0x20-0x30 elapsed=53.402s
INFO [02-09|00:44:04.847] Compacting database range=0x30-0x40 elapsed=1m19.486s
INFO [02-09|00:44:31.194] Compacting database range=0x40-0x50 elapsed=1m45.834s
INFO [02-09|00:45:31.580] Compacting database range=0x50-0x60 elapsed=2m46.220s
INFO [02-09|00:45:58.465] Compacting database range=0x60-0x70 elapsed=3m13.104s
INFO [02-09|00:51:17.593] Compacting database range=0x70-0x80 elapsed=8m32.233s
INFO [02-09|00:56:19.679] Compacting database range=0x80-0x90 elapsed=13m34.319s
INFO [02-09|00:56:46.011] Compacting database range=0x90-0xa0 elapsed=14m0.651s
INFO [02-09|00:57:12.370] Compacting database range=0xa0-0xb0 elapsed=14m27.010s
INFO [02-09|00:57:38.600] Compacting database range=0xb0-0xc0 elapsed=14m53.239s
INFO [02-09|00:58:06.311] Compacting database range=0xc0-0xd0 elapsed=15m20.951s
INFO [02-09|00:58:35.484] Compacting database range=0xd0-0xe0 elapsed=15m50.123s
INFO [02-09|00:59:05.449] Compacting database range=0xe0-0xf0 elapsed=16m20.089s
INFO [02-09|00:59:34.365] Compacting database range=0xf0- elapsed=16m49.005s
INFO [02-09|00:59:34.367] Database compaction finished elapsed=16m49.006s
INFO [02-09|00:59:34.367] State pruning successful pruned=24.82GiB elapsed=39m34.749s
INFO [02-09|00:59:34.367] Completed offline pruning. Re-initializing blockchain.
INFO [02-09|00:59:34.387] Loaded most recent local header number=10,671,401 hash=b52d0a..7bd166 age=40m29s
INFO [02-09|00:59:34.387] Loaded most recent local full block number=10,671,401 hash=b52d0a..7bd166 age=40m29s
INFO [02-09|00:59:34.387] Initializing snapshots async=true
DEBUG[02-09|00:59:34.390] Reinjecting stale transactions count=0
INFO [02-09|00:59:34.395] Transaction pool price threshold updated price=470,000,000,000
INFO [02-09|00:59:34.396] Transaction pool price threshold updated price=225,000,000,000
INFO [02-09|00:59:34.396] Transaction pool price threshold updated price=0
INFO [02-09|00:59:34.396] lastAccepted = 0xb52d0a1302e4055b487c3a0243106b5e13a915c6e178da9f8491cebf017bd166
INFO [02-09|00:59:34] snow/engine/snowman/transitive.go#67: initializing consensus engine
INFO [02-09|00:59:34] snow/engine/snowman/bootstrap/bootstrapper.go#220: Starting bootstrap...
INFO [02-09|00:59:34] chains/manager.go#246: creating chain:
ID: 2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM
VMID:jvYyfQTxGMJLuGWa55kdP2p2zSUYsQ5Raupu4TW34ZAUBAbtq
INFO [02-09|00:59:34.425] Enabled APIs: eth, eth-filter, net, web3, internal-eth, internal-blockchain, internal-transaction, avax
DEBUG[02-09|00:59:34.425] Allowed origin(s) for WS RPC interface [*]
INFO [02-09|00:59:34] api/server/server.go#203: adding route /ext/bc/2q9e4r6Mu3U68nU1fYjgbR6JvwrRx36CohpAX5UQxse55x1Q5/avax
INFO [02-09|00:59:34] api/server/server.go#203: adding route /ext/bc/2q9e4r6Mu3U68nU1fYjgbR6JvwrRx36CohpAX5UQxse55x1Q5/rpc
INFO [02-09|00:59:34] api/server/server.go#203: adding route /ext/bc/2q9e4r6Mu3U68nU1fYjgbR6JvwrRx36CohpAX5UQxse55x1Q5/ws
INFO [02-09|00:59:34] vms/avm/vm.go#437: Fee payments are using Asset with Alias: AVAX, AssetID: FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z
INFO [02-09|00:59:34] vms/avm/vm.go#229: address transaction indexing is disabled
INFO [02-09|00:59:34] snow/engine/avalanche/transitive.go#71: initializing consensus engine
INFO [02-09|00:59:34] snow/engine/avalanche/bootstrap/bootstrapper.go#258: Starting bootstrap...
INFO [02-09|00:59:34] api/server/server.go#203: adding route /ext/bc/2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM
INFO [02-09|00:59:34] snow/engine/snowman/bootstrap/bootstrapper.go#445: waiting for the remaining chains in this subnet to finish syncing
INFO [02-09|00:59:34] api/server/server.go#203: adding route /ext/bc/2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM/wallet
INFO [02-09|00:59:34] api/server/server.go#203: adding route /ext/bc/2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM/events
INFO [02-09|00:59:34] snow/engine/common/bootstrapper.go#235: Bootstrapping started syncing with 1 vertices in the accepted frontier
INFO [02-09|00:59:46] snow/engine/common/bootstrapper.go#235: Bootstrapping started syncing with 2 vertices in the accepted frontier
INFO [02-09|00:59:49] snow/engine/common/bootstrapper.go#235: Bootstrapping started syncing with 1 vertices in the accepted frontier
INFO [02-09|00:59:49] snow/engine/avalanche/bootstrap/bootstrapper.go#473: bootstrapping fetched 55 vertices. Executing transaction state transitions...
INFO [02-09|00:59:49] snow/engine/common/queue/jobs.go#171: executed 55 operations
INFO [02-09|00:59:49] snow/engine/avalanche/bootstrap/bootstrapper.go#484: executing vertex state transitions...
INFO [02-09|00:59:49] snow/engine/common/queue/jobs.go#171: executed 55 operations
INFO [02-09|01:00:07] snow/engine/snowman/bootstrap/bootstrapper.go#406: bootstrapping fetched 1241 blocks. Executing state transitions...
```
At this point, the node will go into bootstrapping and (once bootstrapping completes) resume consensus and operate as normal.
## Disk Space Considerations[](#disk-space-considerations "Direct link to heading")
To ensure the node does not enter an inconsistent state, the bloom filter used for pruning is persisted to `offline-pruning-data-directory` for the duration of the operation. This directory should have `offline-pruning-bloom-filter-size` available in disk space (default 512 MB).
The underlying database (LevelDB) uses deletion markers (tombstones) to identify newly deleted keys. These markers are temporarily persisted to disk until they are removed during a process known as compaction. This will lead to an increase in disk usage during pruning. If your node runs out of disk space during pruning, you may safely restart the pruning operation. This may succeed as restarting the node triggers compaction.
If restarting the pruning operation does not succeed, additional disk space should be provisioned.
# Run Avalanche Node in Background
URL: /docs/nodes/maintain/run-as-background-service
This page demonstrates how to set up an `avalanchego.service` file to enable a manually deployed validator node to run in the background of a server instead of directly in the terminal.
Make sure that AvalancheGo is already installed on your machine.
## Steps[](#steps "Direct link to heading")
### Fuji Testnet Config[](#fuji-testnet-config "Direct link to heading")
Run this command in your terminal to create the `avalanchego.service` file:
```bash
sudo nano /etc/systemd/system/avalanchego.service
```
Paste the following configuration into the `avalanchego.service` file.
Remember to modify the values of the following fields to match what you have configured on your server:
* ***User=***
* ***Group=***
* ***WorkingDirectory=***
* ***ExecStart=***
```toml
[Unit]
Description=Avalanche Node service
After=network.target
[Service]
User='YourUserHere'
Group='YourUserHere'
Restart=always
PrivateTmp=true
TimeoutStopSec=60s
TimeoutStartSec=10s
StartLimitInterval=120s
StartLimitBurst=5
WorkingDirectory=/Your/Path/To/avalanchego
ExecStart=/Your/Path/To/avalanchego/./avalanchego \
--network-id=fuji \
--api-metrics-enabled=true
[Install]
WantedBy=multi-user.target
```
Press **Ctrl + X** then **Y** then **Enter** to save and exit.
Now, run:
```bash
sudo systemctl daemon-reload
```
### Mainnet Config[](#mainnet-config "Direct link to heading")
Run this command in your terminal to create the `avalanchego.service` file:
```bash
sudo nano /etc/systemd/system/avalanchego.service
```
Paste the following configuration into the `avalanchego.service` file:
```toml
[Unit]
Description=Avalanche Node service
After=network.target
[Service]
User='YourUserHere'
Group='YourUserHere'
Restart=always
PrivateTmp=true
TimeoutStopSec=60s
TimeoutStartSec=10s
StartLimitInterval=120s
StartLimitBurst=5
WorkingDirectory=/Your/Path/To/avalanchego
ExecStart=/Your/Path/To/avalanchego/./avalanchego \
--api-metrics-enabled=true
[Install]
WantedBy=multi-user.target
```
Press **Ctrl + X** then **Y** then **Enter** to save and exit.
Now, run:
```bash
sudo systemctl daemon-reload
```
## Start the Node[](#start-the-node "Direct link to heading")
Run this command to make your node start automatically after a reboot:
```bash
sudo systemctl enable avalanchego
```
To start the node, run:
```bash
sudo systemctl start avalanchego
sudo systemctl status avalanchego
```
Output:
```bash
socopower@avalanche-node-01:~$ sudo systemctl status avalanchego
● avalanchego.service - Avalanche Node service
Loaded: loaded (/etc/systemd/system/avalanchego.service; enabled; vendor p>
Active: active (running) since Tue 2023-08-29 23:14:45 UTC; 5h 46min ago
Main PID: 2226 (avalanchego)
Tasks: 27 (limit: 38489)
Memory: 8.7G
CPU: 5h 50min 31.165s
CGroup: /system.slice/avalanchego.service
└─2226 /usr/local/bin/avalanchego/./avalanchego --network-id=fuji
Aug 30 03:02:50 avalanche-node-01 avalanchego[2226]: INFO [08-30|03:02:50.685] >
Aug 30 03:02:51 avalanche-node-01 avalanchego[2226]: INFO [08-30|03:02:51.185] >
Aug 30 03:03:09 avalanche-node-01 avalanchego[2226]: [08-30|03:03:09.380] INFO >
Aug 30 03:03:23 avalanche-node-01 avalanchego[2226]: [08-30|03:03:23.983] INFO >
Aug 30 03:05:15 avalanche-node-01 avalanchego[2226]: [08-30|03:05:15.192] INFO >
Aug 30 03:05:15 avalanche-node-01 avalanchego[2226]: [08-30|03:05:15.237] INFO >
Aug 30 03:05:15 avalanche-node-01 avalanchego[2226]: [08-30|03:05:15.238] INFO >
Aug 30 03:05:19 avalanche-node-01 avalanchego[2226]: [08-30|03:05:19.809] INFO >
Aug 30 03:05:19 avalanche-node-01 avalanchego[2226]: [08-30|03:05:19.809] INFO >
Aug 30 05:00:47 avalanche-node-01 avalanchego[2226]: [08-30|05:00:47.001] INFO
```
To see the synchronization process, you can run the following command:
```bash
sudo journalctl -fu avalanchego
```
# Upgrade Your AvalancheGo Node
URL: /docs/nodes/maintain/upgrade
## Backup Your Node[](#backup-your-node "Direct link to heading")
Before upgrading your node, it is recommended that you back up the staker files used to identify your node on the network. In the default installation, you can copy them by running the following commands:
```bash
cd
cp ~/.avalanchego/staking/staker.crt .
cp ~/.avalanchego/staking/staker.key .
```
Then download the `staker.crt` and `staker.key` files and keep them somewhere safe and private. If anything happens to your node or the machine the node runs on, these files can be used to fully recreate your node.
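For example, if you reach the node over SSH as the `ubuntu` user (a hypothetical setup; adjust the user and host to your own), you could copy the whole staking directory to your local machine with `scp`:

```bash
# Hypothetical host: replace ubuntu@PUBLICIP with your own SSH user and node address
scp -r ubuntu@PUBLICIP:~/.avalanchego/staking ~/avalanche_backup
```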
If you use your node for development purposes and have keystore users on your node, you should back up those too.
## Node Installed Using the Installer Script[](#node-installed-using-the-installer-script "Direct link to heading")
If you installed your node using the [installer script](/docs/nodes/using-install-script/installing-avalanche-go), to upgrade your node, just run the installer script again.
```bash
./avalanchego-installer.sh
```
It will detect that you already have AvalancheGo installed:
```bash
AvalancheGo installer
---------------------
Preparing environment...
Found 64bit Intel/AMD architecture...
Found AvalancheGo systemd service already installed, switching to upgrade mode.
Stopping service...
```
It will then upgrade your node to the latest version, and after it's done, start the node back up, and print out the information about the latest version:
```bash
Node upgraded, starting service...
New node version:
avalanche/1.1.1 [network=mainnet, database=v1.0.0, commit=f76f1fd5f99736cf468413bbac158d6626f712d2]
Done!
```
And that is it, your node is upgraded to the latest version.
If you installed your node manually, proceed with the rest of the tutorial.
## Stop the Old Node Version[](#stop-the-old-node-version "Direct link to heading")
After the backup is secured, you may start upgrading your node. Begin by stopping the currently running version.
### Node Running from Terminal[](#node-running-from-terminal "Direct link to heading")
If your node is running in a terminal stop it by pressing `ctrl+c`.
### Node Running as a Service[](#node-running-as-a-service "Direct link to heading")
If your node is running as a service, stop it by entering: `sudo systemctl stop avalanchego.service`
(your service may be named differently, `avalanche.service`, or similar)
### Node Running in Background[](#node-running-in-background "Direct link to heading")
If your node is running in the background (by running with `nohup`, for example) then find the process running the node by running `ps aux | grep avalanche`. This will produce output like:
```bash
ubuntu 6834 0.0 0.0 2828 676 pts/1 S+ 19:54 0:00 grep avalanche
ubuntu 2630 26.1 9.4 2459236 753316 ? Sl Dec02 1220:52 /home/ubuntu/build/avalanchego
```
In this example, the second line shows information about your node. Note the process ID, in this case `2630`. Stop the node by running `kill -2 2630`.
Now we are ready to download the new version of the node. You can either download the source code and then build the binary program, or you can download the pre-built binary. You don't need to do both.
Downloading the pre-built binary is easier and recommended if you're just looking to run your own node and stake on it.
Building the node [from source](/docs/nodes/maintain/upgrade#build-from-source) is recommended if you're a developer looking to experiment and build on Avalanche.
## Download Pre-Built Binary[](#download-pre-built-binary "Direct link to heading")
If you want to download a pre-built binary instead of building it yourself, go to our [releases page](https://github.com/ava-labs/avalanchego/releases), and select the release you want (probably the latest one).
If you have a node, you can subscribe to the [avalanche notify service](/docs/nodes/maintain/enroll-in-avalanche-notify) with your node ID to be notified about new releases.
In addition, or if you don't have a node ID, you can get release notifications from GitHub. To do so, go to our [repository](https://github.com/ava-labs/avalanchego) and look in the top-right corner for the **Watch** option. After you click on it, select **Custom**, then **Releases**. Press **Apply** and it is done.
Under `Assets`, select the appropriate file.
For MacOS:\
Download: `avalanchego-macos-[version].zip`\
Unzip: `unzip avalanchego-macos-[version].zip`\
The resulting folder, `avalanchego-[version]`, contains the binaries.
For Linux on PCs or cloud providers:\
Download: `avalanchego-linux-amd64-[version].tar.gz`\
Unzip: `tar -xvf avalanchego-linux-amd64-[version].tar.gz`\
The resulting folder, `avalanchego-[version]-linux`, contains the binaries.
For Linux on Arm64-based computers:\
Download: `avalanchego-linux-arm64-[version].tar.gz`\
Unzip: `tar -xvf avalanchego-linux-arm64-[version].tar.gz`\
The resulting folder, `avalanchego-[version]-linux`, contains the binaries.
You are now ready to run the new version of the node.
### Running the Node from Terminal[](#running-the-node-from-terminal "Direct link to heading")
If you are using the pre-built binaries on MacOS:
```bash
./avalanchego-[version]/build/avalanchego
```
If you are using the pre-built binaries on Linux:
```bash
./avalanchego-[version]-linux/avalanchego
```
Add `nohup` at the start of the command if you want to run the node in the background.
### Running the Node as a Service[](#running-the-node-as-a-service "Direct link to heading")
If you're running the node as a service, you need to replace the old binaries with the new ones.
```bash
cp -r avalanchego-[version]-linux/* [path to your existing avalanchego binaries]
```
and then restart the service with: `sudo systemctl start avalanchego.service`.
## Build from Source[](#build-from-source "Direct link to heading")
First clone our GitHub repo (you can skip this step if you've done this before):
```bash
git clone https://github.com/ava-labs/avalanchego.git
```
The repository cloning method used is HTTPS, but SSH can be used too:
`git clone git@github.com:ava-labs/avalanchego.git`
You can find more about SSH and how to use it [here](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/about-ssh).
Then move to the AvalancheGo directory:
```bash
cd avalanchego
```
Pull the latest code:
```bash
git pull
```
If the master branch has not been updated with the latest release tag, you can get to it directly by first running `git fetch --all --tags` and then `git checkout --force tags/[tag]` (where `[tag]` is the latest release tag; for example `v1.3.2`) instead of `git pull`.
Note that your local copy will be in a 'detached HEAD' state, which is not an issue if you do not make changes to the source that you want to push back to the repository (in which case you should check out a branch and do ordinary merges).
Note also that the `--force` flag will disregard any local changes you might have.
Check that your local code is up to date. Do:
```bash
git rev-parse HEAD
```
and check that the first 7 characters printed match the Latest commit field on our [GitHub](https://github.com/ava-labs/avalanchego).
If you used `git checkout --force tags/[tag]`, then these first 7 characters should match the commit hash of that tag.
Now build the binary:
```bash
./scripts/build.sh
```
This should print: `Build Successful`
You can check what version you're running by doing:
```bash
./build/avalanchego --version
```
You can run your node with:
```bash
./build/avalanchego
```
# Amazon Web Services
URL: /docs/nodes/on-third-party-services/amazon-web-services
Learn how to run a node on Amazon Web Services.
## Introduction[](#introduction "Direct link to heading")
This tutorial will guide you through setting up an Avalanche node on [Amazon Web Services (AWS)](https://aws.amazon.com/). Cloud services like AWS are a good way to ensure that your node is highly secure, available, and accessible.
To get started, you'll need:
* An AWS account
* A terminal with which to SSH into your AWS machine
* A place to securely store and back up files
This tutorial assumes your local machine has a Unix style terminal. If you're on Windows, you'll have to adapt some of the commands used here.
## Log Into AWS[](#log-into-aws "Direct link to heading")
Signing up for AWS is outside the scope of this article, but Amazon has instructions [here](https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account).
It is *highly* recommended that you set up Multi-Factor Authentication on your AWS root user account to protect it. Amazon has documentation for this [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_virtual.html#enable-virt-mfa-for-root).
Once your account is set up, you should create a new EC2 instance. An EC2 is a virtual machine instance in AWS's cloud. Go to the [AWS Management Console](https://console.aws.amazon.com/) and enter the EC2 dashboard.

To log into the EC2 instance, you will need a key on your local machine that grants access to the instance. First, create that key so that it can be assigned to the EC2 instance later on. On the bar on the left side, under **Network & Security**, select **Key Pairs.**

Select **Create key pair** to launch the key pair creation wizard.

Name your key `avalanche`. If your local machine has MacOS or Linux, select the `pem` file format. If it's Windows, use the `ppk` file format. Optionally, you can add tags for the key pair to assist with tracking.

Click `Create key pair`. You should see a success message, and the key file should be downloaded to your local machine. Without this file, you will not be able to access your EC2 instance. **Make a copy of this file and put it on a separate storage medium such as an external hard drive. Keep this file secret; do not share it with others.**

## Create a Security Group[](#create-a-security-group "Direct link to heading")
An AWS Security Group defines what internet traffic can enter and leave your EC2 instance. Think of it like a firewall. Create a new Security Group by selecting **Security Groups** under the **Network & Security** drop-down.

This opens the Security Groups panel. Click **Create security group** in the top right of the Security Groups panel.

You'll need to specify what inbound traffic is allowed. Allow SSH traffic from your IP address so that you can log into your EC2 instance (each time your ISP changes your IP address, you will need to modify this rule). Allow TCP traffic on port 9651 so your node can communicate with other nodes on the network. Allow TCP traffic on port 9650 from your IP so you can make API calls to your node. **It's important that you only allow traffic on the SSH and API port from your IP.** If you allow incoming traffic from anywhere, this could be used to brute force entry to your node (SSH port) or used as a denial of service attack vector (API port). Finally, allow all outbound traffic.

Add a tag to the new security group with key `Name` and value `Avalanche Security Group`. This will enable us to know what this security group is when we see it in the list of security groups.

Click `Create security group`. You should see the new security group in the list of security groups.
## Launch an EC2 Instance[](#launch-an-ec2-instance "Direct link to heading")
Now you're ready to launch an EC2 instance. Go to the EC2 Dashboard and select **Launch instance**.

Select **Ubuntu 20.04 LTS (HVM), SSD Volume Type** for the operating system.

Next, choose your instance type. This defines the hardware specifications of the cloud instance. In this tutorial we set up a **c5.2xlarge**. This should be more than powerful enough since Avalanche is a lightweight consensus protocol. To create a c5.2xlarge instance, select the **Compute-optimized** option from the filter drop-down menu.

Select the checkbox next to the c5.2xlarge instance in the table.

Click the **Next: Configure Instance Details** button in the bottom right-hand corner.

The instance details can stay as their defaults.
When setting up a node as a validator, it is crucial to select the appropriate AWS instance type to ensure the node can efficiently process transactions and manage the network load. The recommended instance types are as follows:
* For a minimal stake, start with a compute-optimized instance such as c6, c6i, c6a, c7 and similar.
* Use a 2xlarge instance size for the minimal stake configuration.
* As the staked amount increases, choose larger instance sizes to accommodate the additional workload. For every order of magnitude increase in stake, move up one instance size. For example, for a 20k AVAX stake, a 4xlarge instance is suitable.
### Optional: Using Reserved Instances[](#optional-using-reserved-instances "Direct link to heading")
By default, you will be charged hourly for running your EC2 instance. For long-term usage, that is not optimal.
You could save money by using a **Reserved Instance**. With a reserved instance, you pay upfront for an entire year of EC2 usage, and receive a lower per-hour rate in exchange for locking in. If you intend to run a node for a long time and don't want to risk service interruptions, this is a good option to save money. Again, do your own research before selecting this option.
### Add Storage, Tags, Security Group[](#add-storage-tags-security-group "Direct link to heading")
Click the **Next: Add Storage** button in the bottom right corner of the screen.
You need to add space to your instance's disk. You should start with at least 700GB of disk space. Although upgrades to reduce disk usage are always in development, on average the database will continually grow, so you need to constantly monitor disk usage on the node and increase disk space if needed.
Note that the image below shows 100GB as disk size, which was appropriate at the time the screenshot was taken. You should check the current [recommended disk space size](https://github.com/ava-labs/avalanchego#installation) before entering the actual value here.

Click **Next: Add Tags** in the bottom right corner of the screen to add tags to the instance. Tags enable us to associate metadata with our instance. Add a tag with key `Name` and value `My Avalanche Node`. This will make it clear what this instance is on your list of EC2 instances.

Now assign the security group created earlier to the instance. Choose **Select an existing security group** and choose the security group created earlier.

Finally, click **Review and Launch** in the bottom right. A review page will show the details of the instance you're about to launch. Review those, and if all looks good, click the blue **Launch** button in the bottom right corner of the screen.
You'll be asked to select a key pair for this instance. Select **Choose an existing key pair** and then select the `avalanche` key pair you made earlier in the tutorial. Check the box acknowledging that you have access to the `.pem` or `.ppk` file created earlier (make sure you've backed it up!) and then click **Launch Instances**.

You should see a new pop up that confirms the instance is launching!

### Assign an Elastic IP[](#assign-an-elastic-ip "Direct link to heading")
By default, your instance will not have a fixed IP. Let's give it a fixed IP through AWS's Elastic IP service. Go back to the EC2 dashboard. Under **Network & Security,** select **Elastic IPs**.

Select **Allocate Elastic IP address**.

Select the region your instance is running in, and choose to use Amazon's pool of IPv4 addresses. Click **Allocate**.

Select the Elastic IP you just created from the Elastic IP manager. From the **Actions** drop-down, choose **Associate Elastic IP address**.

Select the instance you just created. This will associate the new Elastic IP with the instance and give it a public IP address that won't change.

## Set Up AvalancheGo[](#set-up-avalanchego "Direct link to heading")
Go back to the EC2 Dashboard and select `Running Instances`.

Select the newly created EC2 instance. This opens a details panel with information about the instance.

Copy the `IPv4 Public IP` field to use later. From now on we call this value `PUBLICIP`.
**Remember: the terminal commands below assume you're running Linux. Commands may differ for MacOS or other operating systems. When copy-pasting a command from a code block, copy and paste the entirety of the text in the block.**
Log into the AWS instance from your local machine. Open a terminal (try shortcut `CTRL + ALT + T`) and navigate to the directory containing the `.pem` file you downloaded earlier.
Move the `.pem` file to `$HOME/.ssh` (where `.pem` files generally live) with:
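```bash
# Assuming the key file you downloaded earlier is named avalanche.pem
mv avalanche.pem ~/.ssh
```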
Add it to the SSH agent so that we can use it to SSH into your EC2 instance, and mark it as read-only.
```bash
ssh-add ~/.ssh/avalanche.pem; chmod 400 ~/.ssh/avalanche.pem
```
SSH into the instance. (Remember to replace `PUBLICIP` with the public IP field from earlier.)
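A minimal example, assuming the default `ubuntu` user on the Ubuntu instance:

```bash
ssh ubuntu@PUBLICIP
```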
If the permissions are **not** set correctly, you will see the following error.

You are now logged into the EC2 instance.

If you have not already done so, update the instance to make sure it has the latest operating system and security updates:
```bash
sudo apt update; sudo apt upgrade -y; sudo reboot
```
This also reboots the instance. Wait 5 minutes, then log in again by running this command on your local machine:
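```bash
# Again assuming the default ubuntu user; replace PUBLICIP as before
ssh ubuntu@PUBLICIP
```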
You're logged into the EC2 instance again. Now we'll need to set up our Avalanche node. To do this, follow the [Set Up Avalanche Node With Installer](/docs/nodes/using-install-script/installing-avalanche-go) tutorial which automates the installation process. You will need the `PUBLICIP` we set up earlier.
Your AvalancheGo node should now be running and in the process of bootstrapping, which can take a few hours. To check if it's done, you can issue an API call using `curl`. If you're making the request from the EC2 instance, the request is:
```bash
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"info.isBootstrapped",
"params": {
"chain":"X"
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```
Once the node is finished bootstrapping, the response will be:
```json
{
"jsonrpc": "2.0",
"result": {
"isBootstrapped": true
},
"id": 1
}
```
You can continue on, even if AvalancheGo isn't done bootstrapping.
In order to make your node a validator, you'll need its node ID. To get it, run:
```bash
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"info.getNodeID"
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```
The response contains the node ID.
```json
{"jsonrpc":"2.0","result":{"nodeID":"NodeID-DznHmm3o7RkmpLkWMn9NqafH66mqunXbM"},"id":1}
```
In the above example the node ID is `NodeID-DznHmm3o7RkmpLkWMn9NqafH66mqunXbM`. Copy your node ID for later. Your node ID is not a secret, so you can just paste it into a text editor.
AvalancheGo has other APIs, such as the [Health API](/docs/api-reference/health-api), that may be used to interact with the node. Some APIs are disabled by default. To enable such APIs, modify the ExecStart section of `/etc/systemd/system/avalanchego.service` (created during the installation process) to include flags that enable these endpoints. Don't manually enable any APIs unless you have a reason to.
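For illustration only, an `ExecStart` line with the Admin API enabled might look like the sketch below; the binary path and any flags already present depend on your installation, so only append the flag you need and keep everything else intact.
```bash
# Hypothetical excerpt of /etc/systemd/system/avalanchego.service (paths and existing flags vary by install)
ExecStart=/home/ubuntu/avalanche-node/avalanchego --api-admin-enabled=true

# After editing the unit file, reload systemd and restart the service
sudo systemctl daemon-reload
sudo systemctl restart avalanchego
```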

Back up the node's staking key and certificate in case the EC2 instance is corrupted or otherwise unavailable. The node's ID is derived from its staking key and certificate. If you lose your staking key or certificate then your node will get a new node ID, which could cause you to become ineligible for a staking reward if your node is a validator. **It is very strongly advised that you copy your node's staking key and certificate**. The first time you run a node, it will generate a new staking key/certificate pair and store them in directory `/home/ubuntu/.avalanchego/staking`.
Exit out of the SSH instance by running:
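```bash
exit
```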
Now you're no longer connected to the EC2 instance; you're back on your local machine.
To copy the staking key and certificate to your machine, run the following command. As always, replace `PUBLICIP`.
```bash
scp -r ubuntu@PUBLICIP:/home/ubuntu/.avalanchego/staking ~/aws_avalanche_backup
```
Now your staking key and certificate are in directory `~/aws_avalanche_backup`. **The contents of this directory are secret.** You should hold this directory on storage not connected to the internet (like an external hard drive).
### Upgrading Your Node[](#upgrading-your-node "Direct link to heading")
AvalancheGo is an ongoing project and there are regular version upgrades. Most upgrades are recommended but not required. Advance notice will be given for upgrades that are not backwards compatible. To update your node to the latest version, SSH into your AWS instance as before and run the installer script again.
```bash
./avalanchego-installer.sh
```
Your machine is now running the newest AvalancheGo version. To see the status of the AvalancheGo service, run `sudo systemctl status avalanchego`.
## Increase Volume Size[](#increase-volume-size "Direct link to heading")
If you need to increase the volume size, follow these instructions from AWS:
* [Request modifications to your EBS volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/requesting-ebs-volume-modifications.html)
* [Extend a Linux file system after resizing a volume](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html)
## Wrap Up[](#wrap-up "Direct link to heading")
That's it! You now have an AvalancheGo node running on an AWS EC2 instance. We recommend setting up [node monitoring](/docs/nodes/maintain/monitoring) for your AvalancheGo node. We also recommend setting up AWS billing alerts so you're not surprised when the bill arrives. If you have feedback on this tutorial, or anything else, send us a message on [Discord](https://chat.avalabs.org/).
# AWS Marketplace
URL: /docs/nodes/on-third-party-services/aws-marketplace
Learn how to run a node on AWS Marketplace.
## How to Launch an Avalanche Validator using AWS
With the intention of enabling developers and entrepreneurs to on-ramp into the Avalanche ecosystem with as little friction as possible, Ava Labs recently launched an offering to deploy an Avalanche Validator node via the AWS Marketplace. This tutorial will show the main steps required to get this node running and validating on the Avalanche Fuji testnet.
## Product Overview[](#product-overview "Direct link to heading")
The Avalanche Validator node is available via [the AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-nd6wgi2bhhslg). There you'll find a high level product overview. This includes a product description, pricing information, usage instructions, support information and customer reviews. After reviewing this information you want to click the "Continue to Subscribe" button.
## Subscribe to This Software[](#subscribe-to-this-software "Direct link to heading")
Once on the "Subscribe to this Software" page you will see a button which enables you to subscribe to this AWS Marketplace offering. In addition you'll see Terms of service including the seller's End User License Agreement and the [AWS Privacy Notice](https://aws.amazon.com/privacy/). After reviewing these you want to click on the "Continue to Configuration" button.
## Configure This Software[](#configure-this-software "Direct link to heading")
This page lets you choose a fulfillment option and software version to launch this software. No changes are needed, as the default settings are sufficient. Leave the `Fulfillment Option` as `64-bit (x86) Amazon Machine Image (AMI)`. The software version is the latest build of [the AvalancheGo full node](https://github.com/ava-labs/avalanchego/releases); at the time of writing this was `v1.9.5 (Dec 22, 2022)`, AKA `Banff.5`. The Region to deploy in can be left as `US East (N. Virginia)`. On the right you'll see the software and infrastructure pricing. Lastly, click the "Continue to Launch" button.
## Launch This Software[](#launch-this-software "Direct link to heading")
Here you can review the launch configuration details and follow the instructions to launch the Avalanche Validator Node. The changes are very minor. Leave the action as "Launch from Website." The EC2 Instance Type should remain `c5.2xlarge`. The primary change you'll need to make is to choose a keypair which will enable you to `ssh` into the newly created EC2 instance to run `curl` commands on the Validator node. You can search for existing Keypairs or you can create a new keypair and download it to your local machine. If you create a new keypair you'll need to move the keypair to the appropriate location, change the permissions and add it to the OpenSSH authentication agent. For example, on MacOS it would look similar to the following:
```bash
# In this example we have a keypair called avalanche.pem which was downloaded from AWS to ~/Downloads/avalanche.pem
# Confirm the file exists with the following command
test -f ~/Downloads/avalanche.pem && echo "Avalanche.pem exists."
# Running the above command will output the following:
# Avalanche.pem exists.
# Move the avalanche.pem keypair from the ~/Downloads directory to the hidden ~/.ssh directory
mv ~/Downloads/avalanche.pem ~/.ssh
# Next add the private key identity to the OpenSSH authentication agent
ssh-add ~/.ssh/avalanche.pem;
# Change file modes or Access Control Lists
sudo chmod 600 ~/.ssh/avalanche.pem
```
Once these steps are complete you are ready to launch the Validator node on EC2. To make that happen, click the "Launch" button.

You now have an Avalanche node deployed on an AWS EC2 instance! Copy the `AMI ID` and click on the `EC2 Console` link for the next step.
## EC2 Console[](#ec2-console "Direct link to heading")
Now take the `AMI ID` from the previous step and input it into the search bar on the EC2 Console. This will bring you to the dashboard where you can find the EC2 instance's public IP address.

Copy that public IP address and open a Terminal or command line prompt. Once you have the new Terminal open, `ssh` into the EC2 instance with the following command.
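For example, assuming the `avalanche.pem` keypair from the earlier step and a default `ubuntu` user (replace `PUBLIC_IP` with the address you just copied):
```bash
ssh -i ~/.ssh/avalanche.pem ubuntu@PUBLIC_IP
```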
## Node Configuration[](#node-configuration "Direct link to heading")
### Switch to Fuji Testnet[](#switch-to-fuji-testnet "Direct link to heading")
By default the Avalanche Node available through the AWS Marketplace syncs the Mainnet. If this is what you are looking for, you can skip this step.
For this tutorial you want to sync and validate the Fuji Testnet. Now that you're `ssh`ed into the EC2 instance you can make the required changes to sync Fuji instead of Mainnet.
First, confirm that the node is syncing the Mainnet by running the `info.getNetworkID` command.
#### `info.getNetworkID` Request[](#infogetnetworkid-request "Direct link to heading")
```bash
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"info.getNetworkID",
"params": {
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```
#### `info.getNetworkID` Response[](#infogetnetworkid-response "Direct link to heading")
The returned `networkID` will be 1 which is the network ID for Mainnet.
```json
{
"jsonrpc": "2.0",
"result": {
"networkID": "1"
},
"id": 1
}
```
Now you want to edit `/etc/avalanchego/conf.json` and change the `"network-id"` property from `"mainnet"` to `"fuji"`. To see the contents of `/etc/avalanchego/conf.json` you can `cat` the file.
```bash
cat /etc/avalanchego/conf.json
{
"api-keystore-enabled": false,
"http-host": "0.0.0.0",
"log-dir": "/var/log/avalanchego",
"db-dir": "/data/avalanchego",
"api-admin-enabled": false,
"public-ip-resolution-service": "opendns",
"network-id": "mainnet"
}
```
Edit that `/etc/avalanchego/conf.json` with your favorite text editor and change the value of the `"network-id"` property from `"mainnet"` to `"fuji"`. Once that's complete, save the file and restart the Avalanche node via `sudo systemctl restart avalanchego`. You can then call the `info.getNetworkID` endpoint to confirm the change was successful.
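As a non-interactive sketch of the same edit (assuming the file contents shown above):
```bash
# Switch the node from Mainnet to Fuji and restart the service
sudo sed -i 's/"network-id": "mainnet"/"network-id": "fuji"/' /etc/avalanchego/conf.json
sudo systemctl restart avalanchego
```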
#### `info.getNetworkID` Request[](#infogetnetworkid-request-1 "Direct link to heading")
```bash
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"info.getNetworkID",
"params": {
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```
#### `info.getNetworkID` Response[](#infogetnetworkid-response-1 "Direct link to heading")
The returned `networkID` will be 5 which is the network ID for Fuji.
```json
{
"jsonrpc": "2.0",
"result": {
"networkID": "5"
},
"id": 1
}
```
Next, run the `info.isBootstrapped` command to confirm whether the Avalanche Validator node has finished bootstrapping.
### `info.isBootstrapped` Request[](#infoisbootstrapped-request "Direct link to heading")
```bash
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"info.isBootstrapped",
"params": {
"chain":"P"
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```
Once the node is finished bootstrapping, the response will be:
### `info.isBootstrapped` Response[](#infoisbootstrapped-response "Direct link to heading")
```json
{
"jsonrpc": "2.0",
"result": {
"isBootstrapped": true
},
"id": 1
}
```
**Note** that initially the response is `false` because the network is still syncing.
When you're adding your node as a Validator on the Avalanche Mainnet you'll want to wait for this response to return `true` so that you don't suffer from any downtime while validating. For this tutorial you're not going to wait for it to finish syncing as it's not strictly necessary.
### `info.getNodeID` Request[](#infogetnodeid-request "Direct link to heading")
Next, you want to get the NodeID which will be used to add the node as a Validator. To get the node's ID you call the `info.getNodeID` jsonrpc endpoint.
```bash
curl --location --request POST 'http://127.0.0.1:9650/ext/info' \
--header 'Content-Type: application/json' \
--data-raw '{
"jsonrpc":"2.0",
"id" :1,
"method" :"info.getNodeID",
"params" :{
}
}'
```
### `info.getNodeID` Response[](#infogetnodeid-response "Direct link to heading")
Take note of the `nodeID` value which is returned, as you'll need it in the next step when adding the node as a validator via Core web. In this case the `nodeID` is `NodeID-Q8Gfaaio9FAqCmZVEXDq9bFvNPvDi7rt5`.
```json
{
"jsonrpc": "2.0",
"result": {
"nodeID": "NodeID-Q8Gfaaio9FAqCmZVEXDq9bFvNPvDi7rt5",
"nodePOP": {
"publicKey": "0x85675db18b326a9585bfd43892b25b71bf01b18587dc5fac136dc5343a9e8892cd6c49b0615ce928d53ff5dc7fd8945d",
"proofOfPossession": "0x98a56f092830161243c1f1a613ad68a7f1fb25d2462ecf85065f22eaebb4e93a60e9e29649a32252392365d8f628b2571174f520331ee0063a94473f8db6888fc3a722be330d5c51e67d0d1075549cb55376e1f21d1b48f859ef807b978f65d9"
}
},
"id": 1
}
```
## Add Node as Validator on Fuji via Core web[](#add-node-as-validator-on-fuji-via-core-web "Direct link to heading")
For adding the new node as a Validator on the Fuji testnet's Primary Network you can use the [Core web](https://core.app/) [connected](https://support.avax.network/en/articles/6639869-core-web-how-do-i-connect-to-core-web) to [Core extension](https://core.app). If you don't have a Core extension already, check out this [guide](https://support.avax.network/en/articles/6100129-core-extension-how-do-i-create-a-new-wallet). If you'd like to import an existing wallet to Core extension, follow [these steps](https://support.avax.network/en/articles/6078933-core-extension-how-do-i-access-my-existing-account).

Core web is a free, all-in-one command center that gives users a more intuitive and comprehensive way to view assets, and use dApps across the Avalanche network, its various Avalanche L1s, and Ethereum. Core web is optimized for use with the Core browser extension and Core mobile (available on both iOS & Android). Together, they are key components of the Core product suite that brings dApps, NFTs, Avalanche Bridge, Avalanche L1s, L2s, and more, directly to users.
### Switching to Testnet Mode[](#switching-to-testnet-mode "Direct link to heading")
By default, Core web and Core extension are connected to Mainnet. For the sake of this demo, you want to connect to the Fuji Testnet.
#### On Core Extension[](#on-core-extension "Direct link to heading")
From the hamburger menu on the top-left corner, choose Advanced, and then toggle the Testnet Mode on.

You can follow the same steps for switching back to Mainnet.
#### On Core web[](#on-core-web "Direct link to heading")
Click on the Settings button top-right corner of the page, then toggle Testnet Mode on.

You can follow the same steps for switching back to Mainnet.
### Adding the Validator[](#adding-the-validator "Direct link to heading")
* Node ID: A unique ID derived from each individual node's staker certificate. Use the `NodeID` which was returned in the `info.getNodeID` response. In this example it's `NodeID-Q8Gfaaio9FAqCmZVEXDq9bFvNPvDi7rt5`
* Staking End Date: Your AVAX tokens will be locked until this date.
* Stake Amount: The amount of AVAX to lock for staking. On Mainnet, the minimum required amount is 2,000 AVAX. On Testnet the minimum required amount is 1 AVAX.
* Delegation Fee: You will claim this % of the rewards from the delegators on your node.
* Reward Address: A reward address is the destination address of the accumulated staking rewards.
To add a node as a Validator, first select the Stake tab on Core web, in the left hand nav menu. Next click the Validate button, and select Get Started.

This page will open up.

Choose the desired Staking Amount, then click Next.

Enter your Node ID, then click Next.

Here, you'll need to choose the staking duration. There are predefined values, like 1 day, 1 month and so on. You can also choose a custom period of time. For this example, 22 days were chosen.

Choose the address that the network will send rewards to. Make sure it's the correct address because once the transaction is submitted this cannot be changed later or undone. You can choose the wallet's P-Chain address, or a custom P-Chain address. After entering the address, click Next.

Other individuals can stake to your validator and receive rewards too, known as "delegating." You will claim this percent of the rewards from the delegators on your node. Click Next.

After entering all these details, a summary of your validation will show up. If everything is correct, you can proceed and click on Submit Validation. A new page will open up, prompting you to accept the transaction. Here, please approve the transaction.

After the transaction is approved, you will see a message saying that your validation transaction was submitted.

If you click on View on explorer, a new browser tab will open with the details of the `AddValidatorTx`. It will show details such as the total value of AVAX transferred, any AVAX which were burned, the blockchainID, the blockID, the NodeID of the validator, and how much of the total validation period has elapsed.

## Confirm That the Node is a Pending Validator on Fuji[](#confirm-that-the-node-is-a-pending-validator-on-fuji "Direct link to heading")
As a last step you can call the `platform.getPendingValidators` endpoint to confirm that the Avalanche node which was recently spun up on AWS is now in the pending validators queue, where it will stay for 5 minutes.
### `platform.getPendingValidators` Request[](#platformgetpendingvalidators-request "Direct link to heading")
```bash
curl --location --request POST 'https://api.avax-test.network/ext/bc/P' \
--header 'Content-Type: application/json' \
--data-raw '{
"jsonrpc": "2.0",
"method": "platform.getPendingValidators",
"params": {
"subnetID": "11111111111111111111111111111111LpoYY",
"nodeIDs": []
},
"id": 1
}'
```
### `platform.getPendingValidators` Response[](#platformgetpendingvalidators-response "Direct link to heading")
```json
{
"jsonrpc": "2.0",
"result": {
"validators": [
{
"txID": "4d7ZboCrND4FjnyNaF3qyosuGQsNeJ2R4KPJhHJ55VCU1Myjd",
"startTime": "1673411918",
"endTime": "1675313170",
"stakeAmount": "1000000000",
"nodeID": "NodeID-Q8Gfaaio9FAqCmZVEXDq9bFvNPvDi7rt5",
"delegationFee": "2.0000",
"connected": false,
"delegators": null
}
],
"delegators": []
},
"id": 1
}
```
You can also pass in the `NodeID` as a string to the `nodeIDs` array in the request body.
```bash
curl --location --request POST 'https://api.avax-test.network/ext/bc/P' \
--header 'Content-Type: application/json' \
--data-raw '{
"jsonrpc": "2.0",
"method": "platform.getPendingValidators",
"params": {
"subnetID": "11111111111111111111111111111111LpoYY",
"nodeIDs": ["NodeID-Q8Gfaaio9FAqCmZVEXDq9bFvNPvDi7rt5"]
},
"id": 1
}'
```
This filters the response by the `nodeIDs` array, saving you from having to search through the entire response body for your NodeID.
```json
{
"jsonrpc": "2.0",
"result": {
"validators": [
{
"txID": "4d7ZboCrND4FjnyNaF3qyosuGQsNeJ2R4KPJhHJ55VCU1Myjd",
"startTime": "1673411918",
"endTime": "1675313170",
"stakeAmount": "1000000000",
"nodeID": "NodeID-Q8Gfaaio9FAqCmZVEXDq9bFvNPvDi7rt5",
"delegationFee": "2.0000",
"connected": false,
"delegators": null
}
],
"delegators": []
},
"id": 1
}
```
After 5 minutes the node will officially start validating the Avalanche Fuji testnet and you will no longer see it in the response body for the `platform.getPendingValidators` endpoint. Now you will access it via the `platform.getCurrentValidators` endpoint.
### `platform.getCurrentValidators` Request[](#platformgetcurrentvalidators-request "Direct link to heading")
```bash
curl --location --request POST 'https://api.avax-test.network/ext/bc/P' \
--header 'Content-Type: application/json' \
--data-raw '{
"jsonrpc": "2.0",
"method": "platform.getCurrentValidators",
"params": {
"subnetID": "11111111111111111111111111111111LpoYY",
"nodeIDs": ["NodeID-Q8Gfaaio9FAqCmZVEXDq9bFvNPvDi7rt5"]
},
"id": 1
}'
```
### `platform.getCurrentValidators` Response[](#platformgetcurrentvalidators-response "Direct link to heading")
```json
{
"jsonrpc": "2.0",
"result": {
"validators": [
{
"txID": "2hy57Z7KiZ8L3w2KonJJE1fs5j4JDzVHLjEALAHaXPr6VMeDhk",
"startTime": "1673411918",
"endTime": "1675313170",
"stakeAmount": "1000000000",
"nodeID": "NodeID-Q8Gfaaio9FAqCmZVEXDq9bFvNPvDi7rt5",
"rewardOwner": {
"locktime": "0",
"threshold": "1",
"addresses": [
"P-fuji1tgj2c3k56enytw5d78rt0tsq3lzg8wnftffwk7"
]
},
"validationRewardOwner": {
"locktime": "0",
"threshold": "1",
"addresses": [
"P-fuji1tgj2c3k56enytw5d78rt0tsq3lzg8wnftffwk7"
]
},
"delegationRewardOwner": {
"locktime": "0",
"threshold": "1",
"addresses": [
"P-fuji1tgj2c3k56enytw5d78rt0tsq3lzg8wnftffwk7"
]
},
"potentialReward": "5400963",
"delegationFee": "2.0000",
"uptime": "0.0000",
"connected": false,
"delegators": null
}
]
},
"id": 1
}
```
## Mainnet[](#mainnet "Direct link to heading")
All of these steps can be applied to Mainnet. However, the minimum amount required to stake to become a validator on Mainnet is 2,000 AVAX. For more information, please read [this doc](/docs/nodes/validate/how-to-stake#validators).
## Maintenance[](#maintenance "Direct link to heading")
The AWS one-click deployment is meant to be used in automated environments, not as an end-user solution. You can still manage it manually, but it is not as easy as an Ubuntu instance or using the installer script:
* AvalancheGo binary is at `/usr/local/bin/avalanchego`
* Main node config is at `/etc/avalanchego/conf.json`
* Working directory is at `/home/avalanche/.avalanchego/` (and belongs to the `avalanchego` user)
* Database is at `/data/avalanchego`
* Logs are at `/var/log/avalanchego`
For a simple upgrade you would need to place the new binary at `/usr/local/bin/`. If you run an Avalanche L1, you would also need to place the VM binary into `/home/avalanche/.avalanchego/plugins`.
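A rough sketch of such a manual upgrade, assuming a pre-built release tarball from the AvalancheGo GitHub releases page (the version and asset name below are assumptions; check the releases page first):
```bash
# Example only: substitute the desired release version
VERSION=v1.11.0
wget https://github.com/ava-labs/avalanchego/releases/download/${VERSION}/avalanchego-linux-amd64-${VERSION}.tar.gz
tar -xzf avalanchego-linux-amd64-${VERSION}.tar.gz

# Stop the service, replace the binary, then start it again
sudo systemctl stop avalanchego
sudo cp avalanchego-${VERSION}/avalanchego /usr/local/bin/avalanchego
sudo systemctl start avalanchego
```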
You can also look at using [this guide](https://docs.aws.amazon.com/systems-manager/latest/userguide/automation-tutorial-update-ami.html), but that won't address updating the Avalanche L1, if you have one.
## Summary[](#summary "Direct link to heading")
Avalanche is the first decentralized smart contracts platform built for the scale of global finance, with near-instant transaction finality. Now with an Avalanche Validator node available as a one-click install from the AWS Marketplace developers and entrepreneurs can on-ramp into the Avalanche ecosystem in a matter of minutes. If you have any questions or want to follow up in any way please join our Discord server at [https://chat.avax.network](https://chat.avax.network/). For more developer resources please check out our [Developer Documentation](/docs/).
# Google Cloud
URL: /docs/nodes/on-third-party-services/google-cloud
Learn how to run an Avalanche node on Google Cloud.
This document was written by a community member, some information may be outdated.
## Introduction[](#introduction "Direct link to heading")
Google's Cloud Platform (GCP) is a scalable, trusted and reliable hosting platform. Google operates a significant amount of its own global networking infrastructure. Its [fiber network](https://cloud.google.com/blog/products/networking/google-cloud-networking-in-depth-cloud-cdn) can provide highly stable and consistent global connectivity. In this article, we will leverage GCP to deploy a node on which Avalanche can be installed via [terraform](https://www.terraform.io/). Leveraging `terraform` may seem like overkill, but it should set you apart as an operator and administrator, as it will give you greater flexibility and provide a basis on which you can easily build further automation.
## Conventions[](#conventions "Direct link to heading")
* `Items` highlighted in this manner are GCP parlance and can be searched for further reference in the Google documentation for their cloud products.
## Important Notes[](#important-notes "Direct link to heading")
* The machine type used in this documentation is for reference only and the actual sizing you use will depend entirely upon the amount that is staked and delegated to the node.
## Architectural Description[](#architectural-description "Direct link to heading")
This section aims to describe the architecture of the system that the steps in the [Setup Instructions](#-setup-instructions) section deploy when enacted. This is done so that the executor can not only deploy the reference architecture, but also understand and potentially optimize it for their needs.
### Project[](#project "Direct link to heading")
We will create and utilize a single GCP `Project` for deployment of all resources.
#### Service Enablement[](#service-enablement "Direct link to heading")
Within our GCP project we will need to enable the following Cloud Services:
* `Compute Engine`
* `IAP`
### Networking[](#networking "Direct link to heading")
#### Compute Network[](#compute-network "Direct link to heading")
We will deploy a single `Compute Network` object. This unit is where we will deploy all subsequent networking objects. It provides a logical boundary and securitization context should you wish to deploy other chain stacks or other infrastructure in GCP.
#### Public IP[](#public-ip "Direct link to heading")
Avalanche requires that a validator communicate outbound on the same public IP address that it advertises for other peers to connect to it on. Within GCP this precludes the possibility of us using a Cloud NAT Router for the outbound communications and requires us to bind the public IP that we provision to the interface of the machine. We will provision a single `EXTERNAL` static IPv4 `Compute Address`.
#### Avalanche L1s[](#avalanche-l1s "Direct link to heading")
For the purposes of this documentation we will deploy a single `Compute Subnetwork` in the US-EAST1 `Region` with a /24 address range giving us 254 IP addresses (not all usable but for the sake of generalized documentation).
### Compute[](#compute "Direct link to heading")
#### Disk[](#disk "Direct link to heading")
We will provision a single 400GB `PD-SSD` disk that will be attached to our VM.
#### Instance[](#instance "Direct link to heading")
We will deploy a single `Compute Instance` of size `e2-standard-8`. Observations of operations using this machine specification suggest it is over-provisioned on memory and could be brought down to 16 GB using a custom machine specification, but please review and adjust as needed (the beauty of compute virtualization!).
#### Zone[](#zone "Direct link to heading")
We will deploy our instance into the `US-EAST1-B` `Zone`.
#### Firewall[](#firewall "Direct link to heading")
We will provision the following `Compute Firewall` rules:
* IAP INGRESS for SSH (TCP 22) - this only allows GCP IAP sources inbound on SSH.
* P2P INGRESS for AVAX Peers (TCP 9651)
These are obviously just default ports and can be tailored to your needs as you desire.
## Setup Instructions[](#-setup-instructions "Direct link to heading")
### GCP Account[](#gcp-account "Direct link to heading")
1. If you don't already have a GCP account go create one [here](https://console.cloud.google.com/freetrial)
You will get some free credits to run a trial. The trial is feature-complete, but your usage will start to deplete your free credits, so turn off anything you don't need and/or add a credit card to your account if you intend to run things long term, to avoid service shutdowns.
### Project[](#project-1 "Direct link to heading")
Login to the GCP `Cloud Console` and create a new `Project` in your organization. Let's use the name `my-avax-nodes` for the sake of this setup.
### Terraform State[](#terraform-state "Direct link to heading")
Terraform uses a state file to compose a differential between the current infrastructure configuration and the proposed plan. You can store this state in a variety of different places, but using GCP storage is a reasonable approach given where we are deploying, so we will stick with that.
Authentication to GCP from terraform has a few different options which are laid out [here](https://www.terraform.io/language/settings/backends/gcs). Please choose the option that aligns with your context and ensure those steps are completed before continuing.
Depending upon how you intend to execute your terraform operations you may or may not need to enable public access to the bucket. Obviously, not exposing the bucket for `public` access (even if authenticated) is preferable. If you intend to simply run terraform commands from your local machine then you will need to open the access up. I recommend employing a full CI/CD pipeline using GCP Cloud Build, which if utilized will mean the bucket can be marked as `private`. A full walk through of Cloud Build setup in this context can be found [here](https://cloud.google.com/architecture/managing-infrastructure-as-code).
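For example, a state bucket could be created with the gcloud SDK's `gsutil` tool (the project and bucket names below are only examples; the bucket name must be globally unique):
```bash
gsutil mb -p my-avax-nodes -l US-EAST1 gs://my-avax-nodes-tfstate
```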
### Clone GitHub Repository[](#clone-github-repository "Direct link to heading")
I have provided a rudimentary terraform construct to provision a node on which to run Avalanche which can be found [here](https://github.com/meaghanfitzgerald/deprecated-avalanche-docs/tree/master/static/scripts). Documentation below assumes you are using this repository but if you have another terraform skeleton similar steps will apply.
### Terraform Configuration[](#terraform-configuration "Direct link to heading")
1. If running terraform locally, please [install](https://learn.hashicorp.com/tutorials/terraform/install-cli) it.
2. In this repository, navigate to the `terraform` directory.
3. Under the `projects` directory, rename the `my-avax-project` directory to match your GCP project name that you created (not required, but nice to be consistent)
4. Under the folder you just renamed locate the `terraform.tfvars` file.
5. Edit this file and populate it with the values which make sense for your context and save it.
6. Locate the `backend.tf` file in the same directory.
7. Edit this file ensuring to replace the `bucket` property with the GCS bucket name that you created earlier.
If you do not wish to use cloud storage to persist terraform state, simply switch the `backend` to another suitable provider.
### Terraform Execution[](#terraform-execution "Direct link to heading")
Terraform enables us to see what it would do if we were to run it without actually applying any changes... this is called a `plan` operation. This plan is then enacted (optionally) by an `apply`.
#### Plan[](#plan "Direct link to heading")
1. In a terminal which is able to execute the `tf` binary, `cd` to the `my-avax-project` directory that you renamed in step 3 of `Terraform Configuration`.
2. Execute the command `tf plan`
3. You should see output on stdout of the terminal which lays out the operations that terraform will execute to apply the intended state.
#### Apply[](#apply "Direct link to heading")
1. In a terminal which is able to execute the `tf` binary, `cd` to the `my-avax-project` directory that you renamed in step 3 of `Terraform Configuration`.
2. Execute the command `tf apply`
If you want to ensure that terraform does **exactly** what you saw in the `plan` output, you can optionally request for the `plan` output to be saved to a file to feed to `apply`. This is generally considered best practice in highly fluid environments where rapid change is occurring from multiple sources.
## Conclusion[](#conclusion "Direct link to heading")
Establishing CI/CD practices using tools such as GitHub and Terraform to manage your infrastructure assets is a great way to ensure base disaster recovery capabilities, and to ensure you have a place to embed any tweaks you have to make operationally, removing the potential to miss them when you have to scale from 1 node to 10. Having an automated pipeline also gives you a place to build a bigger house... what starts as your interest in building and managing a single AVAX node today can quickly change into you building an infrastructure operation for many different chains, working with multiple different team members. I hope this may have inspired you to take a leap into automation in this context!
# Latitude
URL: /docs/nodes/on-third-party-services/latitude
Learn how to run an Avalanche node on Latitude.sh.
## Introduction[](#introduction "Direct link to heading")
This tutorial will guide you through setting up an Avalanche node on [Latitude.sh](https://latitude.sh/). Latitude.sh provides high-performance, lightning-fast bare metal servers to ensure that your node is highly secure, available, and accessible.
To get started, you'll need:
* A Latitude.sh account
* A terminal with which to SSH into your Latitude.sh machine
For the instructions on creating an account and server with Latitude.sh, please reference their [GitHub tutorial](https://github.com/NottherealIllest/Latitude.sh-post/blob/main/avalanhe/avax-copy.md), or visit [this page](https://www.latitude.sh/dashboard/signup) to sign up and create your first project.
This tutorial assumes your local machine has a Unix-style terminal. If you're on Windows, you'll have to adapt some of the commands used here.
## Configuring Your Server[](#configuring-your-server "Direct link to heading")
### Create a Latitude.sh Account[](#create-a-latitudesh-account "Direct link to heading")
At this point your account has been verified, and you have created a new project and deployed the server according to the instructions linked above.
### Access Your Server & Further Steps[](#access-your-server--further-steps "Direct link to heading")
All your Latitude.sh credentials are available by clicking the `server` under your project, and can be used to access your Latitude.sh machine from your local machine using a terminal.
You will need to run the Avalanche node installer script directly in the server's terminal.
After gaining access, we'll need to set up our Avalanche node. To do this, follow the instructions here to install and run your node [Set Up Avalanche Node With Installer](/docs/nodes/using-install-script/installing-avalanche-go).
Your AvalancheGo node should now be running and in the process of bootstrapping, which can take a few hours. To check if it's done, you can issue an API call using `curl`. The request is:
```bash
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"info.isBootstrapped",
"params": {
"chain":"X"
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```
Once the node is finished bootstrapping, the response will be:
```json
{
"jsonrpc": "2.0",
"result": {
"isBootstrapped": true
},
"id": 1
}
```
You can continue on, even if AvalancheGo isn't done bootstrapping. In order to make your node a validator, you'll need its node ID. To get it, run:
```bash
curl -X POST --data '{
"jsonrpc": "2.0",
"id": 1,
"method": "info.getNodeID"
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```
The response contains the node ID.
```json
{
"jsonrpc": "2.0",
"result": { "nodeID": "KhDnAoZDW8iRJ3F26iQgK5xXVFMPcaYeu" },
"id": 1
}
```
In the above example the node ID is `NodeID-KhDnAoZDW8iRJ3F26iQgK5xXVFMPcaYeu`.
AvalancheGo has other APIs, such as the [Health API](/docs/api-reference/health-api), that may be used to interact with the node. Some APIs are disabled by default. To enable such APIs, modify the ExecStart section of `/etc/systemd/system/avalanchego.service` (created during the installation process) to include flags that enable these endpoints. Don't manually enable any APIs unless you have a reason to.
Exit out of the SSH server by running:
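```bash
exit
```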
### Upgrading Your Node[](#upgrading-your-node "Direct link to heading")
AvalancheGo is an ongoing project and there are regular version upgrades. Most upgrades are recommended but not required. Advance notice will be given for upgrades that are not backwards compatible. To update your node to the latest version, SSH into your server using a terminal and run the installer script again.
```bash
./avalanchego-installer.sh
```
Your machine is now running the newest AvalancheGo version. To see the status of the AvalancheGo service, run `sudo systemctl status avalanchego`.
## Wrap Up[](#wrap-up "Direct link to heading")
That's it! You now have an AvalancheGo node running on a Latitude.sh machine. We recommend setting up [node monitoring](/docs/nodes/maintain/monitoring) for your AvalancheGo node.
# Microsoft Azure
URL: /docs/nodes/on-third-party-services/microsoft-azure
How to run an Avalanche node on Microsoft Azure.
This document was written by a community member, some information may be out of date.
Running a validator and staking with Avalanche provides extremely competitive rewards of between 9.69% and 11.54% depending on the length you stake for. The maximum rate is earned by staking for a year, whilst the lowest rate is earned by staking for 14 days. There is also no slashing, so you don't need to worry about a hardware failure or bug in the client which causes you to lose part or all of your stake. Instead, with Avalanche you currently only need to maintain at least 80% uptime to receive rewards. If you fail to meet this requirement you don't get slashed, but you don't receive the rewards. **You also do not need to put your private keys onto a node to begin validating on that node.** Even if someone breaks into your cloud environment and gains access to the node, the worst they can do is turn off the node.
Not only does running a validator node enable you to receive rewards in AVAX, but later you will also be able to validate other Avalanche L1s in the ecosystem as well and receive rewards in the token native to their Avalanche L1s.
Hardware requirements to run a validator are relatively modest: 8 CPU cores, 16 GB of RAM and 1 TB SSD. It also doesn't use enormous amounts of energy. Avalanche's [revolutionary consensus mechanism](/docs/quick-start/avalanche-consensus) is able to scale to millions of validators participating in consensus at once, offering unparalleled decentralisation.
Currently the minimum amount required to stake to become a validator is 2,000 AVAX. Alternatively, validators can also charge a small fee to enable users to delegate their stake with them to help towards running costs.
In this article we will step through the process of configuring a node on Microsoft Azure. This tutorial assumes no prior experience with Microsoft Azure and will go through each step with as few assumptions possible.
At the time of this article, spot pricing for a virtual machine with 2 Cores and 8 GB memory costs as little as $0.01060 per hour, which works out at about $113.44 a year, **a saving of 83.76% compared to normal pay-as-you-go prices!** In comparison, a virtual machine in AWS with 2 Cores and 4 GB Memory with spot pricing is around $462 a year.
## Initial Subscription Configuration[](#initial-subscription-configuration "Direct link to heading")
### Set up 2 Factor[](#set-up-2-factor "Direct link to heading")
First you will need a Microsoft Account; if you don't have one already, you will see an option to create one at the following link. If you already have one, make sure to set up 2-factor authentication to secure your node by going to the following link, then selecting "Two-step verification" and following the steps provided.
[https://account.microsoft.com/security](https://account.microsoft.com/security)

Once two factor has been configured log into the Azure portal by going to [https://portal.azure.com](https://portal.azure.com/) and signing in with your Microsoft account. When you login you won't have a subscription, so we need to create one first. Select "Subscriptions" as highlighted below:

Then select "+ Add" to add a new subscription

If you want to use Spot Instance VM Pricing (which will be considerably cheaper) you can't use a Free Trial account (and you will receive an error upon validation), so **make sure to select Pay-As-You-Go.**

Enter your billing details and confirm identity as part of the sign-up process, when you get to Add technical support select the without support option (unless you want to pay extra for support) and press Next.

## Create a Virtual Machine[](#create-a-virtual-machine "Direct link to heading")
Now that we have a subscription, we can create the Ubuntu Virtual Machine for our Avalanche Node. Select the Icon in the top left for the Menu and choose "+ Create a resource"

Select Ubuntu Server 18.04 LTS (this will normally be under the popular section or alternatively search for it in the marketplace)

This will take you to the Create a virtual machine page as shown below:

First, enter a name for the virtual machine. This can be anything; in my example I have called it Avalanche (this will also automatically change the resource group name to match).
Then select a region from the drop-down list. Select one of the recommended ones in a region that you prefer as these tend to be the larger ones with most features enabled and cheaper prices. In this example I have selected North Europe.

You have the option of using spot pricing to save significant amounts on running costs. Spot instances use a supply and demand market price structure. As demand for instances goes up, the price for the spot instance goes up. If there is insufficient capacity, then your VM will be turned off. The chances of this happening are incredibly low though, especially if you select the Capacity only option. Even in the unlikely event it does get turned off temporarily you only need to maintain at least 80% up time to receive the staking rewards and there is no slashing implemented in Avalanche.
Select Yes for Azure Spot instance, select Eviction type to Capacity Only and **make sure to set the eviction policy to Stop / Deallocate. This is very important otherwise the VM will be deleted**

Choose "Select size" to change the Virtual Machine size, and from the menu select D2s\_v4 under the D-Series v4 selection (This size has 2 Cores, 8 GB Memory and enables Premium SSDs). You can use F2s\_v2 instances instead, with are 2 Cores, 4 GB Memory and enables Premium SSDs) but the spot price actually works out cheaper for the larger VM currently with spot instance prices. You can use [this link](https://azure.microsoft.com/en-us/pricing/details/virtual-machines/linux/) to view the prices across the different regions.

Once you have selected the size of the Virtual Machine, select "View pricing history and compare prices in nearby regions" to see how the spot price has changed over the last 3 months, and whether it's cheaper to use a nearby region which may have more spare capacity.

At the time of this article, pay-as-you-go pricing for D2s\_v4 in North Europe costs $0.07975 per hour, or around $698.61 a year. With spot pricing, the price falls to $0.01295 per hour, which works out at about $113.44 a year, **a saving of 83.76%!**
There are some regions which are even cheaper; East US for example is $0.01060 per hour, or around $92.86 a year!

Below you can see the price history of the VM over the last 3 months for North Europe and regions nearby.

### Cheaper Than Amazon AWS[](#cheaper-than-amazon-aws "Direct link to heading")
As a comparison, a c5.large instance costs $0.085 USD per hour on AWS. This totals ~$745 USD per year. Spot instances can save 62%, bringing that total down to $462.
The next step is to change the username for the VM. To align with other Avalanche tutorials, change the username to Ubuntu. Otherwise you will need to change several commands later in this article and swap out Ubuntu with your new username.

### Disks[](#disks "Direct link to heading")
Select Next: Disks to then configure the disks for the instance. There are 2 choices for disks: Premium SSD, which offers greater performance and where a 64 GB disk costs around $10 a month, or Standard SSD, which offers lower performance and is around $5 a month. You also have to pay $0.002 per 10,000 transaction units (reads / writes and deletes) with the Standard SSD, whereas with Premium SSDs everything is included. Personally, I chose the Premium SSD for greater performance, but also because the disks are likely to be heavily used and so may even work out cheaper in the long run.
Select Next: Networking to move onto the network configuration

### Network Config[](#network-config "Direct link to heading")
You want to use a Static IP so that the public IP assigned to the node doesn't change in the event it stops. Under Public IP select "Create new"

Then select "Static" as the Assignment type

Then we need to configure the network security group to control access inbound to the Avalanche node. Select "Advanced" as the NIC network security group type and select "Create new"

For security purposes you want to restrict who is able to remotely connect to your node. To do this you will first want to find out what your existing public IP is. This can be done by going to google and searching for "what's my IP"

It's likely that you have been assigned a dynamic public IP for your home, unless you have specifically requested it, and so your assigned public IP may change in the future. It's still recommended to restrict access to your current IP though, and then in the event your home IP changes and you are no longer able to remotely connect to the VM, you can just update the network security rules with your new public IP so you are able to connect again.
NOTE: If you need to change the network security group rules after deployment if your home IP has changed, search for "avalanche-nsg" and you can modify the rule for SSH and Port 9650 with the new IP. **Port 9651 needs to remain open to everyone** though as that's how it communicates with other Avalanche nodes.

Now that you have your public IP select the default allow ssh rule on the left under inbound rules to modify it. Change Source from "Any" to "IP Addresses" and then enter in your Public IP address that you found from google in the Source IP address field. Change the Priority towards the bottom to 100 and then press Save.

Then select "+ Add an inbound rule" to add another rule for RPC access, this should also be restricted to only your IP. Change Source to "IP Addresses" and enter in your public IP returned from google into the Source IP field. This time change the "Destination port ranges" field to 9650 and select "TCP" as the protocol. Change the priority to 110 and give it a name of "Avalanche\_RPC" and press Add.

Select "+ Add an inbound rule" to add a final rule for the Avalanche Protocol so that other nodes can communicate with your node. This rule needs to be open to everyone so keep "Source" set to "Any." Change the Destination port range to "9651" and change the protocol to "TCP." Enter a priority of 120 and a name of Avalanche\_Protocol and press Add.

The network security group should look like the below (albeit your public IP address will be different) and press OK.

Leave the other settings as default and then press "Review + create" to create the Virtual machine.

First it will perform a validation test. If you receive an error here, make sure you selected Pay-As-You-Go subscription model and you are not using the Free Trial subscription as Spot instances are not available. Verify everything looks correct and press "Create"

You should then receive a prompt asking you to generate a new key pair to connect your virtual machine. Select "Download private key and create resource" to download the private key to your PC.

Once your deployment has finished, select "Go to resource"

## Change the Provisioned Disk Size[](#change-the-provisioned-disk-size "Direct link to heading")
By default, the Ubuntu VM will be provisioned with a 30 GB Premium SSD. You should increase this to 250 GB, to allow for database growth.

To change the Disk size, the VM needs to be stopped and deallocated. Select "Stop" and wait for the status to show deallocated. Then select "Disks" on the left.

Select the Disk name that's currently provisioned to modify it.

Select "Size + performance" on the left under settings and change the size to 250 GB and press "Resize"

Doing this now will also extend the partition automatically within Ubuntu. To go back to the virtual machine overview page, select Avalanche in the navigation setting.

Then start the VM

## Connect to the Avalanche Node[](#connect-to-the-avalanche-node "Direct link to heading")
The following instructions show how to connect to the Virtual Machine from a Windows 10 machine. For instructions on how to connect from a Ubuntu machine see the [AWS tutorial](/docs/nodes/on-third-party-services/amazon-web-services).
On your local PC, create a folder on the root of the C: drive called Avalanche and then move the Avalanche\_key.pem file you downloaded before into the folder. Then right click the file and select Properties. Go to the security tab and select "Advanced" at the bottom

Select "Disable inheritance" and then "Remove all inherited permissions from this object" to remove all existing permissions on that file.

Then select "Add" to add a new permission and choose "Select a principal" at the top. From the pop-up box enter in your user account that you use to log into your machine. In this example I log on with a local user called Seq, you may have a Microsoft account that you use to log in, so use whatever account you login to your PC with and press "Check Names" and it should underline it to verify and press OK.

Then from the permissions section make sure only "Read & Execute" and "Read" are selected and press OK.

It should look something like the below, except with a different PC name / user account. This just means the key file can't be modified or accessed by any other accounts on this machine for security purposes so they can't access your Avalanche Node.

### Find your Avalanche Node Public IP[](#find-your-avalanche-node-public-ip "Direct link to heading")
From the Azure Portal make a note of your static public IP address that has been assigned to your node.

To log onto the Avalanche node, open command prompt by searching for `cmd` and selecting "Command Prompt" on your Windows 10 machine.

Then use the following command, replacing `EnterYourAzureIPHere` with the static IP address shown on the Azure portal:
```
ssh -i C:\Avalanche\Avalanche_key.pem ubuntu@EnterYourAzureIPHere
```
The first time you connect you will receive a prompt asking to continue, enter yes.

You should now be connected to your Node.

The following section is taken from Colin's excellent tutorial for [configuring an Avalanche Node on Amazon's AWS](/docs/nodes/on-third-party-services/amazon-web-services).
### Update Linux with Security Patches[](#update-linux-with-security-patches "Direct link to heading")
Now that we are on our node, it's a good idea to update it to the latest packages. To do this, run the following commands, one-at-a-time, in order:
```
sudo apt update
sudo apt upgrade -y
sudo reboot
```

This will make our instance up to date with the latest security patches for our operating system. This will also reboot the node. We'll give the node a minute or two to boot back up, then log in again, same as before.
### Set up the Avalanche Node[](#set-up-the-avalanche-node "Direct link to heading")
Now we'll need to set up our Avalanche node. To do this, follow the [Set Up Avalanche Node With Installer](/docs/nodes/using-install-script/installing-avalanche-go) tutorial which automates the installation process. You will need the "IPv4 Public IP" copied from the Azure Portal we set up earlier.
Once the installation is complete, our node should now be bootstrapping! We can run the following command to take a peek at the latest status of the AvalancheGo node:
```
sudo systemctl status avalanchego
```
To check the status of the bootstrap, we'll need to make a request to the local RPC using `curl`. This request is as follows:
```
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"info.isBootstrapped",
"params": {
"chain":"X"
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```
The node can take some time (upward of an hour at this moment writing) to bootstrap. Bootstrapping means that the node downloads and verifies the history of the chains. Give this some time. Once the node is finished bootstrapping, the response will be:
```
{
"jsonrpc": "2.0",
"result": {
"isBootstrapped": true
},
"id": 1
}
```
We can always use `sudo systemctl status avalanchego` to peek at the latest status of our service as before, as well.
### Get Your NodeID[](#get-your-nodeid "Direct link to heading")
We absolutely must get our NodeID if we plan to do any validating on this node. This is retrieved from the RPC as well. We call the following curl command to get our NodeID.
```
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"info.getNodeID"
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```
If all is well, the response should look something like:
```
{"jsonrpc":"2.0","result":{"nodeID":"NodeID-Lve2PzuCvXZrqn8Stqwy9vWZux6VyGUCR"},"id":1}
```
That portion that says, "NodeID-Lve2PzuCvXZrqn8Stqwy9vWZux6VyGUCR" is our NodeID, the entire thing. Copy that and keep that in our notes. There's nothing confidential or secure about this value, but it's an absolute must for when we submit this node to be a validator.
### Backup Your Staking Keys[](#backup-your-staking-keys "Direct link to heading")
The last thing that should be done is backing up our staking keys in the untimely event that our instance is corrupted or terminated. It's just good practice for us to keep these keys. To back them up, we use the following command:
```
scp -i C:\Avalanche\avalanche_key.pem -r ubuntu@EnterYourAzureIPHere:/home/ubuntu/.avalanchego/staking C:\Avalanche
```
As before, we'll need to replace "EnterYourAzureIPHere" with the appropriate value that we retrieved. This backs up our staking key and staking certificate into the C:\Avalanche folder we created before.

# Avalanche L1 Nodes
URL: /docs/nodes/run-a-node/avalanche-l1-nodes
Learn how to run an Avalanche node that tracks an Avalanche L1.
This article describes how to run a node that tracks an Avalanche L1. It requires building AvalancheGo, adding Virtual Machine binaries as plugins to your local data directory, and running AvalancheGo to track these binaries.
This tutorial specifically covers tracking an Avalanche L1 built with Avalanche's [Subnet-EVM](https://github.com/ava-labs/subnet-evm), the default [Virtual Machine](/docs/quick-start/virtual-machines) run by Avalanche L1s on Avalanche.
## Build AvalancheGo
It is recommended that you first complete [this comprehensive guide](/docs/nodes/run-a-node/from-source), which demonstrates how to build and run a basic Avalanche node from source.
## Build Avalanche L1 Binaries
After building AvalancheGo successfully,
Clone [Subnet-EVM](https://github.com/ava-labs/subnet-evm):
```bash
cd $GOPATH/src/github.com/ava-labs
git clone https://github.com/ava-labs/subnet-evm.git
```
In the Subnet-EVM directory, run the build script and save the resulting binary in the `plugins` folder of your `.avalanchego` data directory. Name the plugin after the `VMID` of the Avalanche L1 you wish to track. The `VMID` of the WAGMI Avalanche L1 is the value beginning with **srEX...**
```bash
cd $GOPATH/src/github.com/ava-labs/subnet-evm
./scripts/build.sh ~/.avalanchego/plugins/srEXiWaHuhNyGwPUi444Tu47ZEDwxTWrbQiuD7FmgSAQ6X7Dy
```
VMID, Avalanche L1 ID (SubnetID), ChainID, and all other parameters can be found in the "Chain Info" section of the Avalanche L1 Explorer.
* [Avalanche Mainnet](https://subnets.avax.network/c-chain)
* [Fuji Testnet](https://subnets-test.avax.network/c-chain)
Create a file named `config.json` and add a `track-subnets` field that is populated with the `SubnetID` you wish to track. The `SubnetID` of the WAGMI Avalanche L1 is the value beginning with **28nr...**
```bash
cd ~/.avalanchego
echo '{"track-subnets": "28nrH5T2BMvNrWecFcV3mfccjs6axM1TVyqe79MCv2Mhs8kxiY"}' > config.json
```
## Run the Node
Run AvalancheGo with the `--config-file` flag to start your node and ensure it tracks the Avalanche L1s included in the configuration file.
```bash
cd $GOPATH/src/github.com/ava-labs/avalanchego
./build/avalanchego --config-file ~/.avalanchego/config.json --network-id=fuji
```
Note: The above command includes the `--network-id=fuji` command because the WAGMI Avalanche L1 is deployed on Fuji Testnet.
If you would prefer to track Avalanche L1s using a command line flag, you can instead use the `--track-subnets` flag. For example:
```bash
./build/avalanchego --track-subnets 28nrH5T2BMvNrWecFcV3mfccjs6axM1TVyqe79MCv2Mhs8kxiY --network-id=fuji
```
You should now see the terminal filled with logs and information suggesting the node is running properly and has begun bootstrapping to the network.
## Bootstrapping and RPC Details
It may take a few hours for the node to fully [bootstrap](/docs/nodes/run-a-node/from-source#bootstrapping) to the Avalanche Primary Network and tracked Avalanche L1s.
When finished bootstrapping, the endpoint will be:
```bash
localhost:9650/ext/bc/[blockchainID]/rpc
```
if run locally, or:
```bash
XXX.XX.XX.XXX:9650/ext/bc//rpc
```
if run on a cloud provider. The “X”s should be replaced with the public IP of your EC2 instance.
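Once the endpoint is live, you can verify it with a standard Ethereum JSON-RPC request such as `eth_blockNumber` (shown here against a local node; substitute your node's address and blockchain ID as appropriate):
```bash
curl -X POST --data '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_blockNumber",
    "params": []
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/<blockchainID>/rpc
```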
For more information on the requests available at these endpoints, please see the [Subnet-EVM API Reference](/docs/api-reference/subnet-evm-api) documentation.
Because each node is also tracking the Primary Network, those [RPC endpoints](/docs/nodes/run-a-node/from-source#rpc) are available as well.
# Common Errors
URL: /docs/nodes/run-a-node/common-errors
Common errors while running a node and their solutions.
If you experience any issues building your node, here are some common errors and possible solutions.
### Failed to Connect to Bootstrap Nodes[](#failed-to-connect-to-bootstrap-nodes "Direct link to heading")
Error: `WARN node/node.go:291 failed to connect to bootstrap nodes`
This error can occur when the node doesn't have access to the Internet or if the NodeID is already being used by a different node in the network. This can occur when an old instance is running and not terminated.
### Cannot Query Unfinalized Data[](#cannot-query-unfinalized-data "Direct link to heading")
Error: `err="cannot query unfinalized data"`
This issue can have a number of causes, but the most likely one is that the node is not properly connected to other validators, usually due to a networking misconfiguration (wrong public IP, or a closed P2P port 9651).
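To rule out the most common networking cause, you can check from another machine whether your node's staking port is reachable. This uses the generic `nc` utility rather than anything AvalancheGo-specific; replace the address with your node's public IP:
```bash
nc -zv <your-node-public-ip> 9651
```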
# Using Source Code
URL: /docs/nodes/run-a-node/from-source
Learn how to run an Avalanche node from AvalancheGo Source code.
The following steps walk through downloading the AvalancheGo source code and locally building the binary program. If you would like to run your node using a pre-built binary, follow [this](/docs/nodes/run-a-node/using-binary) guide.
## Install Dependencies
* Install [gcc](https://gcc.gnu.org/)
* Install [go](https://go.dev/doc/install)
## Build the Node Binary
Set the `$GOPATH`. You can follow [this](https://github.com/golang/go/wiki/SettingGOPATH) guide.
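If `$GOPATH` is not already set, a common convention (an illustrative choice, not a requirement) is to point it at `$HOME/go` in your shell profile:
```bash
# Add to ~/.bashrc or ~/.zshrc, then open a new shell or source the file
export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin
```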
Create a directory in your `$GOPATH`:
```bash
mkdir -p $GOPATH/src/github.com/ava-labs
```
In the `$GOPATH`, clone [AvalancheGo](https://github.com/ava-labs/avalanchego), the consensus engine and node implementation that is the core of the Avalanche Network.
```bash
cd $GOPATH/src/github.com/ava-labs
git clone https://github.com/ava-labs/avalanchego.git
```
From the `avalanchego` directory, run the build script:
```bash
cd $GOPATH/src/github.com/ava-labs/avalanchego
./scripts/build.sh
```
## Start the Node
To be able to make API calls to your node from other machines, include the argument `--http-host=` when starting the node.
For running a node on the Avalanche Mainnet:
```bash
cd $GOPATH/src/github.com/ava-labs/avalanchego
./build/avalanchego
```
For running a node on the Fuji Testnet:
```bash
cd $GOPATH/src/github.com/ava-labs/avalanchego
./build/avalanchego --network-id=fuji
```
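For example, to run on Fuji while also accepting API calls from other machines (a sketch; make sure port 9650 is reachable and appropriately firewalled before exposing it):
```bash
cd $GOPATH/src/github.com/ava-labs/avalanchego
./build/avalanchego --network-id=fuji --http-host=
```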
To kill the node, press `Ctrl + C`.
## Bootstrapping
A new node needs to catch up to the latest network state before it can participate in consensus and serve API calls. This process (called bootstrapping) currently takes several days for a new node connected to Mainnet, and a day or so for a new node connected to Fuji Testnet. When a given chain is done bootstrapping, it will print logs like this:
```bash
[09-09|17:01:45.295] INFO snowman/transitive.go:392 consensus starting {"lastAcceptedBlock": "2qaFwDJtmCCbMKP4jRpJwH8EFws82Q2yC1HhWgAiy3tGrpGFeb"}
[09-09|17:01:46.199] INFO snowman/transitive.go:392 consensus starting {"lastAcceptedBlock": "2ofmPJuWZbdroCPEMv6aHGvZ45oa8SBp2reEm9gNxvFjnfSGFP"}
[09-09|17:01:51.628] INFO snowman/transitive.go:334 consensus starting {"lenFrontier": 1}
```
### Check Bootstrapping Progress[](#check-bootstrapping-progress "Direct link to heading")
To check if a given chain is done bootstrapping, in another terminal window call [`info.isBootstrapped`](/docs/api-reference/info-api#infoisbootstrapped) by copying and pasting the following command:
```bash
curl -X POST --data '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "info.isBootstrapped",
    "params": {
        "chain": "X"
    }
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```
If this returns `true`, the chain is bootstrapped; otherwise, it returns `false`. If you make other API calls to a chain that is not done bootstrapping, it will return `API call rejected because chain is not done bootstrapping`. If you are still experiencing issues, please contact us on [Discord](https://chat.avalabs.org/).
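A bootstrapped chain produces a response along these lines (a representative example; the `id` simply echoes the request):
```json
{
    "jsonrpc": "2.0",
    "result": {
        "isBootstrapped": true
    },
    "id": 1
}
```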
The 3 chains will bootstrap in the following order: P-chain, X-chain, C-chain.
Learn more about bootstrapping [here](/docs/nodes/maintain/bootstrapping).
## RPC
When finished bootstrapping, the X, P, and C-Chain RPC endpoints will be:
```bash
localhost:9650/ext/bc/P
localhost:9650/ext/bc/X
localhost:9650/ext/bc/C/rpc
```
if run locally, or
```bash
XXX.XX.XX.XXX:9650/ext/bc/P
XXX.XX.XX.XXX:9650/ext/bc/X
XXX.XX.XX.XXX:9650/ext/bc/C/rpc
```
if run on a cloud provider. The "XXX.XX.XX.XXX" should be replaced with the public IP of your EC2 instance.
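As a quick sanity check once bootstrapping is complete, you can query the P-Chain height locally with the Platform API:
```bash
curl -X POST --data '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "platform.getHeight",
    "params": {}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```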
For more information on the requests available at these endpoints, please see the [AvalancheGo API Reference](/docs/api-reference/p-chain/api) documentation.
## Going Further
Your Avalanche node will perform consensus on its own, but it is not yet a validator on the network. This means that the rest of the network will not query your node when sampling the network during consensus. If you want to add your node as a validator, check out [Add a Validator](/docs/nodes/validate/node-validator) to take it a step further.
Also check out the [Maintain](/docs/nodes/maintain/bootstrapping) section to learn about how to maintain and customize your node to fit your needs.
To track an Avalanche L1 with your node, head to the [Avalanche L1 Node](/docs/nodes/run-a-node/avalanche-l1-nodes) tutorial.
# Using Pre-Built Binary
URL: /docs/nodes/run-a-node/using-binary
Learn how to run an Avalanche node from a pre-built binary program.
## Download Binary
To download a pre-built binary instead of building from source code, go to the official [AvalancheGo releases page](https://github.com/ava-labs/avalanchego/releases), and select the desired version.
Scroll down to the **Assets** section and select the appropriate file. Follow the rules below to find the right binary for your platform.
### For MacOS
Download the `avalanchego-macos-<VERSION>.zip` file and unzip it using the command below:
```bash
unzip avalanchego-macos-<VERSION>.zip
```
The resulting folder, `avalanchego-<VERSION>`, contains the binaries.
### Linux (PCs or Cloud Providers)
Download the `avalanchego-linux-amd64-<VERSION>.tar.gz` file and extract it using the command below:
```bash
tar -xvf avalanchego-linux-amd64-<VERSION>.tar.gz
```
The resulting folder, `avalanchego-<VERSION>-linux`, contains the binaries.
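On a headless Linux server you can fetch the archive directly with `wget`. The URL below follows GitHub's standard release layout; substitute the exact tag and asset name shown on the releases page:
```bash
wget https://github.com/ava-labs/avalanchego/releases/download/<VERSION>/avalanchego-linux-amd64-<VERSION>.tar.gz
tar -xvf avalanchego-linux-amd64-<VERSION>.tar.gz
```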
### Linux (Arm64)
Download the `avalanchego-linux-arm64-<VERSION>.tar.gz` file and extract it using the command below:
```bash
tar -xvf avalanchego-linux-arm64-<VERSION>.tar.gz
```
The resulting folder, `avalanchego-<VERSION>-linux`, contains the binaries.
## Start the Node
To be able to make API calls to your node from other machines, include the argument `--http-host=` when starting the node.
### MacOS
For running a node on the Avalanche Mainnet:
```bash
./avalanchego-<VERSION>/build/avalanchego
```
For running a node on the Fuji Testnet:
```bash
./avalanchego-<VERSION>/build/avalanchego --network-id=fuji
```
### Linux
For running a node on the Avalanche Mainnet:
```bash
./avalanchego-<VERSION>-linux/avalanchego
```
For running a node on the Fuji Testnet:
```bash
./avalanchego-<VERSION>-linux/avalanchego --network-id=fuji
```
## Bootstrapping
A new node needs to catch up to the latest network state before it can participate in consensus and serve API calls. This process (called bootstrapping) currently takes several days for a new node connected to Mainnet, and a day or so for a new node connected to Fuji Testnet. When a given chain is done bootstrapping, it will print logs like this:
```bash
[09-09|17:01:45.295] INFO snowman/transitive.go:392 consensus starting {"lastAcceptedBlock": "2qaFwDJtmCCbMKP4jRpJwH8EFws82Q2yC1HhWgAiy3tGrpGFeb"}
[09-09|17:01:46.199] INFO snowman/transitive.go:392 consensus starting {"lastAcceptedBlock": "2ofmPJuWZbdroCPEMv6aHGvZ45oa8SBp2reEm9gNxvFjnfSGFP"}
[09-09|17:01:51.628] INFO snowman/transitive.go:334 consensus starting {"lenFrontier": 1}
```
### Check Bootstrapping Progress[](#check-bootstrapping-progress "Direct link to heading")
To check if a given chain is done bootstrapping, in another terminal window call [`info.isBootstrapped`](/docs/api-reference/info-api#infoisbootstrapped) by copying and pasting the following command:
```bash
curl -X POST --data '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "info.isBootstrapped",
    "params": {
        "chain": "X"
    }
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```
If this returns `true`, the chain is bootstrapped; otherwise, it returns `false`. If you make other API calls to a chain that is not done bootstrapping, it will return `API call rejected because chain is not done bootstrapping`. If you are still experiencing issues, please contact us on [Discord](https://chat.avalabs.org/).
The 3 chains will bootstrap in the following order: P-chain, X-chain, C-chain.
Learn more about bootstrapping [here](/docs/nodes/maintain/bootstrapping).
## RPC
When finished bootstrapping, the X, P, and C-Chain RPC endpoints will be:
```bash
localhost:9650/ext/bc/P
localhost:9650/ext/bc/X
localhost:9650/ext/bc/C/rpc
```
if run locally, or
```bash
XXX.XX.XX.XXX:9650/ext/bc/P
XXX.XX.XX.XXX:9650/ext/bc/X
XXX.XX.XX.XXX:9650/ext/bc/C/rpc
```
if run on a cloud provider. The "XXX.XX.XX.XXX" should be replaced with the public IP of your EC2 instance.
For more information on the requests available at these endpoints, please see the [AvalancheGo API Reference](/docs/api-reference/p-chain/api) documentation.
## Going Further
Your Avalanche node will perform consensus on its own, but it is not yet a validator on the network. This means that the rest of the network will not query your node when sampling the network during consensus. If you want to add your node as a validator, check out [Add a Validator](/docs/nodes/validate/node-validator) to take it a step further.
Also check out the [Maintain](/docs/nodes/maintain/bootstrapping) section to learn about how to maintain and customize your node to fit your needs.
To track an Avalanche L1 with your node, head to the [Avalanche L1 Node](/docs/nodes/run-a-node/avalanche-l1-nodes) tutorial.
# Using Docker
URL: /docs/nodes/run-a-node/using-docker
Learn how to run an Avalanche node using Docker.
## Prerequisites
Before beginning, ensure that:
* Docker is installed on your system
* The Docker daemon is running on your machine
* You have cloned the [AvalancheGo repository](https://github.com/ava-labs/avalanchego)
* You have installed [GCC](https://gcc.gnu.org/) and [Go](https://go.dev/doc/install)
You can verify your Docker installation by running:
```bash
docker --version
```
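You can also confirm that the Docker daemon is running; `docker info` reports an error if it cannot reach the daemon:
```bash
docker info
```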
## Building the Docker Image
To build the Docker image for the latest `avalanchego` branch:
1. Navigate to the project directory
2. Execute the build script:
```bash
./scripts/build_image.sh
```
This script will create a Docker image containing the latest version of AvalancheGo.
## Verifying the Build
After the build completes, verify the image was created successfully:
```bash
docker image ls
```
You should see an image with:
* Repository: `avaplatform/avalanchego`
* Tag: `xxxxxxxx` (where `xxxxxxxx` is the shortened commit hash of the source code used for the build)
## Running AvalancheGo Node
To start an AvalancheGo node, run the following command:
```bash
docker run -ti -p 9650:9650 -p 9651:9651 avaplatform/avalanchego:xxxxxxxx /avalanchego/build/avalanchego
```
This command:
* Creates an interactive container (`-ti`)
* Maps the following ports:
  * `9650`: HTTP API port
  * `9651`: P2P networking port
* Uses the built AvalancheGo image
* Executes the AvalancheGo binary inside the container
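If you want the node's database and keys to survive container restarts, you can additionally mount a host directory and point AvalancheGo at it. This is a sketch; the host path, the in-container path, and the use of the `--data-dir` flag are assumptions to adapt to your setup:
```bash
# Persist node data on the host by mounting it into the container
docker run -ti -p 9650:9650 -p 9651:9651 \
  -v ~/.avalanchego:/root/.avalanchego \
  avaplatform/avalanchego:xxxxxxxx \
  /avalanchego/build/avalanchego --data-dir=/root/.avalanchego
```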
## Port Configuration
The default ports used by AvalancheGo are:
* `9650`: HTTP API
* `9651`: P2P networking
Ensure these ports are available on your host machine and not blocked by firewalls.
# Installing AvalancheGo
URL: /docs/nodes/using-install-script/installing-avalanche-go
Learn how to install AvalancheGo on your system.
## Running the Script
So, now that you've prepared your system and have the required info ready, let's get to it.
To download and run the script, enter the following in the terminal:
```bash
wget -nd -m https://raw.githubusercontent.com/ava-labs/avalanche-docs/master/scripts/avalanchego-installer.sh;\
chmod 755 avalanchego-installer.sh;\
./avalanchego-installer.sh
```
And we're off! The output should look something like this: