4 changes: 0 additions & 4 deletions .github/workflows/per-pr.yml
@@ -3,9 +3,6 @@ name: CI
on:
pull_request:

group: "tests-${{ github.head_ref }}"
cancel-in-progress: true

jobs:
run_starlark:
runs-on: ubuntu-latest
@@ -26,7 +23,6 @@ jobs:
"./configs/just-network.json",
"./configs/one-chain.json",
"./configs/two-chains-with-bridge.json",
"./configs/hypersdk.json",
"./configs/fuji.json",
]
runs-on: ubuntu-latest
92 changes: 83 additions & 9 deletions README.md
@@ -26,25 +26,34 @@ Once the codespace is set up, run `chmod 777 ./scripts/setup-codespace.sh` followed

#### Configuration

<details>
<summary>Click to see configuration</summary>

You can configure this package using the JSON structure below. The default values for each parameter are shown.

// NOTE and TODO: flesh out the configurable params and document the available options

```javascript
{
"base-network-id": "1337",
// add more dicts to spin up more L1s
"chain-configs": [
{
// the name of the blockchain you want to have
"name": "myblockchain",

// the vm you want to use to start the chain
// currently the supported options are subnetevm and morpheusvm
// the default is subnetevm
"vm": "subnetevm",

// the network id for your chain
"network-id": 555555,

// whether or not you want to deploy teleporter contracts to your chain, defaults to true
"enable-teleporter": true,

// config to deploy an erc20 token bridge between two chains
"erc20-bridge-config": {
// name of the token you want to enable bridging for; by default, every subnetevm chain spun up by the package automatically deploys a token contract with the name TOK
"token-name": "TOK",

// the chain you want to enable bridging to
// must be the name of another chain within this config file
"destinations": ["mysecondblockchain"]
}
},
@@ -55,17 +64,48 @@ You can configure this package using the JSON structure below. The default value
"enable-teleporter": true
}
],
// number of nodes to start on the network
"num-nodes": 3,

"node-cfg": {
// network id that nodes use to know which network to connect to; by default this is 1337, which indicates a local avalanche network
"network-id": "1337",
"staking-enabled": false,
"health-check-frequency": "5s"
}
},

// if you are running inside a codespace, set this to the value of `echo $CODESPACE_NAME`.
// this is needed to make sure that networking for additional services like the blockscout explorer is proxied to the codespace correctly
"codespace": "verbose-couscous-q45vq44g552657q",

// a list of additional infrastructure services you can have the package spin up in the enclave
"additional-services": {
// starts a prometheus + grafana instance connected to one node on the network
// provides dashboards for monitoring the Avalanche node, including metrics on the primary network and all configured chains, resource usage, etc.
"observability": true,

// creates a tx spammer that spams transactions - note this only works for a subnetevm chain
// one spammer is created for each subnetevm chain spun up in the package
"tx-spammer": true,

// spins up the interchain token transfer frontend, a UI that allows you to bridge ERC-20 tokens from one chain to another
// note this only works for configs that deploy an erc20 token bridge and have at minimum two chains
"ictt-frontend": true,

// spins up a faucet configured for every chain spun up in the package
"faucet": true,


// spins up a blockscout explorer for each chain
"block-explorer": true
},

// cpu arch of the machine this package runs on
// this is only required when spinning up non-subnetevm chains; defaults to arm64
"cpu-arch": "arm64"
}
```
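
For contrast, a minimal configuration can lean almost entirely on defaults. A sketch (the chain name is arbitrary; this assumes every omitted field falls back to the defaults described above):

```json
{
  "chain-configs": [
    {
      "name": "myblockchain"
    }
  ]
}
```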

</details>

Use this package in your package
--------------------------------
Kurtosis packages can be composed inside other Kurtosis packages. To use this package in your package:
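
A minimal sketch of composing this package in your own Starlark (the import locator and argument shape here are assumptions; check the package's actual Kurtosis locator before using):

```python
# Starlark sketch; locator and args are assumptions, not the package's confirmed API
avalanche_package = import_module("github.com/ava-labs/avalanche-package/main.star")

def run(plan, args):
    # forward your own args (or a hard-coded config dict) into the package
    avalanche_package.run(plan, args)
```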
@@ -86,6 +126,40 @@ Develop on this package
1. Clone this repo
1. For your dev loop, run `kurtosis clean -a && kurtosis run .` inside the repo directory

Package breakdown
-----------------------
A short breakdown of each directory, for those who want to understand more about how this package is put together or want to contribute.

- `main.star`
  - Main entrypoint that contains the high-level logic, structure, and arg configuration for the package.
  - If you want a high-level view of the steps this package takes, start here.
- `builder/`
  - A simple docker container that contains the code necessary to configure avalanche networks.
  - Contains `genesis-generator-code`, which is used to configure the primary network, configs for nodes, etc.
  - Contains `subnet-creator-code`, which is used to create and configure local avalanche l1s. Performs all steps needed to create a POA avalanche l1, including creating the subnet and blockchain, converting to an l1, and initializing the validator manager contracts and validator set.
  - This builder service sticks around for the entirety of package setup; downstream starlark logic runs commands on the builder that invoke the golang code mentioned above.
  - If you are looking for the avalanche code that's used to configure the networks, start here.
- `node_launcher.star`
  - Contains starlark node configuration - data produced by the `builder` is mounted onto the nodes and then used to configure them.
  - Contains the logic for how nodes are configured to track new l1s created by the package.
- `l1/`
  - Contains starlark logic for configuring l1s. These steps execute commands on the `builder` container to start the l1 in three stages - `create`, `convert`, `initvalidatorset`.
- `relayer/`
  - Config to start the ICM relayer service used for relaying messages between avalanche l1s; this is used by the erc20 token bridge that can be spun up in this package.
- `faucet/`
  - Starts the avalanche faucet service.
- `block-explorer/`
  - Starts the blockscout explorer.
- `observablity/`
  - Configures prometheus and grafana.
- `bridge-frontend/`
  - Starts the interchain token transfer frontend.
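
The three-stage l1 bring-up mentioned above can be sketched as the following flow (function and module names here are illustrative assumptions, not the package's actual API; see `l1/` for the real logic):

```python
# illustrative Starlark-style sketch of the three-stage l1 bring-up;
# names are assumptions, the real entrypoints live in l1/
def launch_l1(plan, builder_service, chain_config):
    # stage 1: create the subnet and blockchain via the builder container
    subnet = create(plan, builder_service, chain_config)

    # stage 2: convert the subnet into a sovereign l1
    l1 = convert(plan, builder_service, subnet)

    # stage 3: initialize the validator manager contract and validator set
    initvalidatorset(plan, builder_service, l1)
    return l1
```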

Dependencies
-----------------------
Curious about which docker images and other Kurtosis packages this package depends on?

Run any `kurtosis run .` command with `--dependencies` appended (e.g. `kurtosis run . --args-file configs/two-chains-with-bridge.json --dependencies`) to get a list of dependencies.

<!-------------------------------- LINKS ------------------------------->
[install-kurtosis]: https://docs.kurtosis.com/install
3 changes: 3 additions & 0 deletions builder/builder.star
@@ -66,6 +66,9 @@ def generate_genesis(plan, network_id, num_nodes, vmName):
description="Creating config files for each node",
)

# the genesis data artifact is a directory containing information about the primary network and all the node information (ids, signer/staking keys, etc.)
# this artifact gets placed onto each node downstream, and each node is configured based on this data
# run `kurtosis files inspect <enclave name> generated-genesis-data` to view what's inside
genesis_data = plan.store_service_files(
service_name = BUILDER_SERVICE_NAME,
src = "/tmp/data",
File renamed without changes.
File renamed without changes.
21 changes: 21 additions & 0 deletions configs/hypersdk-amd64.json
Original file line number Diff line number Diff line change
@@ -0,0 +1,21 @@
{
"num-nodes": 2,
"node-cfg": {
"network-id": "1337",
"staking-enabled": false,
"health-check-frequency": "5s",
"log-level": "debug"
},
"chain-configs": [
{
"name": "aaronsblockchain",
"vm": "morpheusvm",
"network-id": 555555,
"enable-teleporter": true
}
],
"additional-services": {
"observability": true
},
"cpu-arch": "amd64"
}
4 changes: 2 additions & 2 deletions main.star
@@ -4,7 +4,7 @@ l1 = import_module("./l1/l1.star")
relayer = import_module("./relayer/relayer.star")
contract_deployer = import_module("./contract-deployment/contract-deployer.star")
bridge_frontend = import_module("./bridge-frontend/bridge-frontend.star")
proxy = import_module("./proxy/node-proxy.star")
codespace_proxy = import_module("./codespace-proxy/codespace-proxy.star")
utils = import_module("./utils.star")
constants = import_module("./constants.star")

@@ -112,7 +112,7 @@ def run(plan, args):
if additional_services.get("ictt-frontend", False) == True and len(l1_info) >= 2 and launch_relayer == True:
if codespace_name != "":
# when using codespace, a proxy needs to be launched to add cors headers for bridge frontend requests to work
proxy_port = proxy.launch_node_proxy(plan, node_info["node-0"]["rpc-url"])
proxy_port = codespace_proxy.launch_node_proxy(plan, node_info["node-0"]["rpc-url"])

for chain_name, chain in l1_info.items():
# update codespace endpoints to point to proxy instead
8 changes: 5 additions & 3 deletions node_launcher.star
@@ -8,6 +8,7 @@ BUILDER_SERVICE_NAME = "builder"
EXECUTABLE_PATH = "avalanchego"
ABS_PLUGIN_DIRPATH = "/avalanchego/build/plugins/"
ABS_DATA_DIRPATH = "/tmp/data/"
HYPERSDK_VM_PATH = "{0}/hypersdk".format(ABS_DATA_DIRPATH)
RPC_PORT_ID = "rpc"
RPC_PORT_NUM = 9650
PUBLIC_IP = "127.0.0.1"
Expand Down Expand Up @@ -64,7 +65,7 @@ def launch(
log_file_cmd = " && ".join(log_files_cmds)

node_files = {}
node_files["/tmp/data"] = genesis
node_files[ABS_DATA_DIRPATH] = genesis

entrypoint=[]
if network_id == constants.FUJI_NETWORK_ID:
@@ -74,7 +75,8 @@

if custom_vm_path:
vm_plugin = plan.upload_files(custom_vm_path)
node_files["/tmp/data/hypersdk"]= vm_plugin
# if a custom vm path is provided, assume it's the morpheusvm/hypersdk vm
node_files[HYPERSDK_VM_PATH] = vm_plugin

node_service_config = ServiceConfig(
image=image,
@@ -111,7 +113,7 @@ def launch(
)

if custom_vm_path:
cp(plan, node_name, "/tmp/data/hypersdk/" + vmId, ABS_PLUGIN_DIRPATH + vmId)
cp(plan, node_name, HYPERSDK_VM_PATH + vmId, ABS_PLUGIN_DIRPATH + vmId)
elif custom_vm_url:
download_to_path_and_untar(plan, node_name, custom_vm_url, ABS_PLUGIN_DIRPATH + vmId)
