181 changes: 179 additions & 2 deletions api-reference/introduction.mdx
@@ -1,5 +1,182 @@
---
title: "API Introduction"
title: "API Hello World"
sidebarTitle: "Hello World"
---

Welcome to Vast.ai's API documentation. Our API allows you to programmatically manage GPU instances, handle machine operations, and automate your AI/ML workflow. Whether you're running individual GPU instances or managing a fleet of machines, our API provides comprehensive control over all Vast.ai platform features.
The Vast.ai REST API gives you programmatic control over GPU instances — useful for automation, CI/CD pipelines, or building your own tooling on top of Vast.

This guide walks through the complete instance lifecycle: authenticate, search for a GPU, rent it, wait for it to boot, connect to it, and clean up. By the end you'll understand the core API calls needed to manage instances without touching the web console.

## Prerequisites

- A Vast.ai account with credit (~$0.01–0.05, depending on test instance run time)
- `curl` installed

## 1. Get Your API Key

Generate an API key from the [Keys page](https://cloud.vast.ai/cli/keys/) by clicking **+New**. Copy the key — you'll need it for your API calls, and you'll only see it once.

Export it as an environment variable:

```bash
export VAST_API_KEY="your-api-key-here"
```

## 2. Verify Authentication

Confirm your key works by listing your current instances. If you have none, this returns an empty list.

```bash
curl -s -H "Authorization: Bearer $VAST_API_KEY" \
  "https://console.vast.ai/api/v0/instances/"
```

```json
{
  "instances_found": 0,
  "instances": []
}
```

<Note>
If you get a `401` or `403`, double-check your API key. If you already have instances, you'll see them listed here.
</Note>

## 3. Search for GPUs

Find available machines using the bundles endpoint. This query returns the top 5 on-demand RTX 4090 offers, sorted by benchmarked deep-learning performance per dollar:

```bash
curl -s -H "Authorization: Bearer $VAST_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "verified": {"eq": true},
    "rentable": {"eq": true},
    "gpu_name": {"eq": "RTX 4090"},
    "num_gpus": {"eq": 1},
    "direct_port_count": {"gte": 1},
    "order": [["dlperf_per_dphtotal", "desc"]],
    "type": "on-demand",
    "limit": 5
  }' \
  "https://console.vast.ai/api/v0/bundles/"
```

Each parameter in the query above controls a different filter:

| Parameter | Value | Meaning |
|-----------|-------|---------|
| `verified` | `{"eq": true}` | Only machines verified by Vast.ai (identity-checked hosts) |
| `rentable` | `{"eq": true}` | Only machines currently available to rent |
| `gpu_name` | `{"eq": "RTX 4090"}` | Filter to a specific GPU model |
| `num_gpus` | `{"eq": 1}` | Exactly 1 GPU per instance |
| `direct_port_count` | `{"gte": 1}` | At least 1 directly accessible port (needed for SSH) |
| `order` | `[["dlperf_per_dphtotal", "desc"]]` | Sort by deep learning performance per dollar, best value first |
| `type` | `"on-demand"` | On-demand pricing (vs. interruptible spot/bid) |
| `limit` | `5` | Return at most 5 results |

The response contains an `offers` array. Note the `id` of the offer you want — you'll use it in the next step. If no offers are returned, try relaxing your filters (e.g. a different GPU model or removing `direct_port_count`).

<Tip>
See the [Search Offers](/api-reference/search/search-offers) reference for the full list of filter parameters and operators.
</Tip>
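If you want to script the next step, you can pull the first offer's `id` out of the response with a few lines of shell. This is a sketch only: the file path and the trimmed-down sample response below are illustrative, and `python3` is used for JSON parsing since the prerequisites only assume `curl`.

```shell
# Sample search response (illustrative, trimmed to the fields used here).
# In a real run, pipe the curl output from step 3 into the parser instead.
cat > /tmp/offers.json <<'EOF'
{"offers": [{"id": 9876543, "gpu_name": "RTX 4090", "dph_total": 0.35}]}
EOF

# Offers are already sorted best-value-first, so take the first id.
OFFER_ID=$(python3 -c 'import json,sys; print(json.load(sys.stdin)["offers"][0]["id"])' < /tmp/offers.json)
echo "$OFFER_ID"   # prints 9876543
```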

## 4. Create an Instance

Rent the machine by sending a PUT request with your Docker image and disk size. Replace `OFFER_ID` with the `id` from step 3; `disk` is the size of the instance's disk in GB.

```bash
curl -s -H "Authorization: Bearer $VAST_API_KEY" \
  -H "Content-Type: application/json" \
  -X PUT \
  -d '{
    "image": "pytorch/pytorch:2.4.0-cuda12.4-cudnn9-runtime",
    "disk": 20,
    "onstart": "echo hello && nvidia-smi"
  }' \
  "https://console.vast.ai/api/v0/asks/OFFER_ID/"
```

```json
{
  "success": true,
  "new_contract": 12345678,
  "instance_api_key": "d15a..."
}
```

Save the `new_contract` value — this is your instance ID. The `instance_api_key` is a restricted key injected into the container as `CONTAINER_API_KEY` — it can only start, stop, or destroy that specific instance.
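In a script, you can capture the instance ID the same way. A minimal sketch, again parsing a saved sample response with `python3` (the file path and values are illustrative):

```shell
# Sample create response (shape from the example above, values illustrative).
cat > /tmp/create.json <<'EOF'
{"success": true, "new_contract": 12345678}
EOF

INSTANCE_ID=$(python3 -c 'import json,sys; print(json.load(sys.stdin)["new_contract"])' < /tmp/create.json)
echo "$INSTANCE_ID"   # prints 12345678
```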

## 5. Wait Until Ready

The instance needs time to pull the Docker image and boot. Poll the status endpoint until `actual_status` is `"running"`. Replace `INSTANCE_ID` with the `new_contract` value from step 4.

```bash
curl -s -H "Authorization: Bearer $VAST_API_KEY" \
  "https://console.vast.ai/api/v0/instances/INSTANCE_ID/"
```

Example response:

```json
{
  "instances": {
    "actual_status": "loading",
    "ssh_host": "...",
    "ssh_port": 12345
  }
}
```

The `actual_status` field progresses through these states:

| `actual_status` | Meaning |
|-----------------|---------|
| `null` | Instance is being provisioned |
| `"loading"` | Docker image is downloading |
| `"running"` | Ready to use |

Poll every 10 seconds. Boot time is typically 1–5 minutes, depending mainly on the Docker image size. Instead of polling, you can also have the `onstart` script send a callback when the instance is ready.

Once `actual_status` is `"running"`, you're ready to connect.
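The polling described above can be sketched as a small shell function. This is a sketch under the assumptions in this guide: `VAST_API_KEY` is exported, `python3` is available for JSON parsing, and the response shape matches the example in this step; the function name is our own.

```shell
# Hypothetical helper: poll every 10 seconds until actual_status is "running".
wait_until_running() {
  instance_id="$1"
  while true; do
    # Fetch status and pull out the actual_status field (null prints as "None").
    status=$(curl -s -H "Authorization: Bearer $VAST_API_KEY" \
      "https://console.vast.ai/api/v0/instances/${instance_id}/" |
      python3 -c 'import json,sys; print(json.load(sys.stdin)["instances"]["actual_status"])')
    if [ "$status" = "running" ]; then
      break
    fi
    echo "actual_status: ${status}, waiting..."
    sleep 10
  done
}

# Usage (replace with your new_contract value from step 4):
# wait_until_running 12345678
```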

## 6. Connect via SSH

Use the `ssh_host` and `ssh_port` from the status response to connect directly to your new instance:

```bash
ssh root@SSH_HOST -p SSH_PORT
```
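If you're scripting, the host and port can be parsed out of the status response rather than copied by hand. A sketch using an illustrative saved response (the hostname and port below are sample values, and `python3` handles the JSON parsing):

```shell
# Sample status response (shape from step 5; host/port values illustrative).
cat > /tmp/status.json <<'EOF'
{"instances": {"actual_status": "running", "ssh_host": "ssh4.vast.ai", "ssh_port": 12345}}
EOF

SSH_HOST=$(python3 -c 'import json,sys; print(json.load(sys.stdin)["instances"]["ssh_host"])' < /tmp/status.json)
SSH_PORT=$(python3 -c 'import json,sys; print(json.load(sys.stdin)["instances"]["ssh_port"])' < /tmp/status.json)
echo "ssh root@${SSH_HOST} -p ${SSH_PORT}"   # prints the command to run
```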

## 7. Clean Up

When you're done, destroy the instance to stop all billing.

Alternatively, to pause an instance temporarily instead of destroying it, you can **stop** it. Stopping halts compute billing but disk storage charges continue.

**Destroy** (removes everything):

```bash
curl -s -H "Authorization: Bearer $VAST_API_KEY" \
  -X DELETE \
  "https://console.vast.ai/api/v0/instances/INSTANCE_ID/"
```

**Stop** (pauses compute, disk charges continue):

```bash
curl -s -H "Authorization: Bearer $VAST_API_KEY" \
  -H "Content-Type: application/json" \
  -X PUT \
  -d '{"state": "stopped"}' \
  "https://console.vast.ai/api/v0/instances/INSTANCE_ID/"
```

Both return `{"success": true}`.

## Next Steps

You've now completed the full instance lifecycle through the API: authentication, search, creation, polling, and teardown. From here:

- **SSH setup** — See the [SSH guide](/documentation/instances/connect/ssh) for key configuration and advanced connection options.
- **Use templates** — Avoid repeating image and config parameters on every create call. The [Templates API guide](/api-reference/creating-and-using-templates-with-api) covers creating, sharing, and launching from templates.
4 changes: 4 additions & 0 deletions docs.json
@@ -733,6 +733,10 @@
"source": "/api",
"destination": "/api-reference/introduction"
},
{
"source": "/api-reference/hello-world",
"destination": "/api-reference/introduction"
},
{
"source": "/api/:slug*",
"destination": "/api-reference/:slug*"