A simple open-source AI agent that can generate and deploy an app in seconds.
- Next.js 15
- Vercel AI SDK 4.1.54:
  - OpenAI `gpt-4o-mini` for code generation (add your own model)
  - Code streaming (based on the latest `toolCallStreaming`)
- Fast and secure code interpretation via E2B SDK by E2B
- A set of pre-built tools for your agent to use (add your own)
- Monaco editor for multi-file code preview
You will need git, Node.js 16+, and pnpm.
Clone the repository:

```bash
git clone git@github.com:mkrl/e0.git
```

Obtain API keys from OpenAI and E2B Dashboard.

Set the required environment variables as shown in the `.env.example` file, but in a new file called `.env`.
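With the default OpenAI and E2B providers, a minimal `.env` will typically look something like the sketch below; the variable names shown are the SDKs' conventional defaults, so double-check them against `.env.example`:

```bash
# Read by the AI SDK's OpenAI provider (default key name)
OPENAI_API_KEY=sk-...
# Read by the E2B SDK (default key name)
E2B_API_KEY=...
```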
Install dependencies:

```bash
pnpm install
```

Launch the development server:

```bash
pnpm run dev
```

Thanks to the Vercel AI SDK, you can hot-swap the underlying model for code generation.
You can pick just about anything that is capable of "Tool Usage" here.
- Obtain an API key (if your provider is different from the default OpenAI one)
- Set an environment variable for your new model in the `.env` file
- Go to `app/api/chat/route.ts` and change the `model` variable to your new model, as sketched below
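As a rough illustration (the real file may be organized differently), the change is a single line thanks to the AI SDK provider packages; any model with tool-calling support works:

```ts
// app/api/chat/route.ts -- illustrative sketch, not the exact file contents
import { openai } from '@ai-sdk/openai';
// import { anthropic } from '@ai-sdk/anthropic'; // example alternative provider

// Default: OpenAI gpt-4o-mini (the provider reads OPENAI_API_KEY from .env)
const model = openai('gpt-4o-mini');

// Swap in any tool-capable model instead, e.g.:
// const model = anthropic('claude-3-5-sonnet-latest'); // reads ANTHROPIC_API_KEY
```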
The chatbot follows an agentic approach: an LLM is given a set of tools and instructions on how to operate on a remote server.
The following tools are available (the first one is sketched just after this list):
- Create a sandbox server
- Generate and deploy a file at a particular path
- Install a package
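To make the first item concrete, here is a minimal sketch of what a create-sandbox tool can look like, assuming the E2B JS SDK v1 and the AI SDK's `tool` helper (names and file layout are illustrative, not copied from the repository):

```ts
// tools/createSandbox.ts -- hypothetical sketch
import { tool } from 'ai';
import { z } from 'zod';
import { Sandbox } from 'e2b';

export const createSandbox = tool({
  description: 'Create a fresh E2B sandbox and return its ID for later tool calls.',
  parameters: z.object({}), // this tool takes no arguments
  execute: async () => {
    const sandbox = await Sandbox.create();
    // The model passes this ID to the deploy and install tools that follow
    return { sandboxId: sandbox.sandboxId };
  },
});
```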
You can find the tools in the `tools` directory; they are connected to the `streamText` call in `app/api/chat/route.ts` (sketched below).
All tool calls are streamed to the client to deliver fast intermediate results.
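In outline, and with a hypothetical barrel import standing in for the individual tool imports, the wiring looks roughly like this:

```ts
// Simplified sketch of app/api/chat/route.ts -- not the actual source
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { tools } from '@/tools'; // hypothetical barrel export: createSandbox, deployFile, installPackage, ...

const SYSTEM_PROMPT =
  'You are a coding agent. Create a sandbox, then generate and deploy files into it.';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o-mini'),
    system: SYSTEM_PROMPT,    // the full prompt lives in the route file
    messages,
    tools,                    // the pre-built tools from the tools directory
    toolCallStreaming: true,  // stream partial tool calls to the client
    maxSteps: 10,             // let the agent chain several tool calls per turn
  });

  return result.toDataStreamResponse();
}
```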
- Think of a good idea! Generate an image and deploy it to the sandbox? Check the weather in NY to make sure it's not raining when generating code?
- Create a new file in the `tools` directory by copying the contents of any other tool file
- Give your tool a good description and a set of arguments. If you want to execute something in a sandbox, be sure to include the `sandboxId` parameter (see the sketch after this list)
- Add your tool to the `tools` object in `app/api/chat/route.ts`
- Add your tool name to `types/tools.ts`
- Give it some visual appearance in the chat by extending `components/ui/ToolCall.tsx`. You can add a custom icon and messages for all the possible states of the tool call.
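For example, a tool that runs a shell command inside an existing sandbox might look like the following sketch (the file name, tool name, and the `Sandbox.connect` / `commands.run` calls assume the E2B JS SDK v1 and are not copied from the repository):

```ts
// tools/runCommand.ts -- hypothetical example tool
import { tool } from 'ai';
import { z } from 'zod';
import { Sandbox } from 'e2b';

export const runCommand = tool({
  description:
    'Run a shell command inside an existing sandbox and return its output. ' +
    'Only use this after a sandbox has been created.',
  parameters: z.object({
    sandboxId: z.string().describe('ID of the sandbox to run the command in'),
    command: z.string().describe('Shell command to execute, e.g. "ls -la"'),
  }),
  execute: async ({ sandboxId, command }) => {
    // Re-attach to the sandbox created earlier by the create-sandbox tool
    const sandbox = await Sandbox.connect(sandboxId);
    const result = await sandbox.commands.run(command);
    return { stdout: result.stdout, stderr: result.stderr, exitCode: result.exitCode };
  },
});
```

The `describe()` annotations double as documentation the model sees when deciding which arguments to pass.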
You can find the full system prompt in `app/api/chat/route.ts`.
When adding a new tool, it is sometimes a good idea to give the LLM instructions on when to use it. For example, if you only want to generate code when it is not raining in NY, you may want to specify this in the system prompt.
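Purely as an illustration (the `checkWeather` tool here is hypothetical), the relevant addition to the system prompt might read:

```ts
// Hypothetical excerpt -- the real system prompt lives in app/api/chat/route.ts
const SYSTEM_PROMPT = `
You are a coding agent that generates and deploys small web apps into a sandbox.
Always create a sandbox before deploying files or installing packages.
Before generating any code, call the checkWeather tool for New York:
if it is currently raining there, do not generate code and explain why.
`;
```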
