9 changes: 9 additions & 0 deletions README.md
@@ -38,6 +38,15 @@ ENDPOINT=http://localhost:3000/graphql ALLOW_MUTATIONS=true npx mcp-graphql
ENDPOINT=http://localhost:3000/graphql SCHEMA=./schema.graphql npx mcp-graphql
```



## Running evals

The evals package loads an MCP client that then runs the `index.ts` file, so there is no need to rebuild between tests. You can load environment variables by prefixing the `npx` command. Full documentation can be found [here](https://www.mcpevals.io/docs).

```bash
OPENAI_API_KEY=your-key npx mcp-eval src/evals/evals.ts src/index.ts
```

## Resources

- **graphql-schema**: The server exposes the GraphQL schema as a resource that clients can access. This is either the local schema file or based on an introspection query.
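
For context on how a client would consume this resource, the sketch below uses the MCP TypeScript SDK (already a dependency of this package) to launch the server over stdio and read the schema. The client name, endpoint value, and discovery-by-name logic are illustrative assumptions, not something defined in this repository.

```typescript
// Sketch: read the graphql-schema resource with the MCP TypeScript SDK.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import {
  StdioClientTransport,
  getDefaultEnvironment,
} from "@modelcontextprotocol/sdk/client/stdio.js";

async function readSchemaResource() {
  // Launch the server over stdio; the ENDPOINT value here is an assumed local endpoint.
  const transport = new StdioClientTransport({
    command: "npx",
    args: ["mcp-graphql"],
    // Spread the default environment so PATH etc. remain available to npx.
    env: { ...getDefaultEnvironment(), ENDPOINT: "http://localhost:3000/graphql" },
  });

  const client = new Client({ name: "example-client", version: "0.0.1" }, { capabilities: {} });
  await client.connect(transport);

  // Discover the schema resource rather than hard-coding its URI.
  const { resources } = await client.listResources();
  const schema = resources.find((r) => r.name === "graphql-schema") ?? resources[0];
  const contents = await client.readResource({ uri: schema.uri });
  console.log(contents);

  await client.close();
}

readSchemaResource().catch(console.error);
```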
1 change: 1 addition & 0 deletions package.json
@@ -21,6 +21,7 @@
"dependencies": {
"@modelcontextprotocol/sdk": "1.6.1",
"graphql": "^16.10.0",
"mcp-evals": "^1.0.18",
"yargs": "17.7.2",
"zod": "3.24.2",
"zod-to-json-schema": "3.24.3"
32 changes: 32 additions & 0 deletions src/evals/evals.ts
@@ -0,0 +1,32 @@
// evals.ts — eval definitions and config consumed by the mcp-evals runner

import { openai } from "@ai-sdk/openai";
import { EvalConfig, EvalFunction, grade } from "mcp-evals";

const introspectSchemaEval: EvalFunction = {
name: 'introspect-schema Evaluation',
description: 'Evaluates the introspect-schema tool functionality',
run: async () => {
const result = await grade(openai("gpt-4"), "Please introspect the schema for the GraphQL endpoint and return the type definitions.");
return JSON.parse(result);
}
};

const queryGraphqlEval: EvalFunction = {
name: 'query-graphql Tool Evaluation',
description: 'Evaluates the query-graphql tool functionality',
run: async () => {
const result = await grade(openai("gpt-4"), "Please query the GraphQL endpoint for a list of items, then attempt a mutation to add a new item.");
return JSON.parse(result);
}
};

const config: EvalConfig = {
model: openai("gpt-4"),
evals: [introspectSchemaEval, queryGraphqlEval]
};

export default config;

export const evals = [introspectSchemaEval, queryGraphqlEval];
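
Because each eval parses the grader's string output with `JSON.parse`, a small helper can make malformed output easier to diagnose. A minimal sketch, assuming only that the result is a JSON object (the exact graded shape is not specified in this diff); `parseGrade` is a hypothetical name, and `zod` is already listed in the dependencies above:

```typescript
import { z } from "zod";

// Accept any JSON object; the precise shape of a graded result is not assumed here.
const GradedResult = z.record(z.unknown());

// Hypothetical helper: parse grader output and fail with a readable message on bad JSON.
export function parseGrade(raw: string): Record<string, unknown> {
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    throw new Error(`Grader returned non-JSON output: ${raw.slice(0, 200)}`);
  }
  return GradedResult.parse(parsed);
}
```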