Description
Is there an existing issue for this?
- I have searched the existing issues
What happened?
📌 Issue Overview
The get-embedding Edge Function is using an incorrect Gemini API endpoint and model name, causing all embedding generation requests to fail with a "model not found" error. This breaks the AI-powered semantic search and meeting summarization features that depend on embeddings.
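For context on the blast radius: the dependent features reach this function through supabase-js, so every failure here propagates downstream. A minimal sketch of a typical caller, assuming a supabase-js v2 client (the getEmbedding wrapper is hypothetical; only functions.invoke is the actual client API, and the URL/key are the same placeholders used below):
import { createClient } from "@supabase/supabase-js";

// Placeholder project URL and anon key, as in the curl examples below.
const supabase = createClient("https://your-project.supabase.co", "YOUR_ANON_KEY");

// Hypothetical caller used by semantic search / summarization features.
async function getEmbedding(text: string): Promise<number[]> {
  // functions.invoke is the supabase-js v2 call for Edge Functions;
  // any error here currently breaks every embedding-dependent feature.
  const { data, error } = await supabase.functions.invoke("get-embedding", {
    body: { text },
  });
  if (error) throw new Error(`get-embedding failed: ${error.message}`);
  return data.embedding as number[];
}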
🔍 Steps to Reproduce
- Deploy the current index.ts to Supabase
- Call the Edge Function with a test request:
curl -X POST "https://your-project.supabase.co/functions/v1/get-embedding" \
  -H "Authorization: Bearer YOUR_ANON_KEY" \
  -H "Content-Type: application/json" \
  -d '{"text": "Test meeting about AI"}' | jq
- Observe the error response
🎯 Expected Behavior
The function should successfully generate a 768-dimension embedding vector using the Gemini API and return:
{
"embedding": [0.123, -0.456, 0.789, ...]
}
🚨 Actual Behavior
The function fails with the following error:
{
"error": "Error generating embedding: models/embedding-001 is not found for API version v1, or is not supported for embedContent. Call ListModels to see the list of available models and their supported methods."
}
Root Cause:
- Current implementation uses /v1/models/embedding-001, which doesn't exist
- Gemini's embedding model is actually gemini-embedding-001 and requires the /v1beta API version
- Request body includes unnecessary model and taskType fields
📷 Screenshot
Error output:
Verified correct model via Gemini API:
$ curl "https://generativelanguage.googleapis.com/v1beta/models?key=API_KEY" | \
jq '.models[] | select(.name | contains("embedding"))'
{
"name": "models/gemini-embedding-001",
"supportedGenerationMethods": [
"embedContent",
"countTextTokens",
"countTokens",
"asyncBatchEmbedContent"
]
}
💡 Suggested Improvements
Fix Required Changes
Current (broken) code in index.ts:
const embeddingResponse = await fetch(
"https://generativelanguage.googleapis.com/v1/models/embedding-001:embedContent?key=" + GEMINI_API_KEY,
{
method: "POST",
headers: {
"Content-Type": "application/json",
},
body: JSON.stringify({
model: "embedding-001", // ❌ Incorrect
content: {
parts: [{ text: text }]
},
taskType: "RETRIEVAL_QUERY" // ❌ Unnecessary
}),
}
);
Proposed fix:
const embeddingResponse = await fetch(
"https://generativelanguage.googleapis.com/v1beta/models/gemini-embedding-001:embedContent?key=" + GEMINI_API_KEY,
{
method: "POST",
headers: {
"Content-Type": "application/json",
},
body: JSON.stringify({
content: {
parts: [{ text: text }]
}
}),
}
);
Changes Summary:
- ✅ Update API version: /v1 → /v1beta
- ✅ Update model name: embedding-001 → gemini-embedding-001
- ✅ Remove model field from request body (redundant - already in URL)
- ✅ Remove taskType field (not required by v1beta API)
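Beyond the endpoint and body changes, the response handling around the fixed fetch could also be hardened so failures surface clearly. A rough sketch of the lines that would follow the corrected fetch inside the handler (the vector sits under embedding.values in the embedContent response; the error-body handling shown here is a suggestion, not the current code):
// Sketch only: continues from the corrected embeddingResponse above.
if (!embeddingResponse.ok) {
  // Surface the Gemini error body instead of a generic failure.
  const errText = await embeddingResponse.text();
  return new Response(
    JSON.stringify({ error: `Error generating embedding: ${errText}` }),
    { status: 500, headers: { "Content-Type": "application/json" } },
  );
}

const result = await embeddingResponse.json();
// embedContent returns the vector under embedding.values.
const embedding: number[] = result?.embedding?.values ?? [];

return new Response(JSON.stringify({ embedding }), {
  headers: { "Content-Type": "application/json" },
});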
Verification
After applying the fix, the function works correctly:
$ curl -X POST "https://your-project.supabase.co/functions/v1/get-embedding" \
-H "Authorization: Bearer YOUR_ANON_KEY" \
-H "Content-Type: application/json" \
-d '{"text": "Test meeting about AI"}' | jq
{
"embedding": [0.034567, -0.012345, 0.056789, ..., 0.023456] // 768 dimensions
}
Impact: This bug prevents all AI-powered features (semantic search, meeting insights, similarity matching) from functioning. Fixing it is critical for the application to work as intended.
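To keep this from regressing, the deployed function could be covered by a small check on the returned vector. A sketch using Deno's standard assert module (FUNCTION_URL and ANON_KEY are placeholder env vars; 768 is the dimension cited in this issue):
// deno test --allow-net --allow-env get_embedding_test.ts
import { assertEquals } from "https://deno.land/std@0.224.0/assert/mod.ts";

Deno.test("get-embedding returns a 768-dimension vector", async () => {
  const res = await fetch(Deno.env.get("FUNCTION_URL")!, {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${Deno.env.get("ANON_KEY")!}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ text: "Test meeting about AI" }),
  });
  const { embedding } = await res.json();
  assertEquals(embedding.length, 768);
});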
Record
- I agree to follow this project's Code of Conduct
- I want to work on this issue
