# plexe

Build machine learning models using natural language.
Quickstart | Features | Installation | Documentation | Benchmarks
plexe lets you create machine learning models by describing them in plain language. Simply explain what you want, and the AI-powered system builds a fully functional model through an automated agentic approach. Also available as a managed cloud service.
*(Demo video: demo.mp4)*
## Quickstart

```bash
pip install plexe
```

Launch the interactive chat interface to build models through conversational AI:

```bash
# Start the chat interface
plexe
```

This opens a Gradio UI where you can describe your model, upload datasets, and get explanations throughout the process.
You can also build models directly from Python:

```python
import plexe

# Define the model
model = plexe.Model(
    intent="Predict sentiment from news articles",
    input_schema={"headline": str, "content": str},
    output_schema={"sentiment": str}
)

# Build and train the model
model.build(
    datasets=[your_dataset],
    provider="openai/gpt-4o-mini",
    max_iterations=10
)

# Use the model
prediction = model.predict({
    "headline": "New breakthrough in renewable energy",
    "content": "Scientists announced a major advancement..."
})

# Save for later use
plexe.save_model(model, "sentiment-model")
loaded_model = plexe.load_model("sentiment-model.tar.gz")
```

## Features

### Natural language model definition

Define models using plain English descriptions:
```python
model = plexe.Model(
    intent="Predict housing prices based on features like size, location, etc.",
    input_schema={"square_feet": int, "bedrooms": int, "location": str},
    output_schema={"price": float}
)
```

### Multi-agent architecture

The system uses a team of specialized AI agents to:
- Analyze your requirements and data
- Plan the optimal model solution
- Generate and improve model code
- Test and evaluate performance
- Package the model for deployment
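The iterative search these agents perform can be pictured as a generate-evaluate-keep-best loop. The sketch below is purely illustrative: the function names and simulated scoring are invented for this example and are not plexe's internals.

```python
# Illustrative sketch of the generate-evaluate-keep-best loop described
# above. The agent call and its score are simulated stand-ins, NOT
# plexe's actual implementation.
import random

def propose_solution(iteration):
    # Stand-in for the planning/code-generation agents: returns one
    # candidate solution with a simulated validation score.
    rng = random.Random(iteration)
    return {"id": iteration, "score": rng.random()}

def build(max_iterations=10):
    best = None
    for i in range(max_iterations):
        candidate = propose_solution(i)   # plan, generate, and test one solution
        if best is None or candidate["score"] > best["score"]:
            best = candidate              # keep the best-performing candidate
    return best

best = build(max_iterations=5)
```

Each iteration explores one candidate solution, which is why `max_iterations` in `model.build()` bounds how many solutions are tried.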
### Automated model building

Build complete models with a single method call:
```python
model.build(
    datasets=[dataset],
    provider="openai/gpt-4o-mini",  # LLM provider
    max_iterations=10,              # Max solutions to explore
    timeout=1800                    # Optional time limit in seconds
)
```

### Data generation and schema inference

Generate synthetic data or infer schemas automatically:
```python
# Generate synthetic data
dataset = plexe.DatasetGenerator(
    schema={"features": str, "target": int}
)
dataset.generate(500)  # Generate 500 samples
```
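Schemas throughout are plain `{field: type}` dicts, so records can be checked against them with ordinary Python. The helper below is hypothetical, written only to illustrate the schema format; it is not part of plexe's API.

```python
# Hypothetical validator for the {field: type} schema style shown above.
# Not part of plexe's API; included only to illustrate the schema format.
def validate_record(record, schema):
    for field, expected_type in schema.items():
        if field not in record:
            raise ValueError(f"missing field: {field!r}")
        if not isinstance(record[field], expected_type):
            raise TypeError(f"{field!r} should be {expected_type.__name__}")
    return record

validate_record(
    {"headline": "Rates fall", "content": "Markets rallied..."},
    {"headline": str, "content": str},
)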
```python
# Infer schema from intent
model = plexe.Model(intent="Predict customer churn based on usage patterns")
model.build(provider="openai/gpt-4o-mini")  # Schema inferred automatically
```

### Multi-provider support

Use your preferred LLM provider:
```python
model.build(provider="openai/gpt-4o-mini")       # OpenAI
model.build(provider="anthropic/claude-3-opus")  # Anthropic
model.build(provider="google/gemini-1.5-pro")    # Google
```

See LiteLLM providers for available providers.
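These identifiers follow LiteLLM's `<provider>/<model>` convention: everything before the first slash names the provider, the rest names the model. A trivial illustration (the variable names are mine):

```python
# Provider strings use the "<provider>/<model>" convention from LiteLLM.
provider_id = "openai/gpt-4o-mini"
provider, model_name = provider_id.split("/", 1)
print(provider)    # → openai
print(model_name)  # → gpt-4o-mini
```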
## Installation

```bash
pip install plexe               # Standard installation
pip install plexe[lightweight]  # Minimal dependencies
pip install plexe[all]          # With deep learning support
```

Set your preferred provider's API key:
```bash
export OPENAI_API_KEY=<your-key>
export ANTHROPIC_API_KEY=<your-key>
export GEMINI_API_KEY=<your-key>
```

See LiteLLM providers for environment variable names.
## Documentation

For full documentation, visit docs.plexe.ai.
## Benchmarks

plexe was evaluated on 20 OpenML benchmarks and 12 Kaggle competitions, showing higher performance on 12 of the 20 OpenML datasets. Full results are available at plexe-ai/plexe-results.
## Platform deployment

Deploy as a platform with an API and web UI:
```bash
git clone https://github.com/plexe-ai/plexe.git
cd plexe/docker
cp .env.example .env  # Add your API key
docker-compose up -d
```

Access at:
- API: http://localhost:8000
- Web UI: http://localhost:8501
## Contributing

See CONTRIBUTING.md for guidelines. Join our Discord to connect with the team.