plexe ✨

Build machine learning models using natural language.

Quickstart | Features | Installation | Documentation | Benchmarks


plexe lets you create machine learning models by describing them in plain language. Simply explain what you want, and the AI-powered system builds a fully functional model through an automated agentic approach. Also available as a managed cloud service.

Demo video: demo.mp4

1. Quickstart

Installation

pip install plexe

Two Ways to Use plexe

A. Interactive Chat Interface

Launch the interactive chat interface to build models through conversational AI:

# Start the chat interface
plexe

This opens a Gradio UI where you can describe your model, upload datasets, and get explanations throughout the process.

B. Library API

import plexe

# Define the model
model = plexe.Model(
    intent="Predict sentiment from news articles",
    input_schema={"headline": str, "content": str},
    output_schema={"sentiment": str}
)

# Build and train the model
model.build(
    datasets=[your_dataset],
    provider="openai/gpt-4o-mini",
    max_iterations=10
)

# Use the model
prediction = model.predict({
    "headline": "New breakthrough in renewable energy",
    "content": "Scientists announced a major advancement..."
})

# Save for later use
plexe.save_model(model, "sentiment-model")
loaded_model = plexe.load_model("sentiment-model.tar.gz")
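
A model restored with plexe.load_model can be used for inference just like the original; for example, with the sentiment model defined above:

# Predict with the reloaded model
prediction = loaded_model.predict({
    "headline": "Markets rally after strong earnings reports",
    "content": "Stocks climbed on Tuesday as several companies reported..."
})
print(prediction)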

2. Features

2.1. 💬 Natural Language Model Definition

Define models using plain English descriptions:

model = plexe.Model(
    intent="Predict housing prices based on features like size, location, etc.",
    input_schema={"square_feet": int, "bedrooms": int, "location": str},
    output_schema={"price": float}
)

2.2. 🤖 Multi-Agent Architecture

The system uses a team of specialized AI agents to:

  • Analyze your requirements and data
  • Plan the optimal model solution
  • Generate and improve model code
  • Test and evaluate performance
  • Package the model for deployment
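
Conceptually, these stages form a pipeline from intent to packaged model. The sketch below is purely illustrative and does not reflect plexe's actual internal classes or interfaces:

# Illustrative sketch only -- not plexe's real internals.
# Each step stands in for one of the specialized agents listed above.
def build_model(intent: str, datasets: list) -> dict:
    requirements = {"intent": intent, "n_datasets": len(datasets)}  # analyze requirements and data
    plan = {"approach": "baseline classifier", **requirements}      # plan a candidate solution
    code = f"# generated training code for: {plan['approach']}"     # generate and refine model code
    score = 0.0                                                     # test and evaluate performance
    return {"plan": plan, "code": code, "score": score}             # package artifacts for deployment

print(build_model("Predict customer churn", datasets=[]))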

2.3. 🎯 Automated Model Building

Build complete models with a single method call:

model.build(
    datasets=[dataset],
    provider="openai/gpt-4o-mini",  # LLM provider
    max_iterations=10,              # Max solutions to explore
    timeout=1800                    # Optional time limit in seconds
)

2.4. 🎲 Data Generation & Schema Inference

Generate synthetic data or infer schemas automatically:

# Generate synthetic data
dataset = plexe.DatasetGenerator(
    schema={"features": str, "target": int}
)
dataset.generate(500)  # Generate 500 samples

# Infer schema from intent
model = plexe.Model(intent="Predict customer churn based on usage patterns")
model.build(provider="openai/gpt-4o-mini")  # Schema inferred automatically
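
The generated dataset can then be passed to model.build like any other dataset. A minimal sketch, assuming the DatasetGenerator instance above is accepted directly in the datasets list (as in the quickstart example):

# Train a model on the synthetic dataset generated above
model = plexe.Model(
    intent="Predict the target value from the features",
    input_schema={"features": str},
    output_schema={"target": int}
)
model.build(
    datasets=[dataset],             # DatasetGenerator from above
    provider="openai/gpt-4o-mini",
    max_iterations=5
)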

2.5. 🌐 Multi-Provider Support

Use your preferred LLM provider:

model.build(provider="openai/gpt-4o-mini")    # OpenAI
model.build(provider="anthropic/claude-3-opus")  # Anthropic
model.build(provider="google/gemini-1.5-pro")    # Google

See the LiteLLM providers list for all supported providers.

3. Installation

3.1. Installation Options

pip install plexe                  # Standard installation
pip install plexe[lightweight]     # Minimal dependencies
pip install plexe[all]             # With deep learning support

3.2. API Keys

# Set your preferred provider's API key
export OPENAI_API_KEY=<your-key>
export ANTHROPIC_API_KEY=<your-key>
export GEMINI_API_KEY=<your-key>

See the LiteLLM providers list for each provider's environment variable name.
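
Because providers read these keys from the environment, they can also be set from Python before calling model.build. A minimal sketch, assuming an OpenAI key:

import os

# Equivalent to exporting the variable in your shell
os.environ["OPENAI_API_KEY"] = "<your-key>"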

4. Documentation

For full documentation, visit docs.plexe.ai.

5. Benchmarks

plexe was evaluated on 20 OpenML benchmark datasets and 12 Kaggle competitions, achieving higher performance on 12 of the 20 OpenML datasets. Full results are available at plexe-ai/plexe-results.

6. Docker Deployment

Deploy as a platform with API and web UI:

git clone https://github.com/plexe-ai/plexe.git
cd plexe/docker
cp .env.example .env  # Add your API key
docker-compose up -d

Access the API and web UI at the local addresses configured in docker-compose.

7. Contributing

See CONTRIBUTING.md for guidelines. Join our Discord to connect with the team.

8. License

Apache-2.0 License
