Releases: thinkall/featcopilot

v0.3.6

12 Feb 13:55

What's Changed (since v0.3.0)

Features

  • OpenAI SDK backend: Added direct OpenAI SDK support (openai_client.py) with AsyncOpenAI/AsyncAzureOpenAI clients (d3ea4e8)
  • Microsoft Fabric support: Auto-detect Fabric platform and use synapse.ml.fabric.credentials for authentication (95f115a)
  • Azure OpenAI support: Detect Azure endpoints via api_version, env vars, or URL and use AsyncAzureOpenAI with proper config (f41cb0b); see the client-selection sketch after this list
  • GitHub Copilot as default backend: Changed default LLM backend from openai to copilot across all components (3c263b6)
  • Demo video generator: Combined video pipeline with HTML slides + notebook walkthrough + narrations (1bf5d1e)
  • Backend parameter: Added backend parameter to FeatureExplainer and FeatureCodeGenerator (defb252)
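
A minimal sketch of the kind of client selection described above, using the openai Python SDK. The helper name, env-var names, and detection rules below are illustrative assumptions, not the exact code in openai_client.py.

```python
# Illustrative sketch only: build_async_client and the detection heuristics are assumptions.
import os
from openai import AsyncOpenAI, AsyncAzureOpenAI

def build_async_client(api_key=None, api_base=None, api_version=None):
    """Return an Azure client when an Azure endpoint is detected, else a plain OpenAI client."""
    api_key = api_key or os.getenv("OPENAI_API_KEY") or os.getenv("AZURE_OPENAI_API_KEY")
    api_base = api_base or os.getenv("OPENAI_API_BASE") or os.getenv("AZURE_OPENAI_ENDPOINT")

    looks_azure = bool(
        api_version                                   # an explicit api_version implies Azure
        or os.getenv("AZURE_OPENAI_ENDPOINT")         # Azure-specific env var
        or (api_base and "azure.com" in api_base)     # Azure-style endpoint URL
    )
    if looks_azure:
        return AsyncAzureOpenAI(
            api_key=api_key,
            azure_endpoint=api_base,
            api_version=api_version or "2024-10-21",  # mirrors the Fabric default in the fixes below
        )
    return AsyncOpenAI(api_key=api_key, base_url=api_base)
```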

Bug Fixes

  • Fabric credential auth: Use get_openai_httpx_async_client() to avoid 401 errors caused by a placeholder API key on Fabric (95f115a)
  • Azure api_version: Set default api_version=2024-10-21 for Fabric Azure client (bf2fe04)
  • Duplicate logging: Prevent duplicate log output by disabling propagation to root logger (c97f11a)
  • Backend propagation: Pass backend, api_key, api_base, api_version from llm_config through AutoFeatureEngineer to SemanticEngine (6ba724e)
  • Token parameter: Use max_completion_tokens instead of the deprecated max_tokens for newer models (9c56697); see the sketch after this list
  • sklearn compatibility: Fix sklearn error in AutoFeatureEngineer (c0b0145)
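
A minimal sketch of the token-parameter fix, assuming a chat-completions call through the openai SDK; the model-name heuristic for "newer models" is an illustrative assumption.

```python
# Illustrative sketch: choose the token-limit parameter by model family.
# The startswith() check is an assumed heuristic; newer reasoning models reject max_tokens.
async def complete(client, model: str, messages: list[dict], limit: int = 1024):
    params = {"model": model, "messages": messages}
    if model.startswith(("o1", "o3")):            # assumed "newer model" check
        params["max_completion_tokens"] = limit
    else:
        params["max_tokens"] = limit
    return await client.chat.completions.create(**params)
```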

Chores

  • Bump version to 0.3.6
  • Update .gitignore for generated video files (eed8420)
  • Update gitignore (12e3269)

Full Changelog: v0.3.0...v0.3.6

v0.3.0

05 Feb 13:58

Highlights

  • Expanded benchmark coverage with multi-framework support (FLAML, H2O, AutoGluon) and feature caching.
  • Added a unified category-based dataset API and reusable natural-language transform rules.
  • Refreshed benchmark docs and examples with corrected metrics and new results.

Benchmarks & datasets

  • Added simple_models LLM benchmark results (52 datasets) and expanded engine coverage.
  • Included forecasting datasets and standardized report naming/formatting.
  • Shared feature cache across benchmarks with consistent cache versioning (see the sketch after this list).
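
A minimal sketch of what a versioned, shared feature cache key could look like; the CACHE_VERSION constant, path layout, and key fields are illustrative assumptions, not the benchmark suite's actual implementation.

```python
# Illustrative only: derive a stable cache key from dataset + engine config + cache version,
# so all benchmark runs reuse the same cached features until the version is bumped.
import hashlib
import json
from pathlib import Path

CACHE_VERSION = "v1"                     # assumed constant; bump to invalidate cached features
CACHE_DIR = Path(".feature_cache")

def feature_cache_path(dataset: str, engine: str, config: dict) -> Path:
    payload = json.dumps(
        {"dataset": dataset, "engine": engine, "config": config, "version": CACHE_VERSION},
        sort_keys=True,
    )
    key = hashlib.sha256(payload.encode()).hexdigest()[:16]
    return CACHE_DIR / f"{dataset}_{engine}_{key}.parquet"
```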

Fixes & maintenance

  • Hardened benchmark tool handling and fixed cache key issues.
  • Updated benchmark dependencies (numpy pin, autogluon).
  • Removed the FLAML 90s benchmark.

Docs & examples

  • Added FLAML Spotify example.
  • Updated benchmark highlights in docs.

Full Changelog: v0.2.0...v0.3.0

v0.2.0

31 Jan 12:42

New features in this release:

  • LiteLLM integration supporting 100+ LLM providers
  • Feast feature store integration for production deployment
  • Comprehensive demo notebook with visualizations
  • HTML presentation slides for demos
  • Improved async handling for Jupyter notebooks (see the sketch after this list)
  • Enhanced documentation with e2e example
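
A minimal sketch of running an async LLM call both from a script and from a Jupyter notebook, combined with a LiteLLM call; the model name and the use of nest_asyncio are illustrative assumptions.

```python
# Illustrative sketch: handle the already-running event loop in Jupyter before calling
# an async LiteLLM completion. The nest_asyncio approach is an assumption.
import asyncio
from litellm import acompletion           # LiteLLM routes to 100+ providers behind one API

async def ask(prompt: str) -> str:
    response = await acompletion(
        model="gpt-4o-mini",              # assumed model; any LiteLLM-supported model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

try:
    asyncio.get_running_loop()            # inside Jupyter a loop is already running
    import nest_asyncio
    nest_asyncio.apply()                  # allow asyncio.run() inside the running loop
except RuntimeError:
    pass                                  # plain script: no running loop, nothing to patch

print(asyncio.run(ask("Suggest one engineered feature for a churn dataset.")))
```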

Full Changelog: v0.1.0...v0.2.0

v0.1.0

30 Jan 05:43

Features

  • Multi-engine architecture: tabular, time series, relational, and text feature engines
  • LLM-powered intelligence via GitHub Copilot SDK
  • Intelligent feature selection with statistical testing
  • Sklearn-compatible transformers for ML pipelines (see the sketch after this list)
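
A minimal sketch of how an sklearn-compatible transformer from the package might slot into a pipeline; the AutoFeatureEngineer import path and its no-argument constructor are assumptions based on the class name used elsewhere in these notes.

```python
# Illustrative only: wiring a featcopilot transformer into a standard sklearn Pipeline.
# The import path and default constructor are assumptions.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

from featcopilot import AutoFeatureEngineer   # assumed import path

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X = pd.DataFrame(X, columns=[f"f{i}" for i in range(X.shape[1])])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    ("features", AutoFeatureEngineer()),      # behaves like any sklearn transformer
    ("model", LogisticRegression(max_iter=1000)),
])
pipe.fit(X_train, y_train)
print(pipe.score(X_test, y_test))
```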

Full Changelog: https://github.com/thinkall/featcopilot/commits/v0.1.0