Example of an API for Stable Diffusion inference using FastAPI and Hugging Face.
- Clone the repository locally:

  ```shell
  git clone git@github.com:mvfolino68/python-fastapi-stabile-diffusion-backend.git
  ```
- Create a Python virtual environment, activate it, and install the requirements:

  ```shell
  python3 -m venv api
  source api/bin/activate
  pip install -r requirements.txt
  ```
- Create a `.env` file using the `.env.example` provided.
- Create an API key from Hugging Face (more information) and paste the API key into the environment file.
- Sign up for a free MongoDB account, create a cluster, database, and collection (more information), then paste the connection values into the environment file.
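Before starting the API, it can help to confirm the environment file actually loaded. Below is a minimal sketch of such a check; the variable names are hypothetical, so match them to the keys in your `.env.example`:

```python
import os

# Hypothetical variable names -- adjust to match the keys in .env.example.
REQUIRED_VARS = ["HUGGINGFACE_API_KEY", "MONGODB_URI"]

def missing_env_vars(env):
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

# Prints a list of any missing variables; empty means you're ready to start.
print(missing_env_vars(dict(os.environ)))
```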
- Start the API:

  ```shell
  uvicorn image.main:app --reload --lifespan=on --use-colors --loop uvloop --http httptools
  ```
- Make a test API request. A URL will be displayed in your terminal; navigate to http://127.0.0.1:8000/docs to try the API.
- Make a request to the API:

  ```shell
  curl --location --request POST 'http://localhost:8000/api/v1/image-generator' \
    --header 'Content-Type: application/json' \
    --data-raw '{
      "prompt": "this is a sample image prompt",
      "num_inference_steps": 0,
      "negative_prompt": "this is a sample for a negative image prompt."
    }'
  ```
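The same request can be made from Python using only the standard library. This sketch mirrors the curl example above; the payload fields follow the request body shown there, not a verified schema:

```python
import json
from urllib import request

# Payload mirroring the curl example; field names are taken from the README.
payload = {
    "prompt": "this is a sample image prompt",
    "num_inference_steps": 0,
    "negative_prompt": "this is a sample for a negative image prompt.",
}

def build_request(url="http://localhost:8000/api/v1/image-generator"):
    """Build the POST request without sending it (the API must be running to send)."""
    return request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request()
# To actually send it (requires the API to be running locally):
# with request.urlopen(req) as resp:
#     print(resp.read())
```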
Working on mounting a volume to avoid having to download the model with each build or new image. The `model_id` in the `repository.py` file will need to be updated when testing. Currently the container stops when using a mounted volume for the model; without Docker, the local model works fine.
- Create a Docker image:

  ```shell
  docker build -t fastapi-example .
  ```

- Run the Docker image:

  ```shell
  docker run -v $(pwd)/models:/app/models -d -p 8000:8000 fastapi-example
  ```
todo