diff --git a/docs/deep-dive/language_model_clients/local_models/HFClientVLLM.mdx b/docs/deep-dive/language_model_clients/local_models/HFClientVLLM.mdx
index 059c2a9..bdce7c8 100644
--- a/docs/deep-dive/language_model_clients/local_models/HFClientVLLM.mdx
+++ b/docs/deep-dive/language_model_clients/local_models/HFClientVLLM.mdx
@@ -9,7 +9,7 @@ Refer to the [vLLM Server API](/api/local_language_model_clients/vLLM) for setti
 ```bash
 #Example vLLM Server Launch
- python -m vllm.entrypoints.api_server --model meta-llama/Llama-2-7b-hf --port 8080
+ python -m vllm.entrypoints.openai.api_server --model meta-llama/Llama-2-7b-hf --port 8080
 ```
 This command will start the server and make it accessible at `http://localhost:8080`.
@@ -79,4 +79,4 @@ print(response)
 ***
-
\ No newline at end of file
+
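
The entrypoint change above moves the launch command to vLLM's OpenAI-compatible server, which exposes routes such as `/v1/completions` instead of the legacy `/generate` endpoint. As a minimal sketch (assuming a server started with the new command on port 8080, and the model name from the diff), a request body for it could be assembled like this:

```python
import json

# Hypothetical local endpoint matching the --port 8080 flag in the patched
# launch command; the /v1/completions path follows the OpenAI Completions API
# that vllm.entrypoints.openai.api_server serves.
API_URL = "http://localhost:8080/v1/completions"

def build_completion_request(model: str, prompt: str, max_tokens: int = 64) -> dict:
    """Assemble the JSON body for a /v1/completions call."""
    return {"model": model, "prompt": prompt, "max_tokens": max_tokens}

# Build and display the payload; sending it (e.g. with requests.post) is left
# out so this sketch runs without a live server.
payload = build_completion_request("meta-llama/Llama-2-7b-hf", "Say hello.")
print(json.dumps(payload))
```

The payload would be POSTed to `API_URL` with a JSON content type once the server is up.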