Request: Return full usage details instead of only returning total_tokens #83

@gallardorafael

Description

We need to get full usage data from each LLM call, as returned by OCI GenAI Python SDK (chat_response.data.chat_response.usage):

{
  "completion_tokens": 41,
  "completion_tokens_details": null,
  "prompt_tokens": 70,
  "prompt_tokens_details": null,
  "total_tokens": 111
}

Currently, langchain-oci (main branch and v0.1.6) only returns total_tokens:

# From: langchain_oci/chat_models/oci_generative_ai.py - chat_generation_info()

# Include token usage if available
if hasattr(response.data.chat_response, "usage") and response.data.chat_response.usage:
    generation_info["total_tokens"] = response.data.chat_response.usage.total_tokens
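A minimal sketch of one way to surface all usage fields instead of just total_tokens. The field names follow the JSON above; the `Usage` dataclass here is a self-contained stand-in for the SDK's usage object, and `usage_to_dict` is a hypothetical helper, not an existing langchain-oci or OCI SDK function:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Usage:
    """Stand-in for the usage object on chat_response.data.chat_response."""
    completion_tokens: int
    prompt_tokens: int
    total_tokens: int
    completion_tokens_details: Optional[dict] = None
    prompt_tokens_details: Optional[dict] = None

def usage_to_dict(usage: Usage) -> dict:
    """Copy every known usage attribute into a plain dict, skipping None values."""
    fields = (
        "completion_tokens",
        "completion_tokens_details",
        "prompt_tokens",
        "prompt_tokens_details",
        "total_tokens",
    )
    return {
        name: getattr(usage, name)
        for name in fields
        if getattr(usage, name, None) is not None
    }

# Mirroring chat_generation_info(): attach the full dict, not one field.
generation_info = {}
usage = Usage(completion_tokens=41, prompt_tokens=70, total_tokens=111)
generation_info["usage"] = usage_to_dict(usage)
# generation_info["usage"] ->
# {"completion_tokens": 41, "prompt_tokens": 70, "total_tokens": 111}
```

Keeping total_tokens as a top-level key alongside a full "usage" dict would preserve backward compatibility for callers that already read generation_info["total_tokens"].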
