sec-summary-llm


sec-summary-llm is a tiny Python package that extracts structured, concise summaries from security‑ and government‑related text (news headlines, reports, etc.). It drives a language model (LLM) with a system prompt that focuses on factual details such as roles, events, and implications, and then validates the LLM output against a strict regex pattern. The result is a clean, plain‑text summary ready for downstream analysis or reporting.


Features

  • One‑function API – call sec_summary_llm() with raw text and get back a list of extracted summary strings.
  • Built‑in LLM7 support – automatically uses ChatLLM7 from the langchain_llm7 package if no LLM is supplied.
  • Pattern‑based validation – output must match the predefined regex pattern, guaranteeing consistent formatting.
  • Pluggable LLMs – works with any LangChain‑compatible chat model (OpenAI, Anthropic, Google, etc.).
  • Zero‑markdown/HTML output – the function returns plain text, suitable for CSV, databases, or further NLP pipelines.
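The pattern‑based validation step can be illustrated with a small sketch. The package's actual regex is not documented here, so the pattern below is a hypothetical stand‑in that simply rejects strings containing markdown/HTML markers:

```python
import re

# Hypothetical pattern -- the real package's regex is not shown in this README.
# This stand-in accepts single plain-text summaries and rejects strings
# containing common markdown/HTML markers (#, *, `, <, >).
PATTERN = re.compile(r"^[^#*`<>\n]{10,500}$")

def validate_summaries(candidates):
    """Keep only candidate strings that fully match the pattern."""
    return [c for c in candidates if PATTERN.fullmatch(c)]

clean = validate_summaries([
    "Treasury imposed new sanctions targeting three entities.",
    "## Markdown heading that should be rejected",
])
print(clean)
```

Validating with `fullmatch` (rather than `search`) ensures the entire output string conforms to the expected shape, which is what guarantees consistent formatting downstream.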

Installation

pip install sec_summary_llm

Quick Start

from sec_summary_llm import sec_summary_llm

# Simple call using the default ChatLLM7 (requires an API key in the environment)
summary = sec_summary_llm(
    user_input="The Treasury Department announced new sanctions against..."
)

print(summary)   # -> ['...']  (list of formatted summary strings)
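Because the returned summaries are plain text, they can be written straight to CSV. A minimal sketch (the `summaries` list below is illustrative, not actual model output):

```python
import csv

# Illustrative stand-in for the list returned by sec_summary_llm().
summaries = ["Treasury announced sanctions; impact: financial restrictions."]

with open("summaries.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["summary"])          # header row
    for s in summaries:
        writer.writerow([s])              # one summary per row
```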

Parameters

  • user_input (str) – The raw text (e.g., headline, report excerpt) to be summarized.
  • llm (Optional[BaseChatModel]) – A LangChain chat model instance. If omitted, the function creates a ChatLLM7 instance automatically.
  • api_key (Optional[str]) – API key for LLM7. If not supplied, the function reads LLM7_API_KEY from the environment or falls back to the string "None" (which will raise an error from the provider).
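The key‑resolution order described for api_key can be sketched as follows (assumed behavior based on the description above, not the package's exact code):

```python
import os

def resolve_api_key(api_key=None):
    # Assumed resolution order: explicit argument first, then the
    # LLM7_API_KEY environment variable, then the literal string "None"
    # (which the provider will reject with an error).
    return api_key or os.getenv("LLM7_API_KEY") or "None"

print(resolve_api_key("paid_key"))  # an explicit argument takes precedence
```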

Using a Custom LLM

You can pass any LangChain‑compatible chat model. Below are examples for the most common providers.

OpenAI

from langchain_openai import ChatOpenAI
from sec_summary_llm import sec_summary_llm

llm = ChatOpenAI(model="gpt-4o-mini")
response = sec_summary_llm(
    user_input="Recent congressional hearings revealed...",
    llm=llm
)
print(response)

Anthropic

from langchain_anthropic import ChatAnthropic
from sec_summary_llm import sec_summary_llm

llm = ChatAnthropic(model="claude-3-haiku-20240307")
response = sec_summary_llm(
    user_input="A new cyber‑espionage campaign has been traced...",
    llm=llm
)
print(response)

Google Generative AI

from langchain_google_genai import ChatGoogleGenerativeAI
from sec_summary_llm import sec_summary_llm

llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")
response = sec_summary_llm(
    user_input="The Ministry of Defense released a statement about...",
    llm=llm
)
print(response)

LLM7 (Default)

  • Package: langchain_llm7 (https://pypi.org/project/langchain-llm7/)
  • Free‑tier rate limits are sufficient for typical usage (a few requests per minute).
  • To increase limits, provide a paid API key via the LLM7_API_KEY environment variable or pass it directly:
response = sec_summary_llm(
    user_input="...",
    api_key="your_paid_llm7_key"
)

Contributing & Issues

If you encounter bugs or have feature requests, please open an issue:

https://github... (replace with actual repository URL)


License

This project is licensed under the MIT License.


Author

Eugene Evstafev
Email: hi@euegne.plus
GitHub: chigwell