# sec-summary-llm

sec-summary-llm is a tiny Python package that extracts structured, concise summaries from security‑ and government‑related text (news headlines, reports, etc.). It drives a language model (LLM) with a system prompt that focuses on factual details such as roles, events, and implications, and then validates the LLM output against a strict regex pattern. The result is a clean, plain‑text summary ready for downstream analysis or reporting.
## Features

- One‑function API – call `sec_summary_llm()` with raw text and get back a list of extracted summary strings.
- Built‑in LLM7 support – automatically uses `ChatLLM7` from the `langchain_llm7` package if no LLM is supplied.
- Pattern‑based validation – output must match the predefined regex pattern, guaranteeing consistent formatting.
- Pluggable LLMs – works with any LangChain‑compatible chat model (OpenAI, Anthropic, Google, etc.).
- Zero‑markdown/HTML output – the function returns plain text, suitable for CSV, databases, or further NLP pipelines (see the sketch after this list).
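Because the summaries come back as a plain list of strings, they can go straight into tabular storage. The following is only an illustrative sketch of such a downstream step (the file name and column header are arbitrary, and it assumes an LLM7 API key is available in the environment):

```python
import csv

from sec_summary_llm import sec_summary_llm

# Summarize one item; sec_summary_llm returns a list of plain-text strings.
summaries = sec_summary_llm(
    user_input="The Treasury Department announced new sanctions against..."
)

# Persist each summary string as one CSV row.
with open("summaries.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["summary"])  # arbitrary column header
    for item in summaries:
        writer.writerow([item])
```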
## Installation

```bash
pip install sec_summary_llm
```

## Usage

```python
from sec_summary_llm import sec_summary_llm

# Simple call using the default ChatLLM7 (requires an API key in the environment)
summary = sec_summary_llm(
    user_input="The Treasury Department announced new sanctions against..."
)
print(summary)  # -> ['...'] (list of formatted summary strings)
```

## Parameters

| Name | Type | Description |
|---|---|---|
| `user_input` | `str` | The raw text (e.g., headline, report excerpt) to be summarized. |
| `llm` | `Optional[BaseChatModel]` | A LangChain chat model instance. If omitted, the function creates a `ChatLLM7` instance automatically. |
| `api_key` | `Optional[str]` | API key for LLM7. If not supplied, the function reads `LLM7_API_KEY` from the environment (see the example below) or falls back to `"None"`, which will raise an error from the provider. |
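As an alternative to passing `api_key` explicitly, you can set `LLM7_API_KEY` before calling the function. A minimal sketch follows (the key value is a placeholder; in practice you would normally export the variable in your shell or load it from a `.env` file):

```python
import os

from sec_summary_llm import sec_summary_llm

# Placeholder key: the default ChatLLM7 backend reads LLM7_API_KEY from the environment.
os.environ["LLM7_API_KEY"] = "your_llm7_key"

summary = sec_summary_llm(
    user_input="The Treasury Department announced new sanctions against..."
)
print(summary)
```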
## Using Different LLM Providers

You can pass any LangChain‑compatible chat model. Below are examples for the most common providers.
### OpenAI

```python
from langchain_openai import ChatOpenAI
from sec_summary_llm import sec_summary_llm

llm = ChatOpenAI(model="gpt-4o-mini")
response = sec_summary_llm(
    user_input="Recent congressional hearings revealed...",
    llm=llm
)
print(response)
```

### Anthropic

```python
from langchain_anthropic import ChatAnthropic
from sec_summary_llm import sec_summary_llm

llm = ChatAnthropic(model="claude-3-haiku-20240307")
response = sec_summary_llm(
    user_input="A new cyber‑espionage campaign has been traced...",
    llm=llm
)
print(response)
```

### Google Gemini

```python
from langchain_google_genai import ChatGoogleGenerativeAI
from sec_summary_llm import sec_summary_llm

llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")
response = sec_summary_llm(
    user_input="The Ministry of Defense released a statement about...",
    llm=llm
)
print(response)
```

## About LLM7
- Package: `langchain_llm7` – https://pypi.org/project/langchain-llm7/
- Free‑tier rate limits are sufficient for typical usage (a few requests per minute); see the pacing sketch after this list.
- To increase limits, provide a paid API key via the `LLM7_API_KEY` environment variable or pass it directly:

  ```python
  response = sec_summary_llm(
      user_input="...",
      api_key="your_paid_llm7_key"
  )
  ```

- Obtain a free key by registering at https://token.llm7.io/
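If you process many items on the free tier, pacing your requests helps you stay within the limit. This is only an illustrative sketch, not part of the package; the delay value and the headline list are made up:

```python
import time

from sec_summary_llm import sec_summary_llm

headlines = [
    "The Treasury Department announced new sanctions against...",
    "A new cyber-espionage campaign has been traced...",
]

results = []
for headline in headlines:
    results.append(sec_summary_llm(user_input=headline))
    time.sleep(20)  # arbitrary pause to stay within a few requests per minute

print(results)
```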
## Issues

If you encounter bugs or have feature requests, please open an issue:
https://github... (replace with actual repository URL)

## License

This project is licensed under the MIT License.

## Author

- Eugene Evstafev
- Email: hi@euegne.plus
- GitHub: chigwell