A lightweight Python package that processes text about early Unix history (pre‑V7) and returns concise, structured summaries. The summaries are generated via a language model and are forced to match a predefined XML‑like pattern, making them easy to parse and validate.
```bash
pip install pre_v7_unix_summarizer
```

```python
from pre_v7_unix_summarizer import pre_v7_unix_summarizer

# Simple call – the default ChatLLM7 will be used
summary = pre_v7_unix_summarizer(
    user_input="The early days of Unix started at AT&T Bell Labs in the late 1960s..."
)
print(summary)  # -> list of strings that match the output pattern
```

| Name | Type | Description |
|---|---|---|
| `user_input` | `str` | Raw text containing the historical Unix information you want to summarize. |
| `llm` | `Optional[BaseChatModel]` | A LangChain chat model instance. If omitted, the package creates a default `ChatLLM7` instance. |
| `api_key` | `Optional[str]` | API key for the LLM7 service. If not supplied, the function looks for the environment variable `LLM7_API_KEY`; if that is also missing, the placeholder key `"None"` is used. |
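The key lookup order described in the table can be sketched as follows. This is a hypothetical helper written for illustration; `resolve_api_key` is not part of the package's public API:

```python
import os

def resolve_api_key(api_key=None):
    # Explicit argument wins, then the LLM7_API_KEY environment
    # variable, then the placeholder string "None".
    if api_key:
        return api_key
    return os.environ.get("LLM7_API_KEY", "None")
```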
- Default LLM – `ChatLLM7` from the `langchain_llm7` package (see https://pypi.org/project/langchain-llm7/).
- Pattern Matching – The response is validated against a regular expression defined in `prompts.pattern` using `llmatch`. Only data that matches the pattern is returned.
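Pattern-based extraction works like the sketch below. The regex shown is a made-up stand-in for illustration only; the real pattern lives in `prompts.pattern` inside the package:

```python
import re

# Hypothetical XML-like pattern, for illustration only; the package's
# actual regex is defined in prompts.pattern.
PATTERN = re.compile(r"<summary>(.*?)</summary>", re.DOTALL)

def extract_matches(llm_response: str) -> list[str]:
    # Return every substring of the response that matches the pattern;
    # a response with no matching structure yields an empty list.
    return PATTERN.findall(llm_response)
```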
You can provide any LangChain‑compatible chat model. Below are a few examples.
```python
from langchain_openai import ChatOpenAI
from pre_v7_unix_summarizer import pre_v7_unix_summarizer

llm = ChatOpenAI()
summary = pre_v7_unix_summarizer(
    user_input="Your Unix text here...",
    llm=llm
)
```

```python
from langchain_anthropic import ChatAnthropic
from pre_v7_unix_summarizer import pre_v7_unix_summarizer

llm = ChatAnthropic(model="claude-3-5-sonnet-latest")  # a model name is required
summary = pre_v7_unix_summarizer(
    user_input="Your Unix text here...",
    llm=llm
)
```

```python
from langchain_google_genai import ChatGoogleGenerativeAI
from pre_v7_unix_summarizer import pre_v7_unix_summarizer

llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")  # a model name is required
summary = pre_v7_unix_summarizer(
    user_input="Your Unix text here...",
    llm=llm
)
```

- LLM7 Free Tier – The default rate limits are sufficient for most research and hobbyist use cases.
- Higher Limits – Provide your own API key either through the `LLM7_API_KEY` environment variable or by passing `api_key="YOUR_KEY"` directly to the function.
- Get a Free Key – Register at https://token.llm7.io/ to obtain an API key.
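If you prefer to set the key from Python rather than your shell, the environment variable can be assigned for the current process before the first call (replace the placeholder with a key obtained from https://token.llm7.io/):

```python
import os

# Set the key for this process; the summarizer picks it up
# automatically whenever no explicit api_key argument is passed.
os.environ["LLM7_API_KEY"] = "YOUR_KEY"
```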
If you encounter any issues or have feature requests, please open an issue on GitHub:
https://github....
We welcome contributions, bug reports, and suggestions.
This project is licensed under the MIT License.
Eugene Evstafev – hi@euegne.plus
GitHub: chigwell