This project demonstrates a basic LLM agent capable of interacting with code, using the Google Gemini API for LLM interaction.
Please read these notes carefully before using or sharing this project.
This LLM agent is a toy version designed for learning and experimentation. It is not intended for production use and lacks the robust security measures of professional tools like Cursor's Agentic Mode or Claude Code.
Be very cautious about giving any Large Language Model (LLM) access to your filesystem and Python interpreter. An LLM might execute arbitrary code or make unintended changes. Always understand what the agent is doing before allowing it to proceed.
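One common mitigation is to confine every file path the agent requests to a single working directory before touching the filesystem. Below is a minimal sketch of that idea; the helper name `resolve_in_workdir` is hypothetical and not part of this project.

```python
from pathlib import Path

def resolve_in_workdir(workdir: str, requested: str) -> Path:
    """Resolve an agent-requested path, refusing anything outside workdir.

    Hypothetical helper for illustration; this project does not ship it.
    """
    base = Path(workdir).resolve()
    target = (base / requested).resolve()
    # Reject paths that escape the working directory (e.g. via "..").
    if target != base and base not in target.parents:
        raise ValueError(f"Refusing path outside working directory: {requested}")
    return target
```

A check like this is not a full sandbox (it does nothing about the Python interpreter itself), but it blocks the most obvious class of accidental writes outside the project folder.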
Do not share this code with others without clearly explaining its inherent risks and limitations. Users should be fully aware of the potential for unintended side effects.
Before running the agent, especially when experimenting with new features or on different codebases, it is highly recommended to commit your changes to version control (e.g., Git). This ensures you can always revert to a stable state if the agent makes undesirable modifications.
LLMs can sometimes produce unexpected or incorrect results. Always review any changes or actions proposed by the agent before implementing them.