Note: For a detailed explanation of the "spore" concept, its purpose, and its role in ontology governance, see SPORE_CONCEPT.md.
- DRY Scientific Overview
- Spore Concept & Governance
- MCP & BFG9K Architecture
- MCP Validator Structure
- Cursor IDE, LLM, and BFG9K_MCP Integration
- Navigation Diagram (with links)
A framework for managing and validating ontologies with support for both local and Oracle RDF storage.
- Consistent prefix and namespace management
- Local and Oracle RDF store support
- Pre-commit hooks for ontology validation
- SHACL validation support
- Automated testing infrastructure
Install using conda (recommended):
conda env create -f environment.yml
conda activate ontology-framework

Or using pip:
pip install -r requirements.txt

To use the Oracle RDF store functionality:
- Oracle Database with RDF support
- Java must be installed and configured in the Oracle database
- Required environment variables:
- `ORACLE_USER`: Database username
- `ORACLE_PASSWORD`: Database password
- `ORACLE_DSN`: Database connection string
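A quick way to confirm these variables are set before running anything against Oracle is a small stdlib check (this sketch is illustrative, not part of the framework's own tooling):

```python
import os

# The three variables required by the Oracle RDF store integration
REQUIRED = ("ORACLE_USER", "ORACLE_PASSWORD", "ORACLE_DSN")

missing = [name for name in REQUIRED if not os.environ.get(name)]
if missing:
    print(f"Missing Oracle environment variables: {', '.join(missing)}")
else:
    print("Oracle environment configured")
```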
To verify Oracle setup:
python -m scripts.verify_oracle_setup

- Avoid absolute paths in configuration files. Use relative paths (e.g., `./bfg9k_mcp.py`, `./src`) to ensure portability across different environments and operating systems.
- Always run commands from the project root directory. This ensures that all relative paths in configuration files resolve correctly.
- Clone the repository:

  git clone https://github.com/yourusername/ontology-framework.git
  cd ontology-framework

- Install pre-commit hooks:

  pip install pre-commit
  pre-commit install

- Run tests:

  pytest tests/
The framework provides consistent prefix management through the PrefixMap class:
from rdflib import Graph
from ontology_framework.prefix_map import PrefixCategory, default_prefix_map
# Get standard namespace
meta_ns = default_prefix_map.get_namespace("meta")
# Register custom prefix
default_prefix_map.register_prefix("custom", "http://example.org/custom#", PrefixCategory.DOMAIN)
# Bind prefixes to graph
g = Graph()
default_prefix_map.bind_to_graph(g)

The following checks are run on commit:
- Python code formatting (black)
- Import sorting (isort)
- Python code quality (flake8)
- Ontology validation:
- Required prefixes
- Class property requirements
- SHACL constraints
Ontologies must follow these rules:
- Use standard prefixes from `prefix_map.py`
- Include required properties: `rdfs:label`, `rdfs:comment`, `owl:versionInfo`
- Pass SHACL validation if shapes are defined
This framework provides tools for validating spore instances against governance rules and transformation patterns, ensuring compliance with the Spore Governance Discipline.
The validation framework implements the Spore Governance Discipline by checking spore instances for:
- Pattern registration via `meta:TransformationPattern`
- SHACL validation support
- Runtime feedback through `meta:distributesPatch`
- Conformance tracking via `meta:confirmedViolation`
- Propagation and reintegration of corrections via `meta:ConceptPatch`
- Create a virtual environment:
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

- Install dependencies:

pip install -r requirements.txt

from spore_validation import SporeValidator
validator = SporeValidator()
spore_uri = "http://example.org/spores/example-spore"
results = validator.validate_spore(spore_uri)
print(results)

Run the tests:

python -m pytest test_spore_validation.py -v

The framework validates spore instances against the following governance rules:
- Pattern Registration
  - Spore must be registered as a `meta:TransformationPattern`
  - Must have proper type assertions
  - Must be properly documented with labels and comments
- SHACL Validation
  - Must have associated SHACL shapes
  - Shapes must target the spore class
  - Validation rules must be properly defined
  - Must support runtime validation
- Patch Support
  - Must support patch distribution via `meta:distributesPatch`
  - Patches must be of type `meta:ConceptPatch`
  - Must support propagation and reintegration of corrections
  - Must maintain patch history and versioning
- Conformance Tracking
  - Must track conformance violations via `meta:confirmedViolation`
  - Must support LLM or system-evaluated conformance
  - Must document remediation paths
  - Must maintain violation history
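To make the conformance-tracking rule concrete, a hypothetical record shape for a confirmed violation might look like the following; the field names are assumptions for illustration, not the framework's actual API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ConfirmedViolation:
    """Hypothetical record corresponding to a meta:confirmedViolation entry."""
    spore_uri: str
    rule: str           # which governance rule was violated
    evaluated_by: str   # "llm" or "system" evaluated conformance
    remediation: str    # documented remediation path
    observed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


# Violation history must be maintained, so records are appended, never replaced
history: list[ConfirmedViolation] = []
history.append(ConfirmedViolation(
    spore_uri="http://example.org/spores/example-spore",
    rule="PatternRegistration",
    evaluated_by="system",
    remediation="Add rdf:type meta:TransformationPattern assertion",
))
print(len(history), history[0].rule)
```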
To ensure robust and DRY ontology management, follow this two-step validation process:
- Tool: `fix_prefixes_tool`
- Purpose: Ensures all prefixes are absolute IRIs and well-formed.
- How to use:
  - Dry run (preview changes without modifying the file):

    # Example (dry run)
    fix_prefixes_tool <your_file.ttl>

  - Apply fixes:

    # Example (apply fixes)
    fix_prefixes_tool --apply <your_file.ttl>
- Why: Relative or malformed prefixes will cause errors in downstream validation and reasoning tools. Fixing them first prevents cascading issues.
- Tool: `validate_turtle_tool`
- Purpose: Checks Turtle syntax, semantic consistency, SHACL/OWL rules, and more.
- How to use:
validate_turtle_tool <your_file.ttl>
- Why: Ensures your ontology is valid, consistent, and ready for use in the framework. This step assumes prefixes are already correct.
- Prefix issues are a common source of validation failure. Fixing them first makes the main validation step more reliable and actionable.
- Iterative improvement: This workflow is robust and can be automated in the future if needed, based on real-world usage and pain points.
- If prefix issues become a bottleneck or are frequently forgotten, consider integrating prefix fixing into the main validation tool or creating a wrapper script.
- For now, this two-step process is recommended for reliability and clarity.
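A minimal wrapper along those lines could look like this; it assumes both tools are on `PATH` and accept the flags shown above, and only composes the command sequence:

```python
import subprocess


def validate_with_prefix_fix(ttl_file: str, apply_fixes: bool = False) -> list[list[str]]:
    """Compose the two-step workflow: fix prefixes first, then validate."""
    fix_cmd = ["fix_prefixes_tool"] + (["--apply"] if apply_fixes else []) + [ttl_file]
    validate_cmd = ["validate_turtle_tool", ttl_file]
    return [fix_cmd, validate_cmd]


for cmd in validate_with_prefix_fix("ontology.ttl", apply_fixes=True):
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment once both tools are installed
```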
- Fork the repository
- Create a feature branch
- Commit your changes with appropriate type and version:
git commit -m "onto(scope): description Ontology-Version: X.Y.Z"
- Push to the branch
- Create a Pull Request
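A sketch of how a CI hook might check the commit convention above; the exact format is defined by the project, and this pattern merely accepts an `onto(scope):` prefix with an `Ontology-Version` trailer on the same or a following line:

```python
import re

# Hypothetical check for the "onto(scope): description Ontology-Version: X.Y.Z" convention
PATTERN = re.compile(
    r"^onto\([\w.-]+\): .+\bOntology-Version: \d+\.\d+\.\d+\b",
    re.S,
)

msg = "onto(core): add SHACL shapes\n\nOntology-Version: 1.2.0"
print("ok" if PATTERN.search(msg) else "rejected")
```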
This project is licensed under the MIT License - see the LICENSE file for details.
This repository contains deployment scripts for running Ontotext GraphDB on Azure Container Apps (ACA) in a development environment.
- Azure CLI installed and configured
- Azure subscription with permissions to create:
- Resource Groups
- Storage Accounts
- Container Apps
- Container Apps Environments
-
Make the deployment script executable:
chmod +x deploy-graphdb.sh
-
Run the deployment script:
./deploy-graphdb.sh
The script will:
- Create a resource group
- Set up Azure Files storage
- Create a Container Apps environment
- Deploy GraphDB with persistent storage
- Configure HTTPS ingress
- Set up basic authentication
- Azure Files is mounted at `/opt/graphdb/home`
- Storage account uses Standard_LRS SKU
- File share name: `graphdbdata`
- Image: ontotext/graphdb:10.4.1
- Port: 7200
- Memory: 2GB heap size
- Authentication: Basic auth
- Default credentials:
- Username: admin
- Password: graphdb-dev-password
- External HTTPS ingress enabled
- Custom domain name provided by Azure Container Apps
- No VNet integration required
Once deployed, you can access GraphDB through:
- Web interface: `https://<container-app-dns>`
- REST API: `https://<container-app-dns>/repositories`
curl -X GET https://<container-app-dns>/repositories -u admin:graphdb-dev-password

- Basic authentication is enabled by default
- HTTPS is enforced
- Storage account uses Azure's built-in encryption
- Consider implementing IP restrictions if needed
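The curl call above can also be issued from Python's standard library; `<container-app-dns>` remains a placeholder for your deployment's DNS name, so the request is only constructed here, not sent:

```python
import base64
import urllib.request

# Placeholder endpoint from this README; substitute your deployment's DNS name
url = "https://<container-app-dns>/repositories"

# Default dev credentials as documented above
token = base64.b64encode(b"admin:graphdb-dev-password").decode()
req = urllib.request.Request(url, headers={"Authorization": f"Basic {token}"})

print(req.get_header("Authorization"))
# urllib.request.urlopen(req)  # uncomment against a real deployment
```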
To remove all resources:
az group delete --name graphdb-dev-rg --yes

- This setup is for development purposes only
- Single instance deployment
- No high availability
- Basic authentication only
- No custom domain configuration
- Uses Standard_LRS storage for cost efficiency
- Single instance deployment
- No additional services (API Gateway, etc.)
- Auto-scaling disabled