
EHNA: Edge-Hybrid Spatial Narrative Perception Engine

易叙引擎:边缘混合式空间叙事感知引擎

[Badges: License · Python Version · Ollama Compatible · GitHub Stars · Discord]

Subtitle: Open-Source Agent Framework for Multimodal Snapshot, Cross-Domain Compatibility & Spatial Computing


🌟 Project Overview | 项目概述

EHNA (易叙) is an open-source intelligent agent framework designed for spatial narrative perception, with core advantages in multimodal fusion, cross-domain compatibility, and edge deployment. It addresses key bottlenecks in current narrative-driven applications (e.g., inconsistent generated content, the performance-quality trade-off, and data silos across engines) by combining pre-baked text constraints with dynamic AI generation.

EHNA empowers developers, artists, and researchers to build immersive narrative experiences that break free from cloud dependency and device limitations—supporting use cases from text-based games and AR geospatial narratives to digital performance art.



🚀 Core Features | 核心特性

1. Hybrid Narrative Architecture | 混合式叙事架构

  • Analogous to light baking in 3D games, the Text State Baking mechanism constrains AI generation with pre-structured core plots, character settings, and spatial rules, ensuring narrative consistency while retaining dynamic variety; a minimal sketch follows this list.
  • Dual-Model Division: separates a "Narrative Model" (cinematic scene/dialogue generation) from a "Logic Model" (task validation, snapshot encoding) to balance literary quality and real-time performance.

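The sketch below illustrates the baking idea in plain Python. The class and prompt helpers are hypothetical, not EHNA's actual API; they only show how pre-structured constraints can feed the two models differently.

```python
# Hypothetical sketch of the Text State Baking idea: pre-structured plot,
# character, and spatial rules are "baked" into a constraint block that is
# prepended to every prompt, so dynamic generation stays consistent.
# Class and function names are illustrative, not EHNA's actual API.
from dataclasses import dataclass, field


@dataclass
class BakedState:
    core_plot: str                                       # immutable story spine
    characters: dict = field(default_factory=dict)       # name -> fixed traits
    spatial_rules: list = field(default_factory=list)    # e.g. "the airlock is sealed"

    def as_constraint_block(self) -> str:
        chars = "; ".join(f"{n}: {t}" for n, t in self.characters.items())
        rules = "; ".join(self.spatial_rules)
        return (f"[BAKED CONSTRAINTS]\nPlot: {self.core_plot}\n"
                f"Characters: {chars}\nSpatial rules: {rules}\n[/BAKED CONSTRAINTS]")


def narrative_prompt(state: BakedState, player_action: str) -> str:
    # The "Narrative Model" sees the full constraint block plus the player action.
    return f"{state.as_constraint_block()}\nPlayer action: {player_action}\nDescribe the scene."


def logic_prompt(state: BakedState, player_action: str) -> str:
    # The "Logic Model" only validates the action against the spatial rules.
    return (f"Rules: {'; '.join(state.spatial_rules)}\n"
            f"Action: {player_action}\nAnswer VALID or INVALID with a reason.")
```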

2. Cross-Domain Compatibility | 跨域兼容能力

  • Textual Snapshot Archiving System (TSAS): Encodes full game states (map, inventory, quests, character relationships) into structured text—enabling cross-engine reuse ("one save file for multiple games"); an illustrative snapshot follows this list.
  • Standardized Protocols: Supports JSON-LD data format and cross-engine plugins (Unity/Unreal/Godot) for seamless state transfer.

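For illustration, a TSAS-style snapshot might be serialized as the JSON-LD document below. The field names and the @context URL are placeholders rather than the project's published schema.

```python
import json

# Hypothetical TSAS snapshot: the full game state encoded as JSON-LD text.
# Field names and the @context URL are illustrative, not EHNA's official schema.
snapshot = {
    "@context": "https://example.org/tsas/v1",   # placeholder vocabulary
    "@type": "GameSnapshot",
    "engine": "Unity",                           # engine the save was written from
    "map": {"zone": "station-deck-2", "position": [12.5, 0.0, -3.2]},
    "inventory": ["keycard", "flashlight"],
    "quests": [{"id": "restore-power", "status": "active"}],
    "relationships": {"npc_ada": {"trust": 0.7}},
}

# Serializing to text is what makes the save portable across engines.
text_archive = json.dumps(snapshot, ensure_ascii=False, indent=2)
print(text_archive)
```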

3. Multimodal & Spatial Perception | 多模态与空间感知

  • Multimodal Narrative Interface Layer (MNIL): Unifies input/output for text, images, audio, geospatial data, and spatial sensor data, powered by vision-language models (Qwen3-VL, Llama3.2-Vision).
  • Spatial-Narrative Mapping Protocol (SNMP): Maps virtual narrative elements (quests, NPCs) to real-world geographic coordinates, enabling AR/spatial computing scenarios; a minimal sketch follows this list.

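A minimal sketch of the spatial-narrative mapping idea, with hypothetical field names: a narrative element is anchored to a latitude/longitude pair and triggered when the player comes within a set radius.

```python
import math

# Hypothetical SNMP-style anchor: a narrative element tied to a real-world
# latitude/longitude so an AR client can trigger it near that location.
quest_anchor = {
    "element": "npc_courier",            # virtual narrative element
    "lat": 31.2304, "lon": 121.4737,     # real-world coordinates (Shanghai, as an example)
    "trigger_radius_m": 25.0,            # activate within 25 metres
}


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def should_trigger(anchor, player_lat, player_lon) -> bool:
    d = haversine_m(anchor["lat"], anchor["lon"], player_lat, player_lon)
    return d <= anchor["trigger_radius_m"]


print(should_trigger(quest_anchor, 31.2305, 121.4738))  # True: player is nearby
```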

4. Edge Deployment Optimization | 边缘部署优化

  • Tiered Model Packages: Tailored for 8GB/16GB/32GB RAM devices (PC/Mac/AR glasses) with Ollama-compatible deployment commands and no cloud dependency; a tier-selection sketch follows this list.
  • Low-Latency Performance: Optimized model quantization (Q4_K_M/Q6_K) for inference latency < 800ms on mid-range hardware.

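As a rough sketch (not part of the framework), a launcher could pick one of the tiers described above from installed RAM. The tier boundaries and model names mirror the Model Packages table below; the exact Ollama tags may differ, and psutil is an external dependency.

```python
import psutil  # third-party: pip install psutil

# Hypothetical helper: choose a model package tier from installed RAM.
# Tier boundaries and model names mirror the Model Packages table in this
# README; the exact Ollama tags to pull may differ.
TIERS = [
    (32, "Premium",  ["qwen3:32b", "deepseek-r1:32b", "qwen3-vl:14b"]),
    (16, "Advanced", ["qwen3:14b", "llama3:13b", "qwen3-vl:7b"]),
    (8,  "Basic",    ["qwen3:7b",  "llama3:7b",  "qwen3-vl:4b"]),
]


def pick_tier():
    ram_gb = psutil.virtual_memory().total / (1024 ** 3)
    for min_gb, name, models in TIERS:
        if ram_gb >= min_gb:
            return name, models
    raise RuntimeError(f"Only {ram_gb:.1f} GB RAM detected; below the 8 GB minimum")


print(pick_tier())
```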


🎯 Target Audience | 适用人群

| Role | Use Cases |
| --- | --- |
| Game Developers | Build local AI-driven text games, geospatial narrative games, and AR storytelling apps |
| Media Artists | Create digital spatial performances, virtual scene narratives, and real-time interactive art |
| Researchers | Explore intersections of game AI, generative narrative, multimodal interaction, and spatial computing |
| Open-Source Enthusiasts | Contribute to toolchain development and multimodal/spatial narrative case studies |

📦 Project Deliverables | 项目产出

Core Modules | 核心模块

  • EHNA Agent Core: the edge-hybrid narrative agent

  • TSAS (Textual Snapshot Archiving System)

  • MNIL (Multimodal Narrative Interface Layer)

  • SNMP (Spatial-Narrative Mapping Protocol)


Model Packages | 模型套餐

| Hardware Tier | Model Combination | Ollama Deployment Command |
| --- | --- | --- |
| Basic (8GB RAM) | Qwen3:7B + Llama3:7B + Qwen3-VL:4B (Q4_K_M) | `ollama run qwen3:7b-instruct:q4_k_m && ollama run llama3:7b-instruct:q4_k_m && ollama run qwen3-vl:4b:q4_k_m` |
| Advanced (16GB RAM) | Qwen3:14B + Llama3:13B + Qwen3-VL:7B (Q4_K_M) | `ollama run qwen3:14b-instruct:q4_k_m && ollama run llama3:13b-instruct:q4_k_m && ollama run qwen3-vl:7b:q4_k_m` |
| Premium (32GB RAM) | Qwen3:32B + DeepSeek-R1:32B + Qwen3-VL:14B (Q6_K) | `ollama run qwen3:32b-instruct:q6_k && ollama run deepseek-r1:32b-instruct:q6_k && ollama run qwen3-vl:14b:q6_k` |

Demo Cases | 实证案例

  1. Virtual Space Station Narrative: Multimodal text game with dynamic scene generation

  2. City Treasure Map: Geospatial narrative game with TSAS cross-engine save

  3. Mixed Reality Digital Performance: AR spatial narrative demo with SNMP protocol



🚀 Quick Start | 快速开始

Prerequisites | 前置依赖

  • Python 3.9+
  • Ollama 0.1.30+ (for model deployment)
  • Git (for repository management)
  • Recommended Hardware: 16GB RAM (Advanced Package) / 8GB RAM (Basic Package)

Installation | 安装步骤

# Clone the repository
git clone https://github.com/ewanqian/EHNA.git
cd EHNA

# Install dependencies
pip install -r requirements.txt

# Pull recommended models via Ollama (Advanced Package example)
ollama pull qwen3:14b-instruct:q4_k_m
ollama pull llama3:13b-instruct:q4_k_m
ollama pull qwen3-vl:7b:q4_k_m

# Run the minimal demo
python examples/minimal_demo/run_demo.py
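Before wiring models into EHNA, you can confirm that a pulled model answers by calling Ollama's local REST endpoint directly. This check is independent of the framework itself; the model tag is the one pulled above.

```python
import json
import urllib.request

# Minimal check against the local Ollama server (default port 11434).
# This talks to Ollama directly and does not use EHNA's own API.
payload = {
    "model": "qwen3:14b-instruct:q4_k_m",   # one of the models pulled above
    "prompt": "Reply with one short sentence to confirm you are running.",
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```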

Verify Installation | 验证安装

Check that the core modules load correctly:

python tools/validation/verify_install.py

📚 Documentation | 文档资源


👥 Contribution | 贡献指南

We welcome contributions from developers, artists, and researchers! See CONTRIBUTING.md for details on:

  • Knowledge base expansion (technical docs, case studies, test data)
  • Code development (core modules, tools, plugins)
  • Empirical testing (performance, compatibility, user experience)
  • Case development (games, art projects, research prototypes)


Contribution Flow | 贡献流程

  1. Fork the repository

  2. Create a feature branch (git checkout -b feature/your-feature)

  3. Commit your changes (follow Conventional Commits)

  4. Push to the branch (git push origin feature/your-feature)

  5. Open a Pull Request



📜 License | 许可证

This project is licensed under the Apache License 2.0—see the LICENSE file for details. You are free to use, modify, distribute, and commercially deploy the software with attribution.



📞 Contact & Support | 联系方式


⭐ Star History | 星标历史

Star History Chart


Made with ❤️ by the EHNA Community. We strive to bridge art, technology, and narrative intelligence.
