diff --git a/README.md b/README.md
index d1178c5e..9f418cec 100644
--- a/README.md
+++ b/README.md
@@ -62,6 +62,56 @@ The name "Vela" is originated from the Latin term for "sail," which is also the
- Please refer to the [Supported Architectures and Platforms](https://nuttx.apache.org/docs/latest/platforms/index.html) page for a complete list.
- For adaptation cases regarding development boards, please refer to the [Case Documentation](./en/dev_board/Development_Board.md).
+## What's New
+
+- **Significant Hardware Ecosystem Expansion**: Added support for **Infineon AURIX™ TC4**, **Flagchip MCU**, and the **QEMU-R52 SIL** platform. (View [TC4 Guide](./en/quickstart/development_board/tc4d9_evb_guide.md) / [Flagchip Guide](./en/quickstart/development_board/fc7300f8m_evb_guide.md))
+
+- **Enhanced Ubuntu Development Experience**: The OpenVela VS Code plugin now **fully supports the Ubuntu environment**. Linux developers can enjoy a seamless, end-to-end workflow—from project creation and build to system debugging—significantly boosting development efficiency. Get started: [VS Code Plugin Guide](./en/quickstart/vscode_plugin_usage.md).
+
+## Version Strategy
+
+We manage releases based on the `trunk` branch, using Tags to track release history. This ensures traceability and stability for production environments.
+
+### Release Tags
+
+Release tags are immutable markers created on the `trunk` branch. Each tag represents an officially released version of openvela.
+
+- **Production Environment Recommendation**: To ensure maximum system stability and security, we **strongly recommend** using the latest release tag in production environments rather than building directly from a branch.
+
+### Released Versions
+
+Below are the currently released stable versions and their change logs:
+
+- **trunk-5.4**: Please refer to the [v5.4 Release Notes](./en/release_notes/v5.4.md) for detailed changes.
+
+- **trunk-5.2**: Please refer to the [v5.2 Release Notes](./en/release_notes/v5.2.md) for detailed changes.
+
+### Version Maintenance Strategy
+
+openvela follows a strict version maintenance lifecycle:
+
+- **Patch Updates**: When critical bugs or security vulnerabilities are discovered in a released version, the team issues a new patch release tag to deliver the fixes.
+- **Naming Convention**: Patch versions increment based on the original version number, such as `trunk-5.2.1`.
+
+## Branch Strategy
+
+openvela adopts a dual-branch model to balance system innovation and stability. Please select the appropriate branch according to your development needs.
+
+### dev (Development Branch)
+
+- **Definition**: This is the cutting-edge development branch of openvela, aggregating the latest features and bug fixes.
+- **Status**: The code is updated frequently and is under continuous integration and rapid iteration. It may contain features that are not yet fully verified and can therefore be unstable.
+- **Target Audience**:
+
+ - Developers who wish to experience new features early.
+ - Contributors planning to submit code or participate in core function development.
+
+### trunk (Stable Trunk Branch)
+
+- **Definition**: This is the fully tested main branch, representing the current stable state of the system.
+- **Status**: Features from the `dev` branch are merged here only after they pass rigorous testing and verification.
+- **Target Audience**: Most users who require high system stability, and engineers developing standard applications.
+
## Quick start
### Device Development
@@ -115,29 +165,6 @@ To see the full list of native apps, please visit the [Native App Examples Repos
More Quick App examples are continuously being added. To see all examples, please visit the [Quick App Examples Repository](../../../packages_fe_examples).
-## openvela Versioning Strategy
-
-- **dev (Development Branch)**
-
- Contains the latest features and fixes, and may be unstable. Recommended for developers who wish to experience new features or contribute.
-
-- **trunk (Main Stable Branch)**
-
- A comprehensively tested, stable version. Stable features from the `dev` branch are merged here. Recommended for most users seeking stability.
-
-- **Release Tags**
-
- Permanent tags created from the `trunk` branch, representing an official, stable release. We strongly recommend using the latest release tag in **production environments** to ensure maximum stability.
-
- - **List of Released Versions**:
-
- - `trunk-5.2`: For detailed changes in this version, please refer to its [v5.2 Release Notes](./en/release_notes/v5.2.md).
- - `trunk-5.4`: For detailed changes in this version, please refer to its [v5.4 Release Notes](./en/release_notes/v5.4.md).
-
- - **Maintenance Policy**:
-
- Critical bug fixes for a released version will be delivered by releasing a new patch tag (e.g., `trunk-5.2.1`).
-
## Code contribution
- [Code Contribution Guide](./CONTRIBUTING.md)
diff --git a/README_zh-cn.md b/README_zh-cn.md
index 75adeaf9..e7184c0e 100644
--- a/README_zh-cn.md
+++ b/README_zh-cn.md
@@ -63,6 +63,56 @@ Vela 的命名源自拉丁语中船帆的含义,也是南方星空中船帆星
- openvela 支持各种不同的架构(ARM32、ARM64、RISC-V、Xtensa、MIPS、CEVA 等)和硬件平台。请在[硬件支持](https://nuttx.apache.org/docs/latest/platforms/index.html)页面上查看完整列表。
- 关于**开发板**的适配案例,请参见[案例文档](./zh-cn/dev_board/Development_Board.md)。
+## 最新动态
+
+- 硬件生态大幅扩展:新增对 **英飞凌 AURIX™ TC4**、**旗芯微 (Flagchip) MCU** 以及 **QEMU-R52 SIL** 平台的适配支持。(查看 [TC4 指南](./zh-cn/quickstart/development_board/tc4d9_evb_guide.md) / [旗芯微指南](./zh-cn/quickstart/development_board/fc7300f8m_evb_guide.md))
+
+- Ubuntu 开发体验升级:openvela VS Code 插件现已**完美支持 Ubuntu 环境**。Linux 开发者现在也可以享受从项目创建、编译构建到系统调试的一站式流畅体验,开发效率显著提升。即刻体验:[VS Code 插件使用指南](./zh-cn/quickstart/vscode_plugin_usage.md)。
+
+## 版本发布管理 (Version Strategy)
+
+我们基于 `trunk` 分支进行版本发布,通过标签(Tags)管理发布历史,确保生产环境的可追溯性与稳定性。
+
+### 发布标签 (Release Tags)
+
+发布标签是基于 `trunk` 分支创建的不可变标记(Immutable Marker)。每个标签代表一个正式发布的 openvela 版本。
+
+- **生产环境建议**:为了确保系统的最高稳定性和安全性,我们**强烈建议**在生产环境(Production Environment)中使用最新的发布标签,而非直接使用分支代码。
+
+### 已发布版本列表
+
+以下是当前已发布的稳定版本及其变更说明:
+
+- **trunk-5.4**:请查阅 [v5.4 版本发布说明](./zh-cn/release_notes/v5.4.md) 了解详细变更。
+
+- **trunk-5.2**:请查阅 [v5.2 版本发布说明](./zh-cn/release_notes/v5.2.md) 了解详细变更。
+
+### 版本维护策略
+
+openvela 遵循严格的版本维护生命周期:
+
+- **补丁更新**:针对已发布版本中发现的关键缺陷(Critical Bugs)或安全漏洞,团队将发布新的补丁版本标签(Patch Release)进行修复。
+- **命名规则**:补丁版本将在原版本号基础上递增,例如 `trunk-5.2.1`。
+
+## 代码分支管理 (Branch Strategy)
+
+openvela 采用双分支模型来平衡系统的创新性与稳定性。请根据您的开发需求选择合适的分支。
+
+### dev (开发分支)
+
+- **定义**:这是 openvela 的前沿开发分支,汇集了最新的功能特性与缺陷修复。
+- **状态**:代码更新频率高,处于持续集成与快速迭代状态,可能包含尚未完全验证的特性,因此可能存在不稳定性。
+- **适用人群**:
+
+ - 希望抢先体验新功能的开发者。
+ - 计划向社区提交代码、参与核心功能建设的贡献者。
+
+### trunk (主干稳定分支)
+
+- **定义**:这是经过全面测试的主干分支,代表了当前系统的稳定状态。
+- **状态**:`dev` 分支中的功能在经过严格测试验证稳定后,会被合并至此分支。
+- **适用人群**:大多数对系统稳定性有较高要求的用户,以及进行标准应用开发的工程师。
+
## 快速入门
### 设备开发
@@ -116,29 +166,6 @@ Vela 的命名源自拉丁语中船帆的含义,也是南方星空中船帆星
快应用相关示例正在持续丰富中。查看所有示例,请访问[快应用示例仓库](../../../packages_fe_examples)。
-## openvela 版本策略
-
-- **dev (开发分支)**
-
- 汇集了最新的功能与修复,可能不稳定。推荐给希望体验新功能或参与贡献的开发者。
-
-- **trunk (主干稳定分支)**
-
- 经全面测试的稳定版本,`dev` 分支的稳定功能会合并于此。推荐大多数追求稳定性的用户使用。
-
-- **Release Tags (版本发布标签)**
-
- 基于 `trunk` 分支创建的永久标记,代表一个正式、稳定的发布版本。我们强烈建议**生产环境**使用最新的发布标签以确保最高稳定性。
-
- - **已发布版本列表**:
-
- - `trunk-5.2`:关于此版本的详细变更,请查阅其 [v5.2 版本发布说明](./zh-cn/release_notes/v5.2.md)。
- - `trunk-5.4`:关于此版本的详细变更,请查阅其 [v5.4 版本发布说明](./zh-cn/release_notes/v5.4.md)。
-
- - **维护策略**:
-
- 针对已发布版本的关键 Bug 修复,会通过发布新的补丁版本标签来提供(例如 `trunk-5.2.1`)。
-
## 参与贡献
- [代码贡献指南](./CONTRIBUTING_zh-cn.md)
diff --git a/en/contribute/process/doc_dev_process.md b/en/contribute/process/doc_dev_process.md
index bbeb5848..6a1a685b 100644
--- a/en/contribute/process/doc_dev_process.md
+++ b/en/contribute/process/doc_dev_process.md
@@ -1,6 +1,10 @@
# openvela Documentation Development Process
-\[ English | [简体中文](../../../zh-cn/contribute/process/doc_dev_process.md) \]
+[ English | [简体中文](../../../zh-cn/contribute/process/doc_dev_process.md) ]
+
+## Flowchart
+
+![Flowchart](./figures/001.png)
## I. What Development Engineers Should Do
diff --git a/en/contribute/process/figures/001.png b/en/contribute/process/figures/001.png
new file mode 100644
index 00000000..30cb1009
Binary files /dev/null and b/en/contribute/process/figures/001.png differ
diff --git a/en/contribute/process/images/doc_dev_process.svg b/en/contribute/process/images/doc_dev_process.svg
deleted file mode 100644
index bb3d0368..00000000
--- a/en/contribute/process/images/doc_dev_process.svg
+++ /dev/null
@@ -1 +0,0 @@
-
\ No newline at end of file
diff --git a/en/dev_board/Development_Board.md b/en/dev_board/Development_Board.md
index 0560a3ff..19699867 100644
--- a/en/dev_board/Development_Board.md
+++ b/en/dev_board/Development_Board.md
@@ -1,13 +1,13 @@
# openvela Development Board Examples
-\[ English | [简体中文](../../zh-cn/dev_board/Development_Board.md) \]
+[ English | [简体中文](../../zh-cn/dev_board/Development_Board.md) ]
-| Manufacturer | Board Model | Chip Model | Porting Guide | Typical Application Scenarios | Board Support |
-| -------------------- | -------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------- | --------------------------------------------------- | -------------------------------------------------------------------------------------------- |
-| STMicroelectronics | [STM32H750B-DK](https://www.st.com/en/evaluation-tools/stm32h750b-dk.html) | [STM32H750XB](https://www.st.com/en/microcontrollers-microprocessors/stm32h750xb.html) | [Deploy openvela on STM32H750](../quickstart/development_board/STM32H750.md) | Smart Home, Industrial Control, Medical Electronics | [ST MCU China Support](mailto:mcu.china@st.com) |
-| STMicroelectronics | STM32F411CEU6 | [STM32F411CE](https://www.st.com/en/microcontrollers-microprocessors/stm32f411ce.html) | [Blink an LED with openvela on STM32F411](../quickstart/development_board/STM32F411.md) | IoT, Industrial Automation | [ST MCU China Support](mailto:mcu.china@st.com) |
-| Espressif | [ESP32-S3-EYE](https://www.espressif.com/en/dev-board/esp32-s3-eye) | [ESP32-S3](https://www.espressif.com/en/products/socs/esp32-s3) | [Port openvela to the ESP32-S3-EYE Dev Board](../quickstart/development_board/ESP32-S3-EYE.md) | AIoT, HMI, Smart Home | [Espressif Developer Community](https://www.espressif.com/en/contact-us/technical-inquiries) |
-| Espressif | [ESP32-S3-BOX](https://www.espressif.com/en/news/ESP32-S3-BOX_video) | [ESP32-S3](https://www.espressif.com/en/products/socs/esp32-s3) | [See: Port openvela to the ESP32-S3-EYE Dev Board](../quickstart/development_board/ESP32-S3-EYE.md) | AIoT, HMI, Smart Home | [Espressif Developer Community](https://www.espressif.com/en/contact-us/technical-inquiries) |
-| Bestechnic | [BES2600WM MAIN BOARD V1.1](https://www.fortune-co.com/index.php?s=/Cn/Public/singlePage/catid/176.html) | BES2600WM-AX4F | [Readme](../../../../../vendor_bes/blob/trunk/boards/best2003_ep/aos_evb/Readme) | Smart Wearables, AI Toys | [Contact Distributor](https://www.fortune-co.com/Tech/projectDetail/id/64.html) |
-| Flagchip | FC7300F8M-EVB | FC7300F8MDT | [openvela Running Guide for FC7300F8M-EVB](../quickstart/development_board/fc7300f8m_evb_guide.md) | Domain/Zonal Controllers, ADAS, BMS, Motor Control, etc. | [Contact Distributor](https://www.flagchip.com.cn/Pro/3/3.html) | [Contact Distributor](https://www.flagchip.com.cn/Pro/3/3.html) |
-| Infineon | TC4D9-EVB | AURIX ™ TC4x | [openvela Running Guide for TC4D9-EVB](../quickstart/development_board/tc4d9_evb_guide.md) | Vehicle Motion Controllers, Zonal Controllers, Automotive Gateways, etc. | [Contact Distributor](https://www.infineon.cn/contact-us/where-to-buy) | [Contact Distributor](https://www.infineon.cn/contact-us/where-to-buy) |
\ No newline at end of file
+| Manufacturer | Board Model | Chip Model | Porting Guide | Typical Application Scenarios | Board Support |
+| ------------------ | ----------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------- |
+| STMicroelectronics | [STM32H750B-DK](https://www.st.com/en/evaluation-tools/stm32h750b-dk.html) | [STM32H750XB](https://www.st.com/en/microcontrollers-microprocessors/stm32h750xb.html) | [Deploy openvela on STM32H750](../quickstart/development_board/STM32H750.md) | Smart Home, Industrial Control, Medical Electronics | [ST MCU China Support](mailto:mcu.china@st.com) |
+| STMicroelectronics | STM32F411CEU6 | [STM32F411CE](https://www.st.com/en/microcontrollers-microprocessors/stm32f411ce.html) | [Blink an LED with openvela on STM32F411](../quickstart/development_board/STM32F411.md) | IoT, Industrial Automation | [ST MCU China Support](mailto:mcu.china@st.com) |
+| Espressif | [ESP32-S3-EYE](https://www.espressif.com/en/dev-board/esp32-s3-eye) | [ESP32-S3](https://www.espressif.com/en/products/socs/esp32-s3) | [Port openvela to the ESP32-S3-EYE Dev Board](../quickstart/development_board/ESP32-S3-EYE.md) | AIoT, HMI, Smart Home | [Espressif Developer Community](https://www.espressif.com/en/contact-us/technical-inquiries) |
+| Espressif | [ESP32-S3-BOX](https://www.espressif.com/en/news/ESP32-S3-BOX_video) | [ESP32-S3](https://www.espressif.com/en/products/socs/esp32-s3) | [See: Port openvela to the ESP32-S3-EYE Dev Board](../quickstart/development_board/ESP32-S3-EYE.md) | AIoT, HMI, Smart Home | [Espressif Developer Community](https://www.espressif.com/en/contact-us/technical-inquiries) |
+| Bestechnic | [BES2600WM MAIN BOARD V1.1](https://www.fortune-co.com/index.php?s=/Cn/Public/singlePage/catid/176.html) | BES2600WM-AX4F | [Readme](../../../../../vendor_bes/blob/dev/boards/best2003_ep/aos_evb/Readme) | Smart Wearables, AI Toys | [Contact Distributor](https://www.fortune-co.com/Tech/projectDetail/id/64.html) |
+| Flagchip | [FC7300F8M-EVB](https://www.flagchip.com.cn/Pro/3/3.html) | [FC7300F8MDT](https://www.flagchip.com.cn/Pro/3/3.html) | [openvela Running Guide for FC7300F8M-EVB](../quickstart/development_board/fc7300f8m_evb_guide.md) | Domain/Zonal Controllers, ADAS, BMS, Motor Control, etc. | [Contact Distributor](https://www.flagchip.com.cn/Pro/3/3.html) |
+| Infineon | [TC4D9-EVB](https://itools.infineon.com/aurix_tc4xx_code_examples/documents/Board_Users_Manual_TriBoard-TC4X9-COM-V2_0_0.pdf) | [AURIX™ TC4x](https://www.infineon.cn/products/microcontroller/32-bit-tricore/aurix-tc4x/tc4dx#products) | [openvela Running Guide for TC4D9-EVB](../quickstart/development_board/tc4d9_evb_guide.md) | Vehicle Motion Controllers, Zonal Controllers, Automotive Gateways, etc. | [Contact Distributor](https://www.infineon.cn/contact-us/where-to-buy) |
\ No newline at end of file
diff --git a/en/edge_ai_dev/configure_tflite_micro_dev_env.md b/en/edge_ai_dev/configure_tflite_micro_dev_env.md
new file mode 100644
index 00000000..d3ebb3fa
--- /dev/null
+++ b/en/edge_ai_dev/configure_tflite_micro_dev_env.md
@@ -0,0 +1,143 @@
+# Configure TFLite Micro Development Environment
+
+[ English | [简体中文](../../zh-cn/edge_ai_dev/configure_tflite_micro_dev_env.md) ]
+
+Before developing TensorFlow Lite for Microcontrollers (TFLite Micro) applications on the openvela platform, the compilation environment and dependent libraries must be configured correctly. This section walks developers through confirming the source code, configuring library dependencies, and defining a memory allocation strategy.
+
+## I. Prerequisites
+
+Before starting, please ensure that the following preparations have been completed:
+
+- **Basic Environment**: Refer to the [Official Documentation](../quickstart/openvela_ubuntu_quick_start.md) to complete the deployment of the openvela basic development environment.
+
+- **Source Code Confirmation**: The TFLite Micro source code has been integrated into the openvela code repository at the following path:
+
+ - `apps/mlearning/tflite-micro/`
+
+## II. Component and Dependency Library Support
+
+TFLite Micro relies on specific mathematical and utility libraries to implement model parsing and operator acceleration. The openvela repository has pre-configured the following key components:
+
+| **Component Name** | **Functional Description** | **Source Path** |
+| :----------------- | :--------------------------------------------------------------------------------------------------------------- | :------------------------- |
+| **FlatBuffers** | Library supporting the TFLite model serialization format; provides necessary headers. | `apps/system/flatbuffers/` |
+| **Gemmlowp** | Google's low-precision general matrix multiplication library, used for quantized operations. | `apps/math/gemmlowp/` |
+| **Ruy** | TensorFlow's high-performance matrix multiplication backend, mainly optimizing fully connected layer operations. | `apps/math/ruy/` |
+| **KissFFT** | Lightweight Fast Fourier Transform library, supporting fixed-point and floating-point operations. | `apps/math/kissfft/` |
+| **CMSIS-NN** | Neural network kernel optimization library dedicated to ARM Cortex-M (optional). | `apps/mlearning/cmsis-nn/` |
+
+## III. Compilation Configuration (Kconfig)
+
+Enable necessary library support through the `menuconfig` graphical interface to ensure successful compilation and optimize code size.
+
+Launch the configuration menu:
+
+```Bash
+cmake --build cmake_out/goldfish-arm64-v8a-ap -t menuconfig
+```
+
+Please complete the configuration of the following four core modules in order:
+
+### 1. Enable C++ Runtime Support
+
+TFLite Micro is written against the C++11/14 standards, so LLVM libc++ support must be enabled.
+
+- **Configuration Path**: `Library Routines` -> `C++ Library`
+- **Action**: Select `LLVM libc++ C++ Standard Library`
+
+```Plain
+(Top) → Library Routines → C++ Library
+
+( ) Toolchain C++ support
+( ) Basic C++ support
+(X) LLVM libc++ C++ Standard Library
+```
+
+### 2. Enable Math Acceleration Libraries
+
+Enable matrix operation and signal processing libraries based on model requirements.
+
+- **Configuration Path**: `Application Configuration` -> `Math Library Support`
+- **Action**: Select `Gemmlowp`, `kissfft`, and `Ruy`
+
+```Plain
+(Top) → Application Configuration → Math Library Support
+
+[*] Gemmlowp
+[*] kissfft
+[ ] LibTomMath MPI Math Library
+[*] Ruy
+```
+
+### 3. Enable FlatBuffers Support
+
+Enable the system-level FlatBuffers library to support model parsing.
+
+- **Configuration Path**: `Application Configuration` -> `System Libraries and NSH Add-Ons`
+- **Action**: Select `flatbuffers`
+
+```Plain
+(Top) → Application Configuration → System Libraries and NSH Add-Ons
+
+[*] flatbuffers
+```
+
+### 4. Enable TFLite Micro Core
+
+- **Configuration Path**: `Application Configuration` -> `Machine Learning Support`
+- **Action**: Select `TFLiteMicro`. If ARM hardware acceleration is required, it is recommended to also select `CMSIS_NN Library`.
+
+```Plain
+(Top) → Application Configuration → Machine Learning Support
+
+[ ] CMSIS_NN Library
+[*] TFLiteMicro
+[ ] Print tflite-micro's debug message
+```
+
+## IV. Memory Allocation Strategy
+
+Embedded systems have limited memory resources. TFLite Micro requires a continuous memory area (Tensor Arena) to store input/output tensors and intermediate calculation results.
+
+### 1. Static Allocation (Recommended)
+
+For production environments, static array allocation is recommended. This method eliminates the risk of memory fragmentation, and memory usage is known at compile time.
+
+**Implementation Example**:
+
+```C++
+// Define in the global area of the application code
+// Note: Memory must be aligned to 16 bytes to meet SIMD instruction requirements
+#define TENSOR_ARENA_SIZE (100 * 1024)
+static uint8_t tensor_arena[TENSOR_ARENA_SIZE] __attribute__((aligned(16)));
+```
+
+### 2. Determine Arena Size
+
+To precisely set `TENSOR_ARENA_SIZE` and avoid waste or overflow, you can use `RecordingMicroInterpreter` to capture actual memory usage at runtime.
+
+**Debugging Steps**:
+
+1. Include the recorder header file.
+2. Use `RecordingMicroInterpreter` to replace the standard `MicroInterpreter`.
+3. Run model inference once (Invoke).
+4. Read the actual usage and add a safety margin (roughly 1 KB extra is suggested).
+
+```C++
+#include "tensorflow/lite/micro/recording_micro_interpreter.h"
+
+// 1. Create recording allocator
+auto* allocator = tflite::RecordingMicroAllocator::Create(tensor_arena, arena_size);
+
+// 2. Instantiate recording interpreter
+tflite::RecordingMicroInterpreter interpreter(model, resolver, allocator);
+
+// 3. Allocate tensors and execute inference
+interpreter.AllocateTensors();
+interpreter.Invoke();
+
+// 4. Get memory statistics
+size_t used = interpreter.arena_used_bytes(); // Actual usage
+interpreter.GetMicroAllocator().PrintAllocations(); // Itemized details
+size_t recommended = used + 1024; // Reserve at least ~1KB extra space
+```
diff --git a/en/edge_ai_dev/model_integration.md b/en/edge_ai_dev/model_integration.md
new file mode 100644
index 00000000..ee967452
--- /dev/null
+++ b/en/edge_ai_dev/model_integration.md
@@ -0,0 +1,167 @@
+# Model Conversion and Code Integration
+
+[ English | [简体中文](../../zh-cn/edge_ai_dev/model_integration.md) ]
+
+In openvela development, due to the limited RAM resources of microcontrollers (MCUs) and the fact that a complete file system may not be mounted, directly reading `.tflite` files is often not feasible. The standard practice is to convert the trained TensorFlow Lite model into a C language array and compile it into the application firmware as read-only data (RODATA), allowing for direct execution from Flash.
+
+This section guides developers on how to convert a model into a C array and integrate it into an openvela C++ application (such as `helloxx`).
+
+## I. Model Conversion (TFLite to C Array)
+
+To embed the model into firmware, we need to use tools to convert the binary `.tflite` file into a C source code file.
+
+### 1. Prepare Model File
+
+This tutorial uses the TensorFlow Lite Micro official Hello World model (sine wave prediction). To match the code logic below, we need to download the Float32 (floating point) version of the model.
+
+- Download Link: [hello_world_float.tflite](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/examples/hello_world/models/hello_world_float.tflite) (Google Official Example)
+
+Please rename the downloaded file to `converted_model.tflite` and place it in the current directory.
+
+### 2. Convert Using xxd Tool
+
+In a Linux/Unix environment, use the `xxd` command to generate the source file containing the model data:
+
+```Bash
+# Convert converted_model.tflite to model_data.cc
+xxd -i converted_model.tflite > model_data.cc
+```
+
+### 3. Optimize Model Array Declaration
+
+The default output generated by `xxd` is similar to the following:
+
+```C++
+unsigned char converted_model_tflite[] = { 0x18, 0x00, ...};
+unsigned int converted_model_tflite_len = 18200;
+```
+
+**Key Optimization Steps**:
+
+To save valuable RAM resources and ensure stable program operation, you **must** modify the generated array:
+
+1. **Add `const`**: Place model data in Flash (RODATA segment) to avoid occupying RAM.
+2. **Add Memory Alignment**: TFLite Micro requires the model data start address to be 16-byte aligned.
+
+Please open `model_data.cc`, copy the array content, and paste it directly into the main program file `helloxx_main.cxx` (recommended):
+
+```C++
+// Add alignas(16) to meet TFLite memory alignment requirements
+// Add const to place data in Flash, saving RAM
+alignas(16) const unsigned char converted_model_tflite[] = {
+ 0x18, 0x00, ...
+};
+const unsigned int converted_model_tflite_len = 18200;
+```
+
+## II. Integration into Application
+
+This section modifies the standard C++ example program `apps/examples/helloxx` in openvela to demonstrate how to integrate TFLite Micro.
+
+### 1. Modify Build System
+
+When compiling the application, the TFLite Micro header file paths and build rules need to be included. Edit `apps/examples/helloxx/CMakeLists.txt`; you can refer to the following content:
+
+```CMake
+if(CONFIG_EXAMPLES_HELLOXX)
+ nuttx_add_application(
+ NAME
+ helloxx
+ STACKSIZE
+ 10240
+ MODULE
+ ${CONFIG_EXAMPLES_HELLOXX}
+ SRCS
+ helloxx_main.cxx
+ DEPENDS
+ tflite_micro
+ DEFINITIONS
+ TFLITE_WITH_STABLE_ABI=0
+ TFLITE_USE_OPAQUE_DELEGATE=0
+ TFLITE_SINGLE_ROUNDING=0
+ TF_LITE_STRIP_ERROR_STRINGS
+ TF_LITE_STATIC_MEMORY
+ COMPILE_FLAGS
+ -Wno-error)
+endif()
+```
+
+### 2. Modify Configuration
+
+- Refer to [Configure TFLite Micro Development Environment](./configure_tflite_micro_dev_env.md) to configure the compilation environment and dependent libraries.
+- Enable the example application: In the configuration menu (`menuconfig`), navigate to `Application Configuration` -> `Examples`, and check `"Hello, World!" C++ example` (i.e., `helloxx`).
+
+### 3. Implement Inference Logic
+
+Integrating TFLite Micro in the code mainly involves five standard steps:
+
+1. **Load Model**: Load the model structure from the C array.
+2. **Register Operators**: Instantiate `OpResolver` and register the operators required by the model.
+3. **Prepare Environment**: Instantiate `Interpreter` and allocate the Tensor Arena (tensor memory pool).
+4. **Write Input**: Fill the input tensor with sensor data or test data.
+5. **Execute and Read**: Call `Invoke()` and read the output tensor.
+
+Open `apps/examples/helloxx/helloxx_main.cxx`, which needs to include the following core logic:
+
+```C++
+#include <cstdio>   // printf
+#include <memory>   // std::unique_ptr
+#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
+#include "tensorflow/lite/micro/micro_interpreter.h"
+#include "tensorflow/lite/schema/schema_generated.h"
+
+// ==========================================================
+// Model Data Definition (Recommend pasting content generated by xxd directly and modifying modifiers)
+// ==========================================================
+alignas(16) const unsigned char converted_model_tflite[] = {
+ // ... Paste specific hex data generated by xxd -i here ...
+ 0x1c, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33, // Example header
+ // ... Middle data omitted ...
+};
+const unsigned int converted_model_tflite_len = 18200; // Please fill in the actual length
+
+static void test_inference(const void* file_data, size_t arenaSize) {
+ // 1. Load model
+ const tflite::Model* model = tflite::GetModel(file_data);
+
+ // 2. Register operators
+ // Note: Only the FullyConnected operator is registered here; add others based on actual model requirements
+ tflite::MicroMutableOpResolver<1> resolver;
+ resolver.AddFullyConnected(tflite::Register_FULLY_CONNECTED());
+
+ // 3. Allocate memory and instantiate interpreter
+  std::unique_ptr<uint8_t[]> pArena(new uint8_t[arenaSize]);
+ // Create an interpreter instance. The interpreter requires the model, operator resolver, and memory buffer as inputs
+ tflite::MicroInterpreter interpreter(model,
+ resolver, pArena.get(), arenaSize);
+
+ // Allocate tensor memory
+ interpreter.AllocateTensors();
+
+ // 4. Write input data
+ TfLiteTensor* input_tensor = interpreter.input(0);
+  float* input_tensor_data = tflite::GetTensorData<float>(input_tensor);
+
+ // Test case: Input x = pi/2 (1.5708), expected model output y approx 1.0
+ float x_value = 1.5708f;
+ input_tensor_data[0] = x_value;
+
+ // 5. Execute inference
+ interpreter.Invoke();
+
+ // Read output result
+ TfLiteTensor* output_tensor = interpreter.output(0);
+  float* output_tensor_data = tflite::GetTensorData<float>(output_tensor);
+ printf("Output value after inference: %f\n", output_tensor_data[0]);
+}
+```
+
+### 4. Verify Results
+
+After compiling and flashing the firmware, run the `helloxx` command. The terminal should output the following inference result:
+
+```Plain
+Output value after inference: 0.99999
+```
+
+If the output value is close to 1.0, it indicates that the model has been successfully loaded on the openvela platform and has completed a sine wave inference calculation.
diff --git a/en/edge_ai_dev/tflite_micro_integration.md b/en/edge_ai_dev/tflite_micro_integration.md
new file mode 100644
index 00000000..647ebedd
--- /dev/null
+++ b/en/edge_ai_dev/tflite_micro_integration.md
@@ -0,0 +1,323 @@
+# TFLite Micro Architecture Analysis and Integration
+
+[ English | [简体中文](../../zh-cn/edge_ai_dev/tflite_micro_integration.md) ]
+
+Integrating TensorFlow Lite for Microcontrollers (TFLite Micro) on the openvela platform requires developers to deeply understand its layered software architecture, component dependencies, and hardware acceleration mechanisms. This document details the complete architectural design of TFLite Micro on the openvela platform to guide developers through efficient integration.
+
+## I. Prerequisite Concepts and Terminology
+
+To better understand how TFLite Micro operates in an embedded environment, developers must first understand the following core concepts. These terms are used throughout the integration process.
+
+| **Term** | **Definition** | **openvela Platform Context** |
+| :------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :----------------------------------------------------------------------------------------------------------------- |
+| **TFLite Micro (TFLM)** | The microcontroller version of TensorFlow, a lightweight inference framework designed for resource-constrained (KB-level memory) devices. | The core inference engine running on openvela. |
+| **Tensor Arena** | A pre-allocated large contiguous memory region. TFLM avoids `malloc/free`, placing model inputs, outputs, and intermediate calculation data entirely within this area. | Determines the maximum model size the system can run; requires careful configuration based on SRAM size. |
+| **FlatBuffers** | An efficient serialization format. Model files are stored in this format, allowing data to be read directly from Flash. | Model data is typically compiled directly into firmware or stored in the filesystem. |
+| **Operator (Op) / Kernel** | Concrete implementation of operators in neural networks (e.g., Conv2D, Softmax). A Kernel is the specific C++ code for an Op. | Standard Kernels can be replaced via **CMSIS-NN** to leverage openvela hardware acceleration features. |
+| **Op Resolver** | Operator resolver. Used to locate and register required operator implementations at runtime. | It is recommended to use `MicroMutableOpResolver` to register on demand, avoiding firmware bloat from unused code. |
+| **Quantization** | Technique converting 32-bit floating-point numbers to 8-bit integers to reduce model size and accelerate computation. | openvela recommends running `int8` quantized models for optimal performance. |
+
+## II. Software Stack Hierarchy
+
+The TFLite Micro software stack on the openvela platform adopts a modular layered design, achieving decoupling from the underlying hardware abstraction to the upper-level application interface.
+
+### 1. Overall Architecture Overview
+
+```Plain
+┌────────────────────────────────────────────────────────────────┐
+│ Application Layer │
+│ ┌──────────────┐ ┌──────────────┐ ┌──────────────────────┐ │
+│ │ Voice Recog. │ │ Image Detect │ │ Sensor Analysis │ │
+│ └──────────────┘ └──────────────┘ └──────────────────────┘ │
+└────────────────────────────────────────────────────────────────┘
+ │
+ ▼
+┌────────────────────────────────────────────────────────────────┐
+│ Inference API Layer │
+│ ┌──────────────────────────────────────────────────────────┐ │
+│ │ Model Loading │ Tensor Management │ Inference API │ │
+│ └──────────────────────────────────────────────────────────┘ │
+└────────────────────────────────────────────────────────────────┘
+ │
+ ▼
+┌────────────────────────────────────────────────────────────────┐
+│ TFLite Micro Framework Layer │
+│ ┌─────────────────┐ ┌─────────────────────────────────────┐ │
+│ │ Micro Interpreter│ │ Operator Kernels (incl. CMSIS-NN │ │
+│ ├─────────────────┤ │ / Custom Accel Kernels) │ │
+│ │ Memory Planner │ ├─────────────────────────────────────┤ │
+│ ├─────────────────┤ │ CONV │ FC │ POOL │ RELU │ ... │ │
+│ │ FlatBuffer Parser│ └─────────────────────────────────────┘ │
+│ └─────────────────┘ │
+└────────────────────────────────────────────────────────────────┘
+ │
+ ▼
+┌────────────────────────────────────────────────────────────────┐
+│ RTOS / Platform Service Layer (NuttX Drivers, FS, etc.) │
+│ ┌──────────────────────────────────────────────────────────┐ │
+│ │ Task Scheduler │ Memory Mgmt │ Drivers │ File Sys │ │
+│ └──────────────────────────────────────────────────────────┘ │
+└────────────────────────────────────────────────────────────────┘
+ │
+ ▼
+┌────────────────────────────────────────────────────────────────┐
+│ Hardware Platform │
+│ ARM Cortex-M │ RISC-V │ ESP32 │ Custom SoC │
+└────────────────────────────────────────────────────────────────┘
+
+```
+
+### 2. Application Layer: Inference API
+
+The application layer encapsulates core functions such as model loading, inference execution, and result retrieval via C/C++ APIs. Developers should focus on how to initialize the interpreter and efficiently handle tensor data.
+
+#### Inference Program Implementation Example
+
+The following code demonstrates the standard process for executing a complete inference in the openvela environment:
+
+```C++
+static void test_inference(void* file_data, size_t arenaSize) {
+ // 1. Load the model
+ const tflite::Model* model = tflite::GetModel(file_data);
+ printf("arenaSize: %d\n", (int)arenaSize);
+
+ // 2. Manually register operators
+ tflite::MicroMutableOpResolver<1> resolver;
+ resolver.AddFullyConnected(tflite::Register_FULLY_CONNECTED());
+
+ // 3. Prepare Tensor Arena (Memory Pool)
+    std::unique_ptr<uint8_t[]> pArena(new uint8_t[arenaSize]);
+
+ // 4. Create Interpreter Instance
+ // The interpreter requires the model, operator resolver, and memory buffer as inputs
+ tflite::MicroInterpreter interpreter(model,
+ resolver, pArena.get(), arenaSize);
+
+ // 5. Allocate Tensor Memory
+ interpreter.AllocateTensors();
+
+ // 6. Populate Input Data
+ TfLiteTensor* input_tensor = interpreter.input(0);
+    float* input_tensor_data = tflite::GetTensorData<float>(input_tensor);
+
+ // Example: Test input x = π/2, expect y ≈ 1.0
+ float x_value = 1.5708f;
+ input_tensor_data[0] = x_value;
+
+ // 7. Execute Inference
+ interpreter.Invoke();
+
+ // 8. Retrieve Output Results
+ TfLiteTensor* output_tensor = interpreter.output(0);
+    float* output_tensor_data = tflite::GetTensorData<float>(output_tensor);
+ syslog(LOG_INFO, "Output value after inference: %f\n", output_tensor_data[0]);
+}
+```
+
+### 3. Framework Layer: TFLite Micro Core Components
+
+The framework layer is the core of TFLite Micro, responsible for critical functions such as model parsing, memory management, and operator scheduling. This layer ensures extremely low system overhead on the openvela platform through static memory allocation and a streamlined runtime environment.
+
+#### Micro Interpreter
+
+The interpreter acts as the hub of the framework, coordinating processes like model loading, memory allocation, and operator execution. It contains three core sub-components:
+
+1. Model Parser
+
+ - Parses model files in FlatBuffers format.
+ - Extracts model metadata: operator types, tensor dimensions, quantization parameters.
+ - Constructs the computation graph data structure.
+
+2. Subgraph Manager
+
+ - Manages the computation subgraphs of the model (embedded models typically contain only one subgraph).
+ - Maintains the topological relationship of nodes (operators) and edges (tensors).
+
+3. Invocation Engine
+
+ - Executes operators in topological order.
+ - Manages input/output tensor bindings for operators.
+ - Handles operator execution errors and exceptions.
+
+**The interpreter execution flow is as follows:**
+
+```Plain
+Initialization Phase (Setup):
+1. AllocateTensors() → Plan and allocate memory space for all tensors (Tensor Arena)
+
+
+Inference Phase (Inference):
+1. interpreter.input() → Populate input tensors and fill data
+2. Invoke() → Trigger inference loop
+ ├─ for each node in execution_plan (Iterate through every Node in the plan):
+ │ ├─ Get operator registration info (Registration)
+ │ ├─ Bind input/output tensors
+ │ └─ Call the operator's Invoke function
+ └─ Return execution status
+3. interpreter.output() → Read output tensor results
+```
+
+#### Operator Kernels Library
+
+Operator Kernels are the concrete implementations that perform mathematical operations (e.g., convolution, fully connected). TFLite Micro uses a registration mechanism to decouple the framework from specific algorithm implementations, making it very easy to replace specific operators on openvela (e.g., using hardware-accelerated convolution).
+
+**Operator Interface Specification**
+
+If developers need to customize operators or encapsulate hardware acceleration drivers, they must adhere to the `TfLiteRegistration` interface definition:
+
+```C++
+typedef struct {
+
+ // [Optional] Init: Allocate persistent memory required by the operator (e.g., filter coefficient tables)
+ void* (*init)(TfLiteContext* context, const char* buffer, size_t length);
+
+ // [Optional] Free: Clean up resources allocated by init
+ void (*free)(TfLiteContext* context, void* buffer);
+
+ // [Required] Prepare: Validate tensor dimensions/types, calculate temporary buffer (Scratch Buffer) size
+ TfLiteStatus (*prepare)(TfLiteContext* context, TfLiteNode* node);
+
+ // [Required] Invoke: Core calculation logic, read data from Input Tensor, write to Output Tensor
+ TfLiteStatus (*invoke)(TfLiteContext* context, TfLiteNode* node);
+} TfLiteRegistration;
+```
+
+**Operator Implementation Reference: ReLU**
+
+The following code demonstrates the implementation logic of a standard ReLU activation function, reflecting TFLite Micro's encapsulation of type safety and memory operations:
+
+```C++
+// 1. Preparation Phase: Validate data types and dimensions
+TfLiteStatus ReluPrepare(TfLiteContext* context, TfLiteNode* node)
+{
+ // Validation: Number of input/output tensors
+ TF_LITE_ENSURE_EQ(context, node->inputs->size, 1);
+ TF_LITE_ENSURE_EQ(context, node->outputs->size, 1);
+
+ const TfLiteTensor* input = GetInput(context, node, 0);
+ TfLiteTensor* output = GetOutput(context, node, 0);
+
+ // Validation: Tensor type
+ TF_LITE_ENSURE_TYPES_EQ(context, input->type, kTfLiteFloat32);
+
+ // Configuration: Resize output tensor shape to match input
+ return context->ResizeTensor(context, output, TfLiteIntArrayCopy(input->dims));
+}
+
+// 2. Execution Phase: Numerical Calculation
+TfLiteStatus ReluInvoke(TfLiteContext* context, TfLiteNode* node)
+{
+ const TfLiteTensor* input = GetInput(context, node, 0);
+ TfLiteTensor* output = GetOutput(context, node, 0);
+
+    const float* input_data = GetTensorData<float>(input);
+    float* output_data = GetTensorData<float>(output);
+
+ // Get total data length
+ const int flat_size = MatchingFlatSize(input->dims, output->dims);
+
+ // Execute ReLU: output = max(0, input)
+ for (int i = 0; i < flat_size; ++i) {
+ output_data[i] = (input_data[i] > 0.0f) ? input_data[i] : 0.0f;
+ }
+
+ return kTfLiteOk;
+}
+
+// 3. Registration Phase: Return function pointer structure
+TfLiteRegistration* Register_RELU()
+{
+ static TfLiteRegistration r = {
+ nullptr, // init
+ nullptr, // free
+ ReluPrepare, // prepare
+ ReluInvoke // invoke
+ };
+ return &r;
+}
+```
+
+**Operator Library Source Directory Structure**
+
+In the `tensorflow/lite/micro/kernels/` directory, code is organized by operator function:
+
+```Plain
+tensorflow/lite/micro/kernels/
+├── conv.cc # Convolution operator
+├── depthwise_conv.cc # Depthwise separable convolution
+├── fully_connected.cc # Fully connected layer
+├── pooling.cc # Pooling operator
+├── activations.cc # Activation functions (ReLU, Sigmoid, etc.)
+├── softmax.cc # Softmax
+├── add.cc, mul.cc, sub.cc # Element-wise operations
+├── reshape.cc, transpose.cc # Tensor transformation
+└── ...
+```
+
+#### Memory Planner
+
+The Memory Planner is the key technology for TFLite Micro to achieve low memory footprint. Unlike the dynamic memory allocation in desktop TensorFlow, Micro implements memory reuse by analyzing tensor lifecycles.
+
+## III. Platform Dependencies and Integration
+
+Running TFLite Micro on the openvela platform is not an isolated process; it depends deeply on underlying OS services and hardware libraries. Understanding these dependencies is crucial for performance tuning and troubleshooting.
+
+### 1. NuttX Kernel Services
+
+TFLite Micro interacts with the NuttX RTOS through a platform abstraction layer. Although TFLite Micro is designed to be OS-independent, reasonable OS configuration on openvela can significantly improve system stability.
+
+#### Task Scheduling and Synchronization
+
+NuttX provides complete POSIX standard support. TFLite Micro inference tasks are typically encapsulated in standard `pthread` or NuttX Tasks.
+
+#### Memory Allocator
+
+TFLite Micro recommends using the **Tensor Arena** mechanism for memory management. However, during the initialization phase or when processing non-tensor data, it may still interact with the NuttX memory manager (mm).
+
+**Tensor Arena Allocation Strategy**
+
+Although `malloc` can be used to dynamically request the Arena, static allocation is strongly recommended.
+
+```C++
+// Recommended: Determine size at compile time, place in BSS segment or specific memory segment (e.g., CCM)
+// Method to estimate size: allocate a generous space first, call MicroInterpreter::arena_used_bytes() to get actual usage, then adjust
+#define ARENA_SIZE (100 * 1024)
+static uint8_t tensor_arena[ARENA_SIZE] __attribute__((aligned(16)));
+```
+
+### 2. Hardware Acceleration: CMSIS-NN Integration
+
+To improve inference performance on ARM Cortex-M cores (the primary computing unit of openvela), the CMSIS-NN library must be integrated. This library utilizes the SIMD (Single Instruction, Multiple Data) instruction set, which can increase the performance of convolution and matrix multiplication by 4-5 times.
+
+#### Build System Configuration (Makefile)
+
+When integrating CMSIS-NN, the core logic is **replacement**: introducing optimized version source files while removing TFLite's built-in general reference implementations (Reference Kernels) from the compilation list to avoid symbol definition conflicts.
+
+The following is a configuration template for the NuttX build system:
+
+```Makefile
+# Check if CMSIS-NN option is enabled in Kconfig
+ifneq ($(CONFIG_MLEARNING_CMSIS_NN),)
+
+# 1. Define Macro: Inform TFLite Micro to enable CMSIS-NN path
+COMMON_FLAGS += -DCMSIS_NN
+
+# Add header file search path
+COMMON_FLAGS += ${INCDIR_PREFIX}$(APPDIR)/mlearning/cmsis-nn/cmsis-nn
+
+# 2. Find optimized source files: Get all .cc files in the cmsis_nn directory
+CMSIS_NN_SRCS := $(wildcard $(TFLM_DIR)/tensorflow/lite/micro/kernels/cmsis_nn/*.cc)
+
+# 3. Exclude conflicting files:
+# Calculate the filenames of generic implementations to exclude (e.g., conv.cc, fully_connected.cc)
+# Logic: Take filenames from CMSIS_NN_SRCS and map them to the kernels/ root directory
+UNNEEDED_SRCS := $(addprefix $(TFLM_DIR)/tensorflow/lite/micro/kernels/, $(notdir $(CMSIS_NN_SRCS)))
+
+# 4. Filter out these generic implementations from the original compilation list CXXSRCS
+CXXSRCS := $(filter-out $(UNNEEDED_SRCS), $(CXXSRCS))
+
+# 5. Add the optimized source files to the compilation list
+CXXSRCS += $(CMSIS_NN_SRCS)
+
+endif
+```
diff --git a/en/edge_ai_dev/tflite_micro_overview.md b/en/edge_ai_dev/tflite_micro_overview.md
new file mode 100644
index 00000000..15af4176
--- /dev/null
+++ b/en/edge_ai_dev/tflite_micro_overview.md
@@ -0,0 +1,424 @@
+# TFLite Micro Overview
+
+[ English | [简体中文](../../zh-cn/edge_ai_dev/tflite_micro_overview.md) ]
+
+TensorFlow Lite for Microcontrollers (hereinafter referred to as TFLite Micro) is a lightweight machine learning inference framework designed by Google specifically for resource-constrained embedded devices. As a streamlined version of TensorFlow Lite, this framework is deeply optimized for the characteristics of microcontrollers (MCUs), supporting the execution of complex neural network models on devices with only tens of KB of RAM and hundreds of KB of Flash.
+
+This document aims to introduce the core architecture of TFLite Micro, its technical challenges, and its integration value and application scenarios on the openvela platform.
+
+## I. Core Features and Development Workflow
+
+### 1. Core Features
+
+TFLite Micro addresses the core pain points of embedded AI through the following features:
+
+- **Lightweight Design**: The core runtime library is extremely minimal and requires no operating system support, allowing it to run directly in a bare-metal environment. The framework adopts a static memory allocation strategy, eliminating the overhead of dynamic memory management and the risk of fragmentation.
+- **Low Power Optimization**: Optimized for the power characteristics of embedded devices, it supports quantized models such as INT8. While ensuring inference accuracy, it significantly reduces computation volume and power consumption, supporting long-term operation of AI applications on battery-powered devices.
+- **Broad Hardware Ecosystem**: It supports various mainstream MCU architectures such as ARM Cortex-M, RISC-V, and Xtensa, and provides optimized operator implementations for specific hardware platforms to fully utilize hardware acceleration capabilities.
+
+### 2. Development Workflow
+
+TFLite Micro provides comprehensive toolchain support. The typical development process is as follows:
+
+1. **Model Training**: Train the model using TensorFlow or Keras.
+2. **Model Conversion**: Convert the trained model to the TFLite format (`.tflite`) and use quantization techniques to reduce model size and minimize accuracy loss.
+3. **Integration and Deployment**: Convert the transformed model into a C array or binary file and integrate it into the openvela project for execution.
+
+Integrating TFLite Micro into the openvela system empowers IoT devices with edge intelligence, reducing cloud dependency and operating costs while protecting user privacy and achieving faster response speeds.
+
+## II. Challenges of AI Inference on Microcontrollers
+
+Deploying AI inference on microcontrollers involves multiple technical challenges regarding resources, real-time performance, and model size.
+
+### 1. Resource Constraints
+
+The extremely limited hardware resources of microcontrollers are the primary challenge facing edge AI inference:
+
+#### Memory Constraints
+
+- Typical IoT MCUs have only 32KB to 512KB of RAM and approximately 256KB to 2MB of Flash.
+- In contrast, even a simple deep learning model may require several MBs of parameter storage space.
+
+**Coping Strategies**:
+
+- Models undergo quantization compression, converting floating-point parameters to INT8 or lower precision.
+- The inference framework itself is extremely lightweight, with runtime overhead controlled within a few tens of KB.
+- Adoption of a static memory allocation strategy to avoid dynamic memory fragmentation.
+- Optimization of intermediate calculation result storage to achieve tensor buffer reuse.
+
+#### Computing Power Limitations
+
+- MCU clock speeds are typically in the range of tens to hundreds of MHz, often lacking Floating Point Units (FPU) or supporting only single-precision floating-point arithmetic, let alone GPUs or dedicated AI accelerators. This means complex matrix operations require extensive optimization to meet real-time requirements.
+
+**Coping Strategies**:
+
+- Fully utilize hardware features (such as ARM Cortex-M SIMD instructions) to optimize matrix operations.
+- Operator implementations require assembly-level optimization for specific architectures.
+- Restricted model structure selection, favoring computationally efficient lightweight network architectures (such as MobileNet, SqueezeNet).
+
+#### Power Constraints
+
+- Many IoT devices rely on battery power, operating at microwatt to milliwatt power levels. As a compute-intensive task, power control for AI inference is crucial.
+
+**Coping Strategies**:
+
+- Inference frequency needs to be optimized according to application scenarios to avoid continuous high-frequency operation.
+- Support for low-power modes, shutting down the inference engine during standby.
+- Quantized models not only reduce volume but also significantly lower computational power consumption.
+- Deep synergy with hardware power management mechanisms is required.
+
+### 2. Real-time Requirements
+
+Edge AI applications typically have strict latency constraints, which fundamentally distinguishes them from cloud inference.
+
+#### Low Latency Demands
+
+- Applications such as voice wake-up and gesture recognition require end-to-end latency from data acquisition to inference result output to be within tens to hundreds of milliseconds.
+
+**Coping Strategies**:
+
+- Rapid startup of the inference engine to avoid cold start latency.
+- Efficient operator execution to reduce single inference time.
+- Optimization of data pre-processing flows to reduce conversion overhead from sensors to model inputs.
+
+#### Deterministic Execution
+
+- In a Real-Time Operating System (RTOS) environment, task scheduling requires predictable execution times.
+
+**Coping Strategies**:
+
+- Avoid unpredictable memory allocation operations.
+- Inference time should be relatively stable to facilitate task timing planning.
+- Support for interrupt-driven inference triggering mechanisms.
+
+#### Offline Priority
+
+- Edge devices cannot rely on network connections; all inference must be completed locally.
+
+**Coping Strategies**:
+
+- The model resides entirely in the device Flash.
+- No need for cloud-assisted data processing capabilities.
+- Ability to work normally even when the network is disconnected.
+
+### 3. Model Size Constraints
+
+Model size directly affects the feasibility of deployment, constituting the core contradiction of microcontroller AI.
+
+#### Storage Limits
+
+- Complete deep learning models (like ResNet-50) may exceed 100MB, while MCU Flash is typically only a few hundred KB to 2MB.
+
+**Coping Strategies**:
+
+- Models are compressed using techniques like pruning and distillation.
+- Quantization to INT8 can reduce model volume by 75%.
+- Selection of parameter-efficient model architectures (such as depthwise separable convolutions).
+
+#### Trade-off Between Accuracy and Size
+
+- Compressing models inevitably brings accuracy loss.
+
+**Coping Strategies**:
+
+- Maximize the compression ratio within an acceptable accuracy range.
+- Customize and fine-tune models for specific tasks.
+- Adopt Quantization-Aware Training (QAT) to reduce accuracy degradation.
+
+## III. Architecture Analysis of TFLite Micro
+
+TFLite Micro adopts an interpreter architecture and achieves extreme lightweighting through a series of design choices. It effectively addresses the aforementioned challenges, providing a viable AI inference solution for microcontrollers.
+
+### 1. Lightweight Interpreter Design
+
+TFLite Micro uses an interpreter architecture to run neural network models, but compared to traditional interpreters, it has undergone radical lightweight modifications.
+
+- **Model Format**: Uses FlatBuffers to serialize models, offering the following advantages:
+
+ - Zero-copy access: On devices supporting memory-mapped Flash (XIP), model data can be read directly from Flash without loading into RAM.
+ - Compact storage: Minimal metadata overhead; model file size is close to the actual parameter size.
+ - Fast parsing: No complex deserialization process required; the interpreter starts up quickly.
+ - Cross-platform compatibility: Compatible with standard TFLite model formats, allowing for a unified toolchain.
+
+- **Interpretation Execution Flow**:
+
+ - Model Loading: Model FlatBuffer constants reside in Flash/ROM and are accessed directly via pointers.
+ - Interpreter Initialization: Allocates the Tensor Arena (tensor workspace).
+ - Operator Registration: Loads corresponding implementations based on the operators used by the model.
+ - Inference Execution: Calls the `Invoke` function of operators in the order of the computation graph.
+ - Result Output: Reads inference results from the output tensor.
+
+- **Memory-Efficient Design Choices**:
+
+ - Static Computation Graph: The model structure is determined at model generation time, with no dynamic graph overhead.
+
+### 2. Minimal External Dependencies
+
+A key design principle of TFLite Micro is to **reduce external dependencies**, enabling it to run in various constrained environments.
+
+- **Small Standard Library Dependency**:
+
+ - Does not rely on `malloc`/`free`; all memory is allocated from the pre-allocated Arena.
+ - Provides streamlined alternative implementations (such as `micro_log`, `micro_time`).
+
+- **Operating System Neutral**:
+
+ - Can run in a bare-metal environment without an RTOS.
+ - Adapts to different systems through a Platform Abstraction Layer (PAL).
+ - RTOSs like NuttX, FreeRTOS, and Zephyr can be seamlessly integrated.
+
+- **Hardware Abstraction**:
+
+ - Adapts to different architectures (ARM, RISC-V, Xtensa, etc.) through conditional compilation.
+ - Provides optimized assembly kernels (such as ARM CMSIS-NN integration).
+ - Supports hardware accelerator interfaces (such as Arm Ethos-U NPU).
+
+### 3. Supported Operators and Models
+
+TFLite Micro provides a carefully selected set of operators covering the most commonly used neural network layers. A registration sketch after the lists below shows how an application pulls these operators in on demand.
+
+- **Convolution Operators** (Computer Vision Core):
+
+ - `CONV_2D`: Standard 2D convolution.
+ - `DEPTHWISE_CONV_2D`: Depthwise separable convolution (Core of MobileNet).
+ - Supports various padding modes (SAME, VALID) and stride configurations.
+
+- **Pooling and Activation**:
+
+ - `MAX_POOL_2D`, `AVERAGE_POOL_2D`: Downsampling layers.
+ - `RELU`: Common activation function.
+ - `SOFTMAX`: Classification layer.
+ - `TANH`, `LOGISTIC`: Common activations for recurrent networks.
+
+- **Fully Connected**:
+
+ - `FULLY_CONNECTED`: Fully connected layer.
+
+- **Tensor Operations**:
+
+ - `RESHAPE`, `SQUEEZE`, `EXPAND_DIMS`: Dimension transformations.
+ - `ADD`, `MUL`, `SUB`: Element-wise operations.
+
+- **Typical Supported Models**:
+
+ - **MobileNet V1**: Lightweight image classification.
+ - **Micro Speech**: Voice keyword recognition (Google official example).
+ - **Person Detection**: Human body detection.
+ - **Magic Wand**: Gesture recognition.
+ - Custom lightweight models (such as shallow CNNs, small RNNs).
+
+- **Quantization Support**:
+
+ - **INT8 Quantization**: Mainstream recommended method; parameters and activations are both 8-bit integers.
+ - **INT16 Activation**: Higher precision for intermediate calculations (partial operators).
+ - **Hybrid Quantization**: Key layers retain high precision, while others are quantized.
+ - Supports both Quantization-Aware Training (QAT) and Post-Training Quantization (PTQ).
+
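+As referenced above, the following minimal sketch shows how an application might register a subset of these operators on demand with `MicroMutableOpResolver`. The operator selection and capacity here are illustrative; register exactly the operators your model graph uses.
+
+```C++
+#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
+
+// The template argument is the resolver capacity; it must be at least
+// the number of operators registered below.
+static tflite::MicroMutableOpResolver<7> resolver;
+
+// Call once before constructing the MicroInterpreter.
+static void RegisterSelectedOps() {
+  resolver.AddConv2D();           // CONV_2D
+  resolver.AddDepthwiseConv2D();  // DEPTHWISE_CONV_2D (MobileNet core)
+  resolver.AddMaxPool2D();        // MAX_POOL_2D
+  resolver.AddRelu();             // RELU
+  resolver.AddFullyConnected();   // FULLY_CONNECTED
+  resolver.AddReshape();          // RESHAPE
+  resolver.AddSoftmax();          // SOFTMAX
+}
+```
+
+Registering operators on demand keeps unused kernel code out of the final firmware image.
+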
+### 4. Memory Management Mechanism
+
+TFLite Micro employs a unique static memory management strategy, which is key to its efficient operation on microcontrollers with extremely limited RAM resources (e.g., only a few tens of KB).
+
+#### Tensor Arena (Tensor Workspace)
+
+The Tensor Arena is the core concept of TFLite Micro's memory management.
+
+- **Definition and Allocation**: The application must allocate a contiguous block of memory (the Tensor Arena) before inference begins. The TFLite Micro runtime will allocate all intermediate tensors and temporary buffers from this area.
+- **Size Estimation**: Developers need to estimate the size of the Arena based on the complexity of the model.
+
+#### Memory Planning and Reuse
+
+To maximize the use of limited memory, the interpreter executes strict Memory Planning during the model loading phase.
+
+**Planning Process**:
+
+1. **Lifecycle Analysis**: Analyze the computation graph to determine the creation and destruction time points (lifecycle) of each tensor.
+2. **Dependency Construction**: Build a dependency graph between tensors to identify which tensors' lifecycles do not overlap, thus qualifying for memory reuse.
+3. **Address Allocation**: Use a greedy algorithm to calculate the memory offset of each tensor in the Arena.
+4. **Layout Generation**: Generate the final static memory layout plan (Memory Plan).
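+
+A deliberately simplified sketch of the reuse idea (the real planner's greedy algorithm is more sophisticated; this version just stacks each tensor above every already-placed tensor whose lifetime overlaps its own):
+
+```C++
+#include <algorithm>
+#include <cstddef>
+#include <vector>
+
+// A tensor's lifetime is the index range [first_use, last_use] of the
+// operations that produce and consume it.
+struct Tensor {
+  int first_use;
+  int last_use;
+  size_t size;
+  size_t offset;  // Assigned offset inside the Arena.
+};
+
+void PlanOffsets(std::vector<Tensor>& tensors) {
+  for (size_t i = 0; i < tensors.size(); ++i) {
+    size_t offset = 0;
+    for (size_t j = 0; j < i; ++j) {
+      const bool overlap = tensors[i].first_use <= tensors[j].last_use &&
+                           tensors[j].first_use <= tensors[i].last_use;
+      if (overlap) {
+        // Live at the same time: must not share memory, so place above it.
+        offset = std::max(offset, tensors[j].offset + tensors[j].size);
+      }
+      // Non-overlapping lifetimes: offsets may coincide, i.e. memory is reused.
+    }
+    tensors[i].offset = offset;
+  }
+}
+```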
+
+**Typical Reuse Case**:
+
+Assuming a simple network containing three layers with the following tensor lifecycles:
+
+- **Layer 1 (Conv2D)**: Generates Output Tensor A (Lifecycle covers Layer 1 to Layer 2).
+- **Layer 2 (ReLU)**: Uses Tensor A, generates Output Tensor B (Lifecycle covers Layer 2 to Layer 3).
+- **Layer 3 (MaxPool)**: Uses Tensor B, generates Output Tensor C (lifecycle extends from Layer 3 to the model output).
+
+Memory Allocation Result:
+
+- **Tensor A and Tensor C**: Since their lifecycles do not overlap (A is destroyed when Layer 2 ends, C is created when Layer 3 begins), the memory planner will arrange for them to **share the same physical memory address**.
+- **Tensor B**: Since B's lifecycle overlaps with both A and C, the planner will allocate independent memory space for it.
+
+#### Memory Alignment and Optimization
+
+To improve calculation efficiency, TFLite Micro implements multiple low-level optimizations at the memory management level:
+
+- **Address Alignment**: Defaults to alignment by a certain number of bytes (commonly 16, configurable) to fully utilize the SIMD (Single Instruction, Multiple Data) instruction sets of processors like ARM Cortex-M for accelerated computation.
+- **Weight Alignment**: Optimizes address alignment for model parameter weights to reduce CPU access cycles and improve read efficiency.
+- **Stack Optimization**: Optimizes function call paths to avoid deep nested calls, thereby reducing the occupation of system stack space.
+
+## IV. Integration Value of TFLite Micro on the openvela Platform
+
+The openvela platform is built on the NuttX RTOS, providing a unified and standardized software environment for IoT devices. The deep integration of TFLite Micro with openvela not only resolves underlying resource limitations but also fully unleashes the potential of edge intelligence applications.
+
+### 1. Deep Adaptation for IoT Scenarios
+
+The typical IoT terminals targeted by openvela, such as smart speakers, smart locks, environmental sensors, and wearable devices, have business characteristics that align highly with the design philosophy of TFLite Micro:
+
+- **Adherence to Local Processing Priority:**
+
+ - **Privacy Protection**: Ensures sensitive data like voice and images are processed entirely on the device, eliminating the risk of privacy leakage from cloud uploads.
+ - **Low Latency Response**: Local inference achieves millisecond-level response, avoiding network latency caused by cloud interaction (typically hundreds of milliseconds).
+ - **Offline Availability**: Even if the network is disconnected, the device can still perform core intelligent functions, ensuring a continuous user experience.
+
+- **Meeting Long-term Operation Needs:**
+
+ - **Power Optimization**: INT8 quantized models combined with openvela's low-power management support battery-powered devices running for months.
+ - **System Stability**: TFLite Micro's static memory allocation mechanism eliminates memory fragmentation and leakage risks, meeting strict requirements for 24/7 stable operation.
+ - **OTA Friendly**: Extremely small model sizes make Firmware Over-The-Air (FOTA) updates faster, more reliable, and data-saving.
+
+- **Cost-Sensitive Design**:
+
+ - **Lower Hardware Costs**: Supports implementing AI capabilities on low-cost general-purpose MCUs without deploying expensive dedicated NPU chips.
+ - **Operational Cost Savings**: Significantly reduces calls to cloud inference services, lowering server bandwidth and computing power costs.
+ - **Scalable Deployment**: The unified openvela platform shields underlying hardware differences, simplifying the management and maintenance of large-scale device fleets.
+
+### 2. Technical Advantages Based on NuttX
+
+As a POSIX-compliant real-time operating system, NuttX's lightweight and modular characteristics provide solid system-level support for TFLite Micro:
+
+- **Resource Management Synergy**:
+
+  - **Task Scheduling**: The TFLite Micro inference engine can run as a standard NuttX task, accepting system priority scheduling to ensure real-time performance of critical tasks (see the POSIX task sketch after the architecture diagram below).
+ - **Memory Isolation**: Utilizing NuttX's support for the MPU (Memory Protection Unit), the inference engine is effectively isolated from other system components, enhancing system security.
+ - **Power Management**: Combined with NuttX's PM (Power Management) framework, the system can automatically enter low-power modes during inference idle intervals.
+
+- **Driver and Ecosystem Integration**:
+
+ - **Data Acquisition**: NuttX's rich driver model (I2C, SPI, ADC, Video, Audio) simplifies the standardized acquisition of sensor data.
+ - **Storage Management**: Supports file systems like LittleFS, facilitating storage, reading, and version management of model files.
+ - **Network Communication**: The network protocol stack (TCP/IP, MQTT) provides a basic channel for remote model delivery and updates.
+
+- **Debugging and Diagnosis**:
+
+ - Integrates the `syslog` system to facilitate recording inference logs and error tracking.
+ - Supports GDB remote debugging, significantly accelerating development and optimization cycles.
+
+**Integration Architecture Diagram:**
+
+```Plain
+┌─────────────────────────────────────────┐
+│ openvela Application Layer │
+│ (Smart Home, Wearable, Industrial) │
+└─────────────────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────┐
+│ TFLite Micro Inference Engine │
+│ (Model Interpreter + Optimized Ops) │
+└─────────────────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────┐
+│ NuttX RTOS Core Services │
+│ (Task Scheduler, Memory, Drivers, FS) │
+└─────────────────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────┐
+│ Hardware Abstraction Layer │
+│ (ARM Cortex-M, RISC-V, ESP32, etc.) │
+└─────────────────────────────────────────┘
+```
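+
+As a concrete illustration of the task-scheduling point above, here is a hedged sketch of wrapping inference in a standard POSIX thread (NuttX exposes POSIX threading APIs; the stack size, priority, and `RunInference()` helper are illustrative):
+
+```C++
+#include <pthread.h>
+#include <sched.h>
+
+extern bool RunInference();  // e.g., the sketch from the Tensor Arena section.
+
+static void* inference_task(void*) {
+  for (;;) {
+    // Block on sensor data (queue, semaphore, poll, ...), then infer.
+    RunInference();
+  }
+  return nullptr;
+}
+
+int start_inference_task() {
+  pthread_attr_t attr;
+  pthread_attr_init(&attr);
+  pthread_attr_setstacksize(&attr, 8192);  // Sized for the model's call depth.
+
+  struct sched_param param;
+  param.sched_priority = 100;              // Illustrative priority.
+  pthread_attr_setschedparam(&attr, &param);
+
+  pthread_t tid;
+  return pthread_create(&tid, &attr, inference_task, nullptr);
+}
+```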
+
+### 3. Typical Application Scenarios Detailed
+
+On the openvela platform, TFLite Micro has been widely used in various edge intelligence scenarios. The following are detailed technical solutions for four typical applications.
+
+#### Scenario 1: Voice Wake-up and Command Recognition
+
+- **Scenario Description**: Smart speakers and smart home controllers need to continuously listen for wake words and recognize simple voice commands.
+- **Technical Solution**:
+
+ - **Model Selection**: CNN or RNN-based keyword detection models (e.g., Micro Speech).
+ - **Model Size**: 18KB (after quantization).
+ - **Inference Latency**: Inference time per frame (30ms audio) < 5ms.
+ - **Power Optimization**:
+
+ - Use low-power ADC to collect audio (16kHz sampling rate).
+    - Lightweight VAD (Voice Activity Detection) pre-filtering to reduce invalid inference (see the sketch at the end of this scenario).
+ - Activate the main processor for complex recognition only after detecting the wake word.
+
+- **openvela Platform Advantages**:
+
+ - NuttX audio subsystem provides standardized audio data streams.
+ - Real-time task scheduling guarantees inference real-time performance.
+ - Low-power mode supports long standby times.
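+
+A minimal sketch of the VAD pre-filter mentioned above (the frame size follows the 16 kHz x 30 ms figures in this scenario; the energy threshold is illustrative and must be tuned per microphone):
+
+```C++
+#include <cstddef>
+#include <cstdint>
+
+// Lightweight energy-based VAD: skip inference on near-silent frames.
+// 16 kHz x 30 ms = 480 samples per frame.
+bool FrameHasVoice(const int16_t* frame, size_t n = 480) {
+  int64_t energy = 0;
+  for (size_t i = 0; i < n; ++i) {
+    energy += static_cast<int64_t>(frame[i]) * frame[i];
+  }
+  return (energy / static_cast<int64_t>(n)) > 100000;  // Mean-square threshold.
+}
+```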
+
+#### Scenario 2: Image Recognition and Object Detection
+
+- **Scenario Description**: Smart lock face recognition, industrial equipment defect detection, smart camera object recognition.
+- **Technical Solution**:
+
+ - **Model Selection**: MobileNet V1 (Image Classification).
+ - **Model Size**: 300KB-1MB.
+ - **Inference Latency**: At 96x96 input resolution, inference takes about 200-500ms (depending on MCU performance).
+ - **Input Preprocessing**:
+
+ - Acquire RGB/YUV images from a camera (e.g., OV2640).
+ - Scale to model input size (Bilinear interpolation).
+    - Normalize to the [-128, 127] range (INT8 input); see the sketch at the end of this scenario.
+
+- **Application Cases**:
+
+ - **Smart Lock**: Completes face detection and liveness detection locally, uploading feature values for cloud verification only when necessary, balancing security and power consumption.
+ - **Industrial Inspection**: Real-time detection of product defects, reducing cloud bandwidth pressure.
+ - **Wildlife Monitoring**: Long-running battery-powered cameras that transmit images only after locally identifying target animals.
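+
+A minimal sketch of the INT8 normalization step from the preprocessing list above (assumes 8-bit camera pixels and the common input quantization where the mapping is a fixed -128 offset):
+
+```C++
+#include <cstddef>
+#include <cstdint>
+
+// Map camera pixels (uint8, 0..255) onto the INT8 input range (-128..127).
+void NormalizeToInt8(const uint8_t* src, int8_t* dst, size_t n) {
+  for (size_t i = 0; i < n; ++i) {
+    dst[i] = static_cast<int8_t>(static_cast<int>(src[i]) - 128);
+  }
+}
+```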
+
+#### Scenario 3: Sensor Data Anomaly Detection
+
+- **Scenario Description**: Predictive maintenance of industrial equipment, energy consumption anomaly detection in smart buildings, health monitoring devices.
+- **Technical Solution**:
+
+ - **Model Selection**: AutoEncoder or 1D-CNN.
+ - **Model Size**: 10KB - 50KB (processing low-dimensional time-series data).
+ - **Inference Frequency**: Non-real-time triggering (e.g., once per minute).
+ - **Data Flow**:
+
+ - Multi-sensor data fusion (temperature, vibration, pressure, etc.).
+ - Sliding window feature extraction (e.g., FFT spectral features).
+    - Model outputs an anomaly score; alarms or maintenance requests are triggered if a threshold is exceeded (see the sketch at the end of this scenario).
+
+- **openvela Platform Advantages**:
+
+ - NuttX supports concurrent multi-sensor acquisition.
+ - File system stores historical data for cloud retraining.
+ - Network protocol stack reports anomaly events.
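+
+A hedged sketch of the thresholding step in the data flow above (mean-squared reconstruction error for an AutoEncoder; the threshold is illustrative and is normally calibrated on healthy data):
+
+```C++
+#include <cstddef>
+
+// Anomaly score: mean squared reconstruction error between the input
+// window and the AutoEncoder's reconstruction of it.
+float AnomalyScore(const float* input, const float* reconstructed, size_t n) {
+  float err = 0.0f;
+  for (size_t i = 0; i < n; ++i) {
+    const float d = input[i] - reconstructed[i];
+    err += d * d;
+  }
+  return err / static_cast<float>(n);
+}
+
+// Illustrative trigger: report once the score exceeds the calibrated threshold.
+bool IsAnomalous(float score, float threshold = 0.05f) {
+  return score > threshold;
+}
+```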
+
+#### Scenario 4: Gesture and Pose Recognition
+
+- **Scenario Description**: Wearable device gesture control, smart home non-contact interaction, fitness monitoring.
+- **Technical Solution**:
+
+ - **Model Selection**: LSTM or 1D-CNN based on accelerometer/gyroscope data.
+ - **Model Size**: 20KB - 100KB.
+ - **Inference Latency**: Real-time processing latency < 50ms.
+ - **Application Examples**:
+
+ - Smart Band: Identifying sports types like running, swimming, cycling.
+ - Smart Remote: Waving gestures to change channels.
+ - AR Glasses: Head pose tracking.
+
+- **Key Technologies**:
+
+ - **Data Augmentation**: Introduce noise and rotation during training to adapt to different user wearing habits.
+ - **Online Calibration**: Personalized adjustment when the device is used for the first time.
+  - **Low Power Optimization**: Motion detection triggers inference; inference pauses while the device is static (see the sketch below).
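+
+A minimal sketch of the motion-gated inference mentioned above (the deviation threshold is illustrative; real devices typically also debounce this check):
+
+```C++
+#include <cmath>
+
+// Gate inference on motion: run the gesture model only when the accelerometer
+// magnitude deviates noticeably from 1 g (a device at rest reads ~9.81 m/s^2).
+bool MotionDetected(float ax, float ay, float az) {
+  const float kGravity = 9.81f;
+  const float mag = std::sqrt(ax * ax + ay * ay + az * az);
+  return std::fabs(mag - kGravity) > 1.5f;  // Deviation threshold in m/s^2.
+}
+```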
+
+## V. Summary
+
+- The combination of TFLite Micro and the openvela platform provides a complete solution for AI inference on microcontrollers.
+- It not only overcomes challenges regarding resources, real-time performance, and fragmentation at the technical level but also achieves privacy protection, low cost, and high reliability at the business level.
+- Through standardized development processes and system-level support, developers can quickly deploy intelligent algorithms to various IoT devices, promoting the large-scale implementation of edge intelligence.
+- The following chapters will explore in depth how to integrate, deploy, and optimize TFLite Micro applications on the openvela platform.
diff --git a/en/faq/QuickStart_FAQ.md b/en/faq/QuickStart_FAQ.md
index 9308af90..c948e1d8 100644
--- a/en/faq/QuickStart_FAQ.md
+++ b/en/faq/QuickStart_FAQ.md
@@ -1,6 +1,6 @@
# Quick Start FAQ
-\[ English | [简体中文](./../../zh-cn/faq/QuickStart_FAQ.md) \]
+[ English | [简体中文](./../../zh-cn/faq/QuickStart_FAQ.md) ]
## I. Unable to Access Remote Repository
diff --git a/en/faq/devoloper_tech_faq.md b/en/faq/devoloper_tech_faq.md
new file mode 100644
index 00000000..474c8f70
--- /dev/null
+++ b/en/faq/devoloper_tech_faq.md
@@ -0,0 +1,225 @@
+# Developer FAQ
+
+[ English | [简体中文](./../../zh-cn/faq/devoloper_tech_faq.md) ]
+
+## I. Community and General
+
+### 1. What should I do if I encounter technical issues or bugs?
+
+If it is a technical issue, please submit it on the [Issue page](../../../../docs/issues).
+
+- For blocking issues, after submitting the Issue, you can send the link directly to the WeChat group for a quick response.
+- For non-blocking issues, the community maintenance team will reply and handle them regularly within the Issue tracker.
+
+### 2. Are there rewards for community contributions?
+
+Yes, the community has a contribution incentive mechanism. For detailed reward rules and instructions, please refer to the [Contribution Reward Instructions](../../../../docs/issues).
+
+### 3. When will the IDE be released?
+
+It is expected to be officially released in early 2026.
+
+### 4. Is there a difference between the Gitee and GitHub versions of the source code repository?
+
+There is no difference between the two. The GitHub and Gitee repositories are kept in bidirectional real-time synchronization, so you can choose either one based on your network conditions.
+
+## II. Compilation and Build
+
+### 5. Does openvela application development (e.g., Hello World) run in kernel mode or user mode?
+
+The system primarily supports three compilation modes.
+
+Currently, the official recommendation is to use **Flat Build**, where applications and the kernel reside in the same address space (similar to kernel mode). This provides optimal performance and is suitable for embedded small systems such as modules and smart bands.
+
+Additionally, Kernel Build (user mode isolation) and Product Mode are supported, but they are less commonly used in resource-constrained scenarios.
+
+### 6. Using the recommended Flat Build mode, will an application crash cause the entire system to crash?
+
+Theoretically, in Flat Build mode, since applications and the kernel share the same space, an application crash can indeed affect the system. However, openvela is developing a polymorphic isolation protection mechanism to prevent memory corruption. Although the system supports running ELF binary files, the official recommendation remains strongly in favor of using Flat Build mode for embedded scenarios.
+
+### 7. Does openvela support incremental compilation? Do I need to rebuild everything every time I modify the code?
+
+The system supports incremental compilation.
+
+- If you only modified `.c` or `.h` source files, you can compile incrementally, which is relatively fast.
+- However, if you modified the `Kconfig` (menuconfig) configuration file (i.e., enabled or disabled certain features), it is recommended to perform a full recompilation to ensure the configuration takes effect properly.
+
+## III. System Architecture and Kernel
+
+### 8. Is the openvela protocol stack located in the module or on the AP side?
+
+The protocol stacks (such as TCP/IP, Bluetooth Host Stack, etc.) run on the AP (Application Processor) side. External WiFi or Bluetooth modules are typically used only as transceivers, communicating with the main controller via interfaces like HCI or SDIO, with the modules primarily running firmware internally.
+
+### 9. I see many functions starting with nx_ in the code (e.g., nx_read). Should I use them in my application?
+
+**It is not recommended.**
+
+Functions starting with `nx_` are typically system calls or low-level wrappers used internally by the kernel.
+
+To ensure code standardization and portability (openvela has passed PSE52 certification), please strictly use standard POSIX interfaces (such as `open`, `read`, `pthread_create`) for development.
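+
+For example, a portable read loop uses only POSIX calls (the device path below is illustrative):
+
+```C++
+#include <cstddef>
+#include <fcntl.h>
+#include <unistd.h>
+
+// Portable POSIX I/O -- no internal nx_* calls.
+int read_sensor(char* buf, size_t len) {
+  int fd = open("/dev/sensor0", O_RDONLY);  // Illustrative device node.
+  if (fd < 0) {
+    return -1;
+  }
+  ssize_t n = read(fd, buf, len);
+  close(fd);
+  return static_cast<int>(n);
+}
+```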
+
+### 10. In isolation mode, does user memory use a physical flat model or virtual address mapping?
+
+The system uses a **Physical Flat Memory Model**.
+
+In this model, user memory is not mapped via virtual addresses through an MMU like in Linux. Instead, independent segments are partitioned within the physical flat memory.
+
+### 11. Do user memory segments of different processes (Tasks) have the same virtual base address?
+
+**No, they do not.**
+
+Since there is no virtual address mapping, different processes cannot all share the same base address (e.g., every process starting from 0x0000). Each process occupies an independent base address in physical memory, and different tasks are distinguished by their physical address ranges.
+
+### 12. Does the system's isolation mechanism rely on MMU or MPU?
+
+System memory isolation primarily relies on the **MPU (Memory Protection Unit)**.
+
+This is designed to achieve secure isolation even on chips that do not possess an MMU (such as the Cortex-M series).
+
+### 13. How is secure isolation achieved without an MMU?
+
+This is achieved by defining access permission regions on physical memory via the MPU.
+
+The system allocates specific physical memory regions for each task and utilizes the MPU to restrict that task to accessing only its allocated region, thereby realizing secure isolation between tasks at the physical addressing level.
+
+### 14. In a multi-tasking environment, how are the user-mode Heap and Stack allocated?
+
+To align with the MPU's region protection mechanism, each task is required to have its own independent user-mode Heap and Stack. This ensures that runtime data does not interfere across tasks and prevents out-of-bounds memory access between them.
+
+## IV. Development Environment and Tools
+
+### 15. Are the JLink and Trace32 debugging tools mentioned in the documentation mandatory?
+
+This depends on your run target.
+
+If debugging on real hardware, hardware debuggers like JLink or Trace32 are usually required. If you are using a Simulator (running locally on Linux) or an Emulator (QEMU/Goldfish) for development, the system comes with built-in debugging mechanisms, and you can use GDB directly without extra hardware.
+
+### 16. What if the simulator fails to start (missing libraries) when configuring the environment on macOS (M1/M2)?
+
+Currently, QEMU/Goldfish has received relatively little compatibility testing on macOS, and you may encounter missing libraries or instruction-set translation efficiency issues.
+
+At this stage, it is strongly recommended to install an **Ubuntu 22.04 Virtual Machine** on macOS for development, as this is the most fully verified and stable environment.
+
+### 17. What if `repo sync` has no response or fails to download in the Ubuntu VM?
+
+Please troubleshoot in the following order:
+
+- Confirm you are downloading code based on the `trunk` branch.
+- Confirm the operating system version is Ubuntu 22.04.
+- Check network connections and proxy settings (there may be network firewall issues).
+
+If the problem persists after troubleshooting, please screenshot the error message and submit an Issue in the community.
+
+### 18. Does openvela have a companion VS Code plugin or IDE?
+
+Yes, there is an official IDE customized based on VS Code that supports openvela development.
+
+The current version has not yet been fully released as open source. It will be synchronized with developers as soon as it is officially released.
+
+### 19. Why can the AI assistant (like the Doubao plugin) in the Quick App IDE only read code but not edit/automatically modify it?
+
+This is usually caused by compatibility issues arising from the lag between the rapidly updated VS Code core and the VS Code build bundled into the IDE.
+
+It is recommended to report the specific plugin version and IDE version, and the development team will investigate and fix it.
+
+### 20. Why can't I see network interfaces when enabling network configuration under the QEMU goldfish arm64 configuration?
+
+This is because basic configurations like `goldfish arm64` are primarily used to verify CPU architecture and basic kernel functions, and do not enable complete network bridging or peripheral support by default.
+
+If you need to verify network or multimedia functions, it is recommended to use product-form configuration files, such as the complete configuration for `smart speaker` or `ARMv7a Goldfish`.
+
+### 21. Files created by programs running in the simulator are lost after restart. How can I achieve data persistence?
+
+It is recommended to use the **9PFS (9P File System)** feature.
+
+By directly mounting and mapping a folder from the host machine (Host PC) into the simulator, data can be written directly to the PC hard drive, achieving both persistence and convenient viewing on the PC side.
+
+### 22. Why do low-power devices like smart bands/watches also use QEMU Goldfish (ARM A-series) for simulation?
+
+This is mainly to unify the teaching platform and facilitate management; currently, the QEMU platform based on Google Goldfish is used uniformly.
+The openvela operating system abstracts underlying architectural differences, so whether the bottom layer is A-series or M-series has little impact on the learning of upper-layer applications and frameworks. Support for ARM M/R series simulators will be released in the future.
+
+### 23. Is it possible to start driver development learning and course design before physical development boards arrive?
+
+**Absolutely.**
+
+It is recommended to prioritize using the simulator (QEMU). The driver framework is consistent across the simulator and physical boards. You can first complete theoretical learning, framework development, and core concepts like memory management based on the simulator, and then perform hardware adaptation verification once the development board arrives.
+
+### 24. Does Telephony (phone/communication) related business need to be verified on real devices?
+
+It is recommended to use the **Simulator**.
+
+Debugging communication business on real machines has a high threshold (requires Modem, SIM card, network access). The openvela simulator features a built-in Modem Simulator capable of fully simulating processes like making calls and sending/receiving SMS, which is sufficient to meet teaching needs.
+
+### 25. Can the RPK file generated by Quick App packaging run directly on openvela devices?
+
+Currently, it cannot. The Quick App framework engine (Runtime) is planned to be open-sourced in library form and integrated into the system around **February 2026**. At this stage, it is recommended to use the simulator for learning and development.
+
+## V. Hardware Adaptation and Porting
+
+### 26. Can openvela be ported to hardware platforms not currently officially supported (e.g., STM32)?
+
+**Yes.**
+
+openvela is fully compatible with the NuttX kernel. Theoretically, openvela can be smoothly adapted and ported to all hardware platforms supported by NuttX.
+
+### 27. What is the current support status for ESP32 series development boards?
+
+Although the bottom layer is compatible with NuttX, adaptation and testing for ESP32 are not yet fully covered, and some Demos may not run directly. If a stable development experience is required, it is currently recommended to prioritize using officially verified ARM platform development boards.
+
+### 28. When developing drivers or low-level code, which directory should the code be submitted to?
+
+Please decide based on the universality of the code:
+
+- General driver frameworks, scheduling code, or bug fixes are recommended to be submitted to the `nuttx` main directory (e.g., under `drivers`).
+- Specific chip vendor or private board-level driver code is recommended to be stored in the `vendor` directory.
+
+openvela follows the Apache license, so you can freely choose whether to open source it.
+
+## VI. Application Framework and Multimedia
+
+### 29. Is the underlying engine for openvela Quick Apps Node.js or V8?
+
+Neither. The openvela device-side Quick App engine is based on **QuickJS**.
+
+### 30. What is the difference in running mechanisms between Quick Apps and Native Apps?
+
+- Quick Apps run in an independent container within the system, isolated from the system. A crash does not easily cause a system freeze, and they invoke underlying capabilities via JS interfaces.
+- Native Apps call system APIs directly, offering higher performance but also a higher degree of coupling with the system.
+
+### 31. Does openvela currently support running MPlayer?
+
+Running MPlayer directly is currently not supported; the official team has not yet ported it.
+
+### 32. What multimedia development tools or frameworks are available under the current system?
+
+Currently available solutions include: ported FFmpeg, the system's built-in native multimedia toolkit (please refer to the [Sim Environment Audio Function Development Guide](../quickstart/emulator/sim_audio_guide.md)), and ported open-source codec libraries such as `libx264`, `openh264`, and `libopus`.
+
+### 33. Which common Linux multimedia tools are suitable for porting to openvela?
+
+Tools with a pure-software implementation are usually easier to port. Tools that rely heavily on specific hardware drivers or hardware acceleration cannot be ported directly and must be adapted based on openvela's existing multimedia framework.
+
+### 34. Is there any reference case or path if third-party code needs to be ported?
+
+Developers are advised to directly reference the `apps/external` folder in the source directory. This directory contains a large number of ported third-party libraries and serves as the best practice for understanding the build system and porting methods.
+
+### 35. For developing graphical interfaces on openvela, are Qt or GTK/JDK supported?
+
+**Not supported and not recommended.**
+
+- Qt and GTK frameworks are too heavy for embedded RTOS.
+- The official recommendation is to use **LVGL**, which the team has deeply optimized and integrated well with the NuttX system.
+
+### 36. Does openvela support IoT protocols like MQTT, CoAP, Matter?
+
+**Yes.**
+
+The system has integrated MQTT, CoAP, and Matter (partial versions).
+
+Relevant libraries are usually located in the `apps/netutils` or `external` directories and can be referenced directly in the source code.
+
+### 37. Is it necessary to deeply master kernel principles just to learn multimedia development?
+
+**No.**
+
+You only need to master basic system calls (such as threads, locks, message queues, Sockets). The learning focus should be on Pipeline design (decoding, post-processing), without needing to delve into kernel underlying implementations like scheduling algorithms.
\ No newline at end of file
diff --git a/en/quickstart/emulator/sim_audio_guide.md b/en/quickstart/emulator/sim_audio_guide.md
new file mode 100644
index 00000000..f6fd2fb8
--- /dev/null
+++ b/en/quickstart/emulator/sim_audio_guide.md
@@ -0,0 +1,330 @@
+# Sim Environment Audio Development Guide
+
+[ English | [简体中文](./../../../zh-cn/quickstart/emulator/sim_audio_guide.md) ]
+
+## I. Introduction
+
+This document aims to guide developers in developing and testing audio features within the openvela Sim (simulator) environment. Through the Sim environment, developers can utilize the Host machine's audio capabilities to simulate embedded device audio input and output, verifying driver logic and middleware functionality.
+
+The main testing scope includes:
+
+1. Using `nxplayer`, `nxrecorder`, and `nxlooper` to verify the basic functions of the **Audio Driver**.
+2. Using `mediatool` to verify the business logic of the **Media Framework**.
+
+## II. Module Architecture
+
+The audio subsystem in the Sim environment consists of the following core modules:
+
+1. **Audio Driver**
+
+ - In the Sim environment, the underlying driver simulates audio hardware input and output by mapping the ALSA interface of the Host machine (Linux).
+
+2. **Command Line Tools (CLI Tools)**
+
+ - **nxplayer**: Audio playback testing tool.
+ - **nxrecorder**: Audio recording testing tool.
+ - **nxlooper**: Audio loopback testing tool.
+ - These tools are all implemented based on the Vela Audio Driver.
+
+3. **Media Framework**
+
+   - Includes components such as the framework core, RPC communication, and Audio Policy.
+ - Provides standard interfaces for playback, recording, audio path switching, and volume control.
+
+4. **mediatool**
+
+ - A command-line interactive program implemented based on the Media Framework, used for testing framework-level functions.
+
+## III. Compilation Configuration
+
+Please configure the openvela build system (Kconfig) as follows.
+
+### 1. Audio Driver Configuration
+
+Enable basic audio driver support and buffer configuration:
+
+```Makefile
+CONFIG_AUDIO=y # Enable AUDIO subsystem
+CONFIG_AUDIO_NUM_BUFFERS=2 # Number of driver buffers
+CONFIG_AUDIO_BUFFER_NUMBYTES=8192 # Size of a single buffer (Bytes)
+```
+
+### 2. CLI Tool Configuration
+
+Enable `nxplayer`, `nxrecorder`, and `nxlooper` tools:
+
+```Makefile
+CONFIG_SYSTEM_NXPLAYER=y
+CONFIG_SYSTEM_NXRECORDER=y
+CONFIG_SYSTEM_NXLOOPER=y
+
+# Keep other related configurations as default
+```
+
+### 3. Media Framework Configuration
+
+The Media Framework supports cross-core operations. In the Sim environment, this usually involves the simulation of the AP (Application Processor) and the Audio DSP (Digital Signal Processor).
+
+#### AP Side Configuration
+
+Configuration when compiling the Media Framework main body on the AP core:
+
+```Makefile
+CONFIG_MEDIA=y
+CONFIG_MEDIA_SERVER=y
+CONFIG_MEDIA_SERVER_CONFIG_PATH="/etc/media/"
+CONFIG_MEDIA_SERVER_PROGNAME="mediad"
+CONFIG_MEDIA_SERVER_STACKSIZE=2097152
+CONFIG_MEDIA_SERVER_PRIORITY=245
+CONFIG_MEDIA_TOOL=y
+CONFIG_MEDIA_TOOL_STACKSIZE=16384
+CONFIG_MEDIA_TOOL_PRIORITY=100
+CONFIG_MEDIA_CLIENT_LISTEN_STACKSIZE=4096
+
+CONFIG_PFW=y
+CONFIG_LIB_XML2=y
+CONFIG_HAVE_CXX=y
+CONFIG_HAVE_CXXINITIALIZE=y
+CONFIG_LIBCXX=y
+CONFIG_LIBSUPCXX=y
+```
+
+#### AUDIO Side Configuration
+
+Configuration when compiling the Media Framework main body on the Audio core (including FFmpeg support):
+
+```Makefile
+CONFIG_MEDIA=y
+CONFIG_MEDIA_SERVER=y
+
+# CONFIG_MEDIA_FOCUS is not set
+CONFIG_MEDIA_SERVER_CONFIG_PATH="/etc/media/"
+CONFIG_MEDIA_SERVER_PROGNAME="mediad"
+CONFIG_MEDIA_SERVER_STACKSIZE=81920
+CONFIG_MEDIA_SERVER_PRIORITY=245
+CONFIG_MEDIA_TOOL=y
+CONFIG_MEDIA_TOOL_STACKSIZE=16384
+CONFIG_MEDIA_TOOL_PRIORITY=100
+CONFIG_MEDIA_CLIENT_LISTEN_STACKSIZE=4096
+
+# Audio Policy
+CONFIG_PFW=y
+CONFIG_LIB_XML2=y
+CONFIG_HAVE_CXX=y
+CONFIG_HAVE_CXXINITIALIZE=y
+CONFIG_LIBCXX=y
+CONFIG_LIBSUPCXX=y
+CONFIG_KVDB=y
+
+# FFmpeg Core Configuration
+CONFIG_LIB_FFMPEG=y
+CONFIG_LIB_FFMPEG_CONFIGURATION="--disable-sse --enable-avcodec --enable-avdevice --enable-avfilter --enable-avformat --enable-decoder='aac,aac_latm,flac,mp3,pcm_s16le,libopus,libfluoride_sbc,libfluoride_sbc_packed,silk' --enable-demuxer='aac,mp3,pcm_s16le,flac,mov,ogg,wav,silk' --enable-encoder='aac,pcm_s16le,libopus,libfluoride_sbc,silk' --enable-hardcoded-tables --enable-indev=nuttx --enable-ffmpeg --enable-ffprobe --enable-filter='adevsrc,adevsink,afade,amix,amovie_async,amoviesink_async,astats,astreamselect,aresample,volume' --enable-libopus --enable-muxer='opus,opusraw,pcm_s16le,silk,wav' --enable-outdev=bluelet,nuttx --enable-parser='aac,flac' --enable-protocol='cache,concat,file,http,https,rpmsg,tcp,unix' --enable-swresample --tmpdir='/stream'"
+```
+
+## IV. FFmpeg Extension Configuration
+
+The Media Framework is implemented based on FFmpeg. Developers need to configure FFmpeg components (demuxer, muxer, decoder, encoder, filter, etc.) according to project requirements.
+
+### 1. Basic Configuration String
+
+The core configuration string is as follows (needs to be written into `.config` or relevant build files):
+
+```Makefile
+CONFIG_LIB_FFMPEG_CONFIGURATION="--disable-sse --enable-avcodec --enable-avdevice --enable-avfilter --enable-avformat --enable-decoder='aac,aac_latm,flac,mp3,pcm_s16le,libopus,libfluoride_sbc,libfluoride_sbc_packed,silk' --enable-demuxer='aac,mp3,pcm_s16le,flac,mov,ogg,wav,silk' --enable-encoder='aac,pcm_s16le,libopus,libfluoride_sbc,silk' --enable-hardcoded-tables --enable-indev=nuttx --enable-ffmpeg --enable-ffprobe --enable-filter='adevsrc,adevsink,afade,amix,amovie_async,amoviesink_async,astats,astreamselect,aresample,volume' --enable-libopus --enable-muxer='opus,opusraw,pcm_s16le,silk,wav' --enable-outdev=bluelet,nuttx --enable-parser='aac,flac' --enable-protocol='cache,concat,file,http,https,rpmsg,tcp,unix' --enable-swresample --tmpdir='/stream'"
+```
+
+**Configuration Explanation:**
+
+- `--enable-decoder`: Enables specified decoders.
+- `--enable-filter`: Enables specified filters.
+
+**Troubleshooting:**
+
+If you encounter errors like `Failed to avformat_open_input ret -1330794744, Protocol not found.`, it usually means the corresponding protocol or format support is missing. Please check and modify the configuration string above to extend FFmpeg capabilities.
+
+### 2. Dependency Library Configuration
+
+Some FFmpeg decoders depend on third-party decoding libraries and must be explicitly enabled in Kconfig:
+
+```Makefile
+# libhelix_aac dependency
+CONFIG_LIB_HELIX_AAC=y
+CONFIG_LIB_HELIX_AAC_SBR=y
+
+# libfluoride_sbc, libfluoride_sbc_packed dependency
+CONFIG_LIB_FLUORIDE_SBC=y
+CONFIG_LIB_FLUORIDE_SBC_DECODER=y
+CONFIG_LIB_FLUORIDE_SBC_ENCODER=y
+
+# libopus dependency
+CONFIG_LIB_OPUS=y
+
+# silk dependency
+CONFIG_LIB_SILK=y
+```
+
+## V. Debugging Tool Usage Guide
+
+This section introduces how to run and test audio tools in the Sim environment.
+
+### 1. Environment Startup
+
+1. Run Simulator
+
+ Enter the `nuttx` directory and start GDB for debugging:
+
+ ```Bash
+ cd nuttx
+ sudo gdb --args ./nuttx
+ ```
+
+2. Mount Host File System
+
+ In the NuttX Shell (nsh), mount the Host machine's audio stream directory to the `/stream` directory of the Sim environment:
+
+ ```Bash
+   # Replace <username> with your actual username
+   mount -t hostfs -o fs=/home/<username>/Streams/ /stream
+ ```
+
+### 2. nxplayer Usage Instructions
+
+`nxplayer` is used to test audio playback functions.
+
+#### Scenario A: Playback of Raw PCM Data
+
+**Test Case**: Play `/stream/8000.pcm` (Mono, 16-bit, 44100Hz).
+
+```Bash
+nxplayer
+
+# Specify playback device
+device pcm0p
+
+# Format: playraw <file path> <channels> <bit width> <sample rate>
+playraw /stream/8000.pcm 1 16 44100
+```
+
+#### Scenario B: Playback of MP3 Files (Simulated Offload)
+
+**Host Dependency**: Simulating MP3 decoding requires installing the `libmad` library on the Host machine.
+
+```Bash
+sudo apt install libmad0-dev:i386
+```
+
+**Test Case**:
+
+```Bash
+nxplayer
+# Specify Offload playback device
+device pcm1p
+# Play file
+play /stream/1.mp3
+
+# Stop playback
+stop
+```
+
+**Functional Limitations**:
+
+- Supports files with ID3V2 headers.
+- Supports files without any ID3 headers.
+- **Does NOT support** ID3V1 format.
+
+### 3. nxrecorder Usage Instructions
+
+`nxrecorder` is used to test audio recording functions.
+
+#### Scenario A: Recording Raw PCM Data
+
+**Test Case**: Record stereo, 16-bit, 48000Hz audio to `1.pcm`.
+
+```Bash
+nxrecorder
+# Specify recording device
+device pcm0c
+# Format: recordraw <file path> <channels> <bit width> <sample rate>
+recordraw /stream/1.pcm 2 16 48000
+
+# Stop recording
+stop
+```
+
+**Verification Method**: Check if `1.pcm` is generated in the corresponding directory on the Host machine and verify if it plays back normally.
+
+#### Scenario B: Recording MP3 Files (Simulated Offload)
+
+**Host Dependency**: Simulating MP3 encoding requires installing the `libmp3lame` library on the Host machine:
+
+```Bash
+sudo apt-get install libmp3lame-dev:i386
+```
+
+**Test Case**:
+
+```Bash
+nxrecorder
+
+# Specify Offload recording device
+device pcm1c
+
+# Record MP3
+record /stream/100.mp3 2 16 44100
+```
+
+### 4. nxlooper Usage Instructions
+
+`nxlooper` is used to test audio loopback, where recorded data is sent directly to the playback channel.
+
+#### Scenario A: PCM Data Loopback
+
+```Bash
+nxlooper
+# Specify playback device
+device pcm0p
+# Specify recording device
+device pcm0c
+# Start loopback: 2 channels, 16-bit, 48kHz
+loopback 2 16 48000
+
+# Stop loopback
+stop
+```
+
+#### Scenario B: MP3 Data Loopback
+
+```Bash
+nxlooper
+device pcm1p
+device pcm1c
+# The last parameter '8' represents the format code (AUDIO_FMT_MP3)
+loopback 2 16 44100 8
+
+# Stop loopback
+stop
+```
+
+**Parameter Explanation**: The `loopback` command format is: `loopback <channels> <bit width> <sample rate> [format]`
+
+Where the `[format]` parameter corresponds to the definitions in `audio.h` (defaults to PCM):
+
+```C
+/* Located at ./nuttx/include/nuttx/audio/audio.h */
+#define AUDIO_FMT_UNDEF 0x00
+#define AUDIO_FMT_OTHER 0x01
+#define AUDIO_FMT_MPEG 0x02
+#define AUDIO_FMT_AC3 0x03
+#define AUDIO_FMT_WMA 0x04
+#define AUDIO_FMT_DTS 0x05
+#define AUDIO_FMT_PCM 0x06
+#define AUDIO_FMT_WAV 0x07
+#define AUDIO_FMT_MP3 0x08
+#define AUDIO_FMT_MIDI 0x09
+#define AUDIO_FMT_OGG_VORBIS 0x0a
+#define AUDIO_FMT_FLAC 0x0b
+```
+
+## VI. mediatool Usage Instructions
+
+For detailed commands and usage of `mediatool`, please refer to the [Mediatool Introduction](../../device_dev_guide/media/mediatool.md).
diff --git a/en/quickstart/openvela_ubuntu_quick_start.md b/en/quickstart/openvela_ubuntu_quick_start.md
index ae17312f..ef8dcea4 100644
--- a/en/quickstart/openvela_ubuntu_quick_start.md
+++ b/en/quickstart/openvela_ubuntu_quick_start.md
@@ -82,13 +82,13 @@ After installation, you can run `repo --version` to verify it.
This method requires you to add your SSH public key to your GitHub account first. Please refer to the [official GitHub documentation](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/adding-a-new-ssh-key-to-your-github-account).
```bash
- repo init -u ssh://git@github.com/open-vela/manifests.git -b dev -m openvela.xml
+ repo init -u ssh://git@github.com/open-vela/manifests.git -b dev -m openvela.xml --git-lfs
```
- Method 2: HTTPS
```bash
- repo init -u https://github.com/open-vela/manifests.git -b dev -m openvela.xml
+ repo init -u https://github.com/open-vela/manifests.git -b dev -m openvela.xml --git-lfs
```
#### Option B: Download from Gitee
@@ -98,13 +98,13 @@ After installation, you can run `repo --version` to verify it.
This method requires you to add your SSH public key to your Gitee account first. Please refer to the [official Gitee documentation](https://gitee.com/help/articles/4191).
```bash
- repo init -u ssh://git@gitee.com/open-vela/manifests.git -b dev -m openvela.xml --repo-url=https://mirrors.tuna.tsinghua.edu.cn/git/git-repo/
+ repo init -u ssh://git@gitee.com/open-vela/manifests.git -b dev -m openvela.xml --repo-url=https://mirrors.tuna.tsinghua.edu.cn/git/git-repo/ --git-lfs
```
- Method 2: HTTPS
```bash
- repo init -u https://gitee.com/open-vela/manifests.git -b dev -m openvela.xml --repo-url=https://mirrors.tuna.tsinghua.edu.cn/git/git-repo/
+ repo init -u https://gitee.com/open-vela/manifests.git -b dev -m openvela.xml --repo-url=https://mirrors.tuna.tsinghua.edu.cn/git/git-repo/ --git-lfs
```
#### Option C: Download from GitCode
@@ -114,13 +114,13 @@ After installation, you can run `repo --version` to verify it.
This method requires you to add your SSH public key to your GitCode account first. Please refer to the [official GitCode documentation](https://docs.gitcode.com/docs/help/home/user_center/security_management/ssh).
```bash
- repo init -u ssh://git@gitcode.com/open-vela/manifests.git -b dev -m openvela.xml --repo-url=https://mirrors.tuna.tsinghua.edu.cn/git/git-repo/
+ repo init -u ssh://git@gitcode.com/open-vela/manifests.git -b dev -m openvela.xml --repo-url=https://mirrors.tuna.tsinghua.edu.cn/git/git-repo/ --git-lfs
```
- Method 2: HTTPS
```bash
- repo init -u https://gitcode.com/open-vela/manifests.git -b dev -m openvela.xml --repo-url=https://mirrors.tuna.tsinghua.edu.cn/git/git-repo/
+ repo init -u https://gitcode.com/open-vela/manifests.git -b dev -m openvela.xml --repo-url=https://mirrors.tuna.tsinghua.edu.cn/git/git-repo/ --git-lfs
```
3. Execute the sync command. `repo` will download all related source code repositories according to the manifest file (`openvela.xml`).
@@ -185,6 +185,7 @@ After the emulator starts, you will see the `goldfish-armv8a-ap>` prompt, indica
- Frequently Asked Questions
- [Quick Start FAQ](../faq/QuickStart_FAQ.md)
+  - [Developer FAQ](../faq/devoloper_tech_faq.md)
- Further Reading
diff --git a/en/release_notes/v5.2.md b/en/release_notes/v5.2.md
index e6dabc8b..9182f830 100644
--- a/en/release_notes/v5.2.md
+++ b/en/release_notes/v5.2.md
@@ -1,6 +1,6 @@
# openvela trunk-5.2
-\[ English | [简体中文](../../zh-cn/release_notes/v5.2.md) \]
+[ English | [简体中文](../../zh-cn/release_notes/v5.2.md) ]
## I. Quick Start
diff --git a/en/release_notes/v5.4.md b/en/release_notes/v5.4.md
index e7f09356..829a734c 100644
--- a/en/release_notes/v5.4.md
+++ b/en/release_notes/v5.4.md
@@ -1,91 +1,109 @@
# openvela trunk-5.4
-\[ English | [简体中文](../../zh-cn/release_notes/v5.4.md) \]
+[ English | [简体中文](../../zh-cn/release_notes/v5.4.md) ]
## I. Overview
+
openvela has been dedicated to introducing support for more chips, enhancing system real-time communication capabilities, and significantly improving system robustness, storage functionality, and debuggability. This release focuses on enhancements around the following core themes:
-- Hardware Ecosystem Expansion: **Added support for [Infineon AURIX™ TC4](../quickstart/development_board/tc4d9_evb_guide.md), [Flagchip MCU](../quickstart/development_board/fc7300f8m_evb_guide.md), and QEMU-R52 SIL platforms**, broadening the scope of platform applicability.
-- System Kernel Hardening: **Achieved collaborative operation of SMP and PM, introduced MPU-based thread stack protection and RPC framework refactoring**, making the system safer and more stable.
-- Key Capabilities Integration: **Added SocketCAN and Ethernet protocol stacks; introduced the highly reliable NVS2 storage solution**.
+
+- Hardware Ecosystem Expansion: Added support for [Infineon AURIX™ TC4](../quickstart/development_board/tc4d9_evb_guide.md), [Flagchip MCU](../quickstart/development_board/fc7300f8m_evb_guide.md), and the QEMU-R52 SIL platform, broadening the scope of platform applicability.
+- System Kernel Hardening: Achieved collaborative operation of SMP and PM, introduced MPU-based thread stack protection and RPC framework refactoring, making the system safer and more stable.
+- Key Capabilities Integration: Added SocketCAN and Ethernet protocol stacks; introduced the highly reliable NVS2 storage solution.
- Developer Experience Optimization: Provided low-overhead FDX real-time tracing tools and multiple LVGL application examples, lowering the barrier for development and debugging.
## II. Major New Features & Enhancements
-### **1. Platform Support**
-- Added support for Infineon AURIX™ TriCore™ TC4 chips
-- Added support for Flagchip MCUs
+
+### 1. Platform Support
+
+- Added support for Infineon AURIX™ TriCore™ TC4 chips
+- Added support for Flagchip MCUs
- Added Cortex-R52 core support under QEMU platform, supporting Vector SIL platform
- Resolved compilation issues for nuttx boards, better supporting native nuttx boards platforms
-### **2. Kernel & Security**
-- Power Management (PM) & Symmetric Multi-Processing (SMP):
- - Achieved simultaneous enablement of SMP and PM functions, and completed functional verification on the `qemu-armv8a` platform, covering basic PM functions and `ostest` base cases.
+
+### 2. Kernel & Security
+
+- Power Management (PM) & Symmetric Multi-Processing (SMP): Achieved simultaneous enablement of SMP and PM functions, and completed functional verification on the `qemu-armv8a` platform, covering basic PM functions and `ostest` base cases.
+
- RPC
- - Framework Refactoring: Refactored the RPC framework to possess greater versatility, enabling cross-core communication capabilities for other VirtIO devices.
- - Functional Enhancements for Rptun/Rpmsg: Introduced a multi-priority mechanism to meet the real-time needs of automotive scenarios, and fixed issues from functional safety code scans.
+
+ - Framework Refactoring: Refactored the RPC framework to possess greater versatility, enabling cross-core communication capabilities for other VirtIO devices.
+ - Functional Enhancements for Rptun/Rpmsg: Introduced a multi-priority mechanism to meet the real-time needs of automotive scenarios, and fixed issues from functional safety code scans.
+
- Memory Management Enhancements: Implemented Task-independent Heap space, now supported by libraries like `libdbus`.
- Binder Message Mechanism: Integrated Binder server/client fds into the `libuv` event loop, handling messages via callbacks, achieving unified management with other modules.
- Added Rpmsg Battery & Gauge drivers.
-- Inter-thread Isolation Protection Mechanism
- The kernel now supports thread stack protection based on the hardware Memory Protection Unit (MPU). When a thread experiences a stack overflow, this mechanism triggers a hardware exception, preventing it from corrupting other threads' stack spaces or critical data.
-- Code Quality:
- - Completed multiple static code issue fixes, improving the overall quality of the code base.
-### **3. Communication**
-- Added SocketCAN and Ethernet Support
- Introduced a CAN communication framework following standard Socket APIs. Users can now use standard interfaces like `socket()`, `bind()`, `send()`, `recv()` for CAN message transmission and filtering.
-- WebSocket Function Enhancement
- Added default certificate support for the WebSocket Feature, simplifying the secure connection establishment process.
-### **4. Storage**
-- Added NVS2 (Non-Volatile Storage v2)
- Integrated a brand-new, highly reliable NVS2 storage solution. This storage solution is deeply optimized for embedded Flash media, supporting wear leveling, power-loss safety, and data encryption.
-### **5. Debugging & Diagnostics**
-- Added FDX-based Real-time Trace Function
- Implemented a low-intrusion real-time tracing tool based on the FDX (Fast Debug eXchange) protocol. It can capture and export high-precision system events, such as task switching, interrupt response, semaphore operations, etc., with extremely low system overhead.
-## **6. Application Examples**
+- Inter-thread Isolation Protection Mechanism: The kernel now supports thread stack protection based on the hardware Memory Protection Unit (MPU). When a thread experiences a stack overflow, this mechanism triggers a hardware exception, preventing it from corrupting other threads' stack spaces or critical data.
+
+- Code Quality: Completed multiple static code issue fixes, improving the overall quality of the code base.
+
+### 3. Communication
+
+- Added SocketCAN and Ethernet Support: Introduced a CAN communication framework following standard Socket APIs. Users can now use standard interfaces like `socket()`, `bind()`, `send()`, `recv()` for CAN message transmission and filtering (see the sketch below).
+- WebSocket Function Enhancement: Added default certificate support for the WebSocket Feature, simplifying the secure connection establishment process.
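+
+A minimal sketch of the SocketCAN pattern referenced above (the `can0` interface name is illustrative, and header locations vary by platform; this follows the common SocketCAN convention rather than documenting openvela's exact headers):
+
+```C++
+#include <cstring>
+#include <net/if.h>
+#include <sys/ioctl.h>
+#include <sys/socket.h>
+#include <netpacket/can.h>  // Header location varies (Linux: <linux/can.h>).
+
+int send_can_frame() {
+  int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
+  if (s < 0) return -1;
+
+  // Resolve the interface index ("can0" is illustrative).
+  struct ifreq ifr;
+  std::memset(&ifr, 0, sizeof(ifr));
+  std::strncpy(ifr.ifr_name, "can0", IFNAMSIZ - 1);
+  if (ioctl(s, SIOCGIFINDEX, &ifr) < 0) return -1;
+
+  struct sockaddr_can addr;
+  std::memset(&addr, 0, sizeof(addr));
+  addr.can_family = AF_CAN;
+  addr.can_ifindex = ifr.ifr_ifindex;
+  if (bind(s, reinterpret_cast<struct sockaddr*>(&addr), sizeof(addr)) < 0) return -1;
+
+  // One classic CAN frame: ID 0x123 with two data bytes.
+  struct can_frame frame;
+  std::memset(&frame, 0, sizeof(frame));
+  frame.can_id  = 0x123;
+  frame.can_dlc = 2;
+  frame.data[0] = 0xAB;
+  frame.data[1] = 0xCD;
+  return send(s, &frame, sizeof(frame), 0) ==
+         static_cast<ssize_t>(sizeof(frame)) ? 0 : -1;
+}
+```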
+
+### 4. Storage
+
+Added NVS2 (Non-Volatile Storage v2): Integrated a brand-new, highly reliable NVS2 storage solution. This storage solution is deeply optimized for embedded Flash media, supporting wear leveling, power-loss safety, and data encryption.
+
+### 5. Debugging & Diagnostics
+
+Added FDX-based Real-time Trace Function: Implemented a low-intrusion real-time tracing tool based on the FDX (Fast Debug eXchange) protocol. It can capture and export high-precision system events, such as task switching, interrupt response, semaphore operations, etc., with extremely low system overhead.
+
+### 6. Application Examples
+
- Added [Breakout Game](./../../../../../packages_demos/blob/trunk-5.4/breakout/Readme.md): A touchscreen breakout game developed based on openvela and LVGL, which has implemented basic game logic, added image assets, and implemented impact sound effects.
- Added [Virtual Pet](./../../../../../packages_demos/blob/trunk-5.4/pet/README.md) application: An interactive demo program based on the LVGL graphics library, simulating the process of raising a digital pet. Users can care for the virtual pet through operations like feeding, giving water, exercise, and rest to improve its mood and level.
- [Snake Game](./../../../../../packages_demos/blob/trunk-5.4/snake_game/Readme.md): An automatic Snake game implemented using the LVGL graphics library.
-- [Electronic Wooden Fish](./../../../../../packages_demos/blob/trunk-5.4//wooden_fish/README_zh-cn.md): Based on the openvela `nxaudio` service and the upper-layer LVGL UI framework, implementing a complete interaction link containing responsive layout and secure resource management, realizing an application showcase with smooth animation effects, secure resource management, and a good user experience.
-## **7. Development Tools**
-- Ubuntu Environment VS Code Plugin Support
- Support installing openvela VS Code plugin in Ubuntu environment, achieving full-process support from project creation, compilation and building, system debugging to application development, significantly improving development efficiency. ([openvela VS Code Plugin Usage Guide](../quickstart/vscode_plugin_usage.md))
-## **8. Emulator Runtime Parameter Extensions**
-- `emulator.sh` adds `-keep` parameter support
- - If `emulator.sh` supports multi-instance configuration, you can access an instance with a specified name via the `-keep` parameter (creating it if it doesn't exist), and relevant contexts will not be deleted when that instance exits.
- ```plaintext
- # Usage
- cp cmake_out/vela_goldfish-arm64-v8a-ap/nuttx* cmake_out/vela_goldfish-arm64-v8a-ap/vela_* cmake_out/vela_goldfish-arm64-v8a-ap/advancedFeatures.ini nuttx/
-
- ./emulator.sh vela -keep -no-window
-
- # Test example: create a test file in /data directory and write content
- nsh> echo test > /data/test
- nsh> echo "openvvela qemu keep test" >> /data/test
- nsh> quit
-
- # Exit emulator and re-enter, the previously written content is still preserved
- ./emulator.sh vela -keep -no-window
- nsh> cat /data/test
- test
- openvela qemu keep test
- ```
+- [Electronic Wooden Fish](./../../../../../packages_demos/blob/trunk-5.4/wooden_fish/README_zh-cn.md): Based on the openvela `nxaudio` service and the upper-layer LVGL UI framework, it implements a complete interaction chain with responsive layout and safe resource management, showcasing smooth animation effects and a polished user experience.
+
+### 7. Development Tools
+
+Ubuntu Environment VS Code Plugin Support: The openvela VS Code plugin can now be installed in the Ubuntu environment, providing end-to-end support from project creation, compilation, and building to system debugging and application development, significantly improving development efficiency. ([openvela VS Code Plugin Usage Guide](../quickstart/vscode_plugin_usage.md))
+
+### 8. Emulator Runtime Parameter Extensions
+
+- `emulator.sh` adds `-keep` parameter support:
+
+  `emulator.sh` supports multi-instance configuration: you can attach to an instance with a specified name via the `-keep` parameter (creating it if it doesn't exist), and that instance's context is not deleted when it exits.
+
+ ```bash
+ # Usage
+ cp cmake_out/vela_goldfish-arm64-v8a-ap/nuttx* cmake_out/vela_goldfish-arm64-v8a-ap/vela_* cmake_out/vela_goldfish-arm64-v8a-ap/advancedFeatures.ini nuttx/
+
+ ./emulator.sh vela -keep -no-window
+
+ # Test example: create a test file in /data directory and write content
+ nsh> echo test > /data/test
+  nsh> echo "openvela qemu keep test" >> /data/test
+ nsh> quit
+
+ # Exit emulator and re-enter, the previously written content is still preserved
+ ./emulator.sh vela -keep -no-window
+ nsh> cat /data/test
+ test
+ openvela qemu keep test
+ ```
+- `emulator.sh` adds support for the Hostfs function; 9pfs is supported by default. The effect is as follows:
- ```Shell
- goldfish-armv8a-ap> df -h
- Filesystem Size Used Available Mounted on
- binfs 0B 0B 0B /bin
- fatfs 255M 78M 177M /data
- romfs 1152B 1152B 0B /etc
- hostfs 0B 0B 0B /host
- procfs 0B 0B 0B /proc
- v9fs 878G 626G 252G /share
- romfs 512B 512B 0B /system
- tmpfs 6K 1K 5K /tmp
- ```
-
- Usage:
- ```plaintext
- # Usage
- cp cmake_out/vela_goldfish-arm64-v8a-ap/nuttx* cmake_out/vela_goldfish-arm64-v8a-ap/vela_* cmake_out/vela_goldfish-arm64-v8a-ap/advancedFeatures.ini nuttx/
-
- ./emulator.sh vela
- ```
+
+ ```bash
+ goldfish-armv8a-ap> df -h
+ Filesystem Size Used Available Mounted on
+ binfs 0B 0B 0B /bin
+ fatfs 255M 78M 177M /data
+ romfs 1152B 1152B 0B /etc
+ hostfs 0B 0B 0B /host
+ procfs 0B 0B 0B /proc
+ v9fs 878G 626G 252G /share
+ romfs 512B 512B 0B /system
+ tmpfs 6K 1K 5K /tmp
+ ```
+
+ Usage:
+
+ ```bash
+ # Usage
+ cp cmake_out/vela_goldfish-arm64-v8a-ap/nuttx* cmake_out/vela_goldfish-arm64-v8a-ap/vela_* cmake_out/vela_goldfish-arm64-v8a-ap/advancedFeatures.ini nuttx/
+
+ ./emulator.sh vela
+ ```
diff --git a/images/assistant_qr.jpg b/images/assistant_qr.jpg
new file mode 100644
index 00000000..168d8c9d
Binary files /dev/null and b/images/assistant_qr.jpg differ
diff --git a/zh-cn/contribute/process/doc_dev_process.md b/zh-cn/contribute/process/doc_dev_process.md
index 5a969e38..114bdde7 100644
--- a/zh-cn/contribute/process/doc_dev_process.md
+++ b/zh-cn/contribute/process/doc_dev_process.md
@@ -4,7 +4,7 @@
## 流程图
-
+
## 一、开发工程师要做的
diff --git a/zh-cn/contribute/process/figures/001.png b/zh-cn/contribute/process/figures/001.png
new file mode 100644
index 00000000..62d1f941
Binary files /dev/null and b/zh-cn/contribute/process/figures/001.png differ
diff --git a/zh-cn/contribute/process/images/doc_dev_process.svg b/zh-cn/contribute/process/images/doc_dev_process.svg
deleted file mode 100644
index bb3d0368..00000000
--- a/zh-cn/contribute/process/images/doc_dev_process.svg
+++ /dev/null
@@ -1 +0,0 @@
-
\ No newline at end of file
diff --git a/zh-cn/dev_board/Development_Board.md b/zh-cn/dev_board/Development_Board.md
index 08577661..984faff8 100644
--- a/zh-cn/dev_board/Development_Board.md
+++ b/zh-cn/dev_board/Development_Board.md
@@ -1,13 +1,13 @@
# openvela 开发板案例
-\[ [English](../../en/dev_board/Development_Board.md) | 简体中文 \]
+[ [English](../../en/dev_board/Development_Board.md) | 简体中文 ]
-| 厂商名称 | 开发板型号 | 芯片型号 | 适配案例 | 典型应用场景 | 购买渠道 | 开发板问题咨询 |
-| ------------------------------- | -------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------ | ---------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------- |
-| 意法半导体 (STMicroelectronics) | [STM32H750B-DK](https://www.st.com.cn/zh/evaluation-tools/stm32h750b-dk.html#documentation) | [STM32H750XBH6](https://www.st.com.cn/zh/microcontrollers-microprocessors/stm32h750xb.html#documentation) | [在 STM32H750 上部署 openvela](../quickstart/development_board/STM32H750.md) | 智能家居、工业控制、医疗电子 | [购买链接](https://shop314814286.taobao.com/?weexShopTab=allitemsbar&weexShopSubTab=allitems&shopFrameworkType=native&sourceType=other&suid=74be4e31-a352-413d-bf81-72909dd711a5&shareUniqueId=31021130754&ut_sk=1.ZUQzpvSPtZsDAM22wgMusrSy_21646297_1743386335140.Copy.shop&un=0ebec934d3cc95cdbbeeeafdb2768e28&share_crt_v=1&un_site=0&spm=a2159r.13376460.0.0&sp_tk=bVZWMmV3bFZkbFI%3D&cpp=1&shareurl=true&short_name=h.6etlMBbY1ZEjXgI&bxsign=scdnpFVDezWrEooi2xHR3oT8fAOZA8b4hwRYH5nD-IkJzr_e6YrW1NWxn3VpZEVnrZ-9OpQT-aJKRxCaAu6Jbcs_PY7aOntLtLTTy6VNNJRR26yZttuARyPNJT51Pyeq_Ei&app=chrome) | [ST MCU 中国支持](mailto:mcu.china@st.com) |
-| 意法半导体 (STMicroelectronics) | STM32F411CEU6 | [STM32F411CE](https://www.st.com/en/microcontrollers-microprocessors/stm32f411ce.html) | [在 STM32F411 上使用 openvela 点亮 LED](../quickstart/development_board/STM32F411.md) | 物联网、工业自动化 | [购买链接](https://item.taobao.com/item.htm?abbucket=11&id=594670660262&ns=1&pisk=g--sY9MMDTfFh3IYhNDUVsm6s9IjkvorC-6vEKEaHGITDxO2Tn7AgVAflIRhkhSwQB1vaB1T0qfaci9XiIRXnAXAMIdfgFuE4dvGmihzG0oyIpevwdA_WlpKHtBuMv7TzUdWfihramaUpNcNDIRWgwmLJTfdBtCADvwdUtjAHOIxd6BfErF9MIHC9tBfHrCYXJedH6UTBsBY9pB5UsBAHZHBp6XADsdAWv9pPvPC3V6vCYgJpnSrQjJdOoEvA9hcbdwLfT-PC9CWBhwaQxX15_pOOfhY-UbW6ZtmFkOBcs_VZN2-EMpk8CBCVcNlCep5XTttnrSJrBtfyggqIG-MBUQv2j4vNCx66U9U1us2fpKNKIw87dbv8hbkbXE12UvNbEd-TP59PtIPVu57qiUbdaqfd_kIdr4DOa-FTSi8kjbOKOnrdvNQoNBhd9MIdr4cW9XtivMQOrf..&priceTId=2147831d17537712154442218e1cfd&spm=a21n57.sem.item.50.51873903rAXiL0&utparam={"aplus_abtest"%3A"88650fbdf45c34af5c7b5b5527a5bc29"}&xxc=taobaoSearch) | [ST MCU 中国支持](mailto:mcu.china@st.com) |
-| 乐鑫科技 (Espressif) | [ESP32S3EYE](https://www.espressif.com.cn/zh-hans/dev-board/esp32-s3-eye-cn) | [ESP32S3](https://www.espressif.com.cn/zh-hans/products/socs/esp32-s3) | [在 ESP32-S3-EYE 开发板上移植 openvela](../quickstart/development_board/ESP32-S3-EYE.md) | AIoT、人机交互、智能家居 | [购买链接](https://item.taobao.com/item.htm?spm=a21n57.sem.item.1.3d75390372IH5V&priceTId=2147816e17537599042042013e18b1&utparam={"aplus_abtest"%3A"63d6c7ec4d03ab8b3f05e1c978046905"}&id=664295688431&ns=1&abbucket=5&xxc=taobaoSearch&pisk=g6ojYejoDsfX9HqOCrvPA5v-azq11L-ehOwtKAIVBoEAXdGZaczT_Ec_5fluWmrq3bNtTbNAbFVVflMssflsIK2T6fh__q8yYxD0jldF5H-EnbrNkflbWSHJBJ2UHLzA82hIcldeTh_PerA4XflBXBtRyJV8HReAXL18pJZTBlevF7e_KNCx6fp7yRy1HsFT68K8Q7QOW5U92geQH1ITMrpSe72TX5hTWLM-ZlsbUKNohpTpMPPB8XDYNGItDh48OaP8E8iSOrFLl7stU0wblWHxA9rsb8nrVyzPQQq-K4lYFktft-g-doesjpQ_1Pi0V-gksn4jpRhg00pOkW3qgunxROItFoNr60aAGQNKmYn32z-pdYnogxmSQOKTUXP-nmE6vpqbDSE7EcRlD5g-Wmz0jsd_s0M-DVsPMMPQLCb1Fyj_FWJWFNbihI1rYxEZcdaYE8RwFL_EIreuFWJWFNbgk82rQL958Af..) | [乐鑫开发者社区](https://www.espressif.com.cn/zh-hans/contact-us/technical-inquiries) |
-| 乐鑫科技 (Espressif) | [ESP32S3BOX](https://www.espressif.com.cn/zh-hans/news/ESP32-S3-BOX_video) | [ESP32S3](https://www.espressif.com.cn/zh-hans/products/socs/esp32-s3) | [请参考:在 ESP32-S3-EYE 开发板上移植 openvela](../quickstart/development_board/ESP32-S3-EYE.md) | AIoT、人机交互、智能家居 | [购买链接](https://item.taobao.com/item.htm?id=732842971319&pisk=gOJq2JVGWxHVbZ7AiL6a8Sea8mWA3OuCnd_1jhxGcZbcldTwjGsanSVco3xlxawsDtGOQF7yWZ1fnZwNQh8951v15drvUey_hqhvjRW1I2gIdvtYGOBiRBruwraAAiqmCibcqtI6UFpmpvtvDNE4SDOodlr9DuE0sNYGE_jAbOj0IZmPEGIGI-fgn3blyaXGj1j0r_jCvo2ciF4lrGS0IGV0nTblfiWGSFXi43bRbObiw4vHBzS1i0bPYQmK2MCV-nbzL3pPmpFv0ali_LRcgw2CzR2MUi-ZrOVgLf_H9Z1C8UD73O-lbU5Bg2y27hxps_JaomTHmEppsnkr-t9vmBfMKVFM8t8NtKfzjJBVEaXPqpm_9a9PlFvVZDwF1TvCttASwq113M8MHEraIexWAKCJQ2zlWIsdEGJtx57Nsg58WgmIXdd4S55c2g7I40o60-dJbzfdD5FOMmIPRmf065Cc2g7I40PT6sLA4wici&spm=a1z10.3-c.w4002-8715811646.9.4dc69a382dycIm) | [乐鑫开发者社区](https://www.espressif.com.cn/zh-hans/contact-us/technical-inquiries) |
-| 恒玄科技 (Bestechnic) | [BES2600WM MAIN BOARD V1.1](https://www.fortune-co.com/index.php?s=/Cn/Public/singlePage/catid/176.html) | BES2600WM-AX4F | [Readme](../../../../../vendor_bes/blob/trunk/boards/best2003_ep/aos_evb/Readme) | 智能穿戴、AI 玩具 | [联系代理商](https://www.fortune-co.com/Tech/projectDetail/id/64.html) | [联系代理商](https://www.fortune-co.com/Tech/projectDetail/id/64.html) |
-| 旗芯微半导体 | FC7300F8M-EVB | FC7300F8MDT | [FC7300F8M-EVB 开发板 openvela 运行指南](../quickstart/development_board/fc7300f8m_evb_guide.md) | 域/区控制器、驾驶辅助系统、电池管理系统、电机控制等 | [联系代理商](https://www.flagchip.com.cn/Pro/3/3.html) | [联系代理商](https://www.flagchip.com.cn/Pro/3/3.html) |
-| 英飞凌半导体 | TC4D9-EVB | AURIX ™ TC4x | [TC4D9-EVB 开发板 openvela 运行指南](../quickstart/development_board/tc4d9_evb_guide.md) | 车辆运动控制器、区域控制器、车载网关等 | [联系代理商](https://www.infineon.cn/contact-us/where-to-buy) | [联系代理商](https://www.infineon.cn/contact-us/where-to-buy) |
\ No newline at end of file
+| 厂商名称 | 开发板型号 | 芯片型号 | 适配案例 | 典型应用场景 | 购买渠道 | 开发板问题咨询 |
+| ------------------------------- | ----------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------ | --------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------- |
+| 意法半导体 (STMicroelectronics) | [STM32H750B-DK](https://www.st.com.cn/zh/evaluation-tools/stm32h750b-dk.html#documentation) | [STM32H750XBH6](https://www.st.com.cn/zh/microcontrollers-microprocessors/stm32h750xb.html#documentation) | [在 STM32H750 上部署 openvela](../quickstart/development_board/STM32H750.md) | 智能家居、工业控制、医疗电子 | [购买链接](https://shop314814286.taobao.com/?weexShopTab=allitemsbar&weexShopSubTab=allitems&shopFrameworkType=native&sourceType=other&suid=74be4e31-a352-413d-bf81-72909dd711a5&shareUniqueId=31021130754&ut_sk=1.ZUQzpvSPtZsDAM22wgMusrSy_21646297_1743386335140.Copy.shop&un=0ebec934d3cc95cdbbeeeafdb2768e28&share_crt_v=1&un_site=0&spm=a2159r.13376460.0.0&sp_tk=bVZWMmV3bFZkbFI%3D&cpp=1&shareurl=true&short_name=h.6etlMBbY1ZEjXgI&bxsign=scdnpFVDezWrEooi2xHR3oT8fAOZA8b4hwRYH5nD-IkJzr_e6YrW1NWxn3VpZEVnrZ-9OpQT-aJKRxCaAu6Jbcs_PY7aOntLtLTTy6VNNJRR26yZttuARyPNJT51Pyeq_Ei&app=chrome) | [ST MCU 中国支持](mailto:mcu.china@st.com) |
+| 意法半导体 (STMicroelectronics) | STM32F411CEU6 | [STM32F411CE](https://www.st.com/en/microcontrollers-microprocessors/stm32f411ce.html) | [在 STM32F411 上使用 openvela 点亮 LED](../quickstart/development_board/STM32F411.md) | 物联网、工业自动化 | [购买链接](https://item.taobao.com/item.htm?abbucket=11&id=594670660262&ns=1&pisk=g--sY9MMDTfFh3IYhNDUVsm6s9IjkvorC-6vEKEaHGITDxO2Tn7AgVAflIRhkhSwQB1vaB1T0qfaci9XiIRXnAXAMIdfgFuE4dvGmihzG0oyIpevwdA_WlpKHtBuMv7TzUdWfihramaUpNcNDIRWgwmLJTfdBtCADvwdUtjAHOIxd6BfErF9MIHC9tBfHrCYXJedH6UTBsBY9pB5UsBAHZHBp6XADsdAWv9pPvPC3V6vCYgJpnSrQjJdOoEvA9hcbdwLfT-PC9CWBhwaQxX15_pOOfhY-UbW6ZtmFkOBcs_VZN2-EMpk8CBCVcNlCep5XTttnrSJrBtfyggqIG-MBUQv2j4vNCx66U9U1us2fpKNKIw87dbv8hbkbXE12UvNbEd-TP59PtIPVu57qiUbdaqfd_kIdr4DOa-FTSi8kjbOKOnrdvNQoNBhd9MIdr4cW9XtivMQOrf..&priceTId=2147831d17537712154442218e1cfd&spm=a21n57.sem.item.50.51873903rAXiL0&utparam={"aplus_abtest"%3A"88650fbdf45c34af5c7b5b5527a5bc29"}&xxc=taobaoSearch) | [ST MCU 中国支持](mailto:mcu.china@st.com) |
+| 乐鑫科技 (Espressif) | [ESP32S3EYE](https://www.espressif.com.cn/zh-hans/dev-board/esp32-s3-eye-cn) | [ESP32S3](https://www.espressif.com.cn/zh-hans/products/socs/esp32-s3) | [在 ESP32-S3-EYE 开发板上移植 openvela](../quickstart/development_board/ESP32-S3-EYE.md) | AIoT、人机交互、智能家居 | [购买链接](https://item.taobao.com/item.htm?spm=a21n57.sem.item.1.3d75390372IH5V&priceTId=2147816e17537599042042013e18b1&utparam={"aplus_abtest"%3A"63d6c7ec4d03ab8b3f05e1c978046905"}&id=664295688431&ns=1&abbucket=5&xxc=taobaoSearch&pisk=g6ojYejoDsfX9HqOCrvPA5v-azq11L-ehOwtKAIVBoEAXdGZaczT_Ec_5fluWmrq3bNtTbNAbFVVflMssflsIK2T6fh__q8yYxD0jldF5H-EnbrNkflbWSHJBJ2UHLzA82hIcldeTh_PerA4XflBXBtRyJV8HReAXL18pJZTBlevF7e_KNCx6fp7yRy1HsFT68K8Q7QOW5U92geQH1ITMrpSe72TX5hTWLM-ZlsbUKNohpTpMPPB8XDYNGItDh48OaP8E8iSOrFLl7stU0wblWHxA9rsb8nrVyzPQQq-K4lYFktft-g-doesjpQ_1Pi0V-gksn4jpRhg00pOkW3qgunxROItFoNr60aAGQNKmYn32z-pdYnogxmSQOKTUXP-nmE6vpqbDSE7EcRlD5g-Wmz0jsd_s0M-DVsPMMPQLCb1Fyj_FWJWFNbihI1rYxEZcdaYE8RwFL_EIreuFWJWFNbgk82rQL958Af..) | [乐鑫开发者社区](https://www.espressif.com.cn/zh-hans/contact-us/technical-inquiries) |
+| 乐鑫科技 (Espressif) | [ESP32S3BOX](https://www.espressif.com.cn/zh-hans/news/ESP32-S3-BOX_video) | [ESP32S3](https://www.espressif.com.cn/zh-hans/products/socs/esp32-s3) | [请参考:在 ESP32-S3-EYE 开发板上移植 openvela](../quickstart/development_board/ESP32-S3-EYE.md) | AIoT、人机交互、智能家居 | [购买链接](https://item.taobao.com/item.htm?id=732842971319&pisk=gOJq2JVGWxHVbZ7AiL6a8Sea8mWA3OuCnd_1jhxGcZbcldTwjGsanSVco3xlxawsDtGOQF7yWZ1fnZwNQh8951v15drvUey_hqhvjRW1I2gIdvtYGOBiRBruwraAAiqmCibcqtI6UFpmpvtvDNE4SDOodlr9DuE0sNYGE_jAbOj0IZmPEGIGI-fgn3blyaXGj1j0r_jCvo2ciF4lrGS0IGV0nTblfiWGSFXi43bRbObiw4vHBzS1i0bPYQmK2MCV-nbzL3pPmpFv0ali_LRcgw2CzR2MUi-ZrOVgLf_H9Z1C8UD73O-lbU5Bg2y27hxps_JaomTHmEppsnkr-t9vmBfMKVFM8t8NtKfzjJBVEaXPqpm_9a9PlFvVZDwF1TvCttASwq113M8MHEraIexWAKCJQ2zlWIsdEGJtx57Nsg58WgmIXdd4S55c2g7I40o60-dJbzfdD5FOMmIPRmf065Cc2g7I40PT6sLA4wici&spm=a1z10.3-c.w4002-8715811646.9.4dc69a382dycIm) | [乐鑫开发者社区](https://www.espressif.com.cn/zh-hans/contact-us/technical-inquiries) |
+| 恒玄科技 (Bestechnic) | [BES2600WM MAIN BOARD V1.1](https://www.fortune-co.com/index.php?s=/Cn/Public/singlePage/catid/176.html) | BES2600WM-AX4F | [Readme](../../../../../vendor_bes/blob/dev/boards/best2003_ep/aos_evb/Readme) | 智能穿戴、AI 玩具 | [联系代理商](https://www.fortune-co.com/Tech/projectDetail/id/64.html) | [联系代理商](https://www.fortune-co.com/Tech/projectDetail/id/64.html) |
+| 旗芯微半导体 | [FC7300F8M-EVB](https://www.flagchip.com.cn/Pro/3/3.html) | [FC7300F8MDT](https://www.flagchip.com.cn/Pro/3/3.html) | [FC7300F8M-EVB 开发板 openvela 运行指南](../quickstart/development_board/fc7300f8m_evb_guide.md) | 域/区控制器、驾驶辅助系统、电池管理系统、电机控制等 | [联系代理商](https://www.flagchip.com.cn/Pro/3/3.html) | [联系代理商](https://www.flagchip.com.cn/Pro/3/3.html) |
+| 英飞凌半导体 | [TC4D9-EVB](https://itools.infineon.com/aurix_tc4xx_code_examples/documents/Board_Users_Manual_TriBoard-TC4X9-COM-V2_0_0.pdf) | [AURIX ™ TC4x](https://www.infineon.cn/products/microcontroller/32-bit-tricore/aurix-tc4x/tc4dx#products) | [TC4D9-EVB 开发板 openvela 运行指南](../quickstart/development_board/tc4d9_evb_guide.md) | 车辆运动控制器、区域控制器、车载网关等 | [联系代理商](https://www.infineon.cn/contact-us/where-to-buy) | [联系代理商](https://www.infineon.cn/contact-us/where-to-buy) |
\ No newline at end of file
diff --git a/zh-cn/edge_ai_dev/configure_tflite_micro_dev_env.md b/zh-cn/edge_ai_dev/configure_tflite_micro_dev_env.md
new file mode 100644
index 00000000..6ae2d1ea
--- /dev/null
+++ b/zh-cn/edge_ai_dev/configure_tflite_micro_dev_env.md
@@ -0,0 +1,142 @@
+# 配置 TFLite Micro 开发环境
+
+[ [English](../../en/edge_ai_dev/configure_tflite_micro_dev_env.md) | 简体中文 ]
+
+在 openvela 平台上开发 TensorFlow Lite for Microcontrollers (TFLite Micro) 应用前,必须正确配置编译环境与依赖库。本节指导开发者完成源码确认、库依赖配置及内存策略制定。
+
+## 一、先决条件
+
+在开始之前,请确保已完成以下准备工作:
+
+- **基础环境**:参考[官方文档](../quickstart/openvela_ubuntu_quick_start.md),完成 openvela 基础开发环境的部署。
+- **源码确认**:TFLite Micro 源码已集成至 openvela 代码仓库中,路径为:
+
+ - `apps/mlearning/tflite-micro/`
+
+## 二、组件与依赖库支持
+
+TFLite Micro 依赖特定的数学库和工具库来实现模型解析与算子加速。openvela 仓库已预置以下关键组件:
+
+| **组件名称** | **功能描述** | **源码路径** |
+| :-------------- | :------------------------------------------------------ | :------------------------- |
+| **FlatBuffers** | TFLite 模型序列化格式支持库,提供必要的头文件。 | `apps/system/flatbuffers/` |
+| **Gemmlowp** | Google 提供的低精度通用矩阵乘法库,用于量化运算。 | `apps/math/gemmlowp/` |
+| **Ruy** | TensorFlow 的高性能矩阵乘法后端,主要优化全连接层运算。 | `apps/math/ruy/` |
+| **KissFFT** | 轻量级快速傅里叶变换库,支持定点与浮点运算。 | `apps/math/kissfft/` |
+| **CMSIS-NN** | ARM Cortex-M 专用神经网络内核优化库(可选)。 | `apps/mlearning/cmsis-nn/` |
+
+## 三、编译配置 (Kconfig)
+
+通过 menuconfig 图形化界面启用必要的库支持,以确保编译通过并优化代码体积。
+
+启动配置菜单:
+
+```Bash
+cmake --build cmake_out/goldfish-arm64-v8a-ap -t menuconfig
+```
+
+请依次完成以下四个核心模块的配置:
+
+### 1、启用 C++ 运行时支持
+
+TFLite Micro 基于 C++11/14 标准编写,必须启用 LLVM libc++ 支持。
+
+- **配置路径**:`Library Routines` -> `C++ Library`
+- **操作**:选择 `LLVM libc++ C++ Standard Library`
+
+```Plain
+(Top) → Library Routines → C++ Library
+
+( ) Toolchain C++ support
+( ) Basic C++ support
+(X) LLVM libc++ C++ Standard Library
+```
+
+### 2、启用数学加速库
+
+根据模型需求启用矩阵运算与信号处理库。
+
+- **配置路径**:`Application Configuration` -> `Math Library Support`
+- **操作**:选中 `Gemmlowp`, `kissfft`, `Ruy`
+
+```Plain
+(Top) → Application Configuration → Math Library Support
+
+[*] Gemmlowp
+[*] kissfft
+[ ] LibTomMath MPI Math Library
+[*] Ruy
+```
+
+### 3、启用 FlatBuffers 支持
+
+启用系统级 FlatBuffers 库以支持模型解析。
+
+- **配置路径**:`Application Configuration` -> `System Libraries and NSH Add-Ons`
+- **操作**:选中 `flatbuffers`
+
+```Plain
+(Top) → Application Configuration → System Libraries and NSH Add-Ons
+
+[*] flatbuffers
+```
+
+### 4、启用 TFLite Micro 核心
+
+- **配置路径**:`Application Configuration` -> `Machine Learning Support`
+- **操作**:选中 `TFLiteMicro`。如需使用 ARM 硬件加速,建议同时选中 `CMSIS_NN Library`。
+
+```Plain
+(Top) → Application Configuration → Machine Learning Support
+
+[ ] CMSIS_NN Library
+[*] TFLiteMicro
+[ ] Print tflite-micro's debug message
+```
+
+## 四、内存分配策略
+
+嵌入式系统的内存资源有限,TFLite Micro 需要一块连续的内存区域(Tensor Arena)来存放输入/输出张量及中间计算结果。
+
+### 1、静态分配(推荐)
+
+对于生产环境,推荐使用静态数组分配。这种方式无内存碎片风险,且内存占用在编译期可知。
+
+**实现示例**:
+
+```C++
+// 在应用代码全局区域定义
+// 注意:内存必须按照 16 字节对齐,以满足 SIMD 指令要求
+#define TENSOR_ARENA_SIZE (100 * 1024)
+static uint8_t tensor_arena[TENSOR_ARENA_SIZE] __attribute__((aligned(16)));
+```
+
+### 2、确定 Arena 大小
+
+为了精准设定 `TENSOR_ARENA_SIZE`,避免浪费或溢出,可以使用 `RecordingMicroInterpreter` 在运行时抓取实际内存用量。
+
+**调试步骤**:
+
+1. 引入记录器头文件。
+2. 使用 `RecordingMicroInterpreter` 替换标准的 `MicroInterpreter`。
+3. 运行一次模型推理(Invoke)。
+4. 读取实际使用量并添加安全冗余(建议 +1KB)。
+
+```C++
+#include "tensorflow/lite/micro/recording_micro_interpreter.h"
+
+// 1. 创建记录分配器
+auto* allocator = tflite::RecordingMicroAllocator::Create(tensor_arena, arena_size);
+
+// 2. 实例化记录解释器
+tflite::RecordingMicroInterpreter interpreter(model, resolver, allocator);
+
+// 3. 分配张量并执行推理
+interpreter.AllocateTensors();
+interpreter.Invoke();
+
+// 4. 获取内存统计信息
+size_t used = interpreter.arena_used_bytes(); // 实际占用
+interpreter.GetMicroAllocator().PrintAllocations(); // 分项明细
+size_t recommended = used + 1024; // 至少额外预留 ~1KB 空间
+```
\ No newline at end of file
diff --git a/zh-cn/edge_ai_dev/model_integration.md b/zh-cn/edge_ai_dev/model_integration.md
new file mode 100644
index 00000000..99e5b769
--- /dev/null
+++ b/zh-cn/edge_ai_dev/model_integration.md
@@ -0,0 +1,167 @@
+# 模型转换与代码集成
+
+[ [English](../../en/edge_ai_dev/model_integration.md) | 简体中文 ]
+
+在 openvela 开发中,由于微控制器 (MCU) 的 RAM 资源受限且文件系统支持可能被裁剪,直接读取 .tflite 文件通常不可行。标准做法是将训练好的 TensorFlow Lite 模型转换为 C 语言数组,作为只读数据 (RODATA) 编译到应用程序固件中,运行时直接从 Flash 中读取。
+
+本节将指导开发者如何将模型转换为 C 数组,并将其集成到 openvela 的 C++ 应用(如 helloxx)中。
+
+## 一、模型转换 (TFLite 转 C 数组)
+
+为了将模型嵌入固件,我们需要使用工具将 `.tflite` 二进制文件转换为 C 源代码文件。
+
+### 1、准备模型文件
+
+本教程使用 TensorFlow Lite Micro 官方的 Hello World 模型(正弦波预测)。为了配合下文的代码逻辑,我们需要下载 Float32(浮点)版本的模型。
+
+- 下载地址:[hello_world_float.tflite](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/examples/hello_world/models/hello_world_float.tflite) (Google 官方示例)
+
+请将下载的文件重命名为 `converted_model.tflite` 并放置在当前目录下。
+
+### 2、使用 xxd 工具转换
+
+在 Linux/Unix 环境下,使用 `xxd` 命令即可生成包含模型数据的源文件:
+
+```Bash
+# 将 converted_model.tflite 转换为 model_data.cc
+xxd -i converted_model.tflite > model_data.cc
+```
+
+### 3、优化模型数组声明
+
+`xxd` 生成的默认输出类似如下:
+
+```c++
+unsigned char converted_model_tflite[] = { 0x18, 0x00, ...};
+unsigned int converted_model_tflite_len = 18200;
+```
+
+**关键优化步骤**:
+
+为了节省宝贵的 RAM 资源并确保程序稳定运行,**必须**对生成的数组进行修改:
+
+1. **添加 `const`**:将模型数据放置在 Flash (RODATA段) 中,避免占用 RAM。
+2. **添加内存对齐**:TFLite Micro 要求模型数据首地址必须 16 字节对齐。
+
+请打开 `model_data.cc`,复制其中的数组内容,将其直接粘贴到主程序文件 `helloxx_main.cxx` 中(推荐):
+
+```C++
+// 添加 alignas(16) 以满足 TFLite 的内存对齐要求
+// 添加 const 将数据放入 Flash,节省 RAM
+alignas(16) const unsigned char converted_model_tflite[] = {
+ 0x18, 0x00, ...
+};
+const unsigned int converted_model_tflite_len = 18200;
+```
+
+## 二、集成到应用程序
+
+本节以修改 openvela 中的标准 C++ 示例程序 `apps/examples/helloxx` 为例,展示如何集成 TFLite Micro。
+
+### 1、修改构建系统
+
+在应用编译时,需要包含 TFLite Micro 的头文件路径和构建规则。编辑 `apps/examples/helloxx/CMakeLists.txt`,可以参考以下内容:
+
+```CMake
+if(CONFIG_EXAMPLES_HELLOXX)
+ nuttx_add_application(
+ NAME
+ helloxx
+ STACKSIZE
+ 10240
+ MODULE
+ ${CONFIG_EXAMPLES_HELLOXX}
+ SRCS
+ helloxx_main.cxx
+ DEPENDS
+ tflite_micro
+ DEFINITIONS
+ TFLITE_WITH_STABLE_ABI=0
+ TFLITE_USE_OPAQUE_DELEGATE=0
+ TFLITE_SINGLE_ROUNDING=0
+ TF_LITE_STRIP_ERROR_STRINGS
+ TF_LITE_STATIC_MEMORY
+ COMPILE_FLAGS
+ -Wno-error)
+endif()
+```
+
+### 2、修改配置
+
+- 参考[配置 TFLite Micro 开发环境](./configure_tflite_micro_dev_env.md),配置编译环境与依赖库。
+- 启用示例应用:在配置菜单 (`menuconfig`) 中,定位到 `Application Configuration` -> `Examples`,勾选 `"Hello, World!" C++ example` (即 `helloxx`)。
+
+### 3、实现推理逻辑
+
+在代码中集成 TFLite Micro 主要包含五个标准步骤:
+
+1. **加载模型**:从 C 数组加载模型结构。
+2. **注册算子**:实例化 `OpResolver` 并注册模型所需的算子(Operators)。
+3. **准备环境**:实例化 `Interpreter` 并分配 Tensor Arena(张量内存池)。
+4. **写入输入**:将传感器数据或测试数据填入输入张量。
+5. **执行与读取**:调用 `Invoke()` 并读取输出张量。
+
+打开 `apps/examples/helloxx/helloxx_main.cxx`,需包含以下核心逻辑:
+
+```C++
+#include <cstdint>
+#include <cstdio>
+#include <memory>
+#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
+#include "tensorflow/lite/micro/micro_interpreter.h"
+#include "tensorflow/lite/schema/schema_generated.h"
+
+// ==========================================================
+// 模型数据定义 (建议直接粘贴 xxd 生成的内容并修改修饰符)
+// ==========================================================
+alignas(16) const unsigned char converted_model_tflite[] = {
+ // ... 这里粘贴 xxd -i 生成的具体十六进制数据 ...
+ 0x1c, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33, // 示例头
+ // ... 省略中间数据 ...
+};
+const unsigned int converted_model_tflite_len = 18200; // 请填写实际长度
+
+static void test_inference(const void* file_data, size_t arenaSize) {
+ // 1. 加载模型
+ const tflite::Model* model = tflite::GetModel(file_data);
+
+ // 2. 注册算子
+ // 注意:此处仅注册了 FullyConnected 算子,请根据实际模型需求添加
+ tflite::MicroMutableOpResolver<1> resolver;
+ resolver.AddFullyConnected(tflite::Register_FULLY_CONNECTED());
+
+ // 3. 分配内存与实例化解释器
+ std::unique_ptr<uint8_t[]> pArena(new uint8_t[arenaSize]);
+ // 创建一个解释器实例。解释器需要模型、算子解析器、内存缓冲区作为输入
+ tflite::MicroInterpreter interpreter(model,
+ resolver, pArena.get(), arenaSize);
+
+ // 分配张量内存
+ interpreter.AllocateTensors();
+
+ // 4. 写入输入数据
+ TfLiteTensor* input_tensor = interpreter.input(0);
+ float* input_tensor_data = tflite::GetTensorData<float>(input_tensor);
+
+ // 测试用例:输入 x = π/2 (1.5708),期望模型输出 y ≈ 1.0
+ float x_value = 1.5708f;
+ input_tensor_data[0] = x_value;
+
+ // 5. 执行推理
+ interpreter.Invoke();
+
+ // 读取输出结果
+ TfLiteTensor* output_tensor = interpreter.output(0);
+ float* output_tensor_data = tflite::GetTensorData<float>(output_tensor);
+ printf("Output value after inference: %f\n", output_tensor_data[0]);
+}
+```
+
+### 4、验证结果
+
+编译并烧录固件后,运行 `helloxx` 命令,终端应输出如下推理结果:
+
+```Plain
+Output value after inference: 0.99999
+```
+
+若输出值接近 1.0,表明模型已成功在 openvela 平台上加载并完成了一次正弦波推理计算。
diff --git a/zh-cn/edge_ai_dev/tflite_micro_integration.md b/zh-cn/edge_ai_dev/tflite_micro_integration.md
new file mode 100644
index 00000000..a412d33a
--- /dev/null
+++ b/zh-cn/edge_ai_dev/tflite_micro_integration.md
@@ -0,0 +1,322 @@
+# TFLite Micro 架构解析与集成
+
+[ [English](../../en/edge_ai_dev/tflite_micro_integration.md) | 简体中文 ]
+
+在 openvela 平台上集成 TensorFlow Lite for Microcontrollers (TFLite Micro),要求开发者深入理解其分层软件架构、组件依赖关系及硬件加速机制。本文档将详细介绍 TFLite Micro 在 openvela 平台上的完整架构设计,指导开发者完成高效集成。
+
+## 一、前置概念与术语
+
+为了更好地理解 TFLite Micro 在嵌入式环境下的工作原理,开发者需先理解以下核心概念,这些术语贯穿于整个集成流程中。
+
+| **术语 (Term)** | **解释 (Definition)** | **openvela 平台上下文** |
+| :--------------------------------- | :-------------------------------------------------------------------------------------------------------------- | :----------------------------------------------------------------------------- |
+| **TFLite Micro (TFLM)** | TensorFlow 的微控制器版本,专为资源受限(KB级内存)设备设计的轻量级推理框架。 | 运行在 openvela 上的核心推理引擎。 |
+| **Tensor Arena** | 一块预先分配的大型连续内存区域。TFLM 不使用 `malloc/free`,而是将模型输入、输出及中间计算数据全部放置在此区域。 | 决定了系统能运行多大的模型,需根据 SRAM 大小谨慎配置。 |
+| **FlatBuffers** | 一种高效的序列化格式。模型文件以该格式存储,允许直接从 Flash 读取数据。 | 模型数据通常直接编译进固件或存储在文件系统中。 |
+| **Operator (Op) / Kernel** | 神经网络中的具体算子实现(如 Conv2D, Softmax)。Kernel 是 Op 的具体 C++ 代码。 | 可通过 **CMSIS-NN** 替换标准 Kernel 以利用 openvela 硬件加速特性。 |
+| **Op Resolver** | 算子解析器。用于在运行时查找并注册模型所需的算子实现。 | 推荐使用 `MicroMutableOpResolver` 按需注册,避免引入无用代码导致固件体积膨胀。 |
+| **Quantization (量化)** | 将 32 位浮点数转换为 8 位整数的技术,旨在减少模型体积并加速计算。 | openvela 推荐运行 `int8` 量化模型以获得最佳性能。 |
+
+## 二、软件栈层次
+
+openvela 平台的 TFLite Micro 软件栈采用模块化分层设计,实现了从底层硬件抽象到上层应用接口的解耦。
+
+### 1、整体架构概览
+
+```Plain
+┌─────────────────────────────────────────────────────────────┐
+│ 应用层 (Application Layer) │
+│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
+│ │ 语音识别应用 │ │ 图像检测应用 │ │ 传感器分析 │ │
+│ └──────────────┘ └──────────────┘ └──────────────┘ │
+└─────────────────────────────────────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────────────────────────┐
+│ 推理 API 层 (Inference API) │
+│ ┌──────────────────────────────────────────────────────┐ │
+│ │ Model Loading │ Tensor Management │ Inference API │ │
+│ └──────────────────────────────────────────────────────┘ │
+└─────────────────────────────────────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────────────────────────┐
+│ 框架层 (TFLite Micro Framework) │
+│ ┌─────────────────┐ ┌────────────────────────────────┐ │
+│ │ Micro Interpreter│ │ Operator Kernels (含 CMSIS-NN │ │
+│ ├─────────────────┤ │ / 自定义加速内核) │ │
+│ │ Memory Planner │ ├────────────────────────────────┤ │
+│ ├─────────────────┤ │ CONV │ FC │ POOL │ RELU │ ... │ │
+│ │ FlatBuffer Parser│ └────────────────────────────────┘ │
+│ └─────────────────┘ │
+└─────────────────────────────────────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────────────────────────┐
+│ RTOS / 平台服务层 (NuttX 驱动、内存、文件系统等) │
+│ ┌──────────────────────────────────────────────────────┐ │
+│ │ Task Scheduler │ Memory Mgmt │ Drivers │ File Sys │ │
+│ └──────────────────────────────────────────────────────┘ │
+└─────────────────────────────────────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────────────────────────┐
+│ 硬件平台 (Hardware) │
+│ ARM Cortex-M │ RISC-V │ ESP32 │ Custom SoC │
+└─────────────────────────────────────────────────────────────┘
+```
+
+### 2、应用层:推理 API
+
+应用层通过 C/C++ API 封装模型加载、推理执行和结果获取等核心功能。开发者应关注如何初始化解释器并高效处理张量数据。
+
+#### 推理程序实现示例
+
+以下代码展示了在 openvela 环境下执行一次完整推理的标准流程:
+
+```C++
+#include <cstddef>
+#include <cstdint>
+#include <cstdio>
+#include <memory>
+#include <syslog.h>
+
+#include "tensorflow/lite/micro/micro_interpreter.h"
+#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
+
+static void test_inference(void* file_data, size_t arenaSize) {
+ // 1. 加载模型
+ const tflite::Model* model = tflite::GetModel(file_data);
+ printf("arenaSize: %d\n", (int)arenaSize);
+
+ // 2. 手动添加算子
+ tflite::MicroMutableOpResolver<1> resolver;
+ resolver.AddFullyConnected(tflite::Register_FULLY_CONNECTED());
+
+ // 3. 准备 Tensor Arena (内存池)
+ std::unique_ptr<uint8_t[]> pArena(new uint8_t[arenaSize]);
+
+ // 4. 创建解释器实例
+ // 解释器需要模型、算子解析器、内存缓冲区作为输入
+ tflite::MicroInterpreter interpreter(model,
+ resolver, pArena.get(), arenaSize);
+
+ // 5. 分配张量内存
+ interpreter.AllocateTensors();
+
+ // 6. 填充输入数据
+ TfLiteTensor* input_tensor = interpreter.input(0);
+ float* input_tensor_data = tflite::GetTensorData<float>(input_tensor);
+
+ // 示例:测试输入 x = π/2, expect y ≈ 1.0
+ float x_value = 1.5708f;
+ input_tensor_data[0] = x_value;
+
+ // 7. 执行推理
+ interpreter.Invoke();
+
+ // 8. 获取输出结果
+ TfLiteTensor* output_tensor = interpreter.output(0);
+ float* output_tensor_data = tflite::GetTensorData<float>(output_tensor);
+ syslog(LOG_INFO, "Output value after inference: %f\n", output_tensor_data[0]);
+}
+```
+
+### 3、框架层:TFLite Micro 核心组件
+
+框架层是 TFLite Micro 的核心,负责模型解析、内存管理、算子调度等关键功能。该层通过静态内存分配和精简的运行时环境,确保在 openvela 平台上实现极低的系统开销。
+
+#### Micro Interpreter(微型解释器)
+
+解释器是框架的中枢,负责协调模型加载、内存分配、算子执行等流程。它包含三个核心子组件:
+
+1. Model Parser(模型解析器)
+
+ - 解析 FlatBuffers 格式的模型文件。
+ - 提取模型元数据:算子类型、张量维度、量化参数。
+ - 构建计算图数据结构。
+
+2. Subgraph Manager(子图管理器)
+
+ - 管理模型的计算子图(针对大多数嵌入式模型,通常仅含有一个子图)。
+ - 维护节点(算子)和边(张量)的拓扑关系。
+
+3. Invocation Engine(调用引擎)
+
+ - 按拓扑顺序执行算子。
+ - 管理算子的输入/输出张量绑定。
+ - 处理算子执行错误和异常。
+
+**解释器执行流程如下**:
+
+```Plain
+初始化阶段(Setup):
+1. AllocateTensors() → 规划并分配所有张量所需的内存空间 (Tensor Arena)
+
+
+推理阶段 (Inference):
+1. interpreter.input() → 获取输入张量并填充数据
+2. Invoke() → 触发推理循环
+ ├─ for each node in execution_plan(遍历执行计划中的每个节点 (Node)):
+ │ ├─ 获取算子注册信息(Registration)
+ │ ├─ 绑定输入/输出张量
+ │ └─ 调用算子的 Invoke 函数
+ └─ 返回执行状态
+3. interpreter.output() → 读取输出张量结果
+```
+
+#### Operator Kernels Library(算子内核库)
+
+算子内核是执行数学运算(如卷积、全连接)的具体实现。TFLite Micro 采用注册机制来解耦框架与具体算法实现,这使得在 openvela 上替换特定算子(例如使用硬件加速的卷积)变得非常容易。
+
+**算子接口规范**
+
+开发者若需自定义算子或封装硬件加速驱动,需遵循 `TfLiteRegistration` 接口定义:
+
+```C++
+typedef struct {
+
+ // [可选] 初始化:分配算子所需的持久化内存(如滤波器系数表)
+ void* (*init)(TfLiteContext* context, const char* buffer, size_t length);
+
+ // [可选] 释放:清理 init 分配的资源
+ void (*free)(TfLiteContext* context, void* buffer);
+
+ // [必须] 准备:校验张量维度、类型,计算临时缓冲区(Scratch Buffer)大小
+ TfLiteStatus (*prepare)(TfLiteContext* context, TfLiteNode* node);
+
+ // [必须] 执行:核心计算逻辑,从 Input Tensor 读取数据,写入 Output Tensor
+ TfLiteStatus (*invoke)(TfLiteContext* context, TfLiteNode* node);
+} TfLiteRegistration;
+```
+
+**算子实现参考:ReLU**
+
+以下代码展示了一个标准 ReLU 激活函数的实现逻辑,体现了 TFLite Micro 对类型安全和内存操作的封装:
+
+```C++
+// 1. 准备阶段:校验数据类型与维度
+TfLiteStatus ReluPrepare(TfLiteContext* context, TfLiteNode* node)
+{
+ // 校验:输入/输出张量数量
+ TF_LITE_ENSURE_EQ(context, node->inputs->size, 1);
+ TF_LITE_ENSURE_EQ(context, node->outputs->size, 1);
+
+ const TfLiteTensor* input = GetInput(context, node, 0);
+ TfLiteTensor* output = GetOutput(context, node, 0);
+
+ // 校验:张量类型
+ TF_LITE_ENSURE_TYPES_EQ(context, input->type, kTfLiteFloat32);
+
+ // 配置:调整输出张量形状与输入一致
+ return context->ResizeTensor(context, output, TfLiteIntArrayCopy(input->dims));
+}
+
+// 2. 执行阶段:数值计算
+TfLiteStatus ReluInvoke(TfLiteContext* context, TfLiteNode* node)
+{
+ const TfLiteTensor* input = GetInput(context, node, 0);
+ TfLiteTensor* output = GetOutput(context, node, 0);
+
+ const float* input_data = GetTensorData<float>(input);
+ float* output_data = GetTensorData<float>(output);
+
+ // 获取数据总长度
+ const int flat_size = MatchingFlatSize(input->dims, output->dims);
+
+ // 执行 ReLU: output = max(0, input)
+ for (int i = 0; i < flat_size; ++i) {
+ output_data[i] = (input_data[i] > 0.0f) ? input_data[i] : 0.0f;
+ }
+
+ return kTfLiteOk;
+}
+
+// 3. 注册阶段:返回函数指针结构体
+TfLiteRegistration* Register_RELU()
+{
+ static TfLiteRegistration r = {
+ nullptr, // init
+ nullptr, // free
+ ReluPrepare, // prepare
+ ReluInvoke // invoke
+ };
+ return &r;
+}
+```
+
+**算子库源码目录结构**
+
+在 `tensorflow/lite/micro/kernels/` 目录下,代码按算子功能组织:
+
+```Plain
+tensorflow/lite/micro/kernels/
+├── conv.cc # 卷积算子
+├── depthwise_conv.cc # 深度可分离卷积
+├── fully_connected.cc # 全连接层
+├── pooling.cc # 池化算子
+├── activations.cc # 激活函数(ReLU, Sigmoid 等)
+├── softmax.cc # Softmax
+├── add.cc, mul.cc, sub.cc # 逐元素运算
+├── reshape.cc, transpose.cc # 张量变换
+└── ...
+```
+
+#### Memory Planner(内存规划器)
+
+内存规划器是 TFLite Micro 实现低内存占用的关键技术。与桌面端 TensorFlow 动态分配内存不同,Micro 通过分析张量生命周期实现内存复用。
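+
+为便于直观理解,下面给出一个极简的贪心规划示意(仅演示"按生命周期复用偏移"的思路;结构体与函数均为虚构示例,并非 TFLM 内部 GreedyMemoryPlanner 的真实实现):
+
+```C++
+#include <algorithm>
+#include <cstddef>
+#include <vector>
+
+// 示例结构体:描述一个张量的大小与生命周期(以算子执行序号计)
+struct TensorSpec {
+  size_t size;    // 张量字节数
+  int first_use;  // 生命周期起点
+  int last_use;   // 生命周期终点
+  size_t offset;  // 规划结果:张量在 Arena 内的偏移
+};
+
+// 生命周期重叠的两个张量不能共享同一段内存
+static bool Overlaps(const TensorSpec& a, const TensorSpec& b) {
+  return a.first_use <= b.last_use && b.first_use <= a.last_use;
+}
+
+// 贪心首次适应:为每个张量选取不与"生命周期重叠者"冲突的最低偏移,
+// 返回容纳所有张量所需的 Arena 总大小
+size_t PlanArena(std::vector<TensorSpec>& tensors) {
+  size_t arena_size = 0;
+  for (size_t i = 0; i < tensors.size(); ++i) {
+    size_t offset = 0;
+    bool moved = true;
+    while (moved) {  // 反复推高偏移,直到不与任何已规划的重叠张量冲突
+      moved = false;
+      for (size_t j = 0; j < i; ++j) {
+        bool conflict = Overlaps(tensors[i], tensors[j]) &&
+                        offset < tensors[j].offset + tensors[j].size &&
+                        tensors[j].offset < offset + tensors[i].size;
+        if (conflict) {  // 与已规划张量冲突:推高到其末尾后重试
+          offset = tensors[j].offset + tensors[j].size;
+          moved = true;
+        }
+      }
+    }
+    tensors[i].offset = offset;
+    arena_size = std::max(arena_size, offset + tensors[i].size);
+  }
+  return arena_size;
+}
+```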
+
+## 三、平台依赖与集成
+
+在 openvela 平台上运行 TFLite Micro 并非孤立存在,它深度依赖底层的 OS 服务与硬件库。理解这些依赖关系,对于性能调优和故障排查至关重要。
+
+### 1、NuttX 内核服务
+
+TFLite Micro 通过平台抽象层与 NuttX RTOS 交互。尽管 TFLite Micro 设计为无 OS 依赖,但在 openvela 上,合理的 OS 配置能显著提升系统稳定性。
+
+#### 任务调度与同步
+
+NuttX 提供了完整的 POSIX 标准支持,TFLite Micro 的推理任务通常封装在标准的 `pthread` 或 NuttX 任务(Task)中。
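+
+下面是一个最小化的封装示意(其中 run_inference() 假定为开发者自行实现的推理入口函数,并非 openvela 或 TFLM 提供的接口):
+
+```C++
+#include <cstdio>
+#include <pthread.h>
+
+// 假设的推理入口:内部完成模型加载、AllocateTensors() 与 Invoke()
+extern void run_inference(void);
+
+// pthread 线程体:示例中仅执行一次推理
+static void* inference_thread(void* arg) {
+  (void)arg;
+  run_inference();
+  return nullptr;
+}
+
+int start_inference_task(void) {
+  pthread_attr_t attr;
+  pthread_attr_init(&attr);
+  pthread_attr_setstacksize(&attr, 16384);  // 推理任务通常需要适当加大栈空间
+
+  pthread_t tid;
+  int ret = pthread_create(&tid, &attr, inference_thread, nullptr);
+  if (ret != 0) {
+    printf("pthread_create failed: %d\n", ret);
+    return ret;
+  }
+  return pthread_join(tid, nullptr);  // 示例:同步等待推理线程结束
+}
+```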
+
+#### 内存分配器
+
+TFLite Micro 推荐使用 **Tensor Arena** 机制进行内存管理,但在初始化阶段或处理非张量数据时,仍可能与 NuttX 的内存管理器(Mm)交互。
+
+**Tensor Arena 分配策略**
+
+虽然可以使用 `malloc` 动态申请 Arena,但强烈建议采用静态分配。
+
+```C++
+// 推荐:编译时确定大小,放置于 BSS 段或特定内存段(如 CCM)
+// 预估大小方法:先分配较大空间,调用 MicroInterpreter::arena_used_bytes() 获取实际用量后调整
+#define ARENA_SIZE (100 * 1024)
+static uint8_t tensor_arena[ARENA_SIZE] __attribute__((aligned(16)));
+```
+
+### 2、硬件加速:CMSIS-NN 集成
+
+为提升在 ARM Cortex-M 核心(openvela 的主要计算单元)上的推理性能,必须集成 CMSIS-NN 库。该库利用 SIMD(单指令多数据)指令集,可将卷积和矩阵乘法的性能提升 4-5 倍。
+
+#### 构建系统配置 (Makefile)
+
+在集成 CMSIS-NN 时,核心逻辑是**替换**:引入优化版本的源文件,同时从编译列表中剔除 TFLite 自带的通用参考实现(Reference Kernels),以避免符号定义冲突。
+
+以下是针对 NuttX 构建系统的配置范本:
+
+```Makefile
+# 检测是否在 Kconfig 中开启了 CMSIS-NN 选项
+ifneq ($(CONFIG_MLEARNING_CMSIS_NN),)
+
+# 1. 定义宏:告知 TFLite Micro 启用 CMSIS-NN 路径
+COMMON_FLAGS += -DCMSIS_NN
+
+# 添加头文件搜索路径
+COMMON_FLAGS += ${INCDIR_PREFIX}$(APPDIR)/mlearning/cmsis-nn/cmsis-nn
+
+# 2. 寻找优化源文件:获取 cmsis_nn 目录下的所有 .cc 文件
+CMSIS_NN_SRCS := $(wildcard $(TFLM_DIR)/tensorflow/lite/micro/kernels/cmsis_nn/*.cc)
+
+# 3. 排除冲突文件:
+# 计算需要排除的通用实现文件名(例如 conv.cc, fully_connected.cc)
+# 逻辑:取 CMSIS_NN_SRCS 的文件名,对应到 kernels/ 根目录
+UNNEEDED_SRCS := $(addprefix $(TFLM_DIR)/tensorflow/lite/micro/kernels/, $(notdir $(CMSIS_NN_SRCS)))
+
+# 4. 从原始编译列表 CXXSRCS 中过滤掉这些通用实现
+CXXSRCS := $(filter-out $(UNNEEDED_SRCS), $(CXXSRCS))
+
+# 5. 将优化后的源文件加入编译列表
+CXXSRCS += $(CMSIS_NN_SRCS)
+
+endif
+```
diff --git a/zh-cn/edge_ai_dev/tflite_micro_overview.md b/zh-cn/edge_ai_dev/tflite_micro_overview.md
new file mode 100644
index 00000000..df252bf4
--- /dev/null
+++ b/zh-cn/edge_ai_dev/tflite_micro_overview.md
@@ -0,0 +1,423 @@
+# TFLite Micro 框架概述
+
+[ [English](../../en/edge_ai_dev/tflite_micro_overview.md) | 简体中文 ]
+
+TensorFlow Lite for Microcontrollers(以下简称 TFLite Micro)是 Google 专为资源受限的嵌入式设备设计的轻量级机器学习推理框架。作为 TensorFlow Lite 的精简版本,该框架针对微控制器(MCU)的特性进行了深度优化,支持在仅有数十 KB RAM 和数百 KB Flash 的设备上运行复杂的神经网络模型。
+
+本文档旨在介绍 TFLite Micro 的核心架构、技术挑战及其在 openvela 平台上的集成价值与应用场景。
+
+## 一、核心特性与开发流程
+
+### 1、核心特性
+
+TFLite Micro 通过以下特性解决了嵌入式 AI 的核心痛点:
+
+- **轻量化设计**:核心运行时库极其精简,无需操作系统支持,可直接在裸机环境中运行。框架采用静态内存分配策略,消除了动态内存管理的开销和碎片化风险。
+- **低功耗优化**:针对嵌入式设备功耗特性优化,支持 INT8 等量化模型。在保证推理精度的前提下,显著降低计算量与功耗,支持电池供电设备长时间运行 AI 应用。
+- **广泛的硬件生态**:支持 ARM Cortex-M、RISC-V、Xtensa 等多种主流 MCU 架构,并针对特定硬件平台提供优化的算子实现,以充分利用硬件加速能力。
+
+### 2、开发工作流
+
+TFLite Micro 提供了完整的工具链支持,典型的开发流程如下:
+
+1. **模型训练**:使用 TensorFlow 或 Keras 训练模型。
+2. **模型转换**:将训练好的模型转换为 TFLite 格式 (`.tflite`),通过量化技术在控制精度损失的前提下减小模型尺寸。
+3. **集成部署**:将转换后的模型以 C 数组或二进制文件的形式集成到 openvela 项目中运行。
+
+在 openvela 系统中集成 TFLite Micro,能够赋予物联网设备端侧智能,在保护用户隐私的同时降低云端依赖,实现更快的响应速度和更低的运营成本。
+
+## 二、微控制器端 AI 推理的挑战
+
+在微控制器上部署 AI 推理涉及资源、实时性和模型尺寸等多重技术挑战。
+
+### 1、资源限制
+
+微控制器的硬件资源极其有限,这是边缘 AI 推理面临的首要挑战:
+
+#### 内存约束
+
+- 典型 IoT MCU 的 RAM 仅为 32KB 至 512KB,Flash 约为 256KB 至 2MB。
+- 相比之下,即使是一个简单的深度学习模型也可能需要数 MB 的参数存储空间。
+
+**应对策略**:
+
+- 模型经过量化压缩,将浮点参数转换为 INT8 或更低精度。
+- 推理框架本身极度轻量,运行时开销控制在几十 KB 以内。
+- 采用静态内存分配策略,避免动态内存碎片化。
+- 优化中间计算结果的存储,实现张量缓冲区复用。
+
+#### 计算能力限制
+
+- MCU 主频通常在几十到几百 MHz,往往缺乏浮点运算单元(FPU)或仅支持单精度浮点运算,更谈不上 GPU 或专用 AI 加速器。这导致复杂的矩阵运算需要经过大量优化才能满足实时性要求。
+
+**应对策略**:
+
+- 充分利用硬件特性(如 ARM Cortex-M 的 SIMD 指令)优化矩阵运算。
+- 算子实现需要针对特定架构进行汇编级优化。
+- 模型结构选择受限,倾向于使用计算高效的轻量级网络架构(如 MobileNet、SqueezeNet)。
+
+#### 功耗约束
+
+- 许多物联网设备依靠电池供电,在微瓦到毫瓦级功耗下工作。AI 推理作为计算密集型任务,功耗控制至关重要。
+
+**应对策略**:
+
+- 推理频率需要根据应用场景优化,避免持续高频运算。
+- 支持低功耗模式,在待机时关闭推理引擎。
+- 量化模型不仅减小体积,也显著降低计算功耗。
+- 需要与硬件电源管理机制深度协同。
+
+### 2、实时性要求
+
+边缘 AI 应用通常具有严格的延迟约束,这与云端推理有本质区别。
+
+#### 低延迟需求
+
+- 语音唤醒、手势识别等应用要求从数据采集到推理结果输出的端到端延迟在几十到几百毫秒内。
+
+**应对策略**:
+
+- 推理引擎启动快速,避免冷启动延迟。
+- 算子执行高效,减少单次推理时间。
+- 数据预处理流程优化,降低从传感器到模型输入的转换开销。
+
+#### 确定性执行
+
+- 在实时操作系统(RTOS)环境下,任务调度需要可预测的执行时间。
+
+**应对策略**:
+
+- 避免不确定的内存分配操作。
+- 推理时间应相对稳定,便于任务时序规划。
+- 支持中断驱动的推理触发机制。
+
+#### 离线优先
+
+- 边缘设备不能依赖网络连接,所有推理在本地完成。
+
+**应对策略**:
+
+- 模型完全驻留在设备 Flash 中。
+- 无需云端辅助的数据处理能力。
+- 网络断连情况下仍能正常工作。
+
+### 3、模型尺寸约束
+
+模型尺寸直接影响部署的可行性,这是微控制器 AI 的核心矛盾。
+
+#### 存储限制
+
+- 完整的深度学习模型(如 ResNet-50)可能有 100MB 以上,而 MCU 的 Flash 通常只有几百 KB 到 2MB。
+
+**应对策略**:
+
+- 模型经过剪枝、蒸馏等技术压缩。
+- 量化为 INT8 可减少 75% 的模型体积。
+- 选择参数效率高的模型架构(如深度可分离卷积)。
+
+#### 精度与大小的权衡
+
+- 压缩模型不可避免地带来精度损失。
+
+**应对策略**:
+
+- 需要在可接受的精度范围内最大化压缩比。
+- 针对特定任务进行模型定制和微调。
+- 采用量化感知训练(Quantization-Aware Training)减少精度下降。
+
+## 三、TFLite Micro 架构解析
+
+TFLite Micro 采用了解释器架构,并通过一系列设计选择实现了极致的轻量化,有效应对了上述挑战,为微控制器提供了可行的 AI 推理方案。
+
+### 1、轻量级解释器设计
+
+TFLite Micro 采用解释器架构运行神经网络模型,但与传统解释器相比,它进行了激进的轻量化改造。
+
+- **模型格式**:使用 FlatBuffers 序列化模型,具有以下优势。
+
+ - 零拷贝访问:在支持内存映射 Flash (XIP) 的设备上,模型数据可以直接从 Flash 读取,无需加载到 RAM。
+ - 紧凑存储:元数据开销极小,模型文件尺寸接近参数实际大小。
+ - 快速解析:无需复杂的反序列化过程,解释器启动速度快。
+ - 跨平台兼容:与标准 TFLite 模型格式兼容,工具链统一。
+
+- **解释执行流程**:
+
+ - 模型加载:模型 FlatBuffer 常量驻留在 Flash/ROM,通过指针直接访问。
+ - 解释器初始化:分配 Tensor Arena(张量工作区)。
+ - 算子注册:根据模型使用的算子加载对应的实现。
+ - 推理执行:按照计算图顺序调用算子的 `Invoke` 函数。
+ - 结果输出:从输出张量读取推理结果。
+
+- **内存高效的设计选择**:
+
+ - 静态计算图:模型结构在模型生成时确定,无动态图开销。
+
+### 2、外部依赖少
+
+TFLite Micro 的一个关键设计原则是**减少外部依赖**,使其能在各种受限环境中运行。
+
+- **标准库依赖小**:
+
+ - 不依赖 `malloc`/`free`,所有内存从预分配的 Arena 中分配。
+ - 提供精简的替代实现(如 `micro_log`、`micro_time`)。
+
+- **操作系统中立**:
+
+ - 可在裸机环境运行,无需 RTOS。
+ - 通过平台抽象层(PAL)适配不同系统。
+ - NuttX、FreeRTOS、Zephyr 等 RTOS 均可无缝集成。
+
+- **硬件抽象**:
+
+ - 通过条件编译适配不同架构(ARM、RISC-V、Xtensa 等)。
+ - 提供优化的汇编内核(如 ARM CMSIS-NN 集成)。
+ - 支持硬件加速器接口(如 Arm Ethos-U NPU)。
+
+### 3、支持的算子和模型类型
+
+TFLite Micro 提供了精心筛选的算子集,覆盖最常用的神经网络层。
+
+- **卷积类算子**(计算机视觉核心):
+
+ - `CONV_2D`:标准二维卷积。
+ - `DEPTHWISE_CONV_2D`:深度可分离卷积(MobileNet 的核心)。
+ - 支持多种填充模式(SAME、VALID)和步长配置。
+
+- **池化与激活**:
+
+ - `MAX_POOL_2D`、`AVERAGE_POOL_2D`:下采样层。
+ - `RELU`:常用激活函数。
+ - `SOFTMAX`:分类层。
+ - `TANH`、`LOGISTIC`:循环网络常用激活。
+
+- **全连接**:
+
+ - `FULLY_CONNECTED`:全连接层。
+
+- **张量操作**:
+
+ - `RESHAPE`、`SQUEEZE`、`EXPAND_DIMS`:维度变换。
+ - `ADD`、`MUL`、`SUB`:逐元素运算。
+
+- **典型支持的模型**:
+
+ - **MobileNet V1**:轻量级图像分类。
+ - **Micro Speech**:语音关键词识别(Google 官方示例)。
+ - **Person Detection**:人体检测。
+ - **Magic Wand**:手势识别。
+ - 自定义轻量级模型(如浅层 CNN、小型 RNN)。
+
+- **量化支持**:
+
+ - **INT8 量化**:主流推荐方式,参数和激活均为 8 位整数。
+ - **INT16 激活**:更高精度的中间计算(部分算子)。
+ - **混合量化**:关键层保留高精度,其他层量化。
+ - 量化感知训练(QAT)和训练后量化(PTQ)均支持。
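+
+上述 INT8 量化采用标准的仿射映射:real = scale × (q − zero_point)。下面以一段独立的示意代码说明该映射关系(scale 与 zero_point 实际可从张量的量化参数中读取,此处以裸参数演示,函数为虚构示例):
+
+```C++
+#include <cmath>
+#include <cstdint>
+
+// 反量化:int8 → float
+static inline float Dequantize(int8_t q, float scale, int32_t zero_point) {
+  return scale * (static_cast<float>(q) - static_cast<float>(zero_point));
+}
+
+// 量化:float → int8(四舍五入后裁剪到 int8 表示范围)
+static inline int8_t Quantize(float real, float scale, int32_t zero_point) {
+  int32_t q = static_cast<int32_t>(std::lround(real / scale)) + zero_point;
+  if (q < -128) q = -128;
+  if (q > 127) q = 127;
+  return static_cast<int8_t>(q);
+}
+```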
+
+### 4、内存管理机制
+
+TFLite Micro 采用独特的静态内存管理策略,这是其能够在 RAM 资源极度受限(如仅有几十 KB)的微控制器上高效运行的关键。
+
+#### Tensor Arena(张量工作区)
+
+Tensor Arena 是 TFLite Micro 内存管理的核心概念。
+
+- **定义与分配**:应用程序必须在推理开始前分配一块连续的内存区域(即 Tensor Arena)。TFLite Micro 运行时将从该区域分配所有中间张量(Tensors)和临时缓冲区。
+- **大小估算**:开发者需根据模型的复杂度预估 Arena 的大小。
+
+#### 内存规划与复用
+
+为了最大化利用有限的内存,解释器在模型加载阶段会执行严格的内存规划(Memory Planning)。
+
+**规划流程**:
+
+1. **生命周期分析**:分析计算图,确定每个张量的创建和销毁时间点(生命周期)。
+2. **依赖构建**:构建张量间的依赖关系图,识别哪些张量的生命周期互不重叠,从而具备内存复用的条件。
+3. **地址分配**:使用贪心算法计算每个张量在 Arena 中的内存偏移量。
+4. **布局生成**:生成最终的静态内存布局图(Memory Plan)。
+
+**典型复用案例**:
+
+假设一个包含三层的简单网络,其张量生命周期如下:
+
+- **Layer 1 (Conv2D)**:生成 Output Tensor A(生命周期覆盖 Layer 1 至 Layer 2)。
+- **Layer 2 (ReLU)**:使用张量 A,生成 Output Tensor B(生命周期覆盖 Layer 2 至 Layer 3)。
+- **Layer 3 (MaxPool)**:使用张量 B,生成 Output Tensor C(生命周期从 Layer 3 开始,作为最终输出持续到推理结束)。
+
+内存分配结果:
+
+- **张量 A 与张量 C**:由于两者的生命周期不重叠(A 在 Layer 2 结束时销毁,C 在 Layer 3 开始时创建),内存规划器将安排它们**共享同一块物理内存地址**。
+- **张量 B**:由于 B 的生命周期与 A 和 C 均有重叠,规划器将为其分配独立的内存空间。
+
+#### 内存对齐与优化
+
+为了提升计算效率,TFLite Micro 在内存管理层面实施了多项底层优化:
+
+- **地址对齐**:默认按一定字节对齐(常见为 16,可配置),以充分利用 ARM Cortex-M 等处理器的 SIMD(单指令多数据)指令集加速运算。
+- **权重对齐**:对模型参数权重进行地址对齐优化,减少 CPU 访问周期,提升读取效率。
+- **堆栈优化**:优化函数调用路径,避免深度嵌套调用,从而降低对系统堆栈(Stack)空间的占用。
+
+## 四、TFLite Micro 在 openvela 平台的集成价值
+
+openvela 平台基于 NuttX RTOS 构建,为物联网设备提供了统一且标准化的软件环境。TFLite Micro 与 openvela 的深度结合,不仅解决了底层资源的限制问题,更充分释放了边缘智能的应用潜力。
+
+### 1、物联网场景的深度适配
+
+openvela 面向的智能音箱、智能门锁、环境传感器及可穿戴设备等典型 IoT 终端,其业务特性与 TFLite Micro 的设计理念高度契合:
+
+- **坚持本地处理优先**:
+
+ - **隐私保护**:确保语音、图像等敏感数据完全在设备端处理,消除上传云端的隐私泄露风险。
+ - **低延迟响应**:本地推理可实现毫秒级响应,避免了云端交互带来的网络延迟(通常数百毫秒)。
+ - **离线可用**:即便在网络断连的情况下,设备仍能执行核心智能功能,确保持续的用户体验。
+
+- **满足长期运行需求**:
+
+ - **功耗优化**:INT8 量化模型结合 openvela 的低功耗管理,支持电池供电设备持续运行数月。
+ - **系统稳定性**:TFLite Micro 的静态内存分配机制消除了内存碎片和泄漏风险,满足 7x24 小时稳定运行的严苛要求。
+ - **OTA 友好**:极小的模型尺寸使得远程固件更新(FOTA)更加快速、可靠且节省流量。
+
+- **成本敏感型设计**:
+
+ - **降低硬件成本**:支持在低成本通用 MCU 上实现 AI 能力,无需额外部署昂贵的专用 NPU 芯片。
+ - **节省运营成本**:大幅减少对云端推理服务的调用,降低了服务器带宽和计算算力成本。
+ - **规模化部署**:统一的 openvela 平台屏蔽了底层硬件差异,简化了大规模设备的管理与维护。
+
+### 2、基于 NuttX 的技术优势
+
+NuttX 作为符合 POSIX 标准的实时操作系统,其轻量级与模块化的特性为 TFLite Micro 提供了坚实的系统级支持:
+
+- **资源管理协同**:
+
+ - **任务调度**:TFLite Micro 推理引擎可作为标准的 NuttX 任务运行,接受系统优先级调度,确保关键任务的实时性。
+ - **内存隔离**:利用 NuttX 对 MPU(内存保护单元)的支持,有效隔离推理引擎与其他系统组件,提升系统安全性。
+ - **电源管理**:结合 NuttX 的 PM(电源管理)框架,系统可在推理空闲间隙自动进入低功耗模式。
+
+- **驱动与生态集成**:
+
+ - **数据采集**:NuttX 丰富的驱动模型(I2C, SPI, ADC, Video, Audio)简化了传感器数据的标准化采集。
+ - **存储管理**:支持 LittleFS 等文件系统,便于模型文件的存储、读取及版本管理。
+ - **网络通信**:网络协议栈(TCP/IP, MQTT)为模型的远程下发和更新提供了基础通道。
+
+- **调试与诊断**:
+
+ - 集成 `syslog` 系统,便于记录推理日志和错误追踪。
+ - 支持 GDB 远程调试,显著加速开发与优化周期。
+
+**集成架构示意图:**
+
+```Plain
+┌─────────────────────────────────────────┐
+│ openvela Application Layer │
+│ (Smart Home, Wearable, Industrial) │
+└─────────────────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────┐
+│ TFLite Micro Inference Engine │
+│ (Model Interpreter + Optimized Ops) │
+└─────────────────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────┐
+│ NuttX RTOS Core Services │
+│ (Task Scheduler, Memory, Drivers, FS) │
+└─────────────────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────┐
+│ Hardware Abstraction Layer │
+│ (ARM Cortex-M, RISC-V, ESP32, etc.) │
+└─────────────────────────────────────────┘
+```
+
+### 3、典型应用场景详解
+
+在 openvela 平台上,TFLite Micro 已广泛应用于多种边缘智能场景。以下是四类典型应用的详细技术方案。
+
+#### 场景 1:语音唤醒与命令识别
+
+- **场景描述**:智能音箱、智能家居控制器需要持续监听唤醒词(如"小爱同学"),并识别简单语音命令。
+- **技术方案**:
+
+ - **模型选择**:基于 CNN 或 RNN 的关键词检测模型(如 Micro Speech)。
+ - **模型大小**:18KB(量化后)。
+ - **推理延迟**:每帧(30ms 音频)推理时间 < 5ms。
+ - **功耗优化**:
+
+ - 使用低功耗 ADC 采集音频(16kHz 采样率)。
+ - 轻量级 VAD(语音活动检测)预过滤,减少无效推理。
+ - 检测到唤醒词后激活主处理器进行复杂识别。
+
+- **openvela 平台优势**:
+
+ - NuttX 音频子系统提供标准化音频数据流。
+ - 实时任务调度保证推理实时性。
+ - 低功耗模式支持长时间待机。
+
+#### 场景 2:图像识别与物体检测
+
+- **场景描述**:智能门锁人脸识别、工业设备缺陷检测、智能摄像头物体识别。
+- **技术方案**:
+
+ - **模型选择**:MobileNet V1(图像分类)。
+ - **模型大小**:300KB-1MB。
+ - **推理延迟**:96x96 输入分辨率下,推理耗时约 200-500ms(取决于 MCU 性能)。
+ - **输入预处理**:
+
+ - 从摄像头(如 OV2640)获取 RGB/YUV 图像。
+ - 缩放到模型输入尺寸(Bilinear 插值)。
+ - 归一化到 [-128, 127] 范围(INT8 输入)。
+
+- **应用案例**:
+
+ - **智能门锁**:本地完成人脸检测与活体判断,仅在必要时上传特征值进行云端验证,平衡安全性与功耗。
+ - **工业检测**:实时检测产品缺陷,降低云端带宽压力。
+ - **野生动物监测**:长时间运行的电池供电相机,本地识别目标动物后才传输图像。
+
+#### 场景 3:传感器数据异常检测
+
+- **场景描述**:工业设备预测性维护、智能建筑能耗异常检测、健康监测设备。
+- **技术方案**:
+
+ - **模型选择**:AutoEncoder(自编码器)或 1D-CNN。
+ - **模型大小**:10KB - 50KB(处理低维时序数据)。
+ - **推理频率**:非实时触发(如每分钟一次)。
+ - **数据流程**:
+
+ - 多传感器数据融合(温度、振动、压力等)。
+ - 滑动窗口特征提取(如 FFT 频谱特征)。
+ - 模型输出异常评分,超过阈值触发报警或维护请求。
+
+- **openvela 平台优势**:
+
+ - NuttX 支持多传感器并发采集。
+ - 文件系统存储历史数据用于云端再训练。
+ - 网络协议栈上报异常事件。
+
+#### 场景 4:手势与姿态识别
+
+- **场景描述**:可穿戴设备手势控制、智能家居非接触交互、运动健身监测。
+- **技术方案**:
+
+ - **模型选择**:基于加速度计/陀螺仪数据的 LSTM 或 1D-CNN。
+ - **模型大小**:20KB - 100KB。
+ - **推理延迟**:实时处理延迟 < 50ms。
+ - **应用示例**:
+
+ - 智能手环:识别跑步、游泳、骑行等运动类型。
+ - 智能遥控器:挥手手势切换频道。
+ - AR 眼镜:头部姿态跟踪。
+
+- **关键技术**:
+
+ - **数据增强**:训练时引入噪声和旋转,以适应不同用户的佩戴习惯。
+ - **在线校准**:设备首次使用时进行个性化调整。
+ - **低功耗优化**:运动检测触发推理,静止状态暂停。
+
+## 五、总结
+
+- TFLite Micro 与 openvela 平台的结合,为微控制器端的 AI 推理提供了一套完整的解决方案。
+- 它不仅在技术层面克服了资源、实时性和碎片化的挑战,更在业务层面实现了隐私保护、低成本和高可靠性。
+- 通过标准化的开发流程和系统级支持,开发者能够快速将智能算法部署到各类 IoT 设备中,推动边缘智能的规模化落地。
+- 接下来的章节将深入探讨如何在 openvela 平台上集成、部署和优化 TFLite Micro 应用。
diff --git a/zh-cn/faq/devoloper_tech_faq.md b/zh-cn/faq/devoloper_tech_faq.md
new file mode 100644
index 00000000..692b1543
--- /dev/null
+++ b/zh-cn/faq/devoloper_tech_faq.md
@@ -0,0 +1,225 @@
+# 开发者常见问题解答
+
+[ [English](./../../en/faq/devoloper_tech_faq.md) | 简体中文 ]
+
+### 一、社区与通用
+
+#### 1. 遇到技术问题或 Bug 怎么办?
+
+如果是技术类问题,请在 [Issue 页面](../../../../docs/issues) 提交。
+
+- 对于阻塞性问题,提交 Issue 后可直接将链接发送至微信群以便快速响应。
+- 对于非阻塞性问题,社区维护团队会定期在 Issue 中进行回复处理。
+
+#### 2. 社区贡献有奖励吗?
+
+是的,社区设有贡献激励机制。详细的奖励规则和说明请参阅[贡献奖励说明](../../../../docs/issues)。
+
+#### 3. IDE 什么时候上线?
+
+预计于 2026 年初正式上线。
+
+#### 4. 源码仓库的 Gitee 和 GitHub 版本有区别吗?
+
+两者没有任何区别。GitHub 和 Gitee 内部仓库保持双向实时同步,您可以根据网络情况选择任意一个进行访问。
+
+### 二、编译与构建
+
+#### 5. openvela 的应用开发(如 Hello World)是运行在内核态还是用户态?
+
+系统主要支持三种编译模式。
+
+目前官方推荐使用 **Flat Build (平铺模式)**,在该模式下应用和内核处于同一地址空间(类似内核态),能够提供最优的性能,适用于模组、手环等嵌入式小系统。
+
+此外也支持 Kernel Build(用户态隔离)和 Product Mode,但在资源受限场景下使用较少。
+
+#### 6. 使用推荐的 Flat Build 模式,应用崩溃会导致整个系统 Crash 吗?
+
+理论上在 Flat Build 模式下,由于应用与内核共用空间,应用崩溃确实可能影响系统。但 openvela 正在开发多态隔离保护机制以防止内存踩踏。虽然系统支持运行 ELF 二进制文件,但在嵌入式场景下,官方仍强烈推荐使用 Flat Build 模式。
+
+#### 7. openvela 是否支持增量编译?每次修改代码都要全部重编吗?
+
+系统支持增量编译。
+
+- 如果您只修改了 `.c` 或 `.h` 源文件,可以直接追加编译,速度较快。
+- 但如果您修改了 `Kconfig` (menuconfig) 配置文件(即打开或关闭了某些功能),为了确保配置生效,建议进行完整的重新编译。
+
+### 三、系统架构与内核
+
+#### 8. openvela 的协议栈在模块中还是在 AP 侧?
+
+协议栈(如 TCP/IP, Bluetooth Host Stack 等)均运行在 AP(主处理器)侧。外挂的 WiFi 或蓝牙模块通常仅作为收发器(Transceiver)使用,通过 HCI 或 SDIO 等接口与主控通信,模块内部主要运行固件。
+
+#### 9. 代码中看到很多以 NX_ 开头的函数(如 nx_read),我应该在应用程序中使用它们吗?
+
+**不建议使用**。
+
+以 `NX_` 开头的函数通常为内核内部使用的系统调用或底层封装。
+
+为了保证代码的规范性和可移植性(openvela 已通过 POSIX PSE52 认证),请务必使用标准的 POSIX 接口(如 `open`, `read`, `pthread_create`)进行开发。
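+
+例如,下面是一个符合上述规范的最小示例(文件路径仅作演示):
+
+```C++
+#include <fcntl.h>
+#include <stdio.h>
+#include <unistd.h>
+
+int main(void) {
+  // 使用标准 POSIX 接口(open/write/close),而非 nx_* 内核内部函数
+  int fd = open("/dev/console", O_WRONLY);
+  if (fd < 0) {
+    perror("open");
+    return 1;
+  }
+
+  const char msg[] = "hello openvela\n";
+  write(fd, msg, sizeof(msg) - 1);
+  close(fd);
+  return 0;
+}
+```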
+
+#### 10. 在隔离模式下,用户内存采用的是物理平坦模型还是虚拟地址映射?
+
+系统采用的是**物理平坦内存模型 (Flat Memory Model)**。
+
+在这种模型下,用户内存并非像 Linux 那样通过 MMU 进行虚拟地址映射,而是在物理平坦内存中划分出独立的段。
+
+#### 11. 不同进程(Task)的用户内存段是否具有相同的虚拟基址?
+
+**不具有**。
+
+由于不存在虚拟地址重叠,不同的进程并不共享相同的虚拟基址(即并非所有进程都从同一地址开始,例如 0x0000)。每个进程在物理内存中拥有独立的基址,通过物理地址范围来区分不同的任务。
+
+#### 12. 该系统的隔离机制依赖于 MMU 还是 MPU?
+
+系统的内存隔离主要依赖于 **MPU(Memory Protection Unit)**。
+
+这是为了在不具备 MMU 的芯片(如 Cortex-M 系列)上也能实现安全隔离。
+
+#### 13. 为什么在没有 MMU 的情况下也能实现安全隔离?
+
+这是通过 MPU 在物理内存上定义访问权限区域(Regions)来实现的。
+
+系统为每个任务分配特定的物理内存区域,并利用 MPU 限制该任务只能访问其被分配的区域,从而在物理寻址层面上实现任务间的安全隔离。
+
+#### 14. 在多任务环境下,用户态的堆(Heap)和栈(Stack)是如何分配的?
+
+为了配合 MPU 的区域保护机制,每个任务(Task)在技术实现上都必须拥有独立的用户态堆和栈,以确保运行时数据存储互不干扰,防止任务间出现内存越界访问。
+
+### 四、开发环境与工具
+
+#### 15. 文档中提到的 JLink 和 Trace32 调试工具是必须的吗?
+
+这取决于您的运行目标。
+
+如果是真机调试,通常需要 JLink 或 Trace32 等硬件调试器。如果您使用 Simulator (Linux 本地运行) 或 Emulator (QEMU/Goldfish) 进行开发,系统自带调试机制,直接使用 GDB 即可,无需额外硬件。
+
+#### 16. 在 MacOS (M1/M2) 上配置环境时,模拟器启动失败(缺库)怎么办?
+
+目前 QEMU/Goldfish 在 MacOS 上的兼容性测试相对较少,可能会遇到库缺失或指令集转换效率问题。
+
+现阶段强烈推荐在 MacOS 上安装 **Ubuntu 22.04 虚拟机** 进行开发,这是经过验证最充分、最稳定的环境。
+
+#### 17. 在 Ubuntu 虚拟机中执行 repo sync 没有反应或下载失败?
+
+请按顺序排查以下几点:
+
+- 确认您是基于 `trunk` 分支下载代码。
+- 确认操作系统版本为 Ubuntu 22.04。
+- 检查网络连接及代理设置(可能存在网络访问受限的问题)。
+
+若排查后仍有问题,请截图报错信息并在社区提交 Issue。
+
+#### 18. openvela 是否有配套的 VS Code 插件或 IDE?
+
+是的,官方有一个专门基于 VS Code 定制的 IDE,支持 openvela 开发。
+
+目前版本尚未完全对外开源发布,待正式发布后会第一时间同步给开发者使用。
+
+#### 19. 快应用 IDE 中的 AI 助手(如豆包插件)只能读取代码,无法编辑/自动修改代码?
+
+这种情况通常是由于 VS Code 核心版本更新较快,插件与 IDE 内置的 VS Code 源码版本存在同步延迟导致的兼容性问题。
+
+建议反馈具体的插件版本和 IDE 版本,开发团队会排查修复。
+
+#### 20. 为什么在 QEMU 的 goldfish arm64 配置下,开启网络配置却无法看到网络接口?
+
+因为 `goldfish arm64` 等基础配置主要用于验证 CPU 架构和内核基础功能,并未默认开启完整的网络桥接或外设支持。
+
+若需验证网络或多媒体功能,建议使用产品形态的配置文件,例如 `smart speaker` 或 `ARMv7a Goldfish` 的完整配置。
+
+#### 21. 在模拟器中运行程序创建的文件重启后丢失,如何实现数据持久化?
+
+推荐使用 **9PFS (9P File System)** 功能。
+
+通过将宿主机(Host PC)的一个文件夹直接挂载映射到模拟器中,可以实现数据直接写入 PC 硬盘,既实现了持久化又方便 PC 端查看。
+
+#### 22. 为什么手环/手表类低功耗设备也使用 QEMU Goldfish (ARM A系列) 进行模拟?
+
+这主要是为了统一教学平台并便于管理,目前采用基于 Google Goldfish 的 QEMU 平台。
+openvela 操作系统屏蔽了底层架构差异,底层是 A 系列还是 M 系列对上层应用和框架的学习影响不大。未来将发布 ARM M/R 系列的模拟器支持。
+
+#### 23. 在实体开发板尚未到位的情况下,是否可以开始驱动开发的学习和课程设计?
+
+**完全可以**。
+
+建议优先使用模拟器(QEMU)。驱动框架在模拟器与实体板上是一致的,您可以先基于模拟器完成理论学习、框架开发和内存管理等核心概念的学习,待开发板到位后再进行硬件适配验证。
+
+#### 24. Telephony(电话/通信)相关业务是否需要在真实设备上进行验证?
+
+推荐使用**模拟器**。
+
+真机调试通信业务门槛较高(需 Modem、SIM 卡、入网)。openvela 模拟器内置了 Modem Simulator,可完整模拟拨打电话、收发短信等流程,足以满足教学需求。
+
+#### 25. 快应用打包生成的 RPK 文件可以直接在 openvela 设备上运行吗?
+
+目前暂时不行。快应用框架引擎(Runtime)计划于 **2026 年 2 月份** 左右以库的形式开源并集成进入系统。目前阶段建议使用模拟器进行学习和开发。
+
+### 五、硬件适配与移植
+
+#### 26. openvela 可以移植到目前官方不支持的硬件平台(如 STM32)吗?
+
+**可以**。
+
+openvela 全面兼容 NuttX 内核,理论上所有 NuttX 支持的硬件平台,openvela 都可以进行平滑适配和移植。
+
+#### 27. ESP32 系列开发板目前的支持度如何?
+
+虽然底层兼容 NuttX,但目前针对 ESP32 的适配和测试尚未完全覆盖,可能部分 Demo 无法直接运行。如果需要稳定的开发体验,目前建议优先使用官方验证过的 ARM 平台开发板。
+
+#### 28. 在进行驱动或底层开发时,代码应该提交到哪个目录?
+
+请根据代码的通用性决定:
+
+- 通用的驱动框架、调度代码或 Bug 修复建议提交至 `nuttx` 主目录(如 `drivers` 下)。
+- 特定芯片厂商或私有的板级驱动代码,建议存放在 `vendor` 目录下。
+
+openvela 遵循 Apache 协议,您可以自由选择是否开源。
+
+### 六、应用框架与多媒体
+
+#### 29. openvela 的快应用底层引擎是 Node.js 还是 V8?
+
+都不是。openvela 设备端的快应用引擎基于 **QuickJS**。
+
+#### 30. 快应用和原生应用(Native App)在运行机制上有什么区别?
+
+- 快应用运行在系统的一个独立容器中,与系统隔离,崩溃不易导致系统死机,通过 JS 接口调用底层能力。
+- 而原生应用直接调用系统 API,性能更高但与系统的耦合度也更高。
+
+#### 31. openvela 目前是否支持运行 MPlayer?
+
+目前暂不支持直接运行 MPlayer,官方尚未对其进行移植。
+
+#### 32. 当前系统下有哪些可用的多媒体开发工具或框架?
+
+目前可用的方案包括:已完成移植的 FFmpeg、系统内置的原生多媒体工具集(请参考 [Sim 环境音频功能开发指南](../quickstart/emulator/sim_audio_guide.md)),以及 `libx264`、`openh264`、`libopus` 等已移植的开源编解码库。
+
+#### 33. 哪些 Linux 下的常用多媒体工具适合移植到 openvela?
+
+纯软件实现(Pure Software)的工具通常比较容易移植。而强依赖特定硬件驱动或硬件加速的工具,则无法直接移植,必须基于 openvela 现有的多媒体框架进行适配开发。
+
+#### 34. 如果需要移植第三方代码,有无参考案例或路径?
+
+建议开发者直接参考源码目录下的 `apps/external` 文件夹。该目录包含大量已移植的第三方库,是理解构建系统和移植方法的最佳实践。
+
+#### 35. 在 openvela 上开发图形界面,支持 Qt 或 GTK/JDK 吗?
+
+**不支持且不推荐**。
+
+- Qt 和 GTK 框架对于嵌入式 RTOS 来说过于厚重。
+- 官方推荐使用 **LVGL**,团队已对其进行了深度优化并与 NuttX 系统结合良好。
+
+#### 36. openvela 是否支持 MQTT, CoAP, Matter 等物联网协议?
+
+**支持**。
+
+系统内部已集成 MQTT, CoAP 及 Matter(部分版本)。
+
+相关库通常位于 `apps/netutils` 或 `external` 目录下,可直接参考源码。
+
+#### 37. 只学习多媒体开发,是否必须深入掌握内核原理?
+
+**不需要**。
+
+仅需掌握基础的系统调用(如线程、锁、消息队列、Socket),学习重点应放在 Pipeline 设计(解码、后处理)上,无需深究调度算法等内核底层实现。
diff --git a/zh-cn/quickstart/emulator/sim_audio_guide.md b/zh-cn/quickstart/emulator/sim_audio_guide.md
new file mode 100644
index 00000000..3a9673f4
--- /dev/null
+++ b/zh-cn/quickstart/emulator/sim_audio_guide.md
@@ -0,0 +1,330 @@
+# Sim 环境音频功能开发指南
+
+[ [English](./../../../en/quickstart/emulator/sim_audio_guide.md) | 简体中文 ]
+
+## 一、 简介
+
+本文档旨在指导开发者在 openvela Sim(模拟器)环境中进行音频功能的开发与测试。通过 Sim 环境,开发者可以利用 host 主机的音频能力模拟嵌入式设备的音频输入输出,验证驱动逻辑与中间件功能。
+
+主要测试范围包括:
+
+1. 使用 `nxplayer`、`nxrecorder`、`nxlooper` 验证 **Audio Driver** 的基础功能。
+2. 使用 `mediatool` 验证 **Media Framework** 的业务逻辑。
+
+## 二、 模块架构
+
+Sim 环境下的音频子系统由以下核心模块构成:
+
+1. **Audio Driver**
+
+ - 在 Sim 环境下,底层驱动通过映射 Host 主机(Linux)的 ALSA 接口来模拟音频硬件的输入与输出。
+
+2. **命令行工具集 (CLI Tools)**
+
+ - **nxplayer**:音频播放测试工具。
+ - **nxrecorder**:音频录制测试工具。
+ - **nxlooper**:音频回环(Loopback)测试工具。
+ - 以上工具均基于 Audio Driver 实现。
+
+3. **Media Framework**
+
+ - 包含 Media Framework、RPC 通信、Audio Policy(音频策略)等组件。
+ - 对外提供播放、录音、音频通路切换及音量控制等标准接口。
+
+4. **mediatool**
+
+ - 基于 Media Framework 实现的命令行交互程序,用于测试框架层功能。
+
+## 三、 编译配置
+
+请在 openvela 的构建系统(Kconfig)中进行如下配置。
+
+### 1、Audio Driver 配置
+
+启用基础音频驱动支持及缓冲区配置:
+
+```Makefile
+CONFIG_AUDIO=y # 启用 AUDIO 子系统
+CONFIG_AUDIO_NUM_BUFFERS=2 # 驱动缓冲区数量
+CONFIG_AUDIO_BUFFER_NUMBYTES=8192 # 单个缓冲区大小 (Bytes)
+```
+
+### 2、命令行工具配置
+
+启用 `nxplayer`、`nxrecorder` 和 `nxlooper` 工具:
+
+```Makefile
+CONFIG_SYSTEM_NXPLAYER=y
+CONFIG_SYSTEM_NXRECORDER=y
+CONFIG_SYSTEM_NXLOOPER=y
+
+# 其他相关配置保持默认即可
+```
+
+### 3、Media Framework 配置
+
+Media Framework 支持跨核操作。在 Sim 环境中,通常涉及 AP(应用处理器)与 Audio DSP(数字信号处理器)的模拟。
+
+#### AP 侧配置
+
+将 Media Framework 主体编译在 AP 核时的配置:
+
+```Makefile
+CONFIG_MEDIA=y
+CONFIG_MEDIA_SERVER=y
+CONFIG_MEDIA_SERVER_CONFIG_PATH="/etc/media/"
+CONFIG_MEDIA_SERVER_PROGNAME="mediad"
+CONFIG_MEDIA_SERVER_STACKSIZE=2097152
+CONFIG_MEDIA_SERVER_PRIORITY=245
+CONFIG_MEDIA_TOOL=y
+CONFIG_MEDIA_TOOL_STACKSIZE=16384
+CONFIG_MEDIA_TOOL_PRIORITY=100
+CONFIG_MEDIA_CLIENT_LISTEN_STACKSIZE=4096
+
+CONFIG_PFW=y
+CONFIG_LIB_XML2=y
+CONFIG_HAVE_CXX=y
+CONFIG_HAVE_CXXINITIALIZE=y
+CONFIG_LIBCXX=y
+CONFIG_LIBSUPCXX=y
+```
+
+#### AUDIO 侧配置
+
+将 Media Framework 主体编译在 Audio 核时的配置(包含 FFmpeg 支持):
+
+```Makefile
+CONFIG_MEDIA=y
+CONFIG_MEDIA_SERVER=y
+
+# CONFIG_MEDIA_FOCUS is not set
+CONFIG_MEDIA_SERVER_CONFIG_PATH="/etc/media/"
+CONFIG_MEDIA_SERVER_PROGNAME="mediad"
+CONFIG_MEDIA_SERVER_STACKSIZE=81920
+CONFIG_MEDIA_SERVER_PRIORITY=245
+CONFIG_MEDIA_TOOL=y
+CONFIG_MEDIA_TOOL_STACKSIZE=16384
+CONFIG_MEDIA_TOOL_PRIORITY=100
+CONFIG_MEDIA_CLIENT_LISTEN_STACKSIZE=4096
+
+# Audio Policy
+CONFIG_PFW=y
+CONFIG_LIB_XML2=y
+CONFIG_HAVE_CXX=y
+CONFIG_HAVE_CXXINITIALIZE=y
+CONFIG_LIBCXX=y
+CONFIG_LIBSUPCXX=y
+CONFIG_KVDB=y
+
+# FFmpeg 核心配置
+CONFIG_LIB_FFMPEG=y
+CONFIG_LIB_FFMPEG_CONFIGURATION="--disable-sse --enable-avcodec --enable-avdevice --enable-avfilter --enable-avformat --enable-decoder='aac,aac_latm,flac,mp3,pcm_s16le,libopus,libfluoride_sbc,libfluoride_sbc_packed,silk' --enable-demuxer='aac,mp3,pcm_s16le,flac,mov,ogg,wav,silk' --enable-encoder='aac,pcm_s16le,libopus,libfluoride_sbc,silk' --enable-hardcoded-tables --enable-indev=nuttx --enable-ffmpeg --enable-ffprobe --enable-filter='adevsrc,adevsink,afade,amix,amovie_async,amoviesink_async,astats,astreamselect,aresample,volume' --enable-libopus --enable-muxer='opus,opusraw,pcm_s16le,silk,wav' --enable-outdev=bluelet,nuttx --enable-parser='aac,flac' --enable-protocol='cache,concat,file,http,https,rpmsg,tcp,unix' --enable-swresample --tmpdir='/stream'"
+```
+
+## 四、FFmpeg 扩展配置
+
+Media Framework 基于 FFmpeg 实现。开发者需根据项目需求配置 FFmpeg 组件(demuxer, muxer, decoder, encoder, filter 等)。
+
+### 1、基础配置字符串
+
+核心配置字符串参考如下(需写入 `.config` 或相关构建文件):
+
+```Makefile
+CONFIG_LIB_FFMPEG_CONFIGURATION="--disable-sse --enable-avcodec --enable-avdevice --enable-avfilter --enable-avformat --enable-decoder='aac,aac_latm,flac,mp3,pcm_s16le,libopus,libfluoride_sbc,libfluoride_sbc_packed,silk' --enable-demuxer='aac,mp3,pcm_s16le,flac,mov,ogg,wav,silk' --enable-encoder='aac,pcm_s16le,libopus,libfluoride_sbc,silk' --enable-hardcoded-tables --enable-indev=nuttx --enable-ffmpeg --enable-ffprobe --enable-filter='adevsrc,adevsink,afade,amix,amovie_async,amoviesink_async,astats,astreamselect,aresample,volume' --enable-libopus --enable-muxer='opus,opusraw,pcm_s16le,silk,wav' --enable-outdev=bluelet,nuttx --enable-parser='aac,flac' --enable-protocol='cache,concat,file,http,https,rpmsg,tcp,unix' --enable-swresample --tmpdir='/stream'"
+```
+
+**配置说明:**
+
+- `--enable-decoder`: 启用指定的解码器。
+- `--enable-filter`: 启用指定的过滤器。
+
+**故障排查:**
+
+如果遇到类似 `Failed to avformat_open_input ret -1330794744, Protocol not found.` 的错误,通常意味着缺少相应的协议或格式支持,请检查并修改上述配置字符串以扩展 FFmpeg 能力。
+
+### 2、依赖库配置
+
+部分 FFmpeg 解码器依赖第三方解码库,必须在 Kconfig 中显式启用这些依赖项:
+
+```Makefile
+# libhelix_aac 依赖
+CONFIG_LIB_HELIX_AAC=y
+CONFIG_LIB_HELIX_AAC_SBR=y
+
+# libfluoride_sbc,libfluoride_sbc_packed 依赖
+CONFIG_LIB_FLUORIDE_SBC=y
+CONFIG_LIB_FLUORIDE_SBC_DECODER=y
+CONFIG_LIB_FLUORIDE_SBC_ENCODER=y
+
+# libopus 依赖
+CONFIG_LIB_OPUS=y
+
+# silk 依赖
+CONFIG_LIB_SILK=y
+```
+
+## 五、调试工具使用指南
+
+本节介绍如何在 Sim 环境中运行并测试音频工具。
+
+### 1、环境启动
+
+1. **运行模拟器**
+
+ 进入 `nuttx` 目录并启动 GDB 进行调试运行:
+
+ ```Bash
+ cd nuttx
+ sudo gdb --args ./nuttx
+ ```
+
+2. **挂载 Host 文件系统**
+
+ 在 NuttX Shell 中(nsh),将 Host 主机的音频流目录挂载到 Sim 环境的 `/stream` 目录:
+
+ ```Bash
+ # 替换 <username> 为实际用户名
+ mount -t hostfs -o fs=/home/<username>/Streams/ /stream
+ ```
+
+### 2、nxplayer 使用说明
+
+`nxplayer` 用于测试音频播放功能。
+
+#### 场景 A:播放 PCM 原始数据
+
+**测试用例**:播放 `/stream/8000.pcm`(单声道,16bits,44100Hz)。
+
+```Bash
+nxplayer
+
+# 指定播放设备
+device pcm0p
+
+# 格式: playraw
+playraw /stream/8000.pcm 1 16 44100
+```
+
+#### 场景 B:播放 MP3 文件 (模拟 Offload)
+
+**Host 依赖**: 模拟 MP3 解码需要 Host 主机安装 `libmad` 库:
+
+```Bash
+sudo apt install libmad0-dev:i386
+```
+
+**测试用例**:
+
+```Bash
+nxplayer
+# 指定 Offload 播放设备
+device pcm1p
+# 播放文件
+play /stream/1.mp3
+
+# 停止播放
+stop
+```
+
+**功能限制**:
+
+- 支持带 ID3V2 header 的文件。
+- 支持不带任何 ID3 header 的文件。
+- **暂不支持** ID3V1 格式。
+
+### 3、nxrecorder 使用说明
+
+`nxrecorder` 用于测试音频录制功能。
+
+#### 场景 A:录制 PCM 原始数据
+
+**测试用例**:录制双声道、16bits、48000Hz 的音频到 `1.pcm`。
+
+```Bash
+nxrecorder
+# 指定录音设备
+device pcm0c
+# 格式: recordraw <文件路径> <声道数> <位宽> <采样率>
+recordraw /stream/1.pcm 2 16 48000
+
+# 停止录音
+stop
+```
+
+验证方法:检查 Host 主机对应目录下是否生成 `1.pcm` 且能正常播放。
+
+#### 场景 B:录制 MP3 文件 (模拟 Offload)
+
+**Host 依赖**: 模拟 MP3 编码需要 Host 主机安装 `libmp3lame` 库:
+
+```Bash
+sudo apt-get install libmp3lame-dev:i386
+```
+
+**测试用例**:
+
+```Bash
+nxrecorder
+
+# 指定 Offload 录音设备
+device pcm1c
+
+# 录制 MP3
+record /stream/100.mp3 2 16 44100
+```
+
+### 4、nxlooper 使用说明
+
+`nxlooper` 用于测试音频回环(Loopback),即录音数据直接送入播放通道。
+
+#### 场景 A:PCM 数据回环
+
+```Bash
+nxlooper
+# 指定播放设备
+device pcm0p
+# 指定录音设备
+device pcm0c
+# 启动回环: 2通道 16bit 48kHz
+loopback 2 16 48000
+
+# 停止回环
+stop
+```
+
+#### 场景 B:MP3 数据回环
+
+```Bash
+nxlooper
+device pcm1p
+device pcm1c
+# 最后一个参数 '8' 代表格式代码 (AUDIO_FMT_MP3)
+loopback 2 16 44100 8
+
+# 停止回环
+stop
+```
+
+**参数说明**: `loopback` 命令格式为:`loopback <声道数> <位宽> <采样率> [format]`
+
+其中 `[format]` 参数对应 `audio.h` 中的定义(默认为 PCM):
+
+```C
+/* 位于 ./nuttx/include/nuttx/audio/audio.h */
+#define AUDIO_FMT_UNDEF 0x00
+#define AUDIO_FMT_OTHER 0x01
+#define AUDIO_FMT_MPEG 0x02
+#define AUDIO_FMT_AC3 0x03
+#define AUDIO_FMT_WMA 0x04
+#define AUDIO_FMT_DTS 0x05
+#define AUDIO_FMT_PCM 0x06
+#define AUDIO_FMT_WAV 0x07
+#define AUDIO_FMT_MP3 0x08
+#define AUDIO_FMT_MIDI 0x09
+#define AUDIO_FMT_OGG_VORBIS 0x0a
+#define AUDIO_FMT_FLAC 0x0b
+```
+
+## 六、 mediatool 使用说明
+
+关于 `mediatool` 的详细命令与使用方法,请参考 [Mediatool 介绍](../../device_dev_guide/media/mediatool_zh-cn.md)。
diff --git a/zh-cn/quickstart/openvela_macos_quick_start.md b/zh-cn/quickstart/openvela_macos_quick_start.md
index 5a0765c6..b0bfc5b4 100644
--- a/zh-cn/quickstart/openvela_macos_quick_start.md
+++ b/zh-cn/quickstart/openvela_macos_quick_start.md
@@ -1,6 +1,6 @@
# 快速入门(macOS)
-\[ [English](../../en/quickstart/openvela_macos_quick_start.md) | 简体中文 \]
+[ [English](../../en/quickstart/openvela_macos_quick_start.md) | 简体中文 ]
本指南将指导您在 **macOS** 操作系统上完成 openvela 的开发环境准备、源代码下载、编译构建,并最终通过 Vela Emulator 运行编译产物。
@@ -164,56 +164,23 @@ sudo mv repo /usr/local/bin
repo sync -c -j8
```
-## 步骤三:编译源代码
-
-完成源代码下载后,请在 openvela 根目录下执行以下编译步骤。
-
-### 1. 设置环境变量
-
-执行以下命令,将预编译的工具链路径添加到当前终端会话的环境变量中。
-
-```Bash
-uname_s=$(uname -s | tr '[A-Z]' '[a-z]')
-uname_m=$(uname -m | sed 's/arm64/aarch64/g')
-export PATH=$PWD/prebuilts/build-tools/${uname_s}-${uname_m}/bin:$PATH
-export PATH=$PWD/prebuilts/cmake/${uname_s}-${uname_m}/bin:$PATH
-export PATH=$PWD/prebuilts/python/${uname_s}-${uname_m}/bin:$PATH
-export PATH=$PWD/prebuilts/gcc/${uname_s}-${uname_m}/aarch64-none-elf/bin:$PATH
-export PATH=$PWD/prebuilts/gcc/${uname_s}-${uname_m}/arm-none-eabi/bin:$PATH
-export PYTHONPATH=$PWD/prebuilts/tools/python/dist-packages/cxxfilt
-export PYTHONPATH=$PWD/prebuilts/tools/python/dist-packages/kconfiglib:$PYTHONPATH
-export PYTHONPATH=$PWD/prebuilts/tools/python/dist-packages/pyelftools:$PYTHONPATH
-```
-
-> **注意**: 此环境变量配置仅在当前终端窗口有效。若新开终端,需重新执行此脚本。
+
-### 2. 配置 CMake 项目 (Out-of-Tree)
+ > **操作提示**
+ >
+ > - 首次同步耗时较长,具体时间取决于您的网络状况和磁盘性能。
+ > - 若因网络问题中断,可重复执行 `repo sync` 进行增量同步。
-openvela 采用 **Out-of-tree build** 模式,该模式将编译产物与源代码分离,以保持源码目录的整洁。
-
-运行以下 `cmake` 命令来配置项目。此命令将:
-
-- 在 `cmake_out/goldfish-arm64-v8a-ap` 目录下生成构建系统文件。
-- 使用 Ninja 作为构建工具以提升编译速度。
-- 指定目标板的配置文件。
-
-```Bash
-cmake \
- -B cmake_out/goldfish-arm64-v8a-ap \
- -S $PWD/nuttx \
- -GNinja \
- -DBOARD_CONFIG=../vendor/openvela/boards/vela/configs/goldfish-arm64-v8a-ap \
- -DEXTRA_FLAGS="-Wno-cpp -Wno-deprecated-declarations"
-```
+## 步骤三:编译源代码
-
+完成源代码下载后,请在 openvela 根目录下执行以下编译步骤。
-### 3.(可选)自定义内核配置
+### 1. (可选)自定义内核配置
您可以通过 `menuconfig` 命令打开图形化界面,以调整 NuttX 内核与组件的配置。
```Bash
-cmake --build cmake_out/goldfish-arm64-v8a-ap -t menuconfig
+./build.sh vendor/openvela/boards/vela/configs/goldfish-arm64-v8a-ap/ --cmake menuconfig
```
> **操作技巧**
@@ -222,33 +189,31 @@ cmake --build cmake_out/goldfish-arm64-v8a-ap -t menuconfig
> - 按 `空格键` 可切换选中状态(启用/禁用/模块化)。
> - 配置完成后,选择 **Save** 保存并退出。
-
+
-### 4. 执行编译
+### 2. 执行编译
执行以下命令,构建整个项目。
```Bash
-cmake --build cmake_out/goldfish-arm64-v8a-ap
+./build.sh vendor/openvela/boards/vela/configs/goldfish-arm64-v8a-ap/ --cmake -j$(sysctl -n hw.ncpu)
```
-编译成功后,您将在 `cmake_out/goldfish-arm64-v8a-ap` 目录下找到 `nuttx` 等编译产物。
+编译成功后,您将在 `cmake_out/vela_goldfish-arm64-v8a-ap` 目录下找到 `nuttx` 等编译产物。
-
+
## 步骤四:运行模拟器
在 openvela 根目录下,执行以下脚本启动 `Vela Emulator` 并加载您的编译产物。
```Bash
-./emulator.sh cmake_out/goldfish-arm64-v8a-ap
+./emulator.sh cmake_out/vela_goldfish-arm64-v8a-ap/
```
模拟器启动后,您将看到 `goldfish-armv8a-ap>` 提示符,表明 openvela 已成功运行。
-
-
-
+
## 后续步骤
diff --git a/zh-cn/quickstart/openvela_ubuntu_quick_start.md b/zh-cn/quickstart/openvela_ubuntu_quick_start.md
index 3221ea63..f628526e 100644
--- a/zh-cn/quickstart/openvela_ubuntu_quick_start.md
+++ b/zh-cn/quickstart/openvela_ubuntu_quick_start.md
@@ -185,6 +185,7 @@ sudo mv repo /usr/local/bin
- 常见问题
- [快速入门常见问题](../faq/QuickStart_FAQ.md)
+ - [开发者常见问题解答](../faq/devoloper_tech_faq.md)
- 进一步阅读
diff --git a/zh-cn/release_notes/v5.4.md b/zh-cn/release_notes/v5.4.md
index 050cd94b..794697b1 100644
--- a/zh-cn/release_notes/v5.4.md
+++ b/zh-cn/release_notes/v5.4.md
@@ -1,91 +1,116 @@
# openvela trunk-5.4
-\[ [English](../../en/release_notes/v5.4.md) | 简体中文 \]
+[ [English](../../en/release_notes/v5.4.md) | 简体中文 ]
## 一、概览
+
openvela 一直致力于引入对更多芯片的支持、增强系统的实时通信能力,并大幅提升系统的健壮性、存储功能和可调试性。本次发布围绕以下核心主题进行了增强:
-- 硬件生态扩展:**新增对**[**英飞凌 AURIX™ TC4**](../quickstart/development_board/tc4d9_evb_guide.md)**、**[**旗芯微 MCU**](../quickstart/development_board/fc7300f8m_evb_guide.md)** 及 QEMU-R52 SIL 平台的支持**,拓宽了平台适用范围。
-- 系统内核加固:**实现了 SMP 与 PM 的协同工作,引入了基于 MPU 的线程栈保护和 RPC 框架重构**,系统更安全、更稳定。
-- 关键能力集成:**新增了 SocketCAN 和以太网协议栈;引入了高可靠性的 NVS2 存储方案**。
+
+- 硬件生态扩展:新增对[英飞凌 AURIX™ TC4](../quickstart/development_board/tc4d9_evb_guide.md)、[旗芯微 MCU](../quickstart/development_board/fc7300f8m_evb_guide.md) 及 QEMU-R52 SIL 平台的支持,拓宽了平台适用范围。
+- 系统内核加固:实现了 SMP 与 PM 的协同工作,引入了基于 MPU 的线程栈保护和 RPC 框架重构,系统更安全、更稳定。
+- 关键能力集成:新增了 SocketCAN 和以太网协议栈;引入了高可靠性的 NVS2 存储方案。
- 开发者体验优化:提供了低开销的 FDX 实时追踪工具和多个 LVGL 应用范例,降低了开发和调试门槛。
## 二、主要新增功能与增强
-### **1、平台支持 (Platform Support)**
-- 新增对英飞凌 AURIX™ TriCore™ TC4 芯片的支持
-- 新增对旗芯微(Flagchip)MCU 的支持
+
+### 1、平台支持 (Platform Support)
+
+- 新增对英飞凌 AURIX™ TriCore™ TC4 芯片的支持
+- 新增对旗芯微(Flagchip)MCU 的支持
- QEMU 平台下新增 Cortex-R52 核支持,支持 Vector SIL 平台
- 解决 nuttx boards 的编译问题,更好地支持原生 nuttx boards 平台
-### **2、内核与安全 (Kernel & Security)**
-- 电源管理 (PM) & 对称多处理 (SMP):
- - 实现了 SMP 和 PM 功能的同时开启,并在 `qemu-armv8a` 平台上完成功能验证,覆盖了 PM 基础功能和 `ostest` 的基础用例。
-- RPC
- - 框架重构: 重构了 RPC 框架,使其具备更强的通用性,能够为其他 VirtIO 设备提供跨核通信能力。
- - 对 Rptun/Rpmsg 进行了功能增强,引入多优先级机制以满足汽车场景的实时性需求,并修复了功能安全相关的代码扫描问题。
+
+### 2、内核与安全 (Kernel & Security)
+
+- 电源管理 (PM) & 对称多处理 (SMP):实现了 SMP 和 PM 功能的同时开启,并在 `qemu-armv8a` 平台上完成功能验证,覆盖了 PM 基础功能和 `ostest` 的基础用例。
+
+- RPC
+
+ - 框架重构: 重构了 RPC 框架,使其具备更强的通用性,能够为其他 VirtIO 设备提供跨核通信能力。
+ - 对 Rptun/Rpmsg 进行了功能增强,引入多优先级机制以满足汽车场景的实时性需求,并修复了功能安全相关的代码扫描问题。
+
- 内存管理增强: 实现了 Task 独立的 Heap 空间,`libdbus` 等库已支持。
+
- Binder 消息机制: 将 Binder 的 server/client fd 集成到 `libuv` 事件循环中,通过回调进行消息处理,实现了与其他模块的统一管理。
+
- 新增 Rpmsg Battery & Gauge 驱动。
-- Thread isolation and protection
-  The kernel now supports thread stack protection based on the hardware Memory Protection Unit (MPU). When a thread overflows its stack, the mechanism raises a hardware exception, preventing it from corrupting other threads' stacks or critical data.
-- Code quality:
-  - Fixed a number of static-analysis findings, improving the overall quality of the codebase.
-### **3. Communication**
-- Added SocketCAN and Ethernet support
-  Introduced a CAN communication framework that follows the standard socket API. Users can now send, receive, and filter CAN frames with the standard `socket()`, `bind()`, `send()`, and `recv()` interfaces.
-- WebSocket enhancements
-  Added default certificate support to the WebSocket feature, simplifying the establishment of secure connections.
-### **4. Storage**
-- Added NVS2 (Non-Volatile Storage v2)
-  Integrated a brand-new, highly reliable NVS2 storage solution, deeply optimized for embedded flash media, with support for wear leveling, power-loss safety, and data encryption.
-### **5. Debugging & Diagnostics**
-- Added FDX-based real-time tracing
-  Implemented a low-intrusion real-time tracing tool based on the FDX (Fast Debug eXchange) protocol. It captures and exports high-precision system events such as task switches, interrupt responses, and semaphore operations, all with minimal system overhead.
-## **6. Application Examples**
+
+- Thread isolation and protection: the kernel now supports thread stack protection based on the hardware Memory Protection Unit (MPU). When a thread overflows its stack, the mechanism raises a hardware exception, preventing it from corrupting other threads' stacks or critical data.
+
+- Code quality: fixed a number of static-analysis findings, improving the overall quality of the codebase.
+
+### 3. Communication
+
+- Added SocketCAN and Ethernet support: introduced a CAN communication framework that follows the standard socket API. Users can now send, receive, and filter CAN frames with the standard `socket()`, `bind()`, `send()`, and `recv()` interfaces (see the sketch after this list).
+
+- WebSocket enhancements: added default certificate support to the WebSocket feature, simplifying the establishment of secure connections.
+
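+As an illustration of the socket-style workflow, the hypothetical `nsh` session below assumes the `candump`/`cansend` ports from NuttX `apps/canutils` are enabled in your configuration and that a `can0` interface has been registered:
+
+```bash
+# Hypothetical session; availability of these commands depends on your config
+nsh> ifup can0                 # bring the CAN interface up
+nsh> cansend can0 123#DEADBEEF # send a frame with ID 0x123 and 4 data bytes
+nsh> candump can0              # print received frames until interrupted
+```
+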
+### 4. Storage
+
+Added NVS2 (Non-Volatile Storage v2): integrated a brand-new, highly reliable NVS2 storage solution. It is deeply optimized for embedded flash media and supports wear leveling, power-loss safety, and data encryption.
+
+### 5. Debugging & Diagnostics
+
+Added FDX-based real-time tracing: implemented a low-intrusion real-time tracing tool based on the FDX (Fast Debug eXchange) protocol. It captures and exports high-precision system events such as task switches, interrupt responses, and semaphore operations, all with minimal system overhead.
+
+### 6. Application Examples
+
+- Added a [Breakout game](./../../../../../packages_demos/blob/trunk-5.4/breakout/Readme.md): a touch-screen Breakout game built on openvela and LVGL, implementing the core game logic, with image assets and hit sound effects.
+
+- Added a [Virtual Pet](./../../../../../packages_demos/blob/trunk-5.4/pet/README.md) app: an interactive demo based on the LVGL graphics library that simulates raising a digital pet. Users can feed, water, exercise, and rest the pet to improve its mood and level.
-- [Snake game](./../../../../../packages_demos/blob/trunk-5.4/snake_game/Readme.md): a self-playing Snake game implemented with the LVGL graphics library
-- [Wooden Fish](./../../../../../packages_demos/blob/trunk-5.4/wooden_fish/README_zh-cn.md): built on the openvela `nxaudio` service and the LVGL UI framework, implementing a complete interaction pipeline with responsive layout and safe resource management; a showcase app with smooth animation and a polished user experience
-## **7. Development Tools**
-- VS Code plugin support on Ubuntu
-  The openvela VS Code plugin can now be installed on Ubuntu, covering the full workflow from project creation, build, and system debugging to application development, significantly improving development efficiency. ([openvela VS Code Plugin Guide](../quickstart/vscode_plugin_usage.md))
-## **8. Emulator Runtime Parameter Extensions**
+
+- [Snake game](./../../../../../packages_demos/blob/trunk-5.4/snake_game/Readme.md): a self-playing Snake game implemented with the LVGL graphics library.
+
+- [Wooden Fish](./../../../../../packages_demos/blob/trunk-5.4/wooden_fish/README_zh-cn.md): built on the openvela `nxaudio` service and the LVGL UI framework, implementing a complete interaction pipeline with responsive layout and safe resource management; a showcase app with smooth animation and a polished user experience.
+
+### 7. Development Tools
+
+VS Code plugin support on Ubuntu: the openvela VS Code plugin can now be installed on Ubuntu, covering the full workflow from project creation, build, and system debugging to application development, significantly improving development efficiency. ([openvela VS Code Plugin Guide](../quickstart/vscode_plugin_usage.md))
+
+### 8. Emulator Runtime Parameter Extensions
+
+- emulator.sh now supports the `-keep` option
-  - When emulator.sh is configured for multiple instances, the `-keep` option attaches to the instance with the given name (creating it if it does not exist) and preserves the instance context after the instance exits
-  ```Shell
-  # Usage
-  cp cmake_out/vela_goldfish-arm64-v8a-ap/nuttx* cmake_out/vela_goldfish-arm64-v8a-ap/vela_* cmake_out/vela_goldfish-arm64-v8a-ap/advancedFeatures.ini nuttx/
-
-  ./emulator.sh vela -keep -no-window
-
-  # Example test: create a test file under /data and write some content
-  nsh> echo test > /data/test
-  nsh> echo "openvela qemu keep test" >> /data/test
-  nsh> quit
-
-  # Restart the emulator; the previously written content is still preserved
-  ./emulator.sh vela -keep -no-window
-  nsh> cat /data/test
-  test
-  openvela qemu keep test
-  ```
-
-- emulator.sh now supports the Hostfs feature, with 9pfs supported by default. The result looks like this:
- ```Shell
- goldfish-armv8a-ap> df -h
- Filesystem Size Used Available Mounted on
- binfs 0B 0B 0B /bin
- fatfs 255M 78M 177M /data
- romfs 1152B 1152B 0B /etc
- hostfs 0B 0B 0B /host
- procfs 0B 0B 0B /proc
- v9fs 878G 626G 252G /share
- romfs 512B 512B 0B /system
- tmpfs 6K 1K 5K /tmp
- ```
-
-  Usage:
- ```Shell
-  # Usage
- cp cmake_out/vela_goldfish-arm64-v8a-ap/nuttx* cmake_out/vela_goldfish-arm64-v8a-ap/vela_* cmake_out/vela_goldfish-arm64-v8a-ap/advancedFeatures.ini nuttx/
-
- ./emulator.sh vela
- ```
+
+  When emulator.sh is configured for multiple instances, the `-keep` option attaches to the instance with the given name (creating it if it does not exist) and preserves the instance context after the instance exits.
+
+ ```bash
+  # Usage
+ cp cmake_out/vela_goldfish-arm64-v8a-ap/nuttx* cmake_out/vela_goldfish-arm64-v8a-ap/vela_* cmake_out/vela_goldfish-arm64-v8a-ap/advancedFeatures.ini nuttx/
+
+ ./emulator.sh vela -keep -no-window
+
+  # Example test: create a test file under /data and write some content
+ nsh> echo test > /data/test
+  nsh> echo "openvela qemu keep test" >> /data/test
+ nsh> quit
+
+  # Restart the emulator; the previously written content is still preserved
+ ./emulator.sh vela -keep -no-window
+ nsh> cat /data/test
+ test
+ openvela qemu keep test
+ ```
+
+- emulator.sh now supports the Hostfs feature, with 9pfs supported by default. The result looks like this:
+
+ ```bash
+ goldfish-armv8a-ap> df -h
+ Filesystem Size Used Available Mounted on
+ binfs 0B 0B 0B /bin
+ fatfs 255M 78M 177M /data
+ romfs 1152B 1152B 0B /etc
+ hostfs 0B 0B 0B /host
+ procfs 0B 0B 0B /proc
+ v9fs 878G 626G 252G /share
+ romfs 512B 512B 0B /system
+ tmpfs 6K 1K 5K /tmp
+ ```
+
+  Usage:
+
+ ```bash
+  # Usage
+ cp cmake_out/vela_goldfish-arm64-v8a-ap/nuttx* cmake_out/vela_goldfish-arm64-v8a-ap/vela_* cmake_out/vela_goldfish-arm64-v8a-ap/advancedFeatures.ini nuttx/
+
+ ./emulator.sh vela
+ ```
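+
+  With the defaults above, the host directory is mounted at `/share` via 9pfs. The hypothetical session below illustrates exchanging a file with the host (the file name is illustrative):
+
+  ```bash
+  goldfish-armv8a-ap> echo "hello from vela" > /share/vela_note.txt  # becomes visible on the host
+  goldfish-armv8a-ap> ls /share                                      # list files exported by the host
+  ```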