Reproduce / approximate the functional behavior of small digital sequential circuits (VHDL / schematic designs) using neural networks trained on input–output traces.
- Overview
- Technical details
- Publication
- Quickstart
- Dataset / Data preparation
- How to run (examples)
- Training tips & wandb
- Evaluation & expected outputs
- Contributing
- License
- Contact / Acknowledgements
Digital circuits (counters, shift registers, LFSRs, etc.) can be represented as input → state → output mappings. This project collects traces from VHDL/Quartus/ModelSim simulations and uses them to train neural-network models that approximate the circuit's functional behaviour. The goal is functionality duplication: given the same inputs (and initial states), the trained NN should produce the same outputs as the original digital design.
- Model type: Long Short-Term Memory (LSTM) networks were employed, as they are well-suited for capturing sequential dependencies in digital circuits.
- Input format: Time-aligned traces of input and internal states generated from VHDL/Quartus simulations.
- Output format: Predicted output traces that mirror the circuit’s original behaviour.
- Training objective: Minimize per-bit sequence error across multiple time steps, enabling the NN to reproduce both combinational and sequential logic accurately.
- Tools used: Python, TensorFlow/Keras, Quartus, ModelSim.
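As an illustration of the model described above, a minimal sequence-to-sequence LSTM in Keras might look like the following. The layer width, time-step count, and bit-widths here are illustrative assumptions, not the published architecture:

```python
from tensorflow import keras

TIMESTEPS, N_INPUTS, N_OUTPUTS = 32, 4, 4  # illustrative trace dimensions

# Sequence-to-sequence LSTM: one sigmoid unit per output bit per time step,
# so the per-bit error can be minimized with binary cross-entropy.
model = keras.Sequential([
    keras.layers.Input(shape=(TIMESTEPS, N_INPUTS)),
    keras.layers.LSTM(64, return_sequences=True),
    keras.layers.TimeDistributed(
        keras.layers.Dense(N_OUTPUTS, activation="sigmoid")
    ),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["binary_accuracy"])
```

Returning the full sequence (`return_sequences=True`) lets the network emit an output vector at every clock step, matching the time-aligned trace format.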
This work has been published in the 2023 IEEE 17th International Conference on Industrial and Information Systems (ICIIS):
Digital Integrated Circuit Functionality Duplication Using Neural Networks
- Clone the repo:

```bash
git clone https://github.com/Anjanamb/Digital-IC-Functionality-Duplication-Using-NN.git
cd Digital-IC-Functionality-Duplication-Using-NN
```

- Create a Python virtual environment and activate it:

```bash
python -m venv venv
# on Linux/macOS
source venv/bin/activate
# on Windows (PowerShell)
.\venv\Scripts\Activate.ps1
```

- Install dependencies (or create a `requirements.txt` from these lines):

```bash
pip install numpy mysql-connector-python tensorflow Flask wandb keras-tuner
```

This project uses datasets generated from VHDL designs and ModelSim/Quartus simulations. The dataset-preparation repository referenced in this project contains the scripts and designs used to produce the training traces (VHDL testbenches, schematic files, and dataset `.txt` traces). Please refer to that Sequential Logic Datasets with Designs repository for the exact dataset-generation pipeline and file formats.
Typical dataset items:

- VHDL testbenches (`.vhd`) used to create stimuli.
- Schematic design files (`.bdf`) for the circuit layout.
- Plain text dataset files (`.txt`) containing aligned input / output / state traces suitable for feeding into training pipelines.
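The exact trace layout is defined by the dataset-generation repository; assuming each line of a `.txt` trace holds space-separated input bits followed by output bits, a loader plus a windowing helper for sequence training might be sketched as:

```python
import numpy as np

def load_trace(path, n_inputs):
    """Split each line of aligned bits into input and output columns
    (assumed layout: input bits first, then output bits)."""
    rows = np.loadtxt(path, dtype=np.float32, ndmin=2)
    return rows[:, :n_inputs], rows[:, n_inputs:]

def make_windows(x, y, timesteps):
    """Slice one long trace into overlapping (timesteps, bits) windows
    suitable for sequence models."""
    n = len(x) - timesteps + 1
    X = np.stack([x[i:i + timesteps] for i in range(n)])
    Y = np.stack([y[i:i + timesteps] for i in range(n)])
    return X, Y
```

Both function names and the bit-column layout are assumptions for illustration; check the dataset repository's file format before reusing this.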
**Where to put the data locally**

Create a `data/` directory at the repo root and place the generated `.txt` traces there (or update the training script's `--data` argument to point to your dataset folder).
Digital-IC-Functionality-Duplication-Using-NN/
├─ Logic_Function/ # Logic designs / helper scripts (VHDL, schematics, generators)
├─ NN for testing/ # Neural-network training / testing scripts and model code
├─ .gitignore
└─ README.md # <- you are replacing/updating this file
Note: It is recommended to rename `NN for testing` → `nn_for_testing` (no spaces) to make CLI paths and imports easier.
Train (example)

```bash
# Example - adapt filenames to match this repo's scripts
python "NN for testing/train.py" --data ../data/my_trace.txt --model-dir models/ --epochs 50 --batch-size 64
```

Evaluate

```bash
python "NN for testing/evaluate.py" --model models/last --data ../data/validation_trace.txt
```

If you rename the folder (recommended):

```bash
python nn_for_testing/train.py --data data/my_trace.txt
```

This repository lists wandb as an optional dependency for experiment tracking; to use it:
- Install & login:

```bash
pip install wandb
wandb login
```

- In your training script, initialize:

```python
import wandb
wandb.init(project="digital-ic-duplication", config=your_config_dict)
```

- Expected model outputs: the NN should produce digital-output sequences matching the ground-truth trace, within an acceptable error (binary classification per output bit, or regression plus thresholding, depending on setup).
- Evaluation scripts should compute per-bit accuracy, sequence-level accuracy, and optionally confusion matrices and timing/skew analyses.
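As a sketch of the two core metrics above (function names and the 0.5 threshold are assumptions; adapt them to this repo's evaluation script), assuming predictions of shape `(sequences, timesteps, bits)`:

```python
import numpy as np

def per_bit_accuracy(pred, truth, threshold=0.5):
    """Fraction of individual output bits predicted correctly
    after thresholding the network's sigmoid outputs."""
    return ((pred >= threshold).astype(int) == truth).mean()

def sequence_accuracy(pred, truth, threshold=0.5):
    """Fraction of sequences in which every bit at every time step
    matches the ground-truth trace."""
    bits = (pred >= threshold).astype(int)
    return (bits == truth).reshape(len(truth), -1).all(axis=1).mean()
```

Sequence-level accuracy is the stricter metric: a single flipped bit anywhere in a sequence counts the whole sequence as wrong, which is the right notion of correctness for functionality duplication.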
- Add `requirements.txt` with pinned versions.
- Rename `NN for testing` to `nn_for_testing` to avoid spaces.
- Add explicit entrypoints (e.g., `train.py`, `evaluate.py`) and show example CLI flags.
- Add a `data/README.md` explaining the expected trace/text format and a small sample `.txt`.
- Add unit / smoke tests for reproducibility.
- Document model architecture choices.
- Add a usage example notebook.
- Open an issue to discuss major changes.
- Create a branch, add tests & docs for your feature, and submit a PR.
- Keep commits small and clear; update README examples if you change CLIs.
MIT License – feel free to use, modify, and share. Please credit the repository if used in research or academic work.
Author: Anjana Bandara (GitHub: Anjanamb)
Contributors: Ayesh-Rajakaruna, sahannt98