Digital IC Functionality Duplication Using Neural Networks

Reproduce / approximate the functional behavior of small digital sequential circuits (VHDL / schematic designs) using neural networks trained on input–output traces.


Table of contents

  • Overview
  • Technical details
  • Publication
  • Quickstart
  • Dataset / Data preparation
  • Project structure
  • How to run (examples)
  • Training tips & wandb
  • Evaluation & expected outputs
  • Suggested improvements
  • Contributing
  • License
  • Contact / Acknowledgements
Overview

Digital circuits (counters, shift registers, LFSRs, etc.) can be represented as input → state → output mappings. This project collects traces from VHDL/Quartus/ModelSim simulations and uses them to train neural-network models that approximate the circuit's functional behaviour. The goal is functionality duplication: given the same inputs (and initial states), the trained NN should produce the same outputs as the original digital design.
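As a concrete illustration, a 2-bit up-counter with an enable input fits this input → state → output view. A hypothetical trace (the actual file layout is defined by the dataset repository) pairs each clock cycle with the resulting count bits:

cycle  enable  q1 q0
0      1       0  0
1      1       0  1
2      1       1  0
3      1       1  1
4      0       1  1    # counting pauses while enable = 0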


Technical details

  • Model type: Long Short-Term Memory (LSTM) networks were employed, as they are well-suited for capturing sequential dependencies in digital circuits.
  • Input format: Time-aligned traces of input and internal states generated from VHDL/Quartus simulations.
  • Output format: Predicted output traces that mirror the circuit’s original behaviour.
  • Training objective: Minimize per-bit sequence error across multiple time steps, enabling the NN to reproduce both combinational and sequential logic accurately.
  • Tools used: Python, TensorFlow/Keras, Quartus, ModelSim.
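As a minimal sketch of this setup (layer sizes, shapes, and variable names are illustrative, not the repo's actual architecture), a per-time-step LSTM with sigmoid outputs and binary cross-entropy implements the per-bit sequence objective in Keras:

import numpy as np
import tensorflow as tf

# Illustrative shapes: 16-step traces, 3 input bits and 2 output bits per step.
TIME_STEPS, N_IN, N_OUT = 16, 3, 2

model = tf.keras.Sequential([
    tf.keras.Input(shape=(TIME_STEPS, N_IN)),
    tf.keras.layers.LSTM(64, return_sequences=True),      # one hidden state per time step
    tf.keras.layers.Dense(N_OUT, activation="sigmoid"),   # one probability per output bit
])

# Per-bit binary cross-entropy matches the "minimize per-bit sequence error" objective.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["binary_accuracy"])

# x_train / y_train would come from the simulation traces (values in {0, 1});
# random placeholders are used here only to make the sketch runnable.
x_train = np.random.randint(0, 2, size=(128, TIME_STEPS, N_IN)).astype("float32")
y_train = np.random.randint(0, 2, size=(128, TIME_STEPS, N_OUT)).astype("float32")
model.fit(x_train, y_train, epochs=5, batch_size=32)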

Publication

This work has been published in the 2023 IEEE 17th International Conference on Industrial and Information Systems (ICIIS):

Digital Integrated Circuit Functionality Duplication Using Neural Networks


Quickstart

  1. Clone the repo:
git clone https://github.com/Anjanamb/Digital-IC-Functionality-Duplication-Using-NN.git
cd Digital-IC-Functionality-Duplication-Using-NN
  2. Create a Python virtual environment and activate it:
python -m venv venv
# on Linux/macOS
source venv/bin/activate
# on Windows (PowerShell)
.\venv\Scripts\Activate.ps1
  3. Install dependencies (or create a requirements.txt from these lines):
pip install numpy mysql-connector-python tensorflow Flask wandb keras-tuner
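A hypothetical requirements.txt derived from that install line (versions left unpinned here; pin them once a known-good combination is confirmed, per the Suggested improvements below):

numpy
mysql-connector-python
tensorflow
Flask
wandb
keras-tuner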

Dataset / Data preparation

This project uses datasets generated from VHDL designs and ModelSim/Quartus simulations. The dataset-preparation repository referenced by this project contains the scripts and designs used to produce the training traces (VHDL testbenches, schematic files, and dataset .txt traces). Refer to the Sequential Logic Datasets with Designs repository for the exact dataset-generation pipeline and file formats.

Typical dataset items:

  • VHDL testbenches (.vhd) used to create stimuli.
  • Schematic design files (.bdf) for the circuit layout.
  • Plain text dataset files (.txt) containing aligned input / output / state traces suitable for feeding into training pipelines.

Where to put the data locally
Create a directory data/ at repo root and place the generated .txt traces there (or update the training script --data argument to point to your dataset folder).
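The exact .txt layout is defined by the dataset repository. As a sketch only — assuming one line per clock cycle of whitespace-separated 0/1 values, with the first n_in columns as inputs and the rest as outputs — a loader could look like:

import numpy as np

def load_trace(path, n_in):
    # Hypothetical layout: one line per clock cycle, whitespace-separated 0/1
    # values; first n_in columns are inputs, remaining columns are outputs.
    # Adjust to match the format documented in the dataset repository.
    rows = np.loadtxt(path, dtype="float32")
    return rows[:, :n_in], rows[:, n_in:]

x, y = load_trace("data/my_trace.txt", n_in=3)   # path and n_in are examples
print(x.shape, y.shape)                          # (cycles, n_in), (cycles, n_out)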


Project structure (top-level)

Digital-IC-Functionality-Duplication-Using-NN/
├─ Logic_Function/          # Logic designs / helper scripts (VHDL, schematics, generators)
├─ NN for testing/          # Neural-network training / testing scripts and model code
├─ .gitignore
└─ README.md                # Project documentation (this file)

Note: It is recommended to rename NN for testing to nn_for_testing (no spaces) to make CLI paths and imports easier.


How to run (examples)

Train (example)

# Example - adapt filenames to match this repo's scripts
python "NN for testing/train.py"   --data ../data/my_trace.txt   --model-dir models/   --epochs 50   --batch-size 64

Evaluate

python "NN for testing/evaluate.py"   --model models/last   --data ../data/validation_trace.txt

If you rename the folder (recommended):

python nn_for_testing/train.py --data data/my_trace.txt

Training tips & wandb

This repository lists wandb as an optional dependency for experiment tracking; to use it:

  1. Install & login:
pip install wandb
wandb login
  2. In your training script, initialize:
import wandb
wandb.init(project="digital-ic-duplication", config=your_config_dict)
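A minimal sketch of per-epoch logging with a custom Keras callback (the project name matches the snippet above; the callback simply forwards whatever metrics model.fit reports):

import tensorflow as tf
import wandb

class WandbEpochLogger(tf.keras.callbacks.Callback):
    # Forward Keras epoch metrics (loss, accuracy, ...) to wandb.
    def on_epoch_end(self, epoch, logs=None):
        wandb.log(dict(logs or {}), step=epoch)

wandb.init(project="digital-ic-duplication", config={"epochs": 50, "batch_size": 64})
# model.fit(x_train, y_train, epochs=50, batch_size=64, callbacks=[WandbEpochLogger()])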

Evaluation & expected outputs

  • Expected model outputs: the NN should produce digital output sequences that match the ground-truth trace within an acceptable error, treating each output bit either as a binary classification or as a regression followed by thresholding, depending on the setup.
  • Evaluation scripts should compute per-bit accuracy, sequence-level accuracy, and optionally confusion matrices and timing/skew analyses; a minimal sketch of the two accuracy metrics follows this list.
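A minimal sketch of the two accuracy metrics (array shapes are assumptions; the 0.5 threshold corresponds to the regression + thresholding setup):

import numpy as np

def bit_metrics(y_true, y_pred, threshold=0.5):
    # y_true: ground-truth bits, shape (sequences, time_steps, output_bits)
    # y_pred: model probabilities in the same shape, thresholded to hard bits.
    bits = (y_pred >= threshold).astype(y_true.dtype)
    per_bit = (bits == y_true).mean()                   # fraction of matching bits
    per_seq = (bits == y_true).all(axis=(1, 2)).mean()  # every bit at every step must match
    return per_bit, per_seq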

Suggested improvements

  1. Add requirements.txt with pinned versions.
  2. Rename NN for testing to nn_for_testing to avoid spaces.
  3. Add explicit entrypoints (e.g., train.py, evaluate.py) and show example CLI flags.
  4. Add a data/README.md explaining the expected trace/text format and a small sample .txt.
  5. Add unit / smoke tests for reproducibility.
  6. Document model architecture choices.
  7. Add a usage example notebook.

Contributing

  1. Open an issue to discuss major changes.
  2. Create a branch, add tests & docs for your feature, and submit a PR.
  3. Keep commits small and clear; update README examples if you change CLIs.

License

MIT License – feel free to use, modify, and share. Please credit the repository if used in research or academic work.


Contact / Acknowledgements

Author: Anjana Bandara (GitHub: Anjanamb)
Contributors: Ayesh-Rajakaruna, sahannt98
