| library_name | license_name |
|---|---|
| nnll | MPL-2.0 + Commons Clause 1.0 |
A CLI tool and Python library for image processing and feature-extraction analysis that determines an image's origin: synthetic/reconstructed or genuine.
We use a modern VAE to extract features from images generated with Diffusers, ComfyUI, Darkshapes tools (Zodiac/Divisor/singularity), and Google Nano-Banana. A modular pipeline combines feature-extraction techniques, such as spectral residual analysis, with the VAE feature vectors to train a Gradient Boosting Decision Tree model and its associated PCA transformer, distinguishing images of synthetic origin from those of human origin. Preliminary results demonstrate high accuracy in detecting synthetic images.
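The spectral residual analysis mentioned above can be sketched as follows. This is an illustration only, not negate's implementation; it assumes the classic spectral residual saliency formulation (Hou & Zhang, 2007), where the residual is the log-amplitude spectrum minus its local average:

```python
import numpy as np

def spectral_residual(img: np.ndarray) -> np.ndarray:
    """Spectral residual saliency map of a 2-D grayscale image (illustrative sketch)."""
    f = np.fft.fft2(img)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    # Residual = log-amplitude minus its local average (3x3 box filter).
    k = 3
    pad = np.pad(log_amp, k // 2, mode="edge")
    avg = sum(
        pad[i:i + log_amp.shape[0], j:j + log_amp.shape[1]]
        for i in range(k)
        for j in range(k)
    ) / (k * k)
    residual = log_amp - avg
    # Back to the spatial domain; the squared magnitude is the saliency map.
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return saliency

img = np.random.default_rng(0).random((64, 64))  # stand-in for a real image
print(spectral_residual(img).shape)  # (64, 64)
```

Flattening such a saliency map (or summary statistics of it) yields one family of feature vectors that can be concatenated with the VAE features.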
This repo provides a simple command-line interface for invoking the tool, along with examples of integrating the library's predictions and metrics into other projects. We follow continuous-integration best practices for deploying and maintaining the software, keeping the code ready for production environments.
Future work includes developing an automated testing framework and evaluation suite, expanding the research to a wider diversity of synthetic and human-generated datasets, benchmarking against comparable methods, and exploring additional model architectures.
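The classifier stage described above (PCA followed by a Gradient Boosting Decision Tree) can be sketched with scikit-learn. This is purely illustrative, using random stand-in features; none of the names below come from negate's API:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Stand-ins for extracted feature vectors (hypothetical shapes):
# 200 images, 64 features each (e.g. VAE latents + spectral-residual stats).
X = rng.normal(size=(200, 64))
y = rng.integers(0, 2, size=200)  # 1 = synthetic, 0 = genuine

# PCA transformer + GBDT classifier, trained and saved as one pipeline.
clf = make_pipeline(PCA(n_components=16), GradientBoostingClassifier(random_state=0))
clf.fit(X, y)
pred = clf.predict(X[:1])  # label for one image's feature vector
print(pred)
```

In practice the fitted PCA transformer must be persisted alongside the tree model, since inference-time features have to pass through the same projection used in training.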
- A dataset of images made by human artists, with width and height dimensions larger than 512 pixels. This will serve as ground truth and should be placed in the `/assets` folder.
- A HuggingFace account that will be used to download models and synthetic datasets. Create an API key at their website, then sign in with `hf auth login`.
- It is recommended to run `negate` on a GPU to ensure efficient processing and reduced training time.
> **Note**
> Our training results and visualizations were created with data provided consensually by generous artists at https://purelyhuman.xyz. We don't have and won't seek permission to share that dataset here.
```
git clone https://github.com/darkshapes/negate.git
cd negate
uv sync
```

macOS/Linux:

```
source .venv/bin/activate
```

Windows:

```
Set-ExecutionPolicy Bypass -Scope Process -Force; .venv\Scripts\Activate.ps1
```

Basic Syntax:
```
usage: negate [-h] {train,check} ...

Negate CLI

positional arguments:
  {train,check}
    train     Train model on the dataset in the provided path or `assets/`. The resulting model will be saved to disk.
    check     Check whether an image at the provided path is synthetic or original.

options:
  -h, --help  show this help message and exit
```

Training syntax:
```
usage: negate train [-h]
                    [-m {exdysa/dc-ae-f32c32-sana-1.1-diffusers,zai-org/GLM-Image,black-forest-labs/FLUX.2-dev,black-forest-labs/FLUX.2-klein-4B,Tongyi-MAI/Z-Image,Freepik/F-Lite-Texture,exdysa/mitsua-vae-SAFETENSORS}]
                    [path]

positional arguments:
  path        Dataset path

options:
  -h, --help  show this help message and exit
  -m, --model {exdysa/dc-ae-f32c32-sana-1.1-diffusers,zai-org/GLM-Image,black-forest-labs/FLUX.2-dev,black-forest-labs/FLUX.2-klein-4B,Tongyi-MAI/Z-Image,Freepik/F-Lite-Texture,exdysa/mitsua-vae-SAFETENSORS}
              Change the VAE model used for training to a supported HuggingFace repo. Accuracy and memory use decrease from left to right.
```

Check the origin of an image:
```
usage: negate check [-h] [-s | -g] path

positional arguments:
  path             Image or folder path

options:
  -h, --help       show this help message and exit
  -s, --synthetic  Mark image as synthetic (label = 1) for evaluation.
  -g, --genuine    Mark image as genuine (label = 0) for evaluation.
```

Cite this work:

```bibtex
@misc{darkshapes2026,
  author={darkshapes},
  title={negate},
  year={2026},
  howpublished={\url{https://github.com/darkshapes/negate}},
}
```


