[TOC]
This repository contains the code implementation for the paper "EQNAS: Evolutionary Quantum Neural Architecture Search for Image Classification" (Paper Link).

EQNAS is a neural architecture search method, based on a quantum evolutionary algorithm, for quantum neural networks built from quantum circuits. By searching for the optimal network structure, EQNAS improves model accuracy, reduces quantum circuit complexity, and lowers the cost of constructing the actual quantum circuits used to solve image classification tasks.
This model designs and implements a quantum neural network for image classification and performs neural architecture search on it with a quantum evolutionary algorithm. The network consists of two modules:

- Quantum encoding circuit (Encoder): encodes the images of the different datasets using 01 encoding and RX encoding, respectively.
- Ansatz training circuit: a two-layer quantum neural network ansatz built from two-qubit quantum gates (XX, YY, and ZZ gates) and the quantum I gate.

The output of the quantum neural network is measured as the expectation of a Hamiltonian built from the Pauli-Z operator, and a quantum evolutionary algorithm searches over the architecture of this network, improving model accuracy while reducing circuit complexity. A minimal sketch of the two circuit modules follows.
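As a hedged illustration of the two modules above, the following MindQuantum sketch assembles an RX encoder and one XX/YY/ZZ ansatz layer. The qubit count, parameter names, and ring-shaped gate layout are assumptions for illustration, not the repository's exact circuit.

```python
# Minimal sketch of the Encoder + Ansatz structure described above.
# Assumptions: 4 qubits, ring connectivity, illustrative parameter names.
from mindquantum.core.circuit import Circuit
from mindquantum.core.gates import RX, XX, YY, ZZ
from mindquantum.core.operators import Hamiltonian, QubitOperator

n_qubits = 4

# Encoder: one RX rotation per qubit; its parameters carry the image data,
# so they are excluded from gradient computation.
encoder = Circuit()
for q in range(n_qubits):
    encoder += RX(f'alpha{q}').on(q)
encoder = encoder.no_grad()

# Ansatz: a layer of trainable two-qubit XX/YY/ZZ gates on neighbouring qubits.
ansatz = Circuit()
for q in range(n_qubits):
    ansatz += XX(f'xx{q}').on([q, (q + 1) % n_qubits])
    ansatz += YY(f'yy{q}').on([q, (q + 1) % n_qubits])
    ansatz += ZZ(f'zz{q}').on([q, (q + 1) % n_qubits])

circuit = encoder + ansatz

# Read-out: expectation of the Pauli-Z operator on qubit 0.
ham = Hamiltonian(QubitOperator('Z0'))
```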
## Dataset

- MNIST

  The MNIST dataset contains 70,000 images in total: 60,000 for training and 10,000 for testing. Each image is a 28 x 28 handwritten digit from 0 to 9, with a black background (pixel value 0) and white strokes (floating-point values between 0 and 1; the closer to 1, the whiter the pixel). This model keeps only the "3" and "6" categories and performs binary classification; a preprocessing sketch is shown after the download paths below.

- Warship

  To verify the classification performance of the QNN on a more complex image dataset and the effectiveness of the proposed EQNAS method, we use a ship target dataset of sailing ships captured by drones from different angles. The images are in JPG format with a resolution of 640 x 512 and fall into two categories: Burke and Nimitz. The training set contains 411 images (202 Burke and 209 Nimitz), and the test set contains 150 images (78 Burke and 72 Nimitz).
After downloading, extract the datasets to the following directories:

```text
~/path/to/EQNAS/dataset/mnist
~/path/to/EQNAS/dataset/warship
```
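For illustration, here is a minimal numpy sketch of the MNIST preprocessing described above (keeping only digits 3 and 6 and thresholding pixels for the 01 encoding). The function name and the assumption that images are already loaded as float arrays are hypothetical; the default threshold mirrors `cfg.DATASET.THRESHOLD` shown later.

```python
# Hypothetical sketch of the MNIST filtering/binarization described above;
# it assumes images are float arrays in [0, 1] and labels are digit classes.
import numpy as np

def filter_3_and_6(images: np.ndarray, labels: np.ndarray, threshold: float = 0.5):
    """Keep digits 3 and 6 only, and map them to binary labels {0, 1}."""
    mask = (labels == 3) | (labels == 6)
    images, labels = images[mask], labels[mask]
    binary_labels = (labels == 6).astype(np.int32)            # 3 -> 0, 6 -> 1
    binary_images = (images > threshold).astype(np.float32)   # pixels -> 0/1
    return binary_images, binary_labels
```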
## Environment Requirements

- Hardware (GPU)
    - A GPU is required to set up the hardware environment.
- Framework
    - MindSpore (1.8.1)
    - MindQuantum (0.7.0)
- Installation of other third-party libraries

```bash
cd EQNAS
conda env create -f eqnas.yaml
conda install --name eqnas --file condalist.txt
pip install -r requirements.txt
```
For more details, please refer to the official MindSpore and MindQuantum documentation.
## Quick Start

After installing MindSpore and MindQuantum from the official website, you can follow the steps below for training and evaluation in a GPU environment:

```bash
# Training
# MNIST dataset training example
python eqnas.py --data-type mnist --data-path ./dataset/mnist/ --batch 32 --epoch 3 --final 10 | tee mnist_train.log
# OR
bash run_train.sh mnist /abs_path/to/dataset/mnist/ 32 3 10

# Warship dataset training example
python eqnas.py --data-type warship --data-path ./dataset/warship/ --batch 10 --epoch 10 --final 20 | tee warship_train.log
# OR
bash run_train.sh warship /abs_path/to/dataset/warship/ 10 10 20

# Evaluation can be performed after training is completed
# MNIST dataset evaluation
python eval.py --data-type mnist --data-path ./dataset/mnist/ --ckpt-path /abs_path/to/best_ckpt/ | tee mnist_eval.log
# OR
bash run_eval.sh mnist /abs_path/to/dataset/mnist/ /abs_path/to/best_ckpt/

# Warship dataset evaluation
python eval.py --data-type warship --data-path ./dataset/warship/ --ckpt-path /abs_path/to/best_ckpt/ | tee warship_eval.log
# OR
bash run_eval.sh warship /abs_path/to/dataset/warship/ /abs_path/to/best_ckpt/
```
## Script Description

```text
├── EQNAS
    ├── condalist.txt            # Anaconda package list
    ├── eqnas.py                 # Training script
    ├── eqnas.yaml               # Anaconda environment
    ├── eval.py                  # Evaluation script
    ├── README.md                # EQNAS README
    ├── requirements.txt         # pip package dependencies
    ├── scripts
    │   ├── run_eval.sh          # Evaluation shell script
    │   └── run_train.sh         # Training shell script
    └── src
        ├── dataset.py           # Dataset generator
        ├── loss.py              # Model loss function
        ├── metrics.py           # Model evaluation metrics
        ├── model
        │   └── common.py        # Quantum neural network building
        ├── qea.py               # Quantum evolutionary algorithm
        └── utils
            ├── config.py        # Model parameter configuration file
            ├── data_preprocess.py  # Data preprocessing
            ├── logger.py        # Log builder
            └── train_utils.py   # Model training definition
```
### Script Parameters

Quantum evolutionary algorithm parameters, training parameters, dataset parameters, and evaluation parameters can all be configured in `config.py`:
```python
cfg = EasyDict()
cfg.LOG_NAME = "logger"

# Quantum evolutionary algorithm parameters
cfg.QEA = EasyDict()
cfg.QEA.fitness_best = []      # The best fitness of each generation
# Population parameters
cfg.QEA.Genome = 64            # Chromosome length
cfg.QEA.N = 10                 # Population size
cfg.QEA.generation_max = 50    # Number of population iterations

# Dataset parameters
cfg.DATASET = EasyDict()
cfg.DATASET.type = "mnist"     # mnist or warship
cfg.DATASET.path = "./dataset/" + cfg.DATASET.type + "/"  # ./dataset/mnist/ or ./dataset/warship/
cfg.DATASET.THRESHOLD = 0.5

# Training parameters
cfg.TRAIN = EasyDict()
cfg.TRAIN.EPOCHS = 3           # 10 for warship
cfg.TRAIN.EPOCHS_FINAL = 10    # 20 for warship
cfg.TRAIN.BATCH_SIZE = 32      # 10 for warship
cfg.TRAIN.learning_rate = 0.001
cfg.TRAIN.checkpoint_path = "./weights/" + cfg.DATASET.type + "/final/"
```
For more configuration details, please refer to the `config.py` file in the `utils` directory. A sketch of the population representation behind the `cfg.QEA` parameters follows.
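As context for the `cfg.QEA` values above, here is a hedged sketch (not the repository's `qea.py`) of the textbook quantum-chromosome representation used by quantum evolutionary algorithms: each of the `N` individuals is a string of `Genome` qubits stored as amplitude pairs, and observing the population collapses it to binary architecture encodings.

```python
# Sketch of a quantum evolutionary population under the configured sizes.
# Assumption: the standard (alpha, beta) amplitude representation is used.
import numpy as np

N, GENOME = 10, 64  # cfg.QEA.N and cfg.QEA.Genome from the config above

# Initialize every qubit in uniform superposition: alpha = beta = 1/sqrt(2).
population = np.full((N, GENOME, 2), 1.0 / np.sqrt(2.0))

def observe(population: np.ndarray) -> np.ndarray:
    """Collapse each qubit to a classical bit, with P(bit = 1) = beta ** 2."""
    prob_one = population[..., 1] ** 2
    return (np.random.rand(*prob_one.shape) < prob_one).astype(np.int8)

architectures = observe(population)  # shape (N, GENOME): candidate encodings
```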
## Training Process

- Training on the MNIST dataset in a GPU environment

  Before running the following command, please move the dataset into the `dataset` folder under the EQNAS root directory, so that a relative path can describe the dataset location. Otherwise, set `--data-path` to an absolute path.

  ```bash
  python eqnas.py --data-type mnist --data-path ./dataset/mnist/ --batch 32 --epoch 3 --final 10 | tee mnist_train.log
  # OR
  bash run_train.sh mnist /abs_path/to/dataset/mnist/ 32 3 10
  ```

  The above Python command runs in the background. You can view the results through the `mnist_train.log` file in the current directory or the log files under the `.log/` directory. After training is complete, the `weights/` directory next to the `eqnas.py` script contains the `best.ckpt`, `init.ckpt`, and `latest.ckpt` checkpoint files, together with the `model.arch` architecture file for each model produced during the architecture search.

- Training on the Warship dataset in a GPU environment

  ```bash
  python eqnas.py --data-type warship --data-path ./dataset/warship/ --batch 10 --epoch 10 --final 20 | tee warship_train.log
  # OR
  bash run_train.sh warship /abs_path/to/dataset/warship/ 10 10 20
  ```

  View the training results in the same way as for the MNIST dataset.
## Evaluation Process

- Evaluating on the MNIST dataset in a GPU environment

  Before running the following command, please move the dataset into the `dataset` folder under the EQNAS root directory, so that a relative path can describe the dataset location; otherwise, provide the absolute path of the dataset. The checkpoint path used for evaluation must be an absolute path.

  ```bash
  python eval.py --data-type mnist --data-path ./dataset/mnist/ --ckpt-path /abs_path/to/best_ckpt/ | tee mnist_eval.log
  # OR
  bash run_eval.sh mnist /abs_path/to/dataset/mnist/ /abs_path/to/best_ckpt/
  ```

  The above Python command runs in the background; you can view the results through the `mnist_eval.log` file.

- Evaluating on the Warship dataset in a GPU environment

  Please follow the same procedure as for the MNIST dataset.
## Model Export

- Quantum models created with MindQuantum are currently not officially supported for export to a standard serialized model format.
- To save the quantum circuits anyway, this project uses Python's built-in pickle serialization package to save every quantum model obtained from the architecture search as `/weights/model/model.arch`. The model architecture can be loaded following the method in `eval.py`, as sketched below.
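A minimal sketch of that pickle-based save/load flow; the `arch` variable here is a hypothetical placeholder, since the exact object stored in `model.arch` is defined by the repository.

```python
# Sketch of saving/loading a searched architecture with pickle.
# `arch` is a hypothetical placeholder for the searched model architecture.
import pickle

arch = {"layers": []}  # placeholder architecture object

# Save an architecture produced by the search.
with open('./weights/model/model.arch', 'wb') as f:
    pickle.dump(arch, f)

# Load it back, following the approach used in eval.py.
with open('./weights/model/model.arch', 'rb') as f:
    arch = pickle.load(f)
```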
## Performance

| Parameter           | GPU                                         | GPU                                          |
| ------------------- | ------------------------------------------- | -------------------------------------------- |
| Model version       | EQNAS                                       | EQNAS                                        |
| Resource            | NVIDIA GeForce RTX 3090; Ubuntu 20.04       | NVIDIA GeForce RTX 2080 Ti; Ubuntu 18.04     |
| Upload date         | 2022-12-06                                  | 2022-12-06                                   |
| MindSpore version   | 1.8.1                                       | 1.8.1                                        |
| MindQuantum version | 0.7.0                                       | 0.7.0                                        |
| Dataset             | warship                                     | mnist                                        |
| Training parameters | epoch=20, steps per epoch=41, batch_size=10 | epoch=10, steps per epoch=116, batch_size=32 |
| Optimizer           | Adam                                        | Adam                                         |
| Loss function       | Binary cross-entropy loss                   | Binary cross-entropy loss                    |
| Output              | accuracy                                    | accuracy                                     |
| Accuracy            | 84.0%                                       | 98.9%                                        |
| Training duration   | 7h19m29s                                    | 27h27m23s                                    |
| Speed               | 631 ms/step                                 | 2734 ms/step                                 |
## Description of Random Seeds

- In the script `dataset.py`, a random seed is set when creating the ship data loader and shuffling the ship data.
- To preserve the randomness of the mutation and crossover operations in the quantum evolutionary algorithm, the random seed is immediately reset from the system time after the seed above is set. A sketch of this pattern follows.
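A short sketch of the seeding pattern described above (the exact calls in `dataset.py` may differ): fix the seed so data shuffling is reproducible, then reseed from the system time so mutation and crossover remain random.

```python
# Hypothetical sketch: reproducible shuffling, then restored randomness.
import random
import time

ship_samples = list(range(411))  # stand-in for the 411 training images

random.seed(1)                   # fixed seed: ship data shuffles reproducibly
random.shuffle(ship_samples)
random.seed(int(time.time()))    # reseed from system time for mutation/crossover
```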