diff --git a/HelloWorldMerLin.ipynb b/HelloWorldMerLin.ipynb new file mode 100644 index 0000000..700c9a7 --- /dev/null +++ b/HelloWorldMerLin.ipynb @@ -0,0 +1,266 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "c7a956c2", + "metadata": {}, + "source": [ + "# Hello World: Quantum Machine Learning with Merlin\n", + "\n", + "Welcome! This notebook demonstrates how to use Merlin to build, train, and evaluate a hybrid quantum-classical neural network for the classic Iris classification task. \n", + "You'll see how quantum layers can be integrated into standard PyTorch models and how to evaluate their performance." + ] + }, + { + "cell_type": "markdown", + "id": "34a189db", + "metadata": {}, + "source": [ + "## 1. Install and Import Dependencies\n", + "\n", + "First, let's make sure all required packages are installed and import them. \n", + "If you haven't installed Merlin yet, run: \n", + "`pip install merlinquantum` in your terminal or `!pip install merlinquantum` in your notebook." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2ed0f1a9", + "metadata": {}, + "outputs": [], + "source": [ + "import torch\n", + "import torch.nn as nn\n", + "import merlin as ML\n", + "from merlin.datasets import iris" + ] + }, + { + "cell_type": "markdown", + "id": "5c13b34f", + "metadata": {}, + "source": [ + "## 2. Load and Prepare the Iris Dataset\n", + "\n", + "We'll use the classic Iris dataset, a simple and well-known benchmark for classification. \n", + "Let's load the data and convert it to PyTorch tensors for training." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c71851b3", + "metadata": {}, + "outputs": [], + "source": [ + "train_features, train_labels, train_metadata = iris.get_data_train()\n", + "test_features, test_labels, test_metadata = iris.get_data_test()\n", + "\n", + "# Convert data to PyTorch tensors\n", + "X_train = torch.FloatTensor(train_features)\n", + "y_train = torch.LongTensor(train_labels)\n", + "X_test = torch.FloatTensor(test_features)\n", + "y_test = torch.LongTensor(test_labels)\n", + "\n", + "print(f\"Training samples: {X_train.shape[0]}\")\n", + "print(f\"Test samples: {X_test.shape[0]}\")\n", + "print(f\"Features: {X_train.shape[1]}\")\n", + "print(f\"Classes: {len(torch.unique(y_train))}\")" + ] + }, + { + "cell_type": "markdown", + "id": "74a16ed3", + "metadata": {}, + "source": [ + "![iris](./img/Iris.png)" + ] + }, + { + "cell_type": "markdown", + "id": "c80a7e7d", + "metadata": {}, + "source": [ + "## 3. Define the Hybrid Quantum-Classical Model\n", + "\n", + "We'll build a neural network that combines classical and quantum layers:\n", + "- **Classical preprocessing**: Reduces the 4 input features to 3.\n", + "- **Quantum layer**: Processes the features using a quantum circuit.\n", + "- **Classical output**: Maps quantum outputs to the 3 Iris classes."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "282b544b", + "metadata": {}, + "outputs": [], + "source": [ + "class HybridIrisClassifier(nn.Module):\n", + " \"\"\"\n", + " Hybrid model for Iris classification:\n", + " - Classical layer reduces 4 features to 3\n", + " - Quantum layer processes the 3 features\n", + " - Classical output layer for 3-class classification\n", + " \"\"\"\n", + " def __init__(self):\n", + " super(HybridIrisClassifier, self).__init__()\n", + "\n", + " # Classical preprocessing layer: 4 → 8 → 3\n", + " self.classical_in = nn.Sequential(\n", + " nn.Linear(4, 8),\n", + " nn.ReLU(),\n", + " nn.Linear(8, 3),\n", + " nn.Tanh() # Normalize to [-1, 1] for quantum layer\n", + " )\n", + "\n", + " # Quantum layer: processes 3 features\n", + " self.quantum = ML.QuantumLayer.simple(\n", + " input_size=3,\n", + " n_params=number_of_quantum_params\n", + " )\n", + "\n", + " # Classical output layer: quantum → 8 → 3\n", + " self.classical_out = nn.Sequential(\n", + " nn.Linear(self.quantum.output_size, 8),\n", + " nn.ReLU(),\n", + " nn.Dropout(0.1),\n", + " nn.Linear(8, 3)\n", + " )\n", + "\n", + " print(f\"\\nModel Architecture:\")\n", + " print(f\" Input: 4 features\")\n", + " print(f\" Classical preprocessing: 4 → 3\")\n", + " print(f\" Quantum layer: 3 → {self.quantum.output_size}\")\n", + " print(f\" Classical output: {self.quantum.output_size} → 3 classes\")\n", + "\n", + " def forward(self, x):\n", + " # Preprocess with classical layer\n", + " x = self.classical_in(x)\n", + "\n", + " # Shift tanh output from [-1, 1] to [0, 1] for quantum layer\n", + " x = (x + 1) / 2\n", + " x = self.quantum(x)\n", + "\n", + " # Final classification\n", + " x = self.classical_out(x)\n", + " return x" + ] + }, + { + "cell_type": "markdown", + "id": "aac69a97", + "metadata": {}, + "source": [ + "## 4. Set Training and Quantum Parameters\n", + "\n", + "You can adjust these parameters to see how they affect training and model performance." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ebdcce3a", + "metadata": {}, + "outputs": [], + "source": [ + "learning_rate = 0.05\n", + "number_of_epochs = 100\n", + "number_of_quantum_params = 200" + ] + }, + { + "cell_type": "markdown", + "id": "0b96210f", + "metadata": {}, + "source": [ + "## 5. Train the Hybrid Model\n", + "\n", + "We'll train our hybrid model using standard PyTorch routines. \n", + "The optimizer and loss function are set up as usual." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "88470525", + "metadata": {}, + "outputs": [], + "source": [ + "# Create model\n", + "model = HybridIrisClassifier()\n", + "\n", + "# Standard PyTorch training\n", + "optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)\n", + "criterion = nn.CrossEntropyLoss()\n", + "\n", + "# Training loop (standard PyTorch)\n", + "for epoch in range(number_of_epochs):\n", + " optimizer.zero_grad()\n", + " outputs = model(X_train)\n", + " loss = criterion(outputs, y_train)\n", + " loss.backward()\n", + " optimizer.step()" + ] + }, + { + "cell_type": "markdown", + "id": "e4d0bc29", + "metadata": {}, + "source": [ + "## 6. Evaluate the Model\n", + "\n", + "After training, let's evaluate our model on the test set and print the accuracy." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "6020e4ef", + "metadata": {}, + "outputs": [], + "source": [ + "# Evaluate on test set\n", + "model.eval()\n", + "with torch.no_grad():\n", + " test_outputs = model(X_test)\n", + " predictions = torch.argmax(test_outputs, dim=1)\n", + " accuracy = (predictions == y_test).float().mean().item()\n", + " print(f\"Test accuracy: {accuracy:.4f}\")" + ] + }, + { + "cell_type": "markdown", + "id": "7f6dd19d", + "metadata": {}, + "source": [ + "# Conclusion\n", + "\n", + "Congratulations! You've trained and evaluated a hybrid quantum-classical neural network using Merlin. \n", + "Feel free to experiment with the model architecture, quantum parameters, or try other datasets!" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "venv", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.11" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/HelloWorldPerceval.ipynb b/HelloWorldPerceval.ipynb new file mode 100644 index 0000000..3a4002c --- /dev/null +++ b/HelloWorldPerceval.ipynb @@ -0,0 +1,282 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Hello World: Quantum Computing with Perceval\n", + "\n", + "Welcome! This notebook demonstrates how to use Perceval to simulate and run photonic quantum experiments, both locally and remotely on Quandela's Quantum Hardware. Follow along to learn the basics and try it yourself!" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 1. Install and Import Dependencies\n", + "\n", + "First, let's make sure all required packages are installed and import them. \n", + "If you haven't installed Perceval yet, run: \n", + "`pip install perceval-quandela` in your terminal or `!pip install perceval-quandela` in your notebook." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 2. Check Perceval Version\n", + "\n", + "Let's check which version of Perceval is installed to ensure compatibility. The most up to date version is 0.13.2." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import perceval as pcvl\n", + "from tqdm.notebook import tqdm\n", + "import time\n", + "pcvl.__version__" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 3. Build and Simulate a Simple Quantum Circuit\n", + "\n", + "We'll create a basic quantum circuit using a beam splitter and simulate it locally. \n", + "This helps us understand the basics before moving to the cloud.\n", + "\n", + "![HOM](./img/HOM_effect.png)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from perceval.algorithm import Sampler\n", + "\n", + "input_state = pcvl.BasicState(\"|1,1>\") # Inject one photon on each input mode...\n", + "circuit = pcvl.BS() # ... 
of a perfect beam splitter\n", + "noise_model = pcvl.NoiseModel(transmittance=0.05, indistinguishability=0.85, g2=0.03) # Define some noise level (No noise if: transmittance=1, indistinguishability=1, g2=0)\n", + "\n", + "processor = pcvl.Processor(\"SLOS\", circuit, noise=noise_model) # Use SLOS, a strong simulation back-end\n", + "processor.min_detected_photons_filter(1) # Accept all output states containing at least 1 photon\n", + "processor.with_input(input_state)\n", + "\n", + "nsamples = 200_000 # Number of samples to generate\n", + "\n", + "sampler = Sampler(processor)\n", + "samples = sampler.sample_count(nsamples)['results'] # Ask to generate nsamples samples, and get back only the raw results\n", + "probs = sampler.probs()['results'] # Ask for the exact probabilities\n", + "print(f\"Samples: {samples}\")\n", + "print(f\"Probabilities: {probs}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 4. Configure Cloud Access\n", + "\n", + "To run jobs on Quandela's real quantum hardware, you'll need an API token. \n", + "Follow the instructions to set up your credentials securely." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from perceval import RemoteConfig, RemoteProcessor\n", + "# Save your token and proxy configuration into Perceval's persistent data; you only need to do this once per machine.\n", + "remote_config = RemoteConfig()\n", + "remote_config.set_token(\"ENTER_YOUR_TOKEN_HERE\") # Replace with your Quandela token, which you can create in your Quandela account at https://www.cloud.quandela.com/\n", + "remote_config.save()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 5. Run a Job on Quandela's Quantum Computer\n", + "\n", + "Now, let's submit our circuit to the cloud and monitor the job's progress.\n", + "The code below targets the platform qpu:ascella, a real quantum computer; if you change the platform name to sim:slos, the job is sent to a cloud simulator instead." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "remote_processor = RemoteProcessor(\"qpu:ascella\", noise=noise_model)\n", + "remote_processor.set_circuit(circuit)\n", + "remote_processor.min_detected_photons_filter(1)\n", + "remote_processor.with_input(input_state)\n", + "\n", + "remote_sampler = Sampler(remote_processor, max_shots_per_call=1e6)\n", + "remote_sampler.default_job_name = \"Hello World\" \n", + "remote_job = remote_sampler.sample_count.execute_async(nsamples)\n", + "print(remote_job.id)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 6. Monitor Job Progress\n", + "\n", + "We'll use a progress bar to track the status of our cloud job in real time.\n", + "\n", + "Or you can go to your Quandela account and see the job you have just created there. 
(Note: you can see in the Platform column whether the job was run on a simulator or on the quantum computer.)\n", + "\n", + "![Cloud Job Status](./img/FirstCloudJob.png)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "previous_prog = 0\n", + "with tqdm(total=1, bar_format='{desc}{percentage:3.0f}%|{bar}|') as tq:\n", + " tq.set_description(f'Get {nsamples} samples from {remote_processor.name}')\n", + " while not remote_job.is_complete:\n", + " tq.update(remote_job.status.progress/100-previous_prog)\n", + " previous_prog = remote_job.status.progress/100\n", + " time.sleep(1)\n", + " tq.update(1-previous_prog)\n", + " tq.close()\n", + "\n", + "print(f\"Job status = {remote_job.status()}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 7. Retrieve and Display Results\n", + "\n", + "Once the job is complete, let's fetch and visualize the results from the quantum hardware. \n", + "\n", + "This can be done directly:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "results = remote_job.get_results()\n", + "pcvl.pdisplay(results['results'])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Or later with the job ID:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "job_id = \"put_your_job_id_here\" # Replace with your job id\n", + "remote_processor = RemoteProcessor(\"put_your_platform_name_here\")\n", + "remote_job = remote_processor.resume_job(job_id) # despite its name, resume_job also works for fetching an already completed job\n", + "\n", + "# The remote job acts as if it had been created the first time: you can launch it, get the results...\n", + "results = remote_job.get_results() # If the job has already been run and results are available \n", + "pcvl.pdisplay(results['results'])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 8. Extra Tools\n", + "\n", + "You can visualize your circuit with Perceval:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "pcvl.pdisplay(circuit)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can get the current performance parameters of our QPUs:\n", + "\n", + "The optimal performance is: HOM = 100%, Transmittance = 100% and g2 = 0%.\n", + "\n", + "The Clock tells you how many photons are generated per second. (80 MHz means 80 million photons per second)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "ascella = pcvl.RemoteProcessor(\"qpu:ascella\")\n", + "perf_ascella = ascella.performance\n", + "print(f\"The Performance of Ascella is: {perf_ascella}\")\n", + "\n", + "belenos = pcvl.RemoteProcessor(\"qpu:belenos\")\n", + "perf_belenos = belenos.performance\n", + "print(f\"The Performance of Belenos is: {perf_belenos}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Conclusion\n", + "\n", + "Congratulations! You've simulated and run a quantum photonic circuit on real hardware using Perceval and Quandela's cloud platform. \n", + "Feel free to experiment with different circuits and parameters!"
+ ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "venv", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.11" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/img/FirstCloudJob.png b/img/FirstCloudJob.png new file mode 100644 index 0000000..d55dde9 Binary files /dev/null and b/img/FirstCloudJob.png differ diff --git a/img/HOM_effect.png b/img/HOM_effect.png new file mode 100644 index 0000000..b350e7c Binary files /dev/null and b/img/HOM_effect.png differ diff --git a/img/Iris.png b/img/Iris.png new file mode 100644 index 0000000..7282e52 Binary files /dev/null and b/img/Iris.png differ