
Add quantum state discrimination tutorial #3250

Merged 8 commits on Mar 1, 2023. Changes shown from 4 commits.
1 change: 1 addition & 0 deletions docs/make.jl
@@ -167,6 +167,7 @@ const _PAGES = [
"tutorials/conic/experiment_design.md",
"tutorials/conic/min_ellipse.md",
"tutorials/conic/ellipse_approx.md",
"tutorials/conic/quantum_discrimination.md",
],
"Algorithms" => [
"tutorials/algorithms/benders_decomposition.md",
127 changes: 127 additions & 0 deletions docs/src/tutorials/conic/quantum_discrimination.jl
@@ -0,0 +1,127 @@
# Copyright 2017, Iain Dunning, Joey Huchette, Miles Lubin, and contributors #src
# This Source Code Form is subject to the terms of the Mozilla Public License #src
# v.2.0. If a copy of the MPL was not distributed with this file, You can #src
# obtain one at https://mozilla.org/MPL/2.0/. #src

# # Quantum state discrimination

# This tutorial solves the problem of [quantum state discrimination](https://en.wikipedia.org/wiki/Quantum_state_discrimination).

# The purpose of this tutorial is to demonstrate how to solve problems involving
# complex-valued decision variables and the [`HermitianPSDCone`](@ref). See
# [Complex number support](@ref) for more details.

# ## Required packages

# This tutorial makes use of the following packages:

using JuMP
import LinearAlgebra
import SCS

# ## Formulation

# A `d`-dimensional quantum state, ``\rho``, can be defined by a positive
# semidefinite complex-valued Hermitian matrix with a trace of `1`. Assume we
# have `N` `d`-dimensional quantum states, ``\{\rho_i\}_{i=1}^N``, each of which
# is equally likely.
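
# For example, with ``d = 2``, the maximally mixed state ``\mathbf{I}/2`` is one
# such matrix; it is Hermitian and has a trace of `1`:

ρ_mixed = LinearAlgebra.Hermitian(Matrix{ComplexF64}(LinearAlgebra.I, 2, 2) / 2)
LinearAlgebra.ishermitian(ρ_mixed), LinearAlgebra.tr(ρ_mixed)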

# The goal of the quantum state discrimination problem is to choose a set of
# positive operator-valued measures (POVMs), ``E_i``, such that if we observe
# ``E_i`` then the most probable state that we are in is ``\rho_i``.

# Each POVM ``E_i`` is a complex-valued Hermitian matrix, and there is a
# requirement that ``\sum\limits_{i=1}^N E_i = \mathbf{I}``.
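
# For example, with ``N = 2`` and ``d = 2``, a valid (but uninformative) choice
# satisfying this requirement is ``E_1 = E_2 = \mathbf{I}/2``, which guesses the
# state correctly with an expected probability of only ``1/2``:

E_trivial = [Matrix{ComplexF64}(LinearAlgebra.I, 2, 2) / 2 for _ in 1:2]
sum(E_trivial)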

# To choose the set of POVMs, we want to maximize the probability that we guess
# the quantum state correctly. This can be formulated as the following
# optimization problem:

# ```math
# \begin{aligned}
# \max\limits_{E} \;\; & \mathbb{E}_i[\operatorname{tr}(\rho_i \times E_i)] \\
# \text{s.t.} \;\; & \sum\limits_{i=1}^N E_i = \mathbf{I} \\
# & E_i \succeq 0 \quad \forall i = 1,\ldots,N.
# \end{aligned}
# ```

# ## Data

# To set up our problem, we need `N` `d`-dimensional quantum states. To keep the
# problem simple, we use `N = 2` and `d = 2`.

N, d = 2, 2

# We then generate `N` random `d`-dimensional quantum states:

function random_state(d)
x = randn(ComplexF64, (d, d))
y = x * x'
return LinearAlgebra.Hermitian(y / LinearAlgebra.tr(y))
end

ρ = [random_state(d) for i in 1:N]
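
# As a quick (optional) sanity check, each generated state should be Hermitian
# with a trace of approximately `1`:

all(LinearAlgebra.ishermitian(ρ_i) && LinearAlgebra.tr(ρ_i) ≈ 1 for ρ_i in ρ)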

# ## JuMP formulation

# To model the problem in JuMP, we need a solver that supports positive
# semidefinite matrices:

model = Model(SCS.Optimizer)
set_silent(model)

# Then, we construct our set of `E` variables:

E = [@variable(model, [1:d, 1:d] in HermitianPSDCone()) for i in 1:N]

# Here we have created a vector of matrices. This is different to other modeling
# languages such as YALMIP, which allow you to create a multi-dimensional array
# in which 2-dimensional slices of the array are Hermitian matrices.
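
# For example, the first element of this vector is itself a ``d \times d``
# Hermitian matrix:

E[1]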

# We also need to enforce the constraint that
# ``\sum\limits_i E_i = \mathbf{I}``:

@constraint(model, sum(E) .== LinearAlgebra.I)

# This constraint is a complex-valued equality constraint. In the solver, it
# will be decomposed into two types of equality constraints: one to enforce
# equality of the real components, and one to enforce equality of the imaginary
# components.
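
# As an optional aside, one way to inspect the types of constraints that are now
# in the model is:

list_of_constraint_types(model)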

# Our objective is to maximize the expected probability of guessing correctly.
# Because each state is equally likely, this expectation is
# ``\frac{1}{N} \sum\limits_{i=1}^N \operatorname{tr}(\rho_i E_i)``:

@objective(
model,
Max,
sum(real(LinearAlgebra.tr(ρ[i] * E[i])) for i in 1:N) / N,
)

# Now we optimize:

optimize!(model)
solution_summary(model)

# The POVMs are:

solution = [value.(e) for e in E]
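
# As an optional check, the recovered POVMs should sum to the identity matrix,
# up to solver tolerances:

sum(solution)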

# ## Alternative formulation

# The formulation above includes `N` Hermitian matrices and a set of linear
# equality constraints. We can simplify the problem by replacing `E[N]` with
# ``I - \sum\limits_{i=1}^{N-1} E_i``, where ``I`` is the identity matrix. This
# results in:

model = Model(SCS.Optimizer)
set_silent(model)
E = [@variable(model, [1:d, 1:d] in HermitianPSDCone()) for i in 1:N-1]
E_n = LinearAlgebra.Hermitian(LinearAlgebra.I - sum(E))
@constraint(model, E_n in HermitianPSDCone())
push!(E, E_n)

# The objective can also be simplified. Because each ``\rho_i`` is Hermitian,
# ``\operatorname{tr}(\rho_i E_i) = \langle \rho_i, E_i \rangle``, so the sum of
# traces is equivalent to the inner product `LinearAlgebra.dot(ρ, E)`:

@objective(model, Max, real(LinearAlgebra.dot(ρ, E)) / N)

# Then we can check that we get the same solution:

optimize!(model)
solution_summary(model)
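
# As before, we can recover the POVMs and optionally check that they also sum to
# the identity matrix:

solution_2 = [value.(e) for e in E]
sum(solution_2)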