
## QUBOs for TSP and Maximum 3-SAT

Generating QUBOs for Maximum 3-SAT and the Traveling Salesman Problem (TSP) was a bit tougher than for the other 9 problems outlined in https://arxiv.org/pdf/1811.11538.pdf. In this post I'd like to give a quick example of how one would do so. For reference, the context of this is this project. To understand this post, it is required to read the arXiv paper above; otherwise I believe the code won't make any sense.

For Maximum 3-SAT, the following clauses are given:

$(x_1 \lor x_2 \lor x_3), (\overline{x_1} \lor x_2 \lor x_3), (x_1 \lor \overline{x_2} \lor x_3), (\overline{x_1} \lor x_2 \lor \overline{x_3})$

In code (cf. here) it looks like this, where the number is the variable index and the boolean denotes whether the literal is positive (True) or negated (False):

    problem = Max3SAT(
        [
            ((0, True), (1, True), (2, True)),
            ((0, False), (1, True), (2, True)),
            ((0, True), (1, False), (2, True)),
            ((0, False), (1, True), (2, False))
        ],
        3
    )
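
As a quick sanity check, the clause list can be brute-forced over all $2^3$ assignments. This is a standalone sketch, not part of the project:

```python
import itertools

# The four clauses from above: (variable index, required value).
clauses = [
    ((0, True), (1, True), (2, True)),
    ((0, False), (1, True), (2, True)),
    ((0, True), (1, False), (2, True)),
    ((0, False), (1, True), (2, False)),
]

def n_satisfied(x):
    # A clause is satisfied if at least one literal matches the assignment.
    return sum(any(x[i] == val for i, val in clause) for clause in clauses)

best = max(itertools.product([False, True], repeat=3), key=n_satisfied)
print(n_satisfied(best))  # 4: all clauses can be satisfied simultaneously
```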


The QUBO matrix we are looking for looks like this:

    [
        [0.0, -1.0, 1.5, -0.5, 0.5, -0.5, 0.5],
        [-1.0, 1.0, 0.0, -0.5, -0.5, 0.5, -0.5],
        [1.5, 0.0, -1.0, -0.5, -0.5, -0.5, 0.5],
        [-0.5, -0.5, -0.5, 2.0, 0.0, 0.0, 0.0],
        [0.5, -0.5, -0.5, 0.0, 1.0, 0.0, 0.0],
        [-0.5, 0.5, -0.5, 0.0, 0.0, 1.0, 0.0],
        [0.5, -0.5, 0.5, 0.0, 0.0, 0.0, 0.0]
    ]
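
To verify the matrix, we can brute-force $\min_x x^T Q x$ over all $2^7$ assignments (3 variables plus 4 auxiliary clause variables $w_i$) in a standalone sketch; the minimum should be reached by an assignment whose first three bits satisfy all four clauses:

```python
import itertools
import numpy as np

# The 7x7 QUBO from above: columns 0-2 are x1..x3, columns 3-6 are the
# auxiliary clause variables w1..w4.
Q = np.array([
    [0.0, -1.0, 1.5, -0.5, 0.5, -0.5, 0.5],
    [-1.0, 1.0, 0.0, -0.5, -0.5, 0.5, -0.5],
    [1.5, 0.0, -1.0, -0.5, -0.5, -0.5, 0.5],
    [-0.5, -0.5, -0.5, 2.0, 0.0, 0.0, 0.0],
    [0.5, -0.5, -0.5, 0.0, 1.0, 0.0, 0.0],
    [-0.5, 0.5, -0.5, 0.0, 0.0, 1.0, 0.0],
    [0.5, -0.5, 0.5, 0.0, 0.0, 0.0, 0.0],
])

clauses = [
    ((0, True), (1, True), (2, True)),
    ((0, False), (1, True), (2, True)),
    ((0, True), (1, False), (2, True)),
    ((0, False), (1, True), (2, False)),
]

def n_satisfied(x):
    return sum(any(x[i] == val for i, val in clause) for clause in clauses)

best = min(itertools.product([0, 1], repeat=7),
           key=lambda v: np.array(v) @ Q @ np.array(v))
energy = np.array(best) @ Q @ np.array(best)
print(energy, n_satisfied(best[:3]))  # -1.0 4
```

For this instance the minimal energy is $-1$; note that because constant terms are dropped during the QUBO construction, the energy is not simply the negated number of satisfied clauses.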


(Figure: the Maximum 3-SAT QUBO matrix as a normalized heatmap.)

For Maximum 2-SAT, the QUBO formulation is really space-efficient (which is important for current architectures) and only scales with the number of variables (see the arXiv link above). Awesome! For Maximum 3-SAT, unfortunately, we scale with both the number of variables and the number of clauses.

The code to generate a QUBO for Maximum 3-SAT is shown below. The main loop is split into two parts: one that generates the blue part of the formula below and one that generates the green part. To be specific, the formula is written for exactly 3 variables per clause, so it's not general. The green part is simply made up of the pairwise combinations of all variables (e.g. if you have 5 variables, there are 10 combinations).

$\sum^{m}_{i=1} \left( \textcolor{blue}{(1 + w_i)(y_{i1}+y_{i2}+y_{i3})} \textcolor{green}{{} - y_{i1}y_{i2} - y_{i1}y_{i3} - y_{i2}y_{i3} - 2w_i} \right)$

Here's the code:

    def gen_qubo_matrix(self):
        n = self.n_vars + len(self.clauses)
        Q = np.zeros((n, n))

        # Clauses is a list of 3-tuples of 2-tuples.
        # Eg: [((0, True), (2, True), (3, False)), ((1, True), (3, False), (4, True))]
        # So we loop through all tuples and find the type of it.
        # There are 4 types: (True, True), (True, False), etc.
        for i, clause in enumerate(self.clauses):
            clause_idx = self.n_vars + i
            Q[clause_idx][clause_idx] += 2

            # (1 + w1) * (x1 + x2 + x3) = x1 + x2 + x3 + w1x1 + w1x2 + w1x3
            # (1 + w1) * ((1 - x1) + x2 + x3) = -x1 + x2 + x3 + w1 -w1x1 +w1x2 +w1x3
            # (1 + w1) * ((1 - x1) + x2 + (1 - x3)) = -x1 + x2 -x3 + 2w1 -w1x1 +w1x2 -w1x3
            for item in clause:
                item_idx = item[0]
                val = item[1]
                if val:
                    Q[item_idx][item_idx] -= 1
                    Q[clause_idx][item_idx] -= .5
                    Q[item_idx][clause_idx] -= .5
                if not val:
                    Q[item_idx][item_idx] += 1
                    Q[clause_idx][clause_idx] -= 1
                    Q[clause_idx][item_idx] += .5
                    Q[item_idx][clause_idx] += .5

            # -x1x2 -x1x3 -x2x3
            # -(1-x1)x2 -x1x3 -x2x3 = -1 +x1x2
            # -(1-x1)(1-x2) -x1x3 -x2x3 = -1 -2x1x2 +x1 +x2
            for (item1, item2) in itertools.combinations(clause, 2):
                idx1 = item1[0]
                idx2 = item2[0]
                val1 = item1[1]
                val2 = item2[1]
                if val1 and val2:
                    Q[idx1][idx2] += .5
                    Q[idx2][idx1] += .5
                if not val1 and val2:
                    Q[idx2][idx2] += 1.
                    Q[idx1][idx2] -= .5
                    Q[idx2][idx1] -= .5
                if val1 and not val2:
                    Q[idx1][idx1] += 1.
                    Q[idx1][idx2] -= .5
                    Q[idx2][idx1] -= .5
                if not val1 and not val2:
                    Q[idx1][idx2] += 1.
                    Q[idx2][idx1] += 1.
                    Q[idx1][idx1] -= 1.
                    Q[idx2][idx2] -= 1.

        return Q


As you can see, it's longer than most other QUBO generators in the project, and the matrix that is generated also looks very intricate. A small note for newcomers to QUBOs: whenever a term couples two different variables, e.g. $x_1x_2$, its coefficient is split in two, since both $Q_{12}$ and $Q_{21}$ contribute to that term. This is why you see all those $.5$ additions and subtractions. A lone variable such as $x_3$ sits on the diagonal of $Q$ and needs no such split. What about $2x_1x_2$? Splitting the coefficient $2$ in two, we end up with $\pm 1$ per entry.
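
To make the note above concrete, here is a minimal standalone example of the split (toy indices, not project code):

```python
import numpy as np

# Encode -x1*x2 (split across two symmetric entries) and +x3 (diagonal).
Q = np.zeros((3, 3))
Q[0][1] -= .5
Q[1][0] -= .5  # together these two entries encode -1 * x1*x2
Q[2][2] += 1.  # a lone +x3 sits on the diagonal, no split needed

x = np.array([1, 1, 0])  # x1 = x2 = 1, x3 = 0
print(x @ Q @ x)  # -1.0, i.e. the value of -x1*x2
```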

As for TSP, the problem is defined by a distance matrix between the nodes of the graph and a constraint $c$. For instance, below we see three nodes with distances normalized to the range 0 to 1.

    dist_matrix = [
        [0, 1/3., 2/3.],
        [1/3., 0, 1/3.],
        [1, 1/3., 0]
    ]


The resulting QUBO matrix looks like this (note: the constraint is set to 400):

    [
        [-800.00, 800.00, 800.00, 800.00, 0.00, 0.00, 800.00, 3.33, 10.00],
        [800.00, -800.00, 800.00, 0.00, 800.00, 0.00, 3.33, 800.00, 3.33],
        [800.00, 800.00, -800.00, 0.00, 0.00, 800.00, 6.67, 3.33, 800.00],
        [800.00, 3.33, 10.00, -800.00, 800.00, 800.00, 800.00, 0.00, 0.00],
        [3.33, 800.00, 3.33, 800.00, -800.00, 800.00, 0.00, 800.00, 0.00],
        [6.67, 3.33, 800.00, 800.00, 800.00, -800.00, 0.00, 0.00, 800.00],
        [800.00, 0.00, 0.00, 800.00, 3.33, 10.00, -800.00, 800.00, 800.00],
        [0.00, 800.00, 0.00, 3.33, 800.00, 3.33, 800.00, -800.00, 800.00],
        [0.00, 0.00, 800.00, 6.67, 3.33, 800.00, 800.00, 800.00, -800.00]
    ]
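
This matrix can also be sanity-checked by brute force over all $2^9$ assignments. The following is a standalone sketch; it assumes bit $3p + i$ encodes "city $i$ is visited at tour step $p$", which matches the quadrant layout of the matrix:

```python
import itertools
import numpy as np

# The 9x9 TSP QUBO from above (3 cities x 3 tour steps).
Q = np.array([
    [-800.00, 800.00, 800.00, 800.00, 0.00, 0.00, 800.00, 3.33, 10.00],
    [800.00, -800.00, 800.00, 0.00, 800.00, 0.00, 3.33, 800.00, 3.33],
    [800.00, 800.00, -800.00, 0.00, 0.00, 800.00, 6.67, 3.33, 800.00],
    [800.00, 3.33, 10.00, -800.00, 800.00, 800.00, 800.00, 0.00, 0.00],
    [3.33, 800.00, 3.33, 800.00, -800.00, 800.00, 0.00, 800.00, 0.00],
    [6.67, 3.33, 800.00, 800.00, 800.00, -800.00, 0.00, 0.00, 800.00],
    [800.00, 0.00, 0.00, 800.00, 3.33, 10.00, -800.00, 800.00, 800.00],
    [0.00, 800.00, 0.00, 3.33, 800.00, 3.33, 800.00, -800.00, 800.00],
    [0.00, 0.00, 800.00, 6.67, 3.33, 800.00, 800.00, 800.00, -800.00],
])

def is_valid_tour(v):
    steps = v.reshape(3, 3)
    # Exactly one city per step and each city visited exactly once.
    return (steps.sum(axis=1) == 1).all() and (steps.sum(axis=0) == 1).all()

best = min(itertools.product([0, 1], repeat=9),
           key=lambda v: np.array(v) @ Q @ np.array(v))
best = np.array(best)
energy = best @ Q @ best
print(round(energy, 2), is_valid_tour(best))  # -2386.67 True
```

The $-2400$ part of the minimal energy comes from the three diagonal constraint entries; the remaining $\approx 13.33$ is the length of the shortest tour, apparently scaled by a factor of 10 (the `self.P` in the code below).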


(Figure: the TSP QUBO matrix as a normalized heatmap.)

TSP is unfortunately rather inefficient to encode: for $n$ nodes, we have an $n \times n$ distance matrix but an $n^2 \times n^2$ QUBO matrix! Below is the code for generating a TSP QUBO (cf. here for more context):

    def gen_qubo_matrix(self):
        n = len(self.dist_matrix)
        Q = np.zeros((n ** 2, n ** 2))

        # Offsets of the n x n quadrants; each quadrant couples one
        # tour step with another one.
        quadrants_y = list(range(0, n ** 2, n))

        # The diagonal positive constraints
        for start_x, start_y in zip(quadrants_y, quadrants_y[1:] + quadrants_y[:1]):
            for i in range(n):
                Q[start_x + i][start_y + i] = 2 * self.constraint

        # The distance matrices
        for start_x, start_y in zip(quadrants_y, quadrants_y[-1:] + quadrants_y[:-1]):
            for i in range(n):
                for j in range(n):
                    if i == j:
                        continue
                    Q[start_x + i][start_y + j] = self.P * self.dist_matrix[j][i]
                Q[start_x + i][start_y + i] = 2 * self.constraint

        # The middle diagonal negative constraints