I have some code that runs sample_size sequences of matrix multiplications, each sequence involving seq_length matrix products and sums. The drawback is that once seq_length exceeds about 300 the algorithm becomes slow, and, needless to say, the larger seq_length gets, the slower the whole algorithm runs. So I would like to know whether this can be optimized/vectorized, either through the way I have written the code or by more general means.
Basically, here I just define a set of (2x2) matrices with complex entries that the algorithm uses later. The cliff_operators() function then picks one of these matrices at random.
import random
import numpy as np
import matplotlib.pyplot as plt
import time
import sys
init_state = np.array([[1, 0], [0, 0]], dtype=complex)
II = np.identity(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PPP = (-II + 1j*X + 1j*Y + 1j*Z)/2
PPM = (-II + 1j*X + 1j*Y - 1j*Z)/2
PMM = (-II + 1j*X - 1j*Y - 1j*Z)/2
MMM = (-II - 1j*X - 1j*Y - 1j*Z)/2
MMP = (-II - 1j*X - 1j*Y + 1j*Z)/2
MPP = (-II - 1j*X + 1j*Y + 1j*Z)/2
PMP = (-II + 1j*X - 1j*Y + 1j*Z)/2
MPM = (-II - 1j*X + 1j*Y - 1j*Z)/2
def cliff_operators():
    return random.choice([II, X, Y, Z, PPP, PPM, PMM, MMM, MMP, MPP, PMP, MPM])
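As an aside relevant to vectorization later on: if the twelve matrices are stacked once into a single array, many random gates can be drawn in one call instead of one random.choice per loop step. A minimal sketch of that idea (the CLIFFORDS stack and random_cliffords are hypothetical names, and only four of the twelve matrices are stacked here for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stack the gate set once; extend the list with PPP, PPM, ... as needed.
II = np.identity(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
CLIFFORDS = np.stack([II, X, Y, Z])        # shape (4, 2, 2)

def random_cliffords(n):
    # Draw n random gate indices at once, then fancy-index the stack.
    return CLIFFORDS[rng.integers(0, len(CLIFFORDS), size=n)]  # (n, 2, 2)
```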
The compute_channel_operation function here performs a matrix product for each slice of the input stack of operators (a block matrix/tensor) and then sums the resulting matrices over the stack.
def compute_channel_operation(rho, operators):
    return np.sum(operators@rho@operators.transpose(0, 2, 1).conj(), axis=0)
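For what it's worth, the same channel application can also be written as a single np.einsum contraction, which makes the summed indices explicit and extends naturally to extra batch dimensions. This is a sketch of an equivalent formulation, not part of the original code:

```python
import numpy as np

def compute_channel_operation_einsum(rho, operators):
    # sum_k K_k @ rho @ K_k^dagger, with the Kraus index k contracted in one call
    return np.einsum('kij,jl,kml->im', operators, rho, operators.conj())
```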
def depolarizing_error(param):
    XYZ = np.sqrt(param/3)*np.array([X, Y, Z])
    return np.array([np.sqrt(1-param)*II, XYZ[0], XYZ[1], XYZ[2]])

def random_angles(sd):
    return np.random.normal(0, sd, 3)

def unitary_error(params):
    e_1 = np.exp(-1j*(params[0]+params[2])/2)*np.cos(params[1]/2)
    e_2 = np.exp(-1j*(params[0]-params[2])/2)*np.sin(params[1]/2)
    return np.array([[[e_1, e_2], [-e_2.conj(), e_1.conj()]]])
def rb(input_state, seq_length, sample_size, noise_mean,
       noise_sd, noise2_sd):
    fidelity = []
    for i in range(1, sample_size+1, 1):
        rho = input_state
        sequence = []
        for j in range(1, seq_length+1, 1):
            noise = depolarizing_error(np.random.normal(noise_mean, noise_sd))
            noise_2 = unitary_error(random_angles(noise2_sd))
            unitary = cliff_operators()
            sequence.append(unitary)
            i_ideal_operator = compute_channel_operation(rho,
                                                         np.array([unitary]))
            i_noisy_operator = compute_channel_operation(i_ideal_operator,
                                                         noise)
            i_noisy_operator_2 = compute_channel_operation(i_noisy_operator,
                                                           noise_2)
            sys.stdout.write("\r" + "gate applied: " + str(j))
            rho = i_noisy_operator_2
        # Final random noise
        noise = depolarizing_error(np.random.normal(noise_mean, noise_sd))
        noise_2 = unitary_error(random_angles(noise2_sd))
        # Computes the Hermitian conjugate of the forward operator sequence
        unitary_plus_1 = np.linalg.multi_dot(sequence[::-1]).conj().T
        # Final ideal & noisy density operator
        f_ideal_operator = compute_channel_operation(rho,
                                                     np.array([unitary_plus_1]))
        f_noisy_operator = compute_channel_operation(f_ideal_operator, noise)
        f_noisy_operator_2 = compute_channel_operation(f_noisy_operator,
                                                       noise_2)
        fidelity.append(np.trace(input_state@f_noisy_operator_2))
    avg_fidelity = (1/sample_size)*np.sum(fidelity)
    return avg_fidelity
def get_data(rho, seq_length, sample_size, noise_mean, noise_sd, noise2_sd):
    length = []
    fidelity_s = []
    for s in range(2, seq_length, 1):
        avg_fidelity = rb(rho, s, sample_size, noise_mean,
                          noise_sd, noise2_sd)
        length.append(s)
        fidelity_s.append(avg_fidelity)
    plt.plot(length, fidelity_s)
    plt.title("Fidelity vs Clifford length")
    plt.ylim(0.5, 1)
    plt.xlabel("Clifford length")
    plt.ylabel("Fidelity")
    plt.xlim(0, 100)
    plt.show()
starttime = time.time()
get_data(init_state, 402, 1, 0.005, 0.001, 0.01)
timeElapsed = time.time() - starttime
print(timeElapsed)
So, could the i and j loops potentially be vectorized away so that the code runs faster as seq_length grows? And is it possible to vectorize the loop over sample_size so that all n sequences run simultaneously in one large matrix?
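For context on what such a vectorization could look like: the j loop has a sequential dependency (each step's output rho feeds the next step), so it cannot be removed outright, but the loop over sample_size can be batched by carrying all density matrices as one (sample_size, 2, 2) array and letting np.matmul broadcast over the leading axis. A minimal sketch of that idea, assuming a noiseless step and a hypothetical name batched_step:

```python
import numpy as np

def batched_step(rhos, gates):
    """Advance every sample one sequence step in a single broadcasted matmul.

    rhos  : (n, 2, 2) density matrices, one per sample
    gates : (n, 2, 2) one gate per sample
    Returns U rho U^dagger applied slice-by-slice via broadcasting.
    """
    return gates @ rhos @ np.conjugate(np.swapaxes(gates, -1, -2))

# Usage: four copies of |0><0|, each hit with an X gate in one call.
n = 4
rhos = np.tile(np.array([[1, 0], [0, 0]], dtype=complex), (n, 1, 1))
gates = np.tile(np.array([[0, 1], [1, 0]], dtype=complex), (n, 1, 1))
rhos = batched_step(rhos, gates)   # every slice is now |1><1|
```

The noise channels would be batched the same way, by drawing all random parameters for a step at once and stacking the resulting Kraus operators along the sample axis.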