I am trying to compute the l1 norm by using Gurobi in Python. Since I am new to Python and Gurobi, I am asking for help here.
The model is:
minimize  1^T (r+ + r-)
s.t.      y - X beta = r+ - r-
          r+ >= 0, r- >= 0
where y is an n-vector, X is an n-by-p matrix, r+ and r- are n-vectors, and beta is a p-vector. Here is my code; I don't know what is wrong. Can anyone help me?
row, col = X.shape
# import Gurobi
from gurobipy import *
# model
m = Model("l1-norm")
# create decision variables
r_plus = []
for i in range(row):
    r_plus = m.addVar(name="r_plus%d" % i)
r_minus = []
for i in range(row):
    r_minus = m.addVar(name="r_minu%d" % i)
beta = []
for j in range(col):
    beta = m.addVar(name="beta%d" % j)
# Update model to integrate new variables
m.update()
# set objective
m.setObjective(sum(r_plus) + sum(r_minus), GRB.MINIMIZE)
# add model constraint
for i in range(row):
    m.addConstr(y[i] - quicksum(X[i, j] * beta[j] for j in range(col)) == r_plus[i] - r_minus[i])
# solve
m.optimize()
Answer (score: 3):
You are building the variable lists incorrectly: each loop overwrites the list with a single Var instead of appending to it. It should be
r_plus = []
for i in range(row):
    r_plus.append(m.addVar(name="r_plus%d" % i))
r_minus = []
for i in range(row):
    r_minus.append(m.addVar(name="r_minu%d" % i))
beta = []
for j in range(col):
    beta.append(m.addVar(name="beta%d" % j))
or, more simply,
r_plus = [m.addVar(name="r_plus%d" % i) for i in range(row)]
r_minus = [m.addVar(name="r_minu%d" % i) for i in range(row)]
beta = [m.addVar(name="beta%d" % j) for j in range(col)]
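As a sanity check, the same LP can be solved without a Gurobi license using `scipy.optimize.linprog`. This is a minimal sketch, not the Gurobi code above; the variable stacking order `[r+, r-, beta]` and the small synthetic data are my own assumptions for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Small test problem: y is an n-vector, X an n-by-p matrix.
rng = np.random.default_rng(0)
n, p = 6, 2
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0])
y = X @ beta_true  # exact fit, so the optimal L1 residual is 0

# Stack the decision variables as z = [r_plus (n), r_minus (n), beta (p)].
# Objective: sum(r_plus) + sum(r_minus); beta has zero cost.
c = np.concatenate([np.ones(n), np.ones(n), np.zeros(p)])

# Equality constraint y - X beta = r_plus - r_minus,
# rewritten as r_plus - r_minus + X beta = y.
A_eq = np.hstack([np.eye(n), -np.eye(n), X])
b_eq = y

# r_plus, r_minus >= 0; beta is free.
bounds = [(0, None)] * (2 * n) + [(None, None)] * p

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
beta_hat = res.x[2 * n:]
print(res.fun)    # optimal L1 error (0 here, since y = X beta_true exactly)
print(beta_hat)   # recovered coefficients
```

Because `y` was generated exactly as `X @ beta_true`, the optimal objective is zero and `beta_hat` should match `beta_true`; with real noisy data the objective would be the minimum L1 residual instead.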