I want to compute the mean absolute percentage error (MAPE) between my predicted values and the true values. I found a solution from iTracing App, but it raises an error, reporting invalid syntax on the line mask = a <> 0:
def mape_vectorized_v2(a, b):
    mask = a <> 0
    return (np.fabs(a - b)/a)[mask].mean()
def mape_vectorized_v2(a, b):
  File "<ipython-input-5-afa5c1162e83>", line 1
    def mape_vectorized_v2(a, b):
                                 ^
SyntaxError: unexpected EOF while parsing
I am using Spyder 3. My predicted values are of type np.array and the true values are a DataFrame:
type(predicted)
Out[7]: numpy.ndarray
type(y_test)
Out[8]: pandas.core.frame.DataFrame
How can I fix this error and proceed with the MAPE calculation?
Edit:
predicted.head()
Out[22]:
Total_kWh
0 7.163627
1 6.584960
2 6.638057
3 7.785487
4 6.994427
y_test.head()
Out[23]:
Total_kWh
79 7.2
148 6.7
143 6.7
189 7.2
17 6.4
np.abs(y_test[['Total_kWh']] - predicted[['Total_kWh']]).head()
Out[24]:
Total_kWh
0 NaN
1 NaN
2 NaN
3 NaN
4 0.094427
Answer 0 (score: 11)
For comparisons in Python, not-equal needs !=, not <> (the <> operator was removed in Python 3). So you need:
import numpy as np

def mape_vectorized_v2(a, b):
    mask = a != 0
    return (np.fabs(a - b)/a)[mask].mean()
Another solution, from stats.stackexchange:
def mean_absolute_percentage_error(y_true, y_pred):
    y_true, y_pred = np.array(y_true), np.array(y_pred)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100
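As a hedged sketch of applying either function to the data in the question (predicted and y_test are the question's variables; the conversion calls are my assumption), converting both inputs to plain 1-D NumPy arrays avoids the NaN rows shown above, which come from pandas aligning on mismatched indices:

import numpy as np

# Assumed preprocessing: flatten both inputs to 1-D arrays so pandas index
# alignment (which produced the NaN rows in the question) cannot kick in.
a = np.asarray(y_test['Total_kWh'])       # true values
b = np.asarray(predicted).ravel()         # predicted values

print(mape_vectorized_v2(a, b))               # a fraction; multiply by 100 for percent
print(mean_absolute_percentage_error(a, b))   # already scaled to percent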
Answer 1 (score: 2)
Newer versions of scikit-learn (v0.24 and later) include a function that computes MAPE: sklearn.metrics.mean_absolute_percentage_error. You only need two array-like variables: y_true, holding the actual/true values, and y_pred, holding the predicted values. You can refer to the official documentation here.
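A minimal sketch of the call (the sample values are made up for illustration); note that scikit-learn returns the error as a fraction rather than a percentage:

from sklearn.metrics import mean_absolute_percentage_error

y_true = [7.2, 6.7, 6.7, 7.2, 6.4]          # actual values
y_pred = [7.16, 6.58, 6.64, 7.79, 6.99]     # predicted values

# Returns a fraction; multiply by 100 for a percentage.
print(mean_absolute_percentage_error(y_true, y_pred))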
Answer 2 (score: 0)
Neither of those solutions can handle zero values. Here is how I do it:
import numpy as np

def percentage_error(actual, predicted):
    res = np.empty(actual.shape)
    for j in range(actual.shape[0]):
        if actual[j] != 0:
            res[j] = (actual[j] - predicted[j]) / actual[j]
        else:
            # For a zero actual, scale the error by the mean of the actuals instead
            res[j] = predicted[j] / np.mean(actual)
    return res

def mean_absolute_percentage_error(y_true, y_pred):
    return np.mean(np.abs(percentage_error(np.asarray(y_true), np.asarray(y_pred)))) * 100
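A brief usage sketch with toy values (made up only to exercise the zero branch; not taken from the question):

y_true = [0.0, 6.7, 7.2]   # the zero actual triggers the fallback branch
y_pred = [0.5, 6.5, 7.0]

print(mean_absolute_percentage_error(y_true, y_pred))   # already scaled to percent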
Hope this helps.
Answer 3 (score: 0)
Since the actual values can also be zero, I use the average of the actual values in the denominator rather than each individual actual value:
Error = np.sum(np.abs(np.subtract(data_4['y'],data_4['pred'])))
Average = np.sum(data_4['y'])
MAPE = Error/Average
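For illustration, a small worked sketch of the same idea (the contents of data_4 below are made up; its column names y and pred are taken from the snippet above):

import numpy as np
import pandas as pd

# Hypothetical data mirroring the column names used above.
data_4 = pd.DataFrame({'y':    [0.0, 6.7, 7.2],
                       'pred': [0.5, 6.5, 7.0]})

Error = np.sum(np.abs(np.subtract(data_4['y'], data_4['pred'])))   # total absolute error
Average = np.sum(data_4['y'])                                      # total of the actuals
MAPE = Error/Average                                               # ~0.065 for this toy data
print(MAPE)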
Answer 4 (score: 0)
Here is an improved version that takes care of zeros:
# Mean Absolute Percentage Error
import numpy as np
from sklearn.utils import check_array

def mape(y_true, y_pred, sample_weight=None, multioutput='uniform_average'):
    y_type, y_true, y_pred, multioutput = _check_reg_targets(y_true, y_pred, multioutput)
    epsilon = np.finfo(np.float64).eps
    mape = np.abs(y_pred - y_true) / np.maximum(np.abs(y_true), epsilon)
    output_errors = np.average(mape, weights=sample_weight, axis=0)
    if isinstance(multioutput, str):
        if multioutput == 'raw_values':
            return output_errors
        elif multioutput == 'uniform_average':
            # pass None as weights to np.average: uniform mean
            multioutput = None
    return np.average(output_errors, weights=multioutput)

def _check_reg_targets(y_true, y_pred, multioutput, dtype="numeric"):
    if y_true.ndim == 1:
        y_true = y_true.reshape((-1, 1))
    if y_pred.ndim == 1:
        y_pred = y_pred.reshape((-1, 1))
    if y_true.shape[1] != y_pred.shape[1]:
        raise ValueError("y_true and y_pred have different number of output "
                         "({0}!={1})".format(y_true.shape[1], y_pred.shape[1]))
    n_outputs = y_true.shape[1]
    allowed_multioutput_str = ('raw_values', 'uniform_average',
                               'variance_weighted')
    if isinstance(multioutput, str):
        if multioutput not in allowed_multioutput_str:
            raise ValueError("Allowed 'multioutput' string values are {}. "
                             "You provided multioutput={!r}".format(
                                 allowed_multioutput_str, multioutput))
    elif multioutput is not None:
        multioutput = check_array(multioutput, ensure_2d=False)
        if n_outputs == 1:
            raise ValueError("Custom weights are useful only in "
                             "multi-output cases.")
        elif n_outputs != len(multioutput):
            raise ValueError(("There must be equally many custom weights "
                              "(%d) as outputs (%d).") %
                             (len(multioutput), n_outputs))
    y_type = 'continuous' if n_outputs == 1 else 'continuous-multioutput'
    return y_type, y_true, y_pred, multioutput
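A hedged usage sketch of the function above (the arrays are made up; a zero actual is included to show the epsilon floor at work):

y_true = np.array([7.2, 0.0, 7.2, 6.4])     # includes a zero actual
y_pred = np.array([7.16, 0.1, 7.79, 6.99])

# The epsilon floor avoids a division-by-zero warning, but the term for the
# zero actual becomes extremely large and dominates the average.
print(mape(y_true, y_pred))                            # fraction averaged over outputs
print(mape(y_true, y_pred, multioutput='raw_values'))  # one value per output column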