I have some very inefficient code below. Apart from the [0:x,1:2]-style slices, the code is identical across the 6 rounds; the end index x decreases by 1 each round, finally stopping at x = 8 (for the y and x slices) and x = 9 (for the t slice). I do store the prediction output in the variables x1 through x6. See the code below for clarity.

import pandas as pd
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
#round 1
y = Macro.iloc[0:13,1:2]
x = Macro.iloc[0:13,2:21]
t = Macro.iloc[13:14,2:21]
boost = GradientBoostingRegressor(n_estimators=100, learning_rate=0.1, max_depth=1, random_state=0, loss='ls').fit(x, y)
x6 = boost.predict(t)
#round 2
y = Macro.iloc[0:12,1:2]
x = Macro.iloc[0:12,2:21]
t = Macro.iloc[12:13,2:21]
boost = GradientBoostingRegressor(n_estimators=100, learning_rate=0.1, max_depth=1, random_state=0, loss='ls').fit(x, y)
x5 = boost.predict(t)
#round 3
y = Macro.iloc[0:11,1:2]
x = Macro.iloc[0:11,2:21]
t = Macro.iloc[11:12,2:21]
boost = GradientBoostingRegressor(n_estimators=100, learning_rate=0.1, max_depth=1, random_state=0, loss='ls').fit(x, y)
x4 = boost.predict(t)
# round 4
y = Macro.iloc[0:10,1:2]
x = Macro.iloc[0:10,2:21]
t = Macro.iloc[10:11,2:21]
boost = GradientBoostingRegressor(n_estimators=100, learning_rate=0.1, max_depth=1, random_state=0, loss='ls').fit(x, y)
x3 = boost.predict(t)
# round 5
y = Macro.iloc[0:9,1:2]
x = Macro.iloc[0:9,2:21]
t = Macro.iloc[9:10,2:21]
boost = GradientBoostingRegressor(n_estimators=100, learning_rate=0.1, max_depth=1, random_state=0, loss='ls').fit(x, y)
x2 = boost.predict(t)
# round 6
y = Macro.iloc[0:8,1:2]
x = Macro.iloc[0:8,2:21]
t = Macro.iloc[8:9,2:21]
boost = GradientBoostingRegressor(n_estimators=100, learning_rate=0.1, max_depth=1, random_state=0, loss='ls').fit(x, y)
x1 = boost.predict(t)
What is the simplest, most concise way to write this code so that each redundant step is not repeated? I considered introducing a loop that iterates over a list of the x variables, but is there a simpler or more robust approach?
Answer 0 (score: 0)
def func(i, n):
    # train on the first i rows and predict the rows from i to n
    y = Macro.iloc[0:i, 1:2]
    x = Macro.iloc[0:i, 2:21]
    t = Macro.iloc[i:n, 2:21]
    boost = GradientBoostingRegressor(n_estimators=100, learning_rate=0.1,
                                      max_depth=1, random_state=0,
                                      loss='ls').fit(x, y)
    return boost.predict(t)
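For example, a minimal usage sketch (assuming the Macro frame and the same endpoints as in the question, i.e. training windows ending at rows 13 down to 8, each predicting the single following row; the name predictions is just illustrative):

# collect the six forecasts; predictions[0] corresponds to the question's x6, predictions[5] to x1
predictions = [func(i, i + 1) for i in range(13, 7, -1)]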
Answer 1 (score: 0)
Perhaps this helps?
import pandas as pd
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

preds = []  # preds[0] is round 1 (the question's x6), preds[5] is round 6 (x1)
for i in range(6):
    print('round = ', i + 1)
    y = Macro.iloc[0:13 - i, 1:2]
    x = Macro.iloc[0:13 - i, 2:21]
    t = Macro.iloc[13 - i:14 - i, 2:21]
    boost = GradientBoostingRegressor(n_estimators=100, learning_rate=0.1,
                                      max_depth=1, random_state=0,
                                      loss='ls').fit(x, y)
    preds.append(boost.predict(t))
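If the individually named variables from the question are still wanted, the collected list can be unpacked afterwards, for example:

x6, x5, x4, x3, x2, x1 = preds  # round 1 produced x6, round 6 produced x1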