So, my last question about reinforcement learning was flagged as too broad, and I completely understand why. I had never used RL before, so I've been trying to teach it to myself, and so far that hasn't been easy. I've been reading some papers and trying to build on what I learned from them, but I'm not sure whether what I'm doing makes any sense, so I'm hoping for some help here!
Basically, I want to use Q-learning to decide how much to order each day. Below is the relevant part of the code. Ip and Im have already been computed earlier in the code by simulating the orders for each day without reinforcement learning, so I can feed them into my algorithm for training.
I split my states into 9 buckets (depending on how much inventory I have on hand) and my actions into 9 as well (each action means ordering a specific quantity). My reward function is the objective function I want to minimize, namely my total cost (so it is really a "loss" rather than a reward). However, it clearly isn't being optimized: the Q matrix doesn't get trained properly and ends up looking essentially random. Any ideas on how to improve/fix this code?
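To make the state/action mapping concrete, this is what I have in mind, written as small helpers (just a sketch; the actual code below does the same thing inline with if-chains, using the same thresholds and the same Qbase multiples):

def get_state(inventory):
    # 9 inventory buckets: <=8, 9-14, 15-20, 21-26, 27-32, 33-38, 39-44, 45-50, >50
    thresholds = [8, 14, 20, 26, 32, 38, 44, 50]
    for s, upper in enumerate(thresholds):
        if inventory <= upper:
            return s
    return 8

def get_order_quantity(action, Qbase=100):
    # 9 actions: order Qbase scaled by 100%, 95%, ..., 60%
    return Qbase * (1 - 0.05 * action)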
# Training
# Ip - on-hand inventory per day
# Im - lost orders per day
# T - no. of days (360)
# Qt - order quantity
# h, b - holding and shortage cost coefficients (defined earlier in the full code)

def reward(t):
    # Daily cost: holding cost on inventory plus penalty on lost orders.
    # This is really a "loss" that I want to minimize.
    return h*Ip[t] + b*Im[t]

Q = np.matrix(np.zeros([9, 9]))  # Q[state, action]
iteration = 0
t = 0
MAX_ITERATION = 500
alp = 0.2              # learning rate (between 0 and 1)
exploitation_p = 0.15  # exploitation probability (increased after each iteration until it reaches 1)

while iteration <= MAX_ITERATION:
    while t < T-1:
        # Map today's on-hand inventory to one of 9 states
        if Ip[t] <= 8:
            state = 0
        elif Ip[t] <= 14:
            state = 1
        elif Ip[t] <= 20:
            state = 2
        elif Ip[t] <= 26:
            state = 3
        elif Ip[t] <= 32:
            state = 4
        elif Ip[t] <= 38:
            state = 5
        elif Ip[t] <= 44:
            state = 6
        elif Ip[t] <= 50:
            state = 7
        else:
            state = 8
        # Exploit (greedy action) with probability exploitation_p, otherwise explore at random
        rd = random.random()
        if rd < exploitation_p:
            action = np.where(Q[state,] == np.max(Q[state,]))[1]
            if np.size(action) > 1:
                action = np.random.choice(action, 1)
        else:
            av_act = np.where(Q[state,] < 999999)[1]
            action = np.random.choice(av_act, 1)
        action = int(action)
        rew = reward(t+1)
        # Map tomorrow's on-hand inventory to the next state
        if Ip[t+1] <= 8:
            next_state = 0
        elif Ip[t+1] <= 14:
            next_state = 1
        elif Ip[t+1] <= 20:
            next_state = 2
        elif Ip[t+1] <= 26:
            next_state = 3
        elif Ip[t+1] <= 32:
            next_state = 4
        elif Ip[t+1] <= 38:
            next_state = 5
        elif Ip[t+1] <= 44:
            next_state = 6
        elif Ip[t+1] <= 50:
            next_state = 7
        else:
            next_state = 8
        # Greedy action in the next state (ties broken at random)
        next_action = np.where(Q[next_state,] == np.max(Q[next_state,]))[1]
        if np.size(next_action) > 1:
            next_action = np.random.choice(next_action, 1)
        next_action = int(next_action)
        # Q-learning update; the reward enters with a minus sign because it is a cost,
        # and there is no discount factor (implicitly gamma = 1)
        Q[state, action] = Q[state, action] + alp*(-rew + Q[next_state, next_action] - Q[state, action])
        t += 1
    if exploitation_p < 1:
        exploitation_p = exploitation_p + 0.05
    t = 0
    iteration += 1
# Testing
# I0 (initial inventory), d (daily demand), LT (lead time), It, Qt, h, b are defined earlier in the full code
Ip = [0] * T
Im = [0] * T
It[0] = I0 - d[0]
if It[0] >= 0:
    Ip[0] = It[0]
else:
    Im[0] = -It[0]
Qt[0] = 0
Qbase = 100
sumIp = Ip[0]
sumIm = Im[0]
i = 1
while i < T:
    # Net inventory: yesterday's on-hand stock minus today's demand, plus any order placed LT days ago
    if i - LT >= 0:
        It[i] = Ip[i-1] - d[i] + Qt[i-LT]
    else:
        It[i] = Ip[i-1] - d[i]
    It[i] = round(It[i], 0)
    if It[i] >= 0:
        Ip[i] = It[i]
    else:
        Im[i] = -It[i]
    # Map on-hand inventory to one of 9 states
    if Ip[i] <= 8:
        state = 0
    elif Ip[i] <= 14:
        state = 1
    elif Ip[i] <= 20:
        state = 2
    elif Ip[i] <= 26:
        state = 3
    elif Ip[i] <= 32:
        state = 4
    elif Ip[i] <= 38:
        state = 5
    elif Ip[i] <= 44:
        state = 6
    elif Ip[i] <= 50:
        state = 7
    else:
        state = 8
    # Always act greedily with respect to the learned Q matrix
    action = np.where(Q[state,] == np.max(Q[state,]))[1]
    if np.size(action) > 1:
        action = np.random.choice(action, 1)
    action = int(action)
    # Each action orders a fixed fraction of Qbase (100%, 95%, ..., 60%)
    if action == 0:
        Qt[i] = Qbase
    elif action == 1:
        Qt[i] = Qbase * 0.95
    elif action == 2:
        Qt[i] = Qbase * 0.9
    elif action == 3:
        Qt[i] = Qbase * 0.85
    elif action == 4:
        Qt[i] = Qbase * 0.8
    elif action == 5:
        Qt[i] = Qbase * 0.75
    elif action == 6:
        Qt[i] = Qbase * 0.7
    elif action == 7:
        Qt[i] = Qbase * 0.65
    elif action == 8:
        Qt[i] = Qbase * 0.6
    sumIp = sumIp + Ip[i]
    sumIm = sumIm + Im[i]
    i += 1

objfunc = h*sumIp + b*sumIm  # total cost over the horizon
print(objfunc)
If you want/need to run it, here is my full code: https://pastebin.com/vU5V0ehg
Thanks!
P.S. I guess my MAX_ITERATION should be higher (most papers seem to use around 10000), but in that case my computer takes far too long to run the program, which is why I'm using 500.
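One thing I was considering to make more iterations feasible: since Ip is fixed during training (it comes from the earlier simulation without RL), the state for every day could be precomputed once outside the iteration loop instead of redoing the if-chain on every pass. A rough sketch of what I mean (assuming Ip is a plain list/array that doesn't change while training):

import numpy as np

bins = [8, 14, 20, 26, 32, 38, 44, 50]       # same bucket thresholds as above
states = np.digitize(Ip, bins, right=True)   # states[t] in 0..8, computed once for all T days
# ...then inside the training loop: state = states[t] and next_state = states[t+1]

I'm not sure whether that alone would be enough to reach 10000 iterations in a reasonable time, though.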