My neural network isn't learning the right answers

Date: 2017-12-16 07:16:02

Tags: c++ neural-network minimax temporal-difference

First off, I'm a complete amateur, so I may mix up some terminology.

I've been working on a neural network that plays Connect 4 / Four in a Row.

The current design of the network model is 170 input values, 417 hidden neurons and 1 output neuron. The network is fully connected, i.e. every input is connected to every hidden neuron, and every hidden neuron is connected to the output node.

Every connection has its own independent weight, and every hidden node, as well as the single output node, also has a bias node with a weight.

The 170-value input representation of the Connect 4 game state is (a quick size check follows the list):

  • 42 pairs of values (84 input variables) representing whether a space is occupied by Player 1, by Player 2, or is free.
    • 0,0 means the space is free
    • 1,0 means it's Player 1's position
    • 0,1 means it's Player 2's position
    • 1,1 is impossible
  • Another 42 pairs of values (84 input variables) representing whether adding a checker here would give Player 1 or Player 2 a "connect 4" / "four in a row". The combinations of values are the same as above.
  • 2 final input variables to represent whose turn it is:
    • 1,0 it's Player 1's turn
    • 0,1 it's Player 2's turn
    • 1,1 and 0,0 are impossible
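As a sanity check, these sizes add up to the 7*6*4+2 inputs of the network instantiated in main below; a minimal sketch with hypothetical constant names:

#include <cstddef>

// Hypothetical constants, only to verify the input layout described above:
constexpr std::size_t kOccupancyInputs = 42 * 2; // occupied-by-whom pairs
constexpr std::size_t kThreatInputs    = 42 * 2; // "would complete four here" pairs
constexpr std::size_t kTurnInputs      = 2;      // whose turn it is

static_assert(kOccupancyInputs + kThreatInputs + kTurnInputs == 7*6*4 + 2,
              "170 inputs, matching NeuralNetwork<7*6*4+2,417,1> in main()");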

I measured the mean squared error averaged over every 100 games, across 10,000 games of various different configurations, to arrive at:

  • 417 hidden neurons
  • Alpha and Beta learning rates of 0.1 at the start, dropping linearly to 0.01 over the whole run
  • A lambda value of 0.5
  • 90 out of every 100 moves random at the start, dropping to 10 out of every 100 over the first 50% of games. So at the halfway point, 10 out of every 100 moves are random
  • The first 50% of games start with a random move
  • A sigmoid activation function used in every node

This graph shows the results of the various configurations, plotted on a logarithmic scale. This is how I determined which configuration to use.

(image: mean squared error of the various configurations, log scale)

I calculate this mean squared error by comparing the network's output for the board in a winning state against -1 for a Player 2 win and 1 for a Player 1 win. I add these up over every 100 games and divide the total by 100, giving the 1,000 values plotted in the graph above. That is, the code snippet is:

if(board.InARowConnected(4) == Board<7,6,4>::Player1)
{
    totalLoss += NN->BackPropagateFinal({1},previousNN,alpha,beta,lambda);
    winState = true;
}
else if(board.InARowConnected(4) == Board<7,6,4>::Player2)
{
    totalLoss += NN->BackPropagateFinal({-1},previousNN,alpha,beta,lambda);
    winState = true;
}
else if(!board.IsThereAvailableMove())
{
    totalLoss += NN->BackPropagateFinal({0},previousNN,alpha,beta,lambda);
    winState = true;
}

...

if(gameNumber % 100 == 0 && gameNumber != 0)
{
    totalLoss = totalLoss / gamesToOutput;
    matchFile << std::fixed << std::setprecision(51) << totalLoss << std::endl;
    totalLoss = 0.0;
}

The way I train the network is by having it play against itself over and over again. It's a feed-forward network and I use TD-Lambda to train it on each move (each move that wasn't chosen at random). My understanding of the textbook TD(λ) update I'm aiming for is sketched below.
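A reference sketch of that update for a single weight (illustrative names only, not code from my program; assumes gamma = 1 and no mid-game reward):

inline double td_lambda_step(double w, double& e, const double V_next,
                             const double V_prev, const double dV_dw,
                             const double alpha, const double lambda)
{
    const double delta = V_next - V_prev; // TD error between successive evaluations
    e = lambda * e + dV_dw;               // decay and accumulate the eligibility trace
    return w + alpha * delta * e;         // move the weight along the trace
}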

The board state given to the neural network is built as follows:

template<std::size_t BoardWidth, std::size_t BoardHeight, std::size_t InARow>
void create_board_state(std::array<double,BoardWidth*BoardHeight*4+2>& gameState, const Board<BoardWidth,BoardHeight,InARow>& board,
                        const typename Board<BoardWidth,BoardHeight,InARow>::Player player)
{
    using BoardType = Board<BoardWidth,BoardHeight,InARow>;
    auto bb = board.GetBoard();
    std::size_t stateIndex = 0;
    for(std::size_t boardIndex = 0; boardIndex < BoardWidth*BoardHeight; ++boardIndex, stateIndex += 2)
    {
        if(bb[boardIndex] == BoardType::Free)
        {
            gameState[stateIndex] = 0;
            gameState[stateIndex+1] = 0;
        }
        else if(bb[boardIndex] == BoardType::Player1)
        {
            gameState[stateIndex] = 1;
            gameState[stateIndex+1] = 0;
        }
        else
        {
            gameState[stateIndex] = 0;
            gameState[stateIndex+1] = 1;
        }
    }

    for(std::size_t x = 0; x < BoardWidth; ++x)
    {
        for(std::size_t y = 0; y < BoardHeight; ++y)
        {
            auto testBoard1 = board;
            auto testBoard2 = board;
            testBoard1.SetBoardChecker(x,y,Board<BoardWidth,BoardHeight,InARow>::Player1);
            testBoard2.SetBoardChecker(x,y,Board<BoardWidth,BoardHeight,InARow>::Player2);
            // player 1's set
            if(testBoard1.InARowConnected(4) == Board<7,6,4>::Player1)
                gameState[stateIndex] = 1;
            else
                gameState[stateIndex] = 0;
            // player 2's set
            if(testBoard2.InARowConnected(4) == Board<7,6,4>::Player2)
                gameState[stateIndex+1] = 1;
            else
                gameState[stateIndex+1] = 0;

            stateIndex += 2;
        }
    }

    if(player == Board<BoardWidth,BoardHeight,InARow>::Player1)
    {
        gameState[stateIndex] = 1;
        gameState[stateIndex+1] = 0;
    }
    else
    {
        gameState[stateIndex] = 0;
        gameState[stateIndex+1] = 1;
    }
}

Templating it makes it easier to change things later. I don't believe there's anything wrong in the above.
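For reference, a minimal usage sketch (StateType, as used in main below, is assumed to alias std::array<double, 7*6*4+2>):

#include <array>

using StateType = std::array<double, 7*6*4+2>; // assumed definition

void example_encoding()
{
    Board<7,6,4> board; // empty board
    StateType state;
    create_board_state(state, board, Board<7,6,4>::Player1);
    // state[168], state[169] now encode that it is Player 1's turn: 1, 0
}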

My sigmoid activation function:

inline double sigmoid(const double x)
{
    //  return 1.0 / (1.0 + std::exp(-x));
    return x / (1.0 + std::abs(x));
}
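Note that the active branch is the "fast sigmoid" x / (1 + |x|), while the logistic 1 / (1 + e^-x) is commented out. For reference, their derivatives differ; the y * (1 - y) form used in the back-propagation below is the derivative of the logistic one:

#include <cmath>

// Derivative of the logistic sigmoid, written in terms of its output y:
inline double logistic_derivative(const double y)
{
    return y * (1.0 - y); // only valid when y = 1 / (1 + exp(-x))
}

// Derivative of the fast sigmoid x / (1 + |x|), in terms of its input x:
inline double fast_sigmoid_derivative(const double x)
{
    const double d = 1.0 + std::abs(x);
    return 1.0 / (d * d);
}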

My Neuron class:

template<std::size_t NumInputs>
class Neuron
{
public:
    Neuron()
    {
        for(auto& i : m_inputValues)
            i = 9;
        for(auto& e : m_eligibilityTraces)
            e = 9;
        for(auto& w : m_weights)
            w = 9;
        m_biasWeight = 9;
        m_biasEligibilityTrace = 9;
        m_outputValue = 9;
    }

    void SetInputValue(const std::size_t index, const double value)
    {
        m_inputValues[index] = value;
    }

    void SetWeight(const std::size_t index, const double weight)
    {
        if(std::isnan(weight))
            throw std::runtime_error("Shit! this is a nan bread");
        m_weights[index] = weight;
    }

    void SetBiasWeight(const double weight)
    {
        m_biasWeight = weight;
    }

    double GetInputValue(const std::size_t index) const
    {
        return m_inputValues[index];
    }

    double GetWeight(const std::size_t index) const
    {
        return m_weights[index];
    }

    double GetBiasWeight() const
    {
        return m_biasWeight;
    }

    double CalculateOutput()
    {
        m_outputValue = 0;
        for(std::size_t i = 0; i < NumInputs; ++i)
        {
            m_outputValue += m_inputValues[i] * m_weights[i];
        }
        m_outputValue += 1.0 * m_biasWeight;
        m_outputValue = sigmoid(m_outputValue);
        return m_outputValue;
    }

    double GetOutput() const
    {
        return m_outputValue;
    }

    double GetEligibilityTrace(const std::size_t index) const
    {
        return m_eligibilityTraces[index];
    }

    void SetEligibilityTrace(const std::size_t index, const double eligibility)
    {
        m_eligibilityTraces[index] = eligibility;
    }

    void SetBiasEligibility(const double eligibility)
    {
        m_biasEligibilityTrace = eligibility;
    }

    double GetBiasEligibility() const
    {
        return m_biasEligibilityTrace;
    }

    void ResetEligibilityTraces()
    {
        for(auto& e : m_eligibilityTraces)
            e = 0;
        m_biasEligibilityTrace = 0;
    }

private:
    std::array<double,NumInputs> m_inputValues;
    std::array<double,NumInputs> m_weights;
    std::array<double,NumInputs> m_eligibilityTraces;
    double m_biasWeight;
    double m_biasEligibilityTrace;
    double m_outputValue;
};

My NeuralNetwork class:

template<std::size_t NumInputs, std::size_t NumHidden, std::size_t NumOutputs>
class NeuralNetwork
{
public:

void RandomiseWeights()
{
    double inputToHiddenRange = 4.0 * std::sqrt(6.0 / (NumInputs+1+NumOutputs));
    RandomGenerator inputToHidden(-inputToHiddenRange,inputToHiddenRange);

    double hiddenToOutputRange = 4.0 * std::sqrt(6.0 / (NumHidden+1+1));
    RandomGenerator hiddenToOutput(-hiddenToOutputRange,hiddenToOutputRange);

    for(auto& hiddenNeuron : m_hiddenNeurons)
    {
        for(std::size_t i = 0; i < NumInputs; ++i)
            hiddenNeuron.SetWeight(i, inputToHidden());
        hiddenNeuron.SetBiasWeight(inputToHidden());
    }

    for(auto& outputNeuron : m_outputNeurons)
    {
        for(std::size_t h = 0; h < NumHidden; ++h)
            outputNeuron.SetWeight(h, hiddenToOutput());
        outputNeuron.SetBiasWeight(hiddenToOutput());
    }
}

double GetOutput(const std::size_t index) const
{
    return m_outputNeurons[index].GetOutput();
}

std::array<double,NumOutputs> GetOutputs()
{
    std::array<double, NumOutputs> returnValue;
    for(std::size_t o = 0; o < NumOutputs; ++o)
        returnValue[o] = m_outputNeurons[o].GetOutput();
    return returnValue;
}

void SetInputValue(const std::size_t index, const double value)
{
    for(auto& hiddenNeuron : m_hiddenNeurons)
        hiddenNeuron.SetInputValue(index, value);
}

std::array<double,NumOutputs> Calculate()
{
    for(auto& h : m_hiddenNeurons)
        h.CalculateOutput();
    for(auto& o : m_outputNeurons)
        o.CalculateOutput();

    return GetOutputs();
}

std::array<double,NumOutputs> FeedForward(const std::array<double,NumInputs>& inputValues)
{
    for(std::size_t h = 0; h < NumHidden; ++h)
    {
        for(std::size_t i = 0; i < NumInputs; ++i)
            m_hiddenNeurons[h].SetInputValue(i,inputValues[i]);

        m_hiddenNeurons[h].CalculateOutput();
    }

    std::array<double, NumOutputs> returnValue;

    for(std::size_t h = 0; h < NumHidden; ++h)
    {
        auto hiddenOutput = m_hiddenNeurons[h].GetOutput();
        for(std::size_t o = 0; o < NumOutputs; ++o)
            m_outputNeurons[o].SetInputValue(h, hiddenOutput);
    }

    for(std::size_t o = 0; o < NumOutputs; ++o)
    {
        returnValue[o] = m_outputNeurons[o].CalculateOutput();
    }

    return returnValue;
}

double BackPropagateFinal(const std::array<double,NumOutputs>& actualValues, const NeuralNetwork<NumInputs,NumHidden,NumOutputs>* NN, const double alpha, const double beta, const double lambda)
{
    for(std::size_t iO = 0; iO < NumOutputs; ++iO)
    {
        auto y = NN->m_outputNeurons[iO].GetOutput();
        auto y1 = actualValues[iO];

        for(std::size_t iH = 0; iH < NumHidden; ++iH)
        {
            auto e = NN->m_outputNeurons[iO].GetEligibilityTrace(iH);
            auto h = NN->m_hiddenNeurons[iH].GetOutput();
            auto w = NN->m_outputNeurons[iO].GetWeight(iH);

            double e1 = lambda * e + (y * (1.0 - y) * h);

            double w1 = w + beta * (y1 - y) * e1;

            m_outputNeurons[iO].SetEligibilityTrace(iH,e1);
            m_outputNeurons[iO].SetWeight(iH,w1);
        }

        auto e = NN->m_outputNeurons[iO].GetBiasEligibility();
        auto h = 1.0;
        auto w = NN->m_outputNeurons[iO].GetBiasWeight();

        double e1 = lambda * e + (y * (1.0 - y) * h);

        double w1 = w + beta * (y1 - y) * e1;

        m_outputNeurons[iO].SetBiasEligibility(e1);
        m_outputNeurons[iO].SetBiasWeight(w1);
    }

    for(std::size_t iH = 0; iH < NumHidden; ++iH)
    {
        auto h = NN->m_hiddenNeurons[iH].GetOutput();

        for(std::size_t iI = 0; iI < NumInputs; ++iI)
        {
            auto e = NN->m_hiddenNeurons[iH].GetEligibilityTrace(iI);
            auto x = NN->m_hiddenNeurons[iH].GetInputValue(iI);
            auto u = NN->m_hiddenNeurons[iH].GetWeight(iI);

            double sumError = 0;

            for(std::size_t iO = 0; iO < NumOutputs; ++iO)
            {
                auto w = NN->m_outputNeurons[iO].GetWeight(iH);
                auto y = NN->m_outputNeurons[iO].GetOutput();
                auto y1 = actualValues[iO];

                auto grad = y1 - y;

                double e1 = lambda * e + (y * (1.0 - y) * w * h * (1.0 - h) * x);

                sumError += grad * e1;
            }

            double u1 = u + alpha * sumError;

            m_hiddenNeurons[iH].SetEligibilityTrace(iI,sumError);
            m_hiddenNeurons[iH].SetWeight(iI,u1);
        }

        auto e = NN->m_hiddenNeurons[iH].GetBiasEligibility();
        auto x = 1.0;
        auto u = NN->m_hiddenNeurons[iH].GetBiasWeight();

        double sumError = 0;

        for(std::size_t iO = 0; iO < NumOutputs; ++iO)
        {
            auto w = NN->m_outputNeurons[iO].GetWeight(iH);
            auto y = NN->m_outputNeurons[iO].GetOutput();
            auto y1 = actualValues[iO];

            auto grad = y1 - y;

            double e1 = lambda * e + (y * (1.0 - y) * w * h * (1.0 - h) * x);

            sumError += grad * e1;
        }

        double u1 = u + alpha * sumError;

        m_hiddenNeurons[iH].SetBiasEligibility(sumError);
        m_hiddenNeurons[iH].SetBiasWeight(u1);
    }

    double retVal = 0;
    for(std::size_t o = 0; o < NumOutputs; ++o)
    {
        retVal += 0.5 * alpha * std::pow((NN->GetOutput(o) - GetOutput(0)),2);
    }
    return retVal / NumOutputs;
}

double BackPropagate(const NeuralNetwork<NumInputs,NumHidden,NumOutputs>* NN, const double alpha, const double beta, const double lambda)
{
    for(std::size_t iO = 0; iO < NumOutputs; ++iO)
    {
        auto y = NN->m_outputNeurons[iO].GetOutput();
        auto y1 = m_outputNeurons[iO].GetOutput();

        for(std::size_t iH = 0; iH < NumHidden; ++iH)
        {
            auto e = NN->m_outputNeurons[iO].GetEligibilityTrace(iH);
            auto h = NN->m_hiddenNeurons[iH].GetOutput();
            auto w = NN->m_outputNeurons[iO].GetWeight(iH);

            double e1 = lambda * e + (y * (1.0 - y) * h);

            double w1 = w + beta * (y1 - y) * e1;

            m_outputNeurons[iO].SetEligibilityTrace(iH,e1);

            m_outputNeurons[iO].SetWeight(iH,w1);
        }

        auto e = NN->m_outputNeurons[iO].GetBiasEligibility();
        auto h = 1.0;
        auto w = NN->m_outputNeurons[iO].GetBiasWeight();

        double e1 = lambda * e + (y * (1.0 - y) * h);

        double w1 = w + beta * (y1 - y) * e1;

        m_outputNeurons[iO].SetBiasEligibility(e1);
        m_outputNeurons[iO].SetBiasWeight(w1);
    }

    for(std::size_t iH = 0; iH < NumHidden; ++iH)
    {
        auto h = NN->m_hiddenNeurons[iH].GetOutput();

        for(std::size_t iI = 0; iI < NumInputs; ++iI)
        {
            auto e = NN->m_hiddenNeurons[iH].GetEligibilityTrace(iI);
            auto x = NN->m_hiddenNeurons[iH].GetInputValue(iI);
            auto u = NN->m_hiddenNeurons[iH].GetWeight(iI);

            double sumError = 0;

            for(std::size_t iO = 0; iO < NumOutputs; ++iO)
            {
                auto w = NN->m_outputNeurons[iO].GetWeight(iH);
                auto y = NN->m_outputNeurons[iO].GetOutput();
                auto y1 = m_outputNeurons[iO].GetOutput();

                auto grad = y1 - y;

                double e1 = lambda * e + (y * (1.0 - y) * w * h * (1.0 - h) * x);

                sumError += grad * e1;
            }

            double u1 = u + alpha * sumError;

            m_hiddenNeurons[iH].SetEligibilityTrace(iI,sumError);

            m_hiddenNeurons[iH].SetWeight(iI,u1);
        }

        auto e = NN->m_hiddenNeurons[iH].GetBiasEligibility();
        auto x = 1.0;
        auto u = NN->m_hiddenNeurons[iH].GetBiasWeight();

        double sumError = 0;

        for(std::size_t iO = 0; iO < NumOutputs; ++iO)
        {
            auto w = NN->m_outputNeurons[iO].GetWeight(iH);
            auto y = NN->m_outputNeurons[iO].GetOutput();
            auto y1 = m_outputNeurons[iO].GetOutput();

            auto grad = y1 - y;

            double e1 = lambda * e + (y * (1.0 - y) * w * h * (1.0 - h) * x);

            sumError += grad * e1;
        }

        double u1 = u + alpha * sumError;

        m_hiddenNeurons[iH].SetBiasEligibility(sumError);
        m_hiddenNeurons[iH].SetBiasWeight(u1);
    }

    double retVal = 0;
    for(std::size_t o = 0; o < NumOutputs; ++o)
    {
        retVal += 0.5 * alpha * std::pow((NN->GetOutput(o) - GetOutput(0)),2);
    }
    return retVal / NumOutputs;
}

std::array<double,NumInputs*NumHidden+NumHidden+NumHidden*NumOutputs+NumOutputs> GetNetworkWeights() const
{
    std::array<double,NumInputs*NumHidden+NumHidden+NumHidden*NumOutputs+NumOutputs> returnVal;

    std::size_t weightPos = 0;

    for(std::size_t h = 0; h < NumHidden; ++h)
    {
        for(std::size_t i = 0; i < NumInputs; ++i)
            returnVal[weightPos++] = m_hiddenNeurons[h].GetWeight(i);
        returnVal[weightPos++] = m_hiddenNeurons[h].GetBiasWeight();
    }
    for(std::size_t o = 0; o < NumOutputs; ++o)
    {
        for(std::size_t h = 0; h < NumHidden; ++h)
            returnVal[weightPos++] = m_outputNeurons[o].GetWeight(h);
        returnVal[weightPos++] = m_outputNeurons[o].GetBiasWeight();
    }

    return returnVal;
}

static constexpr std::size_t NumWeights = NumInputs*NumHidden+NumHidden+NumHidden*NumOutputs+NumOutputs;


void SetNetworkWeights(const std::array<double,NumInputs*NumHidden+NumHidden+NumHidden*NumOutputs+NumOutputs>& weights)
{
    std::size_t weightPos = 0;
    for(std::size_t h = 0; h < NumHidden; ++h)
    {
        for(std::size_t i = 0; i < NumInputs; ++i)
            m_hiddenNeurons[h].SetWeight(i, weights[weightPos++]);
        m_hiddenNeurons[h].SetBiasWeight(weights[weightPos++]);
    }
    for(std::size_t o = 0; o < NumOutputs; ++o)
    {
        for(std::size_t h = 0; h < NumHidden; ++h)
            m_outputNeurons[o].SetWeight(h, weights[weightPos++]);
        m_outputNeurons[o].SetBiasWeight(weights[weightPos++]);
    }
}

void ResetEligibilityTraces()
{
    for(auto& h : m_hiddenNeurons)
        h.ResetEligibilityTraces();
    for(auto& o : m_outputNeurons)
        o.ResetEligibilityTraces();
}

private:

std::array<Neuron<NumInputs>,NumHidden> m_hiddenNeurons;
std::array<Neuron<NumHidden>,NumOutputs> m_outputNeurons;
};

One place I think I might have a problem is the BackPropagate and BackPropagateFinal methods in the NeuralNetwork class; how I read their variables against the TD(λ) sketch above is annotated below.
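My reading of how the names in those methods line up with the earlier TD(λ) sketch (orientation only, no new logic):

// y        -> V(s)  : the previous network's output (read from *NN)
// y1       -> V(s') : the current output, or the terminal target
//                     {1, -1, 0} in BackPropagateFinal
// (y1 - y) -> delta : the TD error
// e1 = lambda * e + gradient    -- eligibility-trace update
// w1 = w + beta * (y1 - y) * e1 -- weight step along the trace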

Here's the main function in which I'm training the network:

int main()
{
    std::ofstream matchFile("match.txt");

    RandomGenerator randomPlayerStart(0,1);
    RandomGenerator randomMove(0,100);

    Board<7,6,4> board;

    auto NN = new NeuralNetwork<7*6*4+2,417,1>();
    auto previousNN = new NeuralNetwork<7*6*4+2,417,1>();
    NN->RandomiseWeights();

    const int numGames = 3000000;
    double alpha = 0.1;
    double beta = 0.1;
    double lambda = 0.5;
    double learningRateFloor = 0.01;
    double decayRateAlpha = (alpha - learningRateFloor) / numGames;
    double decayRateBeta = (beta - learningRateFloor) / numGames;
    double randomChance = 90; // out of 100
    double randomChangeFloor = 10;
    double percentToReduceRandomOver = 0.5;
    double randomChangeDecay = (randomChance-randomChangeFloor) / (numGames*percentToReduceRandomOver);
    double percentOfGamesToRandomiseStart = 0.5;

    int numGamesWonP1 = 0;
    int numGamesWonP2 = 0;

    int gamesToOutput = 100;

    matchFile << "Num Games: " << numGames << "\t\ta,b,l: " << alpha << ", " << beta << ", " << lambda << std::endl;

    Board<7,6,4>::Player playerStart = randomPlayerStart() > 0.5 ? Board<7,6,4>::Player1 : Board<7,6,4>::Player2;

    double totalLoss = 0.0;

    for(int gameNumber = 0; gameNumber < numGames; ++gameNumber)
    {
        bool winState = false;
        Board<7,6,4>::Player playerWhoTurnItIs = playerStart;
        playerStart = playerStart == Board<7,6,4>::Player1 ? Board<7,6,4>::Player2 : Board<7,6,4>::Player1;
        board.ClearBoard();

        int turnNumber = 0;

        while(!winState)
        {
            Board<7,6,4>::Player playerWhoTurnItIsNot = playerWhoTurnItIs == Board<7,6,4>::Player1 ? Board<7,6,4>::Player2 : Board<7,6,4>::Player1;

            bool wasRandomMove = false;

            std::size_t selectedMove;
            bool moveFound = false;

            if(board.IsThereAvailableMove())
            {
                std::vector<std::size_t> availableMoves;
                if((gameNumber <= numGames * percentOfGamesToRandomiseStart && turnNumber == 0) || randomMove() > 100.0-randomChance)
                    wasRandomMove = true;

                std::size_t bestMove = 8;
                double bestWorstResponse = playerWhoTurnItIs == Board<7,6,4>::Player1 ? std::numeric_limits<double>::min() : std::numeric_limits<double>::max();

                for(std::size_t m = 0; m < 7; ++m)
                {
                    Board<7,6,4> testBoard = board;    // make a copy of the current board to run our tests
                    if(testBoard.AvailableMoveInColumn(m))
                    {
                        if(wasRandomMove)
                        {
                            availableMoves.push_back(m);
                        }
                        testBoard.AddChecker(m, playerWhoTurnItIs);

                        double worstResponse = playerWhoTurnItIs == Board<7,6,4>::Player1 ? std::numeric_limits<double>::max() : std::numeric_limits<double>::min();
                        std::size_t worstMove = 8;

                        for(std::size_t m2 = 0; m2 < 7; ++m2)
                        {
                            Board<7,6,4> testBoard2 = testBoard;
                            if(testBoard2.AvailableMoveInColumn(m2))
                            {
                                testBoard2.AddChecker(m,playerWhoTurnItIsNot);

                                StateType state;
                                create_board_state(state, testBoard2, playerWhoTurnItIs);
                                auto outputs = NN->FeedForward(state);

                                if(playerWhoTurnItIs == Board<7,6,4>::Player1 && (outputs[0] < worstResponse || worstMove == 8))
                                {
                                    worstResponse = outputs[0];
                                    worstMove = m2;
                                }
                                else if(playerWhoTurnItIs == Board<7,6,4>::Player2 && (outputs[0] > worstResponse || worstMove == 8))
                                {
                                    worstResponse = outputs[0];
                                    worstMove = m2;
                                }
                            }
                        }

                        if(playerWhoTurnItIs == Board<7,6,4>::Player1 && (worstResponse > bestWorstResponse || bestMove == 8))
                        {
                            bestWorstResponse = worstResponse;
                            bestMove = m;
                        }
                        else if(playerWhoTurnItIs == Board<7,6,4>::Player2 && (worstResponse < bestWorstResponse || bestMove == 8))
                        {
                            bestWorstResponse = worstResponse;
                            bestMove = m;
                        }
                    }
                }
                if(bestMove == 8)
                {
                    std::cerr << "wasn't able to determine the best move to make" << std::endl;
                    return 0;
                }
                if(gameNumber <= numGames * percentOfGamesToRandomiseStart && turnNumber == 0)
                {
                    std::size_t rSelection = int(randomMove()) % (availableMoves.size());

                    selectedMove = availableMoves[rSelection];
                    moveFound = true;
                }
                else if(wasRandomMove)
                {
                    std::remove(availableMoves.begin(),availableMoves.end(),bestMove);
                    std::size_t rSelection = int(randomMove()) % (availableMoves.size());

                    selectedMove = availableMoves[rSelection];
                    moveFound = true;
                }
                else
                {
                    selectedMove = bestMove;
                    moveFound = true;
                }
            }

            StateType prevState;
            create_board_state(prevState,board,playerWhoTurnItIs);
            NN->FeedForward(prevState);
            *previousNN = *NN;

            // now that we have the move, add it to the board
            StateType state;
            board.AddChecker(selectedMove,playerWhoTurnItIs);
            create_board_state(state,board,playerWhoTurnItIsNot);

            auto outputs = NN->FeedForward(state);

            if(board.InARowConnected(4) == Board<7,6,4>::Player1)
            {
                totalLoss += NN->BackPropagateFinal({1},previousNN,alpha,beta,lambda);
                winState = true;
                ++numGamesWonP1;
            }
            else if(board.InARowConnected(4) == Board<7,6,4>::Player2)
            {
                totalLoss += NN->BackPropagateFinal({-1},previousNN,alpha,beta,lambda);
                winState = true;
                ++numGamesWonP2;
            }
            else if(!board.IsThereAvailableMove())
            {
                totalLoss += NN->BackPropagateFinal({0},previousNN,alpha,beta,lambda);
                winState = true;
            }
            else if(turnNumber > 0 && !wasRandomMove)
            {
                NN->BackPropagate(previousNN,alpha,beta,lambda);
            }

            if(!wasRandomMove)
            {
                outputs = NN->FeedForward(state);
            }

            ++turnNumber;
            playerWhoTurnItIs = playerWhoTurnItIsNot;
        }

        alpha -= decayRateAlpha;
        beta -= decayRateBeta;

        NN->ResetEligibilityTraces();

        if(gameNumber > 0 && randomChance > randomChangeFloor && gameNumber <= numGames * percentToReduceRandomOver)
        {
            randomChance -= randomChangeDecay;
            if(randomChance < randomChangeFloor)
                randomChance = randomChangeFloor;
        }

        if(gameNumber % gamesToOutput == 0 && gameNumber != 0)
        {
            totalLoss = totalLoss / gamesToOutput;
            matchFile << std::fixed << std::setprecision(51) << totalLoss << std::endl;
            totalLoss = 0.0;
        }
    }

    matchFile << std::endl << "Games won: " << numGamesWonP1 << " . " << numGamesWonP2 << std::endl;

    auto weights = NN->GetNetworkWeights();
    matchFile << std::endl;
    matchFile << std::endl;
    for(const auto& w : weights)
        matchFile << std::fixed << std::setprecision(51) << w << ", \n";
    matchFile << std::endl;

    return 0;
}

Another place I think I might have a problem is the minimax selecting the best move to make; a sketch of what I'm attempting follows.
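For reference, the intended 2-ply "best worst-case" selection, written against the same Board API (evaluate is a hypothetical stand-in for a feed-forward evaluation of a position; my real code inlines all of this in main):

#include <algorithm>
#include <cstddef>
#include <limits>

template<typename BoardT, typename EvalFn>
std::size_t best_column_2ply(const BoardT& board,
                             const typename BoardT::Player me,
                             const typename BoardT::Player opponent,
                             EvalFn evaluate,
                             const bool maximise) // true when evaluating for Player 1
{
    std::size_t bestMove = 8; // 8 = "no move found" sentinel, as in main()
    double bestWorst = maximise ? std::numeric_limits<double>::lowest()
                                : std::numeric_limits<double>::max();

    for(std::size_t m = 0; m < 7; ++m)
    {
        BoardT afterMine = board;
        if(!afterMine.AvailableMoveInColumn(m))
            continue;
        afterMine.AddChecker(m, me);

        // The opponent replies with whatever is worst for us.
        double worst = maximise ? std::numeric_limits<double>::max()
                                : std::numeric_limits<double>::lowest();
        for(std::size_t m2 = 0; m2 < 7; ++m2)
        {
            BoardT afterReply = afterMine;
            if(!afterReply.AvailableMoveInColumn(m2))
                continue;
            afterReply.AddChecker(m2, opponent); // the reply goes in column m2
            const double v = evaluate(afterReply);
            worst = maximise ? std::min(worst, v) : std::max(worst, v);
        }

        if(bestMove == 8 || (maximise ? worst > bestWorst : worst < bestWorst))
        {
            bestWorst = worst;
            bestMove = m;
        }
    }
    return bestMove;
}

Note the use of std::numeric_limits<double>::lowest() for the most-negative starting value; min() returns the smallest positive double, not the most negative one.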

There's more code than this, but I don't think the rest is overly relevant to the problem I'm having.

Questions

  1. It doesn't seem to matter whether I train for 1,000 games or 3,000,000: one of Player 1 or Player 2 will win the vast majority of games; around 90 out of every 100 games are won by the same player. If I output the actual individual game moves and outputs, I can see that the games won by the other player are almost always the result of a lucky random move.

     At the same time, I've noticed that the predicted outputs somewhat "favour" one player. That is, the outputs seem to sit on the negative side of 0, so Player 1 is always making the best moves it can, for example, but the predictions all seem to say Player 2 will win.

     Sometimes it's Player 1 who wins the majority, other times Player 2. I assume this is due to the random weight initialisation slightly favouring one player.

     The first few games don't favour one player over the other, but it quickly starts to "lean" one way.

  2. I've tried training over 3 million games, which took 3 days, but the network still seems unable to make the right decisions. I've tested it by having it play other "bots" in the Connect 4 competition on riddles.io:

    • It doesn't realise that it needs to block the opponent's four in a row
    • Even after 3,000,000 games, it doesn't play the centre column as its first move, which we know is the only opening move guaranteed to win.

  3. Any help and pointers would be greatly appreciated. Specifically, is my implementation of TD-Lambda back-propagation correct?
