Neural network classifier for poker hands

Date: 2017-04-04 18:22:29

Tags: python numpy machine-learning neural-network deep-learning

I am currently trying to create a neural network to predict poker hands. I'm quite new to machine learning and neural networks and could use some help! I found a few tutorials on how to build a neural network, and the code below is my attempt to adapt one of them to this data set. Running it crashes PyCharm. Here is the code:

import numpy as np
import pandas as pnd
# sigmoid function


def nonlin(x, deriv=False):
    if deriv:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))
# input data
training_data = pnd.read_csv("train.csv")
print(training_data)
training_data = training_data.drop(['hand'], axis=1)
print(training_data)
X = np.array(training_data)

# output data
training_data = pnd.read_csv("train.csv")
print(training_data)
training_data = training_data.drop(['S1', 'C1', 'S2', 'C2', 'S3',
                                    'C3', 'S4', 'C4', 'S5', 'C5'], axis=1)
print(training_data)
Y = np.array(training_data).T
print(Y)
# input dataset
# seed random numbers to make calculation
# deterministic (just a good practice)
np.random.seed(1)
# initialize weights randomly with mean 0
syn0 = 2 * np.random.random((10, 25011)) - 1
syn1 = 2*np.random.random((10, 1)) - 1

for j in range(10000):
    # Feed forward through layers 0, 1, and 2
    l0 = X
    l1 = nonlin(np.dot(l0, syn0))
    l2 = nonlin(np.dot(l1, syn1))
    # how much did we miss the target value?
    l2_error = Y - l2
    if (j % 10000) == 0:
        print("Error:" + str(np.mean(np.abs(l2_error))))
    # in what direction is the target value
    # were we really sure? if so, don't change too much.
    l2_delta = l2_error * nonlin(l2, deriv=True)
    # how much did each l1 value contribute to the l2 error (according to the weights)?
    l1_error = l2_delta.dot(syn1.T)
    # in what direction is the target l1?
    # were we really sure? if so, don't change too much.
    l1_delta = l1_error * nonlin(l1, deriv=True)
    syn1 += l1.T.dot(l2_delta)
    syn0 += l0.T.dot(l1_delta)
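For the products in the loop to line up, the weight shapes have to chain: with X of shape (n_samples, 10), syn0 must be (10, hidden) and syn1 (hidden, 1), whereas the code above uses (10, 25011) and (10, 1). A minimal sketch with random stand-in data (the hidden size of 16 and the 100-sample count are arbitrary choices for illustration, not from the original post):

```python
import numpy as np

def nonlin(x, deriv=False):
    if deriv:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

np.random.seed(1)
n_samples, n_features, hidden = 100, 10, 16   # hidden size is an arbitrary pick

# random stand-in data in the same ranges as the poker columns
X = np.random.randint(1, 14, size=(n_samples, n_features)).astype(float)
Y = np.random.randint(0, 10, size=(n_samples, 1)).astype(float)

syn0 = 2 * np.random.random((n_features, hidden)) - 1  # l0 -> l1
syn1 = 2 * np.random.random((hidden, 1)) - 1           # l1 -> l2

l0 = X
l1 = nonlin(np.dot(l0, syn0))        # (100, 16)
l2 = nonlin(np.dot(l1, syn1))        # (100, 1)
l2_error = Y - l2                    # same shape as l2, no broadcasting surprise
l2_delta = l2_error * nonlin(l2, deriv=True)
l1_delta = l2_delta.dot(syn1.T) * nonlin(l1, deriv=True)
syn1 += l1.T.dot(l2_delta)           # (16, 1), matches syn1
syn0 += l0.T.dot(l1_delta)           # (10, 16), matches syn0
print(l1.shape, l2.shape)            # (100, 16) (100, 1)
```

Note that with Y defined as a (1, n) row vector, as in the original code, `Y - l2` would broadcast to an (n, n) matrix instead of raising an error, which silently wrecks the updates.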

Here is a snippet of my data set: [data set snippet]

Here is the description of the data set I am using. Attribute information:

1) S1 "Suit of card #1" 
Ordinal (1-4) representing {Hearts, Spades, Diamonds, Clubs} 

2) C1 "Rank of card #1" 
Numerical (1-13) representing (Ace, 2, 3, ... , Queen, King) 

3) S2 "Suit of card #2" 
Ordinal (1-4) representing {Hearts, Spades, Diamonds, Clubs} 

4) C2 "Rank of card #2" 
Numerical (1-13) representing (Ace, 2, 3, ... , Queen, King) 

5) S3 "Suit of card #3" 
Ordinal (1-4) representing {Hearts, Spades, Diamonds, Clubs} 

6) C3 "Rank of card #3" 
Numerical (1-13) representing (Ace, 2, 3, ... , Queen, King) 

7) S4 "Suit of card #4" 
Ordinal (1-4) representing {Hearts, Spades, Diamonds, Clubs} 

8) C4 "Rank of card #4" 
Numerical (1-13) representing (Ace, 2, 3, ... , Queen, King) 

9) S5 "Suit of card #5" 
Ordinal (1-4) representing {Hearts, Spades, Diamonds, Clubs} 

10) C5 "Rank of card #5" 
Numerical (1-13) representing (Ace, 2, 3, ... , Queen, King) 

11) CLASS "Poker Hand" 
Ordinal (0-9) 

0: Nothing in hand; not a recognized poker hand 
1: One pair; one pair of equal ranks within five cards 
2: Two pairs; two pairs of equal ranks within five cards 
3: Three of a kind; three equal ranks within five cards 
4: Straight; five cards, sequentially ranked with no gaps 
5: Flush; five cards with the same suit 
6: Full house; pair + different rank three of a kind 
7: Four of a kind; four equal ranks within five cards 
8: Straight flush; straight + flush 
9: Royal flush; {Ace, King, Queen, Jack, Ten} + flush 
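Since CLASS is one of ten categories rather than a continuous value, a single sigmoid output unit is a poor target; classifiers typically one-hot encode the label so the network has ten outputs, one per hand class. A minimal sketch of that encoding (not part of the original code):

```python
import numpy as np

classes = np.array([0, 1, 9, 4])   # example CLASS labels
one_hot = np.eye(10)[classes]      # row i has a single 1 at column classes[i]
print(one_hot.shape)               # (4, 10)
print(one_hot[2])                  # [0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
```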

Variables in use:

Variable    Definition
X   Input dataset matrix where each row is a training example
Y   Output dataset matrix where each row is a training example
l0  First Layer of the Network, specified by the input data
l1  Second Layer of the Network, otherwise known as the hidden layer
l2  Final Layer of the Network, which is our hypothesis, and should approximate the correct answer as we train.
syn0    First layer of weights, Synapse 0, connecting l0 to l1.
syn1    Second layer of weights, Synapse 1 connecting l1 to l2.
l2_error    This is the amount that the neural network "missed".
l2_delta    This is the error of the network scaled by the confidence. It's almost identical to the error except that very confident errors are muted.
l1_error    Weighting l2_delta by the weights in syn1, we can calculate the error in the middle/hidden layer.
l1_delta    This is the l1 error of the network scaled by the confidence. Again, it's almost identical to the l1_error except that confident errors are muted.
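The delta definitions above can be traced on a tiny hand-made example (all numbers below are made up for illustration): the l2 delta is the raw error scaled by the sigmoid slope, and the l1 error is that delta pushed back through syn1.

```python
import numpy as np

def nonlin(x, deriv=False):
    if deriv:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

l1 = np.array([[0.2, 0.9]])       # hidden activations, one sample
syn1 = np.array([[0.5], [-0.3]])  # hidden -> output weights
y = np.array([[1.0]])             # target

l2 = nonlin(l1.dot(syn1))                      # prediction
l2_error = y - l2
l2_delta = l2_error * nonlin(l2, deriv=True)   # confident outputs get muted
l1_error = l2_delta.dot(syn1.T)                # credit assignment to hidden units
l1_delta = l1_error * nonlin(l1, deriv=True)
print(l2_delta.shape, l1_delta.shape)          # (1, 1) (1, 2)
```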

1 answer:

Answer 0 (score: 0)

First of all, you should state clearly whether it is your machine or just the process that crashes.

If it's your machine, check your RAM specs and how much of it is in use when you start the NN (top, htop, etc.).

Your syn0 matrix is 10x25011. That alone is 10 * 25011 * 8 / 1024 ≈ 1954 kB, which is harmless; the real hog is the intermediate np.dot(l0, syn0), which for a training set of roughly 25,000 rows (as the hard-coded 25011 suggests) is a (25010, 25011) float64 matrix of close to 5 GB. Stuffing several gigabytes into a single variable in Python while running a bunch of Chrome tabs makes a complete shutdown entirely plausible.
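Rather than estimating sizes by hand, NumPy reports an array's footprint directly via .nbytes; a quick sketch (the 25,010-row count is an assumption about the training file, inferred from the hard-coded 25011):

```python
import numpy as np

syn0 = 2 * np.random.random((10, 25011)) - 1
print(syn0.nbytes / 1024)   # 1953.984375 kB: the weights themselves are small

# the large intermediate is np.dot(l0, syn0): assuming ~25,010 training rows,
# it would be a (25010, 25011) float64 matrix
est_gb = 25010 * 25011 * 8 / 1024**3
print(round(est_gb, 2))     # 4.66 (estimated, not allocated)
```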