Machine learning even/odd prediction doesn't work (50% success rate)

Date: 2018-12-07 14:26:08

Tags: python tensorflow machine-learning keras

I'm new to machine learning. I'm trying to create a model that predicts whether a number is even.

I used the code from https://machinelearningmastery.com/tutorial-first-neural-network-python-keras/ and adapted it to my needs.

The problem is that the success rate is about 50%, which is no better than random guessing.

Do you know what I should do?

from keras.models import Sequential
from keras.layers import Dense
import numpy
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)

X = list(range(1000))
Y = [1,0]*500
# create model
model = Sequential()
model.add(Dense(12, input_dim=1, init='uniform', activation='relu'))
model.add(Dense(8, init='uniform', activation='relu'))
model.add(Dense(1, init='uniform', activation='sigmoid'))
# Compile model
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X, Y, epochs=150, batch_size=10,  verbose=2)
# calculate predictions
predictions = model.predict(X)
# round predictions
rounded = [round(x[0]) for x in predictions]
print(rounded)


>>> [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, ...]  # all 1000 predictions round to 1.0

6 Answers:

Answer 0 (score: 1)

I'm not surprised it doesn't work; that's simply not how neural networks operate.

You need to develop a better feel for what you are passing to a neural network as input.

When you pass a number, it has to carry some meaning. Meaning here is: if one number is bigger than another, it should imply something, like age -> money; there should be a dependency.

But when you're looking for odd numbers, that meaning is far more abstract. Honestly, you should treat the inputs as independent string-like values.

Maybe try this as input:

X = [[math.floor(item/2), item/2 - math.floor(item/2)] for item in range(1000)]

and check whether the network learns "if the second value is greater than zero, the number is odd".

Keep reading to get a better feel for this :)

EDIT:

@MIlano The full code is below:

from keras.models import Sequential
from keras.layers import Dense
import numpy
import math
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)

X = numpy.array([[math.floor(item/2), item/2 - math.floor(item/2)] for item in range(1000)])

Y = [1, 0]*500
# create model
model = Sequential()
model.add(Dense(12, input_shape=(2,), init='uniform', activation='relu'))
model.add(Dense(8, init='uniform', activation='relu'))
model.add(Dense(1, init='uniform', activation='sigmoid'))
# Compile model
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X, Y, epochs=150, batch_size=10,  verbose=2)
# calculate predictions
predictions = model.predict(X)
# round predictions
rounded = [round(x[0]) for x in predictions]
print(rounded)

Answer 1 (score: 1)

Neural networks are not good at determining whether a number is even, at least not when the input representation is just a single integer. Neural networks are good at finding and combining linear decision boundaries, and over all natural numbers you would need infinitely many decision boundaries to check whether a number is even. If you only ask your NN to work on a subset of all numbers, though, you can make it work; but then you would basically need one neuron in the input layer for every number you want to test. So for 0 <= n < 1000 your input layer would need a thousand neurons, which is really not a good showcase for neural networks.
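A minimal sketch of that brute-force idea (my own illustration, not from the answer: one-hot encoding each number 0 <= n < 1000, so the network gets one input neuron per number and can effectively memorize parity):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

n = 1000
X = np.eye(n)                    # one-hot: one input neuron per number
Y = np.array([1, 0] * (n // 2))  # 1 = even, 0 = odd, as in the question

model = Sequential()
model.add(Dense(1, input_dim=n, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, Y, epochs=100, batch_size=32, verbose=0)

Since every number gets its own weight, this separates the training set perfectly, but it obviously cannot generalize to numbers it has never seen.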

If you instead change the input representation to the binary representation of the number, the NN will have a much easier time detecting whether it is even. For example:

X = [
  [0, 0, 0], # 0
  [0, 0, 1], # 1
  [0, 1, 0], # 2
  [0, 1, 1], # 3
  [1, 0, 0], # 4
  [1, 0, 1], # 5
  [1, 1, 0], # 6
  [1, 1, 1]  # 7
]

Y = [1, 0, 1, 0, 1, 0, 1, 0]

As you can see, the problem is now rather simple: it basically reduces to inverting the last binary digit. This is an example of preprocessing the input to create a problem that is easier for a neural network to solve.
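To scale this idea beyond 0-7, a minimal sketch (assuming 10-bit fixed-width binary features for 0 <= n < 1000 and a small model in the same style as the rest of this thread):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Encode each number 0..999 as a 10-bit binary vector
X = np.array([[int(b) for b in format(n, '010b')] for n in range(1000)])
Y = np.array([1 - (n % 2) for n in range(1000)])  # 1 = even, 0 = odd

model = Sequential()
model.add(Dense(8, input_dim=10, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, Y, epochs=20, batch_size=32, verbose=0)

Here the network only has to learn to look at the last bit, so it should converge almost immediately.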

Answer 2 (score: 1)

I think it's a good idea to read up on the perceptron XOR problem to get a sense of how a single perceptron works and what its limitations are.

Predicting whether a number is even from a one-dimensional input is a classification problem, and in classification problems a neural network is trained to separate the classes with a boundary. One way to think about this problem is to map the one-dimensional input into two dimensions by duplicating the number into an added dimension (e.g. mapping 7 to [7, 7]) and to look at what the even and odd vectors look like in a scatter plot.

If you run the following code in a Jupyter notebook

%matplotlib inline
import matplotlib.pyplot as plt

X = list(range(-20, 20))
evens = [x for x in X if x % 2 == 0]
odds = [x for x in X if x % 2 != 0]
data = (evens, odds)
colors = ("green", "blue")
groups = ("Even", "Odd") 

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
for data, color, group in zip(data, colors, groups):
    x = data
    ax.scatter(x, x, alpha=0.8, c=color, edgecolors='none', s=30, label=group)
plt.title('Even/Odd numbers')
plt.legend(loc=2)
plt.show()

data = (evens, odds)
fig2 = plt.figure()
ax = fig2.add_subplot(1, 1, 1)
for data, color, group in zip(data, colors, groups):
    x = data
    y = [abs(i) if i%2==0 else -abs(i) for i in data]
    ax.scatter(x, y, alpha=0.8, c=color, edgecolors='none', s=30, label=group)
plt.title('Even/Odd numbers (Separable)')
plt.legend(loc=2)
plt.show()

you will see something like the figures below:

[Figure: scatter plots of even and odd numbers, before and after the mapping]

You can see in the first plot that it is practically impossible to find a boundary between the even and odd vectors, but if you map the second dimension to its negative for odd numbers, drawing a boundary between the two classes (even and odd number vectors) becomes easy. As a result, if you transform the input data into two dimensions and negate the second value depending on whether the number is even or odd, the neural network can learn to separate the even and odd vector classes.

You can try something like the following code; you will see that the network learns and converges to almost 100% accuracy.

import numpy
from keras.models import Sequential
from keras.layers import Dense

# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)

X = numpy.array([[x, x if x%2 == 0 else -x] for x in range(1000)])
Y = [1,0]*500

# create model
model = Sequential()
model.add(Dense(12, input_dim=2, init='uniform', activation='relu'))
model.add(Dense(8, init='uniform', activation='relu'))
model.add(Dense(1, init='uniform', activation='sigmoid'))
# Compile model
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X, Y, epochs=50, batch_size=10,  verbose=2)
# Calculate predictions
predictions = model.predict(X)

Note that transforming the numbers into negative space based on even/odd would also work in one dimension, but it is easier to demonstrate with a scatter plot of two-dimensional vectors.
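For completeness, here is a minimal one-dimensional sketch of the same trick (my extrapolation from the note above: even numbers stay positive, odd numbers are negated, so a single threshold near zero separates the classes):

import numpy
from keras.models import Sequential
from keras.layers import Dense

# Signed 1-D feature: +x for even numbers, -x for odd numbers
X = numpy.array([x if x % 2 == 0 else -x for x in range(1000)]).reshape(-1, 1)
Y = numpy.array([1, 0] * 500)  # 1 = even, 0 = odd, as in the question

model = Sequential()
model.add(Dense(4, input_dim=1, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, Y, epochs=50, batch_size=10, verbose=0)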

Answer 3 (score: 0)

This is not the strangest application of neural networks I've seen. The closest example is A Compositional Neural-network Solution to Prime-number Testing (2006), which used a neural network on a more complex number-theory problem.

The outcome of that research was that such a network can be trained, and I suggest you experiment with a similar construction; but as the paper's conclusion shows, there are better solutions for this class of problem.

Answer 4 (score: 0)

The goal of machine learning is to predict labels (Y for you) on data that has features/patterns (X here).

The problem you have is that your X is just an ever-increasing list, with no particular pattern in the sequence and nothing that explains it.

So you are trying to use a statistical algorithm to explain something completely random, which is impossible.

To really get started with machine learning, try the Titanic dataset on Kaggle, the reference platform for ML:

https://www.kaggle.com/upendr/titanic-machine-learning-from-disaster/data

Download it, load it with pandas, and try the same algorithm, as sketched below.

Your X would be each feature, such as passenger class, age, sex, etc. Your Y is Survived, whose value is 1 if the passenger survived and 0 otherwise. You would then try to determine whether a passenger survived based on age, sex, and so on.
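A minimal sketch of that setup (assuming the Kaggle train.csv with its usual Pclass, Sex, Age and Survived columns; the model mirrors the one from the question):

import pandas as pd
from keras.models import Sequential
from keras.layers import Dense

# Load the Kaggle Titanic training data (the file path is an assumption)
df = pd.read_csv('train.csv')

# A few simple features: encode Sex as 0/1 and fill missing ages
df['Sex'] = df['Sex'].map({'male': 0, 'female': 1})
df['Age'] = df['Age'].fillna(df['Age'].median())
X = df[['Pclass', 'Sex', 'Age']].values
Y = df['Survived'].values

model = Sequential()
model.add(Dense(12, input_dim=3, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, Y, epochs=50, batch_size=10, verbose=2)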

I also recommend Andrew Ng's Machine Learning course, which explains everything and is really accessible for beginners.

Have fun! :)

Answer 5 (score: 0)

Here is how I created a model in Keras to classify even/odd numbers in Python 3.

It uses just 1 neuron in the first hidden layer, with 32 inputs. The output layer has 2 neurons for one-hot encoding of 0 and 1.

from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical
import numpy as np  # needed for np.array / np.argmax below


# Helper function to convert a number 
# to its fixed width binary representation
def conv(x):
  a = format(x, '032b')
  l = list(str(a))
  l = np.array(list(map(int, l)))
  return l

# input data
data = [conv(i) for i in range(100000)]
X = np.array(data)


Y= list() # empty list of results
for v in range(100000):
  Y.append( to_categorical(v%2, 2) )

Y = np.array(Y) # we need np.array


# Sequential model: a linear stack of (here fully connected) layers
model = Sequential()

# 32 inputs and 1 neuron in the first layer (hidden layer)
model.add(Dense(1, input_dim=32, activation='relu'))

# output layer: 2 neurons for the one-hot classes 0 and 1
model.add(Dense(2, activation='sigmoid'))


model.compile(loss='binary_crossentropy', 
              optimizer='adam', 
              metrics=['accuracy'])

# epochs is the number of times to retrain over the same data set
# batch_size is how many elements to process in one go
model.fit(X, Y, epochs=5, batch_size=100, verbose=1)
weights, biases = model.layers[0].get_weights()
print("weights",weights.size, weights, "biases", biases)
model.summary()

Epoch 1/5
100000/100000 [==============================] - 3s 26us/step - loss: 0.6111 - acc: 0.6668
Epoch 2/5
100000/100000 [==============================] - 1s 13us/step - loss: 0.2276 - acc: 1.0000
Epoch 3/5
100000/100000 [==============================] - 1s 13us/step - loss: 0.0882 - acc: 1.0000
Epoch 4/5
100000/100000 [==============================] - 1s 13us/step - loss: 0.0437 - acc: 1.0000
Epoch 5/5
100000/100000 [==============================] - 1s 13us/step - loss: 0.0246 - acc: 1.0000
weights 32 [[-4.07479703e-01]
 [ 2.29798079e-01]
 [ 4.12091196e-01]
 [-1.86401993e-01]
 [ 3.70162904e-01]
 [ 1.34553611e-02]
 [ 2.01252878e-01]
 [-1.00370705e-01]
 [-1.41752958e-01]
 [ 7.27931559e-02]
 [ 2.55639553e-01]
 [ 1.90407157e-01]
 [-2.42316410e-01]
 [ 2.43226111e-01]
 [ 2.22285628e-01]
 [-7.04377817e-05]
 [ 2.20522008e-04]
 [-1.48785894e-05]
 [-1.15533156e-04]
 [ 1.16850446e-04]
 [ 6.37861085e-05]
 [-9.74628711e-06]
 [ 3.84256418e-05]
 [-6.19597813e-06]
 [-7.05791535e-05]
 [-4.78575275e-05]
 [-3.07796836e-05]
 [ 3.26417139e-05]
 [-1.51580054e-04]
 [ 1.27965177e-05]
 [ 1.48101550e-04]
 [ 3.18456793e+00]] biases [-0.00016785]
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_49 (Dense)             (None, 1)                 33        
_________________________________________________________________
dense_50 (Dense)             (None, 2)                 4         
=================================================================
Total params: 37
Trainable params: 37
Non-trainable params: 0

Here are the predictions:

print(X[0:1])
scores = model.predict(X[0:1])
print(scores)
print(np.argmax(scores))

print(X[1:2])
scores = model.predict(X[1:2])
print(scores)
print(np.argmax(scores))

[[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]]
[[0.9687797  0.03584918]]
0
[[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]]
[[0.00130448 0.9949934 ]]
1