Toy neural network or fancy squiggle generator?

Date: 2019-01-13 22:14:54

Tags: python neural-network turtle-graphics

I wrote this code to better understand machine learning, but I'm not sure whether I'm on the right track. So far it draws random squiggly lines on the screen, using Python 3.7.

import turtle
import random

# Sets the Turtle main screen color 
turtle.bgcolor("pink")

# Settings for bug sprite
bug = turtle.Turtle()
bug.penup()
bug.color("red")
bug_x = bug.setx(-150)
bug_y = bug.sety(12)
bug.pendown()

# Settings for food sprite
food = turtle.Turtle()
food.penup()
food.color("green")
food_x = food.setx(160)
food_y = food.sety(59)
food.pendown()



# Main Loop
while True:


    # X and Y coordinate of Food
    destination = [160,59]

    # X and Y coordinate of Bug
    x_1 = bug.xcor()
    y_1 = bug.ycor()
    origin = [x_1,y_1]

    learn = .10
    bias = 0

    # Weights
    wghts = [random.uniform(-1,1),random.uniform(-1,1),random.uniform(-1,1),
             random.uniform(-1,1),random.uniform(-1,1),random.uniform(-1,1)]
    #print(wghts)




    # Output Neurons
    output_1 = (wghts[0] * origin[0]) + (wghts[1] * origin[1]) + bias
    output_2 = (wghts[2] * origin[0]) + (wghts[3] * origin[1]) + bias
    output_3 = (wghts[4] * origin[0]) + (wghts[5] * origin[1]) + bias

    #Relu Function
    if output_1 >= 0.1:
        output_1 = output_1
    else:
        output_1 = 0

    if output_2 >= 0.1:
        output_2 = output_2
    else:
        output_2 = 0

    if output_3 >= 0.1:
        output_3 = output_3
    else:
        output_3 = 0

    # Compares food/destination X and Y with bug/origin X and Y.
    # applies update ("learn") to all weights
    if origin[0] != destination[0] and origin[1] != destination[1]:
        wghts[0] = wghts[0] + learn
        wghts[1] = wghts[1] + learn
        wghts[2] = wghts[2] + learn
        wghts[3] = wghts[3] + learn
        wghts[4] = wghts[4] + learn
        wghts[5] = wghts[5] + learn
    else:
        wghts[0] = wghts[0] 
        wghts[1] = wghts[1] 
        wghts[2] = wghts[2] 
        wghts[3] = wghts[3] 
        wghts[4] = wghts[4] 
        wghts[5] = wghts[5]

    #print(wghts)
    #print("\n")

    # Creates a barrier for turtle
    bug_1a = int(bug.xcor())
    bug_2a = int(bug.ycor())

    if bug_1a > 300 or bug_2a > 300:
        bug.penup()
        bug.setx(5)
        bug.sety(5)
        bug.pendown()
    if bug_1a < -300 or bug_2a < -300:
        bug.penup()
        bug.setx(5)
        bug.sety(5)
        bug.pendown()

    # Output values applied to turtle direction controls
    bug.forward(output_1)
    bug.right(output_2)
    bug.left(output_3)

1 Answer:

Answer #0 (score: 0):

Problems I see in your program:

wghts carries nothing over from the previous iteration: you randomly reset the weights on every pass through the loop.
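Purely as an illustration (this is not part of the answer's cleanup, and the loop body is stubbed out with fixed stand-in values), persisting the weights would mean creating wghts once before the loop and only adjusting it inside:

import random

learn = 0.10
destination = (160, 59)

# Sketch only: the weights are created once, outside the loop, so any
# adjustment made in one pass carries over into the next pass.
wghts = [random.uniform(-1, 1) for _ in range(6)]

for step in range(5):                # stand-in for the real while-loop
    origin = (-150, 12)              # stand-in for bug.xcor(), bug.ycor()
    if origin != destination:
        wghts = [w + learn for w in wghts]
    print(step, round(wghts[0], 3))  # grows by 0.10 each iteration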

output_1, output_2 and output_3 are computed from the just re-randomized wghts, so these changes:

if origin[0] != destination[0] and origin[1] != destination[1]:
    wghts[0] = wghts[0] + learn
    ...
    wghts[5] = wghts[5] + learn

are never reflected in the output_* variables.
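In isolation the fix is a matter of ordering: adjust wghts first, then compute the outputs from the adjusted values, which is also what the cleanup further down does. A minimal runnable sketch, with stand-in values (an assumption, mirroring the question's setup) in place of the turtle calls:

import random

# Stand-in values mirroring the question's setup (illustration only)
origin = (-150, 12)
destination = (160, 59)
learn = 0.10
bias = 0
wghts = [random.uniform(-1, 1) for _ in range(6)]

# Update the weights first ...
if origin[0] != destination[0] and origin[1] != destination[1]:
    wghts = [w + learn for w in wghts]

# ... then compute the outputs, so that they reflect the updated weights
output_1 = wghts[0] * origin[0] + wghts[1] * origin[1] + bias
output_2 = wghts[2] * origin[0] + wghts[3] * origin[1] + bias
output_3 = wghts[4] * origin[0] + wghts[5] * origin[1] + bias
print(output_1, output_2, output_3)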

You're adding up the bug's weighted X and Y coordinates and using the result as degrees of rotation. Twice. I can't see what sense that makes, but I suppose that's the neural network part.
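To put a rough number on it (assuming the question's starting position of (-150, 12); other positions give different bounds): with weights drawn from uniform(-1, 1), a single weighted sum of the coordinates can reach about 162 in magnitude, so the bug may spin by more than 160 degrees in one step:

# Worst-case size of a single "turn" output, assuming the question's
# starting position of (-150, 12) and weights drawn from uniform(-1, 1)
origin = (-150, 12)
max_turn = abs(origin[0]) + abs(origin[1])
print(max_turn)   # 162, i.e. up to 162 degrees of turn in one step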

Your barrier check happens too late in the code, so it is out of sync with what follows it. The bug hasn't moved since the top of the loop, so do the check earlier.

The following code cleanup won't make your bug any less random; it's just intended to make your code easier to work with:

from turtle import Screen, Turtle
from random import uniform

# Sets the Turtle main screen color
screen = Screen()
screen.bgcolor("pink")

# X and Y coordinate of Food
destination = (160, 59)

# Settings for food sprite
food = Turtle()
food.color("green")
food.penup()
food.setposition(destination)
food.pendown()

start = (-150, 12)

# Settings for bug sprite
bug = Turtle()
bug.color("red")
bug.penup()
bug.setposition(start)
bug.pendown()

LEARN = 0.1
BIAS = 0

# Main Loop
while True:

    # X and Y coordinate of Bug
    x, y = bug.position()

    # Creates a barrier for turtle
    if not -300 <= x <= 300 or not -300 <= y <= 300:
        bug.penup()
        bug.goto(start)
        bug.pendown()
        origin = start
    else:
        origin = (x, y)

    # Weights
    wghts = [uniform(-1, 1), uniform(-1, 1), uniform(-1, 1),
             uniform(-1, 1), uniform(-1, 1), uniform(-1, 1)]

    # Compares food/destination X and Y with bug/origin X and Y.
    # applies update ("LEARN") to all weights
    if origin != destination:
        wghts[0] += LEARN
        wghts[1] += LEARN
        wghts[2] += LEARN
        wghts[3] += LEARN
        wghts[4] += LEARN
        wghts[5] += LEARN

    # Output Neurons
    output_1 = (wghts[0] * origin[0]) + (wghts[1] * origin[1]) + BIAS
    output_2 = (wghts[2] * origin[0]) + (wghts[3] * origin[1]) + BIAS
    output_3 = (wghts[4] * origin[0]) + (wghts[5] * origin[1]) + BIAS

    # Relu Function
    if output_1 < 0.1:
        output_1 = 0

    if output_2 < 0.1:
        output_2 = 0

    if output_3 < 0.1:
        output_3 = 0

    # Output values applied to turtle direction controls
    bug.forward(output_1)
    bug.right(output_2)
    bug.left(output_3)