How to implement the ReLU function in Numpy

Asked: 2015-08-20 03:58:56

Tags: python numpy machine-learning neural-network

I want to build a simple neural network and I would like to use the ReLU function. Can someone show me how to implement this function using numpy? Thanks for your time!

9 Answers:

Answer 0 (score: 86)

There are a couple of ways.

[1] "current designing ./2011/bst.rda"
Error: cannot allocate vector of size 434.7 Mb
In addition: There were 50 or more warnings (use warnings() to see the first 50)
> gc(verbose=T)
Garbage collection 27232 = 15350+4362+7520 (level 2) ... 
31.5 Mbytes of cons cells used (49%)
450.6 Mbytes of vectors used (21%)
           used  (Mb) gc trigger   (Mb)  max used   (Mb)
Ncells  1175911  31.5    2421436   64.7   1770749   47.3
Vcells 59048650 450.6  278146328 2122.1 461815808 3523.4
> rm(list=ls(all=TRUE))
> gc(verbose=T)
Garbage collection 27233 = 15350+4362+7521 (level 2) ... 
11.1 Mbytes of cons cells used (21%)
7.1 Mbytes of vectors used (0%)
         used (Mb) gc trigger   (Mb)  max used   (Mb)
Ncells 414283 11.1    1937148   51.8   1770749   47.3
Vcells 920035  7.1  222517062 1697.7 461815808 3523.4
> 

For example:

>>> x = np.random.random((3, 2)) - 0.5
>>> x
array([[-0.00590765,  0.18932873],
       [-0.32396051,  0.25586596],
       [ 0.22358098,  0.02217555]])
>>> np.maximum(x, 0)
array([[ 0.        ,  0.18932873],
       [ 0.        ,  0.25586596],
       [ 0.22358098,  0.02217555]])
>>> x * (x > 0)
array([[-0.        ,  0.18932873],
       [-0.        ,  0.25586596],
       [ 0.22358098,  0.02217555]])
>>> (abs(x) + x) / 2
array([[ 0.        ,  0.18932873],
       [ 0.        ,  0.25586596],
       [ 0.22358098,  0.02217555]])

If we time these methods on a larger array with the following code:

import numpy as np

x = np.random.random((5000, 5000)) - 0.5
print("max method:")
%timeit -n10 np.maximum(x, 0)

print("multiplication method:")
%timeit -n10 x * (x > 0)

print("abs method:")
%timeit -n10 (abs(x) + x) / 2

So the multiplication method seems to be the fastest.
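As a quick sanity check (a minimal sketch of my own, not part of the answer), the three expressions can be verified to produce identical results:

```python
import numpy as np

def relu_max(x):
    return np.maximum(x, 0)        # elementwise max with 0

def relu_mul(x):
    return x * (x > 0)             # boolean mask becomes 0/1, then multiply

def relu_abs(x):
    return (np.abs(x) + x) / 2     # negatives cancel, positives double then halve

x = np.array([-1.5, 0.0, 2.0, 3.5])
assert np.array_equal(relu_max(x), relu_mul(x))
assert np.array_equal(relu_mul(x), relu_abs(x))
```

One subtlety: `x * (x > 0)` yields `-0.0` for negative entries, which compares equal to `0.0`, so the assertions still pass.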

Answer 1 (score: 35)

If you don't mind x being modified, use np.maximum(x, 0, x). Daniel S pointed this out. It is much faster, and because people may overlook it, I am reposting it as an answer. Here is the comparison:

max method:
10 loops, best of 3: 238 ms per loop
multiplication method:
10 loops, best of 3: 128 ms per loop
abs method:
10 loops, best of 3: 311 ms per loop
in-place max method:
10 loops, best of 3: 38.4 ms per loop
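To illustrate why the in-place variant behaves differently (a small sketch of my own): passing x as the third argument makes numpy write the result into x itself, so no new array is allocated and the original values are overwritten:

```python
import numpy as np

x = np.array([-2.0, -1.0, 3.0])
y = np.maximum(x, 0, x)   # the third argument is the output buffer: x itself
print(y is x)             # True: the result *is* x, nothing new was allocated
print(x)                  # [0. 0. 3.]: the original values are gone
```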

Answer 2 (score: 18)

I found an even faster method for ReLU with numpy: you can use numpy's fancy indexing feature.

Fancy indexing:

20.3 ms ± 272 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

>>> x = np.random.random((5,5)) - 0.5 
>>> x
array([[-0.21444316, -0.05676216,  0.43956365, -0.30788116, -0.19952038],
       [-0.43062223,  0.12144647, -0.05698369, -0.32187085,  0.24901568],
       [ 0.06785385, -0.43476031, -0.0735933 ,  0.3736868 ,  0.24832288],
       [ 0.47085262, -0.06379623,  0.46904916, -0.29421609, -0.15091168],
       [ 0.08381359, -0.25068492, -0.25733763, -0.1852205 , -0.42816953]])
>>> x[x<0]=0
>>> x
array([[ 0.        ,  0.        ,  0.43956365,  0.        ,  0.        ],
       [ 0.        ,  0.12144647,  0.        ,  0.        ,  0.24901568],
       [ 0.06785385,  0.        ,  0.        ,  0.3736868 ,  0.24832288],
       [ 0.47085262,  0.        ,  0.46904916,  0.        ,  0.        ],
       [ 0.08381359,  0.        ,  0.        ,  0.        ,  0.        ]])

Here is my benchmark:

import numpy as np
x = np.random.random((5000, 5000)) - 0.5
print("max method:")
%timeit -n10 np.maximum(x, 0)
print("max inplace method:")
%timeit -n10 np.maximum(x, 0, x)
print("multiplication method:")
%timeit -n10 x * (x > 0)
print("abs method:")
%timeit -n10 (abs(x) + x) / 2
print("fancy index:")
%timeit -n10 x[x < 0] = 0

max method:
241 ms ± 3.53 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
max inplace method:
38.5 ms ± 4 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
multiplication method:
162 ms ± 3.1 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
abs method:
181 ms ± 4.18 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
fancy index:
20.3 ms ± 272 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
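Note that the fancy-indexing version, like the in-place max, mutates its input, which is part of why it benchmarks so well here. A non-destructive wrapper (my own sketch, which adds the copy cost back) would look like:

```python
import numpy as np

def relu_fancy(x):
    out = x.copy()       # copy so the caller's array is left untouched
    out[out < 0] = 0     # boolean-mask assignment zeroes the negatives
    return out

x = np.array([-3.0, 1.0, -0.5, 4.0])
print(relu_fancy(x))     # [0. 1. 0. 4.]
print(x[0])              # -3.0: the input is unchanged
```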

Answer 3 (score: 10)

You can do it in a much easier way, and without numpy:

def ReLU(x):
    return x * (x > 0)

def dReLU(x):
    return 1. * (x > 0)
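As a brief illustration of how the derivative is used (the upstream gradient values below are made up for the example), dReLU simply gates an incoming gradient in the chain rule:

```python
import numpy as np

def dReLU(x):
    return 1. * (x > 0)

x = np.array([-2.0, 0.5, 3.0])
upstream = np.array([10.0, 20.0, 30.0])   # hypothetical gradient from the next layer
dx = dReLU(x) * upstream                  # chain rule: blocked wherever x <= 0
print(dx)                                 # [ 0. 20. 30.]
```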

Answer 4 (score: 4)

Richard Möhn's comparison is not fair. As Andrea Di Biagio's comment points out, the in-place method np.maximum(x, 0, x) will already have modified x after the first loop.
So here is my benchmark:

import numpy as np

def baseline():
    x = np.random.random((5000, 5000)) - 0.5
    return x

def relu_mul():
    x = np.random.random((5000, 5000)) - 0.5
    out = x * (x > 0)
    return out

def relu_max():
    x = np.random.random((5000, 5000)) - 0.5
    out = np.maximum(x, 0)
    return out

def relu_max_inplace():
    x = np.random.random((5000, 5000)) - 0.5
    np.maximum(x, 0, x)
    return x 

The timing:

print("baseline:")
%timeit -n10 baseline()
print("multiplication method:")
%timeit -n10 relu_mul()
print("max method:")
%timeit -n10 relu_max()
print("max inplace method:")
%timeit -n10 relu_max_inplace()

And the results:

baseline:
10 loops, best of 3: 425 ms per loop
multiplication method:
10 loops, best of 3: 596 ms per loop
max method:
10 loops, best of 3: 682 ms per loop
max inplace method:
10 loops, best of 3: 602 ms per loop

The in-place max method is only a bit faster than the max method, possibly because it omits the variable assignment for 'out'. And it is still slower than the multiplication method. Also, since you are implementing the ReLU function, you may need to save 'x' for backprop through relu. For example:

def relu_backward(dout, cache):
    x = cache
    dx = np.where(x > 0, dout, 0)
    return dx

So I suggest you use the multiplication method.
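Putting that suggestion together with the backward pass shown above, a minimal forward/backward pair (a sketch following the answer's relu_backward, not a definitive implementation) might look like:

```python
import numpy as np

def relu_forward(x):
    out = x * (x > 0)    # the multiplication method
    return out, x        # cache the input for the backward pass

def relu_backward(dout, cache):
    x = cache
    dx = np.where(x > 0, dout, 0)   # gradient flows only where x was positive
    return dx

x = np.array([-1.0, 2.0, -3.0, 4.0])
out, cache = relu_forward(x)
dx = relu_backward(np.ones_like(x), cache)
```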

Answer 5 (score: 1)

ReLU(x) is also equal to (x + abs(x)) / 2
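A quick spot-check of this identity (my own verification sketch):

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 9)     # grid of negative, zero, and positive values
lhs = np.maximum(x, 0)            # ReLU(x)
rhs = (x + np.abs(x)) / 2         # the identity from the answer
print(np.array_equal(lhs, rhs))   # True
```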

Answer 6 (score: 0)

Here is a more precise implementation:

def ReLU(x):
    return abs(x) * (x > 0)

Answer 7 (score: 0)

If we have three parameters (t0, a0, a1) for the ReLU, that is, we want to implement

if x > t0:
    x = x * a1
else:
    x = x * a0

we can use the following code:

X = X * (X > t0) * a1 + X * (X < t0) * a0

where X is a matrix.
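An alternative sketch of my own uses np.where, which also defines the behavior at X == t0 explicitly (the masked-sum expression above maps X == t0 to 0, because both masks are False there). The parameter values below are just illustrative, leaky-ReLU-style numbers:

```python
import numpy as np

def param_relu(X, t0, a0, a1):
    # the a0 branch is taken at X == t0, unlike the masked-sum form
    return np.where(X > t0, X * a1, X * a0)

X = np.array([-2.0, 0.0, 3.0])
out = param_relu(X, t0=0.0, a0=0.01, a1=1.0)
print(out)
```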

Answer 8 (score: 0)

numpy does not have a built-in function for relu, but you can define it yourself as follows:

def relu(x):
    return np.maximum(0, x)

For example:

arr = np.array([[-1,2,3],[1,2,3]])

ret = relu(arr)
print(ret)  # prints [[0 2 3] [1 2 3]]