Is this Haskell code equivalent to this Python code?

Date: 2015-09-23 23:25:34

Tags: python haskell numpy

I am trying to port a Python program to Haskell, and I am not yet familiar with NumPy (which the Python program uses), so I would like to know why these two pieces of code are not equivalent. Here is my Haskell code:

data NNetwork = NNetwork { nlayers :: Int
                         , sizes   :: [Int]
                         , biases  :: [[Float]]
                         , weights :: [[Float]] }
                deriving (Show, Ord, Eq)

buildNetwork :: [Int] -> NNetwork
buildNetwork sizes = NNetwork { nlayers = length sizes
                              , sizes   = sizes
                              , biases  = map (\y -> replicate y (sig . toFloat . rands $ y)) sizes
                              , weights = map (\y -> replicate y (toFloat $ rands y)) sizes }

feedforward :: NNetwork -> Float -> [[Float]]
feedforward net a = map (equation a) (zip (weights net) (biases net))

toFloat x = fromIntegral x :: Float

sig :: Float -> Float
sig a = 1 / (1 + exp (-a))

rands :: Int -> Int
rands x = (7 * x) `mod` 11

equation :: Float -> ([Float], [Float]) -> [Float]
equation a (w, b) = map sig $ zipWith (+) (dot w (rep w a)) b
  where dot = zipWith (*)
        rep a b = replicate (length a) b

The original Python code:

import numpy as np

class Network(object):

    def __init__(self, sizes):
        self.num_layers = len(sizes)
        self.sizes = sizes
        self.biases = [np.random.randn(y, 1) for y in sizes[1:]]
        self.weights = [np.random.randn(y, x)
                        for x, y in zip(sizes[:-1], sizes[1:])]

    def feedforward(self, a):
        """Return the output of the network if "a" is input."""
        for b, w in zip(self.biases, self.weights):
            a = sigmoid(np.dot(w, a)+b)
        return a

def sigmoid(z):
    return 1.0/(1.0+np.exp(-z))
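
As far as I can tell, np.dot(w, a) is a matrix-vector product (w has shape (y, x), a has shape (x, 1)), while the dot in my equation is an element-wise zipWith (*). Below is a minimal sketch of what I believe the Python loop does per layer, assuming each layer stores a weight matrix as [[Float]] (one row per output neuron); matVec and feedforward' are names I made up for illustration, not part of either program:

-- Sketch only: mirrors the Python loop `a = sigmoid(np.dot(w, a) + b)`,
-- assuming each layer carries a weight matrix and a bias vector.
matVec :: [[Float]] -> [Float] -> [Float]
matVec w a = map (sum . zipWith (*) a) w  -- one dot product per row of w

feedforward' :: [([[Float]], [Float])] -> [Float] -> [Float]
feedforward' layers a0 = foldl step a0 layers
  where step a (w, b) = map sig (zipWith (+) (matVec w a) b)  -- reuses sig from above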

I am trying to port a very simple neural-network program from Python to Haskell because I prefer Haskell. I am also worried that I have done something wrong, because the Haskell code is much more verbose.

Thanks!

1 answer:

Answer 0 (score: 1)

First of all: note that the Python version is missing an equivalent of deriving (Show, Ord, Eq) - try implementing the corresponding __magic__ methods and see how many lines of code that adds. Without them, ==, <, > and print Network() are close to meaningless.
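
To see concretely what the deriving clause buys you, here is a small sketch using the buildNetwork from the question (the [2, 3, 1] topology is arbitrary, and the equality check only holds because rands is a deterministic fake):

-- Sketch: the derived instances give printing and comparison for free.
demoDeriving :: IO ()
demoDeriving = do
  let n1 = buildNetwork [2, 3, 1]
      n2 = buildNetwork [2, 3, 1]
  print n1           -- derived Show renders every field
  print (n1 == n2)   -- derived Eq compares field by field: True here
  print (n1 < n2)    -- derived Ord compares fields in declaration order: False here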

Basically, the verbosity comes mostly from the type signatures. Beyond that, you can move rands into a where block under buildNetwork, and you can get rid of toFloat entirely by replacing every call to it with a plain fromIntegral, no type annotation needed. Plus a few other minor refactorings.

More generally, you can expect certain things to come out more verbose even in a language that is usually terser. I am quite sure that as your neural-network program grows toward a richer codebase, the Haskell will end up terser than the Python - ignoring the fact that Python's neural-network libraries are presumably more mature than their (possibly nonexistent) Haskell counterparts.

data NNetwork = NNetwork { nlayers :: Int
                         , sizes   :: [Int]
                         , biases  :: [[Float]]
                         , weights :: [[Float]] }
                deriving (Show, Ord, Eq)

buildNetwork sizes =
  NNetwork { nlayers = length sizes
           , sizes   = sizes
           , biases  = map (\y -> replicate y (sig . fromIntegral . rands $ y)) sizes
           , weights = map (\y -> replicate y (fromIntegral . rands $ y)) sizes }
  where rands x = (7 * x) `mod` 11

feedforward net a = map (equation a) (zip (weights net) (biases net))

sig a = 1 / (1 + exp (-a))

equation a (w, b) = map sig $ zipWith (+) (dot w rep) b
  where dot = zipWith (*)
        rep = replicate (length w) a
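
As a quick sanity check of the refactored version (again, [2, 3, 1] is an arbitrary topology, and the output is deterministic because rands is a fixed formula):

-- Sketch: exercise the refactored network once.
main :: IO ()
main = do
  let net = buildNetwork [2, 3, 1]
  print (nlayers net)          -- 3
  print (feedforward net 0.5)  -- one activation list per entry in sizes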

And you could do some micro-refactoring inside buildNetwork to remove a little duplication, but it would only shorten the lines, and would probably make the code less readable to a domain expert:

buildNetwork sizes =
  NNetwork { nlayers = length sizes
           , sizes   = sizes
           , biases  = nameMe sig
           , weights = nameMe id }
  where nameMe fn = map (\y -> replicate y (fn . fromIntegral $ y * 7 `mod` 11)) sizes