Why does my modified (Real World Haskell) MapReduce implementation fail with "too many open files"?

Date: 2011-04-04 16:56:56

Tags: mapreduce haskell

I am implementing a Haskell program that compares each line of a file with every other line in the file. For brevity, let's assume the data structure represented by one line is just an Int, and that my algorithm is the squared distance. I would implement it as follows:

--My operation
distance :: Int -> Int -> Int
distance a b = (a-b)*(a-b)

combineDistances :: [Int] -> Int
combineDistances = sum

--Applying my operation simply on a file
sumOfDistancesOnSmallFile :: FilePath -> IO Int
sumOfDistancesOnSmallFile path = do
              fileContents <- readFile path
              return $ allDistances $ map read $ lines $ fileContents
              where
                  allDistances (x:xs) = (allDistances xs) + ( sum $ map (distance x) xs)
                  allDistances _ = 0

--Test file generation
createTestFile :: Int -> FilePath -> IO ()
createTestFile n path = writeFile path $ unlines $ map show $ take n $ infiniteList 0 1
    where infiniteList :: Int->Int-> [Int]
          infiniteList i j = (i + j) : infiniteList j (i+j)
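As a quick sanity check of the arithmetic, the same pairwise sum can be written over a plain list, without any file IO. This is a sketch with my own names (`allDistancesList` is hypothetical, mirroring `allDistances` above):

```haskell
-- Pure reference model of the computation above, handy for ghci checks.
distance :: Int -> Int -> Int
distance a b = (a - b) * (a - b)

-- Sum of squared distances over every unordered pair of elements.
allDistancesList :: [Int] -> Int
allDistancesList (x:xs) = allDistancesList xs + sum (map (distance x) xs)
allDistancesList _      = 0
```

For `[1,2,3]` this gives (1-2)² + (1-3)² + (2-3)² = 6.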

Unfortunately, the complete file is kept in memory. To prevent possible out-of-memory exceptions on very large files, I wanted to seek the file cursor back to the beginning of the file at each recursion of 'allDistances'.

In the book "Real World Haskell", an implementation of mapreduce is given, including a function that splits a file into chunks (chapter 24, available here). I modified the chunking function so that, instead of dividing the complete file into chunks, it returns as many chunks as there are lines, with each chunk representing one element of

tails . lines . readFile
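To see the shape this expression produces: each suffix pairs one line with all the lines after it, which is exactly what the pairwise-distance map step consumes. A sketch (`suffixesOf` is my name, and the `init` dropping the final empty suffix is my addition):

```haskell
import Data.List (tails)

-- Each suffix pairs one line with all the lines that follow it,
-- matching the shape the map step of the algorithm consumes.
suffixesOf :: String -> [[String]]
suffixesOf = init . tails . lines   -- init drops the trailing empty suffix
```

For example, `suffixesOf "1\n2\n3"` yields `[["1","2","3"],["2","3"],["3"]]`.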

The complete implementation (plus the code from the previous section):

import qualified Data.ByteString.Lazy.Char8 as Lazy
import Control.Exception (bracket,finally)
import Control.Monad(forM,liftM)
import Control.Parallel.Strategies
import Control.Parallel
import Control.DeepSeq (NFData)
import Data.Int (Int64)
import System.IO

--Applying my operation using mapreduce on a very big file
sumOfDistancesOnFile :: FilePath -> IO Int
sumOfDistancesOnFile path = chunkedFileOperation chunkByLinesTails (distancesUsingMapReduce) path

distancesUsingMapReduce :: [Lazy.ByteString] -> Int
distancesUsingMapReduce = mapReduce rpar (distancesFirstToTail . lexer)
                                rpar combineDistances
              where lexer :: Lazy.ByteString -> [Int]
                    lexer chunk = map (read . Lazy.unpack) (Lazy.lines chunk)

distancesOneToMany :: Int -> [Int] -> Int
distancesOneToMany one many = combineDistances $ map (distance one) many

distancesFirstToTail :: [Int] -> Int
distancesFirstToTail s = 
              if not (null s)
              then distancesOneToMany (head s) (tail s)
              else 0
--The mapreduce algorithm
mapReduce :: Strategy b -- evaluation strategy for mapping
      -> (a -> b)   -- map function
      -> Strategy c -- evaluation strategy for reduction
      -> ([b] -> c) -- reduce function
      -> [a]        -- list to map over
      -> c
mapReduce mapStrat mapFunc reduceStrat reduceFunc input =
      mapResult `pseq` reduceResult
      where mapResult    = parMap mapStrat mapFunc input
            reduceResult = reduceFunc mapResult `using` reduceStrat


--Working with (file)chunks:
data ChunkSpec = CS{
    chunkOffset :: !Int64
    , chunkLength :: !Int64
    } deriving (Eq,Show)

chunkedFileOperation ::   (NFData a)=>
            (FilePath-> IO [ChunkSpec])
       ->   ([Lazy.ByteString]-> a)
       ->   FilePath
       ->   IO a
chunkedFileOperation chunkCreator funcOnChunks path = do
    (chunks, handles)<- chunkedRead chunkCreator path
    let r = funcOnChunks chunks
    (rdeepseq r `seq` return r) `finally` mapM_ hClose handles

chunkedRead ::  (FilePath -> IO [ChunkSpec])
        ->  FilePath
        ->  IO ([Lazy.ByteString], [Handle])
chunkedRead chunkCreator path = do
    chunks <- chunkCreator path
    liftM unzip . forM chunks $ \spec -> do
        h <- openFile path ReadMode
        hSeek h AbsoluteSeek (fromIntegral (chunkOffset spec))
        chunk <- Lazy.take (chunkLength spec) `liftM` Lazy.hGetContents h
        return (chunk, h)

-- returns set of chunks representing  tails . lines . readFile 
chunkByLinesTails :: FilePath -> IO [ChunkSpec]
chunkByLinesTails path = do
    bracket (openFile path ReadMode) hClose $ \h-> do
        totalSize <- fromIntegral `liftM` hFileSize h
        let chunkSize = 1
            findChunks offset = do
              let newOffset = offset + chunkSize
              hSeek h AbsoluteSeek (fromIntegral newOffset)
              let findNewline lineSeekOffset = do
                    eof <- hIsEOF h
                    if eof
                      then return [CS offset (totalSize - offset)]
                      else do
                        bytes <- Lazy.hGet h 4096
                        case Lazy.elemIndex '\n' bytes of
                          Just n -> do
                            nextChunks <- findChunks (lineSeekOffset + n + 1)
                            return (CS offset (totalSize - offset) : nextChunks)
                          Nothing -> findNewline (lineSeekOffset + Lazy.length bytes)
              findNewline newOffset
        findChunks 0

Unfortunately, on a bigger file (e.g. 2000 lines) the mapreduce version throws an exception:
*** Exception: getCurrentDirectory: resource exhausted (Too many open files)

I am a bit ashamed that I cannot debug the program myself, but I only know how to debug Java/C# code. I also don't know how to properly test the file chunking and reading. I expect the problem is not part of the mapReduce function itself, because a similar version without mapReduce also throws the exception. In that attempt, I had chunkedFileOperation accept both the per-chunk operation and the 'reduce' function, which it applied directly.
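On testing the chunking: one lightweight approach is a pure reference model that computes the (offset, length) pairs the chunker should produce, which can then be compared against what `chunkByLinesTails` returns on a generated file. A sketch (the model and its name are my own, not from the book):

```haskell
-- Expected chunks for the tails-of-lines scheme: a chunk starts at byte 0
-- and after every '\n', and every chunk runs to the end of the file.
expectedChunks :: String -> [(Int, Int)]      -- (offset, length) pairs
expectedChunks s = [ (o, total - o) | o <- offsets ]
  where
    total   = length s
    offsets = 0 : [ i + 1 | (i, c) <- zip [0 ..] s, c == '\n', i + 1 < total ]
```

For example, `expectedChunks "1\n2\n3\n"` yields `[(0,6),(2,4),(4,2)]`: three chunks, one per line, each running to end of file.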

For completeness: I am running HaskellPlatform 2011.2.0 on Mac OS X 10.6.7 (Snow Leopard), with the following packages:
 bytestring 0.9.1.10
 parallel 3.1.0.1
I would qualify as a self-educated beginner / fresh Haskell programmer.

2 answers:

Answer 0 (score: 4):

You're using lazy IO, so the files opened with readFile aren't being closed in a timely fashion. You need to think of a solution that explicitly closes the files at regular intervals (e.g. via strict IO, or iteratee IO).
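One strict-IO shape this could take (a sketch with my own names, not the book's code): read each chunk fully while its handle is still open, so that nothing lazy outlives the close.

```haskell
import qualified Data.ByteString.Char8 as Strict
import System.IO

-- Read one chunk strictly: the bytes are fully materialised before
-- withFile closes the handle, so no handle lingers for lazy evaluation.
readChunkStrictly :: FilePath -> Integer -> Int -> IO Strict.ByteString
readChunkStrictly path offset len =
  withFile path ReadMode $ \h -> do
    hSeek h AbsoluteSeek offset
    Strict.hGet h len
```

Because `Strict.hGet` returns a fully evaluated ByteString, the result remains usable after `withFile` has closed the handle.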

Answer 1 (score: 0):

The error means what it says: your process has too many files open at once. The operating system imposes an (arbitrary) limit on the number of files (or directories) a process may hold open simultaneously. See your ulimit(1) manpage and/or limit the number of mappers.
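Besides raising the limit, the handle count itself can be capped: chunkedRead opens one handle per chunk, and with one chunk per line, 2000 lines means 2000 simultaneously open handles. A sketch that reuses a single handle, reading each chunk strictly in turn (my names, assuming (offset, length) chunk specs):

```haskell
import qualified Data.ByteString.Char8 as Strict
import System.IO

-- One handle for every chunk: seek, read strictly, continue. The open
-- handle count stays at 1 no matter how many lines the file has.
readChunksOneHandle :: FilePath -> [(Integer, Int)] -> IO [Strict.ByteString]
readChunksOneHandle path specs =
  withFile path ReadMode $ \h ->
    mapM (\(off, len) -> do hSeek h AbsoluteSeek off
                            Strict.hGet h len) specs
```

The trade-off is that the chunks are no longer read lazily on demand; each one is materialised in full as it is visited.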