ByteString histogram

Time: 2013-06-15 14:30:02

Tags: haskell bytestring

Given a (strict) ByteString, what is the most efficient way to count how many of each possible byte it contains?

I see that sort is supposed to be implemented as a counting sort - but there seems to be no way to access the counts involved. I also see there is a count function, which counts the number of times a given byte appears. That gives me the following options:

  • map (\ b -> count b str) [0x00 .. 0xFF]
  • map length . group . sort
  • something with fold* building an IntMap of byte frequencies.

Which is likely to give me the best performance?
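
For concreteness, the first and third options can be sketched like this (a quick sketch of my own, not measured for speed; histCount and histFold are names I made up):

```haskell
import qualified Data.ByteString as BS
import qualified Data.IntMap.Strict as IM

-- Option 1: one count pass per byte value, O(256 * n).
histCount :: BS.ByteString -> [Int]
histCount bs = map (\b -> BS.count b bs) [0x00 .. 0xFF]

-- Option 3: a strict left fold building an IntMap of byte frequencies.
histFold :: BS.ByteString -> IM.IntMap Int
histFold = BS.foldl' (\m b -> IM.insertWith (+) (fromIntegral b) 1 m) IM.empty

main :: IO ()
main = do
  let bs = BS.pack [0x00, 0x00, 0x01, 0xFF]
  print (take 2 (histCount bs))   -- [2,1]
  print (IM.toList (histFold bs)) -- [(0,2),(1,1),(255,1)]
```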

4 answers:

Answer 0: (score: 5)

dflemstr's basic idea is of course right, but since you want the best performance, you need to use unchecked access to the ByteString as well as to the counting array, e.g.

import Control.Monad.ST
import Data.Array.ST
import Data.Array.Base (unsafeRead, unsafeWrite)
import Data.Array.Unboxed

import Data.Word

import Data.ByteString (ByteString)
import qualified Data.ByteString as BS
import Data.ByteString.Unsafe

histogram :: ByteString -> UArray Word8 Int
histogram bs = runSTUArray $ do
    hist <- newArray (0, 255) 0
    let l = BS.length bs
        -- bump the counter for byte b, without bounds checks
        update b = do
            o <- unsafeRead hist b
            unsafeWrite hist b (o+1)
        -- walk the ByteString from the last index down to 0
        loop i
            | i < 0     = return hist
            | otherwise = do
                update $ fromIntegral (bs `unsafeIndex` i)
                loop (i-1)
    loop (l-1)

According to criterion (building the histogram of a 200,000-byte ByteString), that makes a considerable difference:

warming up
estimating clock resolution...
mean is 1.667687 us (320001 iterations)
found 3078 outliers among 319999 samples (1.0%)
  1947 (0.6%) high severe
estimating cost of a clock call...
mean is 40.43765 ns (14 iterations)

benchmarking dflemstr
mean: 21.42852 ms, lb 21.05213 ms, ub 21.77954 ms, ci 0.950
std dev: 1.873897 ms, lb 1.719565 ms, ub 2.038779 ms, ci 0.950
variance introduced by outliers: 74.820%
variance is severely inflated by outliers

benchmarking unsafeIndex
mean: 312.6447 us, lb 304.3425 us, ub 321.0795 us, ci 0.950
std dev: 42.86886 us, lb 39.64363 us, ub 46.52899 us, ci 0.950
variance introduced by outliers: 88.342%
variance is severely inflated by outliers

(I changed dflemstr's code to use runSTUArray and return a UArray Word8 Int to get a uniform return type, but that doesn't make much difference to the running time.)
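
As an aside (my addition, not part of the answer): if the unchecked reads and writes feel too low-level, the same UArray Word8 Int can be produced purely with accumArray - typically slower than the loop above, but much shorter:

```haskell
import Data.Array.Unboxed (UArray, accumArray, assocs)
import qualified Data.ByteString as BS
import Data.Word (Word8)

-- accumArray folds the (byte, 1) pairs into an unboxed array in one pass.
histogramA :: BS.ByteString -> UArray Word8 Int
histogramA bs = accumArray (+) 0 (0, 255) [ (b, 1) | b <- BS.unpack bs ]

main :: IO ()
main = print [ p | p <- assocs (histogramA (BS.pack [0x61, 0x61, 0x62])), snd p /= 0 ]
-- prints [(97,2),(98,1)]
```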

Answer 1: (score: 4)

The most efficient method probably involves using a mutable array to store the counts. This is potentially one of the most efficient O(n) solutions:

import Control.Monad
import Control.Monad.ST

import Data.Array.ST

import Data.ByteString (ByteString)
import qualified Data.ByteString as ByteString

import Data.Word

byteHistogram :: ByteString -> [Int]
byteHistogram bs = runST $ do
  histogram <- newArray (minBound, maxBound) 0 :: ST s (STUArray s Word8 Int)
  forM_ (ByteString.unpack bs) $ \ byte -> do
    count <- readArray histogram byte
    writeArray histogram byte (count + 1)
  getElems histogram

Answer 2: (score: 3)

You can guess, or you can make a program and measure it - the results might surprise you.

import Data.ByteString as B
import Data.IntMap.Strict as I
import qualified Data.Vector.Unboxed.Mutable as M
import Data.Vector.Unboxed as V
import Criterion
import Criterion.Main
import System.Entropy
import System.IO.Unsafe
import Data.Word

main = do
    bs <- getEntropy 1024
    defaultMain [ bench "map count" $ nf mapCount bs
                , bench "map group sort" $ nf mapGroup bs
                , bench "fold counters" $ nf mapFoldCtr bs
                , bench "vCount" $ nf vectorCount bs
                ]

-- O(n*m) (length of bytestring, length of list of element being counted up)
-- My guess: bad
mapCount :: ByteString -> [Int]
mapCount bs = Prelude.map (`B.count` bs) [0x00..0xFF]

-- Notice that B.sort uses counting sort, so there's already lots of
-- duplicate work done here.
-- O() isn't such a big deal as the use of lists - likely allocation and
-- large constant factors.
mapGroup :: ByteString -> [Int]
mapGroup = Prelude.map Prelude.length . Prelude.map B.unpack . B.group . B.sort

mapFoldCtr :: ByteString -> [Int]
mapFoldCtr bs  = I.elems $ B.foldl' cnt I.empty bs
 where
 cnt :: I.IntMap Int -> Word8 -> I.IntMap Int
 cnt m k = I.insertWith (+) (fromIntegral k) 1 m

vectorCount :: B.ByteString -> [Int]
vectorCount bs = V.toList $ V.create $ do
        v <- M.new 256
        Prelude.mapM_ (\i -> M.unsafeWrite v i 0) [0..255]
        Prelude.mapM_ (\i -> M.unsafeRead v (fromIntegral i) >>= M.unsafeWrite v (fromIntegral i) . (+1) ) (B.unpack bs)
        return v
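
(An aside of mine, not part of the answer: the explicit zeroing loop in vectorCount can be replaced by M.replicate, which allocates and initializes the vector in one step.)

```haskell
import qualified Data.ByteString as B
import qualified Data.Vector.Unboxed as V
import qualified Data.Vector.Unboxed.Mutable as M

-- Same counting loop as vectorCount, but starting from a zero-filled vector.
vectorCount' :: B.ByteString -> [Int]
vectorCount' bs = V.toList $ V.create $ do
    v <- M.replicate 256 (0 :: Int)
    mapM_ (\b -> M.unsafeRead v (fromIntegral b) >>= M.unsafeWrite v (fromIntegral b) . (+1)) (B.unpack bs)
    return v

main :: IO ()
main = print (take 3 (vectorCount' (B.pack [0x00, 0x00, 0x01])))
-- prints [2,1,0]
```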

The results (shortened) reflect rather well on map group sort, but the unboxed mutable vector/array style solution is, unsurprisingly, in the lead:

benchmarking map count
mean: 308.7067 us, lb 307.3562 us, ub 310.5099 us, ci 0.950
std dev: 7.942305 us, lb 6.269114 us, ub 10.08334 us, ci 0.950

benchmarking map group sort
mean: 43.03601 us, lb 42.93492 us, ub 43.15815 us, ci 0.950
std dev: 567.5979 ns, lb 486.8838 ns, ub 666.0098 ns, ci 0.950

benchmarking fold counters
mean: 191.5338 us, lb 191.1102 us, ub 192.0366 us, ci 0.950
std dev: 2.370183 us, lb 1.995243 us, ub 2.907595 us, ci 0.950

benchmarking vCount
mean: 12.98727 us, lb 12.96037 us, ub 13.02261 us, ci 0.950
std dev: 156.6505 ns, lb 123.6789 ns, ub 198.4892 ns, ci 0.950

Oddly, when I increase the bytestring size to 200K, like the one Daniel used, map/group/sort clocks in at around 250 us while the vector solution takes more than 500 us:

benchmarking map count
mean: 5.796340 ms, lb 5.788830 ms, ub 5.805126 ms, ci 0.950
std dev: 41.65349 us, lb 35.69293 us, ub 48.39205 us, ci 0.950

benchmarking map group sort
mean: 260.7405 us, lb 259.2525 us, ub 262.4742 us, ci 0.950
std dev: 8.247289 us, lb 7.127576 us, ub 9.371299 us, ci 0.950

benchmarking fold counters
mean: 3.915101 ms, lb 3.892415 ms, ub 4.006287 ms, ci 0.950
std dev: 201.7632 us, lb 43.13063 us, ub 469.8170 us, ci 0.950

benchmarking vCount
mean: 556.5588 us, lb 545.4895 us, ub 567.9318 us, ci 0.950
std dev: 57.44888 us, lb 51.22270 us, ub 65.91105 us, ci 0.950
found 1 outliers among 100 samples (1.0%)
variance introduced by outliers: 80.038%
variance is severely inflated by outliers

That difference is dramatic, though - perhaps some playing with heap sizes would make it go away (at least in the benchmark program), but it wasn't quick or easy for me.

Answer 3: (score: 2)

(Don't take this too seriously)

The (actually) fastest solution, and a purely functional one, would be something like this... almost:

data Hist = Hist {v00 :: Int, v01 :: Int {- , v02 :: Int, ... -} }

emptyHist :: Hist
emptyHist = Hist 0 0 {- 0 0 ... -}

foldRecord :: B.ByteString -> [Int]
foldRecord = histToList . B.foldl' cnt emptyHist
  where histToList (Hist x00 x01 {- x02 ... -}) = [x00, x01 {- , x02, ... -}]
        cnt (Hist !x00 !x01)    0x00      = Hist (x00 + 1) x01 {- x02 ... -}
        cnt (Hist !x00 !x01) {- 0x01 -} _ = Hist x00 (x01 + 1) {- x02 ... -}
        {- ... -}
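
To make the idea concrete, here is a self-contained two-bucket version (my reduction of the scheme above, purely for illustration: byte 0x00 gets its own field and every other byte is collapsed into a second field):

```haskell
import qualified Data.ByteString as B

-- Two strict fields standing in for the full 256-field Hist.
data Hist2 = Hist2 !Int !Int

foldRecord2 :: B.ByteString -> [Int]
foldRecord2 = histToList . B.foldl' cnt (Hist2 0 0)
  where
    histToList (Hist2 x00 rest) = [x00, rest]
    cnt (Hist2 x00 rest) 0x00 = Hist2 (x00 + 1) rest  -- pattern for byte 0x00
    cnt (Hist2 x00 rest) _    = Hist2 x00 (rest + 1)  -- every other byte

main :: IO ()
main = print (foldRecord2 (B.pack [0x00, 0x00, 0x07, 0x09]))
-- prints [2,2]
```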

Using @Thomas's benchmark it runs in 11.67 us (the previously fastest, vCount, runs in 14.99 us on my machine).

The problem comes when cnt is split into the 256 possible patterns (the full equivalent code, using lens, is here).

The compiler is slow at selecting the correct pattern (the left-hand side of cnt) or the increment to perform (the right-hand side of cnt), but I think it should generate efficient code (at least, the two patterns are equally efficient).

(With 256 cnt patterns and 256 Hist fields it takes 1.35 milliseconds!!!)

(On my machine, map group sort takes 42 us, right behind the vCount alternative.)