Faster form of Hamming distance in C++ (possibly using the standard library)?

Asked: 2014-01-13 13:14:34

Tags: c++ algorithm optimization stl

I have two int vectors, e.g. a[100] and b[100].
The simple way to calculate the Hamming distance is:

std::vector<int> a(100);
std::vector<int> b(100);

double dist = 0;    
for(int i = 0; i < 100; i++){
    if(a[i] != b[i])
        dist++;
}
dist /= a.size();

I would like to ask: is there a faster way to do this calculation in C++, or a way to do the same job using the STL?

3 Answers:

Answer 0 (score: 5)

You asked for a faster way. This is an embarrassingly parallel problem, so with C++ you can exploit it in two ways: thread-level parallelism, and vectorization through compiler optimization.

//The following flags allow cpu specific vectorization optimizations on *my cpu*
//clang++ -march=corei7-avx hd.cpp -o hd -Ofast -pthread -std=c++1y
//g++ -march=corei7-avx hd.cpp -o hd -Ofast -pthread -std=c++1y

#include <vector>
#include <thread>
#include <future>
#include <numeric>

template<class T, class I1, class I2>
T hamming_distance(size_t size, I1 b1, I2 b2) {
    return std::inner_product(b1, b1 + size, b2, T{},
            std::plus<T>(), std::not_equal_to<T>());
}

template<class T, class I1, class I2>
T parallel_hamming_distance(size_t threads, size_t size, I1 b1, I2 b2) {
    if(size < 1000)
       return hamming_distance<T, I1, I2>(size, b1, b2);

    if(threads > size)
        threads = size;

    const size_t whole_part = size / threads;
    const size_t remainder = size - threads * whole_part;

    std::vector<std::future<T>> bag;
    bag.reserve(threads + (remainder > 0 ? 1 : 0));

    for(size_t i = 0; i < threads; ++i)
        bag.emplace_back(std::async(std::launch::async,
                            hamming_distance<T, I1, I2>,
                            whole_part,
                            b1 + i * whole_part,
                            b2 + i * whole_part));
    if(remainder > 0)
        bag.emplace_back(std::async(std::launch::async,
                            hamming_distance<T, I1, I2>,
                            remainder,
                            b1 + threads * whole_part,
                            b2 + threads * whole_part));

    T total{};
    for(auto &f : bag) total += f.get();
    return total;
}

#include <ratio>
#include <random>
#include <chrono>
#include <iostream>
#include <cinttypes>

int main() {
    using namespace std;
    using namespace chrono;

    random_device rd;
    mt19937 gen(rd());
    uniform_int_distribution<> random_0_9(0, 9);

    const auto size = 100 * mega::num;
    vector<int32_t> v1(size);
    vector<int32_t> v2(size);

    for(auto &x : v1) x = random_0_9(gen);
    for(auto &x : v2) x = random_0_9(gen);

    cout << "naive hamming distance: ";
    const auto naive_start = high_resolution_clock::now();
    cout << hamming_distance<int32_t>(v1.size(), begin(v1), begin(v2)) << endl;
    const auto naive_elapsed = high_resolution_clock::now() - naive_start;

    const auto n = thread::hardware_concurrency();

    cout << "parallel hamming distance: ";
    const auto parallel_start = high_resolution_clock::now();
    cout << parallel_hamming_distance<int32_t>(
                                                    n,
                                                    v1.size(),
                                                    begin(v1),
                                                    begin(v2)
                                              )
         << endl;
    const auto parallel_elapsed = high_resolution_clock::now() - parallel_start;

    auto count_microseconds =
        [](const high_resolution_clock::duration &elapsed) {
            return duration_cast<microseconds>(elapsed).count();
        };

    cout << "naive delay:    " << count_microseconds(naive_elapsed) << endl;
    cout << "parallel delay: " << count_microseconds(parallel_elapsed) << endl;
}

Note that I did not divide the result by the vector size.

Results on my machine (which show it doesn't gain much on a machine with only 2 physical cores...):

$ clang++ -march=corei7-avx hd.cpp -o hd -Ofast -pthread -std=c++1y -stdlib=libc++ -lcxxrt -ldl
$ ./hd
naive hamming distance: 89995190
parallel hamming distance: 89995190
naive delay:    52758
parallel delay: 47227

$ clang++ hd.cpp -o hd -O3 -pthread -std=c++1y -stdlib=libc++ -lcxxrt -ldl
$ ./hd
naive hamming distance: 90001042
parallel hamming distance: 90001042
naive delay:    53851
parallel delay: 46887

$ g++ -march=corei7-avx hd.cpp -o hd -Ofast -pthread -std=c++1y -Wl,--no-as-needed
$ ./hd
naive hamming distance: 90001825
parallel hamming distance: 90001825
naive delay:    55229
parallel delay: 49355

$ g++ hd.cpp -o hd -O3 -pthread -std=c++1y -Wl,--no-as-needed
$ ./hd
naive hamming distance: 89996171
parallel hamming distance: 89996171
naive delay:    54189
parallel delay: 44928

Also, I found that auto-vectorization had no effect here; the generated assembly probably needs to be inspected...

For examples of vectorization and compiler options, see this blog post of mine.

Answer 1 (score: 2)

There is a very simple way to optimize this.

int disti = 0;    
for(int i = 0; i < n; i++) disti += (a[i] != b[i]);
double dist = 1.0*disti/a.size();

This skips the branch and exploits the fact that a comparison returns 1 or 0. It also auto-vectorizes in GCC (use -ftree-vectorizer-verbose=1 to check), whereas the version in the question does not.

Edit:

I went ahead and tested this against the function from the question, which I call hamming_distance; my suggested simple fix, which I call hamming_distance_fix; and a version using the fix plus OpenMP, which I call hamming_distance_fix_omp. Here are the times:

hamming_distance          1.71 seconds
hamming_distance_fix      0.38 seconds  //SIMD
hamming_distance_fix_omp  0.12 seconds  //SIMD + MIMD

Here is the code. I did not use much syntactic sugar, but it should be easy to convert to the STL and so on... You can see the results here: http://coliru.stacked-crooked.com/a/31293bc88cff4794

//g++-4.8 -std=c++11 -O3 -fopenmp -msse2 -Wall -pedantic -pthread main.cpp && ./a.out
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

double hamming_distance(int* a, int*b, int n) {
    double dist = 0;
    for(int i=0; i<n; i++) {
        if (a[i] != b[i]) dist++;
    }
    return dist/n;
}
double hamming_distance_fix(int* a, int* b, int n) {
    int disti = 0;
    for(int i=0; i<n; i++) {
       disti += (a[i] != b[i]);
    }
    return 1.0*disti/n;
}

double hamming_distance_fix_omp(int* a, int* b, int n) {
    int disti = 0;
    #pragma omp parallel for reduction(+:disti)
    for(int i=0; i<n; i++) {
       disti += (a[i] != b[i]);
    }
    return 1.0*disti/n;
}

int main() {
    const int n = 1<<16;
    const int repeat = 10000;
    int *a = new int[n];
    int *b = new int[n];
    for(int i=0; i<n; i++) 
    { 
        a[i] = rand()%10;
        b[i] = rand()%10;
    }

    double dtime, dist;
    dtime = omp_get_wtime();
    for(int i=0; i<repeat; i++) dist = hamming_distance(a,b,n);
    dtime = omp_get_wtime() - dtime;
    printf("dist %f, time (s) %f\n", dist, dtime);

    dtime = omp_get_wtime();
    for(int i=0; i<repeat; i++) dist = hamming_distance_fix(a,b,n);
    dtime = omp_get_wtime() - dtime;
    printf("dist %f, time (s) %f\n", dist, dtime);

    dtime = omp_get_wtime();
    for(int i=0; i<repeat; i++) dist = hamming_distance_fix_omp(a,b,n);
    dtime = omp_get_wtime() - dtime;
    printf("dist %f, time (s) %f\n", dist, dtime);  
}

Answer 2 (score: 0)

As an observation, using a double is quite slow, even just for incrementing. So you should use an int in the loop (for the counting), and only use a double for the final division.

As a speedup, one approach I can think of that is worth testing is to use SSE instructions:

伪代码:

dist = 0
SSE register e1
SSE register e2
for each group of 4 elements in the vectors
  load 4 elements from a into e1
  load 4 elements from b into e2
  if e1 == e2 (all 4 lanes equal)
    continue
  else
    check each of the 4 elements individually (using e1 and e2) and add the mismatches to dist
dist /= n

In a real (non-pseudocode) program this can be written so that the compiler can use cmov instructions instead of branches.

The main advantage here is that we perform 4x fewer reads from memory.
The disadvantage is an extra check for each group of 4 elements.
Depending on how this ends up implemented in assembly (via cmov or via branches), it may be faster for vectors that have many adjacent positions holding equal values in both vectors.

I can't really say how it would perform compared to the standard solution, but it is at least worth testing.