I implemented a parallel merge sort algorithm from Cormen's well-known text. I wrote it in C using pthreads and compiled with MinGW on Win7 x64 (also tested later with GCC on Ubuntu, with the same results). My first approach at parallelization was naive... I spawned a new thread at every recursion level (which is actually what Cormen's pseudocode implies). However, this usually either takes far too long or crashes with a segmentation fault (I can only assume there is some hard limit on how many threads the system can handle). This seems to be a common newbie mistake for recursive parallelization; in fact I found a similar DISCUSSION on this site. So I instead used the recommendation from that thread, namely setting a threshold for problem size: if the function that spawns new threads is given a set smaller than the threshold (say 10,000 elements), it just operates on the elements directly rather than creating a new thread for such a small set.
Now everything seems to work fine. I've tabulated some results below. N is the problem size (a set of integers [1, 2, 3, ..., N], thoroughly scrambled) and the threshold is the value below which my parallel sort and parallel merge functions refuse to spawn new threads. The first table shows the sort times in ms, the second shows how many sort/merge worker threads were spawned in each case. Looking at the N = 1E6 and N = 1E7 rows in the bottom table, you can see that whenever I lower the threshold enough to allow more than ~8000 merge workers, I get a segmentation fault. Again, I assume that is due to some limit the system imposes on threads, and I'd be happy to hear more about that, but it's not my main question.
The main question is why the last row segfaults when trying to use a fairly high threshold, one that would have spawned an expected 15/33 worker threads (following the pattern of the previous row). Surely that is not more threads than my system can handle. The one instance that did complete (the bottom-right cell in the table) used about 1.2GB of RAM (my system has 6GB), and the threaded versions never seem to take more RAM than the 0-thread ones at the right of each row.
Clearly it has something to do with problem size and system resources... I'm hoping there's some piece of common knowledge I'm missing that makes the answer obvious.
Any ideas? Sorry for the long post... and thank you if you've read this far! I can post the source if it seems relevant.
EDIT: Source added for reference:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>       /* time(), for seeding srand() */
#include <sys/time.h>
#include <pthread.h>

const int N = 100000000;
const int SORT_THRESHOLD = 10000000;
const int MERGE_THRESHOLD = 10000000;

int sort_thread_count = 0;
int merge_thread_count = 0;

typedef struct s_pmergesort_args {
    int *vals_in, p, r, *vals_out, s;
} pmergesort_args;

typedef struct s_pmerge_args {
    int *temp, p1, r1, p2, r2, *vals_out, p3;
} pmerge_args;

void *p_merge_sort(void *v_pmsa);
void *p_merge(void *v_pma);
int binary_search(int val, int *temp, int p, int r);

int main() {
    int *values, i, rand1, rand2, temp, *sorted;
    long long rand1a, rand1b, rand2a, rand2b;
    struct timeval start, end;

    /* allocate values on heap and initialize */
    values = malloc(N * sizeof(int));
    sorted = malloc(N * sizeof(int));
    for (i = 0; i < N; i++) {
        values[i] = i + 1;
        sorted[i] = 0;
    }

    /* scramble
     * - complicated logic to maximize swapping
     * - lots of testing (not shown) was done to verify optimal swapping */
    srand(time(NULL));
    for (i = 0; i < N/10; i++) {
        rand1a = (long long)(N*((double)rand()/(1+(double)RAND_MAX)));
        rand1b = (long long)(N*((double)rand()/(1+(double)RAND_MAX)));
        rand1 = (int)((rand1a * rand1b + rand()) % N);
        rand2a = (long long)(N*((double)rand()/(1+(double)RAND_MAX)));
        rand2b = (long long)(N*((double)rand()/(1+(double)RAND_MAX)));
        rand2 = (int)((rand2a * rand2b + rand()) % N);
        temp = values[rand1];
        values[rand1] = values[rand2];
        values[rand2] = temp;
    }

    /* set up args for p_merge_sort */
    pmergesort_args pmsa;
    pmsa.vals_in = values;
    pmsa.p = 0;
    pmsa.r = N-1;
    pmsa.vals_out = sorted;
    pmsa.s = 0;

    /* sort */
    gettimeofday(&start, NULL);
    p_merge_sort(&pmsa);
    gettimeofday(&end, NULL);

    /* verify sorting */
    for (i = 1; i < N; i++) {
        if (sorted[i] < sorted[i-1]) {
            fprintf(stderr, "Error: array is not sorted.\n");
            exit(1);
        }
    }
    printf("Success: array is sorted.\n");
    printf("Sorting took %dms.\n", (int)(((end.tv_sec * 1000000 + end.tv_usec) - (start.tv_sec * 1000000 + start.tv_usec))/1000));

    free(values);
    free(sorted);
    printf("( sort threads created: %d )\n", sort_thread_count);
    printf("( merge threads created: %d )\n", merge_thread_count);
    return 0;
}

void *p_merge_sort(void *v_pmsa) {
    pmergesort_args pmsa = *((pmergesort_args *) v_pmsa);
    int *vals_in = pmsa.vals_in;
    int p = pmsa.p;
    int r = pmsa.r;
    int *vals_out = pmsa.vals_out;
    int s = pmsa.s;
    int n = r - p + 1;
    pthread_t worker;
    if (n > SORT_THRESHOLD) {
        sort_thread_count++;
    }
    if (n == 1) {
        vals_out[s] = vals_in[p];
    } else {
        int *temp = malloc(n * sizeof(int));
        int q = (p + r) / 2;
        int q_ = q - p + 1;
        pmergesort_args pmsa_l;
        pmsa_l.vals_in = vals_in;
        pmsa_l.p = p;
        pmsa_l.r = q;
        pmsa_l.vals_out = temp;
        pmsa_l.s = 0;
        pmergesort_args pmsa_r;
        pmsa_r.vals_in = vals_in;
        pmsa_r.p = q+1;
        pmsa_r.r = r;
        pmsa_r.vals_out = temp;
        pmsa_r.s = q_;
        /* spawn a worker for the left half only above the threshold */
        if (n > SORT_THRESHOLD) {
            pthread_create(&worker, NULL, p_merge_sort, &pmsa_l);
        } else {
            p_merge_sort(&pmsa_l);
        }
        p_merge_sort(&pmsa_r);
        if (n > SORT_THRESHOLD) {
            pthread_join(worker, NULL);
        }
        pmerge_args pma;
        pma.temp = temp;
        pma.p1 = 0;
        pma.r1 = q_ - 1;
        pma.p2 = q_;
        pma.r2 = n - 1;
        pma.vals_out = vals_out;
        pma.p3 = s;
        p_merge(&pma);
        free(temp);
    }
    return NULL;
}

void *p_merge(void *v_pma) {
    pmerge_args pma = *((pmerge_args *) v_pma);
    int *temp = pma.temp;
    int p1 = pma.p1;
    int r1 = pma.r1;
    int p2 = pma.p2;
    int r2 = pma.r2;
    int *vals_out = pma.vals_out;
    int p3 = pma.p3;
    int n1 = r1 - p1 + 1;
    int n2 = r2 - p2 + 1;
    int q1, q2, q3, t;
    pthread_t worker;
    /* ensure the first run is the longer one */
    if (n1 < n2) {
        t = p1; p1 = p2; p2 = t;
        t = r1; r1 = r2; r2 = t;
        t = n1; n1 = n2; n2 = t;
    }
    if (n1 > MERGE_THRESHOLD) {
        merge_thread_count++;
    }
    if (n1 == 0) {
        return NULL;
    } else {
        q1 = (p1 + r1) / 2;
        q2 = binary_search(temp[q1], temp, p2, r2);
        q3 = p3 + (q1 - p1) + (q2 - p2);
        vals_out[q3] = temp[q1];
        pmerge_args pma_l;
        pma_l.temp = temp;
        pma_l.p1 = p1;
        pma_l.r1 = q1-1;
        pma_l.p2 = p2;
        pma_l.r2 = q2-1;
        pma_l.vals_out = vals_out;
        pma_l.p3 = p3;
        if (n1 > MERGE_THRESHOLD) {
            pthread_create(&worker, NULL, p_merge, &pma_l);
        } else {
            p_merge(&pma_l);
        }
        pmerge_args pma_r;
        pma_r.temp = temp;
        pma_r.p1 = q1+1;
        pma_r.r1 = r1;
        pma_r.p2 = q2;
        pma_r.r2 = r2;
        pma_r.vals_out = vals_out;
        pma_r.p3 = q3+1;
        p_merge(&pma_r);
        if (n1 > MERGE_THRESHOLD) {
            pthread_join(worker, NULL);
        }
    }
    return NULL;
}

/* first index in temp[p..r] whose value is >= val, or r+1 if none */
int binary_search(int val, int *temp, int p, int r) {
    int low = p;
    int mid;
    int high = (p > r+1)? p : r+1;
    while (low < high) {
        mid = (low + high) / 2;
        if (val <= temp[mid]) {
            high = mid;
        } else {
            low = mid + 1;
        }
    }
    return high;
}
EDIT 2: Added new images below showing the "max" and "total" RAM used by each version (max meaning the highest simultaneous allocation/usage, total meaning the sum of all allocation requests over the program's lifetime). These suggest that at N = 1E8 and threshold = 1E7 I should get a max usage of 3.2GB (which my system should be able to support). But again... I'm guessing it has to do with some other limitation in the pthread library... not my actual system resources.
Answer 0 (score: 3)
It looks like it's running out of memory. In your example, if the code is run sequentially, the most memory it has allocated at any one time is 1.6GB. When threading, it uses more than 3GB. I put some wrappers around the malloc/free functions and got this result:
Allocation of 12500000 bytes failed with 3074995884 bytes already allocated.
It's easy to see that memory usage would be higher when threaded. In that case, it will be sorting both the left and right halves of the overall array at the same time, allocating two large temp buffers to do it. When run sequentially, the temp buffer for the left half is freed before the right half gets sorted.
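A rough back-of-envelope check of those numbers (my own, assuming 4-byte int and N = 1E8): the values and sorted arrays are 400MB each. Sequentially, the live temp buffers along a single recursion path sum to about 400 + 200 + 100 + ... ≈ 800MB, for a peak near 0.8 + 0.8 = 1.6GB. With threshold = 1E7, every node above the threshold sorts both halves concurrently, so each of the top four levels holds roughly 400MB of live temp buffers at once (1×400, 2×200, 4×100, 8×50), plus the sequential tails below the threshold, which pushes the peak past 3GB.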
Here are the wrappers I used:
#include <assert.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static size_t total_allocated = 0;
static size_t max_allocated = 0;
static pthread_mutex_t total_allocated_mutex = PTHREAD_MUTEX_INITIALIZER;

static void *allocate(int n)
{
    void *result = 0;
    pthread_mutex_lock(&total_allocated_mutex);
    result = malloc(n);
    if (!result) {
        fprintf(stderr, "Allocation of %d bytes failed with %zu bytes already allocated\n", n, total_allocated);
    }
    assert(result);
    total_allocated += n;
    if (total_allocated > max_allocated) {
        max_allocated = total_allocated;
    }
    pthread_mutex_unlock(&total_allocated_mutex);
    return result;
}

static void deallocate(void *p, int n)
{
    pthread_mutex_lock(&total_allocated_mutex);
    total_allocated -= n;
    free(p);
    pthread_mutex_unlock(&total_allocated_mutex);
}
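To hook the wrappers in, each malloc/free pair in the sort gets swapped for allocate/deallocate, passing the size through since free() has no way to report it. A hypothetical drop-in for the pair in p_merge_sort:

int *temp = allocate(n * sizeof(int));   /* was: malloc(n * sizeof(int)) */
/* ... recursive sorts and the p_merge into vals_out ... */
deallocate(temp, n * sizeof(int));       /* was: free(temp) */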
Answer 1 (score: 2)
I ran it and got:
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 7120.0x14dc]
0x004017df in p_merge (v_pma=0x7882c120) at t.c:177
177 vals_out[q3] = temp[q1];
(gdb) p q3
$1 = 58
(gdb) p vals_out
$2 = (int *) 0x0
(gdb)
That's a NULL pointer dereference. I would put an assert after the allocation of temp to make sure the allocation succeeded:
int *temp = malloc(n * sizeof(int));
assert(temp);
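One caveat: assert() compiles to nothing when NDEBUG is defined, so if you want the guard to survive release builds, an explicit check is a safer sketch:

int *temp = malloc(n * sizeof(int));
if (temp == NULL) {
    /* unlike assert(), this check is not compiled out by -DNDEBUG */
    fprintf(stderr, "malloc of %zu bytes failed\n", n * sizeof(int));
    exit(EXIT_FAILURE);
}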
Analyzing your algorithm a bit, it looks like you are pre-allocating the memory needed for the merges on the way down the recursion. You may want to consider changing the algorithm to do the allocation when the merge is actually performed.
But, if I remember correctly, merge sort typically allocates the second array at the very top of the algorithm, before any merging happens, and then as the recursive calls unwind they alternate back and forth between the two arrays as the merge runs grow longer. This way there is only a single malloc call in the entire algorithm. Besides using less memory, it will perform better too.
My SWAG at modifying your code to use a single temp array, allocated once at the top of the algorithm, is shown below.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>       /* time(), for seeding srand() */
#include <sys/time.h>
#include <pthread.h>

const int N = 100000000;
const int SORT_THRESHOLD = 10000000;
const int MERGE_THRESHOLD = 10000000;

int sort_thread_count = 0;
int merge_thread_count = 0;

typedef struct s_pmergesort_args {
    int *vals_in, p, r, *vals_out, s, *temp;
} pmergesort_args;

typedef struct s_pmerge_args {
    int *temp, p1, r1, p2, r2, *vals_out, p3;
} pmerge_args;

void *p_merge_sort(void *v_pmsa);
void *p_merge(void *v_pma);
int binary_search(int val, int *temp, int p, int r);

int main() {
    int *values, i, rand1, rand2, temp, *sorted, *scratch;
    long long rand1a, rand1b, rand2a, rand2b;
    struct timeval start, end;

    /* allocate values on heap and initialize */
    values = malloc(N * sizeof(int));
    sorted = malloc(N * sizeof(int));
    scratch = malloc(N * sizeof(int));
    for (i = 0; i < N; i++) {
        values[i] = i + 1;
        sorted[i] = 0;
    }

    /* scramble
     * - complicated logic to maximize swapping
     * - lots of testing (not shown) was done to verify optimal swapping */
    srand(time(NULL));
    for (i = 0; i < N/10; i++) {
        rand1a = (long long)(N*((double)rand()/(1+(double)RAND_MAX)));
        rand1b = (long long)(N*((double)rand()/(1+(double)RAND_MAX)));
        rand1 = (int)((rand1a * rand1b + rand()) % N);
        rand2a = (long long)(N*((double)rand()/(1+(double)RAND_MAX)));
        rand2b = (long long)(N*((double)rand()/(1+(double)RAND_MAX)));
        rand2 = (int)((rand2a * rand2b + rand()) % N);
        temp = values[rand1];
        values[rand1] = values[rand2];
        values[rand2] = temp;
    }

    /* set up args for p_merge_sort */
    pmergesort_args pmsa;
    pmsa.vals_in = values;
    pmsa.p = 0;
    pmsa.r = N-1;
    pmsa.vals_out = sorted;
    pmsa.s = 0;
    pmsa.temp = scratch;

    /* sort */
    gettimeofday(&start, NULL);
    p_merge_sort(&pmsa);
    gettimeofday(&end, NULL);

    /* verify sorting */
    for (i = 1; i < N; i++) {
        if (sorted[i] < sorted[i-1]) {
            fprintf(stderr, "Error: array is not sorted.\n");
            exit(1);
        }
    }
    printf("Success: array is sorted.\n");
    printf("Sorting took %dms.\n", (int)(((end.tv_sec * 1000000 + end.tv_usec) - (start.tv_sec * 1000000 + start.tv_usec))/1000));

    free(values);
    free(sorted);
    free(scratch);
    printf("( sort threads created: %d )\n", sort_thread_count);
    printf("( merge threads created: %d )\n", merge_thread_count);
    return 0;
}

void *p_merge_sort(void *v_pmsa) {
    pmergesort_args pmsa = *((pmergesort_args *) v_pmsa);
    int *vals_in = pmsa.vals_in;
    int p = pmsa.p;
    int r = pmsa.r;
    int *vals_out = pmsa.vals_out;
    int s = pmsa.s;
    int *scratch = pmsa.temp;
    int n = r - p + 1;
    pthread_t worker;
    if (n > SORT_THRESHOLD) {
        sort_thread_count++;
    }
    if (n == 1) {
        vals_out[s] = vals_in[p];
    } else {
        int q = (p + r) / 2;
        int q_ = q - p + 1;
        /* the recursive calls swap the roles of vals_out and scratch,
         * so the whole sort reuses the single buffer allocated in main */
        pmergesort_args pmsa_l;
        pmsa_l.vals_in = vals_in;
        pmsa_l.p = p;
        pmsa_l.r = q;
        pmsa_l.vals_out = scratch;
        pmsa_l.s = p;
        pmsa_l.temp = vals_out;
        pmergesort_args pmsa_r;
        pmsa_r.vals_in = vals_in;
        pmsa_r.p = q+1;
        pmsa_r.r = r;
        pmsa_r.vals_out = scratch;
        pmsa_r.s = q+1;
        pmsa_r.temp = vals_out;
        if (n > SORT_THRESHOLD) {
            pthread_create(&worker, NULL, p_merge_sort, &pmsa_l);
        } else {
            p_merge_sort(&pmsa_l);
        }
        p_merge_sort(&pmsa_r);
        if (n > SORT_THRESHOLD) {
            pthread_join(worker, NULL);
        }
        pmerge_args pma;
        pma.temp = scratch + p;
        pma.p1 = 0;
        pma.r1 = q_ - 1;
        pma.p2 = q_;
        pma.r2 = n - 1;
        pma.vals_out = vals_out + p;
        pma.p3 = s - p;
        p_merge(&pma);
    }
    return NULL;
}

void *p_merge(void *v_pma) {
    pmerge_args pma = *((pmerge_args *) v_pma);
    int *temp = pma.temp;
    int p1 = pma.p1;
    int r1 = pma.r1;
    int p2 = pma.p2;
    int r2 = pma.r2;
    int *vals_out = pma.vals_out;
    int p3 = pma.p3;
    int n1 = r1 - p1 + 1;
    int n2 = r2 - p2 + 1;
    int q1, q2, q3, t;
    pthread_t worker;
    /* ensure the first run is the longer one */
    if (n1 < n2) {
        t = p1; p1 = p2; p2 = t;
        t = r1; r1 = r2; r2 = t;
        t = n1; n1 = n2; n2 = t;
    }
    if (n1 > MERGE_THRESHOLD) {
        merge_thread_count++;
    }
    if (n1 == 0) {
        return NULL;
    } else {
        q1 = (p1 + r1) / 2;
        q2 = binary_search(temp[q1], temp, p2, r2);
        q3 = p3 + (q1 - p1) + (q2 - p2);
        vals_out[q3] = temp[q1];
        pmerge_args pma_l;
        pma_l.temp = temp;
        pma_l.p1 = p1;
        pma_l.r1 = q1-1;
        pma_l.p2 = p2;
        pma_l.r2 = q2-1;
        pma_l.vals_out = vals_out;
        pma_l.p3 = p3;
        if (n1 > MERGE_THRESHOLD) {
            pthread_create(&worker, NULL, p_merge, &pma_l);
        } else {
            p_merge(&pma_l);
        }
        pmerge_args pma_r;
        pma_r.temp = temp;
        pma_r.p1 = q1+1;
        pma_r.r1 = r1;
        pma_r.p2 = q2;
        pma_r.r2 = r2;
        pma_r.vals_out = vals_out;
        pma_r.p3 = q3+1;
        p_merge(&pma_r);
        if (n1 > MERGE_THRESHOLD) {
            pthread_join(worker, NULL);
        }
    }
    return NULL;
}

/* first index in temp[p..r] whose value is >= val, or r+1 if none */
int binary_search(int val, int *temp, int p, int r) {
    int low = p;
    int mid;
    int high = (p > r+1)? p : r+1;
    while (low < high) {
        mid = (low + high) / 2;
        if (val <= temp[mid]) {
            high = mid;
        } else {
            low = mid + 1;
        }
    }
    return high;
}
Answer 2 (score: 0)
You are stressing your system far too much; for speeding up a parallel implementation this makes little sense. Parallelization comes at a cost, and when you flood your system with threads like this, the system as a whole has to do a lot of extra work. Threads are not for free.
In particular, for your "problem" that the program crashes when you ask for too many threads: that one is entirely your fault. Read the man page for pthread_create. It states that the function returns a value, and it does so for a reason.
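For illustration, a minimal sketch of acting on that return value, reusing the names from the question's code (pthread_create returns the error number directly, typically EAGAIN when thread resources are exhausted; strerror needs <string.h>):

int rc = pthread_create(&worker, NULL, p_merge_sort, &pmsa_l);
if (rc != 0) {
    /* couldn't spawn a worker: report it and fall back to the
     * sequential path instead of crashing later */
    fprintf(stderr, "pthread_create failed: %s\n", strerror(rc));
    p_merge_sort(&pmsa_l);
}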
To get a speedup (which I suppose is what you're after), you can't expect to gain more than the number of physical cores in your system. Sometimes it is good to have a few more threads than cores (say, twice as many), but the overhead of thread creation quickly exceeds whatever you might gain beyond that.
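For reference, on Linux/glibc and most Unixes you can query the number of online cores and cap the worker count from that; this is a sketch using _SC_NPROCESSORS_ONLN, a common extension rather than strict POSIX (on Windows you'd use GetSystemInfo instead):

#include <unistd.h>

static long core_count(void) {
    long n = sysconf(_SC_NPROCESSORS_ONLN);  /* online cores, -1 on failure */
    return (n < 1) ? 1 : n;                  /* conservative fallback */
}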
Then, mergesort is an algorithm that is usually bound by RAM access, not by comparisons. RAM access (even streamed, as in mergesort) is orders of magnitude slower than the CPU. In addition, your memory bus is not a parallel device; the only parallelism you get in memory access is the cache (if any). Halving the memory footprint may amount to all the performance gain you'll get. In your code you even make things worse by allocating memory further down, inside the individual thread calls, because allocating memory has a cost of its own: the system has to coordinate those allocations.
To give it another go, first write a recursive mergesort with good memory handling and access patterns. Allocate just one big buffer at the top node of the recursion and hand parts of it down to the recursive calls.
Create a separate merge routine that merges two sorted buffers into a third. Benchmark it and compute the microseconds per item that your algorithm spends; from your CPU speed, work out the number of cycles wasted per sorted item. Read the assembler your compiler generates for the merge, and if you find it looks too complicated, try to figure out how to improve it.
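As a starting point, a minimal sequential routine that merges two sorted runs into a third buffer might look like this (a plain sketch to benchmark against, not a tuned implementation):

/* merge sorted a[0..na-1] and b[0..nb-1] into out[0..na+nb-1] */
static void merge_runs(const int *a, int na, const int *b, int nb, int *out) {
    int i = 0, j = 0, k = 0;
    while (i < na && j < nb)
        out[k++] = (a[i] <= b[j]) ? a[i++] : b[j++];
    while (i < na) out[k++] = a[i++];   /* drain the rest of a */
    while (j < nb) out[k++] = b[j++];   /* drain the rest of b */
}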
After that, start adding parallelism to the recursive function.