This is an optimized implementation of matrix multiplication; the routine performs the operation C := C + A * B, where A, B, and C are n×n matrices stored in column-major format. On exit, A and B retain their input values.
void matmul_optimized(int n, int *A, int *B, int *C)
{
    // Bitwise variant of the kernel: each product term is formed with &
    // and accumulated with ^, so no integer multiplication is used.
    int i, j, k;
    int cij;
    for (i = 0; i < n; ++i) {
        for (j = 0; j < n; ++j) {
            cij = C[i + j * n]; // start from the existing value of C(i,j) in a separate variable
            for (k = 0; k < n; ++k) {
                cij ^= A[i + k * n] & B[k + j * n]; // "multiply" each pair of terms with &, "add" with ^
            }
            C[i + j * n] = cij; // write the accumulated result back to C(i,j)
        }
    }
}
How can the matrix multiplication be made even faster, starting from the function above?
The function is tested on matrices of size up to 2048 × 2048.
The matmul_optimized function is driven by matmul:
#include <stdio.h>
#include <stdlib.h>
#include "cpucycles.c"
#include "helper_functions.c"
#include "matmul_reference.c"
#include "matmul_optimized.c"
int main()
{
    int i, j;
    int n = 1024; // Number of rows or columns in the square matrices
    int *A, *B; // Input matrices
    int *C1, *C2; // Output matrices from the reference and optimized implementations
    // Performance and correctness measurement declarations
    long int CLOCK_start, CLOCK_end, CLOCK_total, CLOCK_ref, CLOCK_opt;
    long int COUNTER, REPEAT = 5;
    int difference;
    float speedup;
    // Allocate memory for the matrices
    A = malloc(n * n * sizeof(int));
    B = malloc(n * n * sizeof(int));
    C1 = malloc(n * n * sizeof(int));
    C2 = malloc(n * n * sizeof(int));
    // Fill bits in A, B, C1
    fill(A, n * n);
    fill(B, n * n);
    fill(C1, n * n);
    // Initialize C2 = C1
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            C2[i * n + j] = C1[i * n + j];
    // Measure performance of the reference implementation
    CLOCK_total = 0;
    for (COUNTER = 0; COUNTER < REPEAT; COUNTER++)
    {
        CLOCK_start = cpucycles();
        matmul_reference(n, A, B, C1);
        CLOCK_end = cpucycles();
        CLOCK_total = CLOCK_total + CLOCK_end - CLOCK_start;
    }
    CLOCK_ref = CLOCK_total / REPEAT;
    printf("n=%d Avg cycle count for reference implementation = %ld\n", n, CLOCK_ref);
    // Measure performance of the optimized implementation
    CLOCK_total = 0;
    for (COUNTER = 0; COUNTER < REPEAT; COUNTER++)
    {
        CLOCK_start = cpucycles();
        matmul_optimized(n, A, B, C2);
        CLOCK_end = cpucycles();
        CLOCK_total = CLOCK_total + CLOCK_end - CLOCK_start;
    }
    CLOCK_opt = CLOCK_total / REPEAT;
    printf("n=%d Avg cycle count for optimized implementation = %ld\n", n, CLOCK_opt);
    speedup = (float)CLOCK_ref / (float)CLOCK_opt;
    // Check correctness by comparing C1 and C2
    difference = 0;
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            difference += abs(C1[i * n + j] - C2[i * n + j]); // absolute values, so mismatches cannot cancel out
    if (difference == 0)
        printf("Speedup factor = %.2f\n", speedup);
    if (difference != 0)
        printf("Reference and optimized implementations do not match\n");
    //print(C2, n);
    free(A);
    free(B);
    free(C1);
    free(C2);
    return 0;
}
Answer 0 (score: 0)
You can try an algorithm such as Strassen or Coppersmith–Winograd; this is also a good example. Or try parallelizing the computation with something like future::task or std::thread; a sketch of that idea is shown below.
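Since the question's code is C, here is a minimal sketch of the same idea using POSIX threads rather than std::thread. The names (matmul_threaded, matmul_rows, NTHREADS) and the row-wise partition of C are illustrative assumptions, not part of the original answer; the thread count would need tuning for the machine.

#include <pthread.h>

#define NTHREADS 4 /* illustrative; tune for the machine */

struct matmul_job {
    int n, row_begin, row_end;
    const int *A, *B;
    int *C;
};

/* Each thread computes a contiguous block of rows of C, so no two
   threads ever write the same element and no locking is needed. */
static void *matmul_rows(void *arg)
{
    struct matmul_job *job = arg;
    int n = job->n;
    for (int i = job->row_begin; i < job->row_end; ++i) {
        for (int j = 0; j < n; ++j) {
            int cij = job->C[i + j * n];
            for (int k = 0; k < n; ++k)
                cij ^= job->A[i + k * n] & job->B[k + j * n];
            job->C[i + j * n] = cij;
        }
    }
    return NULL;
}

void matmul_threaded(int n, int *A, int *B, int *C)
{
    pthread_t threads[NTHREADS];
    struct matmul_job jobs[NTHREADS];
    int rows_per_thread = n / NTHREADS;
    for (int t = 0; t < NTHREADS; ++t) {
        int begin = t * rows_per_thread;
        int end = (t == NTHREADS - 1) ? n : begin + rows_per_thread; /* last thread takes any remainder */
        jobs[t] = (struct matmul_job){ n, begin, end, A, B, C };
        pthread_create(&threads[t], NULL, matmul_rows, &jobs[t]);
    }
    for (int t = 0; t < NTHREADS; ++t)
        pthread_join(threads[t], NULL);
}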
Answer 1 (score: 0)
Optimizing matrix-matrix multiplication requires careful attention to a number of issues:
First, you need to be able to use vector instructions. Only vector instructions can access the parallelism inherent in the architecture. So either your compiler has to be able to map to vector instructions automatically, or you have to do the mapping by hand, for example by calling the vector intrinsics library for AVX2 instructions (for the x86 architecture).
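For the bitwise kernel in the question, a minimal AVX2 intrinsics sketch might look like the following. This is only an illustration, not a tuned kernel: it assumes an x86 CPU with AVX2 and that n is a multiple of 8, and it vectorizes over i so that, in column-major layout, the accesses to A and C are contiguous.

#include <immintrin.h>

/* C(i,j) ^= A(i,k) & B(k,j), column-major, eight rows of C at a time. */
void matmul_avx2(int n, const int *A, const int *B, int *C)
{
    for (int j = 0; j < n; ++j) {
        for (int i = 0; i < n; i += 8) {               /* assumes n % 8 == 0 */
            /* accumulate the eight entries C(i..i+7, j) in a register */
            __m256i c = _mm256_loadu_si256((const __m256i *)&C[i + j * n]);
            for (int k = 0; k < n; ++k) {
                __m256i a = _mm256_loadu_si256((const __m256i *)&A[i + k * n]);
                __m256i b = _mm256_set1_epi32(B[k + j * n]); /* broadcast B(k,j) */
                c = _mm256_xor_si256(c, _mm256_and_si256(a, b));
            }
            _mm256_storeu_si256((__m256i *)&C[i + j * n], c);
        }
    }
}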
Next, you need to pay careful attention to the memory hierarchy. If you do not, performance can easily drop below 5% of peak.
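One standard way to respect the memory hierarchy is loop tiling (cache blocking), so that the tiles of A, B, and C currently being worked on stay resident in cache. A minimal sketch for the question's kernel follows; the tile size BS = 64 is only an illustrative guess that would need tuning, and n is assumed to be a multiple of BS (which holds for 1024 and 2048).

#define BS 64 /* illustrative tile size; tune so three BS x BS tiles fit in cache */

/* Same bitwise product, computed one BS x BS tile at a time so the
   tiles of A and B are reused while they are still cached. */
void matmul_blocked(int n, const int *A, const int *B, int *C)
{
    for (int jj = 0; jj < n; jj += BS)
        for (int kk = 0; kk < n; kk += BS)
            for (int ii = 0; ii < n; ii += BS)
                for (int j = jj; j < jj + BS; ++j)
                    for (int k = kk; k < kk + BS; ++k) {
                        int b = B[k + j * n];
                        for (int i = ii; i < ii + BS; ++i)
                            C[i + j * n] ^= A[i + k * n] & b;
                    }
}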
Once you do this correctly, you will hopefully have broken the computation up into chunks small enough that you can also parallelize it with OpenMP or pthreads.
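Building on the blocked sketch above (reusing its BS tile size), a minimal OpenMP version might parallelize the outermost loop over column blocks of C; different threads then write disjoint columns of C, so no further synchronization is needed. This assumes the compiler is invoked with OpenMP enabled, e.g. -fopenmp for gcc.

void matmul_blocked_omp(int n, const int *A, const int *B, int *C)
{
    /* Each thread owns a disjoint set of column blocks of C. */
    #pragma omp parallel for
    for (int jj = 0; jj < n; jj += BS)
        for (int kk = 0; kk < n; kk += BS)
            for (int ii = 0; ii < n; ii += BS)
                for (int j = jj; j < jj + BS; ++j)
                    for (int k = kk; k < kk + BS; ++k) {
                        int b = B[k + j * n];
                        for (int i = ii; i < ii + BS; ++i)
                            C[i + j * n] ^= A[i + k * n] & b;
                    }
}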
A document that carefully walks through what is required can be found at http://www.cs.utexas.edu/users/flame/laff/pfhp/LAFF-On-PfHP.html. (It is a work in progress.) By the end of it you will attain performance close to that achieved by a high-performance library such as Intel's Math Kernel Library (MKL) or the BLAS-like Library Instantiation Software (BLIS).
(And, as it turns out, you can then also effectively incorporate Strassen's algorithm. But that is another story, told in Section 3.5.3 of those notes.)
You may find the following thread relevant: How does BLAS get such extreme performance?