In performance-critical code I have two large matrices (with dimensions in the thousands), expectations and realizations.
They have the same size but contain different values. Both matrices are partitioned column-wise in the same way, with each submatrix having a different number of columns, like this:
submat1 submat2 submat3
-----------------------------
|...........| .......| .....|
|...........| .......| .....|
|...........| .......| .....|
|...........| .......| .....|
|...........| .......| .....|
-----------------------------
I need to fill a third matrix as fast as possible, in the following way (in pseudocode):
for each submatrix
    for each row in submatrix
        pos = argmax(expectations(row, start_submatrix(col):end_submatrix(col)))
        result(row, col) = realizations(row, pos)
That is, for each submatrix I scan every row, find the position of the largest element within the corresponding submatrix of expectations, and place the corresponding value from the realizations matrix into the result matrix.
I want to do this as fast as possible (perhaps through smart parallelization / cache optimization), since this function accounts for roughly 40% of the time spent in my algorithm. I am using Visual Studio 15.9.6 and Windows 10.
Here is my reference C++ implementation, using Armadillo (column-major) matrices:
#include <iostream>
#include <chrono>
#include <vector>
#include <armadillo>

///Trivial implementation, for illustration purposes
void find_max_vertical_trivial(const arma::mat& expectations, const arma::mat& realizations, arma::mat& results, const arma::uvec& list, const int max_size_action)
{
    const int number_columns_results = results.n_cols;
    const int number_rows = expectations.n_rows;
#pragma omp parallel for schedule(static)
    for (int submatrix_to_process = 0; submatrix_to_process < number_columns_results; submatrix_to_process++)
    {
        const int start_loop = submatrix_to_process * max_size_action;
        //Looping over rows
        for (int current_row = 0; current_row < number_rows; current_row++)
        {
            int candidate = start_loop;
            const int end_loop = candidate + list(submatrix_to_process);
            //Finding the optimal action
            for (int act = candidate + 1; act < end_loop; act++)
            {
                if (expectations(current_row, act) > expectations(current_row, candidate))
                    candidate = act;
            }
            //Placing the corresponding realization into the results
            results(current_row, submatrix_to_process) = realizations(current_row, candidate);
        }
    }
}
This is the fastest version I have come up with. Can it be improved further?
///Stripped all armadillo functionality, to bare C
void find_max_vertical_optimized(const arma::mat& expectations, const arma::mat& realizations, arma::mat& values, const arma::uvec& list, const int max_block)
{
    const int n_columns = values.n_cols;
    const int number_rows = expectations.n_rows;
    const auto exp_ptr = expectations.memptr();
    const auto real_ptr = realizations.memptr();
    const auto values_ptr = values.memptr();
    const auto list_ptr = list.memptr();
#pragma omp parallel for schedule(static)
    for (int col_position = 0; col_position < n_columns; col_position++)
    {
        const int start_loop = col_position * max_block * number_rows;
        const int end_loop = start_loop + list_ptr[col_position] * number_rows;
        const int position_value = col_position * number_rows;
        for (int row_position = 0; row_position < number_rows; row_position++)
        {
            int candidate = start_loop;
            const auto st_exp = exp_ptr + row_position;
            const auto st_real = real_ptr + row_position;
            const auto st_val = values_ptr + row_position;
            for (int new_candidate = candidate + number_rows; new_candidate < end_loop; new_candidate += number_rows)
            {
                if (st_exp[new_candidate] > st_exp[candidate])
                    candidate = new_candidate;
            }
            st_val[position_value] = st_real[candidate];
        }
    }
}
And the test section, where I compare the performance of the two methods:
typedef std::chrono::microseconds dur;
const double dur2seconds = 1e6;

//Testing the two methods
int main()
{
    const int max_cols_submatrix = 6; //Typical size: 3-100
    const int n_test = 500;
    const int number_rows = 2000; //typical size: 1000-10000
    std::vector<int> size_to_test = { 4, 10, 40, 100, 1000, 5000 }; //typical size: 10-5000
    arma::vec time_test(n_test, arma::fill::zeros);
    arma::vec time_trivial(n_test, arma::fill::zeros);
    for (const auto& size_grid : size_to_test) {
        arma::mat expectations(number_rows, max_cols_submatrix * size_grid, arma::fill::randn);
        arma::mat realizations(number_rows, max_cols_submatrix * size_grid, arma::fill::randn);
        arma::mat reference_values(number_rows, size_grid, arma::fill::zeros);
        arma::mat optimized_values(number_rows, size_grid, arma::fill::zeros);
        arma::uvec number_columns_per_submatrix(size_grid);
        //Generate a random number of columns for each submatrix
        number_columns_per_submatrix = arma::conv_to<arma::uvec>::from(arma::vec(size_grid, arma::fill::randu) * max_cols_submatrix);
        for (int i = 0; i < n_test; i++) {
            auto st_meas = std::chrono::high_resolution_clock::now();
            find_max_vertical_trivial(expectations, realizations, reference_values, number_columns_per_submatrix, max_cols_submatrix);
            time_trivial(i) = std::chrono::duration_cast<dur>(std::chrono::high_resolution_clock::now() - st_meas).count() / dur2seconds;
            st_meas = std::chrono::high_resolution_clock::now();
            find_max_vertical_optimized(expectations, realizations, optimized_values, number_columns_per_submatrix, max_cols_submatrix);
            time_test(i) = std::chrono::duration_cast<dur>(std::chrono::high_resolution_clock::now() - st_meas).count() / dur2seconds;
            const auto diff = arma::sum(arma::sum(arma::abs(reference_values - optimized_values)));
            if (diff > 1e-3)
            {
                std::cout << "Error: " << diff << "\n";
                throw std::runtime_error("Error");
            }
        }
        std::cout << "grid size:" << size_grid << "\n";
        const double mean_time_trivial = arma::mean(time_trivial);
        const double mean_time_opt = arma::mean(time_test);
        std::cout << "Trivial: " << mean_time_trivial << " s +/-" << 1.95 * arma::stddev(time_trivial) / sqrt(n_test) << "\n";
        std::cout << "Optimized: " << mean_time_opt << " s (" << (mean_time_opt / mean_time_trivial - 1) * 100.0 << " %) " << "+/-" << 1.95 * arma::stddev(time_test) / sqrt(n_test) << "\n";
    }
}
Answer (score: 0)
You can probably cache-block this with a SIMD loop that reads 8 or 12 full vectors' worth of rows, then the same rows of the next column. (So with 32-bit elements, 8*4 or 8*8 rows in parallel.) You are using MSVC, which supports x86 SSE2/AVX2 intrinsics such as _mm256_load_ps and _mm256_max_ps, or _mm256_max_epi32.
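As a rough illustration of the blocked-SIMD idea above (not code from the answer), here is a minimal AVX sketch for double-precision arma::mat data, so 4 lanes per __m256d rather than the 8 x 32-bit lanes mentioned. It assumes AVX is available and that the row count is a multiple of 4; the name find_max_vertical_avx and all of its structure are hypothetical:

#include <immintrin.h>
#include <armadillo>

void find_max_vertical_avx(const arma::mat& expectations, const arma::mat& realizations, arma::mat& results, const arma::uvec& list, const int max_size_action)
{
    const int n_sub = static_cast<int>(results.n_cols);
    const int n_rows = static_cast<int>(expectations.n_rows);
    const double* exp_ptr = expectations.memptr();
    const double* real_ptr = realizations.memptr();
    double* res_ptr = results.memptr();
#pragma omp parallel for schedule(static)
    for (int sub = 0; sub < n_sub; sub++)
    {
        const int first_col = sub * max_size_action;
        const int n_cols_sub = static_cast<int>(list(sub));
        //Process 4 rows (one __m256d) per iteration; assumes n_rows % 4 == 0
        for (int row = 0; row < n_rows; row += 4)
        {
            //Running maximum and the column index where it was found, one per lane
            __m256d best_val = _mm256_loadu_pd(exp_ptr + (size_t)first_col * n_rows + row);
            __m256d best_col = _mm256_set1_pd((double)first_col);
            for (int c = first_col + 1; c < first_col + n_cols_sub; c++)
            {
                const __m256d v  = _mm256_loadu_pd(exp_ptr + (size_t)c * n_rows + row);
                const __m256d gt = _mm256_cmp_pd(v, best_val, _CMP_GT_OQ);
                best_val = _mm256_blendv_pd(best_val, v, gt);
                best_col = _mm256_blendv_pd(best_col, _mm256_set1_pd((double)c), gt);
            }
            //The winning column differs per lane, so gather the realizations scalar-wise
            alignas(32) double win_col[4];
            _mm256_store_pd(win_col, best_col);
            for (int lane = 0; lane < 4; lane++)
            {
                const int c = (int)win_col[lane];
                res_ptr[(size_t)sub * n_rows + row + lane] = real_ptr[(size_t)c * n_rows + row + lane];
            }
        }
    }
}

Unaligned loads are used because arma::mat storage is not guaranteed to be 32-byte aligned; any rows left over when n_rows is not a multiple of 4 would need a scalar tail loop.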
If you start at an alignment boundary, then hopefully you fully use all of the cache lines you touch. Then use the same access pattern for the output. (So you are reading 2 to 6 contiguous cache lines, with a stride between read/write blocks.)
Or possibly record temporary results in a compact buffer (one value per row per segment) before blowing through more cache by writing copies of the same element to every column. But try it both ways; mixing reads and writes may let the CPU overlap more work and find more memory-level parallelism.
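One possible reading of the "compact temporary results" suggestion, sketched below as a two-pass variant (again hypothetical, not from the answer): the argmax column index is first stored in a small int buffer, one entry per (row, submatrix), and the realizations are gathered in a second pass. Whether this beats the fused loop is exactly what the answer proposes benchmarking.

#include <armadillo>
#include <vector>

void find_max_vertical_two_pass(const arma::mat& expectations, const arma::mat& realizations, arma::mat& results, const arma::uvec& list, const int max_size_action)
{
    const int n_sub = static_cast<int>(results.n_cols);
    const int n_rows = static_cast<int>(expectations.n_rows);
    //Pass 1: compact index buffer, one winning column per (row, submatrix)
    std::vector<int> best_col((size_t)n_rows * n_sub);
#pragma omp parallel for schedule(static)
    for (int sub = 0; sub < n_sub; sub++)
    {
        const int first_col = sub * max_size_action;
        const int last_col = first_col + static_cast<int>(list(sub));
        for (int row = 0; row < n_rows; row++)
        {
            int candidate = first_col;
            for (int c = first_col + 1; c < last_col; c++)
                if (expectations(row, c) > expectations(row, candidate))
                    candidate = c;
            best_col[(size_t)sub * n_rows + row] = candidate;
        }
    }
    //Pass 2: gather the realizations using the stored indices
#pragma omp parallel for schedule(static)
    for (int sub = 0; sub < n_sub; sub++)
        for (int row = 0; row < n_rows; row++)
            results(row, sub) = realizations(row, best_col[(size_t)sub * n_rows + row]);
}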