Reading a CSV file quickly

Time: 2019-12-21 19:53:48

Tags: c++ csv

I have a CSV file, and I have to read it using only the fstream library. There are 8 columns, but I will only use the first three. The file contains 591,000 rows of data.

I tried to read it like this:

while (retailFile.good()) {
    if (i == 0) continue;
    getline(retailFile, invoiceNo, ';');
    getline(retailFile, stockCode, ';');
    getline(retailFile, desc, ';');
    getline(retailFile, dummy, ';');
    getline(retailFile, dummy, ';');
    getline(retailFile, dummy, ';');
    getline(retailFile, dummy, ';');
    getline(retailFile, dummy);
    i++;
}

I tried that, without much hope, and it was completely disappointing.

How can I read it quickly? It is absurd to keep variables that stay empty. Can't we just skip over those columns?

2 answers:

Answer 0: (score: 3)

To find the end of a line, you have to read through all of the line's columns looking for the line ending. That is unavoidable. But you do not have to process the fields you don't need.

Taking inspiration from option two of this linked answer, I get something like:

//discard first line without looking at it. 
if (retailFile.ignore(std::numeric_limits<std::streamsize>::max(), '\n'))
{ // ALWAYS test IO transactions to make sure they worked, even something as 
  // trivial as ignoring the input. 

    std::string line;
    while (std::getline(retailFile, line))
    { // read the whole line
        // wrap the line in a stream for easy parsing
        std::istringstream stream (line);
        if (std::getline(stream, invoiceNo, ';') &&
            std::getline(stream, stockCode, ';') &&
            std::getline(stream, desc, ';'))
        { // successfully read all three required columns
          // Do not use anything you read until after you know it is good. Not 
          // checking leads to bugs and malware.

          // strongly consider doing something with the variables here. The next loop 
          // iteration will write over them
            i++;
        }
        else
        {
            // failed to find all three columns. You should look into why and 
            // handle accordingly.
        }
    }
}
else
{
    // failed to ignore the line. You should look into why and handle accordingly.
}

You probably won't see much of an actual speed difference. Reading a file from disk usually costs far more time than anything you do with its contents, unless you do a lot of processing of the data after reading it. There may be faster ways to split the line, but again, that difference is likely hidden in the cost of reading the file in the first place.

Answer 1: (score: 0)

The question is: what counts as fast?

In the demo below I created a file with 591,000 lines. Its size is 74 MB.

Then I set a larger input buffer for the std::ifstream, read all lines, parse them, and copy the first 3 entries of each into a result vector. The rest I ignore.

To prevent the compiler from optimizing the result away, I print 50 lines of output.

VS2019, C++17, release mode, all optimizations enabled.

Result: ~2.7 s for reading and parsing all lines on my machine. (I must admit that I have 4 SSDs in RAID 0 connected via PCIe.)

#include <iostream>
#include <fstream>
#include <vector>
#include <string>
#include <regex>
#include <array>
#include <chrono>
#include <iterator>

int main() {
    // Put whatever filename you want
    static const std::string fileName{ "r:\\big.txt" };

    // Start Time measurement
    auto start = std::chrono::system_clock::now();
#if 0
    // Write file with 591000 lines
    if (std::ofstream ofs(fileName); ofs) {
        for (size_t i = 0U; i < 591000U; ++i) {
            ofs << "invoiceNo_" << i << ";"
                << "stockCode_" << i << ";"
                << "description_" << i << ";"
                << "Field_4_" << i << ";"
                << "Field_5_" << i << ";"
                << "Field_6_" << i << ";"
                << "Field_7_" << i << ";"
                << "Field_8_" << i << "\n";
        }
    }
#endif
    auto end = std::chrono::system_clock::now();
    auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(end - start);
    // How long did it take?
    std::cout << "Time for writing the file:       " << elapsed.count() << " ms\n";


    // We are just interested in 3 fields
    constexpr size_t NumberOfNeededFields = 3U;

    // We expect 591000 lines, give a little bit more
    constexpr size_t NumberOfExpectedFilesInFile = 600000U;

    // We will create a bigger input buffer for our stream
    constexpr size_t ifStreamBufferSize = 100000U;
    static char buffer[ifStreamBufferSize];

    // The delimiter for our csv
    static const std::regex delimiter{ ";" };

    // Main working variables
    using Fields3 = std::array<std::string, NumberOfNeededFields>;

    static Fields3 fields3;
    static std::vector<Fields3> fields{};

    // Reserve space to avoid reallocation
    fields.reserve(NumberOfExpectedFilesInFile);

    // Start timer
    start = std::chrono::system_clock::now();

    // Open file and check, if it is open
    if (std::ifstream ifs(fileName); ifs) {
        // Set bigger file buffer
        ifs.rdbuf()->pubsetbuf(buffer, ifStreamBufferSize);

        // Read all lines
        for (std::string line{}; std::getline(ifs, line); ) {
            // Parse string
            std::copy_n(std::sregex_token_iterator(line.begin(), line.end(), delimiter, -1), NumberOfNeededFields, fields3.begin());
            // Store resulting 3 fields
            fields.push_back(std::move(fields3));
        }
    }
    end = std::chrono::system_clock::now();
    elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(end - start);
    std::cout << "Time for parsing the file:       " << elapsed.count() << " ms\n";

    // Show some result 
    for (size_t i = 0; i < fields.size(); i += (fields.size()/50)) {
        std::copy_n(fields[i].begin(), NumberOfNeededFields, std::ostream_iterator<std::string>(std::cout, " "));
        std::cout << "\n";
    }
    return 0;
}