I am having trouble storing large objects with Protocol Buffers (in this example they take about 200 MB when serialized to a binary file). It looks like the C++ implementation needs much more than 200 MB to hold them in memory, and I am not sure whether that is expected or I am doing something wrong.
I am using Protocol Buffers 3.5.1. Here is a working example.
I expected that keeping 5 copies of the protobuf message in memory would take roughly 1 GB, yet judging from the example below the code needs between 5 and 10 GB. Is this a limitation of protobuf, a bug, or am I doing something wrong?
Here is my cpp file.
#include <fstream>
#include <iostream>
#include <stdexcept>
#include <string>

#include "../generated_src/leviosa.pb.h"
// Parse a binary-serialized protobuf message of type ProtoMessage from a file.
template <class ProtoMessage>
ProtoMessage deserializeProtobufFromFile(std::string filename) {
  ProtoMessage m;
  std::fstream input(filename, std::ios::in | std::ios::binary);
  if (!input) {
    throw std::runtime_error(filename + ": file not found.");
  } else if (!m.ParseFromIstream(&input)) {
    throw std::runtime_error("Failed to parse " + filename +
                             " as binary protobuf.");
  }
  return m;
}
using namespace std;
using namespace leviosa;

int main(int argc, char* argv[]) {
  // Parse the ~186 MB binary file into a protobuf message.
  OfflineOutput proto0 =
      deserializeProtobufFromFile<OfflineOutput>("offline0.lev");
  cout << "Read from file completed." << endl;

  // Each of these is a full deep copy of the message.
  auto p1(proto0);
  cout << "First copy completed." << endl;
  auto p2(p1);
  cout << "Another copy completed." << endl;
  auto p3(p1);
  cout << "Another copy completed." << endl;
  auto p4(p1);
  cout << "Another copy completed." << endl;
  cout << "Done." << endl;
}
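For reference, a quick way to see how much memory protobuf itself attributes to the parsed message would be something like the following (only a sketch; ByteSizeLong() and SpaceUsedLong() are the generated-message accessors I believe are available in 3.5.x):

#include <iostream>
#include "../generated_src/leviosa.pb.h"

// Sketch: compare the serialized size of the message with the memory
// protobuf reports for its in-memory (heap-allocated) representation.
void reportSizes(const leviosa::OfflineOutput& m) {
  std::cout << "Serialized size:      " << m.ByteSizeLong() << " bytes\n";
  std::cout << "In-memory (reported): " << m.SpaceUsedLong() << " bytes\n";
}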
Here is part of the proto file:
syntax = "proto3";
package leviosa;
message OleSender{
int64 a = 1;
int64 b = 2;
}
message OleReceiver{
int64 x = 1;
int64 z = 2;
}
message Ole{
oneof ole_oneof {
OleSender sender = 1;
OleReceiver receiver = 2;
}
}
message OleVector{
repeated Ole ole = 1;
}
message WatchInfoPerServer{
bytes prg = 1;
OleVector oles = 2;
int64 degree_test_blind_share = 4;
int64 perm_test_blind_share = 5;
}
message OfflineOutput{
repeated bytes prg_seeds = 1;
repeated OleVector oles_for_servers = 2;
bytes degree_test_blind_poly = 3;
bytes perm_test_blind_shares = 4;
oneof commit {
bytes commitment = 5;
bytes randomness_committed = 6;
}
map<int32,WatchInfoPerServer> watchlist = 7;
}
Here is a sample run of my program. It completes with a 10 GB virtual-memory limit, but aborts under a 5 GB limit:
$ ls -al offline0.lev
-rw-rw-r-- 1 antonio antonio 186560583 Aug 27 17:57 offline0.lev
$ ulimit -v 10000000
$ ./test
Read from file completed.
First copy completed.
Another copy completed.
Another copy completed.
Another copy completed.
Done.
$ ulimit -v 5000000
$ ./test
Read from file completed.
First copy completed.
Another copy completed.
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted (core dumped)
Thanks for your help.
Answer 0 (score: 0)
Protobuf's wire format is a very compact encoding (varints, no per-object bookkeeping), so the in-memory C++ representation is always larger than the serialized file. When you deserialize into RAM, every sub-message — here every Ole plus the OleSender/OleReceiver inside it — becomes a separately heap-allocated object with its own overhead, so a file made up of very many tiny messages can expand to several times its serialized size once parsed, and every copy repeats that cost. Needing several GB to hold five copies of a ~186 MB file is therefore expected rather than a bug.
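If the point is to avoid paying a separate heap allocation for every small sub-message, arena allocation may help. The following is only a sketch, assuming the .proto is compiled with option cc_enable_arenas = true (arenas are not on by default in 3.5.x); the file name and message type are taken from the question:

#include <fstream>
#include <iostream>
#include <google/protobuf/arena.h>
#include "../generated_src/leviosa.pb.h"

int main() {
  // Allocate the message and all of its sub-messages (every Ole,
  // OleSender, OleReceiver) inside one arena instead of doing one
  // heap allocation per object.
  google::protobuf::Arena arena;
  auto* msg =
      google::protobuf::Arena::CreateMessage<leviosa::OfflineOutput>(&arena);

  std::fstream input("offline0.lev", std::ios::in | std::ios::binary);
  if (!input || !msg->ParseFromIstream(&input)) {
    std::cerr << "Failed to read offline0.lev\n";
    return 1;
  }
  std::cout << "Arena space used: " << arena.SpaceUsed() << " bytes\n";
  return 0;
}

Arenas mostly cut per-allocation overhead and make teardown cheap; they will not shrink the in-memory form down to the wire size, so measuring with SpaceUsedLong() first is the way to confirm where the memory actually goes.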