I am using C++ ofstream to write a log file on Linux. When I monitor the file contents with the tail -f command I can see they are populated correctly. But if a power outage happens and I check the file again after the power cycle, the last couple of lines of records are gone. With hexdump I can see those records were turned into null characters ('\0') instead. I tried flush() and the std::endl manipulator, but they don't help.
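Roughly, the logging code looks like this minimal sketch (the file name and record format here are just placeholders, not the real application):

```cpp
#include <fstream>

int main() {
    // Hypothetical log file path and record format.
    std::ofstream log("app.log", std::ios::app);

    for (int i = 0; i < 100; ++i) {
        log << "record " << i << '\n';
        // flush() (and std::endl) only empty the C++ stream buffer into the
        // kernel's page cache; they do not force the data onto the disk.
        log.flush();
    }
    return 0;
}
```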
Is it true that what tail showed me was never actually written to the disk and was still only sitting in a buffer? That the inode table wasn't updated before the power outage? I can accept that, but I don't understand why the records turned into null characters if they were never written to the file.
By the way, I tried Google's glog and got the same result (a bunch of null characters at the end). I also tried zlog, a C library, and found that it only lost the last records but did not replace them with null characters.
Answer 0 (score: 0)
Well, when you lose power and then start the system up again, the Linux kernel tries to roll the journal forward in order to detect and correct any inconsistencies between memory and disk at the moment the system crashed. Normally this means redoing and committing every operation that could be completed up to the crash, but undoing (and erasing) any data that had not been committed at the time of the crash.
Linux (and other Un*x kernels, such as FreeBSD) have a facility called ordered data writes, which forces metadata (such as block pointers from inodes or directory entries) to be updated only after the data they point to has actually been written to disk, so inconsistencies are kept to a minimum. I don't know the actual Linux implementation, but, for example, what you are pointing at (a block of zeros in the file instead of the data that was actually written) is simply impossible with the FreeBSD kernel (well, you could do it on purpose, but not by accident). The most likely thing is that Linux is only journaling the block information and not the file contents, or that it had already updated the file size pointer but not the data behind it. This shouldn't happen, because it is a problem that has already been solved.
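As a side note, std::ofstream gives no portable way to reach the underlying file descriptor, so a sketch of how an application could force both the data and the inode metadata to stable storage would use the POSIX calls directly (the function name and parameters below are my own illustration; error handling is abbreviated):

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <cstring>

// Append one record and make sure it has reached the disk before returning.
// A real logger would keep the descriptor open and handle partial writes.
bool write_record_durably(const char* path, const char* record) {
    int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) return false;

    ssize_t len = static_cast<ssize_t>(std::strlen(record));
    ssize_t n = write(fd, record, len);

    // fsync() flushes both the file data and its metadata (size, block
    // pointers) to the storage device; fdatasync() would skip metadata
    // that is not needed to read the data back.
    bool ok = (n == len) && fsync(fd) == 0;

    close(fd);
    return ok;
}
```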
The other thing is how much of the data you wrote shortly before the crash, or saw on the screen, never showed up in the file. You may have heard of something called delayed write, which lets the kernel save disk writes on a busy system by not writing data to disk immediately, instead waiting for a while so that updates can be consolidated in the in-core memory buffers before they go out to disk. Even so, disk writes are forced out once a certain delay has elapsed, which in Linux means about 5 seconds (if I remember correctly; it has been a long time since I last checked this value, and I suspect it is somewhere between 5 and 30 seconds), so at most you should lose your last five seconds or so of output.
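If losing even those few seconds is unacceptable, one mitigation is to flush and sync every few records so the loss window stays well below the kernel's writeback delay. A sketch using C stdio plus POSIX fdatasync() (the batch size and file name are placeholders of my choosing):

```cpp
#include <cstdio>
#include <unistd.h>   // fdatasync(), fileno()

// Hypothetical batch size: at most this many records can be lost on a crash.
constexpr int kSyncEvery = 16;

void log_records(std::FILE* f) {
    for (int i = 0; i < 1000; ++i) {
        std::fprintf(f, "record %d\n", i);
        if (i % kSyncEvery == kSyncEvery - 1) {
            std::fflush(f);            // stdio buffer -> kernel page cache
            fdatasync(fileno(f));      // page cache -> disk
        }
    }
}

int main() {
    std::FILE* f = std::fopen("app.log", "a");  // placeholder file name
    if (!f) return 1;
    log_records(f);
    std::fclose(f);
    return 0;
}
```

Syncing on every single record would be the most durable option, but it costs at least one disk (or flash) write per record, so batching is the usual compromise.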