I want to parse an XML file in chunks, so that it doesn't exhaust memory, and store the result column-wise, i.e. key1:value1, key2:value2, key3:value3, and so on.
Currently I'm reading the file like this:
string parseFieldFromLine(const string &line, const string &key)
{
    // We're looking for a pattern of the form
    //     [key]="[value]"
    // inside a larger string. Given [key], return [value].
    // Find the start of the pattern.
    string keyPattern = key + "=\"";
    size_t idx = line.find(keyPattern);
    // No match.
    if (idx == string::npos)
        return "";
    // Find the closing quote that ends the value; bail out if it is
    // missing rather than scanning past the end of the string.
    size_t start = idx + keyPattern.size();
    size_t end = line.find('"', start);
    if (end == string::npos)
        return "";
    // Extract [value] from the overall string and return it.
    // We have (start, end); substr() requires (start, length).
    return line.substr(start, end - start);
}
struct User
{
    string Id;
    string DisplayName;
};

map<string, User> users;

void readUsers(const string &filename)
{
    ifstream fin(filename.c_str());
    string line;
    while (getline(fin, line))
    {
        User u;
        u.Id = parseFieldFromLine(line, "Id");
        u.DisplayName = parseFieldFromLine(line, "DisplayName");
        users[u.Id] = u;
    }
}
As you can see, I'm calling a function that finds a substring within a line. This is fragile: if my file (or a line of it) is malformed, I get unexpected values, and the failure is silent.
I've read about using an XML parser, but this is new territory for me in C++, and since I also know little about the testing effort and efficiency involved, I can't decide which parser best suits this key-value format. My current input data looks like this:
<?xml version="1.0" encoding="utf-8"?>
<posts>
<row Id="1" PostTypeId="1" AcceptedAnswerId="509" CreationDate="2009-04-30T06:49:01.807" Score="13" ViewCount="903" Body="<p>Our nightly full (and periodic differential) backups are becoming quite large, due mostly to the amount of indexes on our tables; roughly half the backup size is comprised of indexes.</p>

<p>We're using the <strong>Simple</strong> recovery model for our backups.</p>

<p>Is there any way, through using <code>FileGroups</code> or some other file-partitioning method, to <strong>exclude</strong> indexes from the backups?</p>

<p>It would be nice if this could be extended to full-text catalogs, as well.</p>
" OwnerUserId="3" LastEditorUserId="919" LastEditorDisplayName="" LastEditDate="2009-05-04T02:11:16.667" LastActivityDate="2009-05-10T15:22:39.707" Title="How to exclude indexes from backups in SQL Server 2008" Tags="<sql-server><backup><sql-server-2008><indexes>" AnswerCount="3" CommentCount="0" FavoriteCount="3" />
<row Id="2" PostTypeId="1" AcceptedAnswerId="1238" CreationDate="2009-04-30T07:04:18.883" Score="18" ViewCount="1951" Body="<p>We've struggled with the RAID controller in our database server, a <a href="http://www.pc.ibm.com/europe/server/index.html?nl&amp;cc=nl" rel="nofollow">Lenovo ThinkServer</a> RD120. It is a rebranded Adaptec that Lenovo / IBM dubs the <a href="http://www.redbooks.ibm.com/abstracts/tips0054.html#ServeRAID-8k" rel="nofollow">ServeRAID 8k</a>.</p>

<p>We have patched this <a href="http://www.redbooks.ibm.com/abstracts/tips0054.html#ServeRAID-8k" rel="nofollow">ServeRAID 8k</a> up to the very latest and greatest:</p>

<ul>
<li>RAID bios version</li>
<li>RAID backplane bios version</li>
<li>Windows Server 2008 driver</li>
</ul>

<p>This RAID controller has had multiple critical BIOS updates even in the short 4 month time we've owned it, and the <a href="ftp://ftp.software.ibm.com/systems/support/system%5Fx/ibm%5Ffw%5Faacraid%5F5.2.0-15427%5Fanyos%5F32-64.chg" rel="nofollow">change history</a> is just.. well, scary. </p>

<p>We've tried both write-back and write-through strategies on the logical RAID drives. <strong>We still get intermittent I/O errors under heavy disk activity.</strong> They are not common, but serious when they happen, as they cause SQL Server 2008 I/O timeouts and sometimes failure of SQL connection pools.</p>

<p>We were at the end of our rope troubleshooting this problem. Short of hardcore stuff like replacing the entire server, or replacing the RAID hardware, we were getting desperate.</p>

<p>When I first got the server, I had a problem where drive bay #6 wasn't recognized. Switching out hard drives to a different brand, strangely, fixed this -- and updating the RAID BIOS (for the first of many times) fixed it permanently, so I was able to use the original "incompatible" drive in bay 6. On a hunch, I began to assume that <a href="http://www.newegg.com/Product/Product.aspx?Item=N82E16822136143" rel="nofollow">the Western Digital SATA hard drives</a> I chose were somehow incompatible with the ServeRAID 8k controller.</p>

<p>Buying 6 new hard drives was one of the cheaper options on the table, so I went for <a href="http://www.newegg.com/Product/Product.aspx?Item=N82E16822145215" rel="nofollow">6 Hitachi (aka IBM, aka Lenovo) hard drives</a> under the theory that an IBM/Lenovo RAID controller is more likely to work with the drives it's typically sold with.</p>

<p>Looks like that hunch paid off -- we've been through three of our heaviest load days (mon,tue,wed) without a single I/O error of any kind. Prior to this we regularly had at least one I/O "event" in this time frame. <strong>It sure looks like switching brands of hard drive has fixed our intermittent RAID I/O problems!</strong></p>

<p>While I understand that IBM/Lenovo probably tests their RAID controller exclusively with their own brand of hard drives, I'm disturbed that a RAID controller would have such subtle I/O problems with particular brands of hard drives.</p>

<p>So my question is, <strong>is this sort of SATA drive incompatibility common with RAID controllers?</strong> Are there some brands of drives that work better than others, or are "validated" against particular RAID controller? I had sort of assumed that all commodity SATA hard drives were alike and would work reasonably well in any given RAID controller (of sufficient quality).</p>
" OwnerUserId="1" LastActivityDate="2011-03-08T08:18:15.380" Title="Do RAID controllers commonly have SATA drive brand compatibility issues?" Tags="<raid><ibm><lenovo><serveraid8k>" AnswerCount="8" FavoriteCount="2" />
<row Id="3" PostTypeId="1" AcceptedAnswerId="104" CreationDate="2009-04-30T07:48:06.750" Score="26" ViewCount="692" Body="<ul>
<li>How do you keep your servers up to date?</li>
<li>When using a package manager like <a href="http://wiki.debian.org/Aptitude" rel="nofollow">Aptitude</a>, do you keep an upgrade / install history, and if so, how do you do it?</li>
<li>When installing or upgrading packages on multiple servers, are there any ways to speed the process up as much as possible?</li>
</ul>
" OwnerUserId="22" LastEditorUserId="22" LastEditorDisplayName="" LastEditDate="2009-04-30T08:05:02.217" LastActivityDate="2009-06-05T04:01:09.423" Title="Best practices for keeping UNIX packages up to date?" Tags="<unix><package-management><server-management>" AnswerCount="11" FavoriteCount="14" />
<row Id="4" PostTypeId="2" ParentId="3" CreationDate="2009-04-30T07:49:58.027" Score="10" ViewCount="" Body="<p>Regarding your third question: I always run a local repository. Even if it's only for one machine, it saves time in case I need to reinstall (I generally use something like aptitude autoclean), and for two machines, it almost always pays off.</p>

<p>For the clusters I admin, I don't generally keep explicit logs: I let the package manager do it for me. However, for those machines (as opposed to desktops), I don't use automatic installations, so I do have my notes about what I intended to install to all machines.</p>
" OwnerUserId="28" LastActivityDate="2009-04-30T07:49:58.027" CommentCount="1" />
<row Id="5" PostTypeId="2" ParentId="2" CreationDate="2009-04-30T07:56:20.070" Score="4" ViewCount="" Body="<p>I don't think it's common per se. However, as soon as you start using enterprise storage controllers, whether that be SAN's or standalone RAID controllers, you'll generally want to adhere to their compatibility list rather closely.</p>

<p>You may be able to save some bucks on the sticker price by buying a cheap range of disks, but that's probably one of the last areas I'd want to save money on - given the importance of data in most scenarios.</p>

<p>In other words, explicit incompatibility is very uncommon, but explicit compatibility adherence is recommendable.</p>
" OwnerUserId="24" LastActivityDate="2009-04-30T07:56:20.070" />
<row Id="6" PostTypeId="1" AcceptedAnswerId="537" CreationDate="2009-04-30T07:57:06.247" Score="8" ViewCount="2648" Body="<p>Our database currently only has one FileGroup, PRIMARY, which contains roughly 8GB of data (table rows, indexes, full-text catalog).</p>

<p>When is a good time to split this into secondary data files? What are some criteria that I should be aware of?</p>
" OwnerUserId="3" LastActivityDate="2009-07-08T07:23:49.527" Title="In SQL Server, when should you split your PRIMARY Data FileGroup into secondary data files?" Tags="<sql-server><files><filegroups>" AnswerCount="3" FavoriteCount="1" />
<row Id="7" PostTypeId="1" AcceptedAnswerId="17" CreationDate="2009-04-30T07:57:09.117" Score="12" ViewCount="529" Body="<p>What enterprise virus-scanning systems do you recommend?</p>
" OwnerUserId="32" LastActivityDate="2009-04-30T11:51:09.290" Title="What is the best enterprise virus-scanning system?" Tags="<antivirus>" AnswerCount="8" CommentCount="3" FavoriteCount="2" />
<row Id="8" PostTypeId="2" ParentId="3" CreationDate="2009-04-30T07:57:15.653" Score="0" ViewCount="" Body="<p>You can have a local repository and configure all servers to point to it for updates. Not only you get speed of local downloads, you also get to control which official updates you want installed on your infrastructure in order to prevent any compatibility issues.</p>

<p>On the Windows side of things, I've used <a href="http://technet.microsoft.com/en-us/wsus/default.aspx" rel="nofollow">Windows Server Update Services</a> with very satisfying results.</p>
" OwnerUserId="36" LastActivityDate="2009-04-30T07:57:15.653" />
The other file:
<?xml version="1.0" encoding="utf-8"?>
<users>
<row Id="1" Reputation="4220" CreationDate="2009-04-30T07:08:27.067" DisplayName="Jeff Atwood" EmailHash="51d623f33f8b83095db84ff35e15dbe8" LastAccessDate="2011-09-03T13:30:29.990" WebsiteUrl="http://www.codinghorror.com/blog/" Location="El Cerrito, CA" Age="40" AboutMe="<p><img src="http://img377.imageshack.us/img377/4074/wargames1xr6.jpg" width="250"></p>

<p><a href="http://www.codinghorror.com/blog/archives/001169.html" rel="nofollow">Stack Overflow Valued Associate #00001</a></p>

<p>Wondering how our software development process works? <a href="http://www.youtube.com/watch?v=08xQLGWTSag" rel="nofollow">Take a look!</a></p>
" Views="3562" UpVotes="1995" DownVotes="31" />
<row Id="2" Reputation="697" CreationDate="2009-04-30T07:08:27.067" DisplayName="Geoff Dalgas" EmailHash="b437f461b3fd27387c5d8ab47a293d35" LastAccessDate="2011-09-05T22:14:06.527" WebsiteUrl="http://stackoverflow.com" Location="Corvallis, OR" Age="34" AboutMe="<p>Developer on the StackOverflow team. Find me on</p>

<p><a href="http://www.twitter.com/SuperDalgas" rel="nofollow">Twitter</a>
<br><br>
<a href="http://blog.stackoverflow.com/2009/05/welcome-stack-overflow-valued-associate-00003/" rel="nofollow">Stack Overflow Valued Associate #00003</a> </p>
" Views="291" UpVotes="46" DownVotes="2" />
<row Id="3" Reputation="259" CreationDate="2009-04-30T07:08:27.067" DisplayName="Jarrod Dixon" EmailHash="2dfa19bf5dc5826c1fe54c2c049a1ff1" LastAccessDate="2011-09-01T20:43:27.743" WebsiteUrl="http://stackoverflow.com" Location="New York, NY" Age="32" AboutMe="<p><a href="http://blog.stackoverflow.com/2009/01/welcome-stack-overflow-valued-associate-00002/" rel="nofollow">Developer on the Stack Overflow team</a>.</p>

<p>Was dubbed <strong>SALTY SAILOR</strong> by Jeff Atwood, as filth and flarn would oft-times fly when dealing with a particularly nasty bug!</p>

<ul>
<li>Twitter me: <a href="http://twitter.com/jarrod_dixon" rel="nofollow">jarrod_dixon</a></li>
<li>Email me: jarrod.m.dixon@gmail.com</li>
</ul>
" Views="210" UpVotes="259" DownVotes="4" />
Answer 0 (score: 0)
I guess what you're looking for is a SAX parser, which doesn't read the whole document at once (the way a DOM parser does), but instead lets you define callbacks for specific events (for example, the start of a new XML element). Since you want to process the file element by element, that sounds like a very good fit for you.
I have to admit I've never done any XML parsing in C++, but these two libraries sound well suited to your problem: