Scraping a forum with Beautiful Soup - how do I exclude quoted replies?

Asked: 2017-03-31 18:50:55

Tags: python html web-scraping beautifulsoup forums

This is my first time using Beautiful Soup or doing any web scraping. I'm pretty pleased with how far I've got so far, but I've hit a bit of a stumbling block.

I'm trying to scrape all the posts in a particular thread, but I want to exclude the text of any quoted replies.

An example:

I want to grab the text of these posts without also scraping the text inside the area marked by the red box.

In the HTML, the part I want to exclude sits inside the part I need to select for the message text, which is why I'm having trouble. I've included the HTML below:


<div id="post_message_39096267"><!-- google_ad_section_start --><div style="margin:20px; margin-top:5px; ">
<div class="smallfont" style="margin-bottom:2px">Quote:</div>
<table cellpadding="6" cellspacing="0" border="0" width="100%">
<tbody><tr>
    <td class="alt2" style="border:1px inset">

            <div>
                Originally Posted by <strong>SAAN</strong>
                <a href="http://www.city-data.com/forum/economics/2056372-minimum-wage-vs-liveable-wage-post33645660.html#post33645660" rel="nofollow"><img class="inlineimg li fs-viewpost" src="http://pics3.city-data.com/trn.gif" border="0" alt="View Post" title="View Post"></a>
            </div>
            <div style="font-style:italic">I agree with trying to buy a 
cheap car outright, the problem is everyone I know that has done that $2-
5000 car, always ended up with these huge repair bills that are equivalent 
to car payments.  Most cars after 100K will need all sort of regulatr 
maintance that is easily a $200 repair to go along with anything that may 
break which is common with cars as they age.<br>
<br>
I have a 2yr old im making payments on and 14yr old car that is paid off, 
but needs $2000 in maintenance.  When car shopping this summer, I saw many 
cars i could buy outright, but after adding u everything needed to make sure 
it needs nothing, your back into the price range of a car payment.</div>

    </td>
</tr>
</tbody></table>
</div>Depends on how long the car loan would be stretched. Just because you 
can get an 8 year loan and reduce payments to a level like the repairs on 
your old car doesn't make it a good idea, especially for new cars that <a 
href="/knowledge/Depreciation.html" title="View 'depreciate' definition from 
Wikipedia" class="knldlink" rel="nofollow">depreciate</a> quickly. You'd 
just be putting yourself into negative equity territory.<!-- 
google_ad_section_end --></div>
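
For reference, in this markup the quoted reply is the first div nested directly inside the post_message_ container, while the reply's own text sits directly in the container with no extra wrapper. A minimal sketch of how you can see that, assuming the snippet above is stored in a string I've called html (that name is just for illustration):

from bs4 import BeautifulSoup

# html holds the post_message_ snippet shown above (hypothetical variable name)
soup = BeautifulSoup(html, "html.parser")

post = soup.find("div", id="post_message_39096267")
quote = post.find("div")   # the first nested div is the quote wrapper
print(quote.get_text(strip=True))   # begins with the "Quote:" header and the quoted post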

I've included my code below; hopefully that will help make clear what I'm talking about.

from bs4 import BeautifulSoup
import urllib2


num_pages = 101
page_range = range(1, num_pages + 1)
clean_posts = []

for page in page_range:
    print("Reading page: ", page, "...")
    if page == 1:
        page_url = urllib2.urlopen('http://www.city-data.com/forum/economics/2056372-minimum-wage-vs-liveable-wage.html')
    else:
        page_url = urllib2.urlopen('http://www.city-data.com/forum/economics/2056372-minimum-wage-vs-liveable-wage' + '-' + str(page) + '.html')

    soup = BeautifulSoup(page_url)

    # every post body lives in a div whose id starts with "post_message_"
    postData = soup.find_all("div", id=lambda value: value and value.startswith("post_message_"))

    posts = []
    for post in postData:
        posts.append(post.get_text().encode("utf-8").strip().replace("\t", ""))

    posts_stripped = [x.replace("\n", "") for x in posts]

    clean_posts.append(posts_stripped)

Finally, I'd be hugely grateful if you could give me some useful code examples and explain things to me as if I were 9 years old!

Cheers, Diarmaid

1 answer:

Answer 0 (score: 2)

Check whether your post_message_ div contains another div (the quote div). If it does, extract it, then append the text of the original post_message_ div to your list. Replace your for post in postData loop with this:

posts = []
for post in postData:
    hasQuote = post.find("div")   # the quote block, if the post has one
    if hasQuote is not None:
        hasQuote.extract()        # pull the quote out of the tree
    posts.append(post.get_text(strip=True))
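
That works here because, in this forum's markup, the whole quote (the "Quote:" header, the table and the quoted text) sits inside a single nested div. If a post ever contains more than one quote block, a variation you could try, assuming every direct-child div of the post container is a quote wrapper as in the snippet above, is to remove them all with decompose():

posts = []
for post in postData:
    # drop every quote wrapper that sits directly inside the post div
    for quote in post.find_all("div", recursive=False):
        quote.decompose()
    posts.append(post.get_text(strip=True))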