Extract only the links and titles

Time: 2016-09-09 05:23:09

Tags: python visual-studio web-scraping python-3.4 ptvs

I'm trying to extract the links and titles of the episodes on an anime site, but I can only extract the whole tag; I only want the href and the title.

Here is the code I'm using:

import requests
from bs4 import BeautifulSoup

r = requests.get('http://animeonline.vip/info/phi-brain-kami-puzzle-3')
soup = BeautifulSoup(r.content, "html.parser")
for link in soup.find_all('div', class_='list_episode'):
    href = link.get('href')
    print(href)

Here is the site's HTML:

<a href="http://animeonline.vip/phi-brain-kami-puzzle-3-episode-25" title="Phi Brain: Kami no Puzzle 3 episode 25">
                    Phi Brain: Kami no Puzzle 3 episode 25                  <span> 26-03-2014</span>
        </a>

And here is the output:

C:\Python34\python.exe C:/Users/M.Murad/PycharmProjects/untitled/Webcrawler.py
None

Process finished with exit code 0

All I want is every link and title in that class (the episodes and their links).

Thanks.

2 answers:

Answer 0 (score: 1)

The whole page has only one element with the 'list_episode' class, so you can find that one element, filter out the 'a' tags inside it, and then get the value of the 'href' attribute of each:
In [127]: import requests
     ...: from bs4 import BeautifulSoup
     ...: 
     ...: r = requests.get('http://animeonline.vip/info/phi-brain-kami-puzzle-3')
     ...: soup = BeautifulSoup(r.content, "html.parser")
     ...: 

In [128]: [x.get('href') for x in soup.find('div', class_='list_episode').find_all('a')]
Out[128]: 
[u'http://animeonline.vip/phi-brain-kami-puzzle-3-episode-25',
 u'http://animeonline.vip/phi-brain-kami-puzzle-3-episode-24',
 u'http://animeonline.vip/phi-brain-kami-puzzle-3-episode-23',
 u'http://animeonline.vip/phi-brain-kami-puzzle-3-episode-22',
 u'http://animeonline.vip/phi-brain-kami-puzzle-3-episode-21',
 u'http://animeonline.vip/phi-brain-kami-puzzle-3-episode-20',
 u'http://animeonline.vip/phi-brain-kami-puzzle-3-episode-19',
 u'http://animeonline.vip/phi-brain-kami-puzzle-3-episode-18',
 u'http://animeonline.vip/phi-brain-kami-puzzle-3-episode-17',
 u'http://animeonline.vip/phi-brain-kami-puzzle-3-episode-16',
 u'http://animeonline.vip/phi-brain-kami-puzzle-3-episode-15',
 u'http://animeonline.vip/phi-brain-kami-puzzle-3-episode-14',
 u'http://animeonline.vip/phi-brain-kami-puzzle-3-episode-13',
 u'http://animeonline.vip/phi-brain-kami-puzzle-3-episode-12',
 u'http://animeonline.vip/phi-brain-kami-puzzle-3-episode-11',
 u'http://animeonline.vip/phi-brain-kami-puzzle-3-episode-10',
 u'http://animeonline.vip/phi-brain-kami-puzzle-3-episode-9',
 u'http://animeonline.vip/phi-brain-kami-puzzle-3-episode-8',
 u'http://animeonline.vip/phi-brain-kami-puzzle-3-episode-7',
 u'http://animeonline.vip/phi-brain-kami-puzzle-3-episode-6',
 u'http://animeonline.vip/phi-brain-kami-puzzle-3-episode-5',
 u'http://animeonline.vip/phi-brain-kami-puzzle-3-episode-4',
 u'http://animeonline.vip/phi-brain-kami-puzzle-3-episode-3',
 u'http://animeonline.vip/phi-brain-kami-puzzle-3-episode-2',
 u'http://animeonline.vip/phi-brain-kami-puzzle-3-episode-1']
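The comprehension above collects only the hrefs, while the question also asks for the titles; the same pass can pull both with `(x.get('href'), x.get('title'))`. As a dependency-free sketch of that extraction, using only the standard-library `html.parser` instead of BeautifulSoup and the HTML fragment quoted in the question (the class-matching and end-tag handling here are deliberately minimal, not a general-purpose parser):

```python
from html.parser import HTMLParser

# The HTML fragment from the question, standing in for the fetched page.
HTML = """
<div class="list_episode">
  <a href="http://animeonline.vip/phi-brain-kami-puzzle-3-episode-25"
     title="Phi Brain: Kami no Puzzle 3 episode 25">
      Phi Brain: Kami no Puzzle 3 episode 25 <span> 26-03-2014</span>
  </a>
</div>
"""

class EpisodeLinkParser(HTMLParser):
    """Collect (href, title) pairs from <a> tags inside div.list_episode."""

    def __init__(self):
        super().__init__()
        self.in_list = False  # True while inside the target div
        self.pairs = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "div" and "list_episode" in attrs.get("class", ""):
            self.in_list = True
        elif tag == "a" and self.in_list:
            # .get() returns None for a missing attribute instead of raising
            self.pairs.append((attrs.get("href"), attrs.get("title")))

    def handle_endtag(self, tag):
        # Naive: any closing div ends the section (fine for this flat fragment).
        if tag == "div":
            self.in_list = False

parser = EpisodeLinkParser()
parser.feed(HTML)
print(parser.pairs)
# [('http://animeonline.vip/phi-brain-kami-puzzle-3-episode-25',
#   'Phi Brain: Kami no Puzzle 3 episode 25')]
```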

Answer 1 (score: -1)

So what is happening is that your link variable holds everything inside the <div> with class="list_episode". But that div contains many anchors, each with its own "href" and "title" attributes.

With just a small modification to your code, you can get what you want:

import requests
from bs4 import BeautifulSoup

r = requests.get('http://animeonline.vip/info/phi-brain-kami-puzzle-3')
soup = BeautifulSoup(r.content, "html.parser")
for link in soup.find_all('div', class_='list_episode'):
    href_and_title = [(a.get("href"), a.get("title")) for a in link.find_all("a")]
    print(href_and_title)

The output will come out in the form [(href, title), (href, title), ..., (href, title)].

Edit (explanation):

So what happens when you do

soup.find_all('div', class_='list_episode')

is that it gives you every "div" in the HTML page with the class "list_episode". That div holds a large set of anchors, each with different "href" and "title" details, so we use a for loop (there can be multiple anchors (<a>)) together with ".get()":

 href_and_title = [(a.get("href"), a.get("title")) for a in link.find_all("a")]

I hope that is clearer this time.
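The `.get()` behavior this answer relies on also explains the question's `None` output: BeautifulSoup tags answer `.get(name)` like dictionaries, returning the attribute value or `None` when the attribute is absent, and a `<div>` has no `href`. A small sketch with plain dicts standing in for parsed tags (the data below is hypothetical, not fetched from the site):

```python
# Hypothetical anchor attribute sets, mimicking how BeautifulSoup tags
# respond to .get(): value if present, None if absent.
anchors = [
    {"href": "http://animeonline.vip/phi-brain-kami-puzzle-3-episode-2",
     "title": "Phi Brain: Kami no Puzzle 3 episode 2"},
    {"href": "http://animeonline.vip/phi-brain-kami-puzzle-3-episode-1",
     "title": "Phi Brain: Kami no Puzzle 3 episode 1"},
]

# The same comprehension pattern as the answer's fix:
href_and_title = [(a.get("href"), a.get("title")) for a in anchors]
print(href_and_title)

# A div-like element has no href, so .get('href') yields None --
# which is exactly what the original code printed.
div_like = {"class": "list_episode"}
print(div_like.get("href"))  # None
```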