Removing base URLs

Date: 2018-08-14 08:22:15

Tags: python web-scraping beautifulsoup

I wrote a Python script that extracts the href value from every link on a given web page:

from BeautifulSoup import BeautifulSoup
import urllib2
import re

html_page = urllib2.urlopen("http://kteq.in/services")
soup = BeautifulSoup(html_page)
for link in soup.findAll('a'):
    print link.get('href')

When I run the code above, I get the following output, which includes both external and internal links:

index
index
#
solutions#internet-of-things
solutions#online-billing-and-payment-solutions
solutions#customer-relationship-management
solutions#enterprise-mobility
solutions#enterprise-content-management
solutions#artificial-intelligence
solutions#b2b-and-b2c-web-portals
solutions#robotics
solutions#augement-reality-virtual-reality
solutions#azure
solutions#omnichannel-commerce
solutions#document-management
solutions#enterprise-extranets-and-intranets
solutions#business-intelligence
solutions#enterprise-resource-planning
services
clients
contact
#
#
#
https://www.facebook.com/KTeqSolutions/
#
#
#
#
#contactform
#
#
#
#
#
#
#
#
#
#
#
#
#
#
#
index
services
#
contact
#
iOSDevelopmentServices
AndroidAppDevelopment
WindowsAppDevelopment
HybridSoftwareSolutions
CloudServices
HTML5Development
iPadAppDevelopment
services
services
services
services
services
services
contact
contact
contact
contact
contact
None
https://www.facebook.com/KTeqSolutions/
#
#
#
#

I want to remove the external links that have a full URL, like https://www.facebook.com/KTeqSolutions/, while keeping links such as solutions#internet-of-things. How can I do this efficiently?

2 Answers:

Answer 0 (score: 1)

If I understand you correctly, you can try the following:

l = []
for link in soup.findAll('a'):
    print link.get('href')
    l.append(link.get('href'))
l = [x for x in l if x and "www" not in x]  # guard against None hrefs; or test for 'https'
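A substring test for "www" is fragile: it misses external URLs without "www" (e.g. http://example.org) and, without a guard, crashes on the None href visible in the output above. A sturdier sketch, written here in Python 3 with the standard library's urllib.parse (the sample hrefs are taken from the question's output), keeps only links that have neither a scheme nor a host:

```python
from urllib.parse import urlparse

def is_internal(href):
    """Treat a link as internal when it has no scheme and no network location."""
    if not href:  # skip None or empty hrefs
        return False
    parsed = urlparse(href)
    return not parsed.scheme and not parsed.netloc

hrefs = [
    "solutions#internet-of-things",
    "https://www.facebook.com/KTeqSolutions/",
    "#contactform",
    None,
]
internal = [h for h in hrefs if is_internal(h)]
print(internal)  # ['solutions#internet-of-things', '#contactform']
```

Because urlparse only reports a netloc when the URL actually contains `//`, relative links like solutions#robotics are kept while any absolute URL is dropped, regardless of whether it contains "www".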

Answer 1 (score: 0)

You can use parse_url from the requests module:

import requests

url = 'https://www.facebook.com/KTeqSolutions/'

requests.urllib3.util.parse_url(url)

which gives you:

Url(scheme='https', auth=None, host='www.facebook.com', port=None, path='/KTeqSolutions/', query=None, fragment=None)
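To turn this into a filter (assuming requests, which re-exports its urllib3 dependency, is installed), you can keep the hrefs whose parsed scheme is None. Note that parse_url is deliberately lenient and may read a bare word such as solutions as a host, so checking scheme rather than host is the safer test for these links:

```python
import requests

# parse_url comes from urllib3, re-exported through the requests package.
parse_url = requests.urllib3.util.parse_url

hrefs = [
    "solutions#internet-of-things",
    "https://www.facebook.com/KTeqSolutions/",
    "#contactform",
]
# Relative links have no scheme; absolute URLs like the Facebook one do.
internal = [h for h in hrefs if parse_url(h).scheme is None]
print(internal)  # ['solutions#internet-of-things', '#contactform']
```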