How to list the unique URLs from a given domain

Asked: 2019-04-05 04:47:27

Tags: python list url

I have written code that extracts all URLs from a given site, but the problem is that some URLs are repeated, and I want a list of unique URLs.

from bs4 import BeautifulSoup
from termcolor import colored
import re, os

import requests

url = 'http://example.com'
ext = 'html'
count = 0
countfiles = 0
files = []

def ulist(x):
    # dict.fromkeys keeps only the first occurrence of each item,
    # preserving insertion order (Python 3.7+)
    return list(dict.fromkeys(x))


def listFD(filename, ext=''):
    print(filename)
    print(url)
    if filename == url:
        page = requests.get(url).text
    else:
        page = requests.get(url + filename).text

    soup = BeautifulSoup(page, 'html.parser')
    # skip <a> tags without an href to avoid calling .endswith() on None
    return ['/' + node.get('href') for node in soup.find_all('a')
            if node.get('href') and node.get('href').endswith(ext)]


for file in ulist(listFD(url, ext)):
    for unfile in ulist(listFD(file, ext)):
        print(unfile)

3 Answers:

Answer 0: (score: 2)

You can do the following:

urls = list(set(urls))
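
For example, applied to the crawl loop from the question, one way is to collect every link first and deduplicate once at the end (a minimal sketch, assuming the listFD, url and ext names from the question):

all_urls = []
for file in listFD(url, ext):
    for unfile in listFD(file, ext):
        all_urls.append(unfile)

# deduplicate across the whole crawl, not just per page
unique_urls = list(set(all_urls))
for u in unique_urls:
    print(u)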

Answer 1: (score: 1)

Just wrap your list with Python's built-in set:

urls = ['www.google.com', 'www.google.com', 'www.facebook.com']
unique_urls = list(set(urls))
print(unique_urls)  # prints >> ['www.facebook.com', 'www.google.com']
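
Note that set does not preserve the original order of the list. If order matters, a sketch of an order-preserving variant (it relies on dict insertion order, guaranteed in Python 3.7+, and is the same idea as the ulist helper in the question):

urls = ['www.google.com', 'www.google.com', 'www.facebook.com']
unique_urls = list(dict.fromkeys(urls))
print(unique_urls)  # ['www.google.com', 'www.facebook.com']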

Answer 2: (score: 0)

Once you have the list of URLs, you can use a set to get the unique elements together with a list comprehension:

unique_urls = [url for url in set(urls)]
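
On its own this is equivalent to list(set(urls)); the comprehension becomes useful once you also want to filter the entries, e.g. (a sketch, reusing the ext variable from the question):

unique_html_urls = [u for u in set(urls) if u.endswith(ext)]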