Removing duplicate URLs with set() in Python/BeautifulSoup splits the URLs apart

Posted: 2019-02-23 17:59:58

Tags: python python-3.x beautifulsoup duplicates

In Python, I am using BeautifulSoup to scrape URLs from my project's website, and everything works fine until I try to remove duplicates by passing the tags into a set object. The tags get "exploded" into individual characters. Here is my code and a sample of the printed output.

```
file = open('parsed_data.csv', 'w')

for link in soup.find_all('a', attrs={'href': re.compile("^http")}):

    soup_link = str(link)
    if soup_link.endswith('/') or soup_link.endswith('#'):
        soup_link = soup_link[-1]

    soup_link_unique = str(set(soup_link))

    print (soup_link)
    print (soup_link_unique)

    file.write(soup_link_unique)
    file.flush()
    file.close
```
Before passing into set object:

```
<a href="https://www.census.gov/en.html" onfocus="CensusSearchTypeahead.onSearchFocusBlur(false);" tabindex="2">
<img alt="United States Census Bureau" class="uscb-nav-image" src="https://www.census.gov/etc/designs/census/images/USCENSUS_IDENTITY_SOLO_White_2in_TM.svg" title="U.S. Census Bureau"/>
</a>
```

After passing into a set object:

```
{'I', 'S', '\n', 'C', '>', 'u', '"', '-', 'i', 'Y', 'L', 'M', 'p', '.', 'c', ')', 'B', '2', 't', 'N', '<', ' ', 'b', 'w', 'e', 'E', '/', 'O', ':', 'U', 'x', 'o', 'W', 'f', '(', 'l', 'D', 'F', 'g', 'd', '_', '=', 'n', 's', 'h', 'a', 'T', 'v', 'r', ';', 'm', 'y'}
```

1 answer:

Answer 0 (score: 0)

Calling set() on a single string builds a set of that string's individual characters, which is why each tag gets split apart. Instead, create a set before the for loop and use its add() method to add new elements to it:

```
soup_link_unique = set()

for link in soup.find_all('a', attrs={'href': re.compile("^http")}):
    soup_link = str(link)
    # Drop a trailing '/' or '#' (slice off the last character rather than keeping it)
    if soup_link.endswith('/') or soup_link.endswith('#'):
        soup_link = soup_link[:-1]
    # add() stores the whole string as a single element; duplicates are ignored
    soup_link_unique.add(soup_link)
```

Example:

```
my_set = set('ABCDE')
print(my_set)
# {'E', 'D', 'C', 'B', 'A'}
```

vs

```
my_set = set()
my_set.add('ABCDE')
print(my_set)
# {'ABCDE'}
```
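
If the end goal is a file of unique URLs rather than whole `<a>` tags, the same idea extends naturally. The sketch below is illustrative rather than part of the original answer: the sample HTML and the one-URL-per-line format for `parsed_data.csv` are assumptions, and `link.get('href')` (standard BeautifulSoup attribute access) is used to keep just the URL:

```
import re
from bs4 import BeautifulSoup

# A small stand-in page; in the question, soup comes from the scraped site.
html = '''
<a href="https://www.census.gov/en.html">Census</a>
<a href="https://www.census.gov/en.html">Census again</a>
<a href="https://www.example.com/">Example</a>
'''
soup = BeautifulSoup(html, 'html.parser')

unique_urls = set()
for link in soup.find_all('a', attrs={'href': re.compile("^http")}):
    url = link.get('href')               # just the URL, not the whole <a> tag
    if url.endswith('/') or url.endswith('#'):
        url = url[:-1]                   # drop the trailing '/' or '#'
    unique_urls.add(url)                 # a set silently skips duplicates

# Write each unique URL on its own line, once, after the loop.
with open('parsed_data.csv', 'w') as f:
    for url in sorted(unique_urls):
        f.write(url + '\n')

print(unique_urls)
# {'https://www.census.gov/en.html', 'https://www.example.com'}  (order may vary)
```

The `with` statement also closes the file exactly once, after everything has been written, instead of touching it on every pass through the loop as in the question's code.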