How to avoid concatenation errors in Python 3

Date: 2018-11-07 15:52:31

Tags: python web-scraping concatenation python-3.6 string-concatenation

I am running into a concatenation problem.

I am trying to extract (company name) + (phone number) + (address) + (website URL). Everything works for the first three elements, but the "website URL" is a problem.

Specifically, when I extract the content to a text file, all the website URLs end up grouped at the top and do not match the right businesses. When I print to the command prompt, everything matches the correct business.

It is hard to explain... so I attached two screenshots (links below). In the Excel document, the URLs underlined in red are not in the right place; they should be further down.

Here is how I do the concatenation:

try:
    print("list if contains websites")

    for i in range(0, min(len(freeNames), len(fullPhones), len(fullStreets), len(fullWebsites))):
        c = ' ~ ' + freeNames[i] + ' ~ ' + fullPhones[i] + ' ~ ' + fullStreets[i] + ' ~ ' + fullWebsites[i] + ' ~ '
        contents.append(c)
        print(c)
        trustedprotxtfile.write(c + '\n')
except Exception as e:
    print(e)
    pass

try:
    print("list if no websites")

    for i in range(min(len(freeNames), len(fullPhones), len(fullStreets), len(fullWebsites)), max(len(freeNames), len(fullPhones), len(fullStreets))):
        c = ' ~ ' + freeNames[i] + ' ~ ' + fullPhones[i] + ' ~ ' + fullStreets[i] + ' ~ '
        contents.append(c)
        print(c)
        trustedprotxtfile.write(c + '\n')
except Exception as e:
    print(e)
    pass

Do you know how to fix this problem?

Thank you very much for your help.

2 answers:

Answer 0 (score: 0)

If you can, I would suggest using the CSV format: Python handles it easily, as do most spreadsheet programs.

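For example, a minimal sketch of writing the scraped data with Python's built-in csv module, reusing the freeNames / fullPhones / fullStreets / fullWebsites lists from the question (the output path is a placeholder):

import csv
from itertools import zip_longest

# Placeholder output path; one row per business, padded with '' when a field is missing.
with open("trustedpros.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "phone", "street", "website"])
    for name, phone, street, site in zip_longest(freeNames, fullPhones, fullStreets, fullWebsites, fillvalue=""):
        writer.writerow([name, phone, street, site])

Note that this only keeps every row at four columns; it does not by itself re-attach a website to the right business if the lists were collected out of order.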

Answer 1 (score: 0)

[In reply to Sam Mason]

Here is the full code I used:

Here is the list of imported libraries: (re, selenium, lxml, urllib3, numpy, BeautifulSoup)
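Judging from the names used below (webdriver, requests, bs, html, time), the import block was presumably something along these lines; this is a sketch, not the original header:

import re        # listed among the libraries, though not visible in the snippet below
import time
import requests
from bs4 import BeautifulSoup as bs
from lxml import html
from selenium import webdriver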

browser = webdriver.Chrome("/Users/gdeange1/dev/chromedriver")

trustedprotxtfile = open("/Users/gdeange1/Dev/trustedpros/test.txt", "w+", encoding='utf-8')

links = ['ns/halifax', ]

for l in links:
    link = "https://trustedpros.ca/" + l

driver = browser.get("https://trustedpros.ca/" + l)


page0 = requests.get(link)
soup0 = bs(page0.content, "lxml")


nextpages = soup0.findAll('div', attrs={'class': 'paging-sec'})


pagination = []

if nextpages:
    for ul in nextpages:
        for li in ul.find_all('li'):
            liText = li.text
            if liText != '-':
                pagination.append(int(liText)) 


maxpagination = max(pagination)



freeNames = [] 
fullPhones = []
fullStreets = []
fullWebsites = []


i = 0
while i < maxpagination:
    time.sleep(1)
    i += 1    


    try:
        inputElement = browser.find_elements_by_xpath('//*[@id="final-search"]/div/div[1]/div[2]/a')
        allLinksTim = [];
        for url in inputElement:
            allLinksTim.append(url.get_attribute("href"))
    except:
        pass


    for eachLink in allLinksTim:
        driver = browser.get(eachLink)
        page = requests.get(eachLink)
        tree = html.fromstring(page.content)
        soup = bs(page.content, "lxml")


        try:
            namess = browser.find_elements_by_class_name('name-alt')
            if len(namess) > 0:

                for name in namess:
                    freeNames.append(name.text)
                    print(name.text)
            else:

                names = browser.find_elements_by_class_name('name-altimg')
                for names1 in names:
                    freeNames.append(names1.text)
                    print(names1.text)
        except:
            print("Error while trying to get the names")
            pass


        try:
            phones = browser.find_elements_by_class_name('taptel')
            if phones:
                for phone in phones:
                    fullPhones.append(phone.text)
                    print(phone.text)
            else:
                print("No phones found")
        except:
            print('Error while trying to get the phones')
            pass


        try:
            streets = browser.find_elements_by_class_name('address')
            if streets:
                for street in streets:
                    fullStreets.append(street.text)
                    print(street.text)
            else:
                print("No street address found")
        except:
            print('Error while trying to get the streets')
            pass


        try:
            websites = soup.findAll('div', attrs={'class': 'contact-prom'})
            #print('Entered the Div!')
            if websites:
                for div in websites:
                    for url in div.find_all('a'):
                        if url.has_attr('target'):
                            fullWebsites.append(url['href'])
                            print(url['href'])
            else:
                print("No websites found")

        except:
            print('Error while trying to get the websites')
            pass


        browser.back()

    inputElement = browser.find_element_by_class_name('next-page')
    inputElement.click()


contents = []      


print("Size of free names: ", len(freeNames))
print("Size of full phones: ", len(fullPhones))
print("Size of full streets: ", len(fullStreets))
print("Size of full websites: ", len(fullWebsites))



try:
    print("list with everything")

    for i in range(min(len(freeNames),len(fullPhones),len(fullStreets),len(fullWebsites))):
        c = ' ~ '  + freeNames[i] + ' ~ ' + fullPhones[i] + ' ~ ' + fullStreets[i] + ' ~ '  + fullWebsites[i] + ' ~ '
        contents.append(c)
        print(c)
        trustedprotxtfile.write(c + '\n')
except:
    print('not working 1')
    pass

try:
    print("list without websites")

    for i in range(min(len(freeNames),len(fullPhones),len(fullStreets),len(fullWebsites)), max(len(freeNames),len(fullPhones),len(fullStreets))):
        c = ' ~ '  + freeNames[i] + ' ~ ' + fullPhones[i] + ' ~ ' + fullStreets[i] + ' ~ '
        contents.append(c)
        print(c)
        trustedprotxtfile.write(c + '\n')
except:
    print('not working')
    pass

print("[Retrieval is over, thank you for waiting!]")
trustedprotxtfile.close()
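As a side note, the misalignment comes from filling four independent lists whose lengths and orders can drift apart. A minimal sketch of how the body of the "for eachLink in allLinksTim:" loop above could instead build one record per detail page; it reuses the browser and soup objects from that loop, keeps the same selectors, and assumes a records = [] list initialized before the loop:

# Inside the eachLink loop: one dict per detail page instead of four parallel lists.
record = {'name': '', 'phone': '', 'street': '', 'website': ''}

names = browser.find_elements_by_class_name('name-alt') or \
        browser.find_elements_by_class_name('name-altimg')
if names:
    record['name'] = names[0].text

phones = browser.find_elements_by_class_name('taptel')
if phones:
    record['phone'] = phones[0].text

streets = browser.find_elements_by_class_name('address')
if streets:
    record['street'] = streets[0].text

for div in soup.findAll('div', attrs={'class': 'contact-prom'}):
    for a in div.find_all('a'):
        if a.has_attr('target'):
            record['website'] = a.get('href', '')

records.append(record)

Each record then stays aligned: a page without a website simply keeps an empty 'website' field, and the final write-out no longer needs the min/max index arithmetic.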