Scraping to a text file, but it won't write without duplicating. Python 3.7 ChromeDriver BS4

Time: 2018-12-18 22:06:08

Tags: python python-3.x web-scraping beautifulsoup selenium-chromedriver

This code was working 4-5 hours ago, and now it is duplicating what I want written to the file. The obvious things I tried were commenting out either the file.write line or the print line below it, which results in a blank text file. I also tried various arguments such as a+, a, w and w+ while one of those 2 lines was commented out. Hopefully someone can figure out what my problem is and help me fix it.
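In case it helps, this is roughly how those modes behave when the script is run more than once (the file name and text here are just placeholders, not the ones from my script):

chapter_text = 'chapter body...'

# 'a' / 'a+' append, so running the script a second time writes the same
# chapter again:
with open('chapter.txt', 'a+') as file:
    print(chapter_text, file=file)

# 'w' / 'w+' truncate the file first, so each run leaves exactly one copy:
with open('chapter.txt', 'w') as file:
    print(chapter_text, file=file)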

The other question I want to ask is how to navigate to the next chapter after copying the current one, but I'll open a new question for that if I need to. Also, if you have any suggestions to make the code better, minus the def, I'll add that later (after the script is finished).

Here is the code so far:

#! python3
import requests
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.chrome.options import Options

def Close():
    driver.stop_client()
    driver.close()
    driver.quit()

CHROMEDRIVER_PATH = r'E:\Downloads\chromedriver_win32\chromedriver.exe'  # raw string so the backslashes aren't treated as escapes

# start raw html
NovelName = 'Novel/Isekai-Maou-to-Shoukan-Shoujo-Dorei-Majutsu'
BaseURL = 'https://novelplanet.com/'
url = '%(U)s%(N)s' % {'U': BaseURL, 'N': NovelName}  # BaseURL already ends with '/'

options = Options()
options.add_experimental_option("excludeSwitches",["ignore-certificate-errors"])
options.add_argument("--headless") # Runs Chrome in headless mode.
options.add_argument('--no-sandbox') # Bypass OS security model
options.add_argument('--disable-gpu')  # applicable to windows os only
options.add_argument('start-maximized')
options.add_argument('disable-infobars')
options.add_argument("--disable-extensions")
driver = webdriver.Chrome(CHROMEDRIVER_PATH, options=options)
driver.get(url)

# wait for the title to no longer be "Please wait 5 seconds..."
wait = WebDriverWait(driver, 10)
wait.until(lambda driver: driver.title != "Please wait 5 seconds...")

soup = BeautifulSoup(driver.page_source, 'html.parser')
# End raw html

# Start get first chapter html coded
i = 0
for chapterLink in soup.find_all(class_='rowChapter'):
    i += 1
# after the loop, chapterLink is the last row in the chapter list,
# which on this site is the first chapter
cLink = chapterLink.find('a').contents[0].strip()
print(driver.title)
# end get first chapter html coded

# start navigate to first chapter
link = driver.find_element_by_link_text(cLink)
link.click()
# end navigate to first chapter

# start copy of chapter and add to a file
wait = WebDriverWait(driver, 10)
wait.until(lambda driver: driver.title != "Please wait 5 seconds...")
print(driver.title)
soup = BeautifulSoup(driver.page_source, 'html.parser')
readables = soup.find(id='divReadContent')
text = readables.text.strip().replace('○','0').replace('×','x').replace('《',' <<').replace('》','>> ').replace('「','"').replace('」','"')
name = driver.title
file_name = (name.replace('Read ',"").replace(' - NovelPlanet',"")+'.txt')
print(file_name)

with open(file_name,'a+') as file:
    print(text,file=file)

lastURL = driver.current_url.replace('https://novelplanet.com','')
# end copy of chapter and add to a file

# start goto next chapter if exists then return to copy chapter else Close()

# end goto next chapter if exists then return to copy chapter else Close()

Close()
#EOF

Edit: Changed the code above to use the suggestion below. It took me about an hour to realize a modifier could be used there, considering that information isn't in the documentation, which is also why I strayed from the simple path.
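For anyone else who lands here, this is the bit I mean: until() accepts any callable that receives the driver, so the lambda above can also be swapped for one of the built-in expected conditions (the "Read" prefix is just what the chapter titles on this site start with):

from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)

# a plain callable, as in the script above...
wait.until(lambda driver: driver.title != "Please wait 5 seconds...")

# ...or a built-in expected condition doing roughly the same job
wait.until(EC.title_contains("Read"))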

Now to figure out how to navigate the pages. There are 6 of <div class="4u 12u(small)">; the 2nd and 5th are combo/option boxes, which I doubt are easy to work with. The 1st and 4th are Previous Chapter and the 3rd and 6th are Next Chapter. When the Previous or Next button has nowhere to go, they just say <div class="4u 12u(small)">&nbsp;</div>. Does anyone know of a way to pick the 3rd or 6th of those 6, and a way to kill the program when it reaches the end? Something along the lines of the sketch below is what I have in mind.
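Rough, untested idea (the class name is the one quoted above; that the Next div wraps an <a> tag and that the dead-end check only needs to look for the non-breaking space are assumptions on my part):

# grab all six navigation divs and take the 3rd one as "Next Chapter"
nav_divs = driver.find_elements_by_css_selector('div[class="4u 12u(small)"]')
next_div = nav_divs[2]  # 3rd of the 6 divs described above

if next_div.text.strip() in ('', '\xa0'):
    # only the &nbsp; placeholder is left, so there is no next chapter
    Close()
else:
    # assumes the Next Chapter link is an <a> inside the div
    next_div.find_element_by_tag_name('a').click()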

0 Answers:

There are no answers yet.