Problem deploying a script to AWS Lambda

Asked: 2019-01-20 18:51:07

Tags: python amazon-web-services selenium firefox aws-lambda

The problem I'm running into is that I'm trying to run a script that uses Selenium, specifically webdriver:

driver = webdriver.Firefox(executable_path='numpy-test/geckodriver', options=options, service_log_path='/dev/null')

My problem is that the function needs geckodriver in order to run. Geckodriver is included in the zip file I uploaded to AWS, but I don't know how to get the function to access it there. Locally this isn't an issue, since the driver is in my working directory, so everything works fine.
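On Lambda, the deployment zip is unpacked under `/var/task` and relative paths resolve against the runtime's working directory, so building an absolute path is safer than the relative `numpy-test/geckodriver` used above. A minimal sketch of that idea (the `numpy-test/geckodriver` layout comes from the question; `LAMBDA_TASK_ROOT` is a standard Lambda environment variable, and the `chmod` is an assumption to restore an executable bit that zipping may have stripped):

```python
import os
import stat

# Lambda unpacks the deployment zip under /var/task; LAMBDA_TASK_ROOT
# points there at runtime (fall back to "." when running locally).
task_root = os.environ.get("LAMBDA_TASK_ROOT", ".")
driver_path = os.path.join(task_root, "numpy-test", "geckodriver")

# Files can lose their executable bit when zipped on some platforms,
# so restore it before handing the path to Selenium.
if os.path.exists(driver_path):
    st = os.stat(driver_path)
    os.chmod(driver_path, st.st_mode | stat.S_IEXEC)

# driver = webdriver.Firefox(executable_path=driver_path, options=options)
```

The commented-out last line shows where the resolved path would replace the relative one in the original call.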

When I run the function through Serverless, I get the following error message:


    {
        "errorMessage": "Message: 'geckodriver' executable needs to be in PATH.\n",
        "errorType": "WebDriverException",
        "stackTrace": [
            [
                "/var/task/handler.py",
                66,
                "main",
                "print(TatamiClearanceScrape())"
            ],
            [
                "/var/task/handler.py",
                28,
                "TatamiClearanceScrape",
                "driver = webdriver.Firefox(executable_path='numpy-test/geckodriver', options=options, service_log_path='/dev/null')"
            ],
            [
                "/var/task/selenium/webdriver/firefox/webdriver.py",
                164,
                "__init__",
                "self.service.start()"
            ],
            [
                "/var/task/selenium/webdriver/common/service.py",
                83,
                "start",
                "os.path.basename(self.path), self.start_error_message)"
            ]
        ]
    }

Error --------------------------------------------------

Invoked function failed
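The error literally asks for geckodriver to be on `PATH`, so another option is to prepend the bundled driver's directory to `PATH` before constructing the driver. A sketch under the question's zip layout (the `numpy-test` directory name is from the question; `LAMBDA_TASK_ROOT` is a standard Lambda environment variable):

```python
import os

# The deployment zip is unpacked under /var/task; put the directory that
# contains geckodriver on PATH so Selenium can find the binary by name.
driver_dir = os.path.join(os.environ.get("LAMBDA_TASK_ROOT", "/var/task"), "numpy-test")
os.environ["PATH"] = driver_dir + os.pathsep + os.environ.get("PATH", "")

# After this, webdriver.Firefox(options=options) can locate geckodriver
# without an explicit executable_path argument.
```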

Any help would be greatly appreciated.

EDIT:

import datetime
import time

import requests
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.firefox.options import Options


def TatamiClearanceScrape():
    options = Options()
    options.add_argument('--headless')

    page_link = 'https://www.tatamifightwear.com/collections/clearance'
    # this is the url that we've already determined is safe and legal to scrape from.
    page_response = requests.get(page_link, timeout=5)
    # here, we fetch the content from the url, using the requests library
    page_content = BeautifulSoup(page_response.content, "html.parser")

    driver = webdriver.Firefox(executable_path='numpy-test/geckodriver', options=options, service_log_path='/dev/null')
    driver.get(page_link)

    # dismiss the pop-up, then load the full product list
    labtnx = driver.find_element_by_css_selector('a.btn.close')
    labtnx.click()
    time.sleep(10)
    labtn = driver.find_element_by_css_selector('div.padding')
    labtn.click()
    time.sleep(5)
    # wait(driver, 50).until(lambda x: len(driver.find_elements_by_css_selector("div.detailscontainer")) > 30)
    html = driver.page_source
    page_content = BeautifulSoup(html, "html.parser")
    # we use the html parser to parse the rendered page and store it in a variable.

    product_title = page_content.findAll(attrs={'class': "product-title"})  # all product titles on the site
    old_price = page_content.findAll(attrs={'class': "old-price"})
    new_price = page_content.findAll(attrs={'class': "special-price"})

    products = []
    for i in range(len(product_title) - 2):
        # group each product into a dictionary with name, old price and new price
        product = {"Product Name": product_title[i].get_text(strip=True),
                   "Old Price": old_price[i].get_text(strip=True),
                   "New Price": new_price[i].get_text(),
                   "date": str(datetime.datetime.now())}
        products.append(product)

    return products

1 Answer:

Answer 0 (score: 0)

You may want to take a look at AWS Lambda Layers. With layers, your function can use libraries without you having to include them in the deployment package. Layers also save you from re-uploading your dependencies on every code change: just create one additional layer containing all the required packages.
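Layer contents are mounted under `/opt` at runtime, so a handler that ships geckodriver in a layer would resolve its path there rather than inside the deployment package. A sketch of that lookup (the `bin/geckodriver` layout inside the layer is an assumed convention, and the `numpy-test` fallback path comes from the question):

```python
import os

# Lambda mounts layer contents under /opt; a common convention is to put
# executables in the layer's bin/ directory so they land at /opt/bin.
GECKODRIVER_LAYER_PATH = "/opt/bin/geckodriver"

def resolve_geckodriver():
    """Prefer the layer copy if present, else fall back to the package copy."""
    if os.path.exists(GECKODRIVER_LAYER_PATH):
        return GECKODRIVER_LAYER_PATH
    task_root = os.environ.get("LAMBDA_TASK_ROOT", ".")
    return os.path.join(task_root, "numpy-test", "geckodriver")
```

The resolved path would then be passed to `webdriver.Firefox(executable_path=...)` as in the question's code.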

Read more details about AWS Lambda Layers in the AWS documentation.