Nowhere in the documentation can I find a way to check the run status of a crawler. Currently, the only way I can do this is to keep pinging AWS to check whether the file/table has been created.
Is there a better way to block until the crawler finishes its run?
Answer 0: (score: 6)
The following function uses boto3. It starts the AWS Glue crawler and waits for it to complete, logging the state as it progresses. It was tested with Python v3.8 and boto3 v1.17.3.
import logging
import time
import timeit

import boto3

log = logging.getLogger(__name__)


def run_crawler(crawler: str, *, timeout_minutes: int = 120, retry_seconds: int = 5) -> None:
    """Run the specified AWS Glue crawler, waiting until completion."""
    # Ref: https://stackoverflow.com/a/66072347/
    timeout_seconds = timeout_minutes * 60
    client = boto3.client("glue")
    start_time = timeit.default_timer()
    abort_time = start_time + timeout_seconds

    def wait_until_ready() -> None:
        state_previous = None
        while True:
            response_get = client.get_crawler(Name=crawler)
            state = response_get["Crawler"]["State"]
            if state != state_previous:
                log.info(f"Crawler {crawler} is {state.lower()}.")
                state_previous = state
            if state == "READY":  # Other known states: RUNNING, STOPPING
                return
            if timeit.default_timer() > abort_time:
                raise TimeoutError(f"Failed to crawl {crawler}. The allocated time of {timeout_minutes:,} minutes has elapsed.")
            time.sleep(retry_seconds)

    wait_until_ready()  # Wait first in case the crawler is already running.
    response_start = client.start_crawler(Name=crawler)
    assert response_start["ResponseMetadata"]["HTTPStatusCode"] == 200
    log.info(f"Crawling {crawler}.")
    wait_until_ready()
    log.info(f"Crawled {crawler}.")
Optional bonus: a function that creates or updates an AWS Glue crawler with some sensible defaults:
from typing import Any


def ensure_crawler(**kwargs: Any) -> None:
    """Ensure that the specified AWS Glue crawler exists with the given configuration.

    At minimum the `Name` and `Targets` keyword arguments are required.
    """
    # Merge the given arguments over these defaults.
    assert all(kwargs.get(k) for k in ("Name", "Targets"))
    defaults = {
        "Role": "AWSGlueRole",
        "DatabaseName": kwargs["Name"],
        "SchemaChangePolicy": {"UpdateBehavior": "UPDATE_IN_DATABASE", "DeleteBehavior": "DELETE_FROM_DATABASE"},
        "RecrawlPolicy": {"RecrawlBehavior": "CRAWL_EVERYTHING"},
        "LineageConfiguration": {"CrawlerLineageSettings": "DISABLE"},
    }
    kwargs = {**defaults, **kwargs}

    # Create the crawler, or update it if it already exists.
    client = boto3.client("glue")
    name = kwargs["Name"]
    try:
        response = client.create_crawler(**kwargs)
        log.info(f"Created crawler {name}.")
    except client.exceptions.AlreadyExistsException:
        response = client.update_crawler(**kwargs)
        log.info(f"Updated crawler {name}.")
    assert response["ResponseMetadata"]["HTTPStatusCode"] == 200
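
As a sketch of how the two functions could be combined, assuming a hypothetical crawler name and S3 path (neither value is from this answer):

ensure_crawler(
    Name="my-crawler",  # Hypothetical crawler name.
    Targets={"S3Targets": [{"Path": "s3://my-bucket/my-prefix/"}]},  # Hypothetical S3 path.
)
run_crawler("my-crawler")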
Answer 1: (score: 0)
You can do this with boto3 (or something similar). There is a get_crawler method, and you will find the information you need in the "LastCrawl" section of its response.
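
For example, a minimal sketch of that approach, assuming a hypothetical crawler name (per the boto3 docs, LastCrawl describes the most recent run and its Status is one of SUCCEEDED, CANCELLED, or FAILED):

import boto3

client = boto3.client("glue")
# "my-crawler" is a hypothetical name; LastCrawl is absent until the first run completes.
last_crawl = client.get_crawler(Name="my-crawler")["Crawler"].get("LastCrawl", {})
print(last_crawl.get("Status"))  # e.g. "SUCCEEDED", "CANCELLED", or "FAILED"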