Unable to run tasks on Azure Batch: after startup, nodes go into the Unusable state

Date: 2018-10-15 11:24:48

Tags: python-3.x azure-cli azure-batch

I am trying to parallelize a Python application using Azure Batch. The workflow I follow in the Python client script is:

1) Upload local files to an Azure blob container (the input container) using the blobxfer utility.

2) After logging in with a service principal account via azure-cli, kick off the Batch service to process the files in the input container.

3) The Python application, distributed across the nodes by Azure Batch, uploads the files to an output container.

The issue I am facing is very similar to the one I read about here, but unfortunately no solution is given in that post: Nodes go into Unusable State

I will now provide the relevant information so that you can reproduce this error:

The image used for Azure Batch is a custom one.

1) Ubuntu Server 18.04 LTS was chosen as the OS for the VM, and the following ports were opened (ssh, http, https). The rest of the settings were kept at their defaults in the Azure portal.

2) The following script was run once the server was available.

sudo apt-get install build-essential checkinstall -y
sudo apt-get install libreadline-gplv2-dev libncursesw5-dev libssl-dev \
    libsqlite3-dev tk-dev libgdbm-dev libc6-dev libbz2-dev -y
cd /usr/src
sudo wget https://www.python.org/ftp/python/3.6.6/Python-3.6.6.tgz
sudo tar xzf Python-3.6.6.tgz
cd Python-3.6.6
sudo ./configure --enable-optimizations
sudo make altinstall
sudo pip3.6 install --upgrade pip
sudo pip3.6 install pymupdf==1.13.20
sudo pip3.6 install tqdm==4.19.9
sudo pip3.6 install sentry-sdk==0.4.1
sudo pip3.6 install blobxfer==1.5.0
sudo pip3.6 install azure-cli==2.0.47

3) An image of this server was created using the process outlined in this link: Creating VM Image in Azure Linux. During deprovisioning, the user was not deleted: sudo waagent -deprovision
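
For reference, a sketch of that capture flow using the standard az CLI (the VM name is a placeholder and the exact commands are an assumption based on the linked article, not taken from this post):

# On the VM: generalize the OS; omitting "+user" keeps the admin account, as noted above
sudo waagent -deprovision

# From the client: deallocate, mark as generalized, and capture the image
az vm deallocate --resource-group <RESOURCE-GROUP-NAME> --name <VM-NAME>
az vm generalize --resource-group <RESOURCE-GROUP-NAME> --name <VM-NAME>
az image create --resource-group <RESOURCE-GROUP-NAME> --name <VM-IMAGE> --source <VM-NAME>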

4) The resource ID of the image was recorded from the Azure portal; it is supplied as one of the arguments to the Python client script.
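
The same resource ID can also be retrieved with the CLI instead of the portal; a one-line sketch with placeholder names:

az image show --resource-group <RESOURCE-GROUP-NAME> --name <VM-IMAGE> --query id --output tsv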

Packages installed on the client server that runs the Batch Python script:

sudo pip3.6 install tqdm==4.19.9
sudo pip3.6 install sentry-sdk==0.4.1
sudo pip3.6 install blobxfer==1.5.0
sudo pip3.6 install azure-cli==2.0.47
sudo pip3.6 install pandas==0.22.0

The resources used in Azure Batch were created as follows:

1) A service principal account with contributor privileges was created using the cmd:

$az ad sp create-for-rbac --name <SERVICE-PRINCIPAL-ACCOUNT>

2) The resource group, Batch account, and the storage associated with the Batch account were created as follows:

$ az group create --name <RESOURCE-GROUP-NAME> --location eastus2
$ az storage account create --resource-group <RESOURCE-GROUP-NAME> --name <STORAGE-ACCOUNT-NAME> --location eastus2 --sku Standard_LRS
$ az batch account create --name <BATCH-ACCOUNT-NAME> --storage-account <STORAGE-ACCOUNT-NAME> --resource-group <RESOURCE-GROUP-NAME> --location eastus2

Client Python script used to kick off the upload and processing: (Update 3)

import subprocess
import os
import time
import datetime
import tqdm
import pandas
import sys
import fitz
import parmap
import numpy as np
import sentry_sdk
import multiprocessing as mp


def batch_upload_local_to_azure_blob(azure_username,azure_password,azure_tenant,azure_storage_account,azure_storage_account_key,log_dir_path):
    try:
        subprocess.check_output(["az","login","--service-principal","--username",azure_username,"--password",azure_password,"--tenant",azure_tenant])
    except subprocess.CalledProcessError:
        sentry_sdk.capture_message("Invalid Azure Login Credentials")
        sys.exit("Invalid Azure Login Credentials")
    dir_flag=False
    while dir_flag==False:
        try:
            no_of_dir=input("Enter the number of directories to upload:")
            no_of_dir=int(no_of_dir)
            if no_of_dir<0:
                print("\nRetry:Enter an integer value")   
            else: 
                dir_flag=True
        except ValueError:
            print("\nRetry:Enter an integer value")
    dir_path_list=[]
    for dir in range(no_of_dir):
        path_exists=False
        while path_exists==False:
            dir_path=input("\nEnter the local absolute path of the directory no.{}:".format(dir+1))
            print("\n")
            dir_path=dir_path.replace('"',"")
            path_exists=os.path.isdir(dir_path)
            if path_exists==True:
                dir_path_list.append(dir_path)
            else:
                print("\nRetry:Enter a valid directory path")
    timestamp = time.time()
    timestamp_humanreadable= datetime.datetime.fromtimestamp(timestamp).strftime('%Y-%m-%d-%H-%M-%S')
    input_azure_container="pdf-processing-input"+"-"+timestamp_humanreadable
    try:
        subprocess.check_output(["az","storage","container","create","--name",input_azure_container,"--account-name",azure_storage_account,"--auth-mode","login","--fail-on-exist"])
    except subprocess.CalledProcessError:
        sentry_sdk.capture_message("Invalid Azure Storage Credentials.")
        sys.exit("Invalid Azure Storage Credentials.")
    log_file_path=os.path.join(log_dir_path,"upload-logs"+"-"+timestamp_humanreadable+".txt")
    dir_upload_success=[]
    dir_upload_failure=[]
    for dir in tqdm.tqdm(dir_path_list,desc="Uploading Directories"):
        try:
            subprocess.check_output(["blobxfer","upload","--remote-path",input_azure_container,"--storage-account",azure_storage_account,\
            "--enable-azure-storage-logger","--log-file",\
            log_file_path,"--storage-account-key",azure_storage_account_key,"--local-path",dir]) 
            dir_upload_success.append(dir)
        except subprocess.CalledProcessError:
            sentry_sdk.capture_message("Failed to upload directory: {}".format(dir))
            dir_upload_failure.append(dir)
    return(input_azure_container)

def query_azure_storage(azure_storage_container,azure_storage_account,azure_storage_account_key,blob_file_path):
    try:
        blob_list=subprocess.check_output(["az","storage","blob","list","--container-name",azure_storage_container,\
        "--account-key",azure_storage_account_key,"--account-name",azure_storage_account,"--auth-mode","login","--output","tsv"])
        blob_list=blob_list.decode("utf-8")
        with open(blob_file_path,"w") as f:
            f.write(blob_list)
        blob_df=pandas.read_csv(blob_file_path,sep="\t",header=None)
        blob_df=blob_df.iloc[:,3]
        blob_df=blob_df.to_frame(name="container_files")
        blob_df=blob_df.assign(container=azure_storage_container)
        return(blob_df)
    except subprocess.CalledProcessError:
        sentry_sdk.capture_message("Invalid Azure Storage Credentials")
        sys.exit("Invalid Azure Storage Credentials.")

def analyze_files_for_tasks(data_split,azure_storage_container,azure_storage_account,azure_storage_account_key,download_folder):
    try:
        blob_df=data_split
        some_calculation_factor=2
        analyzed_azure_blob_df=pandas.DataFrame()
        analyzed_azure_blob_df=analyzed_azure_blob_df.assign(container="empty",container_files="empty",pages="empty",max_time="empty")
        for index,row in blob_df.iterrows():
            file_to_analyze=os.path.join(download_folder,row["container_files"])
            subprocess.check_output(["az","storage","blob","download","--container-name",azure_storage_container,"--file",file_to_analyze,"--name",row["container_files"],\
            "--account-name",azure_storage_account,"--auth-mode","key"])        #Why does login auth not work for this while we are multiprocessing
            doc=fitz.open(file_to_analyze)
            page_count=doc.pageCount
            analyzed_azure_blob_df=analyzed_azure_blob_df.append([{"container":azure_storage_container,"container_files":row["container_files"],"pages":page_count,"max_time":some_calculation_factor*page_count}])
            doc.close()
            os.remove(file_to_analyze)
        return(analyzed_azure_blob_df)
    except Exception as e:
        sentry_sdk.capture_exception(e)


def estimate_task_completion_time(azure_storage_container,azure_storage_account,azure_storage_account_key,azure_blob_df,azure_blob_downloads_file_path):
    try: 
        cores=mp.cpu_count()                                           #Number of CPU cores on your system
        partitions = cores-2  
        timestamp = time.time()
        timestamp_humanreadable= datetime.datetime.fromtimestamp(timestamp).strftime('%Y-%m-%d-%H-%M-%S')
        file_download_location=os.path.join(azure_blob_downloads_file_path,"Blob_Download"+"-"+timestamp_humanreadable)
        os.mkdir(file_download_location)
        data_split = np.array_split(azure_blob_df,indices_or_sections=partitions,axis=0)
        analyzed_azure_blob_df=pandas.concat(parmap.map(analyze_files_for_tasks,data_split,azure_storage_container,azure_storage_account,azure_storage_account_key,file_download_location,\
        pm_pbar=True,pm_processes=partitions))
        analyzed_azure_blob_df=analyzed_azure_blob_df.reset_index(drop=True)
        return(analyzed_azure_blob_df)
    except Exception as e:
        sentry_sdk.capture_exception(e)
        sys.exit("Unable to Estimate Job Completion Status")

def azure_batch_create_pool(azure_storage_container,azure_resource_group,azure_batch_account,azure_batch_account_endpoint,azure_batch_account_key,vm_image_name,no_nodes,vm_compute_size,analyzed_azure_blob_df):
    timestamp = time.time()
    timestamp_humanreadable= datetime.datetime.fromtimestamp(timestamp).strftime('%Y-%m-%d-%H-%M-%S')
    pool_id="pdf-processing"+"-"+timestamp_humanreadable
    try:
        subprocess.check_output(["az","batch","account","login","--name", azure_batch_account,"--resource-group",azure_resource_group])
    except subprocess.CalledProcessError:
        sentry_sdk.capture_message("Unable to log into the Batch account")
        sys.exit("Unable to log into the Batch account")
    #Pool autoscaling formula would go in here
    try:
        subprocess.check_output(["az","batch","pool","create","--account-endpoint",azure_batch_account_endpoint, \
        "--account-key",azure_batch_account_key,"--account-name",azure_batch_account,"--id",pool_id,\
        "--node-agent-sku-id","batch.node.ubuntu 18.04",\
        "--image",vm_image_name,"--target-low-priority-nodes",str(no_nodes),"--vm-size",vm_compute_size])
        return(pool_id)
    except subprocess.CalledProcessError:
        sentry_sdk.capture_message("Unable to create a Pool corresponding to Container:{}".format(azure_storage_container))
        sys.exit("Unable to create a Pool corresponding to Container:{}".format(azure_storage_container))

def azure_batch_create_job(azure_batch_account,azure_batch_account_key,azure_batch_account_endpoint,pool_info):
    timestamp = time.time()
    timestamp_humanreadable= datetime.datetime.fromtimestamp(timestamp).strftime('%Y-%m-%d-%H-%M-%S')
    job_id="pdf-processing-job"+"-"+timestamp_humanreadable
    try:
        subprocess.check_output(["az","batch","job","create","--account-endpoint",azure_batch_account_endpoint,"--account-key",\
        azure_batch_account_key,"--account-name",azure_batch_account,"--id",job_id,"--pool-id",pool_info])
        return(job_id)
    except subprocess.CalledProcessError:
        sentry_sdk.capture_message("Unable to create a Job on the Pool :{}".format(pool_info))
        sys.exit("Unable to create a Job on the Pool :{}".format(pool_info))

def azure_batch_create_task(azure_batch_account,azure_batch_account_key,azure_batch_account_endpoint,pool_info,job_info,azure_storage_account,azure_storage_account_key,azure_storage_container,analyzed_azure_blob_df):
    print("\n")
    for i in tqdm.tqdm(range(180),desc="Waiting for the Pool to Warm-up"):
        time.sleep(1)
    successful_task_list=[]
    unsuccessful_task_list=[]
    input_azure_container=azure_storage_container 
    output_azure_container= "pdf-processing-output"+"-"+input_azure_container.split("-input-")[-1]
    try:
        subprocess.check_output(["az","storage","container","create","--name",output_azure_container,"--account-name",azure_storage_account,"--auth-mode","login","--fail-on-exist"])
    except subprocess.CalledProcessError:
        sentry_sdk.capture_message("Unable to create an output container")
        sys.exit("Unable to create an output container")
    print("\n")
    pbar = tqdm.tqdm(total=analyzed_azure_blob_df.shape[0],desc="Creating and distributing Tasks")
    for index,row in analyzed_azure_blob_df.iterrows():
        try:
            task_info="mytask-"+str(index)
            subprocess.check_output(["az","batch","task","create","--task-id",task_info,"--job-id",job_info,"--command-line",\
            "python3 /home/avadhut/pdf_processing.py {} {} {}".format(input_azure_container,output_azure_container,row["container_files"])])
            pbar.update(1)
        except subprocess.CalledProcessError:
            sentry_sdk.capture_message("unable to create the Task: mytask-{}".format(i))
            pbar.update(1)
    pbar.close()

def wait_for_tasks_to_complete(azure_batch_account,azure_batch_account_key,azure_batch_account_endpoint,job_info,task_file_path,analyzed_azure_blob_df):
        try:
            print(analyzed_azure_blob_df)
            nrows_tasks_df=analyzed_azure_blob_df.shape[0]
            print("\n")
            pbar=tqdm.tqdm(total=nrows_tasks_df,desc="Waiting for task to complete")
            for index,row in analyzed_azure_blob_df.iterrows():
                task_list=subprocess.check_output(["az","batch","task","list","--job-id",job_info,"--account-endpoint",azure_batch_account_endpoint,"--account-key",azure_batch_account_key,"--account-name",azure_batch_account,\
                "--output","tsv"])
                task_list=task_list.decode("utf-8")
                with open(task_file_path,"w") as f:
                    f.write(task_list)
                task_df=pandas.read_csv(task_file_path,sep="\t",header=None)
                task_df=task_df.iloc[:,21]
                active_task_list=[]
                for x in task_df:
                    if x =="active":
                        active_task_list.append(x)
                if len(active_task_list)>0:
                    time.sleep(row["max_time"])  #This time can be changed in accordance with the time taken to complete each task
                    pbar.update(1)
                    continue
                else:
                    pbar.close()
                    return("success")
            pbar.close()
            return("failure")
        except subprocess.CalledProcessError:
            sentry_sdk.capture_message("Error in retrieving task status")

def azure_delete_job(azure_batch_account,azure_batch_account_key,azure_batch_account_endpoint,job_info):
    try:
        subprocess.check_output(["az","batch","job","delete","--job-id",job_info,"--account-endpoint",azure_batch_account_endpoint,"--account-key",azure_batch_account_key,"--account-name",azure_batch_account,"--yes"])
    except subprocess.CalledProcessError:
        sentry_sdk.capture_message("Unable to delete Job-{}".format(job_info))

def azure_delete_pool(azure_batch_account,azure_batch_account_key,azure_batch_account_endpoint,pool_info):
    try:
        subprocess.check_output(["az","batch","pool","delete","--pool-id",pool_info,"--account-endpoint",azure_batch_account_endpoint,"--account-key",azure_batch_account_key,"--account-name",azure_batch_account,"--yes"])
    except subprocess.CalledProcessError:
        sentry_sdk.capture_message("Unable to delete Pool--{}".format(pool_info))

if __name__=="__main__":
    print("\n")
    print("-"*40+"Azure Batch processing POC"+"-"*40)
    print("\n")

    #Credentials and initializations
    sentry_sdk.init(<SENTRY-CREDENTIALS>) #Sign-up for a Sentry trail account
    azure_username=<AZURE-USERNAME>
    azure_password=<AZURE-PASSWORD>
    azure_tenant=<AZURE-TENANT>
    azure_resource_group=<RESOURCE-GROUP-NAME>
    azure_storage_account=<STORAGE-ACCOUNT-NAME>
    azure_storage_account_key=<STORAGE-KEY>
    azure_batch_account_endpoint=<BATCH-ENDPOINT>
    azure_batch_account_key=<BATCH-ACCOUNT-KEY>
    azure_batch_account=<BATCH-ACCOUNT-NAME>
    vm_image_name=<VM-IMAGE>
    vm_compute_size="Standard_A4_v2"
    no_nodes=2
    log_dir_path="/home/user/azure_batch_upload_logs/"
    azure_blob_downloads_file_path="/home/user/blob_downloads/"
    blob_file_path="/home/user/azure_batch_upload.tsv"
    task_file_path="/home/user/azure_task_list.tsv"


    input_azure_container=batch_upload_local_to_azure_blob(azure_username,azure_password,azure_tenant,azure_storage_account,azure_storage_account_key,log_dir_path)

    azure_blob_df=query_azure_storage(input_azure_container,azure_storage_account,azure_storage_account_key,blob_file_path)

    analyzed_azure_blob_df=estimate_task_completion_time(input_azure_container,azure_storage_account,azure_storage_account_key,azure_blob_df,azure_blob_downloads_file_path)

    pool_info=azure_batch_create_pool(input_azure_container,azure_resource_group,azure_batch_account,azure_batch_account_endpoint,azure_batch_account_key,vm_image_name,no_nodes,vm_compute_size,analyzed_azure_blob_df)

    job_info=azure_batch_create_job(azure_batch_account,azure_batch_account_key,azure_batch_account_endpoint,pool_info)

    azure_batch_create_task(azure_batch_account,azure_batch_account_key,azure_batch_account_endpoint,pool_info,job_info,azure_storage_account,azure_storage_account_key,input_azure_container,analyzed_azure_blob_df)

    task_status=wait_for_tasks_to_complete(azure_batch_account,azure_batch_account_key,azure_batch_account_endpoint,job_info,task_file_path,analyzed_azure_blob_df)

    if task_status=="success":
        azure_delete_job(azure_batch_account,azure_batch_account_key,azure_batch_account_endpoint,job_info)
        azure_delete_pool(azure_batch_account,azure_batch_account_key,azure_batch_account_endpoint,pool_info)
        print("\n\n")
        sys.exit("Job Complete")
    else:
        azure_delete_job(azure_batch_account,azure_batch_account_key,azure_batch_account_endpoint,job_info)
        azure_delete_pool(azure_batch_account,azure_batch_account_key,azure_batch_account_endpoint,pool_info)
        print("\n\n")
        sys.exit("Job Unsuccessful")

cmd used to create the zip file:

zip pdf_process_1.zip pdf_processing.py

The Python app that is packaged as a zip file and uploaded via the client script:

(Update 3)

import os
import fitz
import subprocess
import argparse
import time
from tqdm import tqdm
import sentry_sdk
import sys
import datetime

def azure_active_directory_login(azure_username,azure_password,azure_tenant):
    try:
        azure_login_output=subprocess.check_output(["az","login","--service-principal","--username",azure_username,"--password",azure_password,"--tenant",azure_tenant])
    except subprocess.CalledProcessError:
        sentry_sdk.capture_message("Invalid Azure Login Credentials")
        sys.exit("Invalid Azure Login Credentials")

def download_from_azure_blob(azure_storage_account,azure_storage_account_key,input_azure_container,file_to_process,pdf_docs_path):
    file_to_download=os.path.join(input_azure_container,file_to_process)
    try:
        subprocess.check_output(["az","storage","blob","download","--container-name",input_azure_container,"--file",os.path.join(pdf_docs_path,file_to_process),"--name",file_to_process,"--account-key",azure_storage_account_key,\
        "--account-name",azure_storage_account,"--auth-mode","login"])
    except subprocess.CalledProcessError:
        sentry_sdk.capture_message("unable to download the pdf file")
        sys.exit("unable to download the pdf file")

def pdf_to_png(input_folder_path,output_folder_path):
    pdf_files=[x for x in os.listdir(input_folder_path) if x.endswith((".pdf",".PDF"))]
    pdf_files.sort()
    for pdf in tqdm(pdf_files,desc="pdf--->png"):
        doc=fitz.open(os.path.join(input_folder_path,pdf))
        page_count=doc.pageCount
        for f in range(page_count):
            page=doc.loadPage(f)
            pix = page.getPixmap()
            if pdf.endswith(".pdf"):
                png_filename=pdf.split(".pdf")[0]+"___"+"page---"+str(f)+".png"
                pix.writePNG(os.path.join(output_folder_path,png_filename))
            elif pdf.endswith(".PDF"):
                png_filename=pdf.split(".PDF")[0]+"___"+"page---"+str(f)+".png"
                pix.writePNG(os.path.join(output_folder_path,png_filename))


def upload_to_azure_blob(azure_storage_account,azure_storage_account_key,output_azure_container,png_docs_path):
    try:
        subprocess.check_output(["az","storage","blob","upload-batch","--destination",output_azure_container,"--source",png_docs_path,"--account-key",azure_storage_account_key,\
        "--account-name",azure_storage_account,"--auth-mode","login"])
    except subprocess.CalledProcessError:
        sentry_sdk.capture_message("Unable to upload file to the container")


if __name__=="__main__":
    #Credentials 
    sentry_sdk.init(<SENTRY-CREDENTIALS>)
    azure_username=<AZURE-USERNAME>
    azure_password=<AZURE-PASSWORD>
    azure_tenant=<AZURE-TENANT>
    azure_storage_account=<AZURE-STORAGE-NAME>
    azure_storage_account_key=<AZURE-STORAGE-KEY>
    try:
        parser = argparse.ArgumentParser()
        parser.add_argument("input_azure_container",type=str,help="Location to download files from")
        parser.add_argument("output_azure_container",type=str,help="Location to upload files to")
        parser.add_argument("file_to_process",type=str,help="file link in azure blob storage")
        args = parser.parse_args()
        timestamp = time.time()
        timestamp_humanreadable= datetime.datetime.fromtimestamp(timestamp).strftime('%Y-%m-%d-%H-%M-%S')
        task_working_dir=os.getcwd()
        file_to_process=args.file_to_process
        input_azure_container=args.input_azure_container
        output_azure_container=args.output_azure_container
        pdf_docs_path=os.path.join(task_working_dir,"pdf_files"+"-"+timestamp_humanreadable)
        png_docs_path=os.path.join(task_working_dir,"png_files"+"-"+timestamp_humanreadable)
        os.mkdir(pdf_docs_path)
        os.mkdir(png_docs_path)
    except Exception as e:
        sentry_sdk.capture_exception(e)
    azure_active_directory_login(azure_username,azure_password,azure_tenant)
    download_from_azure_blob(azure_storage_account,azure_storage_account_key,input_azure_container,file_to_process,pdf_docs_path)
    pdf_to_png(pdf_docs_path,png_docs_path)
    upload_to_azure_blob(azure_storage_account,azure_storage_account_key,output_azure_container,png_docs_path)

Update 1: I have resolved the error where the server nodes go into the Unusable state. The way I solved this issue:

1) I did not set up a Python 3.6 env on Ubuntu using the cmds mentioned above, since Ubuntu 18.04 LTS comes with its own Python 3 environment. Initially I had googled "installing Python 3 on Ubuntu" and got this Python 3.6 installation on Ubuntu link. This step was avoided entirely during server setup. All I did this time was install these packages:

sudo apt-get install -y python3-pip
sudo -H pip3 install tqdm==4.19.9
sudo -H pip3 install sentry-sdk==0.4.1
sudo -H pip3 install blobxfer==1.5.0
sudo -H pip3 install pandas==0.22.0

Installed the Azure CLI on the machine using the cmds in this link: Install Azure CLI with apt

2) Created a snapshot of the OS disk, then created an image from that snapshot, and finally referenced that image in the client script.
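
A sketch of that snapshot-to-image flow (the query path and placeholder names are my assumptions, not values from the post):

# Get the OS disk id of the deallocated VM, snapshot it, then build an image
OS_DISK_ID=$(az vm show --resource-group <RESOURCE-GROUP-NAME> --name <VM-NAME> --query "storageProfile.osDisk.managedDisk.id" --output tsv)
az snapshot create --resource-group <RESOURCE-GROUP-NAME> --name <SNAPSHOT-NAME> --source "$OS_DISK_ID"
az image create --resource-group <RESOURCE-GROUP-NAME> --name <VM-IMAGE> --source <SNAPSHOT-NAME> --os-type Linux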

I am now facing another issue: the stderr.txt file on the node tells me:

  python3: can't open file '$AZ_BATCH_APP_PACKAGE_pdfprocessingapp/pdf_processing.py': [Errno 2] No such file or directory

Logging into the server with a random user, I see that the directory _azbatch was created, but there is no content inside it.


I know for certain that it is the command line of the azure_batch_create_task() function where things are going awry, but I can't put my finger on it. I have done everything suggested in this documentation: Install app packages to Azure Batch Compute Nodes. Please review my client Python script and let me know what I am doing wrong!
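
One possible cause, offered as an assumption rather than a confirmed diagnosis: Batch executes the task command line directly, not under a shell, so $AZ_BATCH_APP_PACKAGE_pdfprocessingapp is never expanded. Wrapping the command in /bin/bash -c lets the shell expand it. A sketch of how the --command-line value built in azure_batch_create_task() could look (the exact variable name depends on the application id and version, so verify it on a node with env):

# Sketch: wrap the command in a shell so the app-package variable expands on the node
command_line=("/bin/bash -c "
    "'python3 $AZ_BATCH_APP_PACKAGE_pdfprocessingapp/pdf_processing.py {} {} {}'"
    .format(input_azure_container,output_azure_container,row["container_files"]))
subprocess.check_output(["az","batch","task","create","--task-id",task_info,\
"--job-id",job_info,"--command-line",command_line])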

Edit 3: This issue looks very similar to the one described in this post: No directory structure seen

Update 2:

I was able to get past the file/directory-not-found error using a dirty hack that I am not very fond of: I placed the Python app in the home directory of the user used to create the VM image, and created all the directories needed for processing in the task's working directory.

I would still like to know how to run the workflow by deploying it to the nodes the application-package way.

Update 3

I have updated the client code and the Python app to reflect the latest changes. The important things remain the same.....

I will address the points raised by @fparks below.

The original Python app I intend to use with Azure Batch contains many modules and some config files, along with a fairly long requirements.txt file of Python packages; Azure also recommends using custom images in such cases. Downloading the Python modules for every task is also somewhat unreasonable in my case, since 1 task equals one multi-page pdf and my expected workload is 25k+ multi-page pdfs. I used the CLI because the documentation for the Python SDK was sparse and hard to follow. Nodes going into the Unusable state has been resolved. I do agree on the blobxfer error.

1 Answer:

Answer 0: (score: 0)

An answer and some observations:

  1. It is unclear to me why you need a custom image. You could use a platform image, i.e., Canonical, UbuntuServer, 18.04-LTS, and simply install what you need in a start task. Python 3.6 can be installed simply via apt in 18.04. You may be prematurely optimizing your workflow by opting for a custom image when, in fact, a platform image plus a start task may be faster and more stable.
  2. Your script is in Python, yet you are invoking the Azure CLI. You may want to consider using the Azure Batch Python SDK instead (samples).
  3. When nodes go unusable, you should first check the node for errors: see whether the ComputeNodeError field is populated. Additionally, you can try fetching the stdout.txt and stderr.txt files from the startup directory to diagnose what is going on. You can do both in the Azure portal or via Batch Explorer. If that doesn't work, you can fetch the compute node service logs and file a support request. Typically, however, unusable means that your custom image is configured incorrectly, the NSG of your virtual network is misconfigured, or an application package is bad.
  4. Your application package consists of a single Python file; use a resource file instead. Simply upload the script to an Azure storage blob and reference it as a Resource File in the task via a SAS URL. If using the CLI, see the --resource-files argument of az batch task create. That way, the command you invoke is just python3 pdf_processing.py (assuming you download the resource file into the task working directory; see the sketch after this list).
  5. If you insist on using an application package, consider using a task application package instead. This decouples node start-up issues that might arise from a bad application package from debugging task execution.
  6. The blobxfer error is pretty clear: your locale is not set properly. The easy fix is to set environment variables for the task. If using the CLI, see the --environment-settings argument and set two environment variables, LC_ALL=C.UTF-8 and LANG=C.UTF-8, on the task (also shown in the sketch below).
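
To make points 4 and 6 concrete, a minimal sketch of a task created with a resource file and the locale variables set. The job id, storage account, container, and SAS token are placeholders, not values from the question:

az batch task create \
    --job-id <JOB-ID> \
    --task-id mytask-0 \
    --command-line "python3 pdf_processing.py <INPUT-CONTAINER> <OUTPUT-CONTAINER> <FILE-NAME>" \
    --resource-files "pdf_processing.py=https://<STORAGE-ACCOUNT-NAME>.blob.core.windows.net/scripts/pdf_processing.py?<SAS-TOKEN>" \
    --environment-settings "LC_ALL=C.UTF-8" "LANG=C.UTF-8"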