I'd like to split some large JSON files (~1 GB each) stored in blob storage into individual files (one file per record).
I tried using get_blob_to_stream from the Azure Python SDK, but I'm getting the following error:
AzureHttpError: Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
For testing I'm just printing the text downloaded from the blob; I haven't tried writing it back out to individual JSON files yet:
with BytesIO() as document:
    block_blob_service = BlockBlobService(account_name=STORAGE_ACCOUNT_NAME, account_key=STORAGE_ACCOUNT_KEY)
    block_blob_service.get_blob_to_stream(container_name=CONTAINER_NAME, blob_name=BLOB_ID, stream=document)
    print(document.getvalue())
Interestingly, when I limit the size of the blob data to download, the error message doesn't appear and I can get some of the data back:
with BytesIO() as document:
    block_blob_service = BlockBlobService(account_name=STORAGE_ACCOUNT_NAME, account_key=STORAGE_ACCOUNT_KEY)
    block_blob_service.get_blob_to_stream(container_name=CONTAINER_NAME, blob_name=BLOB_ID, stream=document, start_range=0, end_range=100000)
    print(document.getvalue())
Does anyone know what's going on here, or is there a better way to split a large JSON?
Thanks!
Answer 0 (score: 0)
The error message "Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature" is usually received when the header is not formed correctly. When you get this error you will receive the following:
<?xml version="1.0" encoding="utf-8"?>
<Error>
<Code>AuthenticationFailed</Code>
<Message>Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
RequestId:096c6d73-f01e-0054-6816-e8eaed000000
Time:2019-03-31T23:08:43.6593937Z</Message>
<AuthenticationErrorDetail>Authentication scheme Bearer is not supported in this version.</AuthenticationErrorDetail>
</Error>
The way to resolve this issue is to add the header below:
x-ms-version: 2017-11-09
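If you need to pin that version with the legacy azure-storage SDK, one option is to inject the header through the client's request_callback hook. This is only a minimal sketch: the hook and the mutable request.headers dict shown here are assumptions that should be checked against your installed SDK version.
from azure.storage.blob import BlockBlobService

block_blob_service = BlockBlobService(account_name=STORAGE_ACCOUNT_NAME,
                                      account_key=STORAGE_ACCOUNT_KEY)

def force_api_version(request):
    # Assumption: the request object exposes a mutable headers dict.
    request.headers['x-ms-version'] = '2017-11-09'

# request_callback (legacy SDK hook) runs right before each request is sent.
block_blob_service.request_callback = force_api_version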
But since you say it works when you limit the size, it means you have to write your code using the chunked approach. Here is something you can try:
import io
import datetime
from azure.storage.blob import BlockBlobService

acc_name = 'myaccount'
acc_key = 'my key'
container = 'storeai'
blob = "orderingai2.csv"

block_blob_service = BlockBlobService(account_name=acc_name, account_key=acc_key)
props = block_blob_service.get_blob_properties(container, blob)
blob_size = int(props.properties.content_length)

index = 0
chunk_size = 104858  # ~0.1 MB; don't make this too big or you will get a memory error
output = io.BytesIO()

def worker(data):
    # Process one downloaded chunk; here we just print it.
    print(data)

while index < blob_size:
    now_chunk = datetime.datetime.now()
    # Download the next byte range of the blob into the shared stream.
    block_blob_service.get_blob_to_stream(container, blob, stream=output,
                                          start_range=index,
                                          end_range=index + chunk_size - 1,
                                          max_connections=50)
    if output is None:
        continue
    # Read back only the bytes that were just appended to the stream.
    output.seek(index)
    data = output.read()
    length = len(data)
    index += length
    if length > 0:
        worker(data)
        if length < chunk_size:
            break
    else:
        break
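As for the original goal of one file per record: assuming the blob is newline-delimited JSON (one record per line), which is only an assumption here, a minimal sketch could replace worker above so that it buffers partial lines across chunks and writes each complete record to its own blob with create_blob_from_text (the output container and blob naming are made up for illustration):
import json

out_container = 'records-out'   # hypothetical target container
leftover = b''                  # holds a partial record that spans two chunks
record_no = 0

def worker(data):
    global leftover, record_no
    leftover += data
    lines = leftover.split(b'\n')
    leftover = lines.pop()      # last element may be an incomplete record
    for line in lines:
        if not line.strip():
            continue
        record = json.loads(line)  # make sure the record parses
        blob_name = 'record-{}.json'.format(record_no)
        block_blob_service.create_blob_from_text(out_container, blob_name,
                                                 json.dumps(record))
        record_no += 1

# After the download loop finishes, anything left in `leftover` is the final record.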
Hope it helps.