I have a project that uses aiohttp and aiobotocore to work with resources in AWS. I am trying to test a class that works with AWS S3, and I am using moto to mock AWS. Mocking works fine with synchronous code (this example is from the moto docs):
import boto3
from moto import mock_s3


class MyModel(object):
    def __init__(self, name, value):
        self.name = name
        self.value = value

    def save(self):
        s3 = boto3.client('s3', region_name='us-east-1')
        s3.put_object(Bucket='mybucket', Key=self.name, Body=self.value)


def test_my_model_save():
    with mock_s3():
        conn = boto3.resource('s3', region_name='us-east-1')
        conn.create_bucket(Bucket='mybucket')

        model_instance = MyModel('steve', 'is awesome')
        model_instance.save()

        body = conn.Object('mybucket', 'steve').get()['Body'].read().decode("utf-8")
        assert body == 'is awesome'
However, after rewriting the code to use aiobotocore, the mocking no longer works — in my example it connects to the real AWS S3.
import aiobotocore
import asyncio
import boto3
from moto import mock_s3


class MyModel(object):
    def __init__(self, name, value):
        self.name = name
        self.value = value

    async def save(self, loop):
        session = aiobotocore.get_session(loop=loop)
        s3 = session.create_client('s3', region_name='us-east-1')
        await s3.put_object(Bucket='mybucket', Key=self.name, Body=self.value)


def test_my_model_save():
    with mock_s3():
        conn = boto3.resource('s3', region_name='us-east-1')
        conn.create_bucket(Bucket='mybucket')

        loop = asyncio.get_event_loop()
        model_instance = MyModel('steve', 'is awesome')
        loop.run_until_complete(model_instance.save(loop=loop))

        body = conn.Object('mybucket', 'steve').get()['Body'].read().decode("utf-8")
        assert body == 'is awesome'
So my assumption is that moto does not work with aiobotocore. How can I effectively mock AWS resources if my source code looks like the second example?
Answer 0 (score: 3)
moto's mocks do not work here because they patch the synchronous botocore API.
However, you can start a moto server and configure aiobotocore to connect to that test server. Take a look at the aiobotocore tests for inspiration.
Answer 1 (score: 1)
Here is aiobotocore's mock_server.py, without the pytest parts:
# Initially from https://raw.githubusercontent.com/aio-libs/aiobotocore/master/tests/mock_server.py
import shutil
import signal
import subprocess as sp
import sys
import time

import requests

_proxy_bypass = {
    "http": None,
    "https": None,
}


def start_service(service_name, host, port):
    moto_svr_path = shutil.which("moto_server")
    args = [sys.executable, moto_svr_path, service_name, "-H", host,
            "-p", str(port)]
    process = sp.Popen(args, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.DEVNULL)
    url = "http://{host}:{port}".format(host=host, port=port)

    for _ in range(30):
        if process.poll() is not None:
            break
        try:
            # we need to bypass the proxies due to monkeypatches
            requests.get(url, timeout=0.1, proxies=_proxy_bypass)
            break
        except requests.exceptions.RequestException:
            time.sleep(0.1)
    else:
        stop_process(process)
        raise AssertionError("Can not start service: {}".format(service_name))

    return process


def stop_process(process, timeout=20):
    try:
        process.send_signal(signal.SIGTERM)
        process.communicate(timeout=timeout / 2)
    except sp.TimeoutExpired:
        process.kill()
        outs, errors = process.communicate(timeout=timeout / 2)
        exit_code = process.returncode
        msg = "Child process finished {} not in clean way: {} {}" \
            .format(exit_code, outs, errors)
        raise RuntimeError(msg)
Answer 2 (score: 1)
Stubbing the AWS client also solves the problem. Here is how I handle an AWS read operation in a Tornado application:
import io

import aiobotocore
import tornado.testing
from botocore.stub import Stubber
from tornado.testing import AsyncTestCase
from aiobotocore.response import StreamingBody


class RawStream(io.BytesIO):
    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        pass

    async def read(self, n):
        return super().read(n)


class S3TestCase(AsyncTestCase):
    def setUp(self):
        super().setUp()
        self.binary_content = b"file contents"
        session = aiobotocore.get_session()
        self.client = session.create_client("s3", region_name="AWS_S3_REGION",
                                            aws_secret_access_key="AWS_SECRET_ACCESS_KEY",
                                            aws_access_key_id="AWS_ACCESS_KEY_ID")

    @tornado.testing.gen_test
    async def test_read(self):
        stubber = Stubber(self.client)
        stubber.add_response("get_object",
                             {"Body": StreamingBody(raw_stream=RawStream(self.binary_content),
                                                    content_length=128),
                              "ContentLength": 128},
                             expected_params={"Bucket": "AWS_S3_BUCKET",
                                              "Key": "filename"})
        stubber.activate()
        response = await self.client.get_object(Bucket="AWS_S3_BUCKET", Key="filename")
Write operations should work similarly. Hope this points you in the right direction.
More information about stubs: https://botocore.amazonaws.com/v1/documentation/api/latest/reference/stubber.html
Answer 3 (score: 1)
I think Sebastian Brestin's answer should be the accepted one. I am posting this new answer because a few things have changed since it was posted: Python 3.8 now supports async test cases, and the aioboto3 client is now a context manager.
A minimal example using Python 3.8 looks like this:
from unittest import IsolatedAsyncioTestCase

import aioboto3
from botocore.stub import Stubber


class Test(IsolatedAsyncioTestCase):
    async def asyncSetUp(self):
        self._s3_client = await aioboto3.client('s3').__aenter__()
        self._s3_stub = Stubber(self._s3_client)

    async def asyncTearDown(self):
        await self._s3_client.__aexit__(None, None, None)

    async def test_case(self):
        self._s3_stub.add_response(
            "get_object",
            {"Body": "content"},
            expected_params={"Bucket": "AWS_S3_BUCKET", "Key": "filename"}
        )
        self._s3_stub.activate()
        response = await self._s3_client.get_object(Bucket="AWS_S3_BUCKET", Key="filename")
        self.assertEqual(response["Body"], "content")
Answer 4 (score: 0)
We can use moto[server] to spin up an S3 server and build a pytest fixture around it, similar to what aioboto3 does:
@pytest.yield_fixture(scope='session')
def s3_server():
    host = 'localhost'
    port = 5002
    url = 'http://{host}:{port}'.format(host=host, port=port)
    process = start_service('s3', host, port)
    yield url
    stop_process(process)
Then use patch('aiobotocore.AioSession.create_client') with a return_value of:

aiobotocore.get_session().create_client('s3', region_name='us-east-1', endpoint_url=s3_server)