I have a test with a setup method that should receive a dataset, and a test function that should run once for each data item in the dataset.
Basically I need something like this:
datasetA = [data1_a, data2_a, data3_a]
datasetB = [data1_b, data2_b, data3_b]

@pytest.fixture(autouse=True, scope="module", params=[datasetA, datasetB])
def setup(dataset):
    #do setup
    yield
    #finalize

#dataset should be the same instantiated for the setup
@pytest.mark.parametrize('data', [data for data in dataset])
def test_data(data):
    #do test
And it should run one test for each data item of each dataset. However, it does not seem possible to parametrize over a variable obtained from a fixture, as I would like to do in the example above.
I could make my test function use the fixture and iterate over it inside the test method:
def test_data(dataset):
    for data in dataset:
        #do test
But then I would get one big test per dataset instead of an individual test for each case, which is not what I want.
Is there any way to achieve this?
Thanks!
Answer 0 (score: 2)
Answer #1: If you follow the test design strictly as described, it should look like this:
import pytest

datasetA = [10, 20, 30]
datasetB = [100, 200, 300]

@pytest.fixture
def dataset(request):
    #do setup
    items = request.param
    yield items
    #finalize

@pytest.fixture
def item(request, dataset):
    index = request.param
    yield dataset[index]

#dataset should be the same instantiated for the setup
@pytest.mark.parametrize('dataset', [datasetA, datasetB], indirect=True)
@pytest.mark.parametrize('item', [0, 1, 2], indirect=True)
def test_data(dataset, item):
    print(item)
    #do test
Note the indirect parametrization of dataset & item. The parameter values are passed to the same-named fixture as request.param. In this case, we use the indexes, assuming that the datasets are of the same length of 3 items.
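For reference, here is a minimal self-contained sketch of how indirect=True hands a parameter value over to the fixture as request.param (the doubled fixture and its values are made up purely for illustration):

import pytest

@pytest.fixture
def doubled(request):
    # With indirect parametrization the raw parameter value arrives here
    # as request.param; the test receives whatever the fixture yields.
    yield request.param * 2

@pytest.mark.parametrize('doubled', [1, 2, 3], indirect=True)
def test_doubled(doubled):
    assert doubled in (2, 4, 6)

Each parameter value produces its own test, and any setup/finalize code around the yield runs once per parameter value.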
Answer 1 (score: 1)
Answer #2: You can also hook into pytest's collection & parametrization stages via a pseudo-plugin named conftest.py in the current directory:
conftest.py:
import pytest

datasetA = [100, 200, 300]
datasetB = [10, 20, 30]

def pytest_generate_tests(metafunc):
    if 'data' in metafunc.fixturenames:
        for datasetname, dataset in zip(['A', 'B'], [datasetA, datasetB]):
            for data in dataset:
                metafunc.addcall(dict(data=data), id=datasetname+str(data))
test_me.py:
def test_data(data):
    print(data)
    #do test
Execution:
$ pytest -ra -v -s test_me.py
test_me.py::test_data[A100] 100
PASSED
test_me.py::test_data[A200] 200
PASSED
test_me.py::test_data[A300] 300
PASSED
test_me.py::test_data[B10] 10
PASSED
test_me.py::test_data[B20] 20
PASSED
test_me.py::test_data[B30] 30
PASSED
However, it is hard to make dataset indirect (i.e. accessible through a fixture with setup and teardown stages), because metafunc.addcall() does not support indirect parameters. The only way to use indirect=... is via metafunc.parametrize(). But in that case, since the datasets can be of different sizes, you have to build the whole list of dataset-data item pairs:
conftest.py:
import pytest

datasetA = [100, 200, 300]
datasetB = [10, 20, 30]
datasets = [datasetA, datasetB]

def pytest_generate_tests(metafunc):
    if 'data' in metafunc.fixturenames:
        metafunc.parametrize('dataset, data', [
            (dataset, data)
            for dataset in datasets
            for data in dataset
        ], indirect=['dataset'], ids=[
            'DS{}-{}'.format(idx, str(data))
            for idx, dataset in enumerate(datasets)
            for data in dataset
        ])

@pytest.fixture()
def dataset(request):
    #do setup
    yield request.param
    #finalize
test_me.py:
def test_data(dataset, data):
    print(data)
    #do test
Execution:
$ pytest -ra -v -s test_me.py
test_me.py::test_data[DS0-100] 100
PASSED
test_me.py::test_data[DS0-200] 200
PASSED
test_me.py::test_data[DS0-300] 300
PASSED
test_me.py::test_data[DS1-10] 10
PASSED
test_me.py::test_data[DS1-20] 20
PASSED
test_me.py::test_data[DS1-30] 30
PASSED
Answer 2 (score: 1)
Sergey's answer seems incomplete to me, because it relies on the fact that both datasets have the same number of items, so that they can be parametrized with the same index parameter equal to range(3).
Here is another, more general answer that allows each dataset to have its own number of items. It relies on the new version 2.0.0 of pytest-cases, which is a great improvement over previous versions (I keep my older answer below, as it also answers a few other questions on this page). Don't you find this much simpler?
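As a rough sketch of this kind of union-based solution, one can combine the fixture_union and pytest_fixture_plus helpers from pytest-cases (the decorator names below match the version used in the next answer and may be shorter in newer releases; the fixture names item_from_A/item_from_B and the dataset contents are made up for illustration):

import pytest
from pytest_cases import fixture_union, pytest_fixture_plus

# Two datasets of different sizes (contents are illustrative only)
DA = ['data1_a', 'data2_a', 'data3_a']
DB = ['data1_b', 'data2_b']

# One item fixture per dataset, parametrized over that dataset's own indices.
# A module-scoped dataset fixture with setup/teardown could be added as a
# dependency of each of these, as in the full example further below.
@pytest_fixture_plus(scope="module")
@pytest.mark.parametrize('data_index', range(len(DA)), ids="idx={}".format)
def item_from_A(data_index):
    return DA[data_index]

@pytest_fixture_plus(scope="module")
@pytest.mark.parametrize('data_index', range(len(DB)), ids="idx={}".format)
def item_from_B(data_index):
    return DB[data_index]

# 'data' takes every value of item_from_A, then every value of item_from_B,
# producing one test per data item (5 tests in total here).
data = fixture_union('data', [item_from_A, item_from_B])

def test_data(data):
    print(data)
    #do test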
Answer 3 (score: 0)
pytest-cases provides two ways to solve this problem:
@cases_data, a decorator that can be used on a test function or a fixture to make it source its parameters from various "case functions" (possibly located in various modules, and possibly themselves parametrized). The problem is that "case functions" are not fixtures, so they do not let you benefit from dependencies and from the setup/teardown mechanism. I would rather use it to collect various cases from the file system.
The more recent, but more "pytest-y", fixture_union. It allows you to create a fixture that is the union of two or more fixtures. This includes setup/teardown and dependencies, so it is what you want here. You can create the union explicitly, or create it by using pytest_parametrize_plus together with fixture_ref() in the parameter values.
Here is how your example could look:
import pytest
from pytest_cases import pytest_parametrize_plus, pytest_fixture_plus, fixture_ref

# ------ Dataset A
DA = ['data1_a', 'data2_a', 'data3_a']
DA_data_indices = list(range(len(DA)))

@pytest_fixture_plus(scope="module")
def datasetA():
    print("setting up dataset A")
    yield DA
    print("tearing down dataset A")

@pytest_fixture_plus(scope="module")
@pytest.mark.parametrize('data_index', DA_data_indices, ids="idx={}".format)
def data_from_datasetA(datasetA, data_index):
    return datasetA[data_index]

# ------ Dataset B
DB = ['data1_b', 'data2_b']
DB_data_indices = list(range(len(DB)))

@pytest_fixture_plus(scope="module")
def datasetB():
    print("setting up dataset B")
    yield DB
    print("tearing down dataset B")

@pytest_fixture_plus(scope="module")
@pytest.mark.parametrize('data_index', range(len(DB)), ids="idx={}".format)
def data_from_datasetB(datasetB, data_index):
    return datasetB[data_index]

# ------ Test
@pytest_parametrize_plus('data', [fixture_ref('data_from_datasetA'),
                                  fixture_ref('data_from_datasetB')])
def test_databases(data):
    # do test
    print(data)
Of course, you may wish to handle any number of datasets dynamically. In that case you have to generate all the alternative fixtures dynamically, because pytest has to know in advance how many tests will be executed. It works quite well:
import pytest
from makefun import with_signature
from pytest_cases import pytest_parametrize_plus, pytest_fixture_plus, fixture_ref

# ------ Datasets
datasets = {
    'DA': ['data1_a', 'data2_a', 'data3_a'],
    'DB': ['data1_b', 'data2_b']
}
datasets_indices = {dn: range(len(dc)) for dn, dc in datasets.items()}

# ------ Datasets fixture generation
def create_dataset_fixture(dataset_name):
    @pytest_fixture_plus(scope="module", name=dataset_name)
    def dataset():
        print("setting up dataset %s" % dataset_name)
        yield datasets[dataset_name]
        print("tearing down dataset %s" % dataset_name)

    return dataset

def create_data_from_dataset_fixture(dataset_name):
    @pytest_fixture_plus(name="data_from_%s" % dataset_name, scope="module")
    @pytest.mark.parametrize('data_index', dataset_indices, ids="idx={}".format)
    @with_signature("(%s, data_index)" % dataset_name)
    def data_from_dataset(data_index, **kwargs):
        dataset = kwargs.popitem()[1]
        return dataset[data_index]

    return data_from_dataset

for dataset_name, dataset_indices in datasets_indices.items():
    globals()[dataset_name] = create_dataset_fixture(dataset_name)
    globals()["data_from_%s" % dataset_name] = create_data_from_dataset_fixture(dataset_name)

# ------ Test
@pytest_parametrize_plus('data', [fixture_ref('data_from_%s' % n)
                                  for n in datasets_indices.keys()])
def test_databases(data):
    # do test
    print(data)
Both provide the same output:
setting up dataset DA
data1_a
data2_a
data3_a
tearing down dataset DA
setting up dataset DB
data1_b
data2_b
tearing down dataset DB
EDIT: if the setup/teardown steps are the same for all datasets, there might be a much simpler solution using param_fixtures. I will post it soon.
EDIT 2: actually, the simpler solution I was referring to seems to lead to multiple setups/teardowns, just as you already noted for the accepted answer:
from pytest_cases import pytest_fixture_plus, param_fixtures

# ------ Datasets
datasets = {
    'DA': ['data1_a', 'data2_a', 'data3_a'],
    'DB': ['data1_b', 'data2_b']
}
was_setup = {
    'DA': False,
    'DB': False
}
data_indices = {_dataset_name: list(range(len(_dataset_contents)))
                for _dataset_name, _dataset_contents in datasets.items()}

param_fixtures("dataset_name, data_index",
               [(_dataset_name, _data_idx) for _dataset_name in datasets
                for _data_idx in data_indices[_dataset_name]],
               scope='module')

@pytest_fixture_plus(scope="module")
def dataset(dataset_name):
    print("setting up dataset %s" % dataset_name)
    assert not was_setup[dataset_name]
    was_setup[dataset_name] = True
    yield datasets[dataset_name]
    print("tearing down dataset %s" % dataset_name)

@pytest_fixture_plus(scope="module")
def data(dataset, data_index):
    return dataset[data_index]

# ------ Test
def test_databases(data):
    # do test
    print(data)
I opened a ticket on pytest-dev to better understand why: pytest-dev#5457
See the documentation for details (by the way, I'm the author).