Writing descriptive unit tests

Date: 2016-11-22 14:30:03

Tags: javascript python unit-testing

I'm trying to write unit tests in Python and I'm having a hard time finding a descriptive way to do it. I come from a JavaScript background, where I use mocha to help me describe my tests.

Here is what I mean by "descriptive":

foo.js

exports.foo = function (type, isLogged, isOk) {
    if (type === undefined) throw new Error('type cannot be undefined');
    if (isLogged === undefined) throw new Error('isLogged cannot be undefined');
    if (isLogged) {
        if (type === 'WRITER') {
            return isOk ? "writer" : -1;
        } else {
            return "something else";
        }
    }
};

foo.spec.js

describe('#foo()', function () {
    context('when type is undefined', function () {
      ...
    })
    context('when isLogged is undefined', function () {
      ...
    })
    context('when type is defined', function () {
        context('when isLogged is not defined', function () {
         ...
        })
        context('when isLogged is defined', function () {
            context('when type is not WRITER', function () {
             ...
            })
            context('when type is WRITER', function () {
                context('when isOk is true', function () {
                 ...
                })
            })
        }) 
    })
})

When I write unit tests in Python, I end up with something like this:

foo.spec.py

class TestFoo:
    def test_when_type_is_undefined(self):
        ...
    def test_when_isLogged_is_undefined(self):
        ...
    # This test name is too long
    def test_when_type_is_defined_and_isLogged_is_undefined_and_type_is_writer_when_is_ok_is_true(self):
        ...

How can I structure these tests in a better way? What are the best practices for descriptive unit tests? Are there any good examples of well-written unit tests?

3 Answers:

Answer 0 (Score: 2)

You can use pyspecs to get a BDD-like syntax in your code.

Example

from pyspecs import given, when, then, and_, the, this

with given.two_operands:
    a = 2
    b = 3

    with when.supplied_to_the_add_function:
        total = a + b

        with then.the_total_should_be_mathmatically_correct:
            the(total).should.equal(5)

        with and_.the_total_should_be_greater_than_either_operand:
            the(total).should.be_greater_than(a)
            the(total).should.be_greater_than(b)

    with when.supplied_to_the_subtract_function:
        difference = b - a

        with then.the_difference_should_be_mathmatically_correct:
            the(difference).should.equal(1)

Output

# run_pyspecs.py

  | • given two operands 
  |   • when supplied to the add function 
  |     • then the total should be mathmatically correct 
  |     • and the total should be greater than either operand 
  |   • when supplied to the subtract function 
  |     • then the difference should be mathmatically correct 

(ok) 6 passed (6 steps, 1 scenarios in 0.0002 seconds)
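
As a rough, untested sketch, the nested contexts from the question might look something like this in the same style, assuming a hypothetical Python port of foo and using only the constructs shown above:

from pyspecs import given, when, then, the

# Sketch only: `foo` is assumed here to be a Python port of the
# question's JavaScript function, with the same arguments.
with given.a_logged_in_user:
    with when.the_type_is_WRITER_and_isOk_is_true:
        result = foo(type='WRITER', isLogged=True, isOk=True)

        with then.it_should_return_writer:
            the(result).should.equal("writer")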

Answer 1 (Score: 0)

I don't think there is anything wrong with the structure of your unit tests. Unit tests should be very descriptive, so that when a test fails the problem is immediately obvious. As a developer with very little context, a test case named test_when_type_is_defined_and_isLogged_is_undefined_and_type_is_writer_when_is_ok_is_true tells me a lot about what went wrong and where to look.

You can make the tests even more descriptive by adding an error message to the assert statement, so that when a failure occurs you know exactly why. For example: "expected writer to be ok, but writer was None".
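
A minimal sketch of such an assertion message, assuming a hypothetical Python port of the question's foo with the same arguments:

# Sketch only: `foo` and the message wording are illustrative,
# not taken from any particular library.
def test_logged_writer_that_is_ok_returns_writer():
    result = foo(type='WRITER', isLogged=True, isOk=True)
    assert result == "writer", (
        "expected a logged-in WRITER with isOk=True to get 'writer', "
        "got {!r} instead".format(result)
    )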

For me, the name of the file the test lives in, the name of the test case, and the assertion message together should give a clear path to what failed and why.

Answer 2 (Score: 0)

Having meaningful test method names is certainly important, but when the names become impractically long and unreadable, you can always put the full test description in the method's docstring.

Here are some sample tests from the requests library:

def test_cookielib_cookiejar_on_redirect(self, httpbin):
    """Tests resolve_redirect doesn't fail when merging cookies
    with non-RequestsCookieJar cookiejar.
    See GH #3579
    """
    cj = cookiejar_from_dict({'foo': 'bar'}, cookielib.CookieJar())
    s = requests.Session()
    # ...

def test_headers_on_session_with_None_are_not_sent(self, httpbin):
    """Do not send headers in Session.headers with None values."""
    ses = requests.Session()
    ses.headers['Accept-Encoding'] = None
    # ...

Note that you can see these docstrings in the console output by increasing the verbosity. Demo:

$ cat test_module.py
import unittest


class BasicTestSuite(unittest.TestCase):
    def test_one(self):
        self.assertEqual(1, 1)

    def test_two(self):
        """Extended description"""
        self.assertEqual(2, 2)

if __name__ == '__main__':
    unittest.main()

$ python -m unittest -v test_module
test_one (test_module.BasicTestSuite) ... ok
test_two (test_module.BasicTestSuite)
Extended description ... ok

----------------------------------------------------------------------
Ran 2 tests in 0.000s

OK
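
Applied to the long test name from the question, a shorter method name plus a docstring might look like this (a sketch; a Python port of foo is assumed):

class TestFoo:
    def test_logged_writer_ok(self):
        """When isLogged is true, type is 'WRITER' and isOk is true,
        foo should return "writer".
        """
        # Sketch only: `foo` is assumed to be a Python port of the
        # JavaScript function from the question.
        assert foo(type='WRITER', isLogged=True, isOk=True) == "writer"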