Spock unable to intercept a method with an optional parameter

Date: 2017-04-20 20:06:12

Tags: unit-testing groovy spock

I'm trying to update a Groovy unit test, but I'm running into a problem with the following code:

def "getDatasetRecords() returns PaginatedSearchResults with a set of DatasetRecords from the repo"() {
    setup:
    def mockRecords = []
    def originalResults = Mock(PaginatedSearchResults)

    def modelMock = Mock(Model) {
        isEmpty() >> false
        filter(_ as org.matonto.rdf.api.Resource, _ as IRI, _ as Value, null) >>> it // Also tried (*_) and (_, _, _), with and without the class casts
    }

    def recordMock = Mock(DatasetRecord)
    recordMock.getModel() >> modelMock

    7.times { mockRecords <<  recordMock }
    originalResults.getPage() >> mockRecords
    originalResults.getPageNumber() >> 1
    originalResults.getTotalSize() >> 7
    originalResults.getPageSize() >> 10
    catalogManagerMock.findRecord(*_) >>> originalResults

    expect:
    def results = service.getDatasetRecords(new DatasetPaginatedSearchParams(vf))
    results.getPage().size() == 7
    results.getPageSize() == 10
    results.getTotalSize() == 7
    results.getPageNumber() == 1
}

When I debug the code, model.filter appears to be returning null, and the call to isEmpty() then throws an NPE here:

public Optional<DatasetRecord> getExisting(Resource resource, Model model, ValueFactory valueFactory, ValueConverterRegistry valueConverterRegistry) {
    return model.filter(resource, valueFactory.createIRI(RDF_TYPE_IRI), this.getTypeIRI()).isEmpty()
            ? Optional.empty()
            : Optional.of(new DatasetRecordImpl(resource, model, valueFactory, valueConverterRegistry));
}

NPE:

getDatasetRecords() returns PaginatedSearchResults with a set of DatasetRecords from the repo(org.matonto.dataset.impl.SimpleDatasetManagerSpec)  Time elapsed: 0.028 sec  <<< ERROR!
java.lang.NullPointerException
    at org.matonto.dataset.ontology.dataset.DatasetRecordFactory.getExisting(DatasetRecordFactory.java:65)
    at org.matonto.rdf.orm.AbstractOrmFactory.getExisting(AbstractOrmFactory.java:159)
    at org.matonto.rdf.orm.AbstractOrmFactory.getExisting(AbstractOrmFactory.java:167)
    at org.matonto.dataset.pagination.DatasetRecordSearchResults.lambda$new$0(DatasetRecordSearchResults.java:46)
    at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
    at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
    at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
    at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
    at org.matonto.dataset.pagination.DatasetRecordSearchResults.<init>(DatasetRecordSearchResults.java:47)
    at org.matonto.dataset.impl.SimpleDatasetManager.getDatasetRecords(SimpleDatasetManager.java:155)
    at org.matonto.dataset.impl.SimpleDatasetManagerSpec.getDatasetRecords() returns PaginatedSearchResults with a set of DatasetRecords from the repo(SimpleDatasetManagerSpec.groovy:371)

The definition of filter:

Model filter(Resource subject, IRI predicate, Value object, Resource... context);
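
The last parameter is varargs, so the three-argument call in getExisting is legal; on the Java side the omitted context simply arrives as an empty Resource array. A sketch of the two call shapes (illustrative variable names, not code from the project):

model.filter(resource, predicate, object)            // context arrives as an empty Resource[]
model.filter(resource, predicate, object, context)   // one explicit context value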

I've tried fully specifying the method signature for the mock, both with and without the last parameter. Any help getting the mock to intercept the filter method would be appreciated.
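
Concretely, the stubbing variants I tried inside the Mock(Model) closure look roughly like this (a sketch of the attempts mentioned above):

    filter(_ as org.matonto.rdf.api.Resource, _ as IRI, _ as Value, null) >> it
    filter(_, _, _, _) >> it
    filter(_, _, _) >> it
    filter(*_) >> it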

2 Answers:

Answer 0 (score: 1):

Try it this way:

class Spec extends Specification {
    def 'mock returns itself'() {
        given:
        def mock = Mock(Model) {
            filter(*_) >> it
        }

        expect:
        mock.filter(1, 2, 3, 4) == mock
        mock.filter(1, 2, 3) == mock
    }
}

// Stand-in class with a trailing varargs parameter; the mock above supplies the behavior.
class Model {
    Model filter(a, b, c, Object... d) {
        new Model()
    }
}
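
Applied to the mock in the question, the stubbing would look roughly like this (a sketch based on the example above; inside the Mock closure, it resolves to the mock being created, so the stub returns the mock itself):

def modelMock = Mock(Model) {
    isEmpty() >> false
    filter(*_) >> it // the spread wildcard matches any number of arguments, including the trailing Resource... varargs
}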

Answer 1 (score: 1):

My code (above) was fine, apart from a few minor changes.

My problem turned out to be IntelliJ and trying to debug in there.