Why does my binary node equals method throw an assertion error?

Asked: 2018-04-29 21:54:27

Tags: java binary-tree junit4 equals

Please help me figure out why my equals method is not passing my JUnit test. I tested it in a main method and the two nodes should be exactly the same, yet I keep getting an assertion error. I even added a toString method to check, and both nodes produce exactly the same string, so why does it claim they are not equal?

1 Answer:

Answer 0: (score: 0)

First,

if(this.getLeft() == null && n1.getLeft() == null && this.getRight() == null && n1.getLeft() == null) {

is incorrect; it should be

if(this.getLeft() == null && n1.getLeft() == null && this.getRight() == null && n1.getRight() == null) {

Second, in the second and third if branches you are not comparing the left and right child nodes for equality, respectively. For example, in the second branch you only check whether the right subtrees are null and whether the left subtrees are null, but you never check whether the left subtrees are actually the same. A sketch of what that branch could check instead is shown below.
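For example, assuming the second branch covers the case where both right children are null (and leaving out any value comparison for brevity), it could look roughly like this:

if (this.getRight() == null && n1.getRight() == null) {
    // Both right sides are empty, so equality now depends on the left
    // subtrees being equal, not merely on their null-ness.
    if (this.getLeft() == null && n1.getLeft() == null) {
        return true;   // already handled by the first branch above
    }
    return this.getLeft() != null && n1.getLeft() != null
            && this.getLeft().equals(n1.getLeft());
}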

Third, it looks like you are missing some curly braces. Here is a working version of your code.

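A minimal sketch of what that working version could look like, assuming a BinaryNode class with getData(), getLeft() and getRight() accessors (these names are assumptions, not taken from the original code); it collapses the null-handling branches into a single recursive check with java.util.Objects:

import java.util.Objects;

public class BinaryNode {
    private final Object data;
    private final BinaryNode left;
    private final BinaryNode right;

    public BinaryNode(Object data, BinaryNode left, BinaryNode right) {
        this.data = data;
        this.left = left;
        this.right = right;
    }

    public Object getData() { return data; }
    public BinaryNode getLeft() { return left; }
    public BinaryNode getRight() { return right; }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;
        }
        if (!(obj instanceof BinaryNode)) {
            return false;
        }
        BinaryNode n1 = (BinaryNode) obj;
        // Objects.equals handles nulls, so every null/non-null combination of
        // children is covered without a separate if branch for each case.
        return Objects.equals(this.data, n1.getData())
                && Objects.equals(this.left, n1.getLeft())
                && Objects.equals(this.right, n1.getRight());
    }

    @Override
    public int hashCode() {
        // Always override hashCode together with equals.
        return Objects.hash(data, left, right);
    }
}

With an equals like this (and a matching hashCode), a JUnit 4 assertEquals compares the two trees by value rather than by reference, so two structurally identical nodes no longer trigger an assertion error. For example:

BinaryNode a = new BinaryNode("x", null, null);
BinaryNode b = new BinaryNode("x", null, null);
org.junit.Assert.assertEquals(a, b);   // passes: equal by value, not by reference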