How to erase an instance's type when serializing/deserializing JSON?

Date: 2017-01-20 17:06:07

Tags: java json serialization deserialization fasterxml

I am using fasterxml (Jackson) to serialize/deserialize JSON:

public class A {
    String field;
    B b;
}

public class B {
    int n;
}

I would like to get JSON in the following format:

{
  "field": "abc",
  "n": 123
}

Is this possible?

3 Answers:

Answer 0 (Score: 2)

You can use Jackson annotations to provide a specific deserializer.

The deserializer is registered on your type like this:

@JsonDeserialize(using = ADeserializer.class)
public class A {

    private String field;
    private B b;

    // ...
}
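
A minimal sketch of what ADeserializer could look like (the class name comes from the annotation above; the tree-model approach shown here is one common way to do it, and it assumes A and B live in the same package so their fields are accessible, or that setters exist):

import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.DeserializationContext;
import com.fasterxml.jackson.databind.JsonDeserializer;
import com.fasterxml.jackson.databind.JsonNode;

import java.io.IOException;

public class ADeserializer extends JsonDeserializer<A> {
    @Override
    public A deserialize(JsonParser p, DeserializationContext ctxt) throws IOException {
        // Read the flat JSON object as a tree
        JsonNode node = p.getCodec().readTree(p);

        // Rebuild the nested structure by hand
        B b = new B();
        b.n = node.get("n").asInt();

        A a = new A();
        a.field = node.get("field").asText();
        a.b = b;
        return a;
    }
}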

For serialization, you can use a custom serializer in the same way (a sketch follows). That's it.
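
A matching serializer sketch (ASerializer is a hypothetical name; it would be registered with @JsonSerialize(using = ASerializer.class) on A), writing B's property at the top level:

import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.JsonSerializer;
import com.fasterxml.jackson.databind.SerializerProvider;

import java.io.IOException;

public class ASerializer extends JsonSerializer<A> {
    @Override
    public void serialize(A a, JsonGenerator gen, SerializerProvider provider) throws IOException {
        gen.writeStartObject();
        gen.writeStringField("field", a.field);
        gen.writeNumberField("n", a.b.n);  // lift B's field to the top level
        gen.writeEndObject();
    }
}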

Answer 1 (Score: 1)

There is no way to do this in Java.

Answer 2 (Score: 1)

You just need the annotation below; no custom serializer is required:

@JsonUnwrapped

Mind the fields' accessibility (Jackson must be able to see them, e.g. via public fields or getters), otherwise this won't work.
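
Applied to the classes from the question, it could look like this (a sketch; the fields are made public here for brevity, getters work just as well):

import com.fasterxml.jackson.annotation.JsonUnwrapped;

public class A {
    public String field;

    @JsonUnwrapped   // inline B's properties into A's JSON
    public B b;
}

// In B.java; its field must also be visible to Jackson:
public class B {
    public int n;
}

With this in place, serializing an A instance through a plain ObjectMapper produces {"field":"abc","n":123}, and deserialization works symmetrically.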