Use JsonProperty for deserialization but not for serialization

Asked: 2019-08-27 09:27:57

Tags: c# json serialization deserialization

I have a backend service that I call as a REST API. I then deserialize the result of that API into my entities:

public class Item {
  [JsonProperty("pkID")]
  public int Id {get;set;}
}

JsonConvert.DeserializeObject<Item>(responseString);

That works fine. But I also want to return the result as a JSON string so the frontend can consume it. Now, when I serialize an object of type Item with JsonConvert.SerializeObject(item), I want to get back something like

{ Id: 1 }

but serialization also uses the JsonProperty name and returns

{ pkID: 1 }

instead.

How can I tell the serializer to ignore the JsonProperty when serializing, but still use it when deserializing?

I am not looking for a way to decide whether a property should be serialized at all, but for a way to control whether serialization uses the property's own name or the JsonProperty name.

2 answers:

Answer 0 (score: 3)

You can use a set-only property that points to the "real" property.

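A minimal sketch of that idea, assuming Json.NET picks up [JsonProperty] on a private member and using a hypothetical alias name PkId (any name works, since only the attribute value matters):

// using Newtonsoft.Json;
public class Item {
  public int Id { get; set; }

  // Write-only alias: used when reading JSON, but never written out,
  // because a property without a getter cannot be serialized.
  [JsonProperty("pkID")]
  private int PkId { set { Id = value; } }
}

// Deserializing maps "pkID" onto Id; serializing then emits only "Id".
var item = JsonConvert.DeserializeObject<Item>("{ \"pkID\": 1 }");
var json = JsonConvert.SerializeObject(item); // {"Id":1}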

Answer 1 (score: 0)

You can use your own ContractResolver implementation.

Here is an answer that might work: https://stackoverflow.com/a/20639697/5018895
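One possible sketch of that approach (the resolver class name here is made up, and it assumes you want the CLR property names only on the way out): a custom DefaultContractResolver overwrites the attribute-supplied name, and you pass it only to SerializeObject so deserialization keeps mapping "pkID".

// using System.Reflection;
// using Newtonsoft.Json;
// using Newtonsoft.Json.Serialization;
public class ClrNameContractResolver : DefaultContractResolver {
  protected override JsonProperty CreateProperty(MemberInfo member, MemberSerialization memberSerialization) {
    var property = base.CreateProperty(member, memberSerialization);
    property.PropertyName = member.Name; // replace the [JsonProperty] name with the C# member name
    return property;
  }
}

// Used only when serializing; deserialization without these settings still reads "pkID".
var settings = new JsonSerializerSettings { ContractResolver = new ClrNameContractResolver() };
string json = JsonConvert.SerializeObject(item, settings); // {"Id":1}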
