Getting TypeError: Expected int32, got None of type 'NoneType'

Posted: 2020-07-30 01:39:59

Tags: python-3.x tensorflow keras

I have implemented a sequence-to-sequence model with an attention layer. If I use all of the data points I don't get any error, but if I use 300000 data points I get the following error:

TypeError: Expected int32, got None of type 'NoneType' instead.


What is causing this?

The code before model.fit is:

class encoder_decoder(tf.keras.Model):
  def __init__(self,embedding_size,encoder_inputs_length,output_length,vocab_size,output_vocab_size,score_fun,units):
    super(encoder_decoder,self).__init__()
    self.vocab_size = vocab_size
    self.enc_units = units
    self.embedding_size = embedding_size
    self.encoder_inputs_length = encoder_inputs_length
    self.output_length = output_length
    self.lstm_output = 0
    self.state_h = 0
    self.state_c = 0
    self.output_vocab_size = output_vocab_size
    self.dec_units = units
    self.score_fun = score_fun
    self.att_units = units
    self.encoder=Encoder(self.vocab_size,self.embedding_size,self.enc_units,self.encoder_inputs_length)
    self.decoder = Decoder(self.output_vocab_size, self.embedding_size, self.output_length, self.dec_units ,self.score_fun ,self.att_units)
    # self.dense = Dense(self.output_vocab_size,activation = "softmax")
  
  def call(self,data):
    input,output = data[0],data[1]
    encoder_hidden = self.encoder.initialize_states(input.shape[0])
    encoder_output,encoder_hidden,encoder_cell = self.encoder(input,encoder_hidden)
    decoder_hidden = encoder_hidden
    decoder_cell = encoder_cell
    decoder_output = self.decoder(output,encoder_output,decoder_hidden,decoder_cell)
    return decoder_output
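As a side note on the line that fails: inside a traced Keras call, `input.shape[0]` is the *static* shape, which is None when the batch dimension is variable, whereas `tf.shape(input)[0]` returns the *dynamic* batch size at run time. This is a minimal sketch of the difference, assuming TensorFlow 2.x; it is my illustration, not code from the original post:

```python
import tensorflow as tf

# Under a variable batch dimension, x.shape[0] is None at trace time,
# while tf.shape(x)[0] is a tensor holding the actual batch size at run time.
@tf.function(input_signature=[tf.TensorSpec(shape=[None, 55], dtype=tf.int32)])
def dynamic_batch_size(x):
    return tf.shape(x)[0]

batch = tf.zeros([64, 55], dtype=tf.int32)
print(batch.shape[0])                  # 64: eager tensors have concrete shapes
print(int(dynamic_batch_size(batch)))  # 64, even though the traced shape was (None, 55)
```

If `initialize_states` needs the batch size, passing `tf.shape(input)[0]` instead of `input.shape[0]` would avoid the None; this is a guess at the fix based on the symptom, not something confirmed in the post.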

In the call function, I am initializing the encoder states with the number of rows of the input, using the following line:

 encoder_hidden = self.encoder.initialize_states(input.shape[0])

If I print the input, I get the shape (None, 55), which is why I am getting this error. The total number of data points here is 330614. When I use all the data I don't get the error, but when I use only 330000 data points I do. If I print a batch inside the call method, its shape is (64, 55).

Please find below my code for creating the dataset for the sequence model.

A function to preprocess the data, a function to create the dataset, and a function to load the dataset:

import io
import re

import tensorflow as tf
from sklearn.model_selection import train_test_split

def preprocess_sentence(w):
  # w = unicode_to_ascii(w.lower().strip())
  w = re.sub(r"([?.!,¿])", r" \1 ", w)
  w = re.sub(r'[" "]+', " ", w)
  w = re.sub(r"[^a-zA-Z?.!,¿]+", " ", w)
  w = w.strip()
  w = '<start> ' + w + ' <end>'
  return w

def create_dataset(path, num_examples):
  lines = io.open(path, encoding='UTF-8').read().strip().split('\n')
  # lines1 = lines[330000:]
  # lines = lines[0:323386]+lines1

  word_pairs = [[preprocess_sentence(w) for w in l.split('\t')]  for l in lines[:num_examples]]
  word_pairs = [[i[0],i[1]] for i in word_pairs]
  return zip(*word_pairs)
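For reference, the cleaning steps in preprocess_sentence can be checked in isolation; this small sketch repeats the function on a sample string (the sample input is mine, not from the original post):

```python
import re

def preprocess_sentence(w):
    # Put spaces around punctuation, collapse repeated spaces,
    # strip everything that is not a letter or kept punctuation.
    w = re.sub(r"([?.!,¿])", r" \1 ", w)
    w = re.sub(r'[" "]+', " ", w)
    w = re.sub(r"[^a-zA-Z?.!,¿]+", " ", w)
    w = w.strip()
    return '<start> ' + w + ' <end>'

print(preprocess_sentence("Hello, world!"))  # -> <start> Hello , world ! <end>
```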

def tokenize(lang):
  lang_tokenizer = tf.keras.preprocessing.text.Tokenizer(
      filters='')
  lang_tokenizer.fit_on_texts(lang)

  tensor = lang_tokenizer.texts_to_sequences(lang)

  tensor = tf.keras.preprocessing.sequence.pad_sequences(tensor,padding='post')
  return tensor, lang_tokenizer
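As a small check of what tokenize produces, assuming the tf.keras preprocessing API used above: pad_sequences pads every sequence with trailing zeros to the longest one in the batch (the sample texts are mine, not from the post):

```python
import tensorflow as tf

texts = ["<start> hello world <end>", "<start> hi <end>"]
tok = tf.keras.preprocessing.text.Tokenizer(filters='')
tok.fit_on_texts(texts)
seqs = tok.texts_to_sequences(texts)
# padding='post' appends zeros, so the shorter sequence ends in 0
padded = tf.keras.preprocessing.sequence.pad_sequences(seqs, padding='post')
print(padded.shape)  # (2, 4)
```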

def load_dataset(path, num_examples=None):
  # creating cleaned input, output pairs
  targ_lang, inp_lang = create_dataset(path, num_examples)

  input_tensor, inp_lang_tokenizer = tokenize(inp_lang)
  target_tensor, targ_lang_tokenizer = tokenize(targ_lang)

  return input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer,targ_lang,inp_lang

# Try experimenting with the size of that dataset
num_examples = None
input_tensor, target_tensor, inp_lang, targ_lang,targ_lang_text,inp_lang_text = load_dataset(path, num_examples)

# Calculate max_length of the target tensors
max_length_targ, max_length_inp = target_tensor.shape[1], input_tensor.shape[1]
max_length_targ,max_length_inp

input_tensor_train, input_tensor_val, target_tensor_train, target_tensor_val = train_test_split(input_tensor, target_tensor, test_size=0.2)

The shapes of the dataset are as follows:

shape of input train  (269291, 55)
shape of target train  (269291, 53)
shape of input test (67323, 55)
shape of target test (67323, 53)
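One thing worth noting about these shapes: 330000 is not a multiple of the batch size 64, so a batched tf.data pipeline ends with a smaller final batch and a None static batch dimension. Assuming a tf.data pipeline (the original post does not show one), drop_remainder=True makes the static batch shape concrete:

```python
import tensorflow as tf

# With drop_remainder=True the leading dimension is statically 64, never None;
# the incomplete final batch is simply discarded.
ds = tf.data.Dataset.range(330000).batch(64, drop_remainder=True)
print(ds.element_spec.shape)  # (64,)
```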

1 answer:

Answer 0 (score: 0)

Could you share the code block that comes right before model.fit?

A NoneType error means that, for some reason, the final array passed to the model is empty. You can add print statements at the preceding steps to find out where the array becomes empty.

Compare that scenario with the case where you take all the data points, so you can see where and how the array changes before it is passed to model.fit.
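A minimal sketch of that suggestion, with a hypothetical helper name and dummy data (check each array right before calling model.fit):

```python
import numpy as np

def check_array(name, arr):
    # Fail loudly if an array went missing or empty somewhere upstream
    assert arr is not None, f"{name} is None"
    arr = np.asarray(arr)
    assert arr.size > 0, f"{name} is empty"
    print(name, "shape:", arr.shape)

# e.g. before model.fit, on each train/validation array:
check_array("input_tensor_train", np.zeros((4, 55)))
```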