GQL type with inline attributes

Time: 2019-05-25 18:10:34

Tags: typescript graphql

I'm trying to mirror this TypeScript interface as a GQL type:

interface List {
  items: string[] | string[][]
}

My first thought was to keep it simple:

type List {
  items: [String]! | [[String]!]!
}

But GQL doesn't like that, so I tried something like this:

type List1D {
  items: [String]!
}

type List2D {
  items: [[String]!]!
}

union ListItems = List1D | List2D

type List {
  items: ListItems
}

But I believe that would produce a result like this:

{
  items: {
    items: [] // union type
  }
}
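To illustrate the extra layer the union introduces, here is a sketch of what a client would have to do to consume that response. This is illustrative only: the `__typename` discriminator assumes the query selects `__typename` on the union, and `flattenItems` is a hypothetical helper, not part of any library.

```typescript
// Hypothetical client-side shapes for the union workaround. The names
// List1D/List2D mirror the schema above; __typename is assumed to be
// selected in the query so the union variant can be discriminated.
type List1D = { __typename: "List1D"; items: string[] };
type List2D = { __typename: "List2D"; items: string[][] };
type ListItems = List1D | List2D;

// Collapse either union variant back into a flat string[] on the client,
// demonstrating the extra unwrapping step the nested shape forces.
function flattenItems(result: { items: ListItems }): string[] {
  const inner = result.items;
  return inner.__typename === "List1D" ? inner.items : inner.items.flat();
}
```

Every consumer ends up writing this kind of branching, which is part of why the nesting feels wrong.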

How can I achieve what I'm actually after?

1 answer:

Answer 0 (score: 1)

The simplest approach is to always pack a singleton item into an array, and make the schema:

interface List {
  """
  If the internal representation has a flat array of strings, it is wrapped
  into an array of length 1 containing that array.
  """
  items: [[String]]
}

The problem you're running into here is that unions can only include named object types (see "Type Validation" rule 2).

My experience with GraphQL is that you can't model arbitrary JSON with the GraphQL schema language. I'd suggest designing the API you'd want external callers to use, without worrying too much about the internal implementation types you happen to have. Once that API is designed, you can use resolver functions to map your internal storage into the types and fields the API needs. If the API is sensible, don't worry about the response carrying an extra "layer" of JSON objects.

If a one-dimensional versus two-dimensional result array is genuinely semantically meaningful to API consumers, then the form you've shown is reasonable (yes, the JSON response will contain two levels of `items`).
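The resolver mapping described above can be sketched in TypeScript. This is a hypothetical sketch, not the answerer's code: the `isFlat` helper is invented for illustration, and the resolver-map shape follows the common graphql-js/Apollo convention of `Type: { field(parent) { ... } }`.

```typescript
// The internal storage may hold either a flat string[] or a nested string[][].
type InternalItems = string[] | string[][];

// Hypothetical type guard: an empty array or one whose first element is a
// string is treated as the flat (1D) representation.
function isFlat(items: InternalItems): items is string[] {
  return items.length === 0 || typeof items[0] === "string";
}

// Resolver sketch: always expose the [[String]] shape the schema declares,
// wrapping a flat array into an array of length 1 containing it.
const resolvers = {
  List: {
    items(parent: { items: InternalItems }): string[][] {
      return isFlat(parent.items) ? [parent.items] : parent.items;
    },
  },
};
```

With this in place, callers always receive `string[][]`, and the 1D-versus-2D distinction stays an internal detail.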