The input data for a neural network built with the Keras framework exceeds 32 GB of memory. The data is a NumPy array of many samples, each holding 128 integer values that are simply 0 or 1, so there are easily 50 * 10^6 * 128 integer values in total. These values are fed into an input layer with 128 neurons.
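For context, a rough footprint check, assuming the array uses NumPy's default int64 dtype (that dtype is an assumption on my side):

```python
import numpy as np

# Back-of-the-envelope footprint of the raw array:
# 50e6 samples * 128 values * 8 bytes per int64.
n_bytes = 50 * 10**6 * 128 * np.dtype(np.int64).itemsize
print(n_bytes / 2**30)  # ~47.7 GiB, comfortably above 32 GB
```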
My idea is to represent the 128 binary values of each sample as four 32-bit integers and feed those into the Keras network. An additional layer would then expand the four integers back to 128 values on the fly, so that at any time only batch_size * 128 expanded integer values exist in memory. A sketch of what I have in mind follows below.
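This is a minimal sketch using TensorFlow's bitwise ops in a custom Keras layer; the helper names `pack_bits`/`UnpackBits` and the bit ordering are my own placeholder choices, not an existing API:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras


def pack_bits(x):
    """Pack a (N, 128) array of 0/1 values into (N, 4) int32.

    Bit k of word w holds column w*32 + k; the ordering is arbitrary,
    only pack/unpack consistency matters.
    """
    x = x.astype(np.uint32).reshape(-1, 4, 32)
    shifts = np.arange(32, dtype=np.uint32)
    packed = np.bitwise_or.reduce(x << shifts, axis=-1)   # (N, 4) uint32
    return packed.view(np.int32)  # reinterpret bytes so Keras gets int32


class UnpackBits(keras.layers.Layer):
    """Expand (batch, 4) int32 back to (batch, 128) floats on the fly."""

    def call(self, packed):
        shifts = tf.range(32, dtype=tf.int32)                             # (32,)
        words = tf.bitwise.right_shift(packed[..., tf.newaxis], shifts)   # (batch, 4, 32)
        bits = tf.bitwise.bitwise_and(words, 1)                           # isolate each bit
        return tf.cast(tf.reshape(bits, (-1, 128)), tf.float32)


inputs = keras.Input(shape=(4,), dtype="int32")
x = UnpackBits()(inputs)
outputs = keras.layers.Dense(1, activation="sigmoid")(x)  # rest of the net
model = keras.Model(inputs, outputs)
```

With this, only the packed (N, 4) array has to live in memory, and the expansion happens per batch inside the graph.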
Is there an out-of-the-box solution for this problem?