This is a follow-up to Tensorflow input of varying shapes.
In my particular situation, for each training sample, I have three different inputs. Two are scalars and the third is a sparse tensor.
The two scalars are each used to build tensors that serve as the initial state for an LSTM. For instance:
hidden_state = tf.tile(self.state_init_values, [scalar_1, 1])
cell_state = tf.zeros([scalar_1, 100])
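To make the intent concrete, here is a standalone eager sketch of how those two lines are meant to be used; state_init_values, the 100 units, and the dense stand-in sequence are placeholders for my real model (where the sequence actually comes from the sparse input):

import tensorflow as tf

units = 100
state_init_values = tf.Variable(tf.zeros([1, units]))   # learned initial hidden state
lstm = tf.keras.layers.LSTM(units)

scalar_1 = 4                                             # in the real model this comes from the first input
hidden_state = tf.tile(state_init_values, [scalar_1, 1]) # shape [4, 100]
cell_state = tf.zeros([scalar_1, units])                 # shape [4, 100]

sequence = tf.random.normal([scalar_1, 7, 3])            # dense stand-in for the sparse input
out = lstm(sequence, initial_state=[hidden_state, cell_state])
print(out.shape)                                         # (4, 100)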
My confusion right now is mainly about how to get the value of these scalars. The tf.print function is clearly able to access the value, but I can't see how to get at it myself. Here is a simplified example:
import tensorflow as tf

class DataGenerator(tf.keras.utils.PyDataset):
    def __init__(self):
        super().__init__()

    def __getitem__(self, index):
        return tf.constant([0.1, 0.2, 0.3]), tf.constant([1, 1, 1])

    def __len__(self):
        return 1


class InfoModel(tf.keras.Model):
    def __init__(self):
        super(InfoModel, self).__init__()

    def call(self, inputs, training=False):
        # Prints: Tensor("info_model_9_1/strided_slice:0", shape=(), dtype=float32)
        #scalar_1 = inputs[0]

        # TypeError: int() argument must be a string, a bytes-like object or a real number, not 'SymbolicTensor'
        #scalar_1 = int(inputs[0])

        # AttributeError: 'SymbolicTensor' object has no attribute 'numpy'
        #scalar_1 = inputs[0].numpy()

        #print(scalar_1)

        # This works...
        tf.print(inputs[0])
        return tf.convert_to_tensor([1, 1, 1])


gen = DataGenerator()
model = InfoModel()
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(gen, epochs=1, verbose=1)
model.summary()
Is there a way to get at this value? Or am I going about this the wrong way to accomplish what I want?
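For completeness: I believe compiling with run_eagerly=True would make inputs[0].numpy() work inside call (the tensors arrive as EagerTensors), but I assume that gives up graph execution, so I'd prefer not to rely on it:

# eager workaround I'd rather avoid: with run_eagerly=True the tensors inside
# call() are concrete, so .numpy() works, but nothing runs as a compiled graph
model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=['accuracy'], run_eagerly=True)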