{{Project
|Has project output=Tool
|Has sponsor=Kauffman Incubator Project
|Has title=DSL Encoding
|Has owner=Hiep Nguyen
}}
<pre>
    prediction = tf.add(tf.matmul(states.h, W), b, name='prediction')
    return prediction
</pre>
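The fragment above is the final projection layer: it maps the LSTM's last hidden state '''states.h''' to output logits via a learned weight matrix '''W''' and bias '''b'''. A framework-agnostic sketch of that computation (shapes and values here are illustrative assumptions, not from the original model):

<pre>
# Sketch of prediction = states.h @ W + b, the linear projection from the
# LSTM hidden state (num_units values) to one logit per output token.
def matmul(h, W):
    # h: length-num_units vector; W: num_units x vocab_size matrix.
    return [sum(hi * wij for hi, wij in zip(h, col)) for col in zip(*W)]

h = [1.0, 2.0]                    # toy hidden state, num_units = 2
W = [[1.0, 0.0, 1.0],             # toy weights, vocab_size = 3
     [0.0, 1.0, 1.0]]
b = [0.5, 0.5, 0.5]               # toy bias

prediction = [m + bi for m, bi in zip(matmul(h, W), b)]
</pre>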
 
If we want to stack multiple LSTM layers together, we can replace '''lstm = lstm_cell(keep_prob)''' with '''lstm = tf.contrib.rnn.MultiRNNCell([lstm_cell(keep_prob) for _ in range(num_layers)])''', where '''num_layers''' is an integer giving the number of LSTM layers to stack.
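What '''MultiRNNCell''' does under the hood is chain cells so that the output of layer ''i'' becomes the input of layer ''i''+1, with each layer keeping its own state. A minimal framework-agnostic sketch of that wiring (the cell internals are stubbed out; real LSTM gates are omitted):

<pre>
def make_cell(layer_id):
    """Stand-in for lstm_cell(keep_prob): returns a fn of (input, state)."""
    def cell(x, state):
        new_state = state + 1                 # toy state update
        output = [v + layer_id for v in x]    # toy output
        return output, new_state
    return cell

def multi_rnn_cell(cells):
    """Stack cells the way MultiRNNCell does: output feeds the next layer."""
    def stacked(x, states):
        new_states = []
        for cell, state in zip(cells, states):
            x, s = cell(x, state)             # layer i's output -> layer i+1
            new_states.append(s)
        return x, new_states
    return stacked

num_layers = 3
stacked = multi_rnn_cell([make_cell(i) for i in range(num_layers)])
output, states = stacked([0.0, 0.0], [0] * num_layers)
</pre>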
A sample training script lives in '''E:\projects\embedding\Web_extractor_model\train_sample.py'''. The '''utils.py''' file defines a few hyperparameters to remember:
* '''max_len''': the length of each training point
* '''step''': the number of steps to move forward to generate the next training point
* '''num_units''': the number of LSTM units; a safe choice is 128
* '''len_unique_chars''': the total number of unique tokens in all training data
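A common way '''max_len''' and '''step''' interact is a sliding window over the token stream: each training point is a '''max_len'''-long input slice plus the token that follows it, and the window advances by '''step''' tokens. A hypothetical sketch of that slicing (not the actual '''utils.py''' code):

<pre>
def make_training_points(tokens, max_len, step):
    """Slide a window of size max_len over tokens, advancing by step;
    each point is (input slice, next-token target)."""
    points = []
    for i in range(0, len(tokens) - max_len, step):
        points.append((tokens[i:i + max_len], tokens[i + max_len]))
    return points

tokens = list(range(25))                       # toy token stream
points = make_training_points(tokens, max_len=10, step=3)
</pre>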
