Original article was published by /u/shahriar49 on Deep Learning
I have read the TF pages and some posts about using prefetch() and cache() to speed up the model input pipeline, and tried to apply them to my data. cache() worked as expected, i.e. data is read from disk in the first epoch, and in all subsequent epochs it is read from memory. But I have a lot of difficulty with prefetch() and I really don't understand when and how to use it. Can anybody help me with it? I really need some help.
My application is like this: I have a set of large TFRecord files, each containing raw records that must be processed before being fed to my net. The files are to be mixed (different sample streams), so what I do is:
```python
def read_datasets(pattern, numFiles, numEpochs=125, batchSize=1024, take=dataLength):
    files = tf.data.Dataset.list_files(pattern)

    def _parse(x):
        return tf.data.TFRecordDataset(x, compression_type='GZIP')

    np = 4  # half of the number of CPU cores
    dataset = files.interleave(_parse, cycle_length=numFiles, block_length=1,
                               num_parallel_calls=np) \
                   .map(lambda x: parse_tfrecord(x), num_parallel_calls=np)
    dataset = dataset.take(take)
    dataset = dataset.batch(batchSize)
    dataset = dataset.cache()
    dataset = dataset.prefetch(buffer_size=10)
    dataset = dataset.repeat(numEpochs)
    return dataset
```
The parse_tfrecord(x) function in the map call does the required preprocessing before the data goes to the model, and my guess is that the preprocessing time is comparable to the network's per-batch processing time. My whole dataset (all input files together) contains about 500 batches of 1024 samples. My questions are:
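For context, parse_tfrecord is not shown above; a minimal sketch of what such a function typically looks like is below. The feature spec (names, shapes, dtypes) is entirely hypothetical, since the real one isn't in the post; it just shows the usual tf.io.parse_single_example round trip on a serialized tf.train.Example.

```python
import tensorflow as tf

# Hypothetical feature spec -- the real record layout is not given in the post.
FEATURE_SPEC = {
    'feature': tf.io.FixedLenFeature([3], tf.float32),
    'label': tf.io.FixedLenFeature([], tf.int64),
}

def parse_tfrecord(serialized):
    # Decode one serialized tf.train.Example into a (features, label) pair.
    parsed = tf.io.parse_single_example(serialized, FEATURE_SPEC)
    return parsed['feature'], parsed['label']

# Build one serialized Example to check the round trip.
example = tf.train.Example(features=tf.train.Features(feature={
    'feature': tf.train.Feature(
        float_list=tf.train.FloatList(value=[1.0, 2.0, 3.0])),
    'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[7])),
}))
x, y = parse_tfrecord(example.SerializeToString())
```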
1- If I do caching, do I really need prefetching?
2- Is it the right sequence to do mapping, batching, caching, prefetching, and repeating?
3- The TensorFlow documentation says that the buffer size of prefetch refers to dataset elements, and if the dataset is batched, to the number of batches. So in this case I will prefetch 10 batches of 1024 examples, right? My problem is that I don't see any difference in run time when changing the prefetch buffer size, and memory consumption doesn't change much even when I set the buffer size to 1000 or bigger.
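To make the question concrete, here is a toy benchmark (synthetic data, not my TFRecord pipeline): a slow map step simulates preprocessing and a sleep in the consumer loop simulates the training step. With prefetch, producer and consumer should overlap, so I would expect a visible wall-clock difference; the 0.01 s delays are arbitrary.

```python
import time
import tensorflow as tf

def slow_square(x):
    time.sleep(0.01)          # simulated preprocessing cost
    return x * x

def make_ds(prefetch):
    ds = tf.data.Dataset.range(50).map(
        lambda x: tf.py_function(slow_square, [x], tf.int64))
    if prefetch:
        ds = ds.prefetch(10)  # same buffer style as the pipeline above
    return ds

def consume(ds):
    start = time.perf_counter()
    for _ in ds:
        time.sleep(0.01)      # simulated training-step cost
    return time.perf_counter() - start

t_plain = consume(make_ds(prefetch=False))
t_prefetch = consume(make_ds(prefetch=True))
print(f"no prefetch: {t_plain:.2f}s, with prefetch: {t_prefetch:.2f}s")
```

In this toy setup the prefetched run should be noticeably faster, because the map step for the next element runs while the consumer is sleeping.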
I appreciate any help.