How do you clean up a tf.data.Iterator?

I am attempting to train a Keras model using data from a tf.data input pipeline. Instead of training the model in one single run, I intend to do early stopping when validation performance deteriorates and then continue training with a larger batch size. My code for doing this is shown below:



batch_sizes = [256, 512, 1024]
for batch_size in batch_sizes:
    # Fresh training pipeline for this batch size.
    input_fn = input_fn_helper(batch_size, ...)
    training_set = input_fn().make_one_shot_iterator()

    # Matching validation pipeline.
    input_fn_test = input_fn_test_helper(batch_size, ...)
    testing_set = input_fn_test().make_one_shot_iterator()

    model.fit(training_set,
              steps_per_epoch=n_train // batch_size,   # fit expects an integer step count
              epochs=max_epochs,
              validation_data=testing_set,
              validation_steps=n_test // batch_size,
              callbacks=callbacks)


As you can see, I construct a new input pipeline for each increase in batch size (input_fn() returns a tf.data.Dataset). The behaviour I get from this is what I would expect, so it does what it is supposed to do. The problem I have is that my script's memory footprint grows every time the loop runs, i.e. the preceding instances of training_set and testing_set do not seem to be freed when they are overwritten in the next iteration (a minimal way to confirm this is sketched after the questions below). This begs a few questions:



  1. Am I doing something terribly wrong here?

  2. Is there a canonical way of ensuring that a tf.data.Iterator is properly disposed of when no longer needed?
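
For what it's worth, here is a minimal, self-contained check of what I suspect is happening (this assumes the TF 1.x graph-mode API used above; tf.data.Dataset.range stands in for my real pipeline): every call to make_one_shot_iterator() adds new ops to the default graph, and the graph is never pruned, so the op count climbs on every pass through the loop.

import tensorflow as tf  # TF 1.x

graph = tf.get_default_graph()
for batch_size in [256, 512, 1024]:
    # Stand-in for input_fn(): a fresh pipeline per batch size.
    dataset = tf.data.Dataset.range(100000).batch(batch_size)
    iterator = dataset.make_one_shot_iterator()
    # The count only ever grows; nothing from earlier passes is removed.
    print(batch_size, len(graph.get_operations()))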

python tensorflow memory keras

asked Nov 10 '18 at 12:20 by Harald Husum

  • Have you tried using a reinitializable iterator to define one iterator structure and reinitialize it with whatever new dataset object you want to use? Check out the importing data guide for a more detailed explanation!
    – kvish
    Nov 12 '18 at 15:31
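
A rough sketch of that suggestion follows (TF 1.x structure-based iterator API; input_fn_helper and the other names are the placeholders from the question, and the session is fetched from Keras so the initializer runs in the same graph). This is a sketch of the idea, not a verified drop-in replacement:

import tensorflow as tf
from tensorflow.keras import backend as K

# Build the iterator structure ONCE, from the first dataset's signature.
first_dataset = input_fn_helper(256, ...)()
iterator = tf.data.Iterator.from_structure(first_dataset.output_types,
                                           first_dataset.output_shapes)

for batch_size in [256, 512, 1024]:
    dataset = input_fn_helper(batch_size, ...)()  # same structure, new batch size
    # Rebind the existing iterator instead of minting a new one each pass;
    # a validation iterator can be handled the same way.
    K.get_session().run(iterator.make_initializer(dataset))
    model.fit(iterator,
              steps_per_epoch=n_train // batch_size,
              epochs=max_epochs,
              callbacks=callbacks)

(In TF 2.x this question mostly goes away: model.fit accepts the tf.data.Dataset directly, and the short-lived Python iterators it creates are garbage-collected as usual.)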