Policy Improvement Step of Policy Iteration

























Hello, Stack Overflow community!

I am taking a Reinforcement Learning course and I am confused about the Policy Iteration method.

In Policy Iteration, we start with a random policy and a value function for each state. In the "Policy Evaluation" step, we compute new value functions for each state. Then, in the "Policy Improvement" step, we update the policy greedily with respect to those new values. We repeat these steps until the value function converges. My problem is: how does the new policy feed into the next policy-evaluation sweep? Let me explain my point with an example.
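For concreteness, here is how I currently picture the whole loop, as a minimal self-contained sketch on the same kind of gridworld (the layout, function names, and tie-breaking are my own assumptions, not from the course material):

```python
# Minimal policy-iteration sketch: 4x4 gridworld, terminal corner states,
# reward -1 per step, discount 1. A policy maps each state to the list of
# equiprobable actions it takes (the initial policy uses all four moves).
N = 4
TERMINALS = {0, 15}
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(s, move):
    r, c = divmod(s, N)
    nr, nc = r + move[0], c + move[1]
    return nr * N + nc if 0 <= nr < N and 0 <= nc < N else s  # walls: stay put

def evaluate(policy, tol=1e-9):
    """Policy evaluation: iterate the Bellman expectation backup to convergence."""
    v = [0.0] * (N * N)
    while True:
        delta = 0.0
        for s in range(N * N):
            if s in TERMINALS:
                continue
            acts = policy[s]
            new = sum(-1 + v[step(s, m)] for m in acts) / len(acts)
            delta = max(delta, abs(new - v[s]))
            v[s] = new
        if delta < tol:
            return v

def improve(v):
    """Policy improvement: keep every action that is greedy with respect to v."""
    new_policy = []
    for s in range(N * N):
        q = {m: -1 + v[step(s, m)] for m in MOVES}
        best = max(q.values())
        new_policy.append([m for m in MOVES if abs(q[m] - best) < 1e-6])
    return new_policy

policy = [list(MOVES) for _ in range(N * N)]  # uniform random start
while True:
    v = evaluate(policy)
    new_policy = improve(v)
    if new_policy == policy:  # policy stable -> optimal
        break
    policy = new_policy
print(v[1])  # optimal value of the state next to a terminal corner: -1.0
```

Is this the right picture, i.e. each evaluation runs to convergence under the *current* policy before improving again?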



Grid world example:
[image: 4x4 gridworld showing the value function and the greedy policy at each iteration k]



As you can see in the image, the black boxes are terminal states, every transition gives an immediate reward of -1, the discount factor is 1, and the initial policy assigns probability 0.25 to each of the four directions. In the policy we obtain from the policy-improvement step at k = 1, the greedy action in the state just to the right of the top-left terminal state is "go left". So while updating the value functions for k = 2, why do we ignore this change and still write -1.75 (shown rounded as -1.7) for this state, as if all four directions were still possible? In my opinion, mathematically it should be:



new value of this state = -1 + 1 * 0.0 = -1



It should work out this way because, under our latest policy, only one action is possible from that state, instead of:



-1.75 (rounded to -1.7) = 0.75 * (-1 + 1 * (-1)) + 0.25 * (-1 + 1 * 0.0)
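To double-check my arithmetic, I wrote a small script that runs two synchronous evaluation sweeps under the uniform random policy (the state numbering and helper names are my own, not from the course):

```python
# Two sweeps of iterative policy evaluation for the uniform random policy
# on the 4x4 gridworld: terminals at corners 0 and 15, reward -1, gamma = 1.
N = 4
TERMINALS = {0, 15}
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(s, move):
    r, c = divmod(s, N)
    nr, nc = r + move[0], c + move[1]
    return nr * N + nc if 0 <= nr < N and 0 <= nc < N else s  # walls: stay put

def evaluate_sweep(v):
    """One synchronous sweep of v_{k+1}(s) = sum_a 0.25 * (-1 + v_k(s'))."""
    new_v = list(v)
    for s in range(N * N):
        if s in TERMINALS:
            continue
        new_v[s] = sum(0.25 * (-1 + v[step(s, m)]) for m in MOVES)
    return new_v

v0 = [0.0] * (N * N)
v1 = evaluate_sweep(v0)  # every non-terminal state becomes -1
v2 = evaluate_sweep(v1)
print(v1[1], v2[1])      # state 1 is just right of the terminal corner: -1.0 -1.75
```

So the -1.75 in the slide only comes out if the sweep at k = 2 still averages over all four directions, which is exactly what confuses me.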



If we do it that way, then what is the purpose of those intermediate policies? If we do not use them to compute the new value functions, we might as well just do value iteration!



I am quite confused about this. Can you help me, please?






































      dynamic-programming reinforcement-learning planning value-iteration






edited Nov 10 '18 at 17:57 by R.F. Nelson
asked Nov 9 '18 at 17:41 by dummyHead



















