Octave code for gradient descent using vectorization not updating cost function correctly

























I have implemented the following code for gradient descent using vectorization, but the cost function does not seem to be decreasing correctly. Instead, it increases with each iteration.



Assume theta is an (n+1)-vector, y is an m-vector, and X is the m×(n+1) design matrix.



function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)

  m = length(y);       % number of training examples
  n = length(theta);   % number of features
  J_history = zeros(num_iters, 1);
  error = ((theta' * X')' - y)*(alpha/m);
  descent = zeros(size(theta),1);

  for iter = 1:num_iters
    for i = 1:n
      descent(i) = descent(i) + sum(error.* X(:,i));
      i = i + 1;
    end

    theta = theta - descent;
    J_history(iter) = computeCost(X, y, theta);
    disp("the value of cost function is : "), disp(J_history(iter));
    iter = iter + 1;
  end


The computeCost function is:



function J = computeCost(X, y, theta)
  m = length(y);
  J = 0;
  for i = 1:m,
    H = theta' * X(i,:)';
    E = H - y(i);
    SQE = E^2;
    J = (J + SQE);
    i = i+1;
  end;
  J = J / (2*m);
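
A minimal harness to reproduce the call (the data below are purely illustrative: a single feature plus a bias column, any small data set works):

% Hypothetical synthetic data, only for illustrating the call.
X = [ones(5,1), (1:5)'];     % 5x2 design matrix: bias column plus one feature
y = [2; 4; 6; 8; 10];        % targets
theta = zeros(2,1);          % initial parameters
alpha = 0.01;                % learning rate

[theta, J_history] = gradientDescent(X, y, theta, alpha, 50);
plot(J_history);             % cost over iterations (expected to decrease)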









machine-learning octave vectorization gradient-descent






asked Oct 30 '14 at 15:09 by Dcoder, edited Oct 30 '14 at 15:26
  • Shouldn't for i = 1:n increment i for you? You're also doing it inside the loop. (Long time since I did any Octave...)
    – Fred Foo, Oct 30 '14 at 15:15

  • Yeah, that's true.
    – Dcoder, Oct 30 '14 at 15:23
























2 Answers






































You can vectorise it even further:



function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
  m = length(y);
  J_history = zeros(num_iters, 1);

  for iter = 1:num_iters
    delta = (theta' * X' - y') * X;     % 1 x (n+1) row vector: sum over examples of error_i * x_i
    theta = theta - alpha/m * delta';   % gradient-descent update
    J_history(iter) = computeCost(X, y, theta);
  end

end
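
For comparison, the same update written with column vectors throughout (a minimal equivalent sketch, not part of the original answer; delta above is a row vector, hence the trailing transpose):

grad  = (X' * (X*theta - y)) / m;   % (n+1)x1 gradient of the squared-error cost
theta = theta - alpha * grad;       % one gradient-descent step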





answered Nov 17 '16 at 12:30 by Rimma Shafikova












































You can vectorize it better as follows:

function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
  m = length(y);
  J_history = zeros(num_iters, 1);

  for iter = 1:num_iters
    theta = theta - (alpha/m) * ((X*theta - y)' * X)';
    J_history(iter) = computeCost(X, y, theta);
  end;
end;

The computeCost function can be written as:

function J = computeCost(X, y, theta)
  m = length(y);

  J = 1/(2*m) * sum((X*theta - y).^2);   % element-wise square of the residuals

end;
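
A quick sanity check of the vectorized cost against the loop-based computeCost from the question (values are only illustrative):

X = [ones(3,1), [1; 2; 3]];    % 3x2 design matrix
y = [2; 4; 6];
theta = [0; 2];                % with these values the fit is exact
J = computeCost(X, y, theta)   % should print 0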





answered Nov 13 '18 at 4:04 by Vishnu Prasad





















