Using automatic differentiation on a function that makes use of a preallocated array in Julia




My long subject title pretty much covers it.



I have managed to isolate my much bigger problem in the contrived example below. I cannot figure out exactly where the problem lies, though I imagine it has something to do with the type of the preallocated array?


using ForwardDiff

function test()

    A = zeros(1_000_000)

    function objective(A, value)
        for i = 1:1_000_000
            A[i] = value[1]
        end
        return sum(A)
    end

    helper_objective = v -> objective(A, v)

    ForwardDiff.gradient(helper_objective, [1.0])

end



The error reads as follows:


ERROR: MethodError: no method matching Float64(::ForwardDiff.Dual{ForwardDiff.Tag{getfield(Main, Symbol("##69#71")){Array{Float64,1},getfield(Main, Symbol("#objective#70")){Array{Float64,1}}},Float64},Float64,1})



In my own problem (not described here) I have a function that I need to optimise with Optim, using the automatic differentiation it offers, and this function makes use of a big matrix that I would like to preallocate in order to speed up my code. Many thanks.
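For reference, the usage pattern I am ultimately after is roughly the sketch below. The objective here is just a placeholder for my real function, and autodiff = :forward is how Optim is asked to differentiate it with ForwardDiff:

using Optim

# Placeholder objective standing in for my real function; the real one
# writes into a big preallocated matrix before reducing it to a scalar.
f(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2

x0 = [0.0, 0.0]

# autodiff = :forward makes Optim compute gradients via ForwardDiff
result = optimize(f, x0, LBFGS(); autodiff = :forward)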




1 Answer



If you look at http://www.juliadiff.org/ForwardDiff.jl/latest/user/limitations.html you find:



The target function must be written generically enough to accept numbers of type T<:Real as input (or arrays of these numbers) (...) This also means that any storage used within the function must be generic as well.



with the example here https://github.com/JuliaDiff/ForwardDiff.jl/issues/136#issuecomment-237941790.
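In other words, any buffer the function writes into must be able to hold ForwardDiff's dual numbers, not just Float64. A minimal sketch of a generic version (note that it gives up the preallocation, because it allocates fresh storage from the input's element type on each call):

function objective_generic(value)
    # eltype(value) is Float64 for an ordinary call, but a ForwardDiff.Dual
    # type when called through ForwardDiff.gradient, so this storage works
    # for both
    A = zeros(eltype(value), 1_000_000)
    for i = 1:1_000_000
        A[i] = value[1]
    end
    return sum(A)
end

ForwardDiff.gradient(objective_generic, [1.0])  # works, but reallocates A every call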



To keep the preallocation instead, you could allocate the array with the dual number type up front, like this:


function test()
    function objective(value)
        for i = 1:1_000_000
            A[i] = value[1]
        end
        return sum(A)
    end
    A = zeros(ForwardDiff.Dual{ForwardDiff.Tag{typeof(objective),Float64},Float64,1}, 1_000_000)
    ForwardDiff.gradient(objective, [1.0])
end



But I would not expect this to save you many allocations, as it is type unstable.
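You can see the instability directly: A is assigned after the closure objective is created, so Julia boxes the captured variable, and accesses to it are inferred as Any (a quick check, assuming you run it in the REPL):

@code_warntype test()  # the captured A shows up as Core.Box / Any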



What you can do is wrap objective and A in a module like this:




using ForwardDiff

module Obj

using ForwardDiff

function objective(value)
    for i = 1:1_000_000
        A[i] = value[1]
    end
    return sum(A)
end

const A = zeros(ForwardDiff.Dual{ForwardDiff.Tag{typeof(objective),Float64},Float64,1}, 1_000_000)

end



And now this:


ForwardDiff.gradient(Obj.objective, [1.0])



should be fast.
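To check, one could benchmark it (a sketch using the BenchmarkTools package):

using BenchmarkTools

@btime ForwardDiff.gradient(Obj.objective, [1.0])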



EDIT



Also this works (it is still type unstable, but the instability sits in a less problematic place):


function test()::Vector{Float64}
    function objective(A, value)
        for i = 1:1_000_000
            A[i] = value[1]
        end
        return sum(A)
    end
    helper_objective = v -> objective(A, v)
    A = Vector{ForwardDiff.Dual{ForwardDiff.Tag{typeof(helper_objective),Float64},Float64,1}}(undef, 1_000_000)
    ForwardDiff.gradient(helper_objective, [1.0])
end
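All three variants return the same gradient; the objective adds value[1] one million times, so the derivative is 1_000_000:

julia> test()
1-element Array{Float64,1}:
 1.0e6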


