Problems with grad after grad #120
There is an implementation at https://github.com/KnetML/WGAN.jl that may help. Not all functions support higher order derivatives. I can say more when I have time to check the code in more detail.
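For orientation, here is a minimal sketch (not the WGAN.jl code, and not guaranteed to avoid the error reported below) of how the inner gradient of the D defined in the quoted report can be written with AutoGrad's explicit Param/@diff interface. The name penalty_tape is introduced here purely for illustration, and the outer derivative still only goes through if every primitive used in D has a second derivative defined.

```julia
using Knet, AutoGrad, LinearAlgebra

# Sketch only: take the inner gradient ∇ₓD on an explicit tape instead of
# through a nested grad() closure. D and w are assumed to be defined as in
# the report quoted below.
function penalty_tape(w, x)
    xp = Param(x)          # mark x as a differentiation target
    t  = @diff D(w, xp)    # tape recording the scalar D(w, x)
    g  = grad(t, xp)       # ∇ₓ D(w, x) read off the tape
    return (norm(g) - 1)^2
end

# ∇penalty_tape = grad(penalty_tape)   # outer gradient w.r.t. w; this can
# ∇penalty_tape(w, x)                  # still fail when a primitive in D has
#                                      # no second derivative defined
```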
On Thu, Jul 9, 2020, 3:40 PM gforchini wrote:
Hi,
I am trying to implement a WGAN, and I have problems applying grad to a function to which grad has already been applied (although with respect to different variables). An example of the problem and the error message follows.
Giovanni
```julia
using Knet, StatsBase, Random, LinearAlgebra, AutoGrad

# D: a small two-layer network that returns a scalar
function D(w, x)
    x = elu.(w[1]*x .+ w[2])
    return sigm.(w[3]*x .+ w[4])[1]
end

w = Array{Float64}[randn(12, 12), randn(12, 1),
                   randn(1, 12), randn(1, 1)]
∇D = grad(D)          # gradient of D with respect to w (the first argument)
x = randn(12, 1)

D(w, x)
typeof(D(w, x))
y = ∇D(w, x)
typeof(∇D(w, x))
```
No problem up to now: the function D is differentiable in both w and x. Now I construct a penalty as a function of the gradient of D with respect to x:
```julia
function penalty(w, x)
    g = grad(x -> D(w, x))       # ∇ₓD as a function of x, with w closed over
    return (norm(g(x)) - 1)^2
end

penalty(w, x)
typeof(penalty(w, x))
```
Taking the gradient of penalty with respect to w gives an error.
```julia
∇penalty = grad(penalty)
∇penalty(w, x)
```
```
MethodError: Cannot `convert` an object of type AutoGrad.Result{Array{Float64,2}} to an object of type Array{Float64,N} where N
Closest candidates are:
  convert(::Type{T}, !Matched::AbstractArray) where T<:Array at array.jl:533
  convert(::Type{T}, !Matched::T) where T<:AbstractArray at abstractarray.jl:14
  convert(::Type{T}, !Matched::Factorization) where T<:AbstractArray at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.4/LinearAlgebra/src/factorization.jl:55
  ...
differentiate(::Function, ::Param{Array{Array{Float64,N} where N,1}}, ::Vararg{Any,N} where N; o::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}) at core.jl:148
differentiate(::Function, ::Param{Array{Array{Float64,N} where N,1}}, ::Vararg{Any,N} where N) at core.jl:135
(::AutoGrad.var"#gradfun#7"{AutoGrad.var"#gradfun#6#8"{typeof(penalty),Int64,Bool}})(::Array{Array{Float64,N} where N,1}, ::Vararg{Any,N} where N; kwargs::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}) at core.jl:225
(::AutoGrad.var"#gradfun#7"{AutoGrad.var"#gradfun#6#8"{typeof(penalty),Int64,Bool}})(::Array{Array{Float64,N} where N,1}, ::Vararg{Any,N} where N) at core.jl:221
top-level scope at Problems.jl:38
```
Notice that w is a vector of arrays; this works in other situations, but I suspect it is what is creating the problem here. In other examples, where w is a proper (flat) vector, the second gradient works.
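As a concrete illustration of that flat-vector case, a minimal sketch might look like the following. D_flat, penalty_flat, and θ are hypothetical names (not code from the report), the same 12→12→1 layer shapes are simply packed into a single Float64 vector, and whether the nested gradient actually succeeds still depends on AutoGrad supporting second derivatives for the primitives involved.

```julia
using Knet, AutoGrad, LinearAlgebra

# Hypothetical flat-parameter variant: all weights live in one Float64 vector,
# so the outer grad differentiates a single array rather than a vector of arrays.
function D_flat(θ, x)
    W1 = reshape(θ[1:144], 12, 12)     # hidden-layer weights
    b1 = θ[145:156]                    # hidden-layer bias
    W2 = reshape(θ[157:168], 1, 12)    # output weights
    b2 = θ[169]                        # output bias
    h  = elu.(W1 * x .+ b1)
    return sigm.(W2 * h .+ b2)[1]
end

penalty_flat(θ, x) = (norm(grad(x -> D_flat(θ, x))(x)) - 1)^2

θ = randn(169)
x = randn(12, 1)
penalty_flat(θ, x)
# ∇penalty_flat = grad(penalty_flat)   # the pattern reported to work when the
# ∇penalty_flat(θ, x)                  # parameters are one flat vector
```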
Thank you for getting back. Unfortunately, the problem I am looking at is almost (but not quite) a WGAN, and it doesn't quite fit into the implementation in the link.