jekyllstein

Flux.jl with 32 bit

Aug 6th, 2018
using Flux
using Flux.Tracker
using Flux: @epochs
using Base.Iterators

# Create data
srand(1234)
batchSize = 256
l = batchSize*100
x = randn(Float32, 1, l)
y = sin.(1 ./ x)

# Create batches of size batchSize
batches = [(x[:, i], y[:, i]) for i in partition(1:l, batchSize)]

# Create model, loss function, and training optimiser
hidden_dim = 32
typedInit(dims...) = Float32(5/3)*randn(Float32, dims...) .* sqrt(2.0f0/sum(dims))  # 32-bit W initialization
typedInitB(l) = zeros(Float32, l)  # 32-bit b initialization
typedDense(n1, n2, f=identity) = Dense(n1, n2, f, initW=typedInit, initb=typedInitB)
model = Chain(
    typedDense(1, hidden_dim, tanh),
    typedDense(hidden_dim, hidden_dim, tanh),
    typedDense(hidden_dim, hidden_dim, tanh),
    typedDense(hidden_dim, 1)
)

p = params(model)
opt = ADAM(p, 0.001f0, β1=0.9f0, β2=0.999f0, ϵ=1f-8)
loss(x, y) = Flux.mse(model(x), y)

# Callback: report the full-dataset loss, throttled to at most once every 10 seconds
evalcb = Flux.throttle(() -> @show(loss(x, y)), 10)

# Train model for 100 epochs
@epochs 100 Flux.train!(loss, batches, opt, cb=evalcb)
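After training, the model can be evaluated on fresh inputs and compared against the target function. A minimal sketch (the test points below are made up for illustration; `Tracker.data` strips the gradient tracking from the tracked output):

```julia
# Evaluate the trained model on a few fresh Float32 inputs, keeping the
# same 1×n input-matrix convention used during training.
x_test = Float32[0.5 1.0 2.0]            # hypothetical test points
ŷ = Tracker.data(model(x_test))          # untracked Float32 predictions
y_true = sin.(1 ./ x_test)               # ground truth from the target function
@show ŷ
@show y_true
```

Since all weights, biases, and inputs are `Float32`, the predictions come back as a plain `Float32` matrix, which is the point of the typed initializers above.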