# RNN backward propagation (ERENARD63, May 23rd, 2018)
# coding: utf-8

# # Building your Recurrent Neural Network - Step by Step
#
# Welcome to Course 5's first assignment! In this assignment, you will implement your first Recurrent Neural Network in numpy.
#
# Recurrent Neural Networks (RNN) are very effective for Natural Language Processing and other sequence tasks because they have "memory". They can read inputs $x^{\langle t \rangle}$ (such as words) one at a time, and remember some information/context through the hidden layer activations that get passed from one time-step to the next. This allows a uni-directional RNN to take information from the past to process later inputs. A bidirectional RNN can take context from both the past and the future.
#
# **Notation**:
# - Superscript $[l]$ denotes an object associated with the $l^{th}$ layer.
#     - Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.
#
# - Superscript $(i)$ denotes an object associated with the $i^{th}$ example.
#     - Example: $x^{(i)}$ is the $i^{th}$ training example input.
#
# - Superscript $\langle t \rangle$ denotes an object at the $t^{th}$ time-step.
#     - Example: $x^{\langle t \rangle}$ is the input $x$ at the $t^{th}$ time-step. $x^{(i)\langle t \rangle}$ is the input at the $t^{th}$ time-step of example $i$.
#
# - Subscript $i$ denotes the $i^{th}$ entry of a vector.
#     - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$. (A small sketch of how these objects map onto numpy array shapes follows this list.)
#
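# To make the notation concrete, here is a small illustrative sketch (not part
# of the original assignment) of how these objects map onto the numpy arrays
# used later: a mini-batch of input sequences is stored as a 3-D array of shape
# (n_x, m, T_x), so x[:, i, t] is the input for example i at time-step t, and
# x[:, :, t] is the whole mini-batch input $x^{\langle t \rangle}$. The shapes
# below mirror the test cells further down.

import numpy as np

n_x_demo, m_demo, T_x_demo = 3, 10, 4
x_demo = np.zeros((n_x_demo, m_demo, T_x_demo))
print(x_demo[:, 1, 2].shape)   # (3,)    -> one example, one time-step
print(x_demo[:, :, 2].shape)   # (3, 10) -> the full mini-batch at time-step 2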
# We assume that you are already familiar with `numpy` and/or have completed the previous courses of the specialization. Let's get started!

# Let's first import all the packages that you will need during this assignment.

# In[1]:

import numpy as np
from rnn_utils import *


# ## 1 - Forward propagation for the basic Recurrent Neural Network
#
# Later this week, you will generate music using an RNN. The basic RNN that you will implement has the structure below. In this example, $T_x = T_y$.

# <img src="images/RNN.png" style="width:500px;height:300px;">
# <caption><center> **Figure 1**: Basic RNN model </center></caption>

# Here's how you can implement an RNN:
#
# **Steps**:
# 1. Implement the calculations needed for one time-step of the RNN (the single-step computation is sketched right after this cell).
# 2. Implement a loop over $T_x$ time-steps in order to process all the inputs, one at a time.
#
# Let's go!
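
# For reference, here is a minimal sketch (an assumption, not the graded code)
# of the single-step forward computation that rnn_cell_forward from rnn_utils
# is taken to perform in this assignment: a_next = tanh(Waa.a_prev + Wax.xt + ba)
# and yt = softmax(Wya.a_next + by), with softmax assumed to be provided by
# rnn_utils. The cache it returns is exactly what rnn_cell_backward below unpacks.

def rnn_cell_forward_sketch(xt, a_prev, parameters):
    # Unpack parameters (same keys as used by the backward functions below)
    Wax, Waa, Wya = parameters["Wax"], parameters["Waa"], parameters["Wya"]
    ba, by = parameters["ba"], parameters["by"]
    # Hidden-state update and output prediction for one time-step
    a_next = np.tanh(np.dot(Waa, a_prev) + np.dot(Wax, xt) + ba)
    yt_pred = softmax(np.dot(Wya, a_next) + by)
    # Cache everything needed for the backward pass
    cache = (a_next, a_prev, xt, parameters)
    return a_next, yt_pred, cache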
#
#
# ## 3 - Backpropagation in recurrent neural networks (OPTIONAL / UNGRADED)
#
# In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers do not need to bother with the details of the backward pass. If, however, you are an expert in calculus and want to see the details of backprop in RNNs, you can work through this optional portion of the notebook.
#
# When you implemented a simple (fully connected) neural network in an earlier course, you used backpropagation to compute the derivatives of the cost with respect to the parameters in order to update them. Similarly, in recurrent neural networks you calculate the derivatives of the cost with respect to the parameters in order to update them. The backprop equations are quite complicated and we did not derive them in lecture. However, we will briefly present them below.

# ### 3.1 - Basic RNN backward pass
#
# We will start by computing the backward pass for the basic RNN-cell.
#
# <img src="images/rnn_cell_backprop.png" style="width:500px;height:300px;"> <br>
# <caption><center> **Figure 5**: RNN-cell's backward pass. Just like in a fully-connected neural network, the derivative of the cost function $J$ backpropagates through the RNN by following the chain rule from calculus. The chain rule is also used to calculate $(\frac{\partial J}{\partial W_{ax}},\frac{\partial J}{\partial W_{aa}},\frac{\partial J}{\partial b_a})$ to update the parameters $(W_{ax}, W_{aa}, b_a)$. </center></caption>

# #### Deriving the one step backward functions:
#
# To compute `rnn_cell_backward` you need to compute the following equations. It is a good exercise to derive them by hand.
#
# The derivative of $\tanh$ is $1-\tanh(x)^2$. You can find the complete proof [here](https://www.wyzant.com/resources/lessons/math/calculus/derivative_proofs/tanx). Note that $\text{sech}(x)^2 = 1 - \tanh(x)^2$.
#
# Similarly, for $\frac{ \partial a^{\langle t \rangle} } {\partial W_{ax}}, \frac{ \partial a^{\langle t \rangle} } {\partial W_{aa}}, \frac{ \partial a^{\langle t \rangle} } {\partial b}$, the derivative of $\tanh(u)$ is $(1-\tanh(u)^2)du$.
#
# The final two equations also follow the same rule and are derived using the $\tanh$ derivative; the terms are arranged so that the matrix dimensions match. (A quick numerical check of the $\tanh$ derivative follows below.)
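
# Quick numerical check of the tanh derivative used throughout this section
# (illustrative only, not part of the original notebook): a central finite
# difference should agree with 1 - tanh(x)^2 to high precision.

z_check = np.random.randn(4, 7)
eps_check = 1e-6
numeric_dtanh = (np.tanh(z_check + eps_check) - np.tanh(z_check - eps_check)) / (2 * eps_check)
analytic_dtanh = 1 - np.tanh(z_check) ** 2
print("max |numeric - analytic| =", np.max(np.abs(numeric_dtanh - analytic_dtanh)))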

# In[14]:

def rnn_cell_backward(da_next, cache):
    """
    Implements the backward pass for the RNN-cell (single time-step).

    Arguments:
    da_next -- Gradient of loss with respect to next hidden state
    cache -- python dictionary containing useful values (output of rnn_cell_forward())

    Returns:
    gradients -- python dictionary containing:
                        dxt -- Gradients of input data, of shape (n_x, m)
                        da_prev -- Gradients of previous hidden state, of shape (n_a, m)
                        dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
                        dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
                        dba -- Gradients of bias vector, of shape (n_a, 1)
    """

    # Retrieve values from cache
    (a_next, a_prev, xt, parameters) = cache

    # Retrieve values from parameters
    Wax = parameters["Wax"]
    Waa = parameters["Waa"]
    Wya = parameters["Wya"]
    ba = parameters["ba"]
    by = parameters["by"]

    ### START CODE HERE ###
    # compute dtanh, the gradient of the loss w.r.t. the tanh pre-activation (≈1 line)
    dtanh = (1 - a_next * a_next) * da_next

    # compute the gradient of the loss with respect to xt and Wax (≈2 lines)
    dxt = np.dot(Wax.T, dtanh)
    dWax = np.dot(dtanh, xt.T)

    # compute the gradient with respect to a_prev and Waa (≈2 lines)
    da_prev = np.dot(Waa.T, dtanh)
    dWaa = np.dot(dtanh, a_prev.T)

    # compute the gradient with respect to ba (≈1 line)
    dba = np.sum(dtanh, keepdims=True, axis=-1)

    ### END CODE HERE ###

    # Store the gradients in a python dictionary
    gradients = {"dxt": dxt, "da_prev": da_prev, "dWax": dWax, "dWaa": dWaa, "dba": dba}

    return gradients


# In[15]:

np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
Wax = np.random.randn(5,3)
Waa = np.random.randn(5,5)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "ba": ba, "by": by}

a_next, yt, cache = rnn_cell_forward(xt, a_prev, parameters)

da_next = np.random.randn(5,10)
gradients = rnn_cell_backward(da_next, cache)
print("gradients[\"dxt\"][1][2] =", gradients["dxt"][1][2])
print("gradients[\"dxt\"].shape =", gradients["dxt"].shape)
print("gradients[\"da_prev\"][2][3] =", gradients["da_prev"][2][3])
print("gradients[\"da_prev\"].shape =", gradients["da_prev"].shape)
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWax\"].shape =", gradients["dWax"].shape)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWaa\"].shape =", gradients["dWaa"].shape)
print("gradients[\"dba\"][4] =", gradients["dba"][4])
print("gradients[\"dba\"].shape =", gradients["dba"].shape)
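
# Optional sanity check (not part of the original assignment): for the scalar
# loss L = sum(a_next), the upstream gradient da_next is a matrix of ones, so
# the analytic dba from rnn_cell_backward can be compared against a central
# finite-difference estimate. This assumes rnn_cell_forward implements the
# standard update a_next = tanh(Waa.a_prev + Wax.xt + ba).

def numeric_dba_entry(k, eps=1e-6):
    # Perturb one entry of ba and measure the change in sum(a_next)
    ba_plus, ba_minus = ba.copy(), ba.copy()
    ba_plus[k, 0] += eps
    ba_minus[k, 0] -= eps
    a_plus, _, _ = rnn_cell_forward(xt, a_prev, {**parameters, "ba": ba_plus})
    a_minus, _, _ = rnn_cell_forward(xt, a_prev, {**parameters, "ba": ba_minus})
    return (np.sum(a_plus) - np.sum(a_minus)) / (2 * eps)

check = rnn_cell_backward(np.ones_like(a_next), cache)
print("dba[2] analytic =", check["dba"][2, 0], ", numeric =", numeric_dba_entry(2))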


# #### Backward pass through the RNN
#
# Computing the gradient of the cost with respect to $a^{\langle t \rangle}$ at every time-step $t$ is useful because it is what allows the gradient to backpropagate to the previous RNN-cell. To do so, you iterate through all the time steps starting at the end, and at each step you increment the overall $db_a$, $dW_{aa}$, $dW_{ax}$ and store $dx$.
#
# **Instructions**:
#
# Implement the `rnn_backward` function. Initialize the return variables with zeros first, then loop through all the time steps, calling `rnn_cell_backward` at each time-step and updating the other variables accordingly.
  161.  
  162. # In[20]:
  163.  
  164. def rnn_backward(da, caches):
  165.     """
  166.    Implement the backward pass for a RNN over an entire sequence of input data.
  167.  
  168.    Arguments:
  169.    da -- Upstream gradients of all hidden states, of shape (n_a, m, T_x)
  170.    caches -- tuple containing information from the forward pass (rnn_forward)
  171.    
  172.    Returns:
  173.    gradients -- python dictionary containing:
  174.                        dx -- Gradient w.r.t. the input data, numpy-array of shape (n_x, m, T_x)
  175.                        da0 -- Gradient w.r.t the initial hidden state, numpy-array of shape (n_a, m)
  176.                        dWax -- Gradient w.r.t the input's weight matrix, numpy-array of shape (n_a, n_x)
  177.                        dWaa -- Gradient w.r.t the hidden state's weight matrix, numpy-arrayof shape (n_a, n_a)
  178.                        dba -- Gradient w.r.t the bias, of shape (n_a, 1)
  179.    """
  180.        
  181.     ### START CODE HERE ###
  182.    
  183.     # Retrieve values from the first cache (t=1) of caches (≈2 lines)
  184.    
  185.     (cache, x)= caches
  186.     (a1, a0, x1, parameters) = caches[0]
  187.    
  188.     # Retrieve dimensions from da's and x1's shapes (≈2 lines)
  189.     n_a, m, T_x = da.shape
  190.     n_x, m = x1.shape
  191.    
  192.     # initialize the gradients with the right sizes (≈6 lines)
  193.     dx = np.zeros((n_x,m,T-x))
  194.     dWax = np.zeros((n_a,n_x))
  195.     dWaa = np.zeros((n_a,n_a))
  196.     dba = np.zeros((n_a,1))
  197.     da0 = np.zeros((n_a,m))
  198.     da_prevt = np.zeros((n_a,m))
  199.    
  200.     # Loop through all the time steps
  201.     for t in reversed(range(T_x)):
  202.         # Compute gradients at time step t. Choose wisely the "da_next" and the "cache" to use in the backward propagation step. (≈1 line)
  203.         gradients = rnn_cell_backward(da[:,:,t]+da_prevt,caches[t])
  204.         # Retrieve derivatives from gradients (≈ 1 line)
  205.         dxt, da_prevt, dWaxt, dWaat, dbat = gradients["dxt"], gradients["da_prev"], gradients["dWax"], gradients["dWaa"], gradients["dba"]
  206.         # Increment global derivatives w.r.t parameters by adding their derivative at time-step t (≈4 lines)
  207.         dx[:, :, t] = dxt
  208.         dWax += dWax
  209.         dWaa += dWaa
  210.         dba += dbat
  211.        
  212.     # Set da0 to the gradient of a which has been backpropagated through all time-steps (≈1 line)
  213.     da0 = da_prevt
  214.     ### END CODE HERE ###
  215.  
  216.     # Store the gradients in a python dictionary
  217.     gradients = {"dx": dx, "da0": da0, "dWax": dWax, "dWaa": dWaa,"dba": dba}
  218.    
  219.     return gradients


# In[21]:

np.random.seed(1)
x = np.random.randn(3,10,4)
a0 = np.random.randn(5,10)
Wax = np.random.randn(5,3)
Waa = np.random.randn(5,5)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "ba": ba, "by": by}
a, y, caches = rnn_forward(x, a0, parameters)
da = np.random.randn(5, 10, 4)
gradients = rnn_backward(da, caches)

print("gradients[\"dx\"][1][2] =", gradients["dx"][1][2])
print("gradients[\"dx\"].shape =", gradients["dx"].shape)
print("gradients[\"da0\"][2][3] =", gradients["da0"][2][3])
print("gradients[\"da0\"].shape =", gradients["da0"].shape)
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWax\"].shape =", gradients["dWax"].shape)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWaa\"].shape =", gradients["dWaa"].shape)
print("gradients[\"dba\"][4] =", gradients["dba"][4])
print("gradients[\"dba\"].shape =", gradients["dba"].shape)


# ## 3.2 - LSTM backward pass

# ### 3.2.1 One step backward
#
# The LSTM backward pass is slightly more complicated than the forward one. We have provided you with all the equations for the LSTM backward pass below. (If you enjoy calculus exercises feel free to try deriving these from scratch yourself.)
#
# ### 3.2.2 Gate derivatives
#
# $$d\Gamma_o^{\langle t \rangle} = da_{next}*\tanh(c_{next}) * \Gamma_o^{\langle t \rangle}*(1-\Gamma_o^{\langle t \rangle})\tag{7}$$
#
# $$d\tilde c^{\langle t \rangle} = \left(dc_{next}*\Gamma_u^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} * (1-\tanh(c_{next})^2) * \Gamma_u^{\langle t \rangle} * da_{next}\right) * \left(1-(\tilde c^{\langle t \rangle})^2\right) \tag{8}$$
#
# $$d\Gamma_u^{\langle t \rangle} = \left(dc_{next}*\tilde c^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} * (1-\tanh(c_{next})^2) * \tilde c^{\langle t \rangle} * da_{next}\right)*\Gamma_u^{\langle t \rangle}*(1-\Gamma_u^{\langle t \rangle})\tag{9}$$
#
# $$d\Gamma_f^{\langle t \rangle} = \left(dc_{next}*c_{prev} + \Gamma_o^{\langle t \rangle} * (1-\tanh(c_{next})^2) * c_{prev} * da_{next}\right)*\Gamma_f^{\langle t \rangle}*(1-\Gamma_f^{\langle t \rangle})\tag{10}$$
#
# ### 3.2.3 Parameter derivatives
#
# $$ dW_f = d\Gamma_f^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{11} $$
# $$ dW_u = d\Gamma_u^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{12} $$
# $$ dW_c = d\tilde c^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{13} $$
# $$ dW_o = d\Gamma_o^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{14}$$
#
# To calculate $db_f, db_u, db_c, db_o$ you just need to sum across the horizontal axis (axis=1) of $d\Gamma_f^{\langle t \rangle}, d\Gamma_u^{\langle t \rangle}, d\tilde c^{\langle t \rangle}, d\Gamma_o^{\langle t \rangle}$ respectively. Note that you should use the `keepdims=True` option.
#
# Finally, you will compute the derivative with respect to the previous hidden state, previous memory state, and input.
#
# $$ da_{prev} = W_f^T*d\Gamma_f^{\langle t \rangle} + W_u^T * d\Gamma_u^{\langle t \rangle} + W_c^T * d\tilde c^{\langle t \rangle} + W_o^T * d\Gamma_o^{\langle t \rangle} \tag{15}$$
# Here, the weights in equation 15 are the columns that multiply $a_{prev}$, i.e. the first $n_a$ columns of each gate's weight matrix ($W_f = W_f[:,:n_a]$ etc.). (A quick check of this slicing convention follows after this cell.)
#
# $$ dc_{prev} = dc_{next}*\Gamma_f^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} * (1- \tanh(c_{next})^2)*\Gamma_f^{\langle t \rangle}*da_{next} \tag{16}$$
# $$ dx^{\langle t \rangle} = W_f^T*d\Gamma_f^{\langle t \rangle} + W_u^T * d\Gamma_u^{\langle t \rangle} + W_c^T * d\tilde c^{\langle t \rangle} + W_o^T * d\Gamma_o^{\langle t \rangle}\tag{17} $$
# where the weights in equation 17 are the remaining columns, from $n_a$ to the end ($W_f = W_f[:,n_a:]$ etc.).
#
# **Exercise:** Implement `lstm_cell_backward` by implementing equations $(7)$-$(17)$ above. Good luck! :)
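
# The slicing convention above can be checked directly (illustrative sketch
# with arbitrary shapes, not part of the graded code): multiplying a gate
# weight matrix by the stacked vector [a_prev; xt] is the same as applying its
# first n_a columns to a_prev and its remaining n_x columns to xt.

n_a_demo, n_x_demo, m_demo = 5, 3, 10
W_demo = np.random.randn(n_a_demo, n_a_demo + n_x_demo)
a_demo = np.random.randn(n_a_demo, m_demo)
xt_demo = np.random.randn(n_x_demo, m_demo)
concat_demo = np.concatenate((a_demo, xt_demo), axis=0)
assert np.allclose(np.dot(W_demo, concat_demo),
                   np.dot(W_demo[:, :n_a_demo], a_demo) + np.dot(W_demo[:, n_a_demo:], xt_demo))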

# In[22]:

def lstm_cell_backward(da_next, dc_next, cache):
    """
    Implement the backward pass for the LSTM-cell (single time-step).

    Arguments:
    da_next -- Gradients of next hidden state, of shape (n_a, m)
    dc_next -- Gradients of next cell state, of shape (n_a, m)
    cache -- cache storing information from the forward pass

    Returns:
    gradients -- python dictionary containing:
                        dxt -- Gradient of input data at time-step t, of shape (n_x, m)
                        da_prev -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)
                        dc_prev -- Gradient w.r.t. the previous memory state, of shape (n_a, m)
                        dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
                        dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
                        dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)
                        dWo -- Gradient w.r.t. the weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
                        dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)
                        dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)
                        dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)
                        dbo -- Gradient w.r.t. biases of the output gate, of shape (n_a, 1)
    """

    # Retrieve information from "cache"
    (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters) = cache

    ### START CODE HERE ###
    # Retrieve dimensions from xt's and a_next's shapes (≈2 lines)
    n_x, m = xt.shape
    n_a, m = a_next.shape

    # Compute gate-related derivatives; their values can be found by looking carefully at equations (7) to (10) (≈4 lines)
    dot = da_next * np.tanh(c_next) * ot * (1 - ot)
    dcct = (dc_next * it + ot * (1 - np.square(np.tanh(c_next))) * it * da_next) * (1 - np.square(cct))
    dit = (dc_next * cct + ot * (1 - np.square(np.tanh(c_next))) * cct * da_next) * it * (1 - it)
    dft = (dc_next * c_prev + ot * (1 - np.square(np.tanh(c_next))) * c_prev * da_next) * ft * (1 - ft)

    # Compute parameter-related derivatives. Use equations (11)-(14) (≈8 lines)
    dWf = np.dot(dft, np.concatenate((a_prev, xt), axis=0).T)
    dWi = np.dot(dit, np.concatenate((a_prev, xt), axis=0).T)
    dWc = np.dot(dcct, np.concatenate((a_prev, xt), axis=0).T)
    dWo = np.dot(dot, np.concatenate((a_prev, xt), axis=0).T)
    dbf = np.sum(dft, axis=1, keepdims=True)
    dbi = np.sum(dit, axis=1, keepdims=True)
    dbc = np.sum(dcct, axis=1, keepdims=True)
    dbo = np.sum(dot, axis=1, keepdims=True)

    # Compute derivatives w.r.t. previous hidden state, previous memory state and input. Use equations (15)-(17). (≈3 lines)
    da_prev = np.dot(parameters["Wf"][:, :n_a].T, dft) + np.dot(parameters["Wi"][:, :n_a].T, dit) + np.dot(parameters["Wc"][:, :n_a].T, dcct) + np.dot(parameters["Wo"][:, :n_a].T, dot)
    dc_prev = dc_next * ft + ot * (1 - np.square(np.tanh(c_next))) * ft * da_next
    dxt = np.dot(parameters["Wf"][:, n_a:].T, dft) + np.dot(parameters["Wi"][:, n_a:].T, dit) + np.dot(parameters["Wc"][:, n_a:].T, dcct) + np.dot(parameters["Wo"][:, n_a:].T, dot)
    ### END CODE HERE ###

    # Save gradients in dictionary
    gradients = {"dxt": dxt, "da_prev": da_prev, "dc_prev": dc_prev, "dWf": dWf, "dbf": dbf, "dWi": dWi, "dbi": dbi,
                 "dWc": dWc, "dbc": dbc, "dWo": dWo, "dbo": dbo}

    return gradients


# In[24]:

np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
c_prev = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)

parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}

a_next, c_next, yt, cache = lstm_cell_forward(xt, a_prev, c_prev, parameters)

da_next = np.random.randn(5,10)
dc_next = np.random.randn(5,10)
gradients = lstm_cell_backward(da_next, dc_next, cache)
print("gradients[\"dxt\"][1][2] =", gradients["dxt"][1][2])
print("gradients[\"dxt\"].shape =", gradients["dxt"].shape)
print("gradients[\"da_prev\"][2][3] =", gradients["da_prev"][2][3])
print("gradients[\"da_prev\"].shape =", gradients["da_prev"].shape)
print("gradients[\"dc_prev\"][2][3] =", gradients["dc_prev"][2][3])
print("gradients[\"dc_prev\"].shape =", gradients["dc_prev"].shape)
print("gradients[\"dWf\"][3][1] =", gradients["dWf"][3][1])
print("gradients[\"dWf\"].shape =", gradients["dWf"].shape)
print("gradients[\"dWi\"][1][2] =", gradients["dWi"][1][2])
print("gradients[\"dWi\"].shape =", gradients["dWi"].shape)
print("gradients[\"dWc\"][3][1] =", gradients["dWc"][3][1])
print("gradients[\"dWc\"].shape =", gradients["dWc"].shape)
print("gradients[\"dWo\"][1][2] =", gradients["dWo"][1][2])
print("gradients[\"dWo\"].shape =", gradients["dWo"].shape)
print("gradients[\"dbf\"][4] =", gradients["dbf"][4])
print("gradients[\"dbf\"].shape =", gradients["dbf"].shape)
print("gradients[\"dbi\"][4] =", gradients["dbi"][4])
print("gradients[\"dbi\"].shape =", gradients["dbi"].shape)
print("gradients[\"dbc\"][4] =", gradients["dbc"][4])
print("gradients[\"dbc\"].shape =", gradients["dbc"].shape)
print("gradients[\"dbo\"][4] =", gradients["dbo"][4])
print("gradients[\"dbo\"].shape =", gradients["dbo"].shape)
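
# Optional sanity check (not part of the original assignment): for the scalar
# loss L = sum(a_next) + sum(c_next), both upstream gradients are matrices of
# ones, so the analytic dbf from lstm_cell_backward can be compared against a
# central finite-difference estimate. This assumes lstm_cell_forward follows
# the standard LSTM equations (ft = sigmoid(Wf.[a_prev; xt] + bf), etc.).

def numeric_dbf_entry(k, eps=1e-6):
    # Central difference w.r.t. one entry of the forget-gate bias bf
    vals = []
    for sign in (+1.0, -1.0):
        bf_shift = bf.copy()
        bf_shift[k, 0] += sign * eps
        a_s, c_s, _, _ = lstm_cell_forward(xt, a_prev, c_prev, {**parameters, "bf": bf_shift})
        vals.append(np.sum(a_s) + np.sum(c_s))
    return (vals[0] - vals[1]) / (2 * eps)

check_lstm = lstm_cell_backward(np.ones_like(a_next), np.ones_like(c_next), cache)
print("dbf[3] analytic =", check_lstm["dbf"][3, 0], ", numeric =", numeric_dbf_entry(3))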


# ### 3.3 Backward pass through the LSTM RNN
#
# This part is very similar to the `rnn_backward` function you implemented above. You will first create variables of the same dimensions as your return variables. You will then iterate over all the time steps starting from the end and, at each iteration, call the one-step function you implemented for the LSTM. You accumulate the parameter gradients by summing them over the time steps. Finally, return a dictionary with the new gradients.
#
# **Instructions**: Implement the `lstm_backward` function. Create a for loop starting from $T_x$ and going backward. For each step, call `lstm_cell_backward` and update your old gradients by adding the new gradients to them. Note that `dxt` is not accumulated but stored.

# In[27]:

def lstm_backward(da, caches):
    """
    Implement the backward pass for the RNN with LSTM-cell (over a whole sequence).

    Arguments:
    da -- Gradients w.r.t the hidden states, numpy-array of shape (n_a, m, T_x)
    caches -- cache storing information from the forward pass (lstm_forward)

    Returns:
    gradients -- python dictionary containing:
                        dx -- Gradient of inputs, of shape (n_x, m, T_x)
                        da0 -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)
                        dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
                        dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
                        dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)
                        dWo -- Gradient w.r.t. the weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
                        dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)
                        dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)
                        dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)
                        dbo -- Gradient w.r.t. biases of the output gate, of shape (n_a, 1)
    """

    # Retrieve values from the first cache (t=1) of caches.
    (caches, x) = caches
    (a1, c1, a0, c0, f1, i1, cc1, o1, x1, parameters) = caches[0]

    ### START CODE HERE ###
    # Retrieve dimensions from da's and x1's shapes (≈2 lines)
    n_a, m, T_x = da.shape
    n_x, m = x1.shape

    # initialize the gradients with the right sizes (≈12 lines)
    dx = np.zeros((n_x, m, T_x))
    da0 = np.zeros((n_a, m))
    da_prevt = np.zeros((n_a, m))
    dc_prevt = np.zeros((n_a, m))
    dWf = np.zeros((n_a, n_a + n_x))
    dWi = np.zeros((n_a, n_a + n_x))
    dWc = np.zeros((n_a, n_a + n_x))
    dWo = np.zeros((n_a, n_a + n_x))
    dbf = np.zeros((n_a, 1))
    dbi = np.zeros((n_a, 1))
    dbc = np.zeros((n_a, 1))
    dbo = np.zeros((n_a, 1))

    # loop back over the whole sequence
    for t in reversed(range(T_x)):
        # Compute all gradients using lstm_cell_backward
        gradients = lstm_cell_backward(da[:, :, t] + da_prevt, dc_prevt, caches[t])
        # Store or add the gradient to the parameters' previous step's gradient
        dx[:, :, t] = gradients['dxt']
        dWf += gradients['dWf']
        dWi += gradients['dWi']
        dWc += gradients['dWc']
        dWo += gradients['dWo']
        dbf += gradients['dbf']
        dbi += gradients['dbi']
        dbc += gradients['dbc']
        dbo += gradients['dbo']
        # Propagate the hidden-state and cell-state gradients to the previous time-step
        da_prevt = gradients['da_prev']
        dc_prevt = gradients['dc_prev']

    # Set the first activation's gradient to the backpropagated gradient da_prev.
    da0 = gradients['da_prev']

    ### END CODE HERE ###

    # Store the gradients in a python dictionary
    gradients = {"dx": dx, "da0": da0, "dWf": dWf, "dbf": dbf, "dWi": dWi, "dbi": dbi,
                 "dWc": dWc, "dbc": dbc, "dWo": dWo, "dbo": dbo}

    return gradients


# In[28]:

np.random.seed(1)
x = np.random.randn(3,10,7)
a0 = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)

parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}

a, y, c, caches = lstm_forward(x, a0, parameters)

da = np.random.randn(5, 10, 4)
gradients = lstm_backward(da, caches)

print("gradients[\"dx\"][1][2] =", gradients["dx"][1][2])
print("gradients[\"dx\"].shape =", gradients["dx"].shape)
print("gradients[\"da0\"][2][3] =", gradients["da0"][2][3])
print("gradients[\"da0\"].shape =", gradients["da0"].shape)
print("gradients[\"dWf\"][3][1] =", gradients["dWf"][3][1])
print("gradients[\"dWf\"].shape =", gradients["dWf"].shape)
print("gradients[\"dWi\"][1][2] =", gradients["dWi"][1][2])
print("gradients[\"dWi\"].shape =", gradients["dWi"].shape)
print("gradients[\"dWc\"][3][1] =", gradients["dWc"][3][1])
print("gradients[\"dWc\"].shape =", gradients["dWc"].shape)
print("gradients[\"dWo\"][1][2] =", gradients["dWo"][1][2])
print("gradients[\"dWo\"].shape =", gradients["dWo"].shape)
print("gradients[\"dbf\"][4] =", gradients["dbf"][4])
print("gradients[\"dbf\"].shape =", gradients["dbf"].shape)
print("gradients[\"dbi\"][4] =", gradients["dbi"][4])
print("gradients[\"dbi\"].shape =", gradients["dbi"].shape)
print("gradients[\"dbc\"][4] =", gradients["dbc"][4])
print("gradients[\"dbc\"].shape =", gradients["dbc"].shape)
print("gradients[\"dbo\"][4] =", gradients["dbo"][4])
print("gradients[\"dbo\"].shape =", gradients["dbo"].shape)