In practice, we usually only need to implement the forward propagation and let the framework handle the backward propagation for us. But in case you're interested in working through every piece of a neural network yourself, I'd like to share some tricks that make it easy to derive the backward propagation for all kinds of neural networks (LSTM, CNN, etc.).
Check out the notebook backward_propagation_for_all.ipynb
for the details. It's best opened in Jupyter Notebook, as the GitHub website renders some of the equations poorly.
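
To give a flavor of the kind of derivation the notebook covers, here is a minimal sketch (not taken from the notebook; all names are illustrative) of a hand-derived backward pass for one dense layer with a sigmoid activation and MSE loss, checked against a finite-difference estimate:

```python
import numpy as np

# Forward:  z = x @ W + b;  a = sigmoid(z);  loss = mean((a - y)**2)
# Backward: apply the chain rule step by step, reusing cached forward values.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W, b, y):
    z = x @ W + b
    a = sigmoid(z)
    loss = np.mean((a - y) ** 2)
    return loss, (x, a, y)

def backward(cache, W):
    x, a, y = cache
    da = 2.0 * (a - y) / a.size   # dL/da for an element-wise mean
    dz = da * a * (1.0 - a)       # sigmoid'(z) = a * (1 - a)
    dW = x.T @ dz                 # dL/dW
    db = dz.sum(axis=0)           # dL/db
    dx = dz @ W.T                 # gradient flowing to the previous layer
    return dW, db, dx

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 3))
    W = rng.normal(size=(3, 2))
    b = np.zeros(2)
    y = rng.normal(size=(4, 2))

    loss, cache = forward(x, W, b, y)
    dW, db, dx = backward(cache, W)

    # Sanity check: one entry of dW against a finite difference.
    eps = 1e-6
    W2 = W.copy()
    W2[0, 0] += eps
    loss2, _ = forward(x, W2, b, y)
    print(dW[0, 0], (loss2 - loss) / eps)  # the two values should nearly match
```

The notebook generalizes this same chain-rule bookkeeping to more complex architectures.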