This is a fork of neuraltalk2 and VQA that generates questions about images from the VQA dataset.
Visit NeuralTalk2 for the original code.
Changes include adding a second input to the LSTM, which lets you train the network to associate certain words with certain types of sentences. This makes it possible to generate questions similar to those in the VQA project.
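The idea can be sketched as follows. This is a hypothetical illustration in NumPy, not the fork's actual Torch/Lua code: a question-type embedding (the "second input") is concatenated with each word embedding before entering the LSTM, so the same word sequence yields a different hidden state per question type. All names, sizes, and weights here are made up for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

EMBED, HIDDEN, NUM_TYPES, VOCAB = 8, 16, 3, 20

# Embedding tables: one for words, one for the question type
# (the hypothetical "second input" described above).
word_embed = rng.standard_normal((VOCAB, EMBED)) * 0.1
type_embed = rng.standard_normal((NUM_TYPES, EMBED)) * 0.1

# LSTM weights over the concatenated [word; type] input,
# with the four gates (i, f, o, g) stacked row-wise.
IN = 2 * EMBED
W = rng.standard_normal((4 * HIDDEN, IN)) * 0.1
U = rng.standard_normal((4 * HIDDEN, HIDDEN)) * 0.1
b = np.zeros(4 * HIDDEN)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c):
    """One LSTM step; returns the new hidden and cell states."""
    z = W @ x + U @ h + b
    i, f, o = (sigmoid(z[k * HIDDEN:(k + 1) * HIDDEN]) for k in range(3))
    g = np.tanh(z[3 * HIDDEN:])
    c = f * c + i * g
    return o * np.tanh(c), c

def encode(word_ids, qtype):
    """Run a word sequence, appending the type embedding at every step."""
    h = np.zeros(HIDDEN)
    c = np.zeros(HIDDEN)
    for w in word_ids:
        x = np.concatenate([word_embed[w], type_embed[qtype]])
        h, c = lstm_step(x, h, c)
    return h

# The same words under different question types produce different
# hidden states, so a decoder on top can learn type-specific
# word distributions (e.g. "what ..." vs "how many ...").
h_type0 = encode([1, 4, 2], qtype=0)
h_type1 = encode([1, 4, 2], qtype=1)
```

In the real model the decoder would be trained end to end so that each question type steers the network toward the vocabulary typical of that question family.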