
```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size, num_layers, matching_in_out=False, batch_size=1):
        super(RNN, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.batch_size = batch_size
        self.matching_in_out = matching_in_out  # length of input vector matches the length of output vector
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers)
        self.hidden2out = nn.Linear(hidden_size, output_size)
        self.hidden = self.init_hidden()

    def forward(self, feature_list):
        if self.matching_in_out:
            lstm_out, _ = self.lstm(feature_list.view(len(feature_list), 1, -1))
            output_space = self.hidden2out(lstm_out.view(len(feature_list), -1))
            output_scores = torch.sigmoid(output_space)  # we'll need to check if we need this sigmoid
            return output_scores
        else:
            for i in range(len(feature_list)):
                cur_ft_tensor = feature_list[i].view(1, 1, -1)
                lstm_out, self.hidden = self.lstm(cur_ft_tensor, self.hidden)
            return self.hidden2out(lstm_out.view(1, -1))

    def init_hidden(self):
        # return torch.rand(self.num_layers, self.batch_size, self.hidden_size)
        return (torch.rand(self.num_layers, self.batch_size, self.hidden_size).to(device),
                torch.rand(self.num_layers, self.batch_size, self.hidden_size).to(device))
```

I am aware of this question, but I'm willing to go as low level as possible. I can work with numpy arrays instead of tensors, use reshape instead of view, and I don't need a device setting.

Based on the class definition above, what I can see is that I only need the following components from torch to get an output from the forward function:

- nn.LSTM
- nn.Linear
- torch.sigmoid

I think I can easily implement the sigmoid function using numpy.
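For example, sigmoid looks like a one-liner, and from the PyTorch docs nn.Linear stores its weight as (out_features, in_features) and computes y = x @ W.T + b, so my guess at both is the following rough sketch (not yet checked against torch):

```python
import numpy as np

def sigmoid(x):
    # element-wise logistic function, same as torch.sigmoid
    return 1.0 / (1.0 + np.exp(-x))

def linear(x, weight, bias):
    # nn.Linear keeps weight with shape (out_features, in_features)
    # and computes y = x @ weight.T + bias
    return x @ weight.T + bias
```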

However, can I have some implementation for nn.LSTM and nn.Linear using something not involving pytorch? Also, how will I use the weights from the state dict in the new class?

So, the question is: how can I "translate" this RNN definition into a class that doesn't need pytorch, and how do I use the state dict weights for it? Alternatively, is there a "light" version of pytorch that I can use just to run the model and yield a result?

EDIT: I think it might be useful to include the numpy/scipy equivalent for both nn.LSTM and nn.Linear.
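To show what I mean, here is my current understanding of a single nn.LSTM time step in numpy, based on the equations and weight layout in the PyTorch docs: weight_ih_l0 has shape (4*hidden_size, input_size), weight_hh_l0 has shape (4*hidden_size, hidden_size), and the four gate blocks are stacked in the order input, forget, cell, output. This reuses the sigmoid helper above and is an untested sketch for a single layer:

```python
import numpy as np

def lstm_step(x, h, c, w_ih, w_hh, b_ih, b_hh):
    # One time step of a single-layer LSTM, following the PyTorch equations.
    # x: (input_size,); h, c: (hidden_size,)
    gates = w_ih @ x + b_ih + w_hh @ h + b_hh
    # split the stacked gate pre-activations in PyTorch's order: i, f, g, o
    i, f, g, o = np.split(gates, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # gate activations
    g = np.tanh(g)                                # candidate cell state
    c = f * c + i * g                             # new cell state
    h = o * np.tanh(c)                            # new hidden state
    return h, c
```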

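And this is roughly how I would pull the weights out of the state dict and run the whole forward pass with the sketches above. Here `model`, `features`, and `hidden_size` are placeholders of mine, and I'm using a zero initial state rather than the torch.rand in init_hidden:

```python
# Convert every tensor in the state dict to a plain numpy array.
sd = {k: v.detach().cpu().numpy() for k, v in model.state_dict().items()}

# Run the sketched forward pass over a sequence of feature vectors.
h = np.zeros(hidden_size)
c = np.zeros(hidden_size)
for x in features:  # features: iterable of length-input_size numpy vectors
    h, c = lstm_step(x, h, c,
                     sd["lstm.weight_ih_l0"], sd["lstm.weight_hh_l0"],
                     sd["lstm.bias_ih_l0"], sd["lstm.bias_hh_l0"])
out = sigmoid(linear(h, sd["hidden2out.weight"], sd["hidden2out.bias"]))
```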