Questions tagged [backpropagation]

3 votes · 0 answers · 20 views

Computational graph vs (computer algebra) symbolic expression

I was reading Baydin et al., Automatic Differentiation in Machine Learning: a Survey, 2018 (arXiv), which differentiates between symbolic differentiation and automatic differentiation (AD). It then says: AD Is Not Symbolic Differentiation. Symbolic differentiation is the automatic manipulation of [sy...
Albert
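
To make the survey's distinction concrete, here is a minimal forward-mode AD sketch using dual numbers (an illustration, not code from the paper): the derivative is computed numerically alongside the value, and no symbolic expression for f'(x) is ever constructed.

```python
# Minimal forward-mode AD with dual numbers: each value carries its
# derivative, so no symbolic expression for f'(x) is ever built.
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule applied numerically at this point, not symbolically.
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

def f(x):
    return x * x + x  # f(x) = x^2 + x, so f'(x) = 2x + 1

x = Dual(3.0, 1.0)   # seed derivative dx/dx = 1
y = f(x)
print(y.val, y.der)  # 12.0 7.0
```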
1 vote · 1 answer · 28 views

Python simple backpropagation not working as expected

I am trying to implement the backpropagation algorithm to show how a two-layer neural network can be used to behave as the XOR logic gate. I followed this tutorial here. After running, I expect the output to follow the XOR logic truth table: [[0] [1] [1] [0]] However I get: output after training:...
Rrz0
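
For reference, a minimal two-layer XOR network in numpy (a generic sketch, not the tutorial's code); a common cause of the symptom described is omitting the activation derivative from the delta terms.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

np.random.seed(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = np.random.randn(2, 4), np.zeros(4)   # input -> hidden
W2, b2 = np.random.randn(4, 1), np.zeros(1)   # hidden -> output
lr = 1.0

for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: each delta includes the sigmoid derivative a*(1-a).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```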
1 vote · 0 answers · 263 views

I am having trouble training a simple neural network

I am learning to make neural networks and I have derived the following equations for backpropagation. Is something wrong with them, because I can't seem to get the neural network to train? I am coding in Python, and the accuracy I get is around 10 (I get this even when I don't train my network). Howeve...
Nischit Pradhan
1 vote · 0 answers · 92 views

MATLAB 2-Layer Neural Network from Scratch

Currently, I'm working on a simple two-layer NN (25 inputs - sigmoid, 199 outputs - softmax) from scratch for debugging reasons; precisely, I want to track some values. My inputs are batches, or generally speaking matrices of dimension (rows x 25), in order to fit the input layer structure. Regarding my w...
Billabong
1 vote · 1 answer · 106 views

Why is the neural network loss decreasing but the accuracy not increasing?

I implemented a batch-based back-propagation algorithm for a neural network with one hidden layer and a sigmoid activation function. The output layer is a one-hot sigmoid layer. The net input of the first layer is z1; after applying sigmoid it becomes a1. Similarly, we have z2 and a2 for the second layer. The back-pr...
MohsenIT
1 vote · 0 answers · 41 views

Backpropagation (matrices are not aligned)

Well, the problem is with delta1. I've checked the math a couple of times and it seems good to me; everything should be correct with delta2, but it doesn't match with W2 transposed. Here is the backpropagation: def backward(self, X, Y): X = np.array(X) Y = np.array(Y) delta2 = -(Y - self.yHat) * self.deriv_sig...
Igracx33
1 vote · 1 answer · 165 views

How does backpropagation work for Variational AutoEncoder?

I understand that through reparameterization, Gaussian noise N(0, I) is taken as an input, thereby making the entire network differentiable. I am not able to understand how this is implemented. Specifically, how are the dimensions of the gradient mapped?
dulla
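
For context, the reparameterization trick writes the latent sample as a deterministic function of the encoder outputs plus external noise, so every gradient has the same shape as the parameter it flows into, while the noise eps receives none. A minimal numpy sketch (all names and values are illustrative):

```python
import numpy as np

# Encoder outputs (illustrative): mean and log-variance of the latent code.
mu = np.array([0.3, -0.1])
log_var = np.array([-1.0, 0.5])

# Sample z = mu + sigma * eps with eps ~ N(0, I) drawn outside the graph.
eps = np.random.randn(*mu.shape)
z = mu + np.exp(0.5 * log_var) * eps

# Backprop: given dL/dz, the gradients have the same shape as mu/log_var,
# because z is an elementwise function of them; eps gets no gradient.
dL_dz = np.ones_like(z)                                  # stand-in upstream gradient
dL_dmu = dL_dz                                           # dz/dmu = 1
dL_dlogvar = dL_dz * eps * 0.5 * np.exp(0.5 * log_var)   # chain rule
```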
1 vote · 0 answers · 122 views

Can TensorFlow backpropagate through distributions, e.g. beta?

tf.distributions gives access to several distributions. My network should predict the parameters of a probability density function (i.e. a policy in my case), and the loss is then dependent on these again. I am asking about the beta distribution especially, as that is the one I intend to use. E.g.: lo...
LJKS
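
For reference, one way backprop does work through tf.distributions.Beta is via log_prob, whose gradient with respect to the predicted parameters is well defined. A TF 1.x-style sketch under that assumption (all tensor names are illustrative, not from the question):

```python
import tensorflow as tf  # TF 1.x-style API, as in the question

# Hypothetical network head predicting Beta parameters; softplus keeps
# them strictly positive.
logits = tf.placeholder(tf.float32, [None, 2])
alpha = tf.nn.softplus(logits[:, 0]) + 1e-6
beta = tf.nn.softplus(logits[:, 1]) + 1e-6
dist = tf.distributions.Beta(alpha, beta)

# Policy-gradient-style loss: gradients flow through log_prob into
# alpha/beta (and hence the network); sampled actions are constants.
actions = tf.placeholder(tf.float32, [None])
advantages = tf.placeholder(tf.float32, [None])
loss = -tf.reduce_mean(dist.log_prob(actions) * advantages)

grads = tf.gradients(loss, [logits])  # well-defined, so backprop works here
```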
1 vote · 0 answers · 302 views

What makes a (too large) neural network diverge?

I've built a neural network (NN) with only 6 inputs and 8 classes (it is thus a classifier I'm building). To give some more background: the number of training samples is 80,000, as is the dev/cross-validation set size (I know this is not a good split, but to compare my outputs to those of...
furk
1 vote · 0 answers · 131 views

Simple implementation of Adam in a NN

Please look at this pseudo Python code; it is just for illustration. import numpy as np X = np.array([[1,0],[0,0]]) y = np.array([1,0]) w0 = np.random.random((2,1)) learning_rate = 0.01 for i in range(100): # we shuffle the data # we take part of data # forward propagation (bias, matrix multiplication,...
Stenga
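
For reference, a minimal self-contained Adam step in numpy that could replace a plain SGD update inside such a loop (a sketch, not the asker's code):

```python
import numpy as np

def adam_update(w, grad, m, v, t, lr=0.01,
                beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step: returns updated weights and moment estimates."""
    m = beta1 * m + (1 - beta1) * grad          # first moment (mean)
    v = beta2 * v + (1 - beta2) * grad ** 2     # second moment (variance)
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Inside the training loop, replacing a plain SGD step (t starts at 1):
# w0, m0, v0 = adam_update(w0, grad_w0, m0, v0, t=i + 1)
```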
1 vote · 0 answers · 82 views

Calculating error in neural network back propagation

This is a very simple question as I am new to the concepts. I have a 4-4-1 neural network that I am running on 16x4 binary data to predict a 16x1 column of outputs. I have utilized random weights and biases to generate a rough predicted output vector. I then calculate a vector of errors (actual-outp...
kss
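
For reference, the usual next step after computing the (actual - output) error vector is to scale it by the activation derivative to get the deltas. A shape-level numpy sketch of the 4-4-1 case described (the data here are random stand-ins, assuming sigmoid activations):

```python
import numpy as np

def sigmoid_prime(a):
    # Derivative of the sigmoid expressed via its output a = sigmoid(z).
    return a * (1 - a)

# Stand-ins for the question's 4-4-1 setup (shapes are what matter).
hidden = np.random.rand(16, 4)            # hidden activations
output = np.random.rand(16, 1)            # predicted outputs
actual = np.random.randint(0, 2, (16, 1)) # 16x1 target column
W2 = np.random.randn(4, 1)                # hidden -> output weights

errors = actual - output                                    # (16, 1)
delta_out = errors * sigmoid_prime(output)                  # scale by slope
delta_hidden = (delta_out @ W2.T) * sigmoid_prime(hidden)   # (16, 4)
# Weight gradients then follow as hidden.T @ delta_out and X.T @ delta_hidden.
```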
1 vote · 1 answer · 110 views

Neural network back propagation fails to overfit small data

I'm trying to implement a neural network with the backpropagation algorithm in Racket. To test the implementation, I decided to train it on a very small dataset for a large number of iterations and see if it fits the data it was trained on. However, it does not: using the sigmoid function, it outputs extreme...
Coderino Javarino
1 vote · 1 answer · 42 views

Wrong Backprop updates in Batchnorm

I have these updates for backprop; please let me know where the dx part is wrong. In the computation graph, I am using X, sample_mean and sample_var. Thanks for your help. (x, norm, sample_mean, sample_var, gamma, eps) = cache dbeta = np.sum(dout, axis=0) dgamma = np.sum(dout * norm, axis=0) dxmi...
srinadhu k
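
For comparison, a standard dx derivation through the sample mean and variance, using the same cache layout as the excerpt (a reference sketch, not the asker's code):

```python
import numpy as np

def batchnorm_backward(dout, cache):
    # Standard batchnorm backward: differentiate through the normalization,
    # the sample variance, and the sample mean.
    x, norm, sample_mean, sample_var, gamma, eps = cache
    N = x.shape[0]
    xmu = x - sample_mean
    ivar = 1.0 / np.sqrt(sample_var + eps)

    dbeta = np.sum(dout, axis=0)
    dgamma = np.sum(dout * norm, axis=0)

    dnorm = dout * gamma
    dvar = np.sum(dnorm * xmu, axis=0) * -0.5 * ivar ** 3
    dmean = np.sum(-dnorm * ivar, axis=0) + dvar * np.mean(-2.0 * xmu, axis=0)
    dx = dnorm * ivar + dvar * 2.0 * xmu / N + dmean / N
    return dx, dgamma, dbeta
```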
1 vote · 1 answer · 54 views

Unsure whether function breaks backpropagation

I have been tinkering around a lot with TensorFlow in the past few days; however, I am quite unsure whether a function I wrote would break the backpropagation in a neural network. I thought I'd ask here before I try to integrate this function into a NN. The basic setup is that I want to add two matrices...
Hadjimina
1 vote · 1 answer · 52 views

What is the error term in backpropagation through time if I only have one output?

In this question, "RNN: Back-propagation through time when output is taken only at final timestep", I've seen that if I only have one output at the final time step T, which is y(T), then the error at earlier time steps is not needed. So is the loss function E = sum(E(t)), or instead just the value E = E(T)...
林彥良
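
For reference, a tiny vanilla-RNN sketch of that case: the only external gradient enters at h(T), so the loss is just E = E(T), and earlier steps receive gradient purely through the recurrent path (shapes and values here are illustrative):

```python
import numpy as np

# Tiny vanilla RNN with loss only at the final step T.
T, D, H = 4, 3, 5
np.random.seed(0)
xs = np.random.randn(T, D)
Wx, Wh = np.random.randn(D, H), np.random.randn(H, H)

# Forward: h_t = tanh(x_t Wx + h_{t-1} Wh); keep activations for backprop.
hs = [np.zeros(H)]
for t in range(T):
    hs.append(np.tanh(xs[t] @ Wx + hs[-1] @ Wh))

# Backward: the only external gradient enters at h_T; for t < T the
# hidden state receives gradient solely through the recurrent connection.
dWx, dWh = np.zeros_like(Wx), np.zeros_like(Wh)
dh_next = np.ones(H)                      # stand-in for dE(T)/dh_T
for t in reversed(range(T)):
    dz = dh_next * (1 - hs[t + 1] ** 2)   # through tanh
    dWx += np.outer(xs[t], dz)
    dWh += np.outer(hs[t], dz)
    dh_next = Wh @ dz                     # no per-step error term added here
```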
1 vote · 0 answers · 64 views

My backpropagation is producing utter trash

I'm a hobbyist and thus don't have much experience with neural networks; this is my first attempt to implement one. My backpropagation function is doing something, but not what it is supposed to do. The network is not getting any better at what I'm training it on (after around 10,000 training examples). I...
yazurika
1 vote · 0 answers · 51 views

RNN backprop-through-time

I'm investigating RNNs, and after reading this paper I understood how backprop-through-time works on RNNs: https://arxiv.org/pdf/1610.02583.pdf. But I have some confusion about the following implementation (from cs231): for t in reversed(xrange(T)): dh_current = dh[t] + dh_prev dx_t, dh_prev, dWx_t, dWh_t, db...
Ali Khalilli
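
For reference, a self-contained sketch of how such a loop is usually completed (a hedged reconstruction, not the cs231 assignment code): the upstream gradient at step t is the per-step gradient dh[t] plus the gradient flowing back from the future, and the shared-weight gradients are summed over time.

```python
import numpy as np

def rnn_step_backward(dh, cache):
    # Backward through h = tanh(x @ Wx + h_prev @ Wh + b) for one step.
    x, h_prev, Wx, Wh, h = cache
    dz = dh * (1 - h ** 2)
    return dz @ Wx.T, dz @ Wh.T, x.T @ dz, h_prev.T @ dz, dz.sum(axis=0)

# Toy forward pass that stores per-step caches (shapes are illustrative).
T, N, D, H = 3, 2, 4, 5
np.random.seed(0)
Wx, Wh, b = np.random.randn(D, H), np.random.randn(H, H), np.zeros(H)
x = np.random.randn(T, N, D)
h_prev, caches = np.zeros((N, H)), []
for t in range(T):
    h = np.tanh(x[t] @ Wx + h_prev @ Wh + b)
    caches.append((x[t], h_prev, Wx, Wh, h))
    h_prev = h

# Backward: dh[t] is the per-step upstream gradient; dh_prev carries the
# gradient from the future; shared weights accumulate over time.
dh = np.random.randn(T, N, H)
dx = np.zeros_like(x)
dh_prev = np.zeros((N, H))
dWx, dWh, db = np.zeros_like(Wx), np.zeros_like(Wh), np.zeros_like(b)
for t in reversed(range(T)):
    dh_current = dh[t] + dh_prev
    dx[t], dh_prev, dWx_t, dWh_t, db_t = rnn_step_backward(dh_current, caches[t])
    dWx += dWx_t; dWh += dWh_t; db += db_t
```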
1 vote · 0 answers · 22 views

Network stops converging after reaching a particular accuracy

After completing this course, I decided to implement a deep neural network from scratch to deepen my understanding, but when training the network the accuracy steadily increases until it reaches 35% and then starts decreasing. Here is the backprop algorithm I've implemented. I've set the learning rate...
Ayush Chaurasia
1 vote · 0 answers · 61 views

Backpropagation not working

I am writing code for a neural network with 2 hidden layers. Here is my code snippet: for i in range(200): z1 = sigmoid(np.dot(W1, X_train) + b1) z2 = sigmoid(np.dot(W2, z1) + b2) z3 = sigmoid(np.dot(W3, z2) + b3) C = cost(z3, Y_train) print(C) d3 = - np.multiply(np.multiply(z3, 1-z3), (np.divide(Y_...
tahsin314
1 vote · 0 answers · 54 views

Neural Network in C for XOR, outputs all converge to same value

While the problem in the title is a specific one (converging to the same value just for sigmoid; with other activation functions, costs are not reduced in general), my network is buggy in general, and after many hours spent debugging/testing I cannot figure out why, even after catching some minor mistake...
TheeNinjaDev
1 vote · 2 answers · 296 views

Using stop_gradient with AdamOptimizer in TensorFlow

I am trying to implement a training/finetuning framework where in each backpropagation iteration a certain set of parameters stays fixed. I want to be able to change the set of updated or fixed parameters from iteration to iteration. The TensorFlow method tf.stop_gradient, which apparently forces gradien...
Jamshid
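
For reference, a common alternative to tf.stop_gradient is passing an explicit var_list to the optimizer, so fixed variables receive no update at all, including Adam's moment updates. A TF 1.x-style sketch (variable names are illustrative, not from the question):

```python
import tensorflow as tf  # TF 1.x-style API, as in the question

# Hypothetical two-layer model; only w_train should be updated.
x = tf.placeholder(tf.float32, [None, 4])
w_fixed = tf.Variable(tf.random_normal([4, 8]), name="fixed")
w_train = tf.Variable(tf.random_normal([8, 1]), name="train")
loss = tf.reduce_mean(tf.square(tf.matmul(tf.matmul(x, w_fixed), w_train)))

# Passing var_list restricts the update to the chosen variables; unlike a
# zeroed gradient, w_fixed also gets no Adam moment updates. The set of
# updated parameters can change per iteration by building one train_op
# per parameter set and choosing which to run.
opt = tf.train.AdamOptimizer(1e-3)
train_op = opt.minimize(loss, var_list=[w_train])
```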
1 vote · 0 answers · 201 views

MATLAB neural network for regression

I have implemented 3 functions for neural network regression: 1) a forward propagation function that, given the training inputs and the net structure, calculates the predicted output: function [y_predicted] = forwardProp(Theta,Baias,Inputs,NumberOfLayers,RegressionSwitch) for i = 1:size(Inputs{1},2) Act...
Andrea G
1 vote · 1 answer · 69 views

Loss Not changing: Backpropagation in Python 3.6 with MNIST Dataset

I set out to learn backpropagation with gradient descent using a mathematical approach, to get a grasp of how things work without using any libraries like Keras. I took a sample program from the web and made sure I tried to understand each step. It uses the following: 1) a 3-layer network; the input has 784 c...
ChandanJha
1 vote · 2 answers · 164 views

Trouble with numpy arrays and matrices while doing backpropagation and gradient descent

I'm following this video tutorial series by Dan Shiffman about creating a small 'toy' neural network library. The tutorial uses JS and a matrix library he teaches how to code earlier in the series. I, however, use numpy. In this video he programs gradient descent and backpropagation. However, becau...
Ghost
1 vote · 1 answer · 36 views

TensorFlow - Access weights while doing backprop

I want to implement C-MWP, as described here: https://arxiv.org/pdf/1608.00507.pdf, in Keras/TensorFlow. This involves modifying the way backprop is performed. The new gradient is a function of the bottom activation responses, the weight parameters, and the gradients of the layer above. As a start, I wa...
PaperBuddy
1 vote · 0 answers · 36 views

Backpropagation over inputs in Python

I have developed a Bidirectional Recurrent Neural Network (BRNN) for predicting values given, as input, a vector with 5 entries. For example, for X_test=[0,2,3,4,154.8] it returns a predicted value of the form y_predict=354.45. Now my desired target for y is y_target=540.11. I can achieve the target va...
user3043636
1 vote · 1 answer · 326 views

ReLU Backpropagation

I am having trouble with implementing backprop while using the ReLU activation function. My model has two hidden layers with 10 nodes in both hidden layers and one node in the output layer (thus 3 weights, 3 biases). My model works except for this broken backward_prop function. However, t...
Nate
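
For reference, the ReLU backward pass only needs a 0/1 mask of the pre-activations; a minimal sketch (not the asker's code):

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

def relu_prime(z):
    # Subgradient of ReLU: 1 where the pre-activation was positive, else 0.
    # A frequent backprop bug is applying this to the wrong layer's values,
    # or using the sigmoid-style a*(1-a) term out of habit.
    return (z > 0).astype(z.dtype)

# Illustrative hidden-layer deltas for a net with pre-activations z1, z2:
# d2 = (d3 @ W3.T) * relu_prime(z2)
# d1 = (d2 @ W2.T) * relu_prime(z1)
z = np.array([[-1.0, 0.5], [2.0, -0.3]])
print(relu(z), relu_prime(z), sep="\n")
```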
1 vote · 0 answers · 47 views

Neural Net adapts to change in output instead of learning features

I was trying to implement a neural net from scratch on the IRIS dataset. However, instead of learning the features in the dataset, the network simply kept changing the prediction probabilities to suit the output; i.e., as soon as a new output came, the probability of that output increased while others...
Ritesh Singh
1 vote · 0 answers · 35 views

Why is the weight matrix 3 x 3 in this code when it is a neural network with 2 input nodes and 3 hidden nodes?

I am trying to understand some code from a book; however, I am unable to comprehend why a 3x3 weight matrix is used in this code. import numpy as np class NeuralNetwork: def __init__(self,layers,alpha=0.1): self.W = [] self.layers = layers self.alpha = alpha for i in np.arange(0,len(layers...
Ben
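
Without the full code, one plausible explanation, offered as an assumption rather than a confirmed reading of the book, is the bias trick: a constant 1 is appended to each input, so the bias becomes an extra row of the weight matrix, and 2 inputs feeding 3 hidden nodes give a (2 + 1) x 3 = 3x3 matrix. A minimal sketch:

```python
import numpy as np

# The bias trick: append a constant 1 to each input vector so the bias
# becomes an ordinary row of the weight matrix. With 2 inputs + 1 bias
# feeding 3 hidden nodes, W has shape (2 + 1, 3) = (3, 3).
layers = [2, 3]
W = np.random.randn(layers[0] + 1, layers[1])

x = np.array([0.5, -0.2])
x_aug = np.append(x, 1.0)    # [x1, x2, 1]
net = x_aug @ W              # the bias is the last row of W
print(W.shape, net.shape)    # (3, 3) (3,)
```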
1 vote · 0 answers · 52 views

My Neural Network Doesn't Work [XOR problem]

I'm trying to make a neural network for solving the XOR problem, but I couldn't make it work. It always gives wrong results; maybe I'm making a mistake in my math. The network does not learn, and the result is always similar. I am not using a bias. Note: execute function = (feed-forward + backpropagation), ALPHA = 0.5. Here i...
s3ms
1 vote · 0 answers · 37 views

Backpropagation shapes do not match

I have been trying to create a 3 layer (1 input - 2 hidden - 1 output) neural network from scratch using numpy. The output layer has only 1 neuron. I am trying to use stochastic gradient descent with mini batches. What I have done so far is as follows: N1 = 50 # number of neurons in the first hidden...
Alperen Görmez
1 vote · 1 answer · 56 views

Memory requirements for back propagation - why not use the mean activation?

I need help understanding the memory requirements of a neural network and how they differ between the training and evaluation processes; more specifically, the memory requirements of the training process (I'm using the Keras API running on top of TensorFlow). For a CNN that contains N weights, when usin...
Mark.F
1 vote · 0 answers · 19 views

Error In Backpropagation for Fitting an ANN to a function

I was going through Michael Nielsen's tutorial on artificial neural networks here: http://neuralnetworksanddeeplearning.com/ and I was messing around with the code he provided at the end. I tried to fit a shallow ANN to the function y=1/x and it worked. However, whenever I try to make an ANN architecture...
Shrey Joshi
1 vote · 0 answers · 11 views

Is this a correct backpropagation implementation?

Is this a correct implementation of backpropagation? The network has three layers and the activation function is ReLU. def back_propagation(error, learning_rate, layer, wights, bias): for i0 in range(len(layer)): for i1 in range(len(layer[0])): wights[i0][i1] = wights[i0][i1] - (learning_rate *...
mohamed ibrahim
1 vote · 1 answer · 51 views

How to Input my Training Data into this Neural Network

I'm trying to solve a classification problem with a specific piece of code, and I'm having trouble understanding exactly how my data is to be fed into the Neural Network. I started encoding the data using 1-of-C dummy-encoding so I can preserve categorical context in the data. I haven't finished en...
junfanbl
1 vote · 0 answers · 55 views

Neural network predicting the same output class for every input, even during training

I'm implementing a neural network using the backpropagation algorithm in Python. The method used is similar to that taught by Andrew Ng in his Machine Learning course. But the NN is predicting the same class and nearly identical values for every input during both training and testing. For every input set the output c...
Aditya Birhman
1 vote · 0 answers · 169 views

How to build an RNN using numpy

I'm trying to implement a recurrent neural network using numpy in Python, specifically a many-to-one RNN for a classification problem. I'm a little fuzzy on the pseudocode, especially on the BPTT concept. I'm comfortable with the forward pass (not entirely sure if my implementation is...
Amith Adiraju
1 vote · 1 answer · 45 views

I obtain the same output for every input after training my neural network (2000 inputs, 1 output)

I am trying to implement a neural network which has around 2000 inputs. I have made some tests with the iris dataset in order to check it, and it seems to work, but when I run my test it gives wrong results; most of the time, for all the tests, I obtain the same output for every input. I am...
Cristiam MJ
1 vote · 0 answers · 50 views

Keras backpropagation vs self-made backpropagation

I have a CNN built with Keras, and my task is to run one training step, get the gradients produced by backpropagation, and compare them to gradients that I calculate by hand. In order to do that, I need the weights and the loss function (the difference between expected and achieved outputs, wh...
figure09
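
For reference, one common recipe in Keras 2.x / TF 1.x for extracting the backprop gradients of a compiled model is a backend function over K.gradients; note that attribute names such as model.total_loss and model.targets are version-dependent assumptions:

```python
from keras import backend as K
import numpy as np

def make_grad_fn(model):
    # Gradients of the compiled model's loss w.r.t. its trainable weights.
    grads = K.gradients(model.total_loss, model.trainable_weights)
    inputs = [model.inputs[0], model.targets[0], model.sample_weights[0]]
    return K.function(inputs, grads)

# Hypothetical usage for one batch (X, Y):
# grad_fn = make_grad_fn(model)
# grads = grad_fn([X, Y, np.ones(len(X))])  # compare to hand-computed values
```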
1 vote · 0 answers · 39 views

Backpropagation with a self-defined loss function and network

I want to create a CNN network for object localization (it is given that there is only one object). For this I am using some general layers, and at the end I want to get the nearest and farthest corners to the origin. I am also using a self-defined loss function, which is (100 - intersection over union in %)...
Parthapritam P
