# Building a Neural Network in PyTorch

Today, let us create a simple two-layer neural network using PyTorch. We will implement the neural network shown in Figure 1.

First, you need to install PyTorch. Check this article of mine for the installation process.

Like any other program, we need to import the packages at the beginning.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
```

`torch.nn` is the neural network package of PyTorch, `torch.nn.functional` provides many neural network operations, and `torch.optim` is a package implementing various optimization algorithms.

To create a neural network, define a class that extends `nn.Module`. Then define a `forward()` method that implements the forward pass of the network.

```python
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(3, 4)  # first layer: 3 features -> 4 hidden units
        self.fc2 = nn.Linear(4, 2)  # output layer: 4 hidden units -> 2 classes

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.softmax(self.fc2(x), dim=1)
        return x

net = Net()
```

First, we initialize the layers in `__init__()`; then, in `forward()`, we implement the forward pass. ReLU is applied to the first layer, and softmax is used in the output layer.
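To sanity-check the network, we can pass a batch of random inputs through it; the batch size of 5 here is just for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(3, 4)
        self.fc2 = nn.Linear(4, 2)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.softmax(self.fc2(x), dim=1)
        return x

net = Net()
x = torch.randn(5, 3)   # a batch of 5 samples with 3 features each
out = net(x)
print(out.shape)        # torch.Size([5, 2])
print(out.sum(dim=1))   # each row sums to 1 because of the softmax
```

Each output row is a probability distribution over the two classes.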

So the neural network is ready; let's look at how to train it.

We need an optimizer; let us use SGD with a learning rate of 0.01. The learnable parameters are passed to it via `net.parameters()`. Let our loss function be cross-entropy. Note that `nn.CrossEntropyLoss` applies `log_softmax` internally, so in practice the network would usually return raw logits (i.e., omit the softmax in `forward()`).

```python
optimizer = optim.SGD(net.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
```

For each data point in the training set, we perform the following steps.

```python
optimizer.zero_grad()             # reset gradient buffers
output = net(input)               # forward pass
loss = criterion(output, target)  # compute the loss
loss.backward()                   # backpropagate
optimizer.step()                  # update the weights
```

`zero_grad()` sets the gradient buffers of the parameters to zero. Then we pass the input through the network, and its output is used in the next step for the loss calculation. Finally, the loss is backpropagated and the weights are adjusted.
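Here is a minimal, runnable sketch of one such training step. The dummy input and target are placeholders, and a small `nn.Sequential` stands in for the network, returning raw logits since `nn.CrossEntropyLoss` expects unnormalized scores:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Two-layer network returning raw logits
net = nn.Sequential(nn.Linear(3, 4), nn.ReLU(), nn.Linear(4, 2))
optimizer = optim.SGD(net.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

input = torch.randn(1, 3)         # one sample with 3 features
target = torch.tensor([1])        # its class label (0 or 1)

optimizer.zero_grad()             # reset gradient buffers
output = net(input)               # forward pass
loss = criterion(output, target)  # compute the cross-entropy loss
loss.backward()                   # backpropagate
optimizer.step()                  # update the weights

print(loss.item())
```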

Suppose `train_set` contains the training data and the number of epochs is 4; the training loop looks as shown below.

```python
for epoch in range(4):
    for data in train_set:
        input, target = data
        optimizer.zero_grad()
        output = net(input)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()
```
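Putting it all together, here is an end-to-end sketch on synthetic data; the dataset shape, batch size, and number of batches are made up for illustration:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Two-layer network returning raw logits
# (nn.CrossEntropyLoss applies log_softmax internally)
net = nn.Sequential(nn.Linear(3, 4), nn.ReLU(), nn.Linear(4, 2))
optimizer = optim.SGD(net.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

# Synthetic training set: 20 batches of 8 samples with 3 features each
torch.manual_seed(0)
train_set = [(torch.randn(8, 3), torch.randint(0, 2, (8,))) for _ in range(20)]

for epoch in range(4):
    running_loss = 0.0
    for data in train_set:
        input, target = data
        optimizer.zero_grad()
        output = net(input)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    print(f"epoch {epoch}: average loss {running_loss / len(train_set):.4f}")
```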

So that is how to create and train a neural network in PyTorch.

I hope you enjoyed it. If you’ve made it this far and found any errors in any of the above or can think of any ways to make it clearer for future readers, don’t hesitate to drop a comment. Thanks!