Blog, COLDSURF

Single-Layer Feedforward Neural Networks and Perceptron

  • Single-Layer Feedforward Neural Network
  • Perceptron

Perceptron

The perceptron is an artificial neural network model proposed by Frank Rosenblatt in the late 1950s. It is the simplest form of neural network.
A perceptron takes input values, multiplies each by a weight, sums the results, and then applies an activation function to produce an output. The activation function is typically a unit step function.
The perceptron’s structure includes only an input layer and an output layer with a single neuron (node). As a model for linear classification problems, it learns successfully only when the input data is linearly separable; the classic counterexample is the XOR function, which no single perceptron can represent.
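The description above — weighted sum, bias, step activation, and learning only on separable data — can be sketched in a few lines of Python. The AND-gate data, the learning rate, and the epoch count here are my own illustrative choices, not from the original model description:

```python
def step(x):
    """Unit step activation: 1 if x >= 0, else 0."""
    return 1 if x >= 0 else 0

def predict(weights, bias, inputs):
    """Weighted sum of inputs plus bias, passed through the step function."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return step(total + bias)

def train(samples, labels, lr=1, epochs=10):
    """Perceptron learning rule: shift the weights toward each misclassified sample."""
    weights = [0] * len(samples[0])
    bias = 0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            error = target - predict(weights, bias, x)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# AND gate: linearly separable, so the perceptron is guaranteed to converge.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train(X, y)
print([predict(w, b, x) for x in X])  # [0, 0, 0, 1]
```

With an integer learning rate the arithmetic stays exact; on this data the weights settle within a handful of epochs, illustrating the convergence guarantee for linearly separable inputs.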

Unit Step Function

The unit step function, often used as an activation function in simple neural networks like the perceptron, is a non-linear function that determines output based on whether the input exceeds a specific threshold. Mathematically, it is defined as follows:
\( f(x) = \begin{cases} 1 & \text{if } x \ge 0 \\ 0 & \text{if } x < 0 \end{cases} \)

Characteristics of the Unit Step Function

  • Non-linearity: The unit step function is non-linear; its output jumps discontinuously at the threshold rather than scaling with the input. This makes it suitable for binary classification tasks in models like the perceptron.
  • Binary Output: The function output is restricted to 0 or 1, effectively categorizing input values into two states. This binary nature enables the perceptron to act as a linear classifier.
  • Simplicity: The unit step function is computationally simple and therefore commonly used in early, straightforward models like the perceptron. However, due to its non-differentiability, other activation functions (e.g., sigmoid, ReLU) are now more commonly used in modern deep learning.

Example

In practice, the unit step function outputs 1 if the input value is 0 or greater and 0 if it is less than 0. For example:
  • When \(x = 1\), \(f(1) = 1\)
  • When \(x = -0.5\), \(f(-0.5) = 0\)
This function is useful in networks that make decisions based on a specific threshold.
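The example values above can be checked directly. This is a plain transcription of the function just described (the name `unit_step` is my own):

```python
def unit_step(x):
    """Outputs 1 if the input is 0 or greater, 0 otherwise."""
    return 1 if x >= 0 else 0

print(unit_step(1))     # 1
print(unit_step(-0.5))  # 0
print(unit_step(0))     # 1 (the threshold itself maps to 1)
```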

Single-Layer Feedforward Neural Network

The single-layer feedforward neural network generalizes the perceptron.
A single-layer feedforward neural network has one input layer and one output layer, with no hidden layers in between. Thus, it has a structure in which inputs are directly connected to outputs.
Unlike a perceptron, a single-layer feedforward network may have multiple output neurons. Furthermore, it can use non-linear activation functions, such as sigmoid or ReLU, instead of the unit step function.
Although more versatile than a single perceptron, a single-layer network remains limited to linear classification problems.
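A minimal sketch of such a network — several output neurons, each directly connected to the inputs, with a sigmoid activation in place of the unit step. The layer sizes, weights, and biases below are made-up illustrations, not values from the article:

```python
import math

def sigmoid(z):
    """Sigmoid activation: a smooth, differentiable alternative to the unit step."""
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, weights, biases):
    """One output per weight row; inputs connect directly to outputs (no hidden layer)."""
    return [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

# 3 inputs -> 2 output neurons: one weight row and one bias per output neuron.
W = [[0.5, -0.2, 0.1],
     [0.3, 0.8, -0.5]]
b = [0.0, 0.1]
print(forward([1.0, 2.0, 3.0], W, b))
```

Each output is still just a weighted sum of the raw inputs pushed through an activation, which is why the model, despite the extra outputs, can only draw linear decision boundaries.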

Summary: Relationship between Perceptron and Single-Layer Feedforward Neural Network

  • The perceptron is a specific case of a single-layer feedforward neural network, with only one output neuron and a simple activation function like the unit step function.
  • A single-layer feedforward neural network generalizes the perceptron, potentially having multiple output neurons and supporting a variety of activation functions, enabling it to perform more complex classification tasks.
  • Both models lack hidden layers and have direct connections between input and output layers, which is a shared characteristic.