Simple problem: a linearly separable two-class \( (\omega_1, \omega_2) \) classification task
Given: a set of training samples \( (x_n, y_n),\ n = 1, 2, \ldots, N \), with \( y_n \in \{-1, +1\} \) and \( x_n \in \mathbb{R}^l \)
Assume: there is a hyperplane,
\( \theta^{T}_{*} x = 0 \)
such that,
\( \theta^{T}_{*} x > 0,\) if \(x \in \omega_1\)
\( \theta^{T}_{*} x < 0,\) if \(x \in \omega_2\)
The bias term of the hyperplane has been absorbed into \( \theta_{*} \) for simplicity, by augmenting each \( x \) with a constant feature equal to 1
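The bias absorption can be checked numerically: appending the bias to the weight vector and a constant 1 to each sample leaves the score unchanged. This is a minimal sketch; the variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.standard_normal(3)   # original weight vector
b = 0.5                          # bias term of the hyperplane
x = rng.standard_normal(3)       # a sample in R^3

# Absorb the bias: append b to theta, and a constant 1 to x.
theta_star = np.concatenate([theta, [b]])
x_aug = np.concatenate([x, [1.0]])

# The augmented inner product equals the original affine score theta^T x + b.
assert np.isclose(theta_star @ x_aug, theta @ x + b)
```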
Goal: iteratively compute a hyperplane that correctly classifies all the training samples
==> a cost function is adopted
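The iterative scheme above can be sketched with the classic perceptron update: whenever a sample is misclassified, i.e. \( y_n \theta^T x_n \leq 0 \), the weight vector is moved in the direction \( y_n x_n \). This is a minimal sketch, assuming samples are already augmented with a constant-1 feature; the function name, learning rate, and epoch limit are illustrative choices, not part of the text above.

```python
import numpy as np

def perceptron(X, y, lr=1.0, max_epochs=100):
    """Iteratively compute a separating hyperplane theta.

    X: (N, l) samples, already augmented with a constant-1 feature
       so the bias is absorbed into theta.
    y: (N,) labels in {-1, +1}.
    Converges in finitely many updates if the data are linearly separable.
    """
    theta = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        errors = 0
        for x_n, y_n in zip(X, y):
            if y_n * (theta @ x_n) <= 0:   # misclassified (or on the boundary)
                theta += lr * y_n * x_n    # move the hyperplane toward x_n
                errors += 1
        if errors == 0:                    # all training samples classified correctly
            break
    return theta
```

For example, on a tiny separable set such as `X = [[2, 1], [1, 1], [-2, 1], [-1, 1]]` (last column is the absorbed bias) with `y = [1, 1, -1, -1]`, the returned `theta` satisfies \( y_n \theta^T x_n > 0 \) for every sample.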