\documentclass[]{article}
\begin{document}
\title{The Error Correcting Code Design Problem}
\date{}
\maketitle
The \emph{Error Correcting Code Design Problem} (ECC) \cite{GHSW87}
consists of assigning codewords to the symbols of an alphabet so as
to minimize the length of transmitted messages while providing
maximal correction of single uncorrelated bit errors when the
messages are transmitted over noisy channels. Note that these two
requirements conflict: on one hand, we would like to assign
codewords that are as short as possible; on the other hand, good
error correction is achieved by adding redundant bits so as to
maximize the Hamming distance between every pair of codewords.

A code can be formally represented by a triple $(n, M, d)$, where
$n$ is the length (number of bits) of each codeword, $M$ is the
number of codewords, and $d$ is the minimum Hamming distance
between any pair of distinct codewords. Constructing an optimal
code consists of finding $M$ binary codewords, each of length $n$,
such that $d$, the minimum Hamming distance between each codeword
and all other codewords, is maximized. In other words, a good
$(n, M, d)$ code has a small value of $n$ (reflecting smaller
redundancy and faster transmission), a large value of $M$ (denoting
a larger vocabulary), and a large value of $d$ (reflecting greater
tolerance to noise and error).
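As an illustration (not part of the original formulation), the parameter $d$ of a candidate code can be computed by brute-force pairwise comparison. The following Python sketch assumes codewords are given as equal-length bit strings; the function name is purely illustrative:

```python
from itertools import combinations

def min_hamming_distance(code):
    """Minimum Hamming distance d over all pairs of codewords.

    `code` is a list of equal-length bit strings, e.g. ["0000", "0111"].
    """
    return min(sum(a != b for a, b in zip(u, v))
               for u, v in combinations(code, 2))

# A small (n=4, M=2, d=3) code: the two words differ in three positions.
print(min_hamming_distance(["0000", "0111"]))  # 3
```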

The optimization criterion could simply be the minimum Hamming
distance $d$ taken over all pairs of distinct codewords. However,
this value depends on only a few words and provides very little
information about progress towards the optimum. A better approach
is to measure how well the $M$ words are placed at the corners of
the $n$-dimensional hypercube \cite{Dontas90} by considering the
minimum-energy configuration of $M$ particles (where $d_{ij}$ is
the Hamming distance between words $i$ and $j$):
\begin{equation}
F(C) =
\frac{1}{\sum_{i=1}^{M}\sum_{j=1,\, j \neq i}^{M}\frac{1}{d_{ij}^{2}}}
\end{equation}
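This fitness can be evaluated directly from the pairwise Hamming distances. A minimal Python sketch (function names are illustrative, not taken from the cited work); note that each unordered pair $\{i, j\}$ appears twice in the double sum, once as $(i, j)$ and once as $(j, i)$:

```python
from itertools import combinations

def hamming(u, v):
    """Hamming distance between two equal-length bit strings."""
    return sum(a != b for a, b in zip(u, v))

def fitness(code):
    """F(C) = 1 / (sum over ordered pairs i != j of 1/d_ij^2).

    Larger values indicate codewords that are spread further apart.
    """
    # Each unordered pair contributes twice to the double sum, hence the 2.
    energy = 2 * sum(1.0 / hamming(u, v) ** 2
                     for u, v in combinations(code, 2))
    return 1.0 / energy

print(fitness(["0000", "0111", "1011"]))
```

Because the energy term blows up as any $d_{ij}$ shrinks, this measure rewards configurations in which even the closest pair of codewords is well separated, giving a smoother search gradient than the raw minimum distance $d$.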
\bibliographystyle{alpha}
\bibliography{ecc}
\end{document}