Overhead of error correcting codes as the error rate increases


I am looking to understand the overhead (the number of additional symbols that need to be transmitted) associated with error correcting codes (like Reed-Solomon) as the error rate they are designed to handle increases. For instance, how does the overhead compare if the process needs to be able to correct 1 wrong symbol per 500 versus 1 in 100?

I realize that in practice more complex schemes are used (CDs use overlapping sets of encoding, etc.), but I am trying to make sense of the simplest case first. Is the relationship between overhead and error rate approximately linear? Quadratic? Exponential? I realize big-O notation isn't quite the right tool here, so please forgive me if this is not how the math community formulates the problem.

I would be thrilled with an answer that calculated the overhead associated with the following error rates for Reed-Solomon encoding:

1 symbol error per 10000
1 symbol error per 2000
1 symbol error per 1000
1 symbol error per 500
1 symbol error per 50

Look at the Hamming bound. There you can read that a code which has Hamming distance d, i.e. which can detect d−1 single digit errors and correct t=⌊(d−1)/2⌋ single digit errors, has a size of at most

$$\frac{q^n}{\displaystyle\sum_{k=0}^t\binom{n}{k}(q-1)^k}$$

different code words. In the simplest case, assume binary codes: take q=2 as the alphabet size and n as the length of a code word in bits, so there are 2^n possible words. The base-two logarithm of the above bound gives the maximal number of message bits you can actually communicate per code word, and the code rate is the ratio between these two bit counts.
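For concreteness, here is a minimal Python sketch (my own illustration; the helper name hamming_bound_rate is made up, and the variables n, t, q match the formula above) that evaluates the bound and the code rate it implies:

    from math import comb, log2

    def hamming_bound_rate(n: int, t: int, q: int = 2) -> float:
        """Code rate implied by the Hamming bound for a length-n code
        correcting t errors over a q-ary alphabet:
        rate = log2(q^n / V) / (n * log2(q)),
        where V = sum_{k=0}^{t} C(n, k) * (q - 1)**k."""
        volume = sum(comb(n, k) * (q - 1) ** k for k in range(t + 1))
        return (n * log2(q) - log2(volume)) / (n * log2(q))

    # Example: 1000-bit code words that must correct 2 bit errors.
    # The bound allows a rate of roughly 0.98, i.e. about 2% overhead.
    print(hamming_bound_rate(1000, 2))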

When you express the code rate in terms of the error tolerance, you might want to investigate the limit of large code words. For that to make sense, you'd turn the absolute number of errors into an error rate, i.e. let the number of correctable errors grow proportionally to the code length. It should be possible to obtain useful limits that way.
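A sketch of that limit, again in Python: fix the error rate p = t/n, make the code word long enough to contain a handful of errors (the factor 20 below is an arbitrary choice of mine), and compare the finite-length bound against the known asymptotic form of the Hamming bound, 1 − H2(p), where H2 is the binary entropy function. The error rates are the ones from the question.

    from math import comb, log2

    def hamming_bound_rate(n, t, q=2):
        # Same bound as in the previous sketch.
        volume = sum(comb(n, k) * (q - 1) ** k for k in range(t + 1))
        return (n * log2(q) - log2(volume)) / (n * log2(q))

    def binary_entropy(p):
        return -p * log2(p) - (1 - p) * log2(1 - p)

    # 1 symbol error per 10000, 2000, 1000, 500, 50
    for per in (10000, 2000, 1000, 500, 50):
        p = 1 / per
        n = 20 * per          # code word long enough to hold ~20 expected errors
        t = round(p * n)      # number of errors to correct per code word
        print(f"1 per {per}: rate <= {hamming_bound_rate(n, t):.4f}, "
              f"asymptotic 1 - H2(p) = {1 - binary_entropy(p):.4f}")

The "1 − rate" column of such a table is the fractional overhead; for small p it grows roughly like p·log2(1/p), i.e. a bit faster than linearly in the error rate.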

