What Is a Random String?
Computer Science Department
The University of Auckland, New Zealand
Chaitin's algorithmic definition of random strings - based
on the complexity induced by self-delimiting computers - is critically discussed.
We show that Chaitin's model
satisfies many natural
requirements related to randomness, so it can be considered an
adequate model for finite random objects. It is a better model
than the original (Kolmogorov) proposal. Finally, some open problems
are discussed.
Keywords: blank-endmarker complexity, Chaitin (self-delimiting) complexity, random strings.
Suppose that two persons, A and B, each give us a sequence of 32 bits,
saying that they were obtained from independent coin flips. If A's string
is highly regular (say, a periodic pattern), while B's string shows no obvious pattern,
then we would tend to believe B and would not believe A: B's string seems to be random,
but A's does not. Further on, if we change the value of a bit (say, from 1 to 0) in
a (non) 'random' string, then the result is still a (non) 'random' string.
If we keep making such changes in a 'random' string, then we
will eventually completely destroy its randomness.
Laplace (pp. 16-17) was, in a sense, aware of the above paradox,
as is clear from the following passage:
In the game of heads and tails, if head comes up a hundred times in a row
then this appears to us extraordinary, because after dividing the nearly
infinite number of combinations that can arise in a hundred throws into regular
sequences, or those in which we observe a rule that is easy to grasp, and into
irregular sequences, the latter are incomparably more numerous.
In other words: non-random strings are strings possessing some kind
of regularity, and since the number of all such strings (of a given length)
is small, the occurrence of such a string is extraordinary.
Furthermore, regularity is a good basis for compression. Accordingly,
randomness means the absence of any compression possibility; it corresponds
to maximum information content (because after dropping any part of the
string, there remains no possibility of recovering it). As we shall prove
in Section 5, most strings have this property. By contrast, most strings we deal
with do not.
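The compressibility contrast above can be seen, at a crude level, with any practical compressor; the following sketch uses Python's zlib as a stand-in for the (uncomputable) optimal description method, and the particular strings and lengths are illustrative assumptions, not part of the theory.

```python
import os
import zlib

# A highly regular string compresses to a tiny fraction of its length...
regular = b"01" * 512                      # 1024 bytes with an obvious pattern
print(len(zlib.compress(regular)))         # a few dozen bytes

# ...while a typical (incompressible) string does not compress at all.
random_bytes = os.urandom(1024)            # 1024 bytes from the OS entropy source
print(len(zlib.compress(random_bytes)))    # about 1024 bytes, often slightly more
```

Replacing zlib by any other compressor changes the exact numbers but not the phenomenon: almost all strings of a given length admit no short description.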
The information content of a phrase in a natural language (English, for example)
can be recovered even if some letters (words) are omitted.
The reason comes
from the redundancy of most spoken languages. As a consequence, there exist
many efficient programs to compress texts written in natural languages. It is
important to emphasize that all these methods work very well on texts written
in some natural language, but they do
not work well on average, i.e.
on all possible combinations of letters of the same length. Redundancy
is also a very powerful aid
to readers of mathematical books (and, in general, of scientific literature),
and also to
cryptanalysts (for example, Caesar's
ciphers---just permutations of letters---can be broken by frequency analysis;
see more on this topic in Salomaa ). A hypothetical language
in which there are only strings with maximum information content gives no
preference to any string (i.e. all strings of the same length have
equal frequency); this makes a cipher built on it impossible
to break. However, such languages do not exist (and cannot be constructed,
even with the help of the best computers, available now and in the future);
redundancy is essential and inescapable in a spoken language (and to a large
extent in most artificial languages; see Marcus).
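The frequency attack mentioned above is short enough to sketch. The plaintext, the helper names, and the rule "the most frequent letter stands for e" are illustrative assumptions about English redundancy, not material from the paper.

```python
from collections import Counter

def caesar(text: str, shift: int) -> str:
    # Encrypt by shifting each letter; non-letters are dropped for simplicity.
    return "".join(chr((ord(c) - ord("a") + shift) % 26 + ord("a"))
                   for c in text.lower() if c.isalpha())

def guess_shift(ciphertext: str) -> int:
    # Redundancy at work: assume the most frequent cipher letter encodes 'e'.
    most_common = Counter(ciphertext).most_common(1)[0][0]
    return (ord(most_common) - ord("e")) % 26

plaintext = "we believe the letter e appears everywhere and redundancy defeats secrecy"
print(guess_shift(caesar(plaintext, 3)))   # recovers the shift: 3
```

On a text drawn from a "language" with uniform letter frequencies the same attack would learn nothing, which is exactly the point made above.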
Before passing to the formal treatment it is natural to ask the
following question: Are there any random strings? Of course, we do not
yet have the necessary tools to properly answer this question, but we may
try to approach it informally. Let us call a canonical
program the smallest
program generating a string. We claim that every
canonical program should be random, independently of whether or not it generates a
random output. Indeed, assume that x* is a canonical program generating x. If x* is not random, then there exists a program y
generating x* which is substantially smaller than x*. Now, consider the program:
from y calculate x*, then from x* calculate x.
This program is only a few letters longer than y, and thus it should be
much shorter than x*, which was supposed to be canonical. We have reached a contradiction.
Borel  was the first author who systematically studied random
sequences. The complexity-theoretic approach was independently initiated
by Kolmogorov  and Chaitin . For more historical facts see
Chaitin  (A Life in Math), Uspensky , Li and Vitányi , and Calude .
2 Computers and Complexities
By N we denote the set of natural numbers. If S is a finite set, then #S denotes the cardinality
of S. We shall use the following functions: i) mod(m, n), the
remainder of the integral division of m by n;
ii) the integral part of a real r; iii)
log, the base Q logarithm.
Fix a finite alphabet A with Q >= 2 letters. By
A* we denote the free monoid generated by A (under
concatenation). The elements of A* are called strings;
λ is the empty string.
For x in A*, |x| is the
length of x.
For every x in A* and natural n, x^n denotes xx...x (n times), with x^0 = λ; A^n denotes the set of all strings of length n.
Every total ordering on A, say a_1 < a_2 < ... < a_Q, induces a quasi-lexicographical
order on A*: strings are ordered first by length and, for equal lengths, lexicographically.
We denote by string(n) the nth string in A*
according to the quasi-lexicographical order. The induced order on each set
A^n coincides with the lexicographical order.
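The enumeration string(n) can be sketched as bijective base-Q numeration; the convention that string(0) is the empty string is an assumption made here for concreteness, matching the quasi-lexicographical order just described.

```python
def string(n: int, alphabet: str = "01") -> str:
    # nth string in the quasi-lexicographical order over the given alphabet:
    # strings sorted first by length, then lexicographically.
    q = len(alphabet)
    digits = []
    while n > 0:
        n -= 1                       # bijective numeration: no "leading zero" issue
        digits.append(alphabet[n % q])
        n //= q
    return "".join(reversed(digits))

print([string(n) for n in range(8)])
# → ['', '0', '1', '00', '01', '10', '11', '000']
```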
Working with partial recursive (p.r.) functions from A* to A*
(sometimes called blank-endmarker
computers---see Chaitin )
we adopt the notations from Calude .
If a string x is in the domain
of such a function φ, then we write φ(x) < ∞.
A Chaitin computer
is a p.r. function C from A* to A* with a
prefix-free domain (i.e. there is no
pair of distinct strings u, v in the domain of C such that
u is a prefix of v). To a Chaitin computer C
one associates the self-delimiting complexity or Chaitin complexity
H_C(x) = min {|u| : C(u) = x}, with the convention min ∅ = ∞;
the operator min is taken according to the quasi-lexicographical order.
The basic result obtained by Chaitin  (called the Invariance Theorem)
states the existence of a Chaitin computer U
(called a universal Chaitin computer) such that
for every Chaitin computer C there exists a constant c (depending
upon U and C) such that H_U(x) <= H_C(x) + c, for all strings x.
(Exact values for all
additive constants discussed in this paper have been recently
computed by Chaitin ---using a Lisp model of computation.)
The complexity induced by a blank-endmarker
computer φ is defined by K_φ(x) = min {|u| : φ(u) = x}.
A similar Invariance Theorem holds true for K; see also Chaitin , Kolmogorov , Martin-Löf .
For this paper we fix a universal Chaitin computer U and denote by H
the induced complexity; we also fix a universal blank-endmarker computer
and denote by K the induced complexity.
Let f, g, h be three functions from A* to N. We write f <= g + O(h)
in case there exists a constant c > 0 such that f(x) <= g(x) + c h(x), for almost all strings
x. We write f = g + O(h) in case f <= g + O(h) and g <= f + O(h);
f = Θ(g) means that there exist two positive reals α and β such that
α g(x) <= f(x) <= β g(x), for almost all strings x.
3 Chaitin Random Strings
To motivate our approach we use
the analogy between 'tallness' and 'randomness'. To
judge whether or not a person is tall we proceed as follows. We choose
a unit of measure (say, the centimetre) and we evaluate the height. We get an absolute
value. Next, we establish 'a set of people of reference'.
For instance, if we have to judge how tall a little girl is, we fix
an age and we relate her height to the average height of girls of
that age. But, if we discuss the same question for a teenager, the
situation is completely different. It follows that the adjective tall is
relative. To correctly apply it we need both components: the exact
one (height) and the relative one (comparison within a fixed set). It
is fortunate that in English we have two words to express this:
height and tall.
For randomness we proceed in a similar way, trying
to capture, as best as
possible, the idea that a string is random if it cannot be
algorithmically compressed. First we use a measure of complexity for
strings (H); this represents the 'absolute component'. Secondly,
we define randomness 'relative to a set'---the relative component. In
our case we appreciate the degree of randomness of a string with
respect to the set of all strings, over a fixed
alphabet, having the same length.
(So, the 'context' is determined
by the length and the size of the alphabet.)
Of course, the success or failure of the approach depends upon the
measure of complexity we are adopting.
The use of self-delimiting programs instead
of blank-endmarker programs is motivated by Chaitin  as follows:
A key point that must be stipulated ... is that an input program must be
self-delimited: its total length (in bits) must be given within the program
(This seemingly minor point, which paralyzed progress in the field for
nearly a decade, is what entailed the redefinition of algorithmic
randomness.) Real programming languages are self-delimiting, because they
provide constructs for beginning and ending a program. Such constructs
allow a program to contain well-defined subprograms nested in them. Because
a self-delimiting program is built up by concatenating and nesting
self-delimiting subprograms, a program is syntactically complete only when
the last open subprogram is closed. In essence the beginning and ending
programs and subprograms function respectively like left and right
parentheses in mathematical expressions.
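A toy self-delimiting code makes the quotation concrete: each codeword announces its own length, so a stream of concatenated codewords can be parsed without any endmarker. The unary length header used below is an illustrative choice, not Chaitin's actual construction.

```python
def encode(bits: str) -> str:
    # Prefix-free codeword: the payload length in unary ("1"*k + "0"),
    # followed by the k payload bits. No codeword is a prefix of another.
    return "1" * len(bits) + "0" + bits

def decode(stream: str):
    # Read the unary length, then exactly that many payload bits;
    # return the decoded string and the untouched rest of the stream.
    k = stream.index("0")
    return stream[k + 1 : k + 1 + k], stream[k + 1 + k :]

# Concatenated codewords split unambiguously, like nested parentheses:
stream = encode("101") + encode("0")
first, rest = decode(stream)
second, _ = decode(rest)
print(first, second)  # → 101 0
```

The price of self-delimitation is a longer codeword; better headers (e.g. the length written in binary, itself self-delimited) shrink the overhead to about log of the payload length.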
We first recall the asymptotical behaviour of the complexity H.
Let f be an injective, recursive function from N to A*.
a) One has:
b) Consider a recursive function
(Actually, this part is still true in case g is a function semi-computable in the limit from above.)
a) It is plain that:
b) i) Assume first that .
If there exists a natural N such that
then we get a contradiction:
In view of the hypothesis in b) ii),
there exists a natural N such that . We can use the Kraft-Chaitin Theorem in order
to construct a Chaitin computer with the following property: For every there
exists with and . So,
there exists a natural c such that for all ,
i) Take . It is seen
so , for infinitely many .
ii) For , one has:
so . For , one has:
Chaitin complexity H can be characterized as a minimal
function, semi-computable in the limit from above,
that lies on the borderline between the convergence and the
divergence of the series
Put , where <,> : is a recursive bijection.
The following three formulae come from Chaitin :
There exists a natural c such that for all strings x one has
We are in the position to evaluate the complexity of the most complex
strings of a given length (first obtained in Chaitin ).
For every , one has:
Proof. In view of Lemma 3.4, for every string x of length n,
To get the relation
we shall prove that for every string x of length n,
Fix and define the Chaitin computer by
By Lemma ?? one has:
Accordingly, not all strings of length n have complexity less than , i.e.
The above discussion may be concluded with the following definition.
Let Σ be the function defined by Σ(n) = max {H(x) : |x| = n}.
In view of Theorem 3.5, Σ(n) = n + H(string(n)) + O(1). We define the random
strings of length n to be the strings with maximal self-delimiting complexity among the strings of length n, i.e. the strings x with H(x) = Σ(|x|).
A string x is Chaitin m-random
(m a natural number) if H(x) >= Σ(|x|) - m; x is Chaitin random if it is 0-random.
The above definition depends upon the fixed universal computer U; the generality of the approach comes from the Invariance Theorem.
Obviously, for every length n and for every m there exists a Chaitin m-random string
of length n.
Denote by RAND_m and RAND, respectively, the sets of Chaitin m-random strings and of Chaitin random strings.
It is worth noting that the property of Chaitin m-randomness is asymptotic.
Indeed, for a string x in RAND_m, the larger the difference between |x| and m, the more random x is. There is no sharp
dividing line between randomness and pattern, but it looks as though all
sufficiently long strings in RAND_m have a true random behaviour.
How many strings of length n have maximal complexity? The answer was given by Chaitin :
There exists a natural constant c (which depends upon the size Q of the underlying alphabet) such that
for all natural n.
How large is c? Out of the Q^n strings of length n, at most
(Q^(n-m) - 1)/(Q - 1) can be described by programs of
length less than n - m. The ratio between this number and Q^n
is less than Q^(-m), irrespective of the value of n.
For instance, in case Q = 2 and m = 20 this ratio is below one in a million; it says that
less than one in a million among the binary strings of any given length
is not Chaitin 20-random.
So, in a strictly quantitative sense, almost all strings are Chaitin random.
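The counting behind this estimate is elementary and can be checked directly; the sketch below fixes the binary case (Q = 2) for concreteness and works in exact integer arithmetic to avoid rounding.

```python
def count_describable(n: int, m: int) -> int:
    # Number of binary programs of length 0, 1, ..., n-m-1; at most this
    # many strings of length n can have a description shorter than n - m.
    return 2 ** (n - m) - 1

# The fraction of such strings among all 2**n is below 2**(-m),
# irrespective of n (compare via cross-multiplication, no floats):
for n in (50, 100, 1000):
    print(count_describable(n, 20) * 2 ** 20 < 2 ** n)   # True: under one in a million
```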
Problem. Denote by the sequence of constants appearing in Theorem 3.7. Is this sequence bounded?
The rest of this paper will be devoted to the analysis of the adequacy of
Chaitin's definition of randomness.
4 A Statistical Analysis of Random Strings
In this section we confront Chaitin's definition of randomness with the
probabilistic point of view. As we have already said, the present proposal
identifies randomness with incompressibility. In order to
justify this option we have to show that incompressible strings
possess the various properties of stochasticity
identified by classical probability theory. It is not so difficult,
although tedious, to check each such property separately.
However, we may proceed in a better way, due to the celebrated theory
developed by Martin-Löf: we demonstrate that the incompressible
strings do possess all conceivable effectively testable properties of
stochasticity. Here we include the known properties, but also the
possibly unknown ones. A general transfer principle will emerge, by
virtue of which various results from classical probability
theory carry over automatically to random strings.
The ideas of Martin-Löf's theory are rooted in statistical
practice. We are given an element x of some sample space (associated
to some distribution) and we want to test the hypothesis that x
is a typical outcome. Being typical means 'belonging to every reasonable majority'.
An element will be 'random' just in case it lies in the intersection of all such majorities.
A level of a statistical test is a set of strings which are found relatively
non-random (by the test). Each level is a subset of the previous level,
containing fewer and fewer strings, considered more and more
non-random. The number of strings decreases exponentially fast at each
level. In the binary case, a test contains at level 0 all possible
strings, at level one at most 1/2 of the strings, at level two at most
1/4 of all strings, and so on; accordingly, at level m the test
contains at most 2^(n-m) strings of length n.
We give now the formal definition.
An r.e. set is called a
Martin-Löf test if the following two properties hold true:
By definition, the empty set is a Martin-Löf test.
The set of strings x with (x, m) in V is called the
critical region at level m.
(Getting an outcome string x in the critical region at level m means the rejection of the randomness hypothesis
for x.) A string x is declared 'random' at level m by V
in case x does not belong to the critical region at level m.
The following example models a simple idea: if a binary string x
has too many ones (or zeros), then it cannot be random. The set
where is the number of occurrences of the letter in x, is a Martin-Löf test.
Proof. Clearly, V is r.e. and satisfies condition 1).
In view of the formula:
To every Martin-Löf test V we associate the critical level
A string x is declared random by a Martin-Löf test V if , for some , or, equivalently, if .
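The exponential thinning of critical regions (condition 2) of the definition) can be illustrated with a toy frequency test in the binary case. The threshold below, calibrated via the Chernoff-Hoeffding bound, is an illustrative choice, not the paper's exact test.

```python
import math

def rejected_count(n: int, m: int) -> int:
    # Strings of length n rejected at level m: those whose number of ones
    # deviates from n/2 by at least t(m) = sqrt(n * (m + 1) * ln 2 / 2).
    # The Chernoff-Hoeffding bound guarantees at most 2**(n - m) of them.
    t = math.sqrt(n * (m + 1) * math.log(2) / 2)
    return sum(math.comb(n, k) for k in range(n + 1) if abs(k - n / 2) >= t)

n = 16
for m in range(1, 6):
    print(m, rejected_count(n, m), 2 ** (n - m))   # count vs the allowed bound
```

At every level the count stays under 2**(n - m), so strings with a strongly unbalanced frequency of ones are pushed into ever thinner critical regions, exactly as the informal description above requires.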
A Martin-Löf test is called
universal in case for every
Martin-Löf test V,
there exists a constant c (depending upon and V) such that
It is easy to see that a Martin-Löf test is
universal iff for every Martin-Löf
test V there exists a constant c (depending upon and V) such that
Our aim is to prove that almost all Chaitin random strings pass all
conceivable effective tests of stochasticity, i.e. they are declared
random by every Martin-Löf test.
Fix m. Almost all Chaitin m-random strings will eventually be declared random by every Martin-Löf test.
Proof. If K is the complexity induced by a fixed universal
blank-endmarker computer, then by a result in Calude, Chitescu , the set
is a universal Martin-Löf test. Every section is r.e.,
so when a string belongs to it, we can eventually discover this.
Moreover, there are fewer than
strings of length n having this property. Thus,
if we are given and t, we only need to know digits to
pick out any particular string with this property, i.e.,
as the first x that we discover has this property, the second x that
we discover has this property, ..., the ith x that we discover has
this property, and . Accordingly, every such string has the property that
So, by Lemma 3.4:
since in general
Finally, take , i.e. ,
and fix an arbitrary Martin-Löf test V.
Take a natural T such that . It
follows that - otherwise we would have
which means .
By virtue of the universality of we get a constant c such that . The
constant is independent of x. So, for large enough random
strings x, one has and .
5 A Computational Analysis of Random Strings
We pursue the analysis of the relevance of Chaitin's definition
by confronting it with a natural, computational requirement:
there should be no algorithmic way to recognize which strings are random.
The following result, due to Chaitin , Theorem 4.1, was used
for deriving the first information-theoretic incompleteness results (see
also Chaitin ). We
use it now to discuss the uncomputability of .
Let be a recursive bijective function.
We can effectively compute a constant d such that if
for all .
Recall that a subset is immune iff it is infinite and
has no infinite r.e. subsets.
Let be a recursive function
such that the set
is infinite. Then, the set is immune.
Proof. Clearly, is r.e. and for
some recursive function .
and in view of Theorem 5.1 from
we deduce ,
is finite. This shows that itself is finite.
The set is immune for every .
Fix . It is plain that
The set is immune by
Corollary 5.2, and every infinite subset of an immune set is immune.
The above theorem can be expressed as:
There are two (classically equivalent) ways to represent the above statement:
Based on these statements we can formulate two constructive
versions of immunity:
It is worth noticing that
there exist constructively immune sets which are not
effectively immune, and vice versa. Moreover, if the
complement of an immune set is r.e., then that set is constructively immune.
Hence, we get:
Theorem 5.4. For every t, the set of Chaitin t-random strings is constructively immune.
Fix . The set is
constructively immune since its complement is r.e.
As it is an infinite subset of a constructively
immune set, it follows that it is itself constructively immune.
As a scholium of Corollary 5.2 one obtains:
If is a recursive function with (i.e. there
exists an increasing recursive function ,
such that if , then and
then S is effectively immune.
In the context of the proof of
and if , then
and the upper bound is a recursive function of e.
For all t, the set of Chaitin t-random strings is effectively immune.
Proof. An infinite subset of an effectively immune set is effectively immune.
6 Random Strings Are Borel Normal
Another important restriction pertaining to a good definition of
randomness concerns the frequency of letters and blocks of letters.
In a 'truly random' string
each letter has to appear with
approximately the same frequency, namely 1/Q.
Moreover, the same property should extend to 'reasonably long' substrings.
These ideas have been stated by Borel  for sequences. In
Chaitin  one shows that the Chaitin Omega Number, representing the
halting probability of a universal self-delimiting computer, is Borel
normal. Motivated by these facts we
formalize the Borel normality property for strings.
First, let N_i(x) be the number of occurrences of the letter a_i
in the string x, for 1 <= i <= Q. For a non-empty string x,
the ratio N_i(x)/|x| is
the relative frequency of the letter a_i in the string x.
For blocks of length m we proceed as follows. We consider the alphabet
B = A^m and construct the free monoid B*.
Every x in B* belongs to A*, but
the converse is false. For x in B* we denote by
|x|_m the length of x (according to B), which is exactly |x|/m.
For every y in B denote by N_y^m(x) the
number of occurrences of y in the string x of B*.
For example, take . It is easy to see that .
Note that the string appears
three times in x, but not at the right positions.
Not every string belongs to B*. However,
such a string can be 'approximated' by a string
in B*. We proceed as follows. For x in A* and 1 <= m <= |x|,
we consider the longest prefix of x whose length is divisible by m. Clearly, this prefix belongs to B*, and it coincides with x when |x| is divisible by m. We are now in a position to extend
the functions N_y^m from B* to A*: in case |x| is not
divisible by m, take N_y^m(x) to be the value of N_y^m on this prefix; similarly for the length |x|_m.
A non-empty string x is called
ε-limiting (ε a fixed positive real)
if for every y in B it satisfies the inequality:
A string is called Borel normal iff for every natural
for every .
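These definitions are easy to machine-check for small binary strings. In the sketch below the block function mirrors N_y^m on the longest prefix divisible by m, and the tolerance is left as a parameter, since the exact Borel-normality bound appears in the (omitted) inequality above.

```python
def block_frequencies(x: str, m: int) -> dict:
    # Cut the longest prefix of x of length divisible by m into
    # non-overlapping blocks of length m, and return each block's
    # relative frequency (the analogue of N_y^m(x) / |x|_m).
    blocks = [x[i:i + m] for i in range(0, m * (len(x) // m), m)]
    return {b: blocks.count(b) / len(blocks) for b in set(blocks)}

def is_limiting(x: str, m: int, eps: float, q: int = 2) -> bool:
    # Every block frequency must be within eps of the ideal value q**(-m).
    return all(abs(f - q ** -m) <= eps
               for f in block_frequencies(x, m).values())

x = "0110100110010110"            # first 16 bits of the Thue-Morse sequence
print(block_frequencies(x, 2))    # {'01': 0.5, '10': 0.5}, in some order
```

Note how context matters even here: the string above is perfectly balanced at block length 1, yet the blocks "00" and "11" never occur at the right positions, so it fails badly at block length 2.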
In Calude  one proves the following result:
For every natural we can effectively compute a natural
number (depending upon ) such that every
string of length greater than in is Borel normal.
Theorem 6.3 can be used to prove the following result (a weaker
version was obtained in Calude, Campeanu ):
For every natural and for every string we can find two strings such that .
Proof. Fix . Almost all strings are
Borel normal by Theorem 6.3, i.e. they satisfy the inequality
for every .
Take , to be the jth string of length and pick a
string such that . To prove the claim it is enough to show that
which is true because
7 Extensions of Random Strings
In this section we deal with the following problem: To what extent is it
possible to extend an arbitrary string to a Chaitin random or
non-random string?
Theorem 6.4 says that every string can be embedded
into a Chaitin random string. The next results will shed some
more light on this phenomenon.
For every natural and every string there exists a string
such that for every string
Fix and . Append to a string such that and ,
where is a natural number such
that . Put
Next define the Chaitin computer ,
where is a self-delimiting program for . It is seen
This shows that every extension of lies in .
For every natural we can find a string no extension
of which is in .
The above result shows that in Theorem 6.4 we need both
the prefix and the suffix , i.e. it is not possible to fix
and then find an appropriate . However, such a
possibility is regained---conforming
with the probabilistic intuition---as
far as we switch from with a fixed t to
with an appropriate, small .
We start first with a preliminary result:
Let be a partial order on which is recursive and unbounded.
Assume that satisfies the relation
Then, for every string we can find a string such that and
Assume, for the sake of contradiction, that there exists
such that for all one has .
and notice that
We have got a contradiction.
For every string and natural we can find a string such that:
i) , ii) for some natural (which is about , .
Let <= be the prefix order (x <= y in case y = xz,
for some string z). First it is seen that all
conditions in Lemma 7.3 are verified: the prefix order is recursive,
Now take and . Let be an arbitrary string
such that . By virtue of
the preceding argument we can
find a string
such that and
where is about
8 Chaitin's Model vs. Kolmogorov's Model
The original definition of random strings (see Kolmogorov )
is motivated by the fact that
accordingly, a string x is called Kolmogorov t-random if
stands for the set of Kolmogorov t-random strings.
(Kolmogorov used the blank-endmarker complexity of a string relative to its
length to measure the degree of randomness of a string 'within' the context
of all strings having the same length.)
All results proven in this paper concerning the adequacy
of Chaitin's definition of random strings
actually hold true for Kolmogorov's model
of random strings.
(See Martin-Löf , Solovay ,
Calude , Li and Vitányi  for a more
detailed discussion.) To the best of our knowledge there are no 'natural'
properties associated with randomness valid for one model and not valid for the other.
The underlying complexities K and H are 'asymptotically
equivalent'. Indeed, a crude relation between K and H is the following:
A more exact relation was obtained by Solovay . Put:
The following relations hold true:
In view of Theorem 8.1 it might be the case
that the set of Kolmogorov random strings
coincides with the set of Chaitin random strings. This is not the case!
Using the proof of Theorem 3.5 one can show that
every Chaitin random string is Kolmogorov random. However, the converse is not true, as
Solovay  has shown.
Actually, Solovay  conjectures that
there exists a constant such that for all sufficiently large , there
are at least strings of length , such that:
Such Kolmogorov random strings only 'look' random; in fact, they are not.
It is an open question to find 'natural' properties, related to the informal
notion of randomness, which hold true for Chaitin random strings but fail to be true for
Kolmogorov random strings. The Martin-Löf analysis, developed in Section 4, is not fine
enough for this problem.
9 The Role of the Underlying Alphabet
It seems that there is a widespread feeling that the binary case
encompasses the whole strength and generality of coding phenomena, at
least from an algorithmic point of view. The problem is the following:
Does there exist a binary asymptotically optimal coding of
all strings over an alphabet with more than two elements? Surprisingly,
the answer is negative, for
both complexities K and H. As our main
interest is directed to Chaitin complexity we shall outline the results for
this complexity measure.
Let q < Q be naturals, and fix two alphabets, A and B, having
Q and q elements, respectively. The lengths of x in A* and y in B* will be
denoted by |x|_Q and |y|_q, respectively. Fix a
universal Chaitin computer and denote by H its induced complexity.
Does there exist a Chaitin computer
which is universal for the class of all Chaitin computers acting on ?
The upshot is the following result (see Calude  and Calude, Jürgensen, Salomaa ):
There is no Chaitin computer
which is universal for the class of all Chaitin
computers acting on .
The proof is based on the fact that Chaitin complexity cannot be optimized better than linearly, i.e.
the Invariance Theorem is the best possible result in this direction:
For every fixed real number ,
there is no Chaitin
computer such that for all
Chaitin computers one has:
Let us study Chaitin complexity acting on alphabets of different sizes. We need some more notation.
For every natural Q >= 2, let A_Q be an alphabet with Q letters, and let us denote by string_Q(n) the
nth string in A_Q* (according to the quasi-lexicographical
order induced by the ordering of A_Q); let H_Q be the induced Chaitin complexity.
Let . Then, there exists a constant (which depends upon )
such that for all we have:
For every and all ,
So, no string is random over .
(This result follows also from Theorem 6.3.)
In the binary case we have only
two such strings, namely the constant strings 00...0 and 11...1,
which are obviously non-random. In the non-binary case we have
strings over the alphabet which are non-random because they do
not contain all letters. For instance, for one has
such strings, some of them (in fact, according to Theorem 3.7,
more than , where is a constant which depends
on the size of the alphabet but not on the length n) are random as
binary strings. So it is shown, once again, that randomness is
a contextual property.
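The count of strings missing at least one letter, alluded to above, follows from inclusion-exclusion; the function below is a sketch under the stated convention that the alphabet has q letters.

```python
from math import comb

def missing_some_letter(q: int, n: int) -> int:
    # Inclusion-exclusion over the nonempty sets of avoided letters:
    # strings of length n over a q-letter alphabet that do not use
    # every letter (hence cannot be random over that alphabet).
    return sum((-1) ** (j + 1) * comb(q, j) * (q - j) ** n
               for j in range(1, q + 1))

print(missing_some_letter(2, 8))   # → 2: only 00000000 and 11111111
print(missing_some_letter(3, 8))   # strings over a ternary alphabet missing a letter
```

For q = 3 the count grows like 3 * 2**n, so a fixed positive fraction of binary-looking strings over the ternary alphabet are non-random there while being random as binary strings.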
In view of the above discussion we conclude that Chaitin's model of
random strings satisfies many natural
requirements related to randomness, so it can be considered an
adequate model for finite random objects. It is a better model
than the original (Kolmogorov) proposal.
The distinction between the Chaitin and
Kolmogorov models is far from being elucidated; e.g. no property---naturally associated
with randomness---holding true for Chaitin random strings and failing to be
satisfied by Kolmogorov random strings is actually known.
The descriptional complexities in the binary and non-binary cases have
crucial differences, so it appears only natural to discuss the
complexity and randomness of finite objects in a not necessarily binary framework.
I wish to warmly thank Greg Chaitin for many stimulating discussions
on random strings (by email, in Auckland, New Zealand and Bar
Harbor, Maine, US) and for kindly permitting me to incorporate his
proofs for Theorems 4.5 and 6.4. I express my gratitude to Helmut
Jürgensen, Per Martin-Löf, Charles Rackoff, Arto Salomaa, and Bob
Solovay for their illuminating comments.
 É. Borel. Les probabilités dénombrables et leurs
applications arithmétiques, Rend. Circ. Mat. Palermo 27(1909), 247-271.
 É. Borel. Leçons sur la théorie des fonctions,
Gauthier-Villars, Paris, 2nd ed., 1914.
 C. Calude. Theories of Computational Complexity, North-Holland,
Amsterdam, New York, Oxford, Tokyo, 1988.
 C. Calude. Information and Randomness. An Algorithmic Perspective,
Springer-Verlag, Berlin, 1994. (Forewords by G. J. Chaitin and A. Salomaa)
 C. Calude. Borel normality and algorithmic randomness, in G. Rozenberg,
A. Salomaa (eds.). Developments in Language Theory, World Scientific,
Singapore, 1994, 113-129. (With a note by G. J. Chaitin)
 C. Calude, C. Cámpeanu. Note on the topological structure of random
strings, Theoret. Comput. Sci. 112(1993), 383-390.
 C. Calude, I. Chitescu. A class of universal P. Martin-Löf tests,
EATCS Bull. 25 (1984), 14-19.
 C. Calude, H. Jürgensen, A. Salomaa. Coding without Tears, manuscript,
February 1994, 15 pp.
 G. J. Chaitin. On the length of programs for computing finite binary
sequences, J. Assoc. Comput. Mach. 13(1966), 547-569. (Reprinted
in: Chaitin , 369-410.)
 G. J. Chaitin. On the length of programs for computing finite binary
sequences: statistical considerations, J. Assoc. Comput.
Mach. 16(1969), 145-159. (Reprinted in: Chaitin , 411-434.)
 G. J. Chaitin. Information-theoretic limitations of formal systems,
J.Assoc. Comput. Mach. 21(1974), 403-424. (Reprinted in: Chaitin , 291-333.)
 G. J. Chaitin. A theory of program size formally identical to
information theory, J. Assoc. Comput. Mach. 22(1975), 329-340.
(Reprinted in: Chaitin , 197-223.)
 G. J. Chaitin. Algorithmic information theory, IBM J. Res. Develop.
21(1977), 350-359,496. (Reprinted in: Chaitin , 83-108.)
 G. J. Chaitin. Algorithmic Information Theory,
Cambridge University Press, Cambridge, 1987. (Third printing, 1990.)
 G. J. Chaitin. Information, Randomness and Incompleteness,
Papers on Algorithmic Information Theory,
World Scientific, Singapore, New Jersey, Hong Kong, 1987. (2nd ed., 1990)
 G. J. Chaitin. Randomness in arithmetic, Scientific American 259(1988), 80-85. (Reprinted in: Chaitin , 14-19.)
 G. J. Chaitin. Information-Theoretic Incompleteness,
World Scientific, Singapore, New Jersey, Hong Kong, 1992.
 G. J. Chaitin. On the number of N-bit strings with
maximum complexity, Applied Mathematics and Computation 59(1993), 97-100.
 G. J. Chaitin. The Limits of Mathematics, IBM Watson Center,
Yorktown Heights, Draft July 23, 1994, 219 pp.
 P. Gács. Lecture Notes on Descriptional Complexity and Randomness,
Boston University, 1988, manuscript, 62 pp.
 P. S. Laplace. A Philosophical Essay on Probability Theories,
Dover, New York, 1951.
 A. N. Kolmogorov. Three approaches for defining the concept of
'information quantity', Problems Inform. Transmission 1(1965), 3-11.
 M. Li, P. M. Vitányi. An Introduction to Kolmogorov Complexity
and Its Applications, Springer-Verlag, Berlin, 1993.
 X. Li. Effective immune sets, program index sets and
effectively simple sets - generalizations and applications of the
recursion theorem, in C. -T. Chong, M. J. Wicks (eds.). South-East
Asian Conference on Logic, Elsevier, Amsterdam, 1983, 97-106.
 S. Marcus (ed.). Contextual Ambiguities in Natural & Artificial
Languages, Vol. 2, Ghent, Belgium, 1983.
 P. Martin-Löf. The definition of random sequences, Inform. and Control 9(1966), 602-619.
 A. Salomaa. Public-Key Cryptography, Springer Verlag, Berlin, 1990.
 R. M. Solovay. Draft of a paper (or series of papers) on Chaitin´s
work ... done for the most part during the period of Sept. - Dec. 1974, unpublished manuscript, IBM
Thomas J. Watson Research Center, Yorktown Heights, New York, May 1975, 215 pp.
 R. M. Solovay. Email to C. Calude, August 13, 1994.
 R. M. Smullyan. Effectively simple sets, Proc. Amer. Math. Soc. 15(1964), 893-895.
 V. A. Uspensky. Kolmogorov and mathematical logic, J. Symbolic Logic 57(1992), 385-412.