`
`
`Big O notation
`
`From Wikipedia, the free encyclopedia
`
`In mathematics, big O notation describes the limiting behavior of a
`function when the argument tends towards a particular value or
`infinity, usually in terms of simpler functions. It is a member of a
`larger family of notations that is called Landau notation,
`Bachmann–Landau notation (after Edmund Landau and Paul
`Bachmann),[1][2] or asymptotic notation. In computer science, big O
`notation is used to classify algorithms by how they respond (e.g., in
`their processing time or working space requirements) to changes in
`input size.[3] In analytic number theory, it is used to estimate the
`"error committed" while replacing the asymptotic size, or
`asymptotic mean size, of an arithmetical function, by the value, or
`mean value, it takes at a large finite argument. A famous example is
`the problem of estimating the remainder term in the prime number
`theorem.
`
`Big O notation characterizes functions according to their growth
`rates: different functions with the same growth rate may be
`represented using the same O notation. The letter O is used because
`the growth rate of a function is also referred to as order of the
`function. A description of a function in terms of big O notation
`usually only provides an upper bound on the growth rate of the
`function. Associated with big O notation are several related
notations, using the symbols o, Ω, ω, and Θ, to describe other kinds
`of bounds on asymptotic growth rates.
`
Example of Big O notation: f(x) ∈ O(g(x)) as there exists c > 0 (e.g.,
c = 1) and x₀ (e.g., x₀ = 5) such that f(x) < cg(x) whenever x > x₀.
`
`Big O notation is also used in many other fields to provide similar estimates.
`
`Contents
`
• 1 Formal definition
• 2 Example
• 3 Usage
  • 3.1 Infinite asymptotics
  • 3.2 Infinitesimal asymptotics
• 4 Properties
  • 4.1 Product
  • 4.2 Sum
  • 4.3 Multiplication by a constant
• 5 Multiple variables
• 6 Matters of notation
  • 6.1 Equals sign
  • 6.2 Other arithmetic operators
    • 6.2.1 Example
  • 6.3 Declaration of variables
  • 6.4 Multiple usages
• 7 Orders of common functions
• 8 Related asymptotic notations
  • 8.1 Little-o notation
  • 8.2 Big Omega notation
    • 8.2.1 The Hardy–Littlewood definition
    • 8.2.2 Simple examples
    • 8.2.3 The Knuth definition
  • 8.3 Family of Bachmann–Landau notations
  • 8.4 Use in computer science
`
`
  • 8.5 Extensions to the Bachmann–Landau notations
• 9 Generalizations and related usages
• 10 History (Bachmann–Landau, Hardy, and Vinogradov notations)
• 11 See also
• 12 References and Notes
• 13 Further reading
• 14 External links
`
`Formal definition
`
Let f and g be two functions defined on some subset of the real numbers. One writes

    f(x) = O(g(x))  as x → ∞

if and only if there is a positive constant M such that for all sufficiently large values of x, the absolute value of f(x) is at most M
multiplied by the absolute value of g(x). That is, f(x) = O(g(x)) if and only if there exists a positive real number M and a real number x₀
such that

    |f(x)| ≤ M |g(x)|  for all x ≥ x₀.
`
`In many contexts, the assumption that we are interested in the growth rate as the variable x goes to infinity is left unstated, and one
`writes more simply that f(x) = O(g(x)).
`
The notation can also be used to describe the behavior of f near some real number a (often, a = 0): we say

    f(x) = O(g(x))  as x → a

if and only if there exist positive numbers δ and M such that

    |f(x)| ≤ M |g(x)|  when 0 < |x − a| < δ.

If g(x) is non-zero for values of x sufficiently close to a, both of these definitions can be unified using the limit superior:

    f(x) = O(g(x))  as x → a

if and only if

    limsup (x → a) |f(x) / g(x)| < ∞.
`
Additionally, the notation O(g(x)) is also used to denote the set of all functions f(x) that satisfy the relation f(x) = O(g(x)). In this case we
write

    f(x) ∈ O(g(x)).
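The definition lends itself to a quick numerical sanity check. The following Python sketch is illustrative only (the helper name witnesses_big_o is ours, not standard): it tests whether a candidate pair M, x₀ witnesses the inequality on sampled points. Sampling can support, but never prove, the bound.

    # Sketch: test whether M and x0 witness f(x) = O(g(x)) on sampled points,
    # i.e. whether |f(x)| <= M*|g(x)| holds for all sampled x >= x0.
    def witnesses_big_o(f, g, M, x0, xs=range(1, 10_000)):
        return all(abs(f(x)) <= M * abs(g(x)) for x in xs if x >= x0)

    # 2x + 10 = O(x): M = 12 and x0 = 1 are suitable witnesses.
    print(witnesses_big_o(lambda x: 2*x + 10, lambda x: x, M=12, x0=1))  # True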
`
`Example
`
`In typical usage, the formal definition of O notation is not used directly; rather, the O notation for a function f is derived by the
`following simplification rules:
`
• If f(x) is a sum of several terms, the one with the largest growth rate is kept, and all others omitted.
• If f(x) is a product of several factors, any constants (terms in the product that do not depend on x) are omitted.
`
For example, let f(x) = 6x⁴ − 2x³ + 5, and suppose we wish to simplify this function, using O notation, to describe its growth rate as x
approaches infinity. This function is the sum of three terms: 6x⁴, −2x³, and 5. Of these three terms, the one with the highest growth rate
is the one with the largest exponent as a function of x, namely 6x⁴. Now one may apply the second rule: 6x⁴ is a product of 6 and x⁴ in
which the first factor does not depend on x. Omitting this factor results in the simplified form x⁴. Thus, we say that f(x) is a "big-oh" of
(x⁴). Mathematically, we can write f(x) = O(x⁴). One may confirm this calculation using the formal definition: let f(x) = 6x⁴ − 2x³ + 5
and g(x) = x⁴. Applying the formal definition from above, the statement that f(x) = O(x⁴) is equivalent to its expansion,

    |f(x)| ≤ M x⁴

for some suitable choice of x₀ and M and for all x > x₀. To prove this, let x₀ = 1 and M = 13. Then, for all x > x₀:

    |6x⁴ − 2x³ + 5| ≤ 6x⁴ + 2x³ + 5 ≤ 6x⁴ + 2x⁴ + 5x⁴ = 13x⁴

so

    |f(x)| ≤ 13x⁴.
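As a quick illustration (a spot-check, not a proof), the chosen witnesses M = 13 and x₀ = 1 can be tested numerically in Python:

    # Sketch: for sampled x > 1, check |6x^4 - 2x^3 + 5| <= 13x^4.
    def f(x):
        return 6*x**4 - 2*x**3 + 5

    assert all(abs(f(x)) <= 13 * x**4 for x in [1.01, 2, 5, 10, 100, 1e6])
    print("bound |f(x)| <= 13x^4 holds at all sampled x > 1")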
`
`Usage
`
`Big O notation has two main areas of application. In mathematics, it is commonly used to describe how closely a finite series
`approximates a given function, especially in the case of a truncated Taylor series or asymptotic expansion. In computer science, it is
`useful in the analysis of algorithms. In both applications, the function g(x) appearing within the O(...) is typically chosen to be as simple
`as possible, omitting constant factors and lower order terms. There are two formally close, but noticeably different, usages of this
`notation: infinite asymptotics and infinitesimal asymptotics. This distinction is only in application and not in principle, however—the
`formal definition for the "big O" is the same for both cases, only with different limits for the function argument.
`
`Infinite asymptotics
`
`Big O notation is useful when analyzing algorithms for efficiency. For example, the time (or the number of steps) it takes to complete a
problem of size n might be found to be T(n) = 4n² − 2n + 2. As n grows large, the n² term will come to dominate, so that all other terms
can be neglected; for instance, when n = 500, the term 4n² is 1000 times as large as the 2n term. Ignoring the latter would have
negligible effect on the expression's value for most purposes. Further, the coefficients become irrelevant if we compare to any other
order of expression, such as an expression containing a term n³ or n⁴. Even if T(n) = 1,000,000n², if U(n) = n³, the latter will always
exceed the former once n grows larger than 1,000,000 (T(1,000,000) = 1,000,000³ = U(1,000,000)). Additionally, the number of steps
`depends on the details of the machine model on which the algorithm runs, but different types of machines typically vary by only a
constant factor in the number of steps needed to execute an algorithm. So the big O notation captures what remains: we write either

    T(n) = O(n²)

or

    T(n) ∈ O(n²)

and say that the algorithm has order of n² time complexity. Note that "=" is not meant to express "is equal to" in its normal
mathematical sense, but rather a more colloquial "is", so the second expression is technically accurate (see the "Equals sign" discussion
below) while the first is considered by some as an abuse of notation.[4]
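A short Python sketch (illustrative only) makes the two observations concrete: the 2n term is negligible next to 4n² at n = 500, and n³ overtakes even 1,000,000n² beyond n = 1,000,000:

    n = 500
    print(4*n**2 / (2*n))               # 1000.0: the 4n^2 term is 1000x the 2n term

    T = lambda n: 1_000_000 * n**2
    U = lambda n: n**3
    print(T(10**6) == U(10**6))         # True: the two cross exactly at n = 1,000,000
    print(U(10**6 + 1) > T(10**6 + 1))  # True: beyond that point, n^3 always exceeds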
`
`Infinitesimal asymptotics
`
`Big O can also be used to describe the error term in an approximation to a mathematical function. The most significant terms are
`written explicitly, and then the least-significant terms are summarized in a single big O term. Consider, for example, the exponential
series and two expressions of it that are valid when x is small:

    eˣ = 1 + x + x²/2! + x³/3! + ⋯    for all x
    eˣ = 1 + x + x²/2 + O(x³)         as x → 0
    eˣ = 1 + x + O(x²)                as x → 0
`
`
The second expression (the one with O(x³)) means the absolute value of the error eˣ − (1 + x + x²/2) is smaller than some constant
times |x³| when x is close enough to 0.
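A brief Python check (illustrative, not part of the formal statement) shows the error ratio staying bounded as x → 0; the ratio in fact approaches 1/3! ≈ 0.1667:

    import math

    for x in [0.1, 0.01, 0.001]:
        err = math.exp(x) - (1 + x + x**2/2)
        print(x, err / x**3)   # stays bounded (tends to ~0.1667), so |err| <= C|x|^3 near 0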
`Properties
`
If the function f can be written as a finite sum of other functions, then the fastest growing one determines the order of f(n). For example,

    f(n) = 9 log n + 5(log n)⁴ + 3n² + 2n³ = O(n³)    as n → ∞.
In particular, if a function may be bounded by a polynomial in n, then as n tends to infinity, one may disregard lower-order terms of the
polynomial. Another thing to notice is that the sets O(nᶜ) and O(cⁿ) are very different. If c is greater than one, then the latter grows much
faster. A function that grows faster than nᶜ for any c is called superpolynomial. One that grows more slowly than any exponential
function of the form cⁿ is called subexponential. An algorithm can require time that is both superpolynomial and subexponential;
examples of this include the fastest known algorithms for integer factorization and the function n^(log n).

We may ignore any powers of n inside of the logarithms. The set O(log n) is exactly the same as O(log(nᶜ)). The logarithms differ only
by a constant factor (since log(nᶜ) = c log n) and thus the big O notation ignores that. Similarly, logs with different constant bases are
equivalent. On the other hand, exponentials with different bases are not of the same order. For example, 2ⁿ and 3ⁿ are not of the same
order.
`
Changing units may or may not affect the order of the resulting algorithm. Changing units is equivalent to multiplying the appropriate
variable by a constant wherever it appears. For example, if an algorithm runs in the order of n², replacing n by cn means the algorithm
runs in the order of c²n², and the big O notation ignores the constant c². This can be written as c²n² = O(n²). If, however, an algorithm
runs in the order of 2ⁿ, replacing n with cn gives 2^(cn) = (2ᶜ)ⁿ. This is not equivalent to 2ⁿ in general. Changing variables may also affect
the order of the resulting algorithm. For example, if an algorithm's run time is O(n) when measured in terms of the number n of digits of
an input number x, then its run time is O(log x) when measured as a function of the input number x itself, because n = O(log x).
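Both facts are easy to observe numerically; a minimal Python sketch:

    import math

    n = 10**6
    print(math.log2(n) / math.log10(n))  # always log(10)/log(2) ~ 3.3219, independent of n,
                                         # so logs to different bases differ by a constant factor
    print((3**20) / (2**20))             # (3/2)^n grows without bound: 2^n and 3^n differ in order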
`
Product

If f₁(x) = O(g₁(x)) and f₂(x) = O(g₂(x)), then f₁(x) f₂(x) = O(g₁(x) g₂(x)). It also follows that f(x) · O(g(x)) = O(f(x) g(x)).

Sum

If f₁(x) = O(g₁(x)) and f₂(x) = O(g₂(x)), then f₁(x) + f₂(x) = O(|g₁(x)| + |g₂(x)|). This implies that if f₁(x) = O(g(x)) and
f₂(x) = O(g(x)), then f₁(x) + f₂(x) ∈ O(g(x)), which means that O(g(x)) is a convex cone.

If f and g are positive functions, f(x) + O(g(x)) = O(f(x) + g(x)).

Multiplication by a constant

Let k be a constant. Then:

    O(|k| g) = O(g) if k is nonzero;
    f = O(g) implies kf = O(g).

Multiple variables
`
Big O (and little o, Ω, etc.) can also be used with multiple variables. To define Big O formally for multiple variables, suppose f and g
are two functions defined on some subset of ℝⁿ. We say

    f(x) = O(g(x))  as x → ∞

if and only if[5]

    there exist constants C and M such that |f(x)| ≤ C |g(x)| for all x with xᵢ ≥ M for some i.

Equivalently, the condition that xᵢ ≥ M for some i can be replaced with the condition that ‖x‖∞ ≥ M, where ‖x‖∞ denotes the
Chebyshev distance. For example, the statement

    f(n, m) = n² + m³ + O(n + m)  as n, m → ∞

asserts that there exist constants C and M such that

    |g(n, m)| ≤ C (n + m)  whenever ‖(n, m)‖∞ ≥ M,

where g(n, m) is defined by

    f(n, m) = n² + m³ + g(n, m).

Note that this definition allows all of the coordinates of x to increase to infinity. In particular, the statement

    f(n, m) = O(nᵐ)  as n, m → ∞

(i.e., ∃C ∃M ∀n ∀m …) is quite different from

    ∀m: f(n, m) = O(nᵐ)  as n → ∞

(i.e., ∀m ∃C ∃M ∀n …).
`
`This is not the only generalization of big O to multivariate functions, and in practice, there is some inconsistency in the choice of
`definition.[6]
`Matters of notation
`
`Equals sign
`
The statement "f(x) is O(g(x))" as defined above is usually written as f(x) = O(g(x)). Some consider this to be an abuse of notation, since
the use of the equals sign could be misleading as it suggests a symmetry that this statement does not have. As de Bruijn says, O(x) = O(x²)
is true but O(x²) = O(x) is not.[7] Knuth describes such statements as "one-way equalities", since if the sides could be reversed, "we
could deduce ridiculous things like n = n² from the identities n = O(n²) and n² = O(n²)."[8] For these reasons, it would be more precise to
use set notation and write f(x) ∈ O(g(x)), thinking of O(g(x)) as the class of all functions h(x) such that |h(x)| ≤ C|g(x)| for some constant
C.[8] However, the use of the equals sign is customary. Knuth pointed out that "mathematicians customarily use the = sign as they use
the word 'is' in English: Aristotle is a man, but a man isn't necessarily Aristotle."[9]
`
`Other arithmetic operators
`
Big O notation can also be used in conjunction with other arithmetic operators in more complicated equations. For example,
h(x) + O(f(x)) denotes the collection of functions having the growth of h(x) plus a part whose growth is limited to that of f(x). Thus,

    g(x) = h(x) + O(f(x))

expresses the same as

    g(x) − h(x) = O(f(x)).
`
`Example
`
`
Suppose an algorithm is being developed to operate on a set of n elements. Its developers are interested in finding a function T(n) that
will express how long the algorithm will take to run (in some arbitrary measurement of time) in terms of the number of elements in the
input set. The algorithm works by first calling a subroutine to sort the elements in the set and then performing its own operations. The sort
has a known time complexity of O(n²), and after the subroutine runs the algorithm must take an additional 55n³ + 2n + 10 steps before it
terminates. Thus the overall time complexity of the algorithm can be expressed as T(n) = 55n³ + O(n²). Here the terms 2n + 10 are
subsumed within the faster-growing O(n²). Again, this usage disregards some of the formal meaning of the "=" symbol, but it does
allow one to use the big O notation as a kind of convenient placeholder.
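A quick numeric check (a sketch, with illustrative witnesses C = 12 and n₀ = 1) confirms that the discarded 2n + 10 really is O(n²):

    # Sketch: 2n + 10 <= 12*n^2 for all sampled n >= 1, so 2n + 10 = O(n^2).
    assert all(2*n + 10 <= 12 * n**2 for n in range(1, 10_000))
    print("2n + 10 <= 12 n^2 for all sampled n >= 1")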
`
`Declaration of variables
`
Another feature of the notation, although less exceptional, is that function arguments may need to be inferred from the context when
several variables are involved. The following two right-hand side big O notations have dramatically different meanings:

    f(m) = O(mⁿ),
    g(n) = O(mⁿ).

The first case states that f(m) exhibits polynomial growth, while the second, assuming m > 1, states that g(n) exhibits exponential
growth. To avoid confusion, some authors write the intended limit explicitly, using the notation

    g(n) = O(mⁿ)  (as n → ∞)

rather than the less explicit
`
`Multiple usages
`
In more complicated usage, O(...) can appear in different places in an equation, even several times on each side. For example, the
following are true for n → ∞:

    (n + 1)² = n² + O(n)
    (n + O(n^(1/2))) · (n + O(log n))² = n³ + O(n^(5/2))
    n^O(1) = O(eⁿ)

The meaning of such statements is as follows: for any functions which satisfy each O(...) on the left side, there are some functions
satisfying each O(...) on the right side, such that substituting all these functions into the equation makes the two sides equal. For
example, the third equation above means: "For any function f(n) = O(1), there is some function g(n) = O(eⁿ) such that n^f(n) = g(n)." In
terms of the "set notation" above, the meaning is that the class of functions represented by the left side is a subset of the class of
functions represented by the right side. In this use the "=" is a formal symbol that, unlike the usual use of "=", is not a symmetric relation.
Thus for example n^O(1) = O(eⁿ) does not imply the false statement O(eⁿ) = n^O(1).
`Orders of common functions
`
`Further information: Time complexity § Table of common time complexities
`
`Here is a list of classes of functions that are commonly encountered when analyzing the running time of an algorithm. In each case, c is
`a constant and n increases without bound. The slower-growing functions are generally listed first.
`
`
Notation    Name    Example

O(1)    constant
    Determining if a binary number is even or odd; calculating (−1)ⁿ; using a constant-size lookup table

O(log log n)    double logarithmic
    Number of comparisons spent finding an item using interpolation search in a sorted array of uniformly distributed values

O(log n)    logarithmic
    Finding an item in a sorted array with a binary search or a balanced search tree, as well as all operations in a binomial heap

O((log n)ᶜ), c > 1    polylogarithmic
    Matrix chain ordering can be solved in polylogarithmic time on a Parallel Random Access Machine

O(nᶜ), 0 < c < 1    fractional power
    Searching in a kd-tree

O(n)    linear
    Finding an item in an unsorted list or a malformed tree (worst case) or in an unsorted array; adding two n-bit integers by ripple carry

O(n log* n)    n log-star n
    Performing triangulation of a simple polygon using Seidel's algorithm, or the union–find algorithm

O(n log n) = O(log n!)    linearithmic, loglinear, or quasilinear
    Performing a fast Fourier transform; heapsort, quicksort (best and average case), or merge sort

O(n²)    quadratic
    Multiplying two n-digit numbers by a simple algorithm; bubble sort (worst case or naive implementation), Shell sort, quicksort
    (worst case), selection sort or insertion sort

O(nᶜ)    polynomial or algebraic
    Tree-adjoining grammar parsing; maximum matching for bipartite graphs

Lₙ[α, c] = e^((c + o(1)) (ln n)^α (ln ln n)^(1−α)), 0 < α < 1    L-notation or sub-exponential
    Factoring a number using the quadratic sieve or number field sieve

O(cⁿ), c > 1    exponential
    Finding the (exact) solution to the travelling salesman problem using dynamic programming; determining if two logical statements
    are equivalent using brute-force search

O(n!)    factorial
    Solving the travelling salesman problem via brute-force search; generating all unrestricted permutations of a poset; finding the
    determinant with expansion by minors; enumerating all partitions of a set

The statement f(n) = O(n!) is sometimes weakened to f(n) = O(nⁿ) to derive simpler formulas for asymptotic complexity. For any k > 0
and c > 0, O(nᶜ (log n)ᵏ) is a subset of O(n^(c+ε)) for any ε > 0, so may be considered as a polynomial with some bigger order.
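To make the ordering concrete, a small Python sketch (illustrative only) tabulates a few of these classes side by side:

    import math

    print(f"{'n':>6} {'log n':>8} {'n log n':>12} {'n^2':>12} {'2^n':>16}")
    for n in [10, 20, 40]:
        print(f"{n:>6} {math.log2(n):>8.2f} {n*math.log2(n):>12.1f} {n**2:>12} {2**n:>16}")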
Related asymptotic notations
`
`Big O is the most commonly used asymptotic notation for comparing functions, although in many cases Big O may be replaced with
Big Theta Θ for asymptotically tighter bounds. Here, we define some related notations in terms of Big O, progressing up to the family
`of Bachmann–Landau notations to which Big O notation belongs.
`
`Little-o notation
`
`"Little o" redirects here. For the baseball player, see Omar Vizquel.
`
The informal assertion "f(x) is little-o of g(x)" is formally written f(x) = o(g(x)), or in set notation f(x) ∈ o(g(x)). Intuitively, it means
that g(x) grows much faster than f(x), or similarly, that the growth of f(x) is nothing compared to that of g(x). It assumes that f and g are
both functions of one variable. Formally, f(n) = o(g(n)) (or f(n) ∈ o(g(n))) as n → ∞ means that for every positive constant ε there
exists a constant N such that

    |f(n)| ≤ ε |g(n)|  for all n ≥ N.[8]
`
`
Note the difference between the earlier formal definition for the big-O notation, and the present definition of little-o: while the former
has to be true for at least one constant M, the latter must hold for every positive constant ε, however small.[4] In this way little-o notation
makes a stronger statement than the corresponding big-O notation: every function that is little-o of g is also big-O of g, but not every
function that is big-O of g is also little-o of g (for instance, g itself is not, unless it is identically zero near ∞).
`
If g(x) is nonzero, or at least becomes nonzero beyond a certain point, the relation f(x) = o(g(x)) is equivalent to

    lim (x → ∞) f(x) / g(x) = 0.

For example,

• 2x = o(x²)
• 2x² ≠ o(x²)
• 1/x = o(1)
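The limit characterization above can be watched numerically; in this Python sketch the first and third ratios tend to 0 while the second does not:

    for x in [10.0, 1e3, 1e6]:
        print(2*x / x**2, (2*x**2) / x**2, (1/x) / 1)
        # 1st ratio -> 0 (2x = o(x^2)); 2nd stays 2 (2x^2 is O(x^2) but not o(x^2));
        # 3rd -> 0 (1/x = o(1))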
`
Little-o notation is common in mathematics but rarer in computer science. In computer science the variable (and function value) is most
often a natural number. In mathematics, the variable and function values are often real numbers. The following properties (expressed in
the more recent, set-theoretical notation) can be useful:

• o(f) + o(f) ⊆ o(f)
• o(f) o(g) ⊆ o(fg)
• o(o(f)) ⊆ o(f)
• o(f) ⊂ O(f) (and thus the above properties apply with most combinations of o and O).
`
As with big O notation, the statement "f(x) is o(g(x))" is usually written as f(x) = o(g(x)), which some consider an abuse of
notation.

Big Omega notation
`
There are two very widespread and incompatible definitions of the statement

    f(x) = Ω(g(x))  (x → a),

where a is some real number, ∞, or −∞, where f and g are real functions defined in a neighbourhood of a, and where g is positive in this
neighbourhood.
`
`The first one (chronologically) is used in analytic number theory, and the other one in computational complexity theory. When the two
`subjects meet, this situation is bound to generate confusion.
`
`The Hardy–Littlewood definition
`
In 1914 G.H. Hardy and J.E. Littlewood introduced the new symbol Ω,[10] which is defined as follows:

    f(x) = Ω(g(x))  (x → ∞)  means that  limsup (x → ∞) |f(x)| / g(x) > 0.

Thus f(x) = Ω(g(x)) is the negation of f(x) = o(g(x)).

In 1918 the same authors introduced the two new symbols Ω_R and Ω_L,[11] thus defined:

    f(x) = Ω_R(g(x))  (x → ∞)  means that  limsup (x → ∞) f(x) / g(x) > 0;
    f(x) = Ω_L(g(x))  (x → ∞)  means that  liminf (x → ∞) f(x) / g(x) < 0.

Hence f(x) = Ω_R(g(x)) is the negation of f(x) < o(g(x)), and f(x) = Ω_L(g(x)) the negation of f(x) > −o(g(x)).
`
`
Contrary to a later assertion of D.E. Knuth,[12] Edmund Landau did use these three symbols, with the same meanings, in 1924.[13]

These Hardy–Littlewood symbols are prototypes, which after Landau were never used again exactly thus: Ω_R became Ω₊, and Ω_L
became Ω₋.

These three symbols Ω, Ω₊, Ω₋, as well as f(x) = Ω±(g(x)) (meaning that f(x) = Ω₊(g(x)) and f(x) = Ω₋(g(x)) are
both satisfied), are now currently used in analytic number theory.[14]
`
`Simple examples
`
We have

    sin x = Ω(1)  (x → ∞),

and more precisely

    sin x = Ω±(1)  (x → ∞).

We have

    sin x + 1 = Ω(1)  (x → ∞),

and more precisely

    sin x + 1 = Ω₊(1)  (x → ∞);

however

    sin x + 1 ≠ Ω₋(1)  (x → ∞).
`
`The Knuth definition
`
In 1976 D.E. Knuth published a paper to justify his use of the Ω-symbol to describe a stronger property. Knuth wrote: "For all the
applications I have seen so far in computer science, a stronger requirement […] is much more appropriate". He defined

    f(x) = Ω(g(x))  ⟺  g(x) = O(f(x)),

with the comment: "Although I have changed Hardy and Littlewood's definition of Ω, I feel justified in doing so because their
definition is by no means in wide use, and because there are other ways to say what they want to say in the comparatively rare cases
when their definition applies".[12] However, the Hardy–Littlewood definition had been used for at least 25 years.[15]
`
`
`Family of Bachmann–Landau notations
`
Notation: f(n) = O(g(n))
Name: Big O; Big Oh; Big Omicron[12]
Intuition: f is bounded above by g (up to constant factor) asymptotically.
Informal definition, for sufficiently large n: |f(n)| ≤ k·g(n) for some positive k.
Formal definition: ∃k > 0 ∃n₀ ∀n > n₀: |f(n)| ≤ k·g(n)

Notation: f(n) = Ω(g(n))
Name: Big Omega
Intuition (two definitions): Number theory: f is not dominated by g asymptotically. Complexity theory: f is bounded below by g
asymptotically.
Informal definition: Number theory: f(n) ≥ k·g(n) for infinitely many values of n and for some positive k. Complexity theory:
f(n) ≥ k·g(n) for some positive k and all sufficiently large n.
Formal definition: Number theory: ∃k > 0 ∀n₀ ∃n > n₀: f(n) ≥ k·g(n). Complexity theory: ∃k > 0 ∃n₀ ∀n > n₀: f(n) ≥ k·g(n)

Notation: f(n) = Θ(g(n))
Name: Big Theta
Intuition: f is bounded both above and below by g asymptotically.
Informal definition, for sufficiently large n: k₁·g(n) ≤ f(n) ≤ k₂·g(n) for some positive k₁, k₂.
Formal definition: ∃k₁ > 0 ∃k₂ > 0 ∃n₀ ∀n > n₀: k₁·g(n) ≤ f(n) ≤ k₂·g(n)

Notation: f(n) = o(g(n))
Name: Small O; Small Oh
Intuition: f is dominated by g asymptotically.
Informal definition, for sufficiently large n: |f(n)| ≤ k·g(n) for every fixed positive number k.
Formal definition: ∀k > 0 ∃n₀ ∀n > n₀: |f(n)| ≤ k·g(n)

Notation: f(n) = ω(g(n))
Name: Small Omega
Intuition: f dominates g asymptotically.
Informal definition, for sufficiently large n: |f(n)| ≥ k·g(n) for every fixed positive number k.
Formal definition: ∀k > 0 ∃n₀ ∀n > n₀: |f(n)| ≥ k·g(n)

Notation: f(n) ~ g(n)
Name: On the order of
Intuition: f is equal to g asymptotically.
Informal definition, for sufficiently large n: f(n)/g(n) → 1.
Formal definition: ∀ε > 0 ∃n₀ ∀n > n₀: |f(n)/g(n) − 1| < ε
`
Aside from the Big O notation, the Big Theta Θ and Big Omega Ω notations are the two most often used in computer science; the small
omega ω notation is occasionally used in computer science.

Aside from the Big O notation, the small o, Big Omega Ω and Ω± notations are the three most often used in number theory; the small
omega ω notation is never used in number theory.
`
`Use in computer science
`
`For more details on this topic, see Analysis of algorithms.
`
Informally, especially in computer science, the Big O notation often is permitted to be somewhat abused to describe an asymptotic tight
bound where using Big Theta Θ notation might be more factually appropriate in a given context. For example, when considering a
function T(n) = 73n³ + 22n² + 58, all of the following are generally acceptable, but tighter bounds (i.e., numbers 2 and 3 below) are
usually strongly preferred over looser bounds (i.e., number 1 below).

1. T(n) = O(n¹⁰⁰)
`
`
2. T(n) = O(n³)
3. T(n) = Θ(n³)

The equivalent English statements are respectively:

1. T(n) grows asymptotically no faster than n¹⁰⁰
2. T(n) grows asymptotically no faster than n³
3. T(n) grows asymptotically as fast as n³.
`
So while all three statements are true, progressively more information is contained in each. In some fields, however, the Big O notation
(number 2 in the lists above) would be used more commonly than the Big Theta notation (number 3 in the lists above). For
`example, if T(n) represents the running time of a newly developed algorithm for input size n, the inventors and users of the algorithm
`might be more inclined to put an upper asymptotic bound on how long it will take to run without making an explicit statement about the
`lower asymptotic bound.
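A short Python sketch (with illustrative constants k₁ = 73, k₂ = 154 and sampled n) supports the Θ(n³) claim for this T(n):

    # Sketch: witnesses for T(n) = Theta(n^3) where T(n) = 73n^3 + 22n^2 + 58.
    T = lambda n: 73*n**3 + 22*n**2 + 58
    assert all(73*n**3 <= T(n) <= 154*n**3 for n in range(1, 5_000))
    print("73 n^3 <= T(n) <= 154 n^3 for all sampled n >= 1")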
`
`Extensions to the Bachmann–Landau notations
`
Another notation sometimes used in computer science is Õ (read soft-O): f(n) = Õ(g(n)) is shorthand for f(n) = O(g(n) logᵏ g(n)) for
some k. Essentially, it is Big O notation, ignoring logarithmic factors because the growth-rate effects of some other super-logarithmic
function indicate a growth-rate explosion for large-sized input parameters that is more important to predicting bad run-time
performance than the finer-point effects contributed by the logarithmic-growth factor(s). This notation is often used to obviate the
"nitpicking" within growth-rates that are stated as too tightly bounded for the matters at hand (since logᵏ n is always o(n^ε) for any
constant k and any ε > 0).
`
Also, the L notation, defined as

    Lₙ[α, c] = e^((c + o(1)) (ln n)^α (ln ln n)^(1−α)),

is convenient for functions that are between polynomial and exponential.
`Generalizations and related usages
`
The generalization to functions taking values in any normed vector space is straightforward (replacing absolute values by norms),
where f and g need not take their values in the same space. A generalization to functions g taking values in any topological group is also
possible. The "limiting process" x → x₀ can also be generalized by introducing an arbitrary filter base, i.e. to directed nets f and g. The o
notation can be used to define derivatives and differentiability in quite general spaces, and also (asymptotical) equivalence of functions,

    f ~ g  ⟺  (f − g) ∈ o(g),

which is an equivalence relation and a more restrictive notion than the relationship "f is Θ(g)" from above. (It reduces to lim f / g = 1 if f
and g are positive real valued functions.) For example, 2x is Θ(x), but 2x − x is not o(x).
`History (Bachmann–Landau, Hardy, and Vinogradov notations)
`
`The symbol O was first introduced by number theorist Paul Bachmann in 1894, in the second volume of his book Analytische
`Zahlentheorie ("analytic number theory"), the first volume of which (not yet containing big O notation) was published in 1892.[16] The
`number theorist Edmund Landau adopted it, and was thus inspired to introduce in 1909 the notation o;[17] hence both are now called
`Landau symbols. These notations were used in applied mathematics during the 1950s for asymptotic analysis.[18] The big O was
`popularized in computer science by Donald Knuth, who re-introduced the related Omega and Theta notations.[12] Knuth also noted that
the Omega notation had been introduced by Hardy and Littlewood[10] under a different meaning "≠ o" (i.e. "is not an o of"), and
proposed the above definition. Hardy and Littlewood's original definition (which was also used in one paper by Landau[13]) is still used
in number theory (where Knuth's definition is never used). In fact, Landau also used in 1924, in the paper just mentioned, the symbols
Ω_R ("right") and Ω_L ("left"), which were introduced in 1918 by Hardy and Littlewood,[11] and which were precursors for the modern
symbols Ω₊ ("is not smaller than a small o of") and Ω₋ ("is not larger than a small o of"). Thus the Omega symbols (with their original
meanings) are sometimes also referred to as "Landau symbols".
`
`Also, Landau never used the Big Theta and small omega symbols.
`
`
Hardy's symbols were (in terms of the modern O notation)

    f ≼ g ⟺ f = O(g)   and   f ≺ g ⟺ f = o(g)

(Hardy however never defined or used the notation ≺≺, nor ≪, as it has been sometimes reported). It should also be noted that Hardy
introduces the symbols ≼ and ≺ (as well as some other symbols) in his 1910 tract "Orders of Infinity", and makes use of them only in three
papers (1910–1913). In his nearly 400 remaining papers and books he consistently uses the Landau symbols O and o.
`
Hardy's notation is not used anymore. On the other hand, in the 1930s,[19] the Russian number theorist Ivan Matveyevich Vinogradov
introduced his notation ≪, which has been increasingly used in number theory instead of the O notation. We have

    f ≪ g ⟺ f = O(g),

and frequently both notations are used in the same paper.
`
`The big-O originally stands for "order of" ("Ordnung", Bachmann 1894), and is thus a roman letter. Neither Bachmann nor Landau ever
`call it "Omicron". The symbol was much later on (1976) viewed by Knuth as a capital omicron,[12] probably in reference to his
`definition of the symbol Omega. The digit zero should not be used.
`See also
`
• Asymptotic expansion: Approximation of functions generalizing Taylor's formula
• Asymptotically optimal: A phrase frequently used to describe an algorithm that has an upper bound asymptotically within a
  constant of a lower bound for the problem
• Big O in probability notation: O_p, o_p
• Limit superior and limit inferior: An explanation of some of the limit notation used in this article
• Nachbin's theorem: A precise method of bounding complex analytic functions so that the domain of convergence of integral
  transforms can be stated
`References and Notes
`
`1. Homayoon Beigi (9 December 2011). Fundamentals of Speaker Recognition (http://books.google.com/books?
`id=qIMDvu3gJCQC&pg=PA777). Springer. p. 777. ISBN 978-0-387-77592-0.
`2. Mark H. Holmes (5 December 2012). Introduction to Perturbation Methods (http://books.google.com/books?id=EX5iNglbqB4C&pg=PA4).
`Springer. pp. 4–. ISBN 978-1-4614-5477-9.
`3. Mohr, Austin. "Quantum Computing in Complexity Theory and Theory of
`Computation" (http://www.austinmohr.com/Work_files/complexity.pdf) (PDF). p. 2. Retrieved 7 June 2014.
`4. Thomas H. Cormen et al., 2001, Introduction to Algorithms, Second Edition (http://highered.mcgraw-hill.com/sites/0070131511/)
`5. Cormen, Thomas; Leiserson, Charles; Rivest, Ronald; Stein, Clifford (2009). Introduction to Algorithms (Third ed.). MIT. p. 53.
`6. Howell, Rodney. "On Asymptotic Notation with Multiple Variables" (http://people.cis.ksu.edu/~rhowell/asymptotic.pdf) (PDF). Retrieved
`2015-04-23.
`7. N. G. de Bruijn (1958). Asymptotic Methods in Analysis (http://books.google.com/?id=_tnwmvHmVwMC&pg=PA5&vq=%
`22The+trouble+is%22). Amsterdam: North-Holland. pp. 5–7. ISBN 978-0-486-64221-5.
8. Ronald Graham, Donald Knuth, and Oren Patashnik (1994). Concrete Mathematics (http://books.google.com/?
id=pntQAAAAMAAJ&dq=editions:ISBN) (2 ed.). Reading, Massachusetts: Addison–Wesley. p. 446. ISBN 978-0-201-55802-9.
`9. Donald Knuth (June–July 1998). "Teach Calculus with Big O" (http://www.ams.org/notices/199806/commentary.pdf) (PDF). Notices of the
`American Mathematical Society 45 (6): 687. (Unabridged version (http://www-cs-staff.stanford.edu/~knuth/ocalc.tex))
`10. G. H. Hardy and J. E. Littlewood, "Some problems of Diophantine approximation", Acta Mathematica 37 (1914), p. 225
11. G. H. Hardy and J. E. Littlewood, "Contribution to the theory of the Riemann zeta-function and the theory of the distribution of primes", Acta
Mathematica, vol. 41, 1918.
`12. Donald Knuth. "Big Omicron and big Omega and big Theta (http://www.phil.uu.nl/datastructuren/10-11/knuth_big_omicron.pdf)