Monday, November 9, 2015

Time Complexity: Graph & Machine Learning Algorithms

Summary of the time complexity of some common optimization and machine learning algorithms

Overview
Time complexity, the asymptotic worst-case execution time as a function of the number of input elements, is a standard tool in computer science. However, you will be hard-pressed to find a comparison of machine learning algorithms based on their asymptotic execution time.
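
As a quick sanity check of such asymptotic estimates, one can time an implementation on growing input sizes and look at the trend. Here is a minimal Python sketch (an illustration added for this write-up, not part of the original summary) that times a naive cubic matrix multiplication; doubling n should multiply the duration by roughly 8.

    import random
    import time

    def matmul(a, b):
        # Naive O(n^3) multiplication of two n x n matrices stored as lists of lists.
        n = len(a)
        c = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for k in range(n):
                aik = a[i][k]
                for j in range(n):
                    c[i][j] += aik * b[k][j]
        return c

    for n in (32, 64, 128):
        a = [[random.random() for _ in range(n)] for _ in range(n)]
        b = [[random.random() for _ in range(n)] for _ in range(n)]
        start = time.perf_counter()
        matmul(a, b)
        # Doubling n should multiply the elapsed time by roughly 2^3 = 8.
        print(n, round(time.perf_counter() - start, 3))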

Summary
The following table lists the time complexity of some of the most common algorithms used in machine learning, including recursive computation for dynamic programming, linear algebra, optimization, and discriminative supervised training.

Algorithm | Time complexity | Description
Recursion with one-element reduction | N^2 | N: number of elements
Recursion with halving reduction | log(N) | N: number of elements
Recursion with halving reduction and processing | N | N: number of elements
Nearest neighbors search | M.log(k).N.log(N) | M: number of features; k: number of neighbors; N: number of observations
Matrix multiplication (m,n)x(n,d) | m.n.d | m, n, d: matrix dimensions
Matrix multiplication (n,n) | n^3 | n: matrix dimension
Matrix multiplication (n,n), Strassen | n^2.8 | n: matrix dimension
Partial eigenvalues extraction (n,n) | e.N^2 | e: number of eigenvalues; N: number of observations
Complete eigenvalues extraction (n,n) | N^3 | N: number of observations
Minimum spanning tree, Prim with linear queue | V^2 | V: number of vertices
Minimum spanning tree, Prim with binary heap | (E + V).log(V) | E: number of edges; V: number of vertices
Minimum spanning tree, Prim with Fibonacci heap | E + V.log(V) | E: number of edges; V: number of vertices
Shortest paths, Dijkstra with linear queue | V^2 | V: number of vertices
Shortest paths, Dijkstra with binary heap | (E + V).log(V) | E: number of edges; V: number of vertices
Shortest paths, Dijkstra with Fibonacci heap | E + V.log(V) | E: number of edges; V: number of vertices
Shortest paths on kNN graph, Dijkstra | (k + log(N)).N^2 | k: number of neighbors; N: number of observations
Shortest paths on kNN graph, Floyd-Warshall | N^3 | N: number of observations
Fast Fourier transform | N.log(N) | N: number of observations
Batched gradient descent | N.P.I | N: number of observations; P: number of parameters; I: number of iterations
Stochastic gradient descent | N.P.I | N: number of observations; P: number of parameters; I: number of epochs
Newton with Hessian computation | N^3.I | N: number of observations; I: number of iterations
Conjugate gradient | N.m.sqrt(k) | N: number of observations; m: number of non-zero entries; k: condition number of the matrix
L-BFGS | N.M.I | N: number of observations; M: number of stored gradients; I: number of iterations
K-means (*) | C.N.M.I | C: number of clusters; M: dimension; N: number of observations; I: number of iterations
Lasso regularization - LARS (*) | N.M.min(N,M) | M: dimension; N: number of observations
Hidden Markov model, forward-backward pass | N^2.M | N: number of states; M: number of observations
Multilayer perceptron (*) | n.M.P.N.e | n: number of input variables; M: number of hidden neurons; P: number of output values; N: number of observations; e: number of epochs
Support vector machine (*), using Newton | N^3 | N: number of observations
Support vector machine (*), using Cholesky | N^2 | N: number of observations
Support vector machine (*), sequential minimal optimization | N^2 | N: number of observations
Principal components analysis (*) | M^2.N + N^3 | N: number of observations; M: number of features
Expectation-maximization (*) | M^2.N | N: number of observations; M: number of variables
Laplacian eigenmaps | M.log(k).N.log(N) + M.N.k^2 + d.N^2 | N: number of observations; M: number of variables; k: number of neighbors; d: output dimension
Genetic algorithms | P.log(P).I.C | C: number of genes per chromosome; P: population size; I: number of iterations

(*): training phase
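
To make the graph entries above concrete, here is a minimal sketch of Dijkstra's shortest paths with a binary heap (Python's heapq), the variant listed with an (E + V).log(V) bound. The adjacency-list format is an assumption made for this example, not something specified in the original summary.

    import heapq

    def dijkstra(graph, source):
        # graph: dict mapping a vertex to a list of (neighbor, weight) pairs.
        # Every edge can push one entry onto the heap and every push/pop costs log(V),
        # which gives the (E + V).log(V) bound from the table.
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue  # stale heap entry, vertex already settled with a shorter path
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    graph = {"a": [("b", 1.0), ("c", 4.0)], "b": [("c", 2.0)], "c": []}
    print(dijkstra(graph, "a"))  # {'a': 0.0, 'b': 1.0, 'c': 3.0}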
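
The N.P.I entries for gradient descent simply reflect that each iteration touches all N observations and all P parameters. The sketch below, a hypothetical least-squares example using NumPy, shows where that per-iteration N.P cost comes from; stochastic gradient descent spends the same N.P work per epoch, but spread over N single-observation updates.

    import numpy as np

    def batched_gradient_descent(X, y, iterations=100, lr=0.1):
        # X: (N, P) design matrix, y: (N,) targets.
        # Each iteration evaluates a full gradient over N observations and
        # P parameters, so the total cost is O(N.P.I).
        n, p = X.shape
        w = np.zeros(p)
        for _ in range(iterations):
            grad = X.T @ (X @ w - y) / n   # O(N.P) work per iteration
            w -= lr * grad
        return w

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=200)
    print(batched_gradient_descent(X, y))  # close to [1.0, -2.0, 0.5]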

References
Introduction to Algorithms, T. Cormen, C. Leiserson, R. Rivest, MIT Press, 1990
Machine Learning: A Probabilistic Perspective, K. Murphy, MIT Press, 2012
Big O cheat sheet
