Skip to the content.

Fast PageRank Implementation in Python

I needed a fast PageRank for Wikisim project. It had to be fast enough to run real time on relatively large graphs. NetworkX was the obvious library to use, however, it needed back and forth translation from my graph representation (which was the pretty standard csr matrix), to its internal graph data structure. These translations were slowing down the process.

I implemented two versions of the algorithm in Python, both inspired by the sparse fast solutions given in Cleve Moler’s book, Experiments with MATLAB. The power method is much faster with enough precision for our task.

Personalized PageRank

I modified the algorithm a little bit to be able to calculate personalized PageRank as well.

Both implementations (exact solution and power method) are much faster than their correspondent methods in NetworkX. The power method is also faster than the iGraph native implementation, which is also an eigenvector based solution. Benchmarking is done on a ml.t3.2xlarge SageMaker instance.

What is the major drawback of NetworkX PageRank?

I gave up using NetworkX for one simple reason: I had to calculate PageRank several times, and my internal representation of a graph was a simple sparse matrix. Every time I wanted to calculate PageRank I had to translate it to the graph representation of NetworkX, which was slow. My benchmarking shows that NetworkX has a pretty fast implementation of PageRank ( networkx.pagerank_numpy and ‘networkx.pagerank_scipy), but translating from its own graph data structure to a csr matrix before doing the actual calculations is exactly what exactly slows down the whole algorithm.

Note: I didn’t count the time spent on nx.from_scipy_sparse_matrix (converting a csr matrix before passing it to NetworkX PageRank) in my benchmarking, But I could! Because that was another bottleneck for me, and for many other cases that one has a csr adjacency matrix.

Python Implementation

The python package is hosted at https://github.com/asajadi/fast-pagerank and you can find the installation guide in the README.md file. You also can find this jupyter notebook in the notebook directory.

Appendix

What is Google PageRank Algorithm?

PageRank is another link analysis algorithm primarily used to rank search engine results. It is defined as a process in which starting from a random node, a random walker moves to a random neighbour with probability or jumps to a random vertex with the probability . The PageRank values are the limiting probabilities of finding a walker on each node. In the original PageRank, the jump can be to any node with a uniform probability, however later in Personalized PageRank, this can be any custom probability distribution over the nodes.

How Google PageRank is Calculated? [1, 2]

Let be the adjacency matrix ( is the weight of the edge from node to node ) and be the teleporting probability, that is is the probability of jumping to node . Probability of being at node at time can be determined by two factors:

  1. Sum over the out-neighbors of of the probability that the walk was at at time t, times the probability it moved from to in time .
  2. Probability of teleporting from somewhere else in the graph to .

Formally:

where is the out-degree of node . To give a matrix form, we define to be the diagonal matrix with the out-degree of each node in on the diagonal. Then the PageRank vector, initialized with , can be obtained from the following recursion:

There is a serious problem that we need to take care: is the inverse of , which for a diagonal matrix it will be simply inverting the elements on the diagonal. This will break if there are nodes with no out neighbors, a.k.a, dangling nodes. What happens when you hit a page with no out link? You only have one option and that is to jump to a random page.

To simulate this behavior we alter by adding an edge from every dangling node to every other node with a weight of . In other words, we create by replacing each all zero row by . Formally, if we define to be the vector of row-wise sum of the elements of , that is , then:

We need to re-define . In our new definition of , we ignore nodes with no out-neighbors (or in other words, replace by ). Similar to , we define to be the diagonal matrix of the out-degrees of . So we can rewrite the recursion as:

Now , the stationary probabilities (i.e, when ) can be calculated by either of the following approaches:

1. Linear System Solving

We can solve Eq. and get:

And use a linear system solver to calculate .

2. Power-Method

Basically, reiterating the Eq. until it converges.

How Fast Google PageRank Is Calculated? [3]

To speed up, we need to take advantage of sparse matrix calculations. The only problem with the current formulation is that has a lower sparsity than the original . However, we can move around pieces of the equation a little bit to skip forming this matrix. We know that:

For the first term, multiplying by this diagonal matrix scales each column and and are different only in the elements whose correspondent columns were all zero in , so we can safely replace with . Also because the non zero columns of are all , which add up to , and therefore their correspondent element on will be . Therefore,

and using the above equation we can rewrite Eq. and get

This recursion has three multiplications, and the last one is a rather expensive one ( is a matrix, therefore the whole multiplication will be ).

Being a normalized vector, we know that . We can multiply the last term of Eq. with and factor out :

Let be . Notice that is a matrix with as its columns, and substituting the definition of , the matrix will be:

If we let be:

then

So by replacing () in Eq. with , we’ll get:

How does this help to improve the calculations? We’ll see:

1. Solving a Linear System

Similar to before, we can solve Eq. and get:

Being able to re-parenthesize, is just a number, so we can ignore it and renormalize at the end, and solve:

We almost have the same linear equation system that we had before, except for one big improvement, we replaced the less-sparse with .

2. Power Method

We can apply one last smart modification to Eq. : if we change the parenthesizing of the last multiplication (remember the famous dynamic programming algorithm?), and also define , we will have:

Therefore, the complexity decreased to , and the whole recursion will be . The rate of convergence is another thing, which we ignore here, and depends on the value of the second eigenvalue () of the modified transition matrix (), which is defined as:

References

[1] Daniel A. Spielman, Graphs and Networks Lecture Notes, Lecture 11: Cutting Graphs, Personal PageRank and Spilling Paint, 2013.

[2] Daniel A. Spielman, Spectral Graph Theory Lecture Notes, Lecture 10: Random Walks on Graphs, 2018

[3] Cleve Moler, Experiments with MATLAB, Chapter 7: Google PageRank

Implementation

%%writefile ../fast_pagerank/fast_pagerank.py
"""Two fast implementations of PageRank:
    An exact solution using a sparse linear system solver,
    and an a power method approximation.
    Both solutions are taking full advantage of sparse matrix calculations.

    [Reference]:
    Cleve Moler. 2011. Experiments with MATLAB (Electronic ed.).
    MathWorks, Inc.
"""
# uncomment
from __future__ import division

import scipy as sp
import scipy.sparse as sprs
import scipy.spatial
import scipy.sparse.linalg

__author__ = "Armin Sajadi"
__copyright__ = "Copyright 2015, The Wikisim Project"
__email__ = "asajadi@gmail.com"


def pagerank(A, p=0.85,
             personalize=None, reverse=False):
    """ Calculates PageRank given a csr graph

    Inputs:
    -------

    G: a csr graph.
    p: damping factor
    personlize: if not None, should be an array with the size of the nodes
                containing probability distributions.
                It will be normalized automatically
    reverse: If true, returns the reversed-PageRank

    outputs
    -------

    PageRank Scores for the nodes

    """
    # In Moler's algorithm, $$A_{ij}$$ represents the existences of an edge
    # from node $$j$$ to $$i$$, while we have assumed the opposite!
    if reverse:
        A = A.T

    n, _ = A.shape
    r = sp.asarray(A.sum(axis=1)).reshape(-1)

    k = r.nonzero()[0]

    D_1 = sprs.csr_matrix((1 / r[k], (k, k)), shape=(n, n))

    if personalize is None:
        personalize = sp.ones(n)
    personalize = personalize.reshape(n, 1)
    s = (personalize / personalize.sum()) * n

    I = sprs.eye(n)
    x = sprs.linalg.spsolve((I - p * A.T @ D_1), s)

    x = x / x.sum()
    return x


def pagerank_power(A, p=0.85, max_iter=100,
                   tol=1e-06, personalize=None, reverse=False):
    """ Calculates PageRank given a csr graph

    Inputs:
    -------
    A: a csr graph.
    p: damping factor
    max_iter: maximum number of iterations
    personlize: if not None, should be an array with the size of the nodes
                containing probability distributions.
                It will be normalized automatically.
    reverse: If true, returns the reversed-PageRank

    Returns:
    --------
    PageRank Scores for the nodes

    """
    # In Moler's algorithm, $$G_{ij}$$ represents the existences of an edge
    # from node $$j$$ to $$i$$, while we have assumed the opposite!
    if reverse:
        A = A.T

    n, _ = A.shape
    r = sp.asarray(A.sum(axis=1)).reshape(-1)

    k = r.nonzero()[0]

    D_1 = sprs.csr_matrix((1 / r[k], (k, k)), shape=(n, n))

    if personalize is None:
        personalize = sp.ones(n)
    personalize = personalize.reshape(n, 1)
    s = (personalize / personalize.sum()) * n

    z_T = (((1 - p) * (r != 0) + (r == 0)) / n)[sp.newaxis, :]
    W = p * A.T @ D_1

    x = s
    oldx = sp.zeros((n, 1))

    iteration = 0

    while sp.linalg.norm(x - oldx) > tol:
        oldx = x
        x = W @ x + s @ (z_T @ x)
        iteration += 1
        if iteration >= max_iter:
            break
    x = x / sum(x)

    return x.reshape(-1)
Overwriting ../fast_pagerank/fast_pagerank.py

Testing the algorithm

%%writefile ../test/fast_pagerank_test.py
import os
import sys
import scipy as sp
import scipy.sparse as sparse
from numpy.testing import assert_allclose
import unittest

sys.path.insert(
    0,
    os.path.abspath(
        os.path.join(
            os.path.dirname(__file__),
            '..')))

from fast_pagerank import pagerank
from fast_pagerank import pagerank_power


class TestMolerPageRank(unittest.TestCase):
    def setUp(self):
        # ---G1---
        n1 = 5
        edges1 = sp.array([[0, 1],
                           [1, 2],
                           [2, 1],
                           [2, 3],
                           [2, 4],
                           [3, 0],
                           [3, 2],
                           [4, 0],
                           [4, 2],
                           [4, 3]])
        weights1 = [0.4923,
                    0.0999,
                    0.2132,
                    0.0178,
                    0.5694,
                    0.0406,
                    0.2047,
                    0.8610,
                    0.3849,
                    0.4829]

        self.p1 = 0.83
        self.personalize1 = sp.array([0.6005, 0.1221, 0.2542, 0.4778, 0.4275])
        self.G1 = sparse.csr_matrix(
            (weights1, (edges1[:, 0], edges1[:, 1])), shape=(n1, n1))
        self.pr1 = sp.array([0.1592, 0.2114, 0.3085, 0.1, 0.2208])

        # ---G2---
        n2 = 10
        edges2 = sp.array([[2, 4],
                           [2, 5],
                           [4, 5],
                           [5, 3],
                           [5, 4],
                           [5, 9],
                           [6, 1],
                           [6, 2],
                           [9, 2],
                           [9, 4]])
        weights2 = [0.4565,
                    0.2861,
                    0.5730,
                    0.0025,
                    0.4829,
                    0.3866,
                    0.3041,
                    0.3407,
                    0.2653,
                    0.8079]
        self.G2 = sparse.csr_matrix(
            (weights2, (edges2[:, 0], edges2[:, 1])), shape=(n2, n2))
        self.personalize2 = sp.array([0.8887, 0.6491, 0.7843, 0.7103, 0.7428,
                                      0.6632, 0.7351, 0.3006, 0.8722, 0.1652])
        self.p2 = 0.92
        self.pr2 = sp.array([0.0234, 0.0255, 0.0629, 0.0196, 0.3303,
                             0.3436, 0.0194, 0.0079, 0.023, 0.1445])

        # ---G3---
        n3 = 5
        edges3 = sp.array([[2, 4]])
        weights3 = [0.5441]
        self.G3 = sparse.csr_matrix(
            (weights3, (edges3[:, 0], edges3[:, 1])), shape=(n3, n3))

        self.personalize3 = sp.array([0.0884, 0.2797, 0.3093, 0.5533, 0.985])
        self.p3 = 0.81
        self.pr3 = sp.array([0.0358, 0.1134, 0.1254, 0.2244, 0.501])

        # ---G4---
        n4 = 5
        edges4_rows = []
        edges4_cols = []
        weights4 = []
        self.G4 = sparse.csr_matrix(
            (weights4, (edges4_rows, edges4_cols)), shape=(n4, n4))

        self.personalize4 = sp.array([0.2534, 0.8945, 0.9562, 0.056, 0.9439])
        self.p4 = 0.70
        self.pr4 = sp.array([0.0816, 0.2882, 0.3081, 0.018, 0.3041])

        # ---G5---
        n5 = 0
        edges5_rows = []
        edges5_cols = []
        weights5 = []
        self.G5 = sparse.csr_matrix(
            (weights5, (edges5_rows, edges5_cols)), shape=(n5, n5))

        self.personalize5 = sp.array([])
        self.p5 = 0.70
        self.pr5 = sp.array([])

    def test_pagerank_1(self):
        calculated_pagerank = pagerank(self.G1, p=self.p1,
                                       personalize=self.personalize1)
        assert_allclose(calculated_pagerank, self.pr1, rtol=0, atol=1e-04)

    def test_pagerank_2(self):

        calculated_pagerank = pagerank(self.G2, p=self.p2,
                                       personalize=self.personalize2)
        assert_allclose(calculated_pagerank, self.pr2, rtol=0, atol=1e-04)

    def test_single_edge(self):
        calculated_pagerank = pagerank(self.G3, p=self.p3,
                                       personalize=self.personalize3)
        assert_allclose(calculated_pagerank, self.pr3, rtol=0, atol=1e-04)

    def test_zero_edge(self):
        calculated_pagerank = pagerank(self.G4, p=self.p4,
                                       personalize=self.personalize4)
        assert_allclose(calculated_pagerank, self.pr4, rtol=0, atol=1e-04)

    def test_empty_graph(self):
        calculated_pagerank = pagerank(self.G5, p=self.p5,
                                       personalize=self.personalize5)
        self.assertEqual(calculated_pagerank.size, 0)

    def test_power_pagerank_1(self):
        calculated_pagerank = pagerank_power(self.G1, p=self.p1,
                                             personalize=self.personalize1)
        assert_allclose(calculated_pagerank, self.pr1, rtol=0, atol=1e-04)

    def test_power_pagerank_2(self):

        calculated_pagerank = pagerank_power(self.G2, p=self.p2,
                                             personalize=self.personalize2)
        assert_allclose(calculated_pagerank, self.pr2, rtol=0, atol=1e-04)

    def test_power_single_edge(self):
        calculated_pagerank = pagerank_power(self.G3, p=self.p3,
                                             personalize=self.personalize3)
        assert_allclose(calculated_pagerank, self.pr3, rtol=0, atol=1e-04)

    def test_power_zero_edge(self):
        calculated_pagerank = pagerank_power(self.G4, p=self.p4,
                                             personalize=self.personalize4)
        assert_allclose(calculated_pagerank, self.pr4, rtol=0, atol=1e-04)

    def test_power_empty_graph(self):
        calculated_pagerank = pagerank_power(self.G5, p=self.p5,
                                             personalize=self.personalize5)
        self.assertEqual(calculated_pagerank.size, 0)


#             assert_array_almost_equal(Ynx,  Yml, decimal = 5)
if __name__ == '__main__':
    unittest.main()
Overwriting ../test/fast_pagerank_test.py
!python  ../test/fast_pagerank_test.py
/Users/arminsajadi/anaconda3/lib/python3.7/site-packages/numpy/matrixlib/defmatrix.py:71: PendingDeprecationWarning: the matrix subclass is not the recommended way to represent matrices or deal with linear algebra (see https://docs.scipy.org/doc/numpy/user/numpy-for-matlab-users.html). Please adjust your code to use regular ndarray.
  return matrix(data, dtype=dtype, copy=False)
..........
----------------------------------------------------------------------
Ran 10 tests in 0.020s

OK

Benchmarking

To avoid the clutter, we only visualize the fastest method from each implementation, that is:

  • networkx.pagerank_scipy
  • Latest implementation of iGraph.personalized_pagerank (PRPACK)
  • Our pagerank_power
''' Calcualate PageRank on several random graphs.
'''
import scipy as sp
import pandas as pd
import timeit
import os
import sys
import random
import igraph
import networkx as nx
sys.path.insert(0, '..')
from fast_pagerank.pagerank import pagerank
from fast_pagerank.pagerank import pagerank_power

# def print_and_flush(args):

#     sys.stdout.flush()
def get_random_graph(
        min_size=20,
        max_size=2000,
        min_density=0.1,
        max_density=0.5):
    ''' Creates a random graph and a teleport vector and output them
        in different formats for different algorithms

    Inputs
    ------

    min_size and max_size: The size of the graph will be a random number
        in the range of (min_size, max_size)
    min_sparsity and max_sparsity: The sparcity of the graph
        will be a random number in the range of (min_sparsity, max_sparsity)

    Returns
    -------

    nxG: A random Graph for NetworkX
    A: The equivallent csr Adjacency matrix, for our PageRank
    iG: The equivallent iGraph
    personalize_vector: Personalization probabily vector
    personalize_dict: Personalization probabily vector,
                    in the form of a dictionary for NetworkX

    '''
    G_size = random.randint(min_size, max_size)
    p = random.uniform(min_density, max_density)

    A = sp.sparse.random(G_size, G_size, p, format='csr')
    nxG = nx.from_scipy_sparse_matrix(A, create_using=nx.DiGraph())

    iG = igraph.Graph(list(nxG.edges()), directed=True)
    iG.es['weight'] = A.data

    personalize_vector = sp.random.random(G_size)
    personalize_dict = dict(enumerate(personalize_vector.reshape(-1)))
    return A, nxG, iG, personalize_vector, personalize_dict


n = 5
number_of_graphs = 50

node_size_vector = sp.zeros(number_of_graphs)
edge_size_vector = sp.zeros(number_of_graphs)
# netx_pagerank_times = sp.zeros(number_of_graphs)
netx_pagerank_times_numpy = sp.zeros(number_of_graphs)
netx_pagerank_times_scipy = sp.zeros(number_of_graphs)
ig_pagerank_times = sp.zeros(number_of_graphs)
pagerank_times = sp.zeros(number_of_graphs)
pagerank_times_power = sp.zeros(number_of_graphs)

damping_factor = 0.85
tol = 1e-3


for i in range(number_of_graphs):
    A, nxG, iG, personalize_vector, personalize_dict = get_random_graph()
    node_size_vector[i] = A.shape[0]
    edge_size_vector[i] = A.count_nonzero()
    print ("Graph %d: Nodes: %d, Edges: %d ..." %(i, node_size_vector[i], edge_size_vector[i]))
    sys.stdout.flush()

#     networkx.pagerank commented out, because it is too slow

#     netx_pagerank_times[i] = timeit.timeit(
#         lambda: nx.pagerank(nxG, alpha=damping_factor, tol=tol),
#         number=n) / n

    netx_pagerank_times_numpy[i] = timeit.timeit(
        lambda: nx.pagerank_numpy(nxG, alpha=damping_factor),
        number=n) / n

    netx_pagerank_times_scipy[i] = timeit.timeit(
        lambda: nx.pagerank_scipy(nxG, alpha=damping_factor, tol=tol),
        number=n) / n

    #iGraph, only "prpack", which is their latest version.
    ig_pagerank_times[i] = timeit.timeit(
        lambda: iG.personalized_pagerank(directed=True,
                                         damping=damping_factor,
                                         weights=iG.es['weight'],
                                         implementation="prpack"),
        number=n) / n

#     My implementations

    pagerank_times[i] = timeit.timeit(
        lambda: pagerank(A, p=damping_factor),
        number=n) / n
    pagerank_times_power[i] = timeit.timeit(
        lambda: pagerank_power(A, p=damping_factor, tol=tol),
        number=n) / n


argsort = edge_size_vector.argsort()

edge_size_vector_sorted = edge_size_vector[argsort]
node_size_vector_sorted = node_size_vector[argsort]

# netx_pagerank_times_sorted = netx_pagerank_times[argsort]
netx_pagerank_times_numpy_sorted = netx_pagerank_times_numpy[argsort]
netx_pagerank_times_scipy_sorted = netx_pagerank_times_scipy[argsort]

ig_pagerank_times_sorted = ig_pagerank_times[argsort]

pagerank_times_sorted = pagerank_times[argsort]
pagerank_times_power_sorted = pagerank_times_power[argsort]

comparison_table = pd.DataFrame(list(zip(node_size_vector_sorted,
                                         edge_size_vector_sorted,
#                                          netx_pagerank_times_sorted,
                                         netx_pagerank_times_numpy_sorted,
                                         netx_pagerank_times_scipy_sorted,
                                         ig_pagerank_times_sorted,
                                         pagerank_times_sorted,
                                         pagerank_times_power_sorted)),
                                columns=['Nodes', 'Edges',
#                                          'NetX',
                                         'NetX (numpy)',
                                         'NetX (scipy)',
                                         'iGraph',
                                         '(fast) pagerank',
                                         '(fast) pagerank_power']).\
                    astype({'Nodes': 'int32', 'Edges': 'int32'})
comparison_table.to_csv('pagerank_methods_comparison.csv')
print("Done")

Plotting

import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline

comparison_table = pd.read_csv('pagerank_methods_comparison.csv', index_col=0)
display(comparison_table)

plt.ioff()
fig=plt.figure(num=None, figsize=(10, 10), dpi=80, facecolor='w', edgecolor='k')


# plt.plot(comparison_table['Edges'], comparison_table['NetX'],
#          'o-', ms=8, lw=2, alpha=0.7, color='cyan',
#          label='networkx.PageRank')
plt.plot(comparison_table['Edges'], comparison_table['NetX (numpy)'],
         'v-', ms=8, lw=2, alpha=0.7, color='magenta',
         label='networkx.PageRank_numpy')

plt.plot(comparison_table['Edges'], comparison_table['NetX (scipy)'],
         'P-', ms=8, lw=2, alpha=0.7, color='blue',
         label='networkx.PageRank_scipy')

plt.plot(comparison_table['Edges'], comparison_table['iGraph'],
         'x-', ms=8, lw=2, alpha=0.7, color='black',
         label='iGraph_PageRank_ARPACK')

plt.plot(comparison_table['Edges'], comparison_table['(fast) pagerank'],
         '*-', ms=8, lw=2, alpha=0.7, color='red',
         label='fast_pagerank.pagerank')

plt.plot(comparison_table['Edges'], comparison_table['(fast) pagerank_power'],
         '^-', ms=8, lw=2, alpha=0.7, color='green',
         label='fast_pagerank.pagerank_power')


plt.xlabel('Number of the edges')
plt.ylabel('Time (Seconds)')


plt.tight_layout()
plt.legend(loc=2)
plt.savefig('pagerank_methods_comparison.png')
plt.show()

Nodes Edges NetX (numpy) NetX (scipy) iGraph (fast) pagerank (fast) pagerank_power
0 29 294 0.001380 0.002786 0.000064 0.001197 0.000987
1 50 297 0.002001 0.002855 0.000116 0.001359 0.001350
2 57 511 0.003088 0.004122 0.000207 0.001339 0.001040
3 56 1268 0.005002 0.008682 0.000302 0.001361 0.000923
4 153 4239 0.025173 0.026826 0.001618 0.002090 0.001170
5 161 7313 0.034990 0.045815 0.002600 0.002153 0.001118
6 194 13112 0.059986 0.079006 0.004251 0.002485 0.001197
7 235 23995 0.100380 0.144351 0.007696 0.003906 0.001367
8 399 24620 0.152208 0.147999 0.008201 0.009624 0.001632
9 331 28117 0.138913 0.167950 0.009146 0.006118 0.001569
10 400 31555 0.170775 0.189163 0.010441 0.009061 0.001658
11 432 34350 0.193550 0.206139 0.011067 0.010935 0.002538
12 327 40070 0.172449 0.236692 0.013152 0.006070 0.001624
13 345 43278 0.185561 0.257730 0.013519 0.006637 0.001671
14 372 51392 0.217195 0.306374 0.016442 0.007388 0.001801
15 443 53407 0.257006 0.318984 0.016917 0.010931 0.001912
16 513 53818 0.332424 0.322413 0.017470 0.017187 0.002074
17 657 58504 0.431417 0.349819 0.019168 0.029765 0.002254
18 600 66924 0.424966 0.427820 0.021523 0.022974 0.002296
19 708 68816 0.493887 0.412282 0.023177 0.035180 0.002426
20 595 70160 0.447111 0.421269 0.022601 0.021008 0.002358
21 402 72749 0.286772 0.432235 0.022592 0.009006 0.002071
22 527 103922 0.487291 0.621258 0.032654 0.015822 0.002414
23 552 113892 0.511531 0.682059 0.035800 0.018713 0.002606
24 643 121678 0.608551 0.733741 0.038143 0.025314 0.003068
25 535 131004 0.548765 0.780345 0.041276 0.017120 0.002782
26 788 134715 0.738787 0.805370 0.043985 0.040695 0.002948
27 659 139356 0.655564 0.832847 0.044811 0.027062 0.002884
28 574 142418 0.614663 0.851331 0.044907 0.019513 0.002872
29 671 211463 0.860593 1.270099 0.066796 0.028440 0.003524
30 991 217314 1.195324 1.296652 0.069362 0.074023 0.003918
31 1145 224636 1.376988 1.351827 0.072020 0.115271 0.004329
32 814 240810 1.044791 1.447768 0.076709 0.044440 0.004033
33 946 329445 1.435774 1.980942 0.108792 0.063489 0.004937
34 876 368149 1.467089 2.206095 0.122426 0.054545 0.005101
35 1226 382630 1.951984 2.299428 0.128249 0.125456 0.005811
36 1082 414322 1.791635 2.493037 0.140365 0.093733 0.005861
37 1200 463903 2.105691 2.787735 0.158396 0.122534 0.007137
38 1664 482849 3.057695 2.898690 0.166557 0.295168 0.008168
39 1502 488347 2.682661 2.949617 0.168611 0.227022 0.007144
40 1440 621314 2.905153 3.730029 0.219487 0.196364 0.008039
41 1671 629123 3.468734 3.764259 0.223085 0.310846 0.008445
42 1476 800797 3.515316 4.820850 0.298863 0.221590 0.009985
43 1567 858346 3.868013 5.151185 0.319868 0.276208 0.010889
44 1442 920573 3.741552 5.525220 0.349847 0.202256 0.010942
45 1773 1039771 4.812556 6.241451 0.396200 0.327685 0.013697
46 1845 1089758 5.186088 6.565350 0.413339 0.391772 0.013254
47 1995 1368626 6.392365 8.204485 0.535340 0.461299 0.016423
48 1748 1489884 6.044785 8.972130 0.608673 0.353495 0.016298
49 1996 1552304 6.911417 9.262027 0.613982 0.475427 0.018804

png

Written on June 4, 2019