Nonasymptotic Convergence Rates for Cooperative Learning
Over Time-Varying Directed Graphs
Abstract
We study the problem of distributed hypothesis testing with a network of agents where some agents repeatedly gain access to information about the correct hypothesis. The group objective is to globally agree on a joint hypothesis that best describes the observed data at all the nodes. We assume that the agents can interact with their neighbors over an unknown sequence of time-varying directed graphs. Following the pioneering work of Jadbabaie, Molavi, Sandroni, and Tahbaz-Salehi, we propose local learning dynamics which combine Bayesian updates at each node with a local aggregation rule of the private agent signals. We show that these learning dynamics drive all agents to the set of hypotheses which best explain the data collected at all nodes, as long as the sequence of interconnection graphs is uniformly strongly connected. Our main result establishes a nonasymptotic, explicit, geometric convergence rate for the learning dynamics.
I Introduction
Recent years have seen a considerable amount of work on the analysis of distributed algorithms. Nonetheless, the study of distributed decision making and computation can be traced back to the classic papers [1, 2, 3] from the 70s and 80s. Applications of such algorithms range from opinion dynamics analysis, network learning and inference, cooperative robotics, and communication networks, to social as well as sensor networks. It is the latter settings of social and sensor networks that are the focus of the current paper.
Interactions among people produce exchanges of ideas, opinions, observations, and experiences, from which new ideas, opinions, and observations are generated. Analyzing dynamic models of such processes generates insight into human behavior and produces algorithms useful in the sensor networking context.
We consider a network of agents, each of which repeatedly receives information from its neighbors as well as private signals from an external source; the signals are samples of a random variable with an unknown distribution. The agents would like to collectively agree on a hypothesis (distribution) that best explains the data.
Initial results on learning in social networks are described in [4], where local update rules are designed to match Bayes’ theorem: given a prior and new observations, an agent computes likelihood functions in order to generate a new posterior; see [5]. Nevertheless, a fully Bayesian approach might not be possible in general, since full knowledge of the network structure and of the other agents’ hypotheses might be unavailable [6]. Fortunately, non-Bayesian methods have been shown to succeed in learning as well. For example, in [7], the authors propose a modification of Bayes’ rule that accounts for over-reactions or under-reactions to new information.
In a distributed setting, several groundbreaking papers have described ways in which agents achieve global behaviors by repeatedly aggregating local information in a network [8, 9, 10]. For example, in distributed hypothesis testing using belief propagation, convergence and its dependence on the communication structure were shown [10]. Later, extensions to finite-capacity channels, packet losses, delayed communications, and tracking were developed [11, 12]. In [9], the authors proved convergence in probability and asymptotic normality of the distributed estimator, and provided conditions under which the distributed estimator is as good as a centralized one. Later, in [8], almost sure convergence of a non-Bayesian rule based on an arithmetic mean was shown for fixed-topology graphs. Extensions to information heterogeneity and asymptotic convergence rates have been derived as well [13]. Following [8], other methods to aggregate Bayes estimates in a network have been explored. In [14], geometric means are used, also for fixed topologies; however, the consensus and learning steps are separated. The work in [15] extends the results of [8] to time-varying undirected graphs. In [16], local exponential rates of convergence for undirected gossip-like graphs are studied.
In this paper we propose a non-Bayesian learning rule, analyze its consistency, and derive a nonasymptotic rate of convergence for time-varying directed graphs. Our first result shows consistency: over time, the protocol learns the hypothesis, or set of hypotheses, that best explains the data collected by all the nodes. Moreover, our main result provides a geometric, nonasymptotic, and explicit characterization of the rate of convergence, which immediately leads to finite-time bounds that scale intelligibly with the number of nodes.
In a simultaneous independent effort, the authors in [17, 18] proposed a similar non-Bayesian learning algorithm where a local Bayes update is followed by a consensus step. In [17], a convergence result for fixed graphs is provided and large-deviation convergence rates are given, proving the existence of a random time after which the beliefs concentrate exponentially fast. In [18], similar probabilistic bounds on the rate of convergence are derived for fixed graphs, and comparisons with the centralized version of the learning rule are provided.
This paper is organized as follows. In Section II we describe the model that we study and the proposed update rule. In Section III we analyze the consistency of the information aggregation and estimation models, while in Section IV we establish nonasymptotic convergence rates for the agent beliefs. Section V presents simulation results, and some conclusions and future work directions are given in Section VI.
Notation
Upper case letters represent random variables (e.g., $X$), and the corresponding lower case letters their realizations (e.g., $x$). The subindex $k$ will generally indicate the time index. We write $[A]_{ij}$ for the $i$th row and $j$th column entry of a matrix $A$. We write $A'$ for the transpose of a matrix $A$ and $\mathbf{x}'$ for the transpose of a vector $\mathbf{x}$. We use $I$ for the identity matrix. Bold letters represent vectors, which are assumed to be column vectors. The $i$’th entry of a vector $\mathbf{x}$ will be denoted by a superscript $i$, i.e., $x^i$. We write $\mathbf{1}_n$ to denote the all-ones vector of size $n$. For a sequence of matrices $\{A_k\}$, we let $A_{k:t} \triangleq A_k A_{k-1} \cdots A_t$ for all $k \geq t$. The terms “almost surely” and “independent identically distributed” are abbreviated by a.s. and i.i.d., respectively.
II Problem Setup and Main Results
We consider a group of $n$ agents, each of which observes a random variable at each time step $k = 1, 2, \ldots$. We use $S_k^i$ to denote the random variable whose samples are observed by agent $i$ at time step $k$. We denote the set of outcomes of this random variable by $\mathcal{S}^i$, and we assume that this set is finite. Furthermore, we assume that, for each agent $i$, the variables $S_k^i$ are i.i.d. and drawn according to some probability distribution $P^i$. For convenience, we stack up all the $S_k^i$’s into a single random vector $S_k = (S_k^1, \ldots, S_k^n)'$.
We assume there is a finite set $\Theta$ of hypotheses, and that there is a probability distribution $\ell^i(\cdot \mid \theta)$ for each agent $i$ and hypothesis $\theta \in \Theta$. Intuitively, we think of $\ell^i(\cdot \mid \theta)$ as the probability distribution seen by agent $i$ if hypothesis $\theta$ were true. Note that the agents are not required to have a hypothesis that is exactly equal to the unknown distribution $P^i$. The goal of the agents is to agree on an element of $\Theta$ that best fits all the observations in the network (in a technical sense to be described soon).
Agents communicate with their neighbors; this communication is modeled as a sequence of graphs $G_k = (V, E_k)$ composed of a node set $V$ and a set of directed links $E_k$.
We will refer to probability distributions over $\Theta$ as beliefs, and assume that each agent $i$ begins with an initial belief $\mu_0^i$, which we also refer to as its prior distribution or prior belief.
This paper focuses on the study of the group dynamics wherein, at time $k+1$, each agent $i$ updates its previous belief $\mu_k^i$ to a new belief $\mu_{k+1}^i$ as follows:
$$\mu_{k+1}^i(\theta) = \frac{\ell^i\left(s_{k+1}^i \mid \theta\right) \prod_{j=1}^{n} \mu_k^j(\theta)^{[A_k]_{ij}}}{\sum_{\theta' \in \Theta} \ell^i\left(s_{k+1}^i \mid \theta'\right) \prod_{j=1}^{n} \mu_k^j(\theta')^{[A_k]_{ij}}} \qquad (1)$$
with $[A_k]_{ij} > 0$ when agent $i$ receives information from agent $j$ at time $k$, and $[A_k]_{ij} = 0$ otherwise.
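For concreteness, one step of a geometric-aggregation update of this kind can be sketched in code. This is a minimal sketch under the log-linear reading of the update (aggregate neighbors’ beliefs geometrically, then apply a local Bayes correction); the function and array names are ours, not the paper’s:

```python
import numpy as np

def update_beliefs(mu, A, likelihoods):
    """One step of the learning dynamics: each agent geometrically
    aggregates its neighbors' beliefs (weights from the row-stochastic
    matrix A) and then performs a local Bayesian update with the
    likelihood of its private signal.

    mu          : (n, m) array, mu[i, t] = belief of agent i in hypothesis t
    A           : (n, n) row-stochastic weight matrix A_k
    likelihoods : (n, m) array, likelihoods[i, t] = likelihood of agent i's
                  observed signal under hypothesis t
    """
    # exp(A @ log mu) realizes the geometric aggregation prod_j mu_j^{A_ij}
    log_agg = A @ np.log(mu)
    unnorm = likelihoods * np.exp(log_agg)          # local Bayes numerator
    return unnorm / unnorm.sum(axis=1, keepdims=True)  # normalize per agent
```

Row-stochasticity of `A` guarantees that, after normalization, each agent again holds a probability distribution over the hypotheses.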
The “weight matrices” satisfy some technical connectivity conditions which have been previously used in convergence analysis of distributed averaging and other consensus algorithms [19, 20, 21]. The assumptions on the communication graph are presented next.
Assumption 1
The graph sequence $\{G_k\}$ and the matrix sequence $\{A_k\}$ are such that:

1) $A_k$ is row-stochastic with $[A_k]_{ij} > 0$ if $(j,i) \in E_k$.

2) $A_k$ has positive diagonal entries, $[A_k]_{ii} > 0$ for all $i$.

3) If $[A_k]_{ij} > 0$, then $[A_k]_{ij} \geq \eta$ for some positive constant $\eta$.

4) $\{G_k\}$ is uniformly strongly connected, i.e., there is an integer $B \geq 1$ such that the graph $\big(V, \bigcup_{t=kB}^{(k+1)B-1} E_t\big)$ is strongly connected for all $k \geq 0$.
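The uniform strong connectivity condition above can be checked numerically over a finite horizon. The helper below is our own construction (the adjacency convention `adj[i, j] = True` meaning a link from node `i` to node `j` is an assumption); it verifies that every window of `B` consecutive graphs has a strongly connected union:

```python
import numpy as np

def is_B_strongly_connected(adj_seq, B):
    """Check the uniform strong connectivity condition of Assumption 1
    over a finite sequence of graphs: every window of B consecutive
    graphs must have a strongly connected union (self-loops included).

    adj_seq : list of (n, n) boolean adjacency matrices
    """
    n = adj_seq[0].shape[0]
    for start in range(0, len(adj_seq) - B + 1, B):
        union = np.eye(n, dtype=bool)          # self-loops always present
        for adj in adj_seq[start:start + B]:
            union |= adj.astype(bool)
        # strong connectivity via reachability: since union contains the
        # identity, union^(n-1) must be everywhere positive
        reach = np.linalg.matrix_power(union.astype(int), n - 1) > 0
        if not reach.all():
            return False
    return True
```

For example, a directed cycle whose edges are split across two alternating graphs is 2-strongly connected but not 1-strongly connected.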
As a measure of the explanatory quality of the hypotheses in the set $\Theta$, we use the Kullback-Leibler (KL) divergence between two discrete probability distributions $p$ and $q$:
$$D_{KL}(p \,\|\, q) = \sum_{s} p(s) \log \frac{p(s)}{q(s)}.$$
Concretely, the quality of hypothesis $\theta$ for agent $i$ is measured by the KL divergence $D_{KL}(P^i \,\|\, \ell^i(\cdot \mid \theta))$ between the true distribution of the signals and the probability distribution as seen by agent $i$ if hypothesis $\theta$ were correct. We use the following assumption on the agents’ best hypotheses.
Assumption 2
The set $\Theta^*$ defined as $\Theta^* \triangleq \bigcap_{i=1}^{n} \Theta_i^*$, where for each $i$, $\Theta_i^* \triangleq \arg\min_{\theta \in \Theta} D_{KL}\left(P^i \,\|\, \ell^i(\cdot \mid \theta)\right)$, is nonempty.
Assumption 2 is satisfied if there is some “true state of the world” $\theta^*$ such that each agent sees distributions generated according to it, i.e., $P^i = \ell^i(\cdot \mid \theta^*)$ for all $i$. However, this need not be the case for Assumption 2 to hold. Indeed, the assumption is considerably weaker, as it merely requires that the sets of hypotheses which provide the “best fits” for each agent have at least a single element in common.
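For a concrete instance, the set $\Theta^*$ can be computed directly from the definitions. The sketch below uses our own helper names; it computes each agent’s best-fit set and intersects them:

```python
import numpy as np

def kl(p, q):
    """D_KL(p || q) for discrete distributions, with q > 0 on the support of p."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def optimal_hypotheses(true_dists, models, tol=1e-12):
    """Compute Theta* of Assumption 2: the intersection over agents of the
    sets of hypotheses minimizing each agent's KL divergence.

    true_dists : list of n distributions P^i
    models     : models[i][t] = distribution of agent i's signals under
                 hypothesis t
    """
    sets = []
    for P, agent_models in zip(true_dists, models):
        divs = [kl(P, m) for m in agent_models]
        best = min(divs)
        # all hypotheses within tolerance of the minimum are "best fits"
        sets.append({t for t, d in enumerate(divs) if d <= best + tol})
    return set.intersection(*sets)
```

An agent with observationally equivalent models contributes its whole hypothesis set, so the intersection is decided by the agents that can discriminate.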
We will further require the following assumptions on the initial distribution and the likelihood functions. The first of these is sometimes referred to as the Zero Probability Property [22].
Assumption 3
For all agents $i$:

1) The prior beliefs on all $\theta \in \Theta^*$ are positive, i.e., $\mu_0^i(\theta) > 0$ for all $\theta \in \Theta^*$.

2) There exists an $\alpha > 0$ such that $\ell^i(s \mid \theta) \geq \alpha$ for all $s \in \mathcal{S}^i$ and all $\theta \in \Theta$.
Assumption 3.1 can be relaxed to the requirement that all prior beliefs are positive for some $\theta \in \Theta^*$. Either condition is easy to satisfy: it suffices to let each agent have a uniform prior belief, which is reasonable in the absence of any initial information about the goodness of the hypotheses.
We now state our first result, Theorem 1, which asserts that the dynamics in Eq. (1) concentrate all agents’ beliefs on the optimal hypothesis set $\Theta^*$ asymptotically as $k \to \infty$. We provide its proof in Section III.
Our main result is a nonasymptotic explicit convergence rate, given in the following theorem, proven in Section IV.
Theorem 2
Let Assumptions 1, 2, and 3 hold, and let $\delta \in (0,1)$ be a given confidence level. Then, the update rule of Eq. (1) has the following property: there exists an integer $N(\delta)$ such that, with probability $1-\delta$, for all $k \geq N(\delta)$, all agents $i$, and any $\theta \notin \Theta^*$, the beliefs $\mu_k^i(\theta)$ satisfy a geometric decay bound involving the constant $\alpha$ from Assumption 3.2.
The constants in the decay bound and the transient time $N(\delta)$ satisfy the following relations:

1) For general $B$-strongly-connected graph sequences $\{G_k\}$,

2) If every matrix $A_k$ is doubly stochastic,

3) If each $G_k$ is an undirected graph and each $A_k$ is the lazy Metropolis matrix, i.e., the row-stochastic matrix which satisfies $[A_k]_{ij} = \frac{1}{2\max(d_i, d_j)}$ for every edge $\{i,j\} \in E_k$ with $i \neq j$ (where $d_i$ is the degree of node $i$, and the diagonal entries are chosen so that each row sums to one).
Note that the transient time does not depend on the particular $\theta \notin \Theta^*$, since the bound is the same for all such $\theta$.
In contrast to the previous literature, this convergence rate is not only geometric but also nonasymptotic and explicit, in the sense of immediately leading to bounds which scale intelligibly in terms of the number of nodes. For example, in the case of doubly stochastic matrices, Theorem 2 immediately implies that, after a transient time which scales cubically in the number of nodes, the network achieves exponential decay to the correct answer.
Now, consider the case when Assumption 3.1 is relaxed to the following requirement: the prior beliefs on some $\theta \in \Theta^*$ are positive (i.e., $\mu_0^i(\theta) > 0$ for some $\theta \in \Theta^*$ and all $i$). Then, it can be seen that Theorem 2 remains valid with $\Theta^*$ replaced by the set of all $\theta \in \Theta^*$ for which all the agents’ priors are positive.
III Consistency of the Learning Rule
In this section we prove Theorem 1, which provides a statement about the consistency (see [23, 24]) of the distributed estimator given in Eq. (1). Our analysis will require some auxiliary results. First, we will recall some results from [25] about the convergence of a product of row stochastic matrices.
Lemma 1
The proof of Lemma 1 may be found in [25], with the exception of the bounds for the lazy Metropolis chains, which we omit here due to space constraints.
Lemma 2
Next, we need a technical lemma regarding weighted averages of random variables with finite variances.
Lemma 3
Proof.
Adding and subtracting yields
(3) 
By Lemma 1, for all . Moreover, each of the entries of are upper bounded by Assumption 2. Thus, the first term on the right hand side of Eq. (3) goes to zero as we take the limit over . Regarding the second term in Eq. (3), by the definition of the KL divergence measure, we have that
or equivalently .
Kolmogorov’s strong law of large numbers states that if $\{X_t\}$ is a sequence of independent random variables with variances $\sigma_t^2$ such that $\sum_{t=1}^{\infty} \sigma_t^2 / t^2 < \infty$, then $\frac{1}{k} \sum_{t=1}^{k} \left(X_t - E[X_t]\right) \to 0$ a.s. By using Assumptions 1 and 3.2, it can be seen that this summability condition holds here.
The result follows by Lemma 1 and Kolmogorov’s strong law of large numbers. ∎
With Lemma 3 in place, we are ready to prove Theorem 1. The proof of Theorem 1 (and also of Theorem 2) makes use of the following quantities: for all $i$ and all $\theta \in \Theta$,

$$\varphi_k^i(\theta) \triangleq \log \frac{\mu_k^i(\theta)}{\mu_k^i(\theta^*)}, \qquad (4)$$

defined for an arbitrary fixed $\theta^* \in \Theta^*$ (the dependence on $\theta^*$ is suppressed).
Proof.
(Theorem 1) Dividing both sides of Eq. (1) by the belief on a fixed $\theta^* \in \Theta^*$, taking logarithms, and using the definition in Eq. (4), we obtain:
Stacking up these values over the agents into a single vector, we can compactly write the preceding relations as follows:
(5) 
which implies that for all
(6) 
We add and subtract in Eq. (6), then
By using the lower bounds on described in Lemma 2 and the fact that , we obtain
Therefore, we have
The first term on the right-hand side of the preceding relation converges to zero deterministically. The third term goes to zero as well, since it is bounded, and the fourth term converges to zero almost surely by Lemma 3. Consequently,
(7) 
Now if $\theta \notin \Theta^*$, then this limit is strictly negative and, thus, the quantity in Eq. (4) diverges to $-\infty$ almost surely. This implies $\mu_k^i(\theta) \to 0$ almost surely. ∎
IV Non-Asymptotic Rate of Convergence
In this section, we prove Theorem 2, which states an explicit rate of convergence for the cooperative learning process. Before proving the theorem, we state an auxiliary lemma that provides a bound on the expectation of the random variables defined in Eq. (4).
Lemma 4
Proof.
Taking the expected value of both sides of Eq. (5) gives
Therefore, by recursion, we see that for all $k$,
By adding and subtracting , we obtain
We removed the last term on the right-hand side of the preceding relation. Moreover, bounding the entries in the first two terms on the right-hand side and using the fact that the weight matrices are stochastic, we have that
Next, we use the upper bound on the terms from Lemma 1 and the lower bound on the entries given in Lemma 2, and we arrive at the following relation:
and the result follows. ∎
The proof of Theorem 2 uses McDiarmid’s inequality [27], which provides bounds on the probability that the beliefs exceed a given value. McDiarmid’s inequality is stated below.
Theorem 3
(McDiarmid’s inequality [27]) Let $X_1, \ldots, X_k$ be a sequence of independent random variables with $X_t \in \mathcal{X}_t$. If a function $f: \mathcal{X}_1 \times \cdots \times \mathcal{X}_k \to \mathbb{R}$ has bounded differences, i.e., for all $t = 1, \ldots, k$,
$$\sup_{x_1, \ldots, x_k, \hat{x}_t} \left| f(x_1, \ldots, x_t, \ldots, x_k) - f(x_1, \ldots, \hat{x}_t, \ldots, x_k) \right| \leq c_t,$$
then for any $\epsilon > 0$,
$$P\left( f(X_1, \ldots, X_k) - E\left[f(X_1, \ldots, X_k)\right] \geq \epsilon \right) \leq \exp\left( -\frac{2\epsilon^2}{\sum_{t=1}^{k} c_t^2} \right).$$
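As a numerical sanity check on this inequality, the snippet below (our own illustration, not from the paper) compares the McDiarmid bound with the empirical upper tail of the mean of bounded i.i.d. variables, where changing one sample moves the mean by at most $c_t = 1/n$:

```python
import numpy as np

def mcdiarmid_bound(c, eps):
    """Upper bound on P(f - E[f] >= eps) for a function with
    bounded differences c = (c_1, ..., c_n)."""
    c = np.asarray(c, float)
    return float(np.exp(-2.0 * eps**2 / np.sum(c**2)))

# Illustration: the empirical mean of n i.i.d. Bernoulli(0.5) variables.
rng = np.random.default_rng(0)
n, eps, trials = 200, 0.1, 5000
bound = mcdiarmid_bound(np.full(n, 1.0 / n), eps)   # exp(-2 n eps^2)
samples = rng.integers(0, 2, size=(trials, n))
emp = np.mean(samples.mean(axis=1) - 0.5 >= eps)    # empirical tail frequency
```

Here the bound evaluates to $e^{-4} \approx 0.018$, and the observed tail frequency stays below it, as the inequality guarantees in expectation.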
Now, we are ready to prove Theorem 2.
Proof.
(Theorem 2) First, we express the beliefs in terms of the variables defined in Eq. (4); this will allow us to use McDiarmid’s inequality to obtain the concentration bounds. By the dynamics of Eq. (1) and Assumption 3.1, we have
Therefore,
where the last equality follows from Lemma 4.
We now view this quantity as a function of the independent random vectors $S_1, \ldots, S_k$; see Eq. (6). Thus, for any two realization sequences that differ in a single coordinate, we have
Similarly, from Eq. (6) we can see that
It follows that this function has bounded differences, and by McDiarmid’s inequality (Theorem 3) we obtain the following concentration inequality,
Finally, for a given confidence level $\delta$, setting the right-hand side of the preceding inequality equal to $\delta$ yields the desired result. ∎
V Simulation Results
In this section we show simulation results for a group of six agents connected over a time-varying directed graph, shown in Figure 1, for some specific weight matrices. Each agent updates its beliefs according to Eq. (1).
Note that the graph is such that the edge connecting agent 1 and agent 2 switches on and off at each time step. The edges connecting agents 2–6 change at each time step as well.
Every agent observes a binary random variable, with the same underlying distribution for all agents. Moreover, every agent has two possible models, $\theta_1$ and $\theta_2$. Agent 1’s likelihood functions are such that hypothesis $\theta_1$ is closer to the true distribution. On the other hand, agents 2 to 6 have uniformly distributed, observationally equivalent hypotheses for both $\theta_1$ and $\theta_2$; that is, they are not able to differentiate between the two hypotheses individually.
Figure 2 shows the empirical mean, over 5000 Monte Carlo runs, of the beliefs on hypothesis $\theta_1$ of agents 1, 4, 5, and 6. The results show that agent 1 is the fastest learner, since it is the one with the correct model. Nevertheless, all other agents converge to the correct hypothesis as well, even though their models are not individually distinguishable.
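A reduced two-agent version of this experiment can be sketched as follows. The likelihood values, weight matrix, and horizon here are assumed for illustration, not taken from the paper; the update follows the geometric-aggregation reading of Eq. (1):

```python
import numpy as np

rng = np.random.default_rng(1)
P_true = np.array([0.7, 0.3])              # common signal distribution (assumed)
models = np.array([                        # models[i, t] = agent i's model
    [[0.7, 0.3], [0.3, 0.7]],              # informed agent: theta_1 is exact
    [[0.5, 0.5], [0.5, 0.5]],              # observationally equivalent agent
])
A = np.full((2, 2), 0.5)                   # fixed doubly stochastic weights
mu = np.full((2, 2), 0.5)                  # uniform priors

for _ in range(300):
    s = rng.choice(2, size=2, p=P_true)    # private signals, one per agent
    lik = models[np.arange(2), :, s]       # likelihood of each agent's signal
    unnorm = lik * np.exp(A @ np.log(mu))  # aggregate beliefs, then Bayes
    mu = unnorm / unnorm.sum(axis=1, keepdims=True)
```

Even though the second agent’s models are observationally equivalent, the aggregation step propagates the informed agent’s evidence, and both belief vectors concentrate on the first hypothesis.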
VI Conclusions and Future Work
We have studied the consistency and the rate of convergence of a distributed non-Bayesian learning system. We have shown almost sure consistency and have provided bounds on the global exponential rate of convergence. The novelty of our results is in the establishment of convergence rate estimates that are nonasymptotic, geometric, and explicit, in the sense that the bounds capture the quantities characterizing the graph sequence properties as well as the agent learning capabilities. These results were derived for general time-varying directed graphs.
Our work suggests a number of open questions. It is natural to attempt extensions to continuous hypothesis spaces, and to further study the dependence on the number of agents, the number of hypotheses, etc. The results can be extended to tracking problems where the distribution of the observations changes with time. When the number of hypotheses is large, ideas from social sampling can also be incorporated into this framework [28]. Moreover, the possibility of corrupted measurements or conflicting models among the agents is also of interest, especially in the setting of social networks.
References
 [1] R. J. Aumann, “Agreeing to disagree,” The Annals of Statistics, pp. 1236–1239, 1976.
 [2] V. Borkar and P. P. Varaiya, “Asymptotic agreement in distributed estimation,” IEEE Transactions on Automatic Control, vol. 27, no. 3, pp. 650–655, 1982.
 [3] J. N. Tsitsiklis and M. Athans, “Convergence and asymptotic agreement in distributed decision problems,” IEEE Transactions on Automatic Control, vol. 29, no. 1, pp. 42–50, 1984.
 [4] D. Acemoglu, M. A. Dahleh, I. Lobel, and A. Ozdaglar, “Bayesian learning in social networks,” The Review of Economic Studies, vol. 78, no. 4, pp. 1201–1236, 2011.
 [5] M. MuellerFrank, “A general framework for rational learning in social networks,” Theoretical Economics, vol. 8, no. 1, pp. 1–40, 2013.
 [6] D. Gale and S. Kariv, “Bayesian learning in social networks,” Games and Economic Behavior, vol. 45, no. 2, pp. 329–346, 2003.
 [7] L. G. Epstein, J. Noor, and A. Sandroni, “Non-Bayesian learning,” The BE Journal of Theoretical Economics, vol. 10, no. 1, 2010.
 [8] A. Jadbabaie, P. Molavi, A. Sandroni, and A. Tahbaz-Salehi, “Non-Bayesian social learning,” Games and Economic Behavior, vol. 76, no. 1, pp. 210–225, 2012.
 [9] K. Rahnama Rad and A. TahbazSalehi, “Distributed parameter estimation in networks,” in IEEE Conference on Decision and Control, 2010, pp. 5050–5055.
 [10] M. Alanyali, S. Venkatesh, O. Savas, and S. Aeron, “Distributed Bayesian hypothesis testing in sensor networks,” in American Control Conference, vol. 6, 2004, pp. 5369–5374.
 [11] V. Saligrama, M. Alanyali, and O. Savas, “Distributed detection in sensor networks with packet losses and finite capacity links,” IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4118–4132, 2006.
 [12] R. Rahman, M. Alanyali, and V. Saligrama, “Distributed tracking in multihop sensor networks with communication delays,” IEEE Transactions on Signal Processing, vol. 55, no. 9, pp. 4656–4668, 2007.
 [13] A. Jadbabaie, P. Molavi, and A. TahbazSalehi, “Information heterogeneity and the speed of learning in social networks,” Columbia Business School Research Paper, no. 1328, 2013.
 [14] S. Bandyopadhyay and S.-J. Chung, “Distributed estimation using Bayesian consensus filtering,” in American Control Conference (ACC), 2014, pp. 634–641.
 [15] Q. Liu, A. Fang, L. Wang, and X. Wang, “Social learning with timevarying weights,” Journal of Systems Science and Complexity, vol. 27, no. 3, pp. 581–593, 2014.
 [16] S. Shahrampour and A. Jadbabaie, “Exponentially fast parameter estimation in networks using distributed dual averaging,” in IEEE Conference on Decision and Control, 2013, pp. 6196–6201.
 [17] A. Lalitha, T. Javidi, and A. Sarwate, “Social learning and distributed hypothesis testing,” arXiv preprint arXiv:1410.4307, 2015.
 [18] S. Shahrampour, A. Rakhlin, and A. Jadbabaie, “Distributed detection: Finitetime analysis and impact of network topology,” arXiv preprint arXiv:1409.8606, 2014.
 [19] D. P. Bertsekas and J. N. Tsitsiklis, Parallel and distributed computation: numerical methods. PrenticeHall, Inc., 1989.
 [20] L. Moreau, “Stability of multiagent systems with timedependent communication links,” IEEE Transactions on Automatic Control, vol. 50, no. 2, pp. 169–182, 2005.
 [21] A. Jadbabaie, J. Lin, and A. S. Morse, “Coordination of groups of mobile autonomous agents using nearest neighbor rules,” IEEE Transactions on Automatic Control, vol. 48, no. 6, pp. 988–1001, 2003.
 [22] C. Genest, J. V. Zidek et al., “Combining probability distributions: A critique and an annotated bibliography,” Statistical Science, vol. 1, no. 1, pp. 114–135, 1986.
 [23] J. L. Doob, “Application of the theory of martingales,” Le calcul des probabilites et ses applications, pp. 23–27, 1949.
 [24] S. Ghosal, “A review of consistency and convergence of posterior distribution,” in Varanashi Symposium in Bayesian Inference, Banaras Hindu University, 1997.
 [25] A. Nedic and A. Olshevsky, “Distributed optimization over time-varying directed graphs,” IEEE Transactions on Automatic Control, vol. 60, no. 3, pp. 601–615, 2015.
 [26] A. Nedić, A. Olshevsky, A. Ozdaglar, and J. N. Tsitsiklis, “On distributed averaging algorithms and quantization effects,” IEEE Transactions on Automatic Control, vol. 54, no. 11, pp. 2506–2517, 2009.
 [27] C. McDiarmid, “On the method of bounded differences,” Surveys in combinatorics, vol. 141, no. 1, pp. 148–188, 1989.
 [28] A. Sarwate and T. Javidi, “Distributed learning of distributions via social sampling,” IEEE Transactions on Automatic Control, vol. 60, no. 1, pp. 34–45, 2015.