Publications by date
Pricing, Competition and Content for Internet Service Providers
Peter Key and Richard Steinberg.
IEEE/ACM Transactions on Networking October 2020, Vol. 28, Issue 5, pp 2285-2298
Abstract
We examine competition between two Internet Service Providers (ISPs): the first provides basic Internet service, while the second provides Internet service plus content, i.e., enhanced service. The first ISP can partner with a Content Provider to offer the same content as the second ISP; when such a partnering arrangement occurs, the Content Provider pays the first ISP a transfer price for delivering the content. Users have heterogeneous preferences, and each in general faces three options: (1) buy basic Internet service from the first ISP; (2) buy enhanced service from the second ISP; or (3) buy enhanced service jointly from the first ISP and the Content Provider. We derive results on the existence and uniqueness of a Nash equilibrium, and provide closed-form expressions for the prices, user masses, and profits of the two ISPs and the Content Provider. When the first ISP can choose the transfer price and congestion is linear in the load, it is never optimal for the first ISP to set a negative transfer price in the hope of attracting more revenue from additional customers desiring enhanced service. Conversely, when congestion is sufficiently super-linear, the optimal strategy for the first ISP is either to set a negative transfer price (subsidizing the Content Provider) or to set a high transfer price that shuts the Content Provider out of the market.
[DOI] [IEEE Xplore Early Access] [LSE eprint]
Simple Pricing Schemes for the Cloud
Ian A. Kash, Peter Key, and Warut Suksompong.
ACM Transactions on Economics and Computation August 2019, Vol. 7 Issue 2, Article No. 7.
Abstract
The problem of pricing the cloud has attracted much recent attention due to the widespread use of cloud computing and cloud services. From a theoretical perspective, several mechanisms that provide strong efficiency or fairness guarantees and desirable incentive properties have been designed. However, these mechanisms often rely on a rigid model, with several parameters needing to be precisely known for the guarantees to hold. In this article, we consider a stochastic model and show that it is possible to obtain good welfare and revenue guarantees with simple mechanisms that do not make use of the information on some of these parameters. In particular, we prove that a mechanism that sets the same price per timestep for jobs of any length achieves at least 50% of the welfare and revenue obtained by a mechanism that can set different prices for jobs of different lengths, and the ratio can be improved if we have more specific knowledge of some parameters. Similarly, a mechanism that sets the same price for all servers even though the servers may receive different kinds of jobs can provide a reasonable welfare and revenue approximation compared to a mechanism that is allowed to set different prices for different servers.
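To illustrate the flavour of the single-price-versus-length-specific comparison, here is a minimal Python sketch under an invented job model (job lengths 1-4 and a per-timestep value distribution made up for illustration, not the model in the paper):

LENGTHS = [1, 2, 3, 4]

def expected_revenue(price_of_length):
    # A length-L job's per-timestep value is Uniform(0, 1/L) (an invented
    # demand model), so it accepts per-timestep price p with probability
    # max(0, 1 - p*L), and then pays p on each of its L timesteps.
    total = 0.0
    for L in LENGTHS:
        p = price_of_length(L)
        total += (1.0 / len(LENGTHS)) * p * L * max(0.0, 1.0 - p * L)
    return total

# Mechanism allowed to price each job length separately: monopoly price 1/(2L).
rev_per_length = expected_revenue(lambda L: 1.0 / (2 * L))
# Mechanism restricted to a single per-timestep price for all job lengths.
rev_single = max(expected_revenue(lambda L, p=p: p)
                 for p in (i / 1000 for i in range(1, 1000)))
print(rev_single / rev_per_length)   # about 0.83 here, above the 50% bound

Under these illustrative distributions the single per-timestep price recovers over 80% of the length-specific revenue, comfortably above the 50% guarantee the paper proves.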
BibTeX:
@Article{KasKeySuk:2019,
author = {Ian A. Kash and Peter Key and Warut Suksompong},
title = {Simple Pricing Schemes for the Cloud},
journal = {ACM Transactions on Economics and Computation},
year = {2019},
volume = {7},
number = {2},
month = aug,
doi = {10.1145/3327973},
url = {https://dl.acm.org/citation.cfm?id=3327973},
url2 = {http://arxiv.org/pdf/1705.08563.pdf},
}
[DOI] [Journal Paper]
Strategic behavior and learning in all-pay auctions: an empirical study using crowdsourced data
Yoram Bachrach, Ian A. Kash, Peter Key and Joel Oren.
Autonomous Agents and Multi-Agent Systems March 2019, Vol. 33, Issue 1-2, pp 192–215.
Abstract: We analyze human behavior in crowdsourcing contests using an all-pay auction model where all participants exert effort, but only the highest bidder receives the reward. We let workers sourced from Amazon Mechanical Turk participate in an all-pay auction, and contrast the game-theoretic equilibrium with the choices of the human participants. We examine how people competing in the contest learn and adapt their bids, comparing their behavior to well-established online learning algorithms in a novel approach to quantifying the performance of humans as learners. For the crowdsourcing contest designer, our results show that a bimodal distribution of effort should be expected, with some very high effort and some very low effort, and that humans have a tendency to overbid. Our results suggest that humans are weak learners in this setting, so it may be important to educate participants about the strategic implications of crowdsourcing contests.
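For reference, the standard textbook benchmark against which overbidding can be judged (not anything estimated in the paper) is the symmetric risk-neutral equilibrium bid in an all-pay auction with n bidders and values drawn i.i.d. from Uniform(0,1); a short Python sketch:

def equilibrium_bid(v, n):
    # Symmetric equilibrium bid with n bidders and values i.i.d. Uniform(0,1):
    # beta(v) = (n-1)/n * v**n.  A textbook benchmark, not the paper's data.
    return (n - 1) / n * v ** n

n = 5
for v in [0.2, 0.5, 0.8, 1.0]:
    print(f"value {v:.1f} -> equilibrium bid {equilibrium_bid(v, n):.3f}")
# Even a bidder with the top value 1.0 bids only (n-1)/n = 0.8 of it; bids
# above this curve are overbidding relative to the risk-neutral equilibrium.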
BibTeX:
@Article{BacKasKeyOre:2019,
author = {Yoram Bachrach and Ian A. Kash and Peter Key and Joel Oren},
title = {Strategic behavior and learning in all-pay auctions: an empirical study using crowdsourced data},
journal = {Autonomous Agents and Multi-Agent Systems},
year = {2019},
volume = {33},
number = {1-2},
pages = {192-215},
month = {March},
issn = {1573-7454},
doi = {10.1007/s10458-019-09402-4},
publisher = {Springer Nature},
url = {https://link.springer.com/article/10.1007/s10458-019-09402-4},
}
[DOI] [Journal Paper] [PDF]
Optimal Pricing and Introduction Timing of New Virtual Machines
Ian A. Kash, Peter Key and Spyros I. Zoumpoulis.
EC '18: Proceedings of the 2018 ACM Conference on Economics and Computation, June 2018, pp. 51-52.
Abstract
As the quality of computer hardware increases over time, cloud service providers have the ability to offer more powerful virtual machines (VMs) and other resources to their customers. But providers face several trade-offs as they seek to make the best use of improved technology. On one hand, more powerful machines are more valuable to customers and command a higher price. On the other hand, there is a cost to develop and launch a new product. Further, the new product competes with existing products. Thus, the provider faces two questions. First, when should new classes of VMs be introduced? Second, how should they be priced, taking into account both the VM classes that currently exist and the ones that will be introduced in the future?
This decision problem, combining scheduling and pricing new product introductions, is common in a variety of settings. One aspect that is more specific to the cloud setting is that VMs are rented rather than sold. Thus existing customers can switch to a new offering, albeit with some inconvenience. There is indeed evidence of customers’ aversion to upgrades in the cloud computing services market. Based on a study of Microsoft Azure, we estimate that customers who arrive after a new VM class is launched are 50% more likely to use it than existing customers, indicating that these switching costs may be substantial.
This opens up a wide range of possible policies for the cloud service provider. Our main result is that a surprisingly simple policy is close to optimal in many situations: new VM classes are introduced on a periodic schedule and each is priced as if it were the only product being offered. (Periodic introductions have been noticeable in the practice of cloud computing. For example, Amazon Elastic Compute Cloud (EC2) launched new classes of the m.xlarge series in October 2007, February 2010, October 2012, and June 2015, i.e., in intervals of 28 months, 32 months, and 32 months.) We refer to this pricing policy as Myerson pricing, as these prices can be computed as in Myerson’s classic paper (1981). This policy produces a marketplace where new customers always select the newest and best offering, while existing customers may stick with older VMs due to switching costs.
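As a minimal sketch of the Myerson-style pricing referred to above, the following Python snippet computes the revenue-maximizing posted price for an assumed Uniform(0,1) value distribution (the distribution and grid are purely illustrative):

def myerson_posted_price(cdf, grid):
    # Revenue-maximizing posted price argmax_p p * (1 - F(p)); the Uniform(0,1)
    # value distribution used below is an assumption for illustration only.
    return max(grid, key=lambda p: p * (1.0 - cdf(p)))

grid = [i / 1000 for i in range(1001)]
print(myerson_posted_price(lambda p: p, grid))   # 0.5 for Uniform(0,1)

Each VM class is then posted at the price maximizing p(1 - F(p)), i.e., priced as if it were the only product on offer.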
BibTeX:
@InProceedings{KasKeyZou:2018,
author = {Ian A. Kash and Peter B. Key and Spyros I. Zoumpoulis},
title = {Optimal Pricing and Introduction Timing of New Virtual Machines},
booktitle = {EC 18 Proceedings of the 2018 ACM Conference on Economics and Computation},
year = {2018},
publisher = {ACM},
pages = {51--52},
doi = {10.1145/3219166.3219209},
note = {See INSEAD working paper 2018/12/DSC},
url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3137038},
}
[DOI] [Working Paper PDF]
Simple Pricing Schemes for the Cloud Ian A. Kash, Peter Key and Warut Suksompong. Proceedings of the 13th Conference on Web and Internet Economics (WINE), Dec 2017, pp. 311-324.
Abstract
The problem of pricing the cloud has attracted much recent attention due to the widespread use of cloud computing and cloud services. From a theoretical perspective, several mechanisms that provide strong efficiency or fairness guarantees and desirable incentive properties have been designed. However, these mechanisms often rely on a rigid model, with several parameters needing to be precisely known in order for the guarantees to hold. In this paper, we consider a stochastic model and show that it is possible to obtain good welfare and revenue guarantees with simple mechanisms that do not make use of the information on some of these parameters. In particular, we prove that a mechanism that sets the same price per time step for jobs of any length achieves at least 50% of the welfare and revenue obtained by a mechanism that can set different prices for jobs of different lengths, and the ratio can be improved if we have more specific knowledge of some parameters. Similarly, a mechanism that sets the same price for all servers even though the servers may receive different kinds of jobs can provide a reasonable welfare and revenue approximation compared to a mechanism that is allowed to set different prices for different servers.
BibTeX:
@InProceedings{KasKeyWar:2017,
author = {Ian A. Kash and Peter Key and Warut Suksompong},
title = {Simple Pricing Schemes for the Cloud},
booktitle = {Proceedings of the 13th Conference on Web and Internet Economics (WINE 2017)},
year = {2017},
note = {A preliminary version appeared in Proceedings of the 12th Workshop on the Economics of Networks, Systems and Computation (NetEcon), June 2017},
publisher = {Springer},
month = dec,
pages = {311--324},
url = {https://link.springer.com/chapter/10.1007%2F978-3-319-71924-5_22},
doi = {10.1007/978-3-319-71924-5_22},
url2 = {http://arxiv.org/pdf/1705.08563.pdf},
}
[DOI] [Workshop version] Full PDF
Efficient Advert Assignment Frank Kelly, Peter Key and Neil Walton. Operations Research. 2016. Vol. 64(4), pp. 822-837.
Abstract: We develop a framework for the analysis of large-scale ad auctions where adverts are assigned over a continuum of search types. For this pay-per-click market, we provide an efficient mechanism that maximizes social welfare. In particular, we show that the social welfare optimization can be solved in separate optimizations conducted on the time scales relevant to the search platform and advertisers. Here, on each search occurrence, the platform solves an assignment problem and, on a slower time scale, each advertiser submits a bid that matches its demand for click-throughs with supply. Importantly, knowledge of global parameters, such as the distribution of search terms, is not required when separating the problem in this way. Exploiting the information asymmetry between the platform and advertiser, we describe a simple mechanism that incentivizes truthful bidding, has a unique Nash equilibrium that is socially optimal, and thus implements our decomposition. Further, we consider models where advertisers adapt their bids smoothly over time and prove convergence to the solution that maximizes social welfare. Finally, we describe several extensions that illustrate the flexibility and tractability of our framework.
BibTeX:
@article{KelKeyWal:2016,
author = {Frank Kelly and Peter Key and Neil Walton},
title = {Efficient Advert Assignment},
journal = {Operations Research},
year = {2016},
volume = {64},
number = {4},
pages = {822-837},
url = {http://dx.doi.org/10.1287/opre.2016.1519},
doi = {10.1287/opre.2016.1519}
}
[DOI] PDF of Preprint
Ranking and Tradeoffs in Sponsored Search Auctions Ben Roberts, Dinan Gunawardena, Ian A. Kash and Peter Key. ACM Trans. Econ. Comput. New York, NY, USA, June 2016. Vol. 4(3), pp. 17:1-17:21. ACM.
Abstract: In a sponsored search auction, decisions about how to rank ads impose tradeoffs between objectives, such as revenue and welfare. In this article, we examine how these tradeoffs should be made. We begin by arguing that the most natural solution concept to evaluate these tradeoffs is the lowest symmetric Nash equilibrium (SNE). As part of this argument, we generalise the well-known connection between the lowest SNE and the VCG outcome. We then propose a new ranking algorithm, loosely based on the revenue-optimal auction, that uses a reserve price to order the ads (not just to filter them) and give conditions under which it raises more revenue than simply applying that reserve price. Finally, we conduct extensive simulations examining the tradeoffs enabled by different ranking algorithms and show that our proposed algorithm enables superior operating points by a variety of metrics.
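A hedged Python sketch of the kind of reserve-based ranking discussed here; the scoring rule ctr*(bid - reserve), the payment rule, and the numbers are assumptions for illustration, not the paper's exact algorithm:

def gsp_with_reserve_ranking(bids, ctrs, reserve, num_slots):
    # Illustrative GSP-style allocation: ads are scored by ctr*(bid - reserve)
    # (an assumed scoring rule), ranked by score, and each winner pays the
    # smallest per-click price that would keep its position.
    eligible = [a for a in bids if bids[a] >= reserve]
    ranked = sorted(eligible, key=lambda a: ctrs[a] * (bids[a] - reserve), reverse=True)
    winners = ranked[:num_slots]
    prices = {}
    for i, a in enumerate(winners):
        if i + 1 < len(ranked):
            nxt = ranked[i + 1]
            prices[a] = reserve + ctrs[nxt] * (bids[nxt] - reserve) / ctrs[a]
        else:
            prices[a] = reserve
    return winners, prices

bids = {"a": 4.0, "b": 3.0, "c": 1.5}      # invented advertisers and bids
ctrs = {"a": 0.10, "b": 0.08, "c": 0.12}   # invented click-through rates
print(gsp_with_reserve_ranking(bids, ctrs, reserve=1.0, num_slots=2))

Note how the reserve enters the ordering itself rather than only filtering out low bids, which is the distinction the abstract draws.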
BibTeX:
@article{RobGuKaKe:2016,
author = {Roberts, Ben and Gunawardena, Dinan and Kash, Ian A. and Key, Peter},
title = {Ranking and Tradeoffs in Sponsored Search Auctions},
journal = {ACM Trans. Econ. Comput.},
publisher = {ACM},
year = {2016},
volume = {4},
number = {3},
pages = {17:1--17:21},
url = {http://doi.acm.org/10.1145/2910576},
doi = {10.1145/2910576}
}
[DOI] PDF from ACM
Mechanism Design for Mixed Bidders Yoram Bachrach, Sofia Ceppi, Ian A. Kash, Peter Key and Mohammad Reza Khani. In Proceedings of the 25th International Conference on World Wide Web. April 2016. WWW — World Wide Web Consortium (W3C).
Abstract: The Generalized Second Price (GSP) auction has appealing properties when ads are simple (text based and identical in size), but does not generalize to richer ad settings, whereas truthful mechanisms such as VCG do. However, a straight switch from GSP to VCG incurs significant revenue loss for the search engine. We introduce a transitional mechanism which encourages advertisers to update their bids to their valuations, while mitigating revenue loss. In this setting, it is easier to propose first a payment function rather than an allocation function, so we give a general framework which guarantees incentive compatibility by requiring that the payment functions satisfy two specific properties. Finally, we analyze the revenue impacts of our mechanism on a sample of Bing data.
BibTeX:
@inproceedings{BaCeKaKeK:2016,
author = {Yoram Bachrach and Sofia Ceppi and Ian A. Kash and Peter Key and Mohammad Reza Khani},
title = {Mechanism Design for Mixed Bidders},
booktitle = {Proceedings of the 25th International Conference on World Wide Web},
publisher = {WWW --- World Wide Web Consortium (W3C)},
year = {2016},
doi = {10.1145/2872427.2882983}
}
[DOI] PDF
Using Convolutional Neural Networks to Analyze Function Properties from Images Yoad Lewenberg, Yoram Bachrach, Ian Kash and Peter Key. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence. February 2016, pp. 4363-4364. AAAI Press.
Abstract: We propose a system for determining properties of mathematical functions given an image of their graph representation. We demonstrate our approach for two dimensional graphs (curves of single variable functions) and three-dimensional graphs (surfaces of two variable functions), studying the properties of convexity and symmetry. Our method uses a Convolutional Neural Network which classifies functions according to these properties, without using any hand-crafted features. We propose algorithms for randomly constructing functions with convexity or symmetry properties, and use the images generated by these algorithms to train our network. Our system achieves a high accuracy on this task, even for functions where humans find it difficult to determine the function’s properties from its image.
BibTeX:
@inproceedings{LewBaKaKe:2016,
author = {Yoad Lewenberg and Yoram Bachrach and Ian Kash and Peter Key},
title = {Using Convolutional Neural Networks to Analyze Function Properties from Images},
booktitle = {Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence},
publisher = {AAAI Press},
year = {2016},
pages = {4363--4364}
}
PDF
Pricing the Cloud Ian Kash and Peter Key. IEEE Internet Computing. January 2016. Vol. 20, pp. 36-43. IEEE – Institute of Electrical and Electronics Engineers.
Abstract: Current cloud pricing schemes are fairly simple, but what’s the future of cloud pricing? Here, the authors discuss the economic issues shaping the cloud marketplace and address the open questions these issues yield. They then explore what the current state of research in economics and computer science has to say about some of these questions and how this relates to the future evolution of cloud pricing.
BibTeX:
@article{KashKey:2016,
author = {Ian Kash and Peter Key},
title = {Pricing the Cloud},
journal = {IEEE Internet Computing},
publisher = {IEEE – Institute of Electrical and Electronics Engineers},
year = {2016},
volume = {20},
pages = {36-43},
doi = {10.1109/MIC.2016.4}
}
[DOI] PDF Preprint
Non-Myopic Negotiators See What’s Best Yair Zick, Yoram Bachrach, Ian Kash and Peter Key. In Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI). July 2015.
Abstract: We consider revenue negotiation problems in iterative settings. In our model, a group of agents has some initial resources, used in order to generate revenue. Agents must agree on some way of dividing resources, but there’s a twist. At every time-step, the revenue shares received at time t are agent resources at time t + 1, and the game is repeated. The key issue here is that the way resources are shared has a dramatic effect on long-term social welfare, so in order to maximize individual long-term revenue one must consider the welfare of others, a behavior not captured by other models of cooperation and bargaining. Our work focuses on homogeneous production functions. We identify conditions that ensure that the socially optimal outcome is an ε-Nash equilibrium. We apply our results to some families of utility functions, and discuss their strategic implications.
BibTeX:
@inproceedings{ZicBaKaKe:2015,
author = {Yair Zick and Yoram Bachrach and Ian Kash and Peter Key},
title = {Non-Myopic Negotiators See What's Best},
booktitle = {Proceedings of the 24th International Joint Conference on Artificial Intelligence},
journal = {IJCAI},
year = {2015}
}
PDF
Mechanism Design for Mixed Ads Yoram Bachrach, Sofia Ceppi, Ian Kash, Peter Key and Mohammad Reza Khani. In Ad Auctions Workshop. January 2015.
Abstract: We propose a hybrid auction mechanism for sponsored search, where bidders can be truthful or not, and are accordingly treated differently. Our class of hybrid mechanisms gives incentives for non-truthful bidders to bid truthfully, while behaving as a non-truthful auction if no bidders are truthful. Our motivation is that the Generalized Second Price (GSP) auction (the current mechanism of choice) has appealing properties when ads are simple (text based and identical in size). But GSP does not generalize to richer ad settings, whereas truthful mechanisms, such as VCG, do. Hence there are incentives for search platforms to migrate to truthful mechanisms, but a straight switch from GSP to VCG either requires all bidders to instantly bid truthfully or incurs significant revenue loss. We introduce a transitional mechanism which encourages advertisers to update their bids to their valuations, while mitigating revenue loss. The mechanism is equivalent to GSP when nobody has updated their bid, is equivalent to VCG when everybody has updated, and it has the same allocation and payments as the original GSP if bids were in the minimum symmetric Nash equilibrium. In settings where both GSP ads and truthful (TF) ads exist, it is easier to propose a payment function than an allocation function. We give a general framework for these settings to characterize payment functions which guarantee incentive compatibility of truthful ads, by requiring that the payment functions satisfy two properties. Finally, we compare the revenue of our transitional mechanism with the revenues of GSP and VCG mechanisms when run on a sample of Bing data.
BibTeX:
@inproceedings{BaCeKaKeK:2015,
author = {Yoram Bachrach and Sofia Ceppi and Ian Kash and Peter Key and Mohammad Reza Khani},
title = {Mechanism Design for Mixed Ads},
booktitle = {Ad Auctions Workshop},
year = {2015}
}
PDF
Optimising Trade-offs Among Stakeholders in Ad Auctions Yoram Bachrach, Sofia Ceppi, Ian A. Kash, Peter Key and David Kurokawa. In Proceedings of the Fifteenth ACM Conference on Economics and Computation. New York, NY, USA 2014, pp. 75-92. ACM.
Abstract: We examine trade-offs among stakeholders in ad auctions. Our metrics are the revenue for the utility of the auctioneer, the number of clicks for the utility of the users and the welfare for the utility of the advertisers. We show how to optimize linear combinations of the stakeholder utilities, showing that these can be tackled through a GSP auction with a per-click reserve price. We then examine constrained optimization of stakeholder utilities. We use simulations and analysis of real-world sponsored search auction data to demonstrate the feasible trade-offs, examining the effect of changing the allowed number of ads on the utilities of the stakeholders. We investigate both short term effects, when the players do not have the time to modify their behavior, and long term equilibrium conditions. Finally, we examine a combinatorially richer constrained optimization problem, where there are several possible allowed configurations (templates) of ad formats. This model captures richer ad formats, which allow using the available screen real estate in various ways. We show that two natural generalizations of the GSP auction rules to this domain are poorly behaved, resulting in not having a symmetric Nash equilibrium or having one with poor welfare. We also provide positive results for restricted cases.
BibTeX:
@inproceedings{BaCeKaKeK:2014,
author = {Bachrach, Yoram and Ceppi, Sofia and Kash, Ian A. and Key, Peter and Kurokawa, David},
title = {Optimising Trade-offs Among Stakeholders in Ad Auctions},
booktitle = {Proceedings of the Fifteenth ACM Conference on Economics and Computation},
publisher = {ACM},
year = {2014},
pages = {75--92},
doi = {10.1145/2600057.2602879}
}
[DOI] PDF PDF from ACM
Building a personalized tourist attraction recommender system using crowdsourcing Yoram Bachrach, Sofia Ceppi, Ian A. Kash, Peter Key, Filip Radlinski, Ely Porat, Michael Armstrong and Vijay Sharma. In Proceedings of the 2014 international conference on Autonomous agents and multi-agent systems. 2014, pp. 1631-1632.
Abstract: We demonstrate how crowdsourcing can be used to automatically build a personalized tourist attraction recommender system, which tailors recommendations to specific individuals, so different people who use the system each get their own list of recommendations, appropriate to their own traits. Recommender systems crucially depend on the availability of reliable and large scale data that allows predicting how a new individual is likely to rate items from the catalog of possible items to recommend. We show how to automate the process of generating this data using crowdsourcing, so that such a system can be built even when such a dataset is not initially available. We first find possible tourist attractions to recommend by scraping such information from Wikipedia. Next, we use crowdsourced workers to filter the data, then provide their opinions regarding these items. Finally, we use machine learning methods to predict how new individuals are likely to rate each attraction, and recommend the items with the highest predicted ratings.
BibTeX:
@inproceedings{BaCKKRPAS:2014,
author = {Bachrach, Yoram and Ceppi, Sofia and Kash, Ian A and Key, Peter and Radlinski, Filip and Porat, Ely and Armstrong, Michael and Sharma, Vijay},
title = {Building a personalized tourist attraction recommender system using crowdsourcing},
booktitle = {Proceedings of the 2014 international conference on Autonomous agents and multi-agent systems},
year = {2014},
pages = {1631--1632},
url = {http://www.aamas-conference.org/AAMAS/aamas2014/proceedings/aamas/p1631.pdf}
}
[External PDF] PDF
The Shared Assignment Game and Applications to Pricing in Cloud Computing Gideon Blocq, Yoram Bachrach and Peter Key. In Proceedings of the 2014 International Conference on Autonomous Agents and Multi-agent Systems. Richland, SC 2014, pp. 605-612. International Foundation for Autonomous Agents and Multiagent Systems.
Abstract: We propose an extension to the Assignment Game in which sellers provide indivisible heterogeneous goods to their buyers. Each good takes up various amounts of resources and each seller has capacity constraints with respect to the total amount of resources it can provide. Hence, the total amount of goods that the seller can provide is dependent on the set of buyers. In this model, we first demonstrate that the core is empty and proceed to suggest a fair allocation of the resulting utility of an optimal match, using the Shapley value. We then examine scenarios where the worth and resource demands of each good are private information of selfish buyers and consider ways in which they can manipulate the system. We show that such Shapley value manipulations are bounded in terms of the gain an agent can achieve by using them. Finally, since this model can be of use when considering elastic resource allocation and utility sharing in cloud computing domains, we provide simulation results which show our approach maximizes welfare and, when used as a pricing scheme, can also increase the revenue of the cloud server providers over what is achieved with the widely-used fixed pricing scheme.
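For readers unfamiliar with the Shapley value used above, here is a generic exact computation in Python (exponential in the number of players, so only suitable for toy examples); the characteristic function below is an invented stand-in, not the paper's model:

from itertools import permutations
from math import factorial

def shapley_values(players, worth):
    # Exact Shapley values: average each player's marginal contribution over
    # all join orders; 'worth' maps a frozenset of players to a number.
    shapley = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            shapley[p] += worth(coalition | {p}) - worth(coalition)
            coalition = coalition | {p}
    n_fact = factorial(len(players))
    return {p: s / n_fact for p, s in shapley.items()}

# Invented characteristic function: a seller with capacity for two goods and
# three buyers who each add value 1 when served.
def worth(S):
    return float(min(len(S - {"seller"}), 2)) if "seller" in S else 0.0

print(shapley_values(["seller", "b1", "b2", "b3"], worth))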
BibTeX:
@inproceedings{BloBacKey:2014,
author = {Blocq, Gideon and Bachrach, Yoram and Key, Peter},
title = {The Shared Assignment Game and Applications to Pricing in Cloud Computing},
booktitle = {Proceedings of the 2014 International Conference on Autonomous Agents and Multi-agent Systems},
publisher = {International Foundation for Autonomous Agents and Multiagent Systems},
year = {2014},
pages = {605--612},
url = {http://dl.acm.org/citation.cfm?id=2615731.2615829}
}
[URL] PDF
The Architecture of Innovation: Tracking Face-to-face Interactions with Ubicomp Technologies Chloë Brown, Christos Efstratiou, Ilias Leontiadis, Daniele Quercia, Cecilia Mascolo, James Scott and Peter Key. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing. New York, NY, USA 2014, pp. 811-822. ACM.
Abstract: The layouts of the buildings we live in shape our everyday lives. In office environments, building spaces affect employees’ communication, which is crucial for productivity and innovation. However, accurate measurement of how spatial layouts affect interactions is a major challenge and traditional techniques may not give an objective view. We measure the impact of building spaces on social interactions using wearable sensing devices. We study a single organization that moved between two different buildings, affording a unique opportunity to examine how space alone can affect interactions. The analysis is based on two large scale deployments of wireless sensing technologies: short-range, lightweight RFID tags capable of detecting face-to-face interactions. We analyze the traces to study the impact of the building change on social behavior, which represents a first example of using ubiquitous sensing technology to study how the physical design of two workplaces combines with organizational structure to shape contact patterns.
BibTeX:
@inproceedings{BrEfLQMSK:2014,
author = {Brown, Chloë and Efstratiou, Christos and Leontiadis, Ilias and Quercia, Daniele and Mascolo, Cecilia and Scott, James and Key, Peter},
title = {The Architecture of Innovation: Tracking Face-to-face Interactions with Ubicomp Technologies},
booktitle = {Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing},
publisher = {ACM},
year = {2014},
pages = {811--822},
note = {arXiv preprint arXiv:1406.6829},
url = {http://doi.acm.org/10.1145/2632048.2632056},
doi = {10.1145/2632048.2632056}
}
[DOI] PDF PDF from ACM
Incentivized optimal advert assignment via utility decomposition Frank Kelly, Peter Key and Neil Walton. In Proceedings of the fifteenth ACM conference on Economics and computation. 2014, pp. 527-527.
BibTeX:
@inproceedings{KelKeyWal:2014,
author = {Kelly, Frank and Key, Peter and Walton, Neil},
title = {Incentivized optimal advert assignment via utility decomposition},
booktitle = {Proceedings of the fifteenth ACM conference on Economics and computation},
year = {2014},
pages = {527--527},
note = {arXiv preprint arXiv:1304.7642v1}
}
PDF
Efficient regret bounds for online bid optimisation in budget-limited sponsored search auctions Long Tran-Thanh, Lampros Stavrogiannis, Victor Naroditskiy, Valentin Robu, Nicholas R. Jennings and Peter Key. In Proceedings of the 30th Conference on Uncertainty in AI. Quebec, Canada July 2014.
BibTeX:
@inproceedings{TrStNaRJK:2014,
author = {Tran-Thanh, Long and Stavrogiannis, Lampros and Naroditskiy, Victor and Robu, Valentin and Jennings, Nicholas R and Key, Peter},
title = {Efficient regret bounds for online bid optimisation in budget-limited sponsored search auctions},
booktitle = {Proceedings of the 30th Conference on Uncertainty in AI},
year = {2014}
}
PDF
Ranking and Tradeoffs in Sponsored Search Auctions Ben Roberts, Dinan Gunawardena, Ian A. Kash and Peter Key. In Proc. of the Fourteenth ACM Conf. on Electronic Commerce. 2013, pp. 751-766. ACM.
Abstract: In a sponsored search auction, decisions about how to rank ads impose tradeoffs between objectives such as revenue and welfare. In this paper, we examine how these tradeoffs should be made. We begin by arguing that the most natural solution concept to evaluate these tradeoffs is the lowest symmetric Nash equilibrium (SNE). As part of this argument, we generalize the well known connection between the lowest SNE and the VCG outcome. We then propose a new ranking algorithm, loosely based on the revenue-optimal auction, that uses a reserve price to order the ads (not just to filter them) and give conditions under which it raises more revenue than simply applying that reserve price. Finally, we conduct extensive simulations examining the tradeoffs enabled by different ranking algorithms and show that our proposed algorithm enables superior operating points by a variety of metrics.
BibTeX:
@inproceedings{RobGuKaKe:2013,
author = {Roberts, Ben and Gunawardena, Dinan and Kash, Ian A. and Key, Peter},
title = {Ranking and Tradeoffs in Sponsored Search Auctions},
booktitle = {Proc. of the Fourteenth ACM Conf. on Electronic Commerce},
publisher = {ACM},
year = {2013},
pages = {751--766},
url = {http://doi.acm.org/10.1145/2482540.2482568},
doi = {10.1145/2482540.2482568}
}
[DOI] PDF PDF from ACM
Dwelling on the Negative: Incentivizing Effort in Peer Prediction. Jens Witkowski, Yoram Bachrach, Peter Key and David C. Parkes. In HCOMP. 2013. AAAI.
Abstract: Agents are asked to rank two objects in a setting where effort is costly and agents differ in quality (which is the probability that they can identify the correct, ground truth, ranking). We study simple output-agreement mechanisms that pay an agent in the case she agrees with the report of another, and potentially penalize for disagreement through a negative payment. Assuming access to a quality oracle, able to determine whether an agent’s quality is above a given threshold, we design a payment scheme that aligns incentives so that agents whose quality is above this threshold participate and invest effort. Precluding negative payments leads the expected cost of this quality-oracle mechanism to increase by a factor of 2 to 5 relative to allowing both positive and negative payments. Dropping the assumption about access to a quality oracle, we further show that negative payments can be used to make agents with quality lower than the quality threshold choose not to participate, while those above continue to participate and invest effort. Through the appropriate choice of payments, any design threshold can be achieved. This self-selection mechanism has the same expected cost as the cost minimal quality-oracle mechanism, and thus when using the self-selection mechanism, perfect screening comes for free.
BibTeX:
@inproceedings{WitBaKePa:2013,
author = {Witkowski, Jens and Bachrach, Yoram and Key, Peter and Parkes, David C.},
editor = {Hartman, Björn and Horvitz, Eric},
title = {Dwelling on the Negative: Incentivizing Effort in Peer Prediction.},
booktitle = {HCOMP},
publisher = {AAAI},
year = {2013}
}
PDF
Hotspotting: A Probabilistic Graphical Model For Image Object Localization Through Crowdsourcing. Mahyar Salek, Yoram Bachrach and Peter Key. In AAAI-13. July 2013. AAAI.
Abstract: Object localization is an image annotation task which consists of finding the location of a target object in an image. It is common to crowdsource annotation tasks and aggregate responses to estimate the true annotation. While for other kinds of annotations consensus is simple and powerful, it cannot be applied to object localization as effectively due to the task’s rich answer space and inherent noise in responses. We propose a probabilistic graphical model to localize objects in images based on responses from the crowd. We improve upon natural aggregation methods such as the mean and the median by simultaneously estimating the difficulty level of each question and skill level of every participant. We empirically evaluate our model on crowdsourced data and show that our method outperforms simple aggregators both in estimating the true locations and in ranking participants by their ability. We also propose a simple adaptive sourcing scheme that works well for very sparse datasets.
BibTeX:
@inproceedings{SalBacKey:2013,
author = {Salek, Mahyar and Bachrach, Yoram and Key, Peter},
title = {Hotspotting-A Probabilistic Graphical Model For Image Object Localization Through Crowdsourcing.},
booktitle = {AAAI-13},
publisher = {AAAI},
year = {2013}
}
PDF
Fixed and market pricing for cloud services Vineet Abhishek, Ian A. Kash and Peter Key. In NetEcon. 2012.
Abstract: This paper considers two simple pricing schemes for selling cloud instances and studies the trade-off between them. We characterize the equilibrium for the hybrid system where arriving jobs can choose between fixed and market-based pricing. We provide theoretical and simulation based evidence suggesting that fixed price generates a higher expected revenue than the hybrid system.
BibTeX:
@inproceedings{AbhKasKey:2012,
author = {Abhishek, Vineet and Kash, Ian A and Key, Peter},
title = {Fixed and market pricing for cloud services},
booktitle = {NetEcon},
year = {2012},
note = {arXiv preprint arXiv:1201.5621}
}
PDF Updated version on arXiv
Budget optimization for sponsored search: Censored learning in MDPs Kareem Amin, Michael Kearns, Peter Key and Anton Schwaighofer. In Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence (UAI2012). 2012.
Abstract: We consider the budget optimization problem faced by an advertiser participating in repeated sponsored search auctions, seeking to maximize the number of clicks attained under that budget. We cast the budget optimization problem as a Markov Decision Process (MDP) with censored observations, and propose a learning algorithm based on the well-known Kaplan-Meier or product-limit estimator. We validate the performance of this algorithm by comparing it to several others on a large set of search auction data from Microsoft adCenter, demonstrating fast convergence to optimal performance.
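For readers unfamiliar with it, the Kaplan-Meier (product-limit) estimator mentioned above can be sketched in a few lines of Python; the data below are invented, and this is the generic estimator rather than the paper's full learning algorithm:

from collections import Counter

def kaplan_meier(times, observed):
    # Product-limit estimate of the survival function from possibly
    # right-censored data: times[i] is a value, observed[i] is True if it was
    # fully observed and False if censored (e.g., the budget ran out first).
    # Returns (t, S(t)) at each distinct event time.
    n = len(times)
    events = Counter(t for t, obs in zip(times, observed) if obs)
    all_counts = Counter(times)
    survival, at_risk, s = [], n, 1.0
    for t in sorted(all_counts):
        d = events.get(t, 0)
        if d > 0:
            s *= 1.0 - d / at_risk
            survival.append((t, s))
        at_risk -= all_counts[t]
    return survival

# Illustrative data only: some observations fully seen, some censored.
times    = [3, 5, 5, 6, 8, 8, 9, 12]
observed = [True, True, False, True, True, True, False, True]
print(kaplan_meier(times, observed))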
BibTeX:
@inproceedings{AmiKeKeSc:2012a,
author = {Amin, Kareem and Kearns, Michael and Key, Peter and Schwaighofer, Anton},
title = {Budget optimization for sponsored search: Censored learning in MDPs},
booktitle = {Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence (UAI2012)},
year = {2012},
note = {arXiv preprint arXiv:1210.4847}
}
[PDF] Poster in PDF
Quality Expectation-Variance Tradeoffs in Crowdsourcing Contests. Xi Alice Gao, Yoram Bachrach, Peter Key and Thore Graepel. In AAAI. 2012.
Abstract: We examine designs for crowdsourcing contests, where participants compete for rewards given to superior solutions of a task. We theoretically analyze tradeoffs between the expectation and variance of the principal’s utility (i.e. the best solution’s quality), and empirically test our theoretical predictions using a controlled experiment on Amazon Mechanical Turk. Our evaluation method is also crowdsourcing based and relies on the peer prediction mechanism. Our theoretical analysis shows an expectation-variance tradeoff of the principal’s utility in such contests through a Pareto efficient frontier. In particular, we show that the simple contest with 2 authors and the 2-pair contest have good theoretical properties. In contrast, our empirical results show that the 2-pair contest is the superior design among all designs tested, achieving the highest expectation and lowest variance of the principal’s utility.
BibTeX:
@inproceedings{GaoBaKeGr:2012,
author = {Gao, Xi Alice and Bachrach, Yoram and Key, Peter and Graepel, Thore},
title = {Quality Expectation-Variance Tradeoffs in Crowdsourcing Contests.},
booktitle = {AAAI},
year = {2012}
}
PDF
Repeated auctions under budget constraints: Optimal bidding strategies and equilibria Ramakrishna Gummadi, Peter Key and Alexandre Proutiere. In Eighth Workshop on Ad Auctions. 2012.
Abstract: How should agents bid in repeated sequential auctions when they are budget constrained? A motivating example is that of sponsored search auctions, where advertisers bid in a sequence of generalized second price (GSP) auctions. These auctions, specifically in the context of sponsored search, have many idiosyncratic features that distinguish them from other models of sequential auctions: First, each bidder competes in a large number of auctions, where each auction is worth very little. Second, the total bidder population is often large, which means it is unrealistic to assume that the bidders could possibly optimize their strategy by modeling specific opponents. Third, the presence of a virtually unlimited supply of these auctions means bidders are necessarily expense constrained. Motivated by these three factors, we first frame the generic problem as a discounted Markov Decision Process for which the environment is independent and identically distributed over time. We also allow the agents to receive income to augment their budget at a constant rate. We first provide a structural characterization of the associated value function and the optimal bidding strategy, which specifies the extent to which agents underbid from their true valuation due to long term budget constraints. We then provide an explicit characterization of the optimal bid shading factor in the limiting regime where the discount rate tends to zero, by identifying the limit of the value function in terms of the solution to a differential equation that can be solved efficiently. Finally, we prove the existence of Mean Field Equilibria for both the repeated second price and GSP auctions with a large number of bidders.
BibTeX:
@inproceedings{GumKeyPro:2012,
author = {Gummadi, Ramakrishna and Key, Peter and Proutiere, Alexandre},
title = {Repeated auctions under budget constraints: Optimal bidding strategies and equilibria},
booktitle = {Eighth Workshop on Ad Auctions},
year = {2012}
}
PDF at SSRN
Congestion Games with Agent Failures. Reshef Meir, Moshe Tennenholtz, Yoram Bachrach and Peter Key. In AAAI. 2012.
Abstract: We propose a natural model for agent failures in congestion games. In our model, each of the agents may fail to participate in the game, introducing uncertainty regarding the set of active agents. We examine how such uncertainty may change the Nash equilibria (NE) of the game. We prove that although the perturbed game induced by the failure model is not always a congestion game, it still admits at least one pure Nash equilibrium. Then, we turn to examine the effect of failures on the maximal social cost in any NE of the perturbed game. We show that in the limit case where failure probability is negligible new equilibria never emerge, and that the social cost may decrease but it never increases. For the case of non-negligible failure probabilities, we provide a full characterization of the maximal impact of failures on the social cost under worst-case equilibrium outcomes.
BibTeX:
@inproceedings{MeiTeBaKe:2012,
author = {Meir, Reshef and Tennenholtz, Moshe and Bachrach, Yoram and Key, Peter},
title = {Congestion Games with Agent Failures.},
booktitle = {AAAI},
year = {2012}
}
PDF
Stochastic Variability in Sponsored Search Auctions: Observations and Models Furcy Pin and Peter Key. In Proceedings of the 12th ACM Conference on Electronic Commerce. New York, NY, USA 2011, pp. 61-70. ACM.
Abstract: Sponsored search advertisement slots are currently sold via Generalized Second Price (GSP) auctions. Despite the simplicity of their rules, these auctions are far from being fully understood. Our observations on real ad-auction data show that advertisers usually enter many distinct auctions with different opponents and with varying parameters. We describe some of our findings from these observations and propose a simple probabilistic model taking them into account. This model can be used to predict the number of clicks received by the advertisers and the total price they can expect to pay depending on their bid, or even to estimate the players’ valuations, all at a very low computational cost.
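A toy illustration of predicting expected clicks and spend as a function of the bid, in the spirit of the abstract; the simple second-price model, rival-bid distribution and click-through rate below are invented, not the model fitted in the paper:

import random
random.seed(0)

def clicks_and_spend(my_bid, rival_bid_sampler, ctr=0.05, n=100_000):
    # Per-auction prediction under an invented model: win a second-price
    # auction when my_bid beats the highest rival bid, and pay that rival bid
    # whenever a click occurs.
    clicks, spend = 0.0, 0.0
    for _ in range(n):
        rival = rival_bid_sampler()
        if my_bid > rival and random.random() < ctr:
            clicks += 1
            spend += rival
    return clicks / n, spend / n   # expected clicks and spend per auction

# Highest competing bid drawn from an illustrative log-normal distribution.
print(clicks_and_spend(1.5, lambda: random.lognormvariate(0.0, 0.5)))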
BibTeX:
@inproceedings{PinKey:2011,
author = {Pin, Furcy and Key, Peter},
title = {Stochastic Variability in Sponsored Search Auctions: Observations and Models},
booktitle = {Proceedings of the 12th ACM Conference on Electronic Commerce},
publisher = {ACM},
year = {2011},
pages = {61--70},
url = {http://doi.acm.org/10.1145/1993574.1993586},
doi = {10.1145/1993574.1993586}
}
[DOI] PDF PDF from ACM
Dynamic Channel, Rate Selection and Scheduling for White Spaces Bozidar Radunovic, Alexandre Proutiere, Dinan Gunawardena and Peter Key. In Proceedings of the Seventh COnference on Emerging Networking EXperiments and Technologies. New York, NY, USA 2011, pp. 2:1-2:12. ACM.
Abstract: We investigate dynamic channel, rate selection and scheduling for wireless systems which exploit the large number of channels available in the White-space spectrum. We first present measurements of radio channel characteristics from an indoor testbed operating in the 500 to 600 MHz band and comprising 11 channels. We observe significant and unpredictable (non-stationary) variations in the quality of these channels, and demonstrate the potential benefit in throughput from tracking the best channel and also from optimally adapting the transmission rate. We propose adaptive learning schemes able to efficiently track the best channel and rate for transmission, even in scenarios with non-stationary channel condition variations. We also describe a joint scheduling scheme for providing fairness in an Access Point scenario. Finally, we implement the proposed adaptive scheme in our testbed, and demonstrate that it achieves significant throughput improvement (typically from 40% to 100%) compared to traditional fixed channel selection schemes.
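As a rough illustration of tracking the best (channel, rate) pair under non-stationary conditions, here is a generic sliding-window bandit sketch in Python; the channel model, arms, window size and reward scaling are all invented, and this is not the scheme proposed in the paper:

import math, random
random.seed(1)

CHANNELS = range(3)                   # invented channel ids
RATES = [6.0, 12.0, 24.0]             # invented transmission rates (Mbps)
WINDOW = 50                           # sliding window, to follow drifting channels

history = {(c, r): [] for c in CHANNELS for r in RATES}

def success_prob(c, r, t):
    # Stand-in for the real radio environment: per-channel quality drifts over time.
    base = 0.9 - 0.25 * c - 0.02 * RATES.index(r) + 0.2 * math.sin(t / 200 + c)
    return min(max(base, 0.05), 0.95)

def pick_arm(t):
    best, best_score = None, float("-inf")
    for arm, h in history.items():
        recent = h[-WINDOW:]
        if not recent:
            return arm                # sample untried (channel, rate) pairs first
        mean = sum(recent) / len(recent)
        bonus = math.sqrt(2 * math.log(t + 1) / len(recent))
        if mean + bonus > best_score:
            best, best_score = arm, mean + bonus
    return best

total = 0.0
for t in range(5000):
    c, r = pick_arm(t)
    throughput = r if random.random() < success_prob(c, r, t) else 0.0
    history[(c, r)].append(throughput / max(RATES))   # rewards normalised to [0, 1]
    total += throughput
print("average throughput (Mbps):", total / 5000)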
BibTeX:
@inproceedings{RadPrGuKe:2011,
author = {Radunovic, Bozidar and Proutiere, Alexandre and Gunawardena, Dinan and Key, Peter},
title = {Dynamic Channel, Rate Selection and Scheduling for White Spaces},
booktitle = {Proceedings of the Seventh COnference on Emerging Networking EXperiments and Technologies},
publisher = {ACM},
year = {2011},
pages = {2:1--2:12},
url = {http://doi.acm.org/10.1145/2079296.2079298},
doi = {10.1145/2079296.2079298}
}
[DOI] PDF PDF from ACM
Efficient and Fair MAC for Wireless Networks with Self-interference Cancellation Nikhil Singh, Dinan Gunawardena, Alexandre Proutiere, Bozidar Radunovic, Horia Vlad Balan and Peter Key. In Proc. of the 2011 International Symposium on Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks. 2011. , pp. 94-101. IEEE.
Abstract: Recent advances in PHY layer design demonstrated efficient self-interference cancellation and full-duplex in a single band. Building a MAC that exploits self-interference cancellation is a challenging task. Links can be scheduled concurrently, but only if they either (i) don’t interfere or (ii) allow for self-interference cancellation. Two issues arise: Firstly, it is difficult to construct a schedule that fully exploits the potentials for self-interference cancellation for arbitrary traffic patterns. Secondly, designing an efficient and fair distributed MAC is a daunting task; the issues become even more pronounced when scheduling under the constraints. We propose ContraFlow, a novel MAC that exploits the benefits of self-interference cancellation and increases spatial reuse. We use full-duplex to eliminate hidden terminals, and we rectify decentralized coordination inefficiencies among nodes, thereby improving fairness. Using measurements and simulations we illustrate the performance gains achieved when ContraFlow is used and we obtain both a throughput increase over current systems, as well as a significant improvement in fairness.
BibTeX:
@inproceedings{SiGuPrRBK:2011,
author = {Singh, Nikhil and Gunawardena, Dinan and Proutiere, Alexandre and Radunovic, Bozidar and Balan, Horia Vlad and Key, Peter},
title = {Efficient and Fair MAC for Wireless Networks with Self-interference Cancellation},
booktitle = {Proc. of the 2011 International Symposium on Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks},
publisher = {IEEE},
year = {2011},
pages = {94-101},
doi = {10.1109/WIOPT.2011.5930070}
}
[DOI] PDF
A Cooperative Approach to Collusion in Auctions Yoram Bachrach, Morteza Zadimoghaddam and Peter Key. SIGecom Exch.. New York, NY, USA March 2011. Vol. 10(1), pp. 17-22. ACM.
Abstract: The elegant Vickrey Clarke Groves (VCG) mechanism is well-known for the strong properties it offers: dominant truth-revealing strategies, efficiency and weak budget-balance in quite general settings. Despite this, it suffers from several drawbacks, prominently susceptibility to collusion. By jointly setting their bids, colluders may increase their utility by achieving lower prices for their items. The colluders can use monetary transfers to share this utility, but they must reach an agreement regarding their actions. We analyze the agreements that are likely to arise through a cooperative game theoretic approach, transforming the auction setting into a cooperative game. We examine both the setting of a multi-unit auction as well as path procurement auctions.
BibTeX:
@article{BacZadKey:2011,
author = {Bachrach, Yoram and Zadimoghaddam, Morteza and Key, Peter},
title = {A Cooperative Approach to Collusion in Auctions},
journal = {SIGecom Exch.},
publisher = {ACM},
year = {2011},
volume = {10},
number = {1},
pages = {17--22},
url = {http://doi.acm.org/10.1145/1978721.1978726},
doi = {10.1145/1978721.1978726}
}
[DOI] PDF PDF from ACM
Path Selection and Multipath Congestion Control Peter Key, Laurent Massoulié and Don Towsley. Commun. ACM. New York, NY, USA January 2011. Vol. 54(1), pp. 109-116. ACM.
Abstract: In this paper, we investigate the benefits that accrue from the use of multiple paths by a session coupled with rate control over those paths. In particular, we study data transfers under two classes of multipath control, coordinated control where the rates over the paths are determined as a function of all paths, and uncoordinated control where the rates are determined independently over each path. We show that coordinated control exhibits desirable load balancing properties; for a homogeneous static random paths scenario, we show that the worst-case throughput performance of uncoordinated control behaves as if each user has but a single path (scaling like log(log(N))/log(N) where N is the system size, measured in number of resources), whereas coordinated control yields a worst-case throughput allocation bounded away from zero. We then allow users to change their set of paths and introduce the notion of a Nash equilibrium. We show that both coordinated and uncoordinated control lead to Nash equilibria corresponding to desirable welfare maximizing states, provided in the latter case, the rate controllers over each path do not exhibit any round-trip time (RTT) bias (unlike TCP Reno). Finally, we show in the case of coordinated control that more paths are better, leading to greater welfare states and throughput capacity, and that simple path reselection policies that shift to paths with higher net benefit can achieve these states.
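A minimal sketch of coordinated multipath rate control in the spirit described above, using a toy utility and invented congestion prices (not the controllers analyzed in the paper):

def coordinated_rates(price, steps=20000, eta=1e-3):
    # Toy coordinated controller: gradient ascent on log(x1 + x2) minus the
    # integral of each path's congestion price, so every path rate reacts to
    # the utility of the *total* rate minus its own path price.
    x = [0.1, 0.1]
    for _ in range(steps):
        total = x[0] + x[1]
        for r in range(2):
            grad = 1.0 / total - price(r, x[r])
            x[r] = max(x[r] + eta * grad, 1e-6)
    return x

# Path 1 is twice as congestible as path 0 (invented prices), so the
# coordinated controller shifts most of the traffic onto path 0.
print(coordinated_rates(lambda r, x_r: (1.0 + r) * x_r))   # roughly [0.82, 0.41]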
BibTeX:
@article{KeyMasTow:2011,
author = {Key, Peter and Massoulié, Laurent and Towsley, Don},
title = {Path Selection and Multipath Congestion Control},
journal = {Commun. ACM},
publisher = {ACM},
year = {2011},
volume = {54},
number = {1},
pages = {109--116},
url = {http://doi.acm.org/10.1145/1866739.1866762},
doi = {10.1145/1866739.1866762}
}
[DOI] PDF from ACM
Who’s Hogging the Bandwidth?: The Consequences Of Revealing The Invisible In the Home Marshini Chetty, Richard Banks, Richard Harper, Tim Regan, Abigail Sellen, Christos Gkantsidis, Thomas Karagiannis and Peter Key. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. New York, NY, USA 2010, pp. 659-668. ACM.
Abstract: As more technologies enter the home, householders are burdened with the task of digital housekeeping—managing and sharing digital resources like bandwidth. In response to this, we created a domestic tool for bandwidth management called Home Watcher. Our field trial of Home Watcher showed that when resource contention amongst different household members is made visible, people’s understanding of bandwidth changes and household politics are revealed. In this paper, we describe the consequences of showing real time resource usage in a home, and how this varies depending on the social make up of the household.
BibTeX:
@inproceedings{ChBHRSGKK:2010,
author = {Marshini Chetty and Richard Banks and Richard Harper and Tim Regan and Abigail Sellen and Christos Gkantsidis and Thomas Karagiannis and Peter Key},
title = {Who's Hogging the Bandwidth?: The Consequences Of Revealing The Invisible In the Home},
booktitle = {Proceedings of the SIGCHI Conference on Human Factors in Computing Systems},
publisher = {ACM},
year = {2010},
pages = {659--668},
url = {http://doi.acm.org/10.1145/1753326.1753423},
doi = {10.1145/1753326.1753423}
}
[DOI] PDF PDF from ACM
Rethinking Indoor Wireless Mesh Design: Low Power, Low Frequency, Full-duplex Bozidar Radunovic, Dinan Gunawardena, Peter Key, Alexandre Proutiere, Nikhil Singh, Vlad Balan and Gerald Dejean. In WiMesh (SECON Workshop). 2010. IEEE.
Abstract: Existing indoor WiFi networks in the 2.5 GHz and 5 GHz bands use too much transmit power, needed because the high carrier frequency limits signal penetration and connectivity. Instead, we propose a novel indoor wireless mesh design paradigm, based on Low Frequency, using the newly freed white spaces previously used as analogue TV bands, and Low Power – 100 times less power than currently used. Preliminary experiments show that this maintains a similar level of connectivity and performance to existing networks. It also yields more uniform connectivity, thus simplifies MAC and routing protocol design. We also advocate full-duplex networking in a single band, which becomes possible in this setting (because we operate at low frequencies). It potentially doubles the throughput of each link and eliminates hidden terminals.
BibTeX:
@inproceedings{RaGuKPSBD:2010,
author = {Bozidar Radunovic and Dinan Gunawardena and Peter Key and Alexandre Proutiere and Nikhil Singh and Vlad Balan and Gerald Dejean},
title = {Rethinking Indoor Wireless Mesh Design: Low Power, Low Frequency, Full-duplex},
booktitle = {WiMesh (SECON Workshop)},
publisher = {IEEE},
year = {2010}
}
PDF
Toward Practical Opportunistic Routing with Intra-session Network Coding for Mesh Networks Bozidar Radunovic, Christos Gkantsidis, Peter Key and Pablo Rodriguez. IEEE/ACM Transactions on Networking. Piscataway, NJ, USA April 2010. Vol. 18(2), pp. 420-433. IEEE Press.
Abstract: We consider opportunistic routing in wireless mesh networks. We exploit the inherent diversity of the broadcast nature of wireless by making use of multi-path routing. We present a novel optimization framework for opportunistic routing based on network utility maximization (NUM) that enables us to derive optimal flow control, routing, scheduling, and rate adaptation schemes, where we use network coding to ease the routing problem. All previous work on NUM assumed unicast transmissions; however, the wireless medium is by its nature broadcast and a transmission will be received by multiple nodes. The structure of our design is fundamentally different; this is due to the fact that our link rate constraints are defined per broadcast region instead of links in isolation. We prove optimality and derive a primal-dual algorithm that lays the basis for a practical protocol. Optimal MAC scheduling is difficult to implement, and we use 802.11-like random scheduling rather than optimal in our comparisons. Under random scheduling, our protocol becomes fully decentralized (we assume ideal signaling). The use of network coding introduces additional constraints on scheduling, and we propose a novel scheme to avoid starvation. We simulate realistic topologies and show that we can achieve 20-200% throughput improvement compared to single path routing, and several times compared to a recent related opportunistic protocol (MORE).
<id=”bib_RadGkKeRo:2010″ class=”bibtex”>
BibTeX:
@article{RadGkKeRo:2010,
author = {Bozidar Radunovic and Christos Gkantsidis and Peter Key and Pablo Rodriguez},
title = {Toward Practical Opportunistic Routing with Intra-session Network Coding for Mesh Networks},
journal = {IEEE/ACM Transactions on Networking},
publisher = {IEEE Press},
year = {2010},
volume = {18},
number = {2},
pages = {420--433},
doi = {10.1109/tnet.2009.2030682}
}
[DOI] PDF from ACM
Traffic Management and Resource Allocation in Small Wired/Wireless Networks Christos Gkantsidis, Thomas Karagiannis, Peter Key, Bozidar Radunovic, Elias Raftopoulos and D. Manjunath. In Proceedings of the 5th International Conference on Emerging Networking Experiments and Technologies. New York, NY, USA 2009. , pp. 265-276. ACM.
<id=”absGkKaKeRRM:2009″ class=”abstract”>Abstract: We consider the problem of traffic management in small networks with both wireless and wired devices, connected to the Internet through a single gateway. Examples of such networks are small office networks or residential networks, where typically traffic management is limited to flow prioritization through port-based filtering. We propose a practical resource allocation framework that provides simple mechanisms to applications and users to enable traffic management functionality currently not present due to the distributed nature of the system and various technology or protocol limitations. To allow for control irrespective of whether traffic flows cross wireless, wired or even broadband links, the proposed framework jointly optimizes rate allocations across wireless and wired devices in a weighted fair manner. Additionally, we propose a model for estimating the achievable capacity regions in wireless networks to overcome the absence of the queue information. This model is used by the controller to achieve a specific rate allocation. We evaluate a decentralized, host-based implementation of the proposed framework. The controller is incrementally deployable by not requiring modifications to existing network protocols and equipment or the wireless MAC. Using analytical methods and experimental results with realistic traffic, we show that our controller is stable with fast convergence for both UDP and TCP traffic, achieves weighted fairness, and mitigates scheduling inefficiencies of the existing hardware.
<id=”bib_GkKaKeRRM:2009″ class=”bibtex”>
BibTeX:
@inproceedings{GkKaKeRRM:2009,
author = {Gkantsidis, Christos and Karagiannis, Thomas and Key, Peter and Radunovic, Bozidar and Raftopoulos, Elias and Manjunath, D.},
title = {Traffic Management and Resource Allocation in Small Wired/Wireless Networks},
booktitle = {Proceedings of the 5th International Conference on Emerging Networking Experiments and Technologies},
publisher = {ACM},
year = {2009},
pages = {265--276},
url = {http://doi.acm.org/10.1145/1658939.1658970},
doi = {10.1145/1658939.1658970}
}
[DOI] PDF
Routing Games with Elastic Traffic Peter Key and Alexandre Proutiere. ACM SIGMETRICS Performance Evaluation Review. September 2009.
<id=”absKeyProu:2009″ class=”abstract”>Abstract: In this paper, we introduce and investigate a novel class of multipath routing games with elastic traffic. Users open one or more connections along different feasible paths from source to destination and act selfishly, seeking to transfer data as fast as possible. Users only control their routing choices, and once these choices have been made, the connection rates are elastic and determined via congestion control algorithms (e.g. TCP) which ultimately maximize a certain notion of the network utility. We analyze the existence and the performance of the Nash Equilibria (NEs) of the resulting routing games.
<id=”bib_KeyProu:2009″ class=”bibtex”>
BibTeX:
@article{KeyProu:2009,
author = {Peter Key and Alexandre Proutiere},
title = {Routing Games with Elastic Traffic},
journal = {ACM SIGMETRICS Performance Evaluation Review}
year = {2009}
}
PDF (Corrected)
Performance Analysis of Contention Based Medium Access Control Protocols Gaurav Sharma, Ayalvadi Ganesh and Peter Key. IEEE Trans. Information Theory. April 2009. Vol. 55(4), pp. 1665-1682.
<id=”absShaGanKey:2009″ class=”abstract”>Abstract: This paper studies the performance of contention based medium access control (MAC) protocols. In particular, a simple and accurate technique for estimating the throughput of the IEEE 802.11 DCF protocol is developed. The technique is based on a rigorous analysis of the Markov chain that corresponds to the time evolution of the back-off processes at the contending nodes. An extension of the technique is presented to handle the case where service differentiation is provided with the use of heterogeneous protocol parameters, as, for example, in IEEE 802.11e EDCA protocol. Our results provide new insights into the operation of such protocols. The techniques developed in the paper are applicable to a wide variety of contention based MAC protocols.
<id=”bib_ShaGanKey:2009″ class=”bibtex”>
BibTeX:
@article{ShaGanKey:2009,
author = {Gaurav Sharma and Ayalvadi Ganesh and Peter Key},
title = {Performance Analysis of Contention Based Medium Access Control Protocols},
journal = {IEEE Trans. Information Theory},
year = {2009},
volume = {55},
number = {4},
pages = {1665--1682},
url = {http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4802302&isnumber=4802290}
}
[URL] PDF
Efficiency and Fairness in Distributed Wireless Networks Through Self-interference Cancellation and Scheduling Bozidar Radunovic, Dinan Gunawardena, Alexandre Proutiere, Nikhil Singh, Vlad Balan and Peter Key. Technical Report: Microsoft Research. March 2009. (MSR-TR-2009-27)
<id=”absRaGuPrSBK:2009″ class=”abstract”>Abstract: Handling interference is one of the major challenges in the design of multi-user distributed wireless systems. In current systems, interference is managed through carrier sensing mechanisms such as CSMA/CA and through MAC algorithms based on random back-off. However, the asymmetry in channel sensing inevitably causes degraded throughput and fairness issues, such as those caused by hidden terminal problems. We propose ContraFlow, a solution based on self-interference cancellation and innovative scheduling mechanisms that increases spatial reuse, eliminates hidden terminals, and rectifies decentralized coordination inefficiencies among nodes, thereby improving fairness. Self-interference cancellation is a technique allowing a node to cancel its own transmitted signal and hence to successfully receive data while transmitting on the same channel. We demonstrate the feasibility of such techniques in a low power WPAN setting, using Lyrtech software-defined radios. Self-interference cancellation repairs carrier sensing, making it possible to successfully eliminate hidden terminal problems, even when using current multi-user MAC protocols; but it also provides the opportunity to design new distributed MAC scheduling algorithms that increase the spatial reuse and solve most of the fairness problems associated with current algorithms. We use simulations to illustrate the performance gains achieved when ContraFlow is used and we obtain both a throughput increase over current systems, as well as a significant improvement in fairness.
<id=”bib_RaGuPrSBK:2009″ class=”bibtex”>
BibTeX:
@techreport{RaGuPrSBK:2009,
author = {Bozidar Radunovic and Dinan Gunawardena and Alexandre Proutiere and Nikhil Singh and Vlad Balan and Peter Key},
title = {Efficiency and Fairness in Distributed Wireless Networks Through Self-interference Cancellation and Scheduling},
institution = {Microsoft Research},
year = {2009},
number = {MSR-TR-2009-27}
}
PDF
Coalition Games and Resource Allocation in Ad-Hoc Networks R. J. Gibbens and P. B. Key. In Bio-Inspired Computing and Communication (Biowire), Springer, Heidelberg 2008. Vol. 5151/2008, pp. 387-398. Springer.
<id=”absGibbeKey:2008″ class=”abstract”>Abstract: In this paper we explore some of the connections between cooperative game theory and the utility maximization framework for routing and flow control in networks. Central to both approaches are the allocation of scarce resources between the various users of a network and the importance of discovering distributed mechanisms that work well. The specific setting of our study is ad-hoc networks where a game-theoretic approach is particularly appealing. We discuss the underlying motivation for the primal and dual algorithms that assign routes and flows within the network and coordinate resource usage between the users. Important features of this study are the stochastic nature of the traffic pattern offered to the network and the use of a dynamic scheme to vary a user’s ability to send traffic. We briefly review coalition games defined by a characteristic function and the crucial notion of the Shapley value to allocate resources between players. We present a series of experiments with several test networks that illustrate how a distributed scheme of flow control and routing can in practice be aligned with the Shapley values which capture the influence or market power of individual users within the network.
<id=”bib_GibbeKey:2008″ class=”bibtex”>
BibTeX:
@inbook{GibbeKey:2008,
author = {R.J. Gibbens and P.B. Key},
booktitle = {Bio-Inspired Computing and Communication (Biowire)},
publisher = {Springer},
year = {2008},
volume = {5151/2008},
pages = {387--398},
chapter={Coalition Games and Resource Allocation in Ad-Hoc Networks},
url = {http://www.springerlink.com/content/w1q3m82109743v34/fulltext.pdf}
}
[URL] PDF
HomeMaestro: Distributed monitoring and diagnosis of performance anomalies in home networks Thomas Karagiannis, Christos Gkantsidis, Peter Key, Elias Athanasopoulos and Elias Raftopoulos. Technical Report: Microsoft Research. October 2008.
<id=”absKaGkKeAtR:2008″ class=”abstract”>Abstract: Home networks are comprised of applications running over multiple wired and wireless devices competing for shared network resources. Despite all the devices operating in a single administrative domain in such networks, applications operate independently, and users cannot express or enforce policies. By studying multiple households’ network performance at the packet level correlated with diaries capturing user experiences, we show that the lack of cooperation across applications leads to observable performance problems and associated user frustration. We describe HomeMaestro, a cooperative host-based system that monitors local and global application performance, and automatically detects contention for network resources. HomeMaestro is designed to manage home and small networks and requires no modification to routers, access points, applications, or protocols. At each host, it transparently monitors per-flow and per-process network usage statistics, such as throughput, RTT, and loss rates. We propose novel algorithms for detecting performance anomalies and application contention using time-series and cross-correlation analysis. Through the collected household traces, we show that HomeMaestro effectively identifies roughly 95% of the user-reported problems. In addition, we evaluate the processing and communication overhead of HomeMaestro, which are typically below 5% CPU utilization and 10 kbps of signalling traffic per host, respectively.
<id=”bib_KaGkKeAtR:2008″ class=”bibtex”>
BibTeX:
@techreport{KaGkKeAtR:2008,
author = {Karagiannis, Thomas and Gkantsidis, Christos and Key, Peter and Athanasopoulos, Elias and Raftopoulos, Elias},
title = {HomeMaestro: Distributed monitoring and diagnosis of performance anomalies in home networks},
institution = {Microsoft Research},
year = {2008}
}
PDF
Horizon: Balancing TCP over Multiple Paths in Wireless Mesh Network Božidar Radunović, Christos Gkantsidis, Dinan Gunawardena and Peter Key. In Proceedings of the 14th ACM International Conference on Mobile Computing and Networking. New York, NY, USA 2008. , pp. 247-258. ACM.
<id=”absRadGkGuKe:2008″ class=”abstract”>Abstract: There has been extensive work on network architectures that support multi-path routing to improve performance in wireless mesh networks. However, previous work uses ad-hoc design principles that cannot guarantee any network-wide performance objectives such as conjointly maximizing resource utilization and improving fairness. In parallel, numerous theoretical results have addressed the issue of optimizing a combined metric of network utilization and fairness using techniques based on back-pressure scheduling, routing and flow control. However, the proposed theoretical algorithms are extremely difficult to implement in practice, especially in the presence of the 802.11 MAC and TCP. We propose Horizon, a novel system design for multi-path forwarding in wireless meshes, based on the theoretical results on back-pressure. Our design works with an unmodified TCP stack and on top of the existing 802.11 MAC. We modified the back-pressure approach to obtain a simple 802.11-compatible packet forwarding heuristic and a novel, light-weight path estimator, while maintaining global optimality properties. We propose a delayed reordering algorithm that eliminates TCP timeouts while keeping TCP packet reordering to a minimum. We have evaluated our implementation on a 22-node testbed. We have shown that Horizon effectively utilizes available resources (disjoint paths). In contrast to previous work, our design not only avoids bottlenecks but also optimally load-balances traffic across them when needed, improving fairness among competing flows. To our knowledge, Horizon is the first practical wireless system based on back-pressure.
<id=”bib_RadGkGuKe:2008″ class=”bibtex”>
BibTeX:
@inproceedings{RadGkGuKe:2008,
author = {Radunović, Božidar and Gkantsidis, Christos and Gunawardena, Dinan and Key, Peter},
title = {Horizon: Balancing TCP over Multiple Paths in Wireless Mesh Network},
booktitle = {Proceedings of the 14th ACM International Conference on Mobile Computing and Networking},
publisher = {ACM},
year = {2008},
pages = {247--258},
url = {http://doi.acm.org/10.1145/1409944.1409973},
doi = {10.1145/1409944.1409973}
}
[DOI] PDF from ACM
Address and traffic dynamics in a large enterprise network Richard Mortier, Thomas Karagiannis and Peter Key. Technical Report : Microsoft Research. July 2008. (MSR-TR-2008-98)
<id=”absMorKarKey:2008″ class=”abstract”>Abstract: Despite the centrally-managed nature of and critical infrastructure provided by enterprise networks, analyses of their characteristics have been limited. In this paper we examine the dynamics of enterprise networks from two distinct perspectives, namely traffic and addressing. Using a large packet trace spanning approximately 3.5 weeks coupled with diverse other data sources, we pose and answer a series of questions pertinent to understanding the aforementioned aspects of today’s enterprise networks. Specifically, (i) What is the network and geographical spread of traffic observed at a site in the enterprise network? (ii) Is it possible to infer application usage based on port numbers alone? (iii) Is the client-server model valid within the enterprise, namely can we accurately distinguish clients and servers looking at traffic volumes alone? (iv) How reliable is host identification by IP address or name alone? (v) What are the mobility patterns for hosts within an enterprise network? Finally, we discuss the implications of our findings for tasks such as modelling and monitoring. Although no single enterprise network could be considered typical, we believe that even a single datum is better than none at all, and that insight into these characteristics of our network is of interest to the networking research community, for whom such data is rarely accessible.
<id=”bib_MorKarKey:2008″ class=”bibtex”>
BibTeX:
@techreport{MorKarKey:2008,
author = {Richard Mortier and Thomas Karagiannis and Peter Key},
title = {Address and traffic dynamics in a large enterprise network},
institution = {Microsoft Research},
year = {2008},
number = {MSR-TR-2008-98}
}
PDF
Control of communication networks: welfare maximization and multipath transfers Peter Key and Laurent Massoulié. Philosophical Transactions of the Royal Society A. June 2008. Vol. 366(1872), pp. 1955-1971.
<id=”absKeyMass:2008″ class=”abstract”>Abstract: We discuss control strategies for communication networks such as the Internet. We advocate the goal of welfare maximization as a paradigm for network resource allocation. We explore the application of this paradigm to the case of parallel network paths. We show that welfare maximization requires active balancing across paths by data sources, and potentially requires implementation of novel transport protocols. However, the only requirement from the underlying ‘network layer’ is to expose the marginal congestion cost of network paths to the ‘transport layer’. We further illustrate the versatility of the corresponding layered architecture by describing transport protocols with the following properties: they achieve welfare maximization when each communication may use an arbitrary collection of paths, available in an overlay, combined in series and parallel. We conclude by commenting on incentives, pricing and open problems.
<id=”bib_KeyMass:2008″ class=”bibtex”>
BibTeX:
@article{KeyMass:2008,
author = {Peter Key and Laurent Massoulié},
title = {Control of communication networks: welfare maximization and multipath transfers},
journal = {Philosophical Transactions of the Royal Society A},
year = {2008},
volume = {366},
number = {1872},
pages = {1955-1971},
url = {http://journals.royalsociety.org/content/9uu812t28w420485/fulltext.pdf}
}
[URL] PDF
HomeMaestro: Order from chaos in home networks Thomas Karagiannis, Elias Athanasopoulos, Christos Gkantsidis and Peter Key. Technical Report: Microsoft Research. May 2008. (MSR-TR-2008-84)
<id=”absKarAtGkKe:2008″ class=”abstract”>Abstract: We present HomeMaestro, a distributed system for monitoring and instrumentation of home networks. HomeMaestro performs extensive measurements at the host level to infer application network requirements, and identifies network-related problems through time-series analysis. By sharing and correlating information across hosts in the home network, our system automatically detects and resolves contention over network resources among applications based on predefined policies. Finally, HomeMaestro implements a distributed virtual queue to enforce those policies by prioritizing applications without additional assistance from network equipment such as routers or access points. We outline the challenges in managing home networks, describe the design choices and architecture of our system, and highlight the performance
<id=”bib_KarAtGkKe:2008″ class=”bibtex”>
BibTeX:
@techreport{KarAtGkKe:2008,
author = {Karagiannis, Thomas and Athanasopoulos, Elias and Gkantsidis, Christos and Key, Peter},
title = {HomeMaestro: Order from chaos in home networks},
institution = {Microsoft Research},
year = {2008},
number = {MSR-TR-2008-84}
}
PDF
Non-metric coordinates for predicting network proximity Peter Key, Laurent Massoulié and Dan-Cristian Tomozei. In IEEE Infocom 2008 Minisymposium. Phoenix April 2008. IEEE.
<id=”absKeyMasTom:2008″ class=”abstract”>Abstract: We consider the problem of determining the “closest”, or best Internet host to connect to, from a list of candidate servers. Most existing approaches rely on the use of metric, or more specifically Euclidean coordinates to infer network proximity. This is problematic, given that network distances such as latency are known to violate the triangle inequality. This leads us to consider non-metric coordinate systems. We perform an empirical comparison between the “min-plus” non-metric coordinates and two metric coordinates, namely L-infinity and Euclidean. We observe that, when sufficiently many dimensions are used, min-plus outperforms metric coordinates for predicting Internet latencies. We also consider the prediction of “widest path capacity” between nodes. In this framework, we propose a generalization of min-plus coordinates. These results apply when node coordinates consist of measured network proximity to a random subset of landmark nodes. We perform empirical validation of these results on widest path bandwidth between PlanetLab nodes. We conclude that appropriate non-metric coordinates such as generalized min-plus systems are better suited than metric systems for representing the underlying structure of Internet distances, measured either via latencies or bandwidth.
<id=”bib_KeyMasTom:2008″ class=”bibtex”>
BibTeX:
@inproceedings{KeyMasTom:2008,
author = {Peter Key and Laurent Massoulié and Dan-Cristian Tomozei},
title = {Non-metric coordinates for predicting network proximity},
booktitle = {IEEE Infocom 2008 Minisymposium},
publisher = {IEEE},
year = {2008}
}
PDF
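The min-plus idea described in the abstract above lends itself to a very small illustration. The Python sketch below is illustrative only: the landmark latencies, function names and the widest-path variant are assumptions of mine, not taken from the paper. It estimates latency between two hosts as the best relay through a common landmark, and bottleneck bandwidth as the best max-min relay.

# Minimal sketch of min-plus style distance estimation (illustrative, not the paper's code).
# A node's "coordinate" is its vector of measured latencies (ms) to a common set of landmarks.
def minplus_latency(coord_a, coord_b):
    """Estimate latency(a, b) as the minimum over landmarks l of d(a, l) + d(l, b)."""
    return min(da + db for da, db in zip(coord_a, coord_b))

def widest_path(bw_a, bw_b):
    """One natural max-min analogue for bottleneck bandwidth: max over landmarks of min(bw(a, l), bw(l, b))."""
    return max(min(x, y) for x, y in zip(bw_a, bw_b))

coord_a = [12.0, 40.0, 55.0]   # hypothetical latencies from host a to three landmarks
coord_b = [30.0, 8.0, 60.0]    # hypothetical latencies from host b to the same landmarks
print(minplus_latency(coord_a, coord_b))                     # 42.0, relaying through the first landmark
print(widest_path([20.0, 90.0, 5.0], [80.0, 35.0, 60.0]))    # 35.0, the best bottleneck over the landmarks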
Dynamic decentralized multi-channel MAC protocols H. Seferoglu, A. Lakshmikantha, A. Ganesh and Peter Key. In Information Theory and Applications (ITA 08). UCSD January 2008.
<id=”absSefLaGaKe:2008″ class=”abstract”>Abstract: In this paper, we propose new dynamic decentralized multi-channel multiple access (MAC) protocols and study their performance. Our protocols build on slotted Aloha, but extend it in several ways to improve flow completion time and throughput, as follows: (i) channels are assigned to flows rather than packets to eliminate per packet collisions, thus the total number of collisions is reduced, and (ii) each flow owns or drops channels dynamically considering successful transmissions, thus the number of owned channels adapts to varying traffic. We present an analysis of the stability region and of flow completion times, for our algorithms, and show that one of them can achieve close to 100% throughput if flow sizes are large. We demonstrate by extensive simulations that, compared to current multi-channel MAC protocols, our algorithms improve flow completion time and throughput in wireless local area and mesh networks.
<id=”bib_SefLaGaKe:2008″ class=”bibtex”>
BibTeX:
@inproceedings{SefLaGaKe:2008,
author = {H. Seferoglu and A. Lakshmikantha and A. Ganesh and Peter Key},
title = {Dynamic decentralized multi-channel MAC protocols},
booktitle = {Information Theory and Applications (ITA 08)},
year = {2008}
}
PDF
Multipath Code Casting for Wireless Mesh Networks Christos Gkantsidis, Wenjun Hu, Peter Key, Bozidar Radunovic, Pablo Rodriguez and Steluta Gheorghiu. In Proceedings of the 2007 ACM CoNEXT Conference. New York, NY, USA 2007. , pp. 10:1-10:12. ACM.
<id=”absGkHuKeRRG:2007″ class=”abstract”>Abstract: Designing high throughput wireless mesh networks involves solving interrelated scheduling, routing, and interference problems. In this paper, we exploit the broadcast properties and the path diversity of wireless meshes to implement an efficient multipath routing protocol, Multipath Code Casting (MC2). In contrast to prior work in opportunistic routing, which required strong coordination across nodes to prevent information repetition, our design is based on network coding and does not require node coordination. Moreover, it provides a unified framework to deal with data transmissions across multiple and, often, unreliable transmission paths. Our design also includes a novel rate-scheduling algorithm that guarantees (proportionally) fair allocation of resources across multiple (multipath) flows, ensures that data use the paths with the best performance, and prevents information overflow by controlling the data rate across each path. Using simulations and a prototype implementation, we show that our algorithms provide over 30% performance improvement compared to traditional single-path approaches when applied to realistic and other exemplar topologies; in some scenarios, our approach can even double the throughput. Our approach also performs more than 20% better than other multipath routing schemes.
<id=”bib_GkHuKeRRG:2007″ class=”bibtex”>
BibTeX:
@inproceedings{GkHuKeRRG:2007,
author = {Christos Gkantsidis and Wenjun Hu and Peter Key and Bozidar Radunovic and Pablo Rodriguez and Steluta Gheorghiu},
title = {Multipath Code Casting for Wireless Mesh Networks},
booktitle = {Proceedings of the 2007 ACM CoNEXT Conference},
publisher = {ACM},
year = {2007},
pages = {10:1--10:12},
url = {http://doi.acm.org/10.1145/1364654.1364667},
doi = {10.1145/1364654.1364667}
}
[DOI] PDF
An Optimization Framework for Practical Multipath Routing in Wireless Mesh Bozidar Radunovic, Christos Gkantsidis, Peter Key, Pablo Rodriguez and Wenjun Hu. Technical Report: Microsoft Research. 2007. (MSR-TR-2007-81)
<id=”bib_RaGkKeRoH:2007″ class=”bibtex”>
BibTeX:
@techreport{RaGkKeRoH:2007,
author = {Bozidar Radunovic and Christos Gkantsidis and Peter Key and Pablo Rodriguez and Wenjun Hu},
title = {An Optimization Framework for Practical Multipath Routing in Wireless Mesh},
institution = {Microsoft Research},
year = {2007},
type = {Technical Report},
number = {MSR-TR-2007-81}
}
PDF
Path Selection and Multipath Congestion Control Peter Key, Laurent Massoulié and Don Towsley. In Proc. IEEE Infocom 2007. Alaska May 2007. IEEE.
<id=”absKeyMasTow:2007a” class=”abstract”>Abstract: In this paper we investigate the potential benefits of coordinated congestion control for multipath data transfers, and contrast with uncoordinated control. For static random path selections, we show that the worst-case throughput performance of uncoordinated control behaves as if each user had but a single path (scaling like log(N)/N, where N is the system size, measured in number of resources), whereas coordinated control gives a throughput allocation bounded away from zero, improving on both uncoordinated control and on the greedy least-loaded path selection of e.g. Mitzenmacher. We then allow users to change their set of routes and introduce the notion of a Nash equilibrium. We show that with RTT bias (as in TCP Reno), uncoordinated control can lead to inefficient equilibria. With no RTT bias, both uncoordinated and coordinated Nash equilibria correspond to desirable welfare maximising states. Moreover, simple path reselection policies that shift to paths with higher net benefit can find these states.
<id=”bib_KeyMasTow:2007a” class=”bibtex”>
BibTeX:
@inproceedings{KeyMasTow:2007a,
author = {Peter Key and Laurent Massoulié and Don Towsley},
title = {Path Selection and Multipath Congestion Control},
booktitle = {Proc. IEEE Infocom 2007},
publisher = {IEEE},
year = {2007}
}
PDF
Multipath Routing, Congestion Control and Load Balancing P. Key, L. Massoulié and D. Towsley. In ICASSP 2007. Hawaii April 2007.
<id=”absKeyMasTow:2007″ class=”abstract”>Abstract: Combining transport-layer congestion control with multi-path routing is a cross-layer approach that provides performance benefits over treating the layers separately. We phrase this as an optimisation problem, examine the case of data transfers, and show how a coordinated controller gives strictly better performance than an uncoordinated controller, which sets up parallel paths. For fixed demands, and the case of random-path selection, we show how coordinated control also achieves better load balancing than greedy least-loaded path selection. We then comment on adaptive path selection.
<id=”bib_KeyMasTow:2007″ class=”bibtex”>
BibTeX:
@inproceedings{KeyMasTow:2007,
author = {P. Key and L. Massoulié and D. Towsley},
title = {Multipath Routing, Congestion Control and Load Balancing},
booktitle = {ICASSP 2007},
year = {2007}
}
PDF
Congestion Notification and Probing Mechanisms for Endpoint Admission Control A. J. Ganesh, P. B. Key, D. Polis and R. Srikant. IEEE/ACM Trans. Netw.. Piscataway, NJ, USA June 2006. Vol. 14(3), pp. 568-578. IEEE Press.
<id=”absGanKePoSr:2006″ class=”abstract”>Abstract: There has been much interest in admission control schemes that place the burden of admission control decisions on the end users. In these schemes, referred to as Endpoint Admission Control, the decision to join the network is taken by the user, based on the probing of the network using probe packets. Depending on the level of congestion, routers mark the probe packets and thus inform the user of the state of the network. In this paper, we analyze three mechanisms for providing Endpoint Admission Control: virtual-queue marking, random-early marking and tail drop. For each scheme, we analyze the probing duration necessary to guarantee the required QoS and achieve high link utilization. Our main conclusion is that very few probe packets have to be sent when early marking is used, whereas tail drop requires a large number of probe packets.
<id=”bib_GanKePoSr:2006″ class=”bibtex”>
BibTeX:
@article{GanKePoSr:2006,
author = {A.J. Ganesh and P.B. Key and D. Polis and R. Srikant},
title = {Congestion Notification and Probing Mechanisms for Endpoint Admission Control},
journal = {IEEE/ACM Trans. Netw.},
publisher = {IEEE Press},
year = {2006},
volume = {14},
number = {3},
pages = {568--578},
url = {http://dx.doi.org/10.1109/TNET.2006.876180},
doi = {10.1109/TNET.2006.876180}
}
[DOI] PDF PDF from ACM
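The probing mechanisms analysed in the paper above can be illustrated with a toy endpoint admission rule. The Python sketch below is a simplified stand-in (the probe count, marking threshold and function names are mine, not the paper's): the end system sends a burst of probe packets, observes what fraction come back marked, and joins only if that fraction is small.

import random

def probe_and_admit(send_probe, n_probes=20, mark_threshold=0.1):
    """Toy endpoint admission control: admit the flow only if the fraction of
    marked probe packets stays below a threshold. send_probe() is assumed to
    return True when a probe packet comes back marked by the network."""
    marked = sum(1 for _ in range(n_probes) if send_probe())
    return marked / n_probes < mark_threshold

# Example: a lightly loaded path that marks probes with probability 0.05.
print(probe_and_admit(lambda: random.random() < 0.05))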
Fluid models of integrated traffic and multipath routing Peter Key and Laurent Massoulié. Queueing Systems. June 2006. Vol. 53(1), pp. 85-98.
<id=”absKeyMass:2006″ class=”abstract”>Abstract: In this paper we consider a stochastic model describing the varying number of flows in a network. This model features flows of two types, namely file transfers (with fixed volume) and streaming traffic (with fixed duration), and extends the model of Key, Massoulié, Bain and Kelly [27] by allowing more general bandwidth allocation criteria. We analyse the dynamics of the system under a fluid scaling, and show Lyapunov stability of the fluid limits under a natural stability condition. We provide natural interpretations of the fixed points of these fluid limits. We then compare the fluid dynamics of file transfers under (i) balanced multipath routing and (ii) parallel, uncoordinated routing. We show that for identical traffic demands, parallel uncoordinated routing can be unstable while balanced multipath routing is stable. Finally, we identify multi-dimensional Ornstein-Uhlenbeck processes as second-order approximations to the first-order fluid limit dynamics.
<id=”bib_KeyMass:2006″ class=”bibtex”>
BibTeX:
@article{KeyMass:2006,
author = {Peter Key and Laurent Massoulié},
title = {Fluid models of integrated traffic and multipath routing},
journal = {Queueing Systems},
year = {2006},
volume = {53},
number = {1},
pages = {85--98}
}
PDF
Efficient quarantining of scanning worms: optimal detection and coordination A. Ganesh, D. Gunawardena, P. Key, L. Massoulié and J. Scott. In Infocom 2006. April 2006. IEEE.
<id=”absGaGuKeMaS:2006″ class=”abstract”>Abstract: Current generation worms have caused considerable damage, despite their use of unsophisticated scanning strategies for detecting vulnerable hosts. A number of adaptive techniques have been proposed for quarantining hosts whose behaviour is deemed suspicious. Such techniques have been proven to be effective against fast scanning worms. However, worms could evade detection by being less aggressive. In this paper we consider the interplay between worm strategies and detection techniques, which can be described in game-theoretic terms. We use epidemiological modelling to characterise the outcome of the game (the pay-off function), as a function of the strategies of the worm and the detector. We design detection rules that are optimal against scanning worms with known characteristics. We then identify specific detection rules that are close to optimal, in some mathematically precise sense, against any scanning worm. Finally, we design methods for coordinating information among a set of end-hosts, using Bayesian decision theory. We evaluate the proposed rules using simulations driven by traces from a corporate environment of 600 hosts, and assess the benefits of coordination.
<id=”bib_GaGuKeMaS:2006″ class=”bibtex”>
BibTeX:
@inproceedings{GaGuKeMaS:2006,
author = {A. Ganesh and D. Gunawardena and P. Key and L. Massoulié and J. Scott},
title = {Efficient quarantining of scanning worms: optimal detection and coordination},
booktitle = {Infocom 2006},
publisher = {IEEE},
year = {2006}
}
PDF
Performance Analysis of Contention Based Medium Access Control Protocols Gaurav Sharma, Ayalvadi Ganesh and Peter Key. In Infocom 2006. Barcelona April 2006. IEEE.
<id=”absGauraKey:2006″ class=”abstract”>Abstract: We study the performance of contention based medium access control (MAC) protocols. In particular, we provide a simple and accurate method for estimating the throughput of IEEE 802.11 DCF and IEEE 802.11e EDCA. Our method is based on a rigorous analysis of the Markov chain associated with the back-off process at the contending nodes. Our results provide new insights into the operation of IEEE 802.11 DCF and IEEE 802.11e EDCA. Although we focus on IEEE 802.11 MAC protocol in this paper, the techniques developed are applicable
<id=”bib_GauraKey:2006″ class=”bibtex”>
BibTeX:
@inproceedings{GauraKey:2006,
author = {Gaurav Sharma and Ayalvadi Ganesh and Peter Key},
title = {Performance Analysis of Contention Based Medium Access Control Protocols},
booktitle = {Infocom 2006},
publisher = {IEEE},
year = {2006},
doi = {10.1109/tit.2008.2009608}
}
[DOI] PDF of extended version published in IEEE Trans. Information Theory
Combining Multipath Routing and Congestion Control for Robustness Peter Key, Laurent Massoulié and Don Towsley. In CISS 2006, 40th Conference on Information Sciences and Systems. Princeton March 2006. IEEE.
<id=”absKeyMasTow:2006″ class=”abstract”>Abstract: Flexible routing schemes mitigate some of the problems associated with uncertain traffic patterns and workloads by making the exact location of capacity less important: if there is available capacity the routing scheme will find it. In this paper we propose a combined multipath routing and congestion control architecture that can provide performance improvements to the end user and simplifies network dimensioning for operators. We describe a flow-level model, able to handle streaming and file transfer traffic, with stochastic arrivals, and look at a fluid limit. We describe a congestion controller and path selection algorithm that automatically balances traffic across the lowest cost paths, and we suggest ways in which just two paths may be used, with a random selection policy. A notable feature of a multipath congestion controller is that it cannot be tuned to a single RTT, hence it differs from standard TCP with respect to RTT bias. We show that under certain conditions the allocation of flows to paths is optimal and independent of the flow control algorithm used. Scalability of the architecture results from implementing the algorithms at end-systems. We illustrate by examples how such an approach can halve response times and double the load that a network can carry.
<id=”bib_KeyMasTow:2006″ class=”bibtex”>
BibTeX:
@inproceedings{KeyMasTow:2006,
author = {Peter Key and Laurent Massoulié and Don Towsley},
title = {Combining Multipath Routing and Congestion Control for Robustness},
booktitle = {CISS 2006, 40th Conference on Information Sciences and Systems},
publisher = {IEEE},
year = {2006}
}
PDF
Schedulable regions and equilibrium cost for multipath flow control: the benefits of coordination Laurent Massoulié and Peter Key. In CISS 2006, 40th Conference on Information Sciences and Systems. Princeton March 2006. IEEE.
<id=”absMassoKey:2006″ class=”abstract”>Abstract: In this paper we consider deterministic differential equation models for the varying number of flows in a network. These arise naturally as limits of stochastic models under joint scaling of flow arrival rates and network capacities. We compare these dynamics under (i) coordinated multipath routing and (ii) parallel, uncoordinated routing. We show that for identical traffic demands, parallel uncoordinated routing can be unstable while balanced multipath routing is stable. In other words, coordination can strictly increase the schedulable region, that is the set of demand vectors for which the system is stable. We also show that, even when uncoordinated multipath routing stabilises the system, coordination can bring further benefits.
<id=”bib_MassoKey:2006″ class=”bibtex”>
BibTeX:
@inproceedings{MassoKey:2006,
author = {Laurent Massoulié and Peter Key},
title = {Schedulable regions and equilibrium cost for multipath flow control: the benefits of coordination},
booktitle = {CISS 2006, 40th Conference on Information Sciences and Systems},
publisher = {IEEE},
year = {2006}
}
PDF
Combined Multipath Routing and Congestion Control: a Robust Internet Architecture Peter Key, Laurent Massoulié and Don Towsley. Technical Report: Microsoft Research. August 2005. (MSR-TR-2005-111)
<id=”absKeyMasTow:2005″ class=”abstract”>Abstract: Network management is complicated by uncertain traffic patterns and workloads. Flexible routing schemes mitigate some of the problems by making the exact location of capacity less important: if there is available capacity the routing scheme will find it. In this paper we propose a combined multipath routing and congestion control architecture that gives performance improvements to the end user and simplifies network dimensioning. We advocate multihoming and stepping stone routers to provide path diversity, and a congestion controller and path selection algorithm that automatically balances traffic across the lowest cost paths. A notable feature of a multipath congestion controller is that it cannot be tuned to a single RTT, hence it differs from standard TCP with respect to RTT bias. Scalability of the architecture results from implementing the algorithms at end-systems. We illustrate on network topologies of interest the performance impact of our architecture: active use of two paths can (i) halve response times and (ii) double the load that a network can carry.
<id=”bib_KeyMasTow:2005″ class=”bibtex”>
BibTeX:
@techreport{KeyMasTow:2005,
author = {Peter Key and Laurent Massoulié and Don Towsley},
title = {Combined Multipath Routing and Congestion Control: a Robust Internet Architecture},
institution = {Microsoft Research},
year = {2005},
number = {MSR-TR-2005-111}
}
PDF
Resource Allocation Between Persistent and Transient Flows Supratim Deb, Ayalvadi Ganesh and Peter Key. IEEE/ACM Trans. Netw.. Piscataway, NJ, USA April 2005. Vol. 13(2), pp. 302-315. IEEE Press.
<id=”absSupraKey:2005″ class=”abstract”>Abstract: The flow control algorithms currently used in the Internet have been tailored to share available capacity between users on the basis of the physical characteristics of the network links they use rather than the characteristics of their applications. However, real-time applications typically have very different requirements from file transfer or Web browsing, and treating them identically can result in a perception of poor quality of service even when adequate bandwidth is available. This is the motivation for differentiated services. In this paper, we explore service differentiation between persistent (fixed duration) and transient (fixed volume) flows, and also between transient flows of markedly different sizes; the latter is stimulated by current discussion on Web mice and elephants. We propose decentralized bandwidth allocation algorithms that can be implemented by end-systems without requiring the support of a complex network architecture, and show that they achieve performance very close to what is achievable by the optimal centralized scheme.
<id=”bib_SupraKey:2005″ class=”bibtex”>
BibTeX:
@article{SupraKey:2005,
author = {Supratim Deb and Ayalvadi Ganesh and Peter Key},
title = {Resource Allocation Between Persistent and Transient Flows},
journal = {IEEE/ACM Trans. Netw.},
publisher = {IEEE Press},
year = {2005},
volume = {13},
number = {2},
pages = {302--315},
url = {http://dx.doi.org/10.1109/TNET.2005.845544},
doi = {10.1109/TNET.2005.845544}
}
[DOI] [URL] PDF PDF from ACM
Farsighted Users Harness Network Time-Diversity Peter Key, Laurent Massoulié and Milan Vojnovic. In Infocom. March 2005. Vol. 4, pp. 2383-2394. IEEE.
<id=”absKeyMasVoj:2005″ class=”abstract”>Abstract: Fluctuations in network conditions are a common phenomenon. They arise in the current wired Internet due to changes in demand, and in wireless networks due to changing interference patterns. However, current congestion control design typically does not account for this, and in this sense the majority of congestion controllers proposed so far can be deemed as “myopic”. The present work deals with the following question: how should network end-users exploit such temporal fluctuations? We introduce a formal framework, in which time diversity is explicitly described by phases in network condition. We propose as bandwidth allocation criterion the solution to an optimization problem, which features both classical (myopic) users and so-called farsighted users. We identify the corresponding farsighted user strategy as that maximizing throughput subject to a social norm related to TCP-friendliness. We establish basic desirable properties of the resulting allocations. We propose adaptive decentralized algorithms for farsighted users to achieve their target allocation. The algorithms do not require either explicit knowledge of dynamics in network conditions, or special feedback from the network.
<id=”bib_KeyMasVoj:2005″ class=”bibtex”>
BibTeX:
@inproceedings{KeyMasVoj:2005,
author = {Peter Key and Laurent Massoulié and Milan Vojnovic},
title = {Farsighted Users Harness Network Time-Diversity},
booktitle = {Infocom},
publisher = {IEEE},
year = {2005},
volume = {4},
pages = {2383--2394}
}
PDF
Fair Internet traffic integration: network flow models and analysis A. Bain, P. Key, L. Massoulié and F. Kelly. Annals of Telecommunications. 2004. Vol. 59, pp. 1338-1352.
<id=”absKeyMaBaKe:2004″ class=”abstract”>Abstract: We use flow-level models to study the integration of two types of Internet traffic, elastic file transfers and streaming traffic. Previous studies have concentrated on just one type of traffic, such as the flow level models of Internet congestion control, where network capacity is dynamically shared between elastic file transfers, with a randomly varying number of such flows. We consider the addition of streaming traffic in two cases, under a fairness assumption that includes TCP-friendliness as a special case, and under certain admission control schemes. We establish sufficient conditions for stability, using a fluid model of the system. We also assess the impact of each traffic type on the other: file transfers are seen by streaming traffic as reducing the available capacity, whereas for file transfers the presence of streaming traffic amounts to replacing sharp capacity constraints by relaxed constraints. Simulation results suggest that the integration of streaming traffic and file transfers has a stabilizing effect on the variability of the number of flows present in the system.
<id=”bib_KeyMaBaKe:2004″ class=”bibtex”>
BibTeX:
@article{KeyMaBaKe:2004,
author = {P. Key and L. Massoulié and A. Bain and F. Kelly},
title = {Fair Internet traffic integration: network flow models and analysis},
journal = {Annals of Telecommunications},
publisher={Springer},
year = {2004},
volume = {59},
pages = {1338--1352}
}
PDF
Emulating Low-priority Transport at the Application Layer: A Background Transfer Service Peter Key, Laurent Massoulié and Bing Wang. In Proceedings of the Joint International Conference on Measurement and Modeling of Computer Systems. New York, NY, USA 2004. , pp. 118-129. ACM.
<id=”bib_KeyMasWan:2004″ class=”bibtex”>
BibTeX:
@inproceedings{KeyMasWan:2004,
author = {Peter Key and Laurent Massoulié and Bing Wang},
title = {Emulating Low-priority Transport at the Application Layer: A Background Transfer Service},
booktitle = {Proceedings of the Joint International Conference on Measurement and Modeling of Computer Systems},
publisher = {ACM},
year = {2004},
pages = {118--129},
doi = {10.1145/1005686.1005703}
}
[DOI] PDF PDF from ACM
PIC: Practical Internet Coordinates for Distance Estimation Manuel Costa, Miguel Castro, Antony Rowstron and Peter Key. In 24th International Conference on Distributed Computing Systems, ICDCS. Tokyo, Japan March 2004. IEEE.
<id=”absCosCaRoKe:2004″ class=”abstract”>Abstract: We introduce PIC, a practical coordinate-based mechanism to estimate Internet network distance (i.e., round-trip delay or network hops). Network distance estimation is important in many applications; for example, network-aware overlay construction and server selection. There are several proposals for distance estimation in the Internet but they all suffer from problems that limit their benefit. Most rely on a small set of infrastructure nodes that are a single point of failure and limit scalability. Others use sets of peers to compute coordinates but these coordinates can be arbitrarily wrong if one of these peers is malicious. While it may be reasonable to secure a small set of infrastructure nodes, it is unreasonable to secure all peers. PIC addresses these problems: it does not rely on infrastructure nodes and it can compute accurate coordinates even when some peers are malicious. We present PIC’s design, experimental evaluation, and an application to network-aware overlay construction and maintenance.
<id=”bib_CosCaRoKe:2004″ class=”bibtex”>
BibTeX:
@inproceedings{CosCaRoKe:2004,
author = {Manuel Costa and Miguel Castro and Antony Rowstron and Peter Key},
title = {PIC: Practical Internet Coordinates for Distance Estimation},
booktitle = {24th International Conference on Distributed Computing Systems, ICDCS},
publisher = {IEEE},
year = {2004}
}
PDF
Network Aware Applications: A Background Transfer Service Peter Key, Laurent Massoulié and Bing Wang. In Proceedings of Forty-First Allerton Conference on Communication, Control, and Computing. October 2003.
<id=”absKeyMasWan:2003″ class=”abstract”>Abstract: Network aware applications react to changing network conditions, offering potential quality of service differentiation without network support. We describe an application level approach to designing a low priority service — one that is ‘lower than best-effort’ in the context of the Internet. Such applications are appropriate for background file transfers, such as OS updates. We use a receive window control to limit the transfer rate of the application, and the optimal rate is determined by detecting a change-point. We motivate this joint control-estimation problem by considering a fluid-based optimisation framework, and describe practical solutions, based on stochastic approximation and binary search techniques. Simulation results demonstrate the effectiveness of the approach.
<id=”bib_KeyMasWan:2003″ class=”bibtex”>
BibTeX:
@inproceedings{KeyMasWan:2003,
author = {Peter Key and Laurent Massoulié and Bing Wang},
title = {Network Aware Applications: A Background Transfer Service},
booktitle = {Proceedings of Forty-First Allerton Conference on Communication, Control, and Computing},
year = {2003}
}
PDF
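The receive-window control described in the abstract above can be sketched as a search for the largest window that does not disturb the network. The Python fragment below is only a schematic: the binary search, window range and congestion test are illustrative assumptions, whereas the paper's actual controller uses stochastic approximation and change-point detection on observed delays.

def tune_receive_window(congested_at, w_min=1, w_max=64):
    """Find the largest receive window (in segments) for which the congestion
    test does not fire. congested_at(w) stands in for a change-point style
    detector on round-trip times and is assumed to be given."""
    while w_min < w_max:
        w_mid = (w_min + w_max + 1) // 2
        if congested_at(w_mid):
            w_max = w_mid - 1   # this window already induces queueing: back off
        else:
            w_min = w_mid       # still safe: try a larger window
    return w_min

# Toy example: pretend congestion appears once the window exceeds 24 segments.
print(tune_receive_window(lambda w: w > 24))   # -> 24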
A network flow model for mixtures of file transfers and streaming traffic Alan Bain, Peter Key, Laurent Massoulié and Frank Kelly. In Proceedings ITC 18, Providing Quality of Service in Heterogeneous Environments. September 2003. Vol. 5b, pp. 1021-1030. Elsevier.
<id=”absPeterKell:2003″ class=”abstract”>Abstract: Roberts, Massoulié and co-authors have introduced and studied a flow level model of Internet congestion control, that represents the randomly varying number of flows present in a network where bandwidth is dynamically shared between elastic file transfers. In this paper we consider a generalization of the model to include streaming traffic as well as file transfers, under a fairness assumption that includes TCP-friendliness as a special case. We establish stability, under conditions, for a fluid model of the system. We also assess the impact of each traffic type on the other: file transfers are seen by streaming traffic as reducing the available capacity, whereas for file transfers the presence of streaming traffic amounts to replacing sharp capacity constraints by relaxed constraints. The integration of streaming traffic and file transfers has a stabilizing effect on the variability of the number of flows present in the system.
<id=”bib_PeterKell:2003″ class=”bibtex”>
BibTeX:
@inproceedings{PeterKell:2003,
author = {Peter Key and Laurent Massoulié and Alan Bain and Frank Kelly},
editor = {J Charzinski and R. Lehnert and P. Tran-Gia},
title = {A network flow model for mixtures of file transfers and streaming traffic},
booktitle = {Proceedings ITC 18, Providing Quality of Service in Heterogeneous Environments},
publisher = {Elsevier},
year = {2003},
volume = {5b},
pages = {1021--1030}
}
PDF
Network characteristics: modelling, measurements and admission control Dinan Gunawardena, Peter Key and Laurent Massoulié. In Proceedings of the 11th International Conference on Quality of Service. Berlin, Heidelberg June 2003. Springer-Verlag.
<id=”absGunKeyMas:2003″ class=”abstract”>Abstract: Round trip delays constitute the basic network measure that can be obtained by end systems without any network support. Our aim is to design measurement-based admission control strategies for streaming applications based on such minimal feedback on network state. To this end we discuss simple statistical models of packet round trip delays across either local or wide area networks. We observe that the delay component due to queueing scales like the reciprocal of the spare capacity, at least in a heavy traffic regime where spare capacity is scarce. Our models also allow us to capture the correlations between consecutive measurements. Based on these results we propose a two-stage strategy for inferring spare capacity along a network path. We show consistency of this estimate, and analyse its asymptotic variance when the number of samples becomes large. We have experimented with these strategies in a local network environment. We observe a good match between theory and practice for switched Ethernets. Surprisingly, the match deteriorates only slightly when the network path comprises hubs, although the theoretical models seem to be less applicable to such technology.
<id=”bib_GunKeyMas:2003″ class=”bibtex”>
BibTeX:
@inproceedings{GunKeyMas:2003,
author = {Dinan Gunawardena and Peter Key and Laurent Massoulié},
title = {Network characteristics: modelling, measurements and admission control},
booktitle = {Proceedings of the 11th International Conference on Quality of Service},
publisher = {Springer-Verlag},
year = {2003},
url = {http://dl.acm.org/citation.cfm?id=1784037.1784039}
}
[URL] PDF
Probing Strategies for distributed admission control in large and small scale systems Peter B. Key and Laurent Massoulié. In INFOCOM 2003. San Francisco April 2003. Vol. 1, pp. 608-618. IEEE.
<id=”absKeyMass:2003″ class=”abstract”>Abstract: The aim of this article is to propose and analyse measurement-based admission control schemes. We distinguish between large-scale and small-scale systems, where scale is measured in the number of concurrent applications that can run simultaneously. For large scale systems, we show that simple end-user probing strategies, based on ECN-type feedback provided by the network, achieve a good utilisation/quality trade-off. We explicitly take account of feedback delay, and use limiting results for assessing performance. We illustrate the benefits of using ECN-type feedback rather than relying on loss. For small-scale systems, the previous strategies are no longer adequate and we propose alternative, more gradual probing strategies.
<id=”bib_KeyMass:2003″ class=”bibtex”>
BibTeX:
@inproceedings{KeyMass:2003,
author = {Peter B. Key and Laurent Massoulié},
title = {Probing Strategies for distributed admission control in large and small scale systems},
booktitle = {INFOCOM 2003},
publisher = {IEEE},
year = {2003},
volume = {1},
pages = {608--618}
}
PDF
Service-Differentiation for Delay-Sensitive Applications: An Optimisation-Based Approach Peter Key, Laurent Massoulié and Jonathan Shapiro. Performance Evaluation. September 2002. Vol. 49, pp. 471-489.
<id=”absKeyMasSha:2002″ class=”abstract”>Abstract: This paper deals with the performance of delay-sensitive applications running over a network that offers multiple classes of service, where the adaptation of application rates in response to network feedback is the primary mechanism available for controlling quality of service. We first evaluate the gain in utilisation allowed by the introduction of several classes of service. To this end we compare the pairs of achievable rates, or schedulable regions, for two types of applications with two distinct delay requirements that make use of a single resource, with either no differentiation, simple priority-based differentiation, or earliest-deadline-first (EDF) scheduling-based differentiation. The main observations are that the gain achieved by differentiation is essentially affected by traffic burstiness, and that the two differentiation schemes yield very similar performance. We then consider what feedback information should be sent to traffic sources from different classes, casting the problem in the framework of optimisation-based congestion control. We establish a connection between the sample-path shadow price rationale for feedback synthesis and the rare perturbation analysis technique for gradient estimation in discrete event systems theory. Based on this connection, we propose several marking schemes, for simple priority-based differentiation with a measure of cost based on loss or delay, and also for EDF-based differentiation with loss-based cost. The interaction of these marking algorithms with simple congestion control algorithms is studied via simulations.
<id=”bib_KeyMasSha:2002″ class=”bibtex”>
BibTeX:
@article{KeyMasSha:2002,
author = {Peter Key and Laurent Massoulié and Jonathan Shapiro},
title = {Service-Differentiation for Delay-Sensitive Applications: An Optimisation-Based Approach},
journal = {Performance Evaluation},
year = {2002},
volume = {49},
pages = {471--489}
}
PDF
Resource Allocation with Persistent and Transient Flows Supratim Deb, Ayalvadi J. Ganesh and Peter B. Key. In NETWORKING 2002, Second International IFIP-TC6 Networking Conference. May 2002. , pp. 455-466. Springer.
<id=”absDebGanKey:2002″ class=”abstract”>Abstract: The flow control algorithms currently used in the Internet have been tailored to share bandwidth between users on the basis of the physical characteristics of the network links they use rather than the characteristics of their applications. This can result in a perception of poor quality of service by some users even when adequate bandwidth is potentially available, and is the motivation for seeking to provide differentiated services. In this paper, stimulated by current discussion on Web mice and elephants, we explore service differentiation between persistent and short-lived flows, and between file transfers of different sizes. In particular, we seek to achieve this using decentralized algorithms that can be implemented by end-systems without requiring the support of a complex network architecture. The algorithms we propose correspond to a form of weighted processor sharing and can be tailored to approximate the shortest remaining processing time service discipline.
<id=”bib_DebGanKey:2002″ class=”bibtex”>
BibTeX:
@inproceedings{DebGanKey:2002,
author = {Supratim Deb and Ayalvadi J. Ganesh and Peter B. Key},
editor = {Enrico Gregori and Marco Conti and Andrew T. Campbell and Cambyse Guy Omidyar and Moshe Zukerman},
title = {Resource Allocation with Persistent and Transient Flows},
booktitle = {NETWORKING 2002, Second International IFIP-TC6 Networking Conference},
publisher = {Springer},
year = {2002},
pages = {455--466}
}
PDF of Updated version in ACM ToN 2005
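The weighted processor sharing idea mentioned in the abstract above is easy to illustrate. In the Python sketch below (the weights, flow names and the 1/remaining-volume rule are illustrative choices of mine, not the paper's exact scheme), each flow receives a share of capacity proportional to its weight; giving a file transfer a weight that grows as its remaining volume shrinks biases the allocation towards short transfers, in the spirit of shortest-remaining-processing-time.

def weighted_ps_rates(weights, capacity=1.0):
    """Weighted processor sharing: split capacity in proportion to the weights."""
    total = sum(weights.values())
    return {flow: capacity * w / total for flow, w in weights.items()}

# One persistent flow plus two file transfers with 10 MB and 1 MB remaining.
remaining = {"file_a": 10.0, "file_b": 1.0}
weights = {"persistent": 1.0}
weights.update({f: 1.0 / r for f, r in remaining.items()})  # smaller remainder => larger weight
print(weighted_ps_rates(weights))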
Modelling the Performance of In-Call Probing for Multi-Level Adaptive Applications A. Bain and P. B. Key. Microsoft Research. January 2002. (MSR-TR-2002-06)
<id=”absBainKey:2002″ class=”abstract”>Abstract: This paper looks at adaptive applications that can switch between a small number of different levels of service, with switching decisions made solely by the originating end-system. Typical of such applications are real time streaming protocols which can use different coding rates. The end-systems probe the network with sample traffic to determine congestion, and decide at what rate to enter according to the fate of their “probe” packets. During the lifetime of a connection, the procedure is repeated to reassess and possibly readjust the rate. We derive analytic models, based on diffusion limits under a natural scaling, to quantify the benefits of in-call probing. We then use simulation to compare the results in a number of scenarios, and show that this simple theory is remarkably accurate in predicting large-scale behaviour. The results also show that a small amount of in-call probing produces significant benefits to the system.
<id=”bib_BainKey:2002″ class=”bibtex”>
BibTeX:
@techreport{BainKey:2002,
author = {A. Bain and P. B. Key},
title = {Modelling the Performance of In-Call Probing for Multi-Level Adaptive Applications},
institution = {Microsoft Research},
year = {2002},
number = {MSR-TR-2002-06}
}
PDF
Properties of the Virtual Queue Marking Algorithm Richard J. Gibbens, Peter B. Key and Stephen R. E. Turner. In 17th UK Teletraffic Symposium. 2001. IEE.
<id=”bib_GibKeyTur:2001″ class=”bibtex”>
BibTeX:
@inproceedings{GibKeyTur:2001,
author = {Richard J. Gibbens and Peter B. Key and Stephen R. E. Turner},
title = {Properties of the Virtual Queue Marking Algorithm},
booktitle = {17th UK Teletraffic Symposium},
publisher = {IEE},
year = {2001}
}
PDF
Resource Pricing for Differentiated Services Peter B. Key. In Kommunication in Verteilten Systemen, (KiVS). 2001, pp. 3-18. Springer.
<id=”absKey:2001″ class=”abstract”>Abstract: In this paper we present an overview of recent work on resource pricing for differentiated services in the Internet. This approach is based upon encouraging cooperation between the end-systems and the network by use of the correct feedback signals. These signals reflect the congestion shadow prices at a resource, and their use means that even ‘selfish’ end-systems, acting in their own best interests, will push the system to a global or social optimum. In contrast to most current Diffserv proposals, little is required from resources in the network; they just have to mark packets correctly, while the end-system can use complex or simple strategies. All that is needed is for the end-systems to have an incentive to react to the feedback signals, and then we have a distributed resource sharing mechanism. We give examples of typical end-system behaviour, and show how this approach can also implement Distributed Admission Control, where the decision is in the hands of the end-system. We comment on how ECN (Explicit Congestion Notification) could be used as an enabling technology. Lastly we outline how guarantees can be constructed with this framework.
<id=”bib_Key:2001″ class=”bibtex”>
BibTeX:
@inproceedings{Key:2001,
author = {Peter B. Key},
editor = {U. Killat and W. Lammersdorf},
title = {Resource Pricing for Differentiated Services},
booktitle = {Kommunication in Verteilten Systemen, (KiVS)},
publisher = {Springer},
year = {2001},
pages = {3--18}
}
PDF
Modelling the Performance of Distributed Admission Control for Adaptive Applications A. Bain and P. B. Key. SIGMETRICS Perform. Eval. Rev., New York, NY, USA, December 2001. Vol. 29(3), pp. 21-22. ACM.
<id=”bib_BainKey:2001″ class=”bibtex”>
BibTeX:
@article{BainKey:2001,
author = {A. Bain and P. B. Key},
title = {Modelling the Performance of Distributed Admission Control for Adaptive Applications},
journal = {SIGMETRICS Perform. Eval. Rev.},
publisher = {ACM},
year = {2001},
volume = {29},
number = {3},
pages = {21--22},
doi = {10.1145/507553.507562}
}
[DOI] PDF
Resource Allocation with Persistent and Transient Flows Ayalvadi Ganesh, Peter Key and Supratim Deb. Technical Report, Microsoft Research. November 2001. (MSR-TR-2001-114)
<id=”bib_GanKeyDeb:2001″ class=”bibtex”>
BibTeX:
@techreport{GanKeyDeb:2001,
author = {Ayalvadi Ganesh and Peter Key and Supratim Deb},
title = {Resource Allocation with Persistent and Transient Flows},
institution = {Microsoft Research},
year = {2001},
number = {MSR-TR-2001-114}
}
Feedback and bandwidth sharing in networks A. J. Ganesh, P. B. Key and L. Massoulié. In Proceedings 39th Annual Allerton Conference on Communication, Control and Computing. October 2001.
<id=”bib_A.JMass:2001″ class=”bibtex”>
BibTeX:
@inproceedings{A.JMass:2001,
author = {A. J. Ganesh and P. B. Key and L. Massoulié},
title = {Feedback and bandwidth sharing in networks},
booktitle = {Proceedings 39th Annual Allerton Conference on Communication, Control and Computing},
year = {2001}
}
PDF
Distributed control and resource marking using best-effort routers Richard Gibbens and Peter Key. IEEE Network. June 2001. Vol. 15(3), pp. 54-59.
<id=”absGibbeKey:2001″ class=”abstract”>Abstract: We present a method for creating differential QoS where control is in the hands of the end system or user, and the network distributes congestion feedback information to users via packet marking at resources. Current proposals for creating differential QoS in the Internet often rely on classifying packets into a number of classes with routers treating different classes appropriately. The router plays a critical role in guaranteeing performance. In contrast, there is a growing body of work that seeks to place more of the control in the hands of the end system or user, with simple functionality in the router. This is the approach outlined in this tutorial article: using insights from economics and control theory we show how cooperation between end systems and the network can be encouraged using a simple packet marking scheme. The network distributes congestion feedback information to users via packet marking at resources, and users react accordingly to obtain differential QoS
<id=”bib_GibbeKey:2001″ class=”bibtex”>
BibTeX:
@article{GibbeKey:2001,
author = {Richard Gibbens and Peter Key},
title = {Distributed control and resource marking using best-effort routers},
journal = {IEEE Network},
year = {2001},
volume = {15},
number = {3},
pages = {54--59}
}
PDF
Modeling RED with Idealized TCP Sources P. Kuusela, P. Lassila, J. Virtamo and P. Key. In Proceedings of IFIP ATM & IP 2001. Budapest, Hungary, June 2001, pp. 155-166.
<id=”absKuuLaViKe:2001″ class=”abstract”>Abstract: We analyze the dynamic behavior of a single RED controlled queue interacting with a large population of idealized TCP sources, i.e., sources obeying the rules of linear increase and multiplicative decrease. The aggregate traffic from this population is modeled in terms of the time dependent expected value of the packet arrival rate which reacts to the packet loss taking place in the queue. The queue is described in terms of the time dependent expected values of the instantaneous queue length and of the exponentially averaged queue length, for which we also derive a pair of differential equations. This provides us with a complete model for the dynamics of the system which we use to explore transient and equilibrium behavior. The accuracy of the model is verified by comparison with simulated results.
<id=”bib_KuuLaViKe:2001″ class=”bibtex”>
BibTeX:
@inproceedings{KuuLaViKe:2001,
author = {P. Kuusela and P. Lassila and J. Virtamo and P. Key},
title = {Modeling RED with Idealized TCP Sources},
booktitle = {Proceedings of IFIP ATM \& IP 2001},
year = {2001},
pages = {155--166}
}
PDF
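For readers less familiar with this style of model, the following is a standard fluid approximation of TCP sources interacting with a RED queue, given here only as an illustrative sketch; the exact equations in the paper may differ, and the notation ($N$ sources, round-trip time $R$, link capacity $C$, averaging weight $\alpha$ applied every $\delta$ seconds, RED marking profile $p(\cdot)$) is assumed for the purpose of the example:
\[
\frac{d\bar W}{dt} \approx \frac{1}{R} - \frac{\bar W(t)^2}{2R}\, p\bigl(\bar x(t)\bigr), \qquad
\frac{d\bar q}{dt} \approx N\,\frac{\bar W(t)}{R} - C, \qquad
\frac{d\bar x}{dt} \approx \frac{\alpha}{\delta}\bigl(\bar q(t) - \bar x(t)\bigr).
\]
Here $\bar W$ is the expected congestion window, $\bar q$ the expected instantaneous queue length and $\bar x$ its exponentially averaged value: the first equation captures linear increase and multiplicative decrease driven by the marking probability, the second the queue fed by $N$ windows and drained at rate $C$, and the third the RED averaging filter.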
Distributed Admission Control F. P. Kelly, P. B. Key and S. Zachary. IEEE Journal on Selected Areas in Communications. December 2000. Vol. 18(12)
<id=”absKelKeyZac:2000″ class=”abstract”>Abstract: This paper describes a framework for admission control for a packet-based network where the decisions are taken by edge devices or end-systems, rather than resources within the network. The decisions are based on the results of probe packets that the end-systems send through the network, and require only that resources apply a mark to packets in a way that is load dependent. One application example is the Internet, where marking information is fed back via an ECN bit, and we show how this approach allows a rich QoS framework for flows or streams. Our approach allows networks to be explicitly analyzed, and consequently engineered
<id=”bib_KelKeyZac:2000″ class=”bibtex”>
BibTeX:
@article{KelKeyZac:2000,
author = {F. P. Kelly and P. B. Key and S. Zachary},
title = {Distributed Admission Control},
journal = {IEEE Journal on Selected Areas in Communications},
year = {2000},
volume = {18},
number = {12}
}
PDF
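A minimal Python sketch of the end-system side of the probing idea described in the abstract above: send a burst of probe packets, observe how many come back marked, and admit the flow only if the marked fraction is below a threshold. The probe count, the threshold and the send_probe hook are assumptions made for illustration, not the paper's specification.

import random

def send_probe(network_mark_prob):
    """Stand-in for sending one probe packet through the network and
    learning whether it was marked (e.g. via an ECN echo)."""
    return random.random() < network_mark_prob

def admit(network_mark_prob, n_probes=20, accept_threshold=0.1):
    """End-system admission decision based only on probe marks."""
    marked = sum(send_probe(network_mark_prob) for _ in range(n_probes))
    return marked / n_probes <= accept_threshold

random.seed(1)
print(admit(network_mark_prob=0.02))   # lightly loaded: usually admitted
print(admit(network_mark_prob=0.40))   # congested: usually refused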
An ECN-based end-to-end congestion-control framework: experiments and evaluation Koenraad Laevens, Peter B. Key and Derek McAuley. Microsoft Research Technical Report. October 2000. (MSR-TR-2000-104)
<id=”absLaeKeyMcA:2000″ class=”abstract”>Abstract: In this paper we describe a study of an end-to-end architecture in a packet network where congestion is signalled to all contributing users, who then react accordingly. We assume a generic form of packet marking, where congested nodes set a single bit in the packets flowing through them using ECN (Explicit Congestion Notification). We briefly describe how allowing greater freedom to the end-systems lays the foundation for a differential QoS framework as a possible future extension to ECN. Not surprisingly, this QoS is a function of the user-network interaction. Focussing on a specific class of user behaviors, related to streaming applications, we decompose this interaction at the relevant timescales, relying both on simulations and dynamical system models. Results indicate that within the framework, single-bit notification achieves end-to-end QoS differentiation, approaching proportionally fair sharing of bandwidth.
<id=”bib_LaeKeyMcA:2000″ class=”bibtex”>
BibTeX:
@techreport{LaeKeyMcA:2000,
author = {Koenraad Laevens and Peter B. Key and Derek McAuley},
title = {An ECN-based end-to-end congestion-control framework: experiments and evaluation},
institution = {Microsoft Research},
year = {2000},
number = {MSR-TR-2000-104}
}
End-User Policies for Predicting Congestion Patterns in Data Networks Laurent Massoulié, Peter B. Key and Koenraad Laevens. In 13th ITC Specialist Seminar on Internet Traffic Measurement. September 2000.
<id=”absMasKeyLae:2000″ class=”abstract”>Abstract: In this paper we propose a scheme for detecting and predicting congestion patterns from network feedback. The focus here is on binary feedback messages, such as loss events or ECN marks. Simulations show that, when a fraction of end users modify their congestion control based on these estimations, oscillations in network buffer occupancy otherwise induced by TCP users are significantly reduced. These oscillation reductions benefit both TCP users and the users applying the proposed scheme, since both types of users receive a smaller proportion of ECN marks.
<id=”bib_MasKeyLae:2000″ class=”bibtex”>
BibTeX:
@inproceedings{MasKeyLae:2000,
author = {Laurent Massoulié and Peter B. Key and Koenraad Laevens},
title = {End-User Policies for Predicting Congestion Patterns in Data Networks},
booktitle = {13th ITC Specialist Seminar on Internet Traffic Measurement},
year = {2000}
}
PDF
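As a toy illustration of extracting congestion information from binary feedback, the sketch below keeps an exponentially weighted estimate of the recent mark probability. The smoothing parameter is an illustrative assumption, and this is a deliberate simplification: the scheme in the paper goes further and tries to detect and predict the oscillation pattern itself.

class MarkEstimator:
    """Exponentially weighted estimate of the probability that a packet is
    marked, updated from the stream of binary feedback signals."""
    def __init__(self, alpha=0.05):
        self.alpha = alpha
        self.p_hat = 0.0

    def update(self, marked):
        self.p_hat = (1 - self.alpha) * self.p_hat + self.alpha * (1.0 if marked else 0.0)
        return self.p_hat

est = MarkEstimator()
feedback = [0] * 50 + [1, 0, 1, 1, 0] * 10   # congestion appears after 50 packets
for m in feedback:
    p = est.update(m)
print(f"estimated mark probability: {p:.2f}")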
Service Differentiation: Congestion Pricing, Brokers and Bandwidth Futures Peter B. Key. In NOSSDAV’99. 1999.
<id=”absKey:1999″ class=”abstract”>Abstract: In most Networks or Systems, users compete for a set of limited resources. Resources might be point-to-point bandwidth, buffer space, memory, CPU cycles etc. and users can be humans, applications or processes. Connection-oriented Telecoms services use either reservation, where resources are booked in advance, or Admission Control — think of telephony or ATM for example. Packet-based services and Operating Systems rely mainly on either priority schemes, for instance using simple priority queues, or flow-control feedback mechanisms such as TCP. The current Internet uses flow-control with an admit-all policy that relies on secondary measures, such as users’ willingness to tolerate current service, to throttle excess load. Within the IETF, IntServ is attempting to define connection-oriented admission control schemes whilst DiffServ is based on connectionless ideas, attempting to use priorities and complex queuing disciplines to provide QoS classes. But if we step outside the Computer Science or Telecommunication mindset, we see the same problem occurring in other areas, but solved using resource pricing as a fundamental ingredient. For example, think of resources as airline seats, units of (electrical) power or economic wealth. Prices or taxes are used to control the load, or to maximise the return by offering differential services at different prices — think for instance of power companies who use price incentives and penalties, or airlines. We claim not only that it is impossible to separate service differentiation from pricing, but also that some use of resource pricing is the only sensible way to proceed. The first point is clear: if there are two qualities which both cost the same, any rational person will choose the higher quality! As an example, most current DiffServ proposals are interesting but fundamentally flawed, because they attempt to define technologies independently of economics or implementation. They attempt to define local priority schemes and scheduling policies, whereas an application/user is interested in end-to-end performance. Also, the cost and complexity of scheduling policies are not addressed. Even if the hardware cost is not significant, there are real management overheads. A salutary example is provided by the ABR service of ATM: this only really makes sense if ABR operates from end-system to end-system, which now looks highly unlikely. Moreover, despite being more complex to implement, it would have to be charged at a cheaper tariff than say a CBR service operating at the MCR, despite using the same bandwidth. (Otherwise a smart user with delay insensitive traffic could just use CBR and overflow excess traffic to a best-effort traffic class that has to be an order of magnitude cheaper). The Explicit Congestion Notification (ECN) [ ] proposal within DiffServ looks attractive, and we believe it can be extended to give as much diversity as is required. Our contribution is to break away from the shackles of mandatory TCP-like behaviour inherent in the current ECN RFC. We can even incorporate call admission control at the flow or micro-flow level, an anathema to the current DiffServ!
<id=”bib_Key:1999″ class=”bibtex”>
BibTeX:
@inproceedings{Key:1999,
author = {Peter B. Key},
title = {Service Differentiation: Congestion Pricing, Brokers and Bandwidth Futures},
booktitle = {NOSSDAV'99},
year = {1999},
}
PDF
The use of games to assess user strategies for differential Quality of Service in the Internet R. J. Gibbens and P. B. Key. In Workshop on Internet Service Quality Economics. MIT December 1999.
<id=”bib_GibbeKey:1999a” class=”bibtex”>
BibTeX:
@inproceedings{GibbeKey:1999a,
author = {R. J. Gibbens and P. B. Key},
title = {The use of games to assess user strategies for differential Quality of Service in the Internet},
booktitle = {Workshop on Internet Service Quality Economics},
year = {1999}
}
PDF
User Policies in a Network Implementing Congestion Pricing P. B. Key and L. Massoulié. In Workshop on Internet Service Quality Economics. MIT December 1999.
<id=”absKeyMass:1999″ class=”abstract”>Abstract: In this paper we consider a network implementing congestion pricing. For general user utility functions, and in a regime where user peak rates are small compared to network link capacities, we establish optimality of the considered pricing scheme. The corresponding optimal user policies are perhaps contrary to what one would expect: file transfers must either be done at maximal speed, or have access denied, whereas real time applications will display elasticity in their choice of sending rates. We also discuss how the optimality property is affected by price encoding mechanisms implemented in the network, and the resulting effect on user policies.
<id=”bib_KeyMass:1999″ class=”bibtex”>
BibTeX:
@inproceedings{KeyMass:1999,
author = {P. B. Key and L. Massoulié},
title = {User Policies in a Network Implementing Congestion Pricing},
booktitle = {Workshop on Internet Service Quality Economics},
year = {1999}
}
PDF
Teletraffic Engineering in a Competitive World, Proceedings ITC16. P. B. Key and D. G. Smith (editors). June 1999. Vol. 3. Elsevier.
<id=”bib_KeySmit:1999″ class=”bibtex”>
BibTeX:
@proceedings{KeySmit:1999,
editor = {P. B. Key and D. G. Smith},
title = {Teletraffic Engineering in a Competitive World, Proceedings ITC16},
publisher = {Elsevier},
year = {1999},
volume = {3},
url = {http://www.iee.org.uk/Conf/ITC16/}
}
[URL]
Differential QoS and pricing in networks: where flow control meets game theory P. B. Key and D. R. McAuley. IEE Proceedings Software. March 1999. Vol. 146(2), pp. 39-43.
<id=”absKeyMcAu:1999″ class=”abstract”>Abstract: This paper looks at ways of providing quality of service (QoS) to users, based on a simple pricing scheme. It is primarily aimed at elastic traffic, and it is users rather than the network who define the flow control schemes. A framework for assessing schemes and algorithms via a distributed game is presented
<id=”bib_KeyMcAu:1999″ class=”bibtex”>
BibTeX:
@article{KeyMcAu:1999,
author = {P. B. Key and D. R. McAuley},
title = {Differential QoS and pricing in networks: where flow control meets game theory},
journal = {IEE Proceedings Software},
year = {1999},
volume = {146},
number = {2},
pages = {39--43},
doi = {10.1049/ip-sen:19990154}
}
[DOI] PDF
Differential QoS and pricing in networks: where flow control meets game theory P. B. Key and D. McAuley. In Proceedings UK Performance Engineering Workshop. 1998, pp. 2-12.
<id=”bib_KeyMcAu:1998″ class=”bibtex”>
BibTeX:
@inproceedings{KeyMcAu:1998,
author = {P. B. Key and D. McAuley},
title = {Differential QoS and pricing in networks: where flow control meets game theory},
booktitle = {Proceedings UK Performance Engineering Workshop},
year = {1998},
pages = {2-12}
}
Designs and Control of ATM/SDH Networks M. A. H. Dempster, E. A. Medeova, P. B. Key and S. A. Sargood. In Proceedings 4th International Conference on Telecommunication Systems. Nashville, 1996, pp. 259-270.
<id=”bib_DemMeKeSa:1996″ class=”bibtex”>
BibTeX:
@inproceedings{DemMeKeSa:1996,
author = {M. A. H. Dempster and E. A. Medeova and P. B. Key and S. A. Sargood},
title = {Designs and Control of ATM/SDH Networks},
booktitle = {Proceedings 4th International Conference on Telecommunication Systems},
year = {1996},
pages = {259--270}
}
Cell Delay Variation and Burst Expansion in ATM Networks: Results from a Practical Study Using Fairisle Simon Crosby, Ian Leslie and Peter Key. 1995.
<id=”bib_CroLesKey:1995″ class=”bibtex”>
BibTeX:
@misc{CroLesKey:1995,
author = {Simon Crosby and Ian Leslie and Peter Key},
title = {Cell Delay Variation and Burst Expansion in ATM Networks: Results from a Practical Study Using Fairisle},
year = {1995}
}
PDF
A decision-theoretic approach to call admission control in ATM networks R. J. Gibbens, F. P. Kelly and P. B. Key. IEEE Journal on Selected Areas in Communications, special issue on Advances in the Fundamentals of Networking. 1995. Vol. 13(6), pp. 1101-1114.
<id=”absGibKelKey:1995″ class=”abstract”>Abstract: This paper describes a simple and robust ATM call admission control, and develops the theoretical background for its analysis. Acceptance decisions are based on whether the current load is less than a precalculated threshold, and Bayesian decision theory provides the framework for the choice of thresholds. This methodology allows an explicit treatment of the trade-offs between cell loss and call rejection, and of the consequences of estimation error. Further topics discussed include the robustness of the control to departures from the model assumptions, its performance relative to a control possessing precise knowledge of all unknown parameters, the relationship between leaky bucket depths and buffer requirements, and the treatment of multiple call types.
<id=”bib_GibKelKey:1995″ class=”bibtex”>
BibTeX:
@article{GibKelKey:1995,
author = {R. J. Gibbens and F. P. Kelly and P. B. Key},
title = {A decision-theoretic approach to call admission control in ATM networks},
journal = {IEEE Journal on Selected Areas in Communications, special issue on Advances in the Fundamentals of Networking},
year = {1995},
volume = {13},
number = {6},
pages = {1101--1114}
}
PDF
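A minimal sketch of the acceptance rule described in the abstract above: a new call is accepted only while the currently measured load is below a precalculated threshold. How the threshold is precalculated (the Bayesian decision-theoretic part of the paper) is not shown, and all numbers here are purely illustrative.

class ThresholdCAC:
    """Accept a new call only while the current measured/booked load is
    below the precalculated threshold."""
    def __init__(self, threshold):
        self.threshold = threshold   # precomputed offline, e.g. via the decision-theoretic analysis
        self.load = 0.0              # current load estimate (here simply the sum of admitted rates)

    def offer(self, rate):
        # `rate` stands in for the call's contribution to load (e.g. a mean-rate
        # or effective-bandwidth estimate); accept iff the load is below threshold.
        if self.load < self.threshold:
            self.load += rate
            return True
        return False

cac = ThresholdCAC(threshold=0.85)
print([cac.offer(0.2) for _ in range(6)])   # [True, True, True, True, True, False]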
Dynamic Alternative Routing R. J. Gibbens, F. P. Kelly and P. B. Key. In Routing in Communication Networks. 1995. Prentice Hall.
<id=”bib_GibKelKey:1995a” class=”bibtex”>
BibTeX:
@incollection{GibKelKey:1995a,
author = {R. J. Gibbens and F. P. Kelly and P. B. Key},
editor = {M. Steenstrup},
title = {Dynamic Alternative Routing},
booktitle = {Routing in Communication Networks},
publisher = {Prentice Hall},
year = {1995}
}
Admission Control Problems in Telecommunications P. B. Key. In Complex Stochastic Systems and Engineering. 1995, pp. 235-250. Oxford University Press.
<id=”bib_Key:1995″ class=”bibtex”>
BibTeX:
@incollection{Key:1995,
author = {P. B. Key},
editor = {D.M. Titterington},
title = {Admission Control Problems in Telecommunications},
booktitle = {Complex Stochastic Systems and Engineering},
publisher = {Oxford University Press},
year = {1995},
pages = {235--250}
}
Connection Admission Control in ATM networks Peter B. Key. BT Technology Journal. July 1995. Vol. 13(3)
<id=”absKey:1995a” class=”abstract”>Abstract: The uncertainties surrounding traffic characterisation for future broadband services have led some to claim that statistical multiplexing for ATM is not feasible. We argue that this is unduly pessimistic, and show how, using a specified (contracted) peak rate and on-line estimation of mean rates, such multiplexing is possible and safe. We use effective bandwidths and Bayesian methods to develop Adaptive Call Admission Controls. We show how these can be applied practically by giving results of a simulation experiment where our methods are applied to heterogeneous sources. These confirm our claims that the methods are both practical and conservative, i.e. the Quality of Service standards are met.
<id=”bib_Key:1995a” class=”bibtex”>
BibTeX:
@article{Key:1995a,
author = {Peter B. Key},
title = {Connection Admission Control in ATM networks},
journal = {BT Technology Journal},
year = {1995},
volume = {13},
number = {3}
}
Dimensioning playout buffers from an ATM network F. P. Kelly and P. B. Key. In Proceedings of the Eleventh U.K. Teletraffic Symposium. 1994. I.E.E.
<id=”absKellyKey:1994″ class=”abstract”>Abstract: The authors investigate how earlier work of van den Berg and Resing and of D’Ambrosio and Melen (1993) can be combined with classical bounds for the GI/G/1 queue to give insight into the output buffer required for a smooth playout of cells. The effect of both the number of stages across the network and the rate of the stream is considered. A possibly surprising conclusion is that as the rate of the stream decreases, the required output buffer size may actually increase. While the cell delay variation of small rate streams may cause no problems for buffers within the network, the cell delay variation introduced by the network may cause problems for the customer on output
<id=”bib_KellyKey:1994″ class=”bibtex”>
BibTeX:
@inproceedings{KellyKey:1994,
author = {F. P. Kelly and P. B. Key},
title = {Dimensioning playout buffers from an ATM network},
booktitle = {Proceedings of the Eleventh U.K. Teletraffic Symposium},
publisher = {I.E.E.},
year = {1994}
}
PDF
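The classical GI/G/1 bound alluded to in the abstract is, in one standard form (Kingman's upper bound on the mean waiting time, stated here only as a representative example and not necessarily the exact bound used in the paper), with arrival rate $\lambda$, utilisation $\rho$ and inter-arrival and service-time variances $\sigma_a^2$ and $\sigma_s^2$:
\[
\mathbb{E}[W] \;\le\; \frac{\lambda\,(\sigma_a^2 + \sigma_s^2)}{2\,(1-\rho)} .
\]
The delay, and hence the playout buffer needed to absorb it, grows with the variability of arrivals and service times and blows up as the load approaches capacity.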
Some Control Issues in Telecommunications Networks P. B. Key. In Probability, Statistics and Optimisation: A Tribute to Peter Whittle. 1994, pp. 383-395. Wiley.
<id=”bib_Key:1994″ class=”bibtex”>
BibTeX:
@incollection{Key:1994,
author = {P.B. Key},
editor = {F.P. Kelly},
title = {Some Control Issues in Telecommunications Networks},
booktitle = {Probability, Statistics and Optimisation: A Tribute to Peter Whittle},
publisher = {Wiley},
year = {1994},
pages = {383-395}
}
Adaptive Call Admission Control in ATM Networks P. B. Key and T. R. Griffiths. In The fundamental role of Teletraffic Engineering in the Evolution of Telecommunication Networks. 1994. Vol. 1b. Elsevier.
<id=”absKeyGrif:1994″ class=”abstract”>Abstract: The uncertainties surrounding traffic characterisation for future broadband services have led some to claim that statistical multiplexing for ATM is not feasible. We argue that this is unduly pessimistic, and show how, using a specified (contracted) peak rate and on-line estimation of mean rates, such multiplexing is possible and safe. We use effective bandwidths and Bayesian methods to develop Adaptive Call Admission Controls. We show how these can be applied practically by giving results of a simulation experiment where our methods are applied to heterogeneous sources. These confirm our claims that the methods are both practical and conservative, i.e. the Quality of Service standards are met.
<id=”bib_KeyGrif:1994″ class=”bibtex”>
BibTeX:
@inproceedings{KeyGrif:1994,
author = {P. B. Key and T. R. Griffiths},
editor = {J. Labetoulle and J.W. Roberts},
title = {Adaptive Call Admission Control in ATM Networks},
booktitle = {The fundamental role of Teletraffic Engineering in the Evolution of Telecommunication Networks},
publisher = {Elsevier},
year = {1994},
volume = {1b},
url = {http://www.sciencedirect.com/science/article/pii/B9780444820310501116},
doi = {10.1016/B978-0-444-82031-0.50111-6}
}
[DOI] [URL]
CDV in ATM Networks — Performance Results from the Fairisle ATM Testbed S. A. Crosby, I. M. Leslie, K. van der Merwe, A. Atkinson, R. Griffiths and P. B. Key. In RACE EXPLOIT Traffic Workshop. September 1994. Basel, Switzerland.
<id=”bib_CrLeMeAGK:1994″ class=”bibtex”>
BibTeX:
@inproceedings{CrLeMeAGK:1994,
author = {S. A. Crosby and I. M. Leslie and van der Merwe, K. and A. Atkinson and R. Griffiths and P. B. Key},
title = {CDV in ATM Networks --- Performance Results from the Fairisle ATM Testbed},
booktitle = {RACE EXPLOIT Traffic Workshop},
address = {Basel, Switzerland},
year = {1994}
}
Design and Analysis of a Highly Reliable Transmission Network P. B. Key and A. M. Elvidge. In Teletraffic and Datatraffic in a Period of Change. 1991, pp. 323-328.
<id=”bib_KeyElvi:1991″ class=”bibtex”>
BibTeX:
@inproceedings{KeyElvi:1991,
author = {P.B. Key and A.M. Elvidge},
editor = {A. Jensen and B. Iverson},
title = {Design and Analysis of a Highly Reliable Transmission Network},
booktitle = {Teletraffic and Datatraffic in a Period of Change},
year = {1991},
pages = {323--328}
}
PDF
A new Call Gapping Algorithm for Network Traffic Performance P. M. D. T. Turner and P. B. Key. In Teletraffic and Datatraffic in a Period of Change. 1991.
<id=”bib_TurneKey:1991″ class=”bibtex”>
BibTeX:
@inproceedings{TurneKey:1991,
author = {P.M.D.T. Turner and P.B. Key},
editor = {A. Jensen and B. Iverson},
title = {A new Call Gapping Algorithm for Network Traffic Performance},
booktitle = {Teletraffic and Datatraffic in a Period of Change},
year = {1991}
}
PDF
Optimal control and trunk reservation in loss networks P. B. Key. Probability in the Engineering and Informational Sciences. 1990. Vol. 4, pp. 203-242.
<id=”absKey:1990″ class=”abstract”>Abstract: Consider a stochastic loss network, where calls or customer types arrive and have to find a path through the network to a given destination, and where our aim is to maximize the gain (suitably defined) from the network. In general there will be a number of paths available, and when a call arrives the two questions to answer are first, should the call be accepted, and secondly, if it is accepted which route should it take? The answer to the first question is in some sense harder than the second, and all dynamic routing or control policies have some explicit or implicit mechanism for rejecting calls and so answer the question in some way.
<id=”bib_Key:1990″ class=”bibtex”>
BibTeX:
@article{Key:1990,
author = {P.B. Key},
title = {Optimal control and trunk reservation in loss networks},
journal = {Probability in the Engineering and Informational Sciences},
year = {1990},
volume = {4},
pages = {203-242},
doi = {10.1017/s0269964800001558}
}
[DOI]
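Trunk reservation, the control analysed in this paper, amounts to a very small rule; the Python sketch below is an illustrative rendering only (the reservation parameter and the notion of a "priority" versus alternatively-routed call are assumed for the example, not taken from the paper's model).

def accept(free_circuits, priority_call, reservation=5):
    """Trunk reservation: a directly-routed / priority call is accepted whenever
    a circuit is free, but an alternatively-routed (lower-priority) call is
    accepted only if more than `reservation` circuits remain free, keeping
    spare capacity for the traffic the link is primarily meant to carry."""
    if free_circuits <= 0:
        return False
    return priority_call or free_circuits > reservation

print(accept(free_circuits=3, priority_call=True))    # True
print(accept(free_circuits=3, priority_call=False))   # False: capacity held in reserve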
Distributed Dynamic Routing Schemes P. B. Key and G. A. Cope. IEEE Communications Magazine. 1990. Vol. 28, pp. 54-64.
<id=”absKeyCope:1990″ class=”abstract”>Abstract: Schemes that do not explicitly use much information about the state of networks are briefly surveyed, with the focus on dynamic alternative routing (DAR), a simple but highly effective routing method currently planned for the British Telecom Network. State-dependent routing and how some of the methodology also has bearing on the control issue are discussed. The problem of dimensioning a network that uses dynamic routing (i.e. how much capacity is needed and where it should be put to provide an acceptable performance) is addressed. A practical example, which refers to routing in an international access network, is discussed. Some conclusions are drawn on the benefits and drawbacks of distributed routing
<id=”bib_KeyCope:1990″ class=”bibtex”>
BibTeX:
@article{KeyCope:1990,
author = {P.B. Key and G.A. Cope},
title = {Distributed Dynamic Routing Schemes},
journal = {IEEE Communications Magazine},
year = {1990},
volume = {28},
pages = {54-64}
}
PDF
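A compact sketch of the Dynamic Alternative Routing idea surveyed above, assuming a fully connected network: try the direct link first; if it is blocked, try the currently remembered two-link alternative, subject to trunk reservation; if that also fails, block the call and pick a new alternative tandem at random ("sticky" random routing). Function and variable names, and the reservation value, are illustrative assumptions.

import random

def route_call(src, dst, free, current_tandem, nodes, reservation=5):
    """Return (accepted, tandem) following the DAR rule sketched above.
    `free[(a, b)]` is the number of free circuits on the link from a to b
    (both orientations assumed present in the dict for simplicity)."""
    if free[(src, dst)] > 0:                      # 1. direct route
        return True, current_tandem
    t = current_tandem
    if free[(src, t)] > reservation and free[(t, dst)] > reservation:
        return True, t                            # 2. remembered alternative, with trunk reservation
    others = [n for n in nodes if n not in (src, dst)]
    return False, random.choice(others)           # 3. block the call, reset the tandem at random

nodes = ["A", "B", "C"]
free = {("A", "B"): 0, ("B", "A"): 0, ("A", "C"): 8, ("C", "A"): 8, ("C", "B"): 9, ("B", "C"): 9}
print(route_call("A", "B", free, current_tandem="C", nodes=nodes))   # (True, 'C'): carried via tandem C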
Dynamic Alternative Routing: modelling and behaviour R. J. Gibbens, F. P. Kelly and P. B. Key. In Teletraffic Science, Proceedings of 12th International Teletraffic Congress. 1988, pp. 1019-1025. Elsevier, Amsterdam.
<id=”bib_GibKelKey:1988″ class=”bibtex”>
BibTeX:
@incollection{GibKelKey:1988,
author = {R. J. Gibbens and F. P. Kelly and P. B. Key},
editor = {M. Bonatti},
title = {Dynamic Alternative Routing: modelling and behaviour},
booktitle = {Teletraffic Science, Proceedings of 12th International Teletraffic Congress},
publisher = {Elsevier, Amsterdam},
year = {1988},
pages = {1019--1025}
}
Markov decision processes and optimal control in circuit-switched networks P. B. Key. In 5th UK Teletraffic Symposium. 1988. IEE.
<id=”bib_Key:1988a” class=”bibtex”>
BibTeX:
@inproceedings{Key:1988a,
author = {P.B. Key},
title = {Markov decision processes and optimal control in circuit-switched networks},
booktitle = {5th UK Teletraffic Symposium},
publisher = {IEE},
year = {1988}
}
Implied cost methodology and software tools for a fully connected network with DAR and trunk reservation P. B. Key. British Telecom Technology Journal. 1988. Vol. 6, pp. 52-65.
<id=”bib_Key:1988″ class=”bibtex”>
BibTeX:
@article{Key:1988,
author = {P. B. Key},
title = {Implied cost methodology and software tools for a fully connected network with DAR and trunk reservation},
journal = {British Telecom Technology Journal},
year = {1988},
volume = {6},
pages = {52-65}
}
Cost Effective Use of networks employing Dynamic Alternative Routing P. B. Key and M. J. Whitehead. In Teletraffic Science for New Cost-Effective Systems, Proceedings of 12th International Teletraffic Congress, Turin. 1988. Elsevier, Amsterdam.
<id=”bib_KeyWhit:1988″ class=”bibtex”>
BibTeX:
@incollection{KeyWhit:1988,
author = {P. B. Key and M.J. Whitehead},
editor = {M. Bonatti},
title = {Cost Effective Use of networks employing Dynamic Alternative Routing},
booktitle = {Teletraffic Science for New Cost-Effective Systems, Proceedings of 12th International Teletraffic Congress, Turin},
publisher = {Elsevier, Amsterdam},
year = {1988}
}
Multi-Hour Dimensioning for DAR using Implied Costs P. B. Key. In 4th UK Teletraffic Symposium. 1987. IEE.
<id=”bib_Key:1987″ class=”bibtex”>
BibTeX:
@inproceedings{Key:1987,
author = {P. B. Key},
title = {Multi-Hour Dimensioning for DAR using Implied Costs},
booktitle = {4th UK Teletraffic Symposium},
publisher = {IEE},
year = {1987}
}
On the Bayesian steady forecasting model P. B. Key and E. J. Godolphin. Journal of the Royal Statistical Society, Series B. 1981. Vol. 43, pp. 92-96.
<id=”bib_KeyGodo:1981″ class=”bibtex”>
BibTeX:
@article{KeyGodo:1981,
author = {P.B. Key and E.J. Godolphin},
title = {On the Bayesian steady forecasting model},
journal = {Journal of the Royal Statistical Society, Series B},
year = {1981},
volume = {43},
pages = {92-96}
}