I am an independent consultant and researcher in Networks, Economics and Computational Science based in Cambridge, England.
A brief bio is here.
Contact me
Research Interests
My research interests lie at the intersection of computer science, economics, mathematics and statistics. I like to design large systems that work well, creating the right incentives to achieve this. I am particularly interested in networks, such as computer networks, communication networks, social networks and ad networks. Some of my recent research has used concepts from algorithmic game theory, microeconomics, machine learning, optimization theory and computational science.
Although much of my work involves mathematical and statistical models, I like to work on real systems, using empirical and experimental data.
Recent Publications
Pricing, Competition and Content for Internet Service Providers
Peter Key and Richard Steinberg.
IEEE/ACM Transactions on Networking, October 2020, Vol. 28, Issue 5, pp. 2285-2298.
<id="absKeySte:2020" class="abstract">Abstract
We examine competition between two Internet Service Providers (ISPs), where the first ISP provides basic Internet service and the second provides Internet service plus content, i.e., "enhanced service". The first ISP can partner with a Content Provider to offer the same content as the second ISP; when such a partnering arrangement occurs, the Content Provider pays the first ISP a transfer price for delivering the content. Users have heterogeneous preferences, and each in general faces three options: (1) buy basic Internet service from the first ISP; (2) buy enhanced service from the second ISP; or (3) buy enhanced service jointly from the first ISP and the Content Provider. We derive results on the existence and uniqueness of a Nash equilibrium, and provide closed-form expressions for the prices, user masses, and profits of the two ISPs and the Content Provider. When the first ISP can choose the transfer price and congestion is linear in the load, it is never optimal for the first ISP to set a negative transfer price in the hope of attracting more revenue from additional customers desiring enhanced service. Conversely, when congestion is sufficiently super-linear, the optimal strategy for the first ISP is either to set a negative transfer price (subsidizing the Content Provider) or to set a high transfer price that shuts the Content Provider out of the market.
[DOI] [IEEE Xplore Early Access] [LSE eprint]
Simple Pricing Schemes for the Cloud
Ian A. Kash, Peter Key and Warut Suksompong.
ACM Transactions on Economics and Computation, August 2019, Vol. 7, Issue 2, Article No. 7.
<id="absKasKeySuk:2019" class="abstract">Abstract
The problem of pricing the cloud has attracted much recent attention due to the widespread use of cloud computing and cloud services. From a theoretical perspective, several mechanisms that provide strong efficiency or fairness guarantees and desirable incentive properties have been designed. However, these mechanisms often rely on a rigid model, with several parameters needing to be precisely known for the guarantees to hold. In this article, we consider a stochastic model and show that it is possible to obtain good welfare and revenue guarantees with simple mechanisms that do not make use of the information on some of these parameters. In particular, we prove that a mechanism that sets the same price per timestep for jobs of any length achieves at least 50% of the welfare and revenue obtained by a mechanism that can set different prices for jobs of different lengths, and the ratio can be improved if we have more specific knowledge of some parameters. Similarly, a mechanism that sets the same price for all servers even though the servers may receive different kinds of jobs can provide a reasonable welfare and revenue approximation compared to a mechanism that is allowed to set different prices for different servers.
<id="bib_KeyKasSuk:2019" class="bibtex">
BibTeX:
@Article{KasKeySuk:2019,
author = {Ian A. Kash and Peter Key and Warut Suksompong},
title = {Simple Pricing Schemes for the Cloud},
journal = {ACM Transactions on Economics and Computation},
year = {2019},
volume = {7},
number = {2},
month = aug,
doi = {10.1145/3327973},
url = {https://dl.acm.org/citation.cfm?id=3327973},
url2 = {http://arxiv.org/pdf/1705.08563.pdf},
}
[DOI] [Journal Paper]
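As a rough illustration of the flavor of this result (a toy example with made-up numbers, not the paper's stochastic model): a single per-timestep price applied to jobs of every length can already capture well over half of the revenue available to a mechanism that prices each job length separately.

```python
# Toy sketch: one price per timestep for all jobs vs. a separate price
# per job length. Hypothetical data; not the paper's stochastic model.

jobs = [  # (length in timesteps, per-timestep value)
    (1, 1.0), (1, 0.4), (5, 0.9), (5, 0.3), (10, 0.7), (10, 0.6),
]

def revenue(price_for_length, jobs):
    """A job buys iff its per-timestep value >= the quoted price;
    it then pays price * length."""
    return sum(price_for_length(L) * L
               for (L, v) in jobs if v >= price_for_length(L))

# Best single per-timestep price, searched over the values in the data.
single = max(revenue(lambda L, p=p: p, jobs) for (_, p) in jobs)

# Best per-length prices: optimize each length class separately.
lengths = {L for (L, _) in jobs}
per_length = sum(
    max(p * L * sum(1 for (L2, v) in jobs if L2 == L and v >= p)
        for (_, p) in jobs)
    for L in lengths)

print(single, per_length)  # 15.6 vs. 17.5
```

Here the best single price earns 15.6 versus 17.5 for length-specific prices, a ratio of about 0.89, comfortably above the 50% bound proved in the paper.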
Strategic behavior and learning in all-pay auctions: an empirical study using crowdsourced data
Yoram Bachrach, Ian A. Kash, Peter Key and Joel Oren.
Autonomous Agents and Multi-Agent Systems, March 2019, Vol. 33, Issue 1-2, pp. 192-215.
<id="abBacKasKeyOre:2019" class="abstract">Abstract
We analyze human behavior in crowdsourcing contests using an all-pay auction model where all participants exert effort, but only the highest bidder receives the reward. We let workers sourced from Amazon Mechanical Turk participate in an all-pay auction, and contrast the game-theoretic equilibrium with the choices of the human participants. We examine how people competing in the contest learn and adapt their bids, comparing their behavior to well-established online learning algorithms in a novel approach to quantifying the performance of humans as learners. For the crowdsourcing contest designer, our results show that a bimodal distribution of effort should be expected, with some very high effort and some very low effort, and that humans have a tendency to overbid. Our results suggest that humans are weak learners in this setting, so it may be important to educate participants about the strategic implications of crowdsourcing contests.
<id="bib_BacKasKeyOre:2019" class="bibtex">
BibTeX:
@Article{BacKasKeyOre:2019,
author = {Yoram Bachrach and Ian A. Kash and Peter Key and Joel Oren},
title = {Strategic behavior and learning in all-pay auctions: an empirical study using crowdsourced data},
journal = {Autonomous Agents and Multi-Agent Systems},
year = {2019},
volume = {33},
number = {1-2},
pages = {192-215},
month = mar,
issn = {1573-7454},
doi = {10.1007/s10458-019-09402-4},
publisher = {Springer Nature},
url = {https://link.springer.com/article/10.1007/s10458-019-09402-4},
}
[DOI] [Journal Paper] [PDF]
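For background on the game-theoretic benchmark used in the paper: in the textbook two-bidder all-pay auction with values drawn i.i.d. from Uniform[0, 1], the symmetric equilibrium bid is b(v) = v²/2, and by revenue equivalence the expected revenue equals the expected second-highest value, 1/3. A quick Monte Carlo check (my own sketch, not the paper's experiment):

```python
# Sketch: symmetric equilibrium of a 2-bidder all-pay auction with
# values i.i.d. Uniform[0, 1]. Both bidders pay their bid; the higher
# bid wins the prize.
import random

random.seed(0)

def equilibrium_bid(v):
    return v * v / 2  # b(v) = v^2/2 for Uniform[0,1] values, n = 2

n_trials = 200_000
revenue = 0.0
for _ in range(n_trials):
    v1, v2 = random.random(), random.random()
    revenue += equilibrium_bid(v1) + equilibrium_bid(v2)  # all pay
revenue /= n_trials

# By revenue equivalence, this should be close to E[min(v1, v2)] = 1/3.
print(f"average revenue per auction: {revenue:.3f}")
```

The empirical finding in the paper is precisely that human bidders deviate from this benchmark, overbidding and producing a bimodal effort distribution.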
Optimal Pricing and Introduction Timing of New Virtual Machines
Ian A. Kash, Peter Key and Spyros I. Zoumpoulis.
EC '18: Proceedings of the 2018 ACM Conference on Economics and Computation, June 2018, pp. 51-52.
<id="absKelKeyZou:2018" class="abstract">Abstract
As the quality of computer hardware increases over time, cloud service providers have the ability to offer more powerful virtual machines (VMs) and other resources to their customers. But providers face several trade-offs as they seek to make the best use of improved technology. On one hand, more powerful machines are more valuable to customers and command a higher price. On the other hand, there is a cost to develop and launch a new product. Further, the new product competes with existing products. Thus, the provider faces two questions. First, when should new classes of VMs be introduced? Second, how should they be priced, taking into account both the VM classes that currently exist and the ones that will be introduced in the future?
This decision problem, combining scheduling and pricing new product introductions, is common in a variety of settings. One aspect that is more specific to the cloud setting is that VMs are rented rather than sold. Thus existing customers can switch to a new offering, albeit with some inconvenience. There is indeed evidence of customers’ aversion to upgrades in the cloud computing services market. Based on a study of Microsoft Azure, we estimate that customers who arrive after a new VM class is launched are 50% more likely to use it than existing customers, indicating that these switching costs may be substantial.
This opens up a wide range of possible policies for the cloud service provider. Our main result is that a surprisingly simple policy is close to optimal in many situations: new VM classes are introduced on a periodic schedule and each is priced as if it were the only product being offered. (Periodic introductions have been noticeable in the practice of cloud computing. For example, Amazon Elastic Compute Cloud (EC2) launched new classes of the m.xlarge series in October 2007, February 2010, October 2012, and June 2015, i.e., in intervals of 28 months, 32 months, and 32 months.) We refer to this pricing policy as Myerson pricing, as these prices can be computed as in Myerson’s classic paper (1981). This policy produces a marketplace where new customers always select the newest and best offering, while existing customers may stick with older VMs due to switching costs.
<id="bib_KeyKasZou:2018" class="bibtex">
BibTeX:
@InProceedings{KasKeyZou:2018,
author = {Ian A. Kash and Peter B. Key and Spyros I. Zoumpoulis},
title = {Optimal Pricing and Introduction Timing of New Virtual Machines},
booktitle = {EC 18 Proceedings of the 2018 ACM Conference on Economics and Computation},
year = {2018},
publisher = {ACM},
pages = {51--52},
doi = {10.1145/3219166.3219209},
note = {See INSEAD working paper 2018/12/DSC},
url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3137038},
}
[DOI] [Working Paper PDF]
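The "Myerson pricing" policy mentioned in the abstract reduces, for a single product and a known value distribution F, to the classic monopoly price maximizing p(1 - F(p)). A minimal sketch, assuming a Uniform[0, 1] value distribution purely for illustration (the optimum there is p = 1/2):

```python
# Sketch of Myerson pricing in the simplest static case: a posted
# price p maximizing expected revenue p * (1 - F(p)).

def myerson_price(F, grid_size=10_000):
    """Grid-search the revenue-optimal posted price for a CDF F on [0, 1]."""
    best_p, best_rev = 0.0, 0.0
    for i in range(grid_size + 1):
        p = i / grid_size
        rev = p * (1 - F(p))  # price times probability of sale
        if rev > best_rev:
            best_p, best_rev = p, rev
    return best_p

uniform_cdf = lambda x: x  # Uniform[0, 1], assumed for illustration
print(myerson_price(uniform_cdf))  # 0.5
```

The paper's near-optimal policy prices each newly introduced VM class this way, as if it were the only product on offer, alongside a periodic introduction schedule.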
Simple Pricing Schemes for the Cloud
Ian A. Kash, Peter Key and Warut Suksompong.
Proceedings of the 13th Conference on Web and Internet Economics (WINE), December 2017, pp. 311-324.
<id="absKelKeyWal:2017" class="abstract">Abstract
The problem of pricing the cloud has attracted much recent attention due to the widespread use of cloud computing and cloud services. From a theoretical perspective, several mechanisms that provide strong efficiency or fairness guarantees and desirable incentive properties have been designed. However, these mechanisms often rely on a rigid model, with several parameters needing to be precisely known in order for the guarantees to hold. In this paper, we consider a stochastic model and show that it is possible to obtain good welfare and revenue guarantees with simple mechanisms that do not make use of the information on some of these parameters. In particular, we prove that a mechanism that sets the same price per time step for jobs of any length achieves at least 50% of the welfare and revenue obtained by a mechanism that can set different prices for jobs of different lengths, and the ratio can be improved if we have more specific knowledge of some parameters. Similarly, a mechanism that sets the same price for all servers even though the servers may receive different kinds of jobs can provide a reasonable welfare and revenue approximation compared to a mechanism that is allowed to set different prices for different servers.
<id="bib_KeyKasWar:2017" class="bibtex">
BibTeX:
@InProceedings{KasKeyWar:2017,
author = {Ian A. Kash and Peter Key and Warut Suksompong},
title = {Simple Pricing Schemes for the Cloud},
booktitle = {Proceedings of the 13th Conference on Web and Internet Economics (WINE 2017)},
year = {2017},
note = {A preliminary version appeared in Proceedings of the 12th Workshop on the Economics of Networks, Systems and Computation (NetEcon), June 2017},
publisher = {Springer},
month = dec,
pages = {311--324},
url = {https://link.springer.com/chapter/10.1007%2F978-3-319-71924-5_22},
doi = {10.1007/978-3-319-71924-5_22},
url2 = {http://arxiv.org/pdf/1705.08563.pdf},
}
[DOI] [Workshop version] [Full PDF]
Efficient Advert Assignment
Frank Kelly, Peter Key and Neil Walton.
Operations Research, 2016, Vol. 64(4), pp. 822-837.
<id=”absKelKeyWal:2016″ class=”abstract”>Abstract: We develop a framework for the analysis of large-scale ad auctions where adverts are assigned over a continuum of search types. For this pay-per-click market, we provide an efficient mechanism that maximizes social welfare. In particular, we show that the social welfare optimization can be solved in separate optimizations conducted on the time scales relevant to the search platform and advertisers. Here, on each search occurrence, the platform solves an assignment problem and, on a slower time scale, each advertiser submits a bid that matches its demand for click-throughs with supply. Importantly, knowledge of global parameters, such as the distribution of search terms, is not required when separating the problem in this way. Exploiting the information asymmetry between the platform and advertiser, we describe a simple mechanism that incentivizes truthful bidding, has a unique Nash equilibrium that is socially optimal, and thus implements our decomposition. Further, we consider models where advertisers adapt their bids smoothly over time and prove convergence to the solution that maximizes social welfare. Finally, we describe several extensions that illustrate the flexibility and tractability of our framework.
<id="bib_KelKeyWal:2016" class="bibtex">
BibTeX:
@article{KelKeyWal:2016,
author = {Frank Kelly and Peter Key and Neil Walton},
title = {Efficient Advert Assignment},
journal = {Operations Research},
year = {2016},
volume = {64},
number = {4},
pages = {822-837},
url = {http://dx.doi.org/10.1287/opre.2016.1519},
doi = {10.1287/opre.2016.1519}
}
[DOI] [PDF of Preprint]
Ranking and Tradeoffs in Sponsored Search Auctions
Ben Roberts, Dinan Gunawardena, Ian A. Kash and Peter Key.
ACM Transactions on Economics and Computation, June 2016, Vol. 4(3), pp. 17:1-17:21. ACM, New York, NY, USA.
<id=”absRobGuKaKe:2016″ class=”abstract”>Abstract: In a sponsored search auction, decisions about how to rank ads impose tradeoffs between objectives, such as revenue and welfare. In this article, we examine how these tradeoffs should be made. We begin by arguing that the most natural solution concept to evaluate these tradeoffs is the lowest symmetric Nash equilibrium (SNE). As part of this argument, we generalise the well-known connection between the lowest SNE and the VCG outcome. We then propose a new ranking algorithm, loosely based on the revenue-optimal auction, that uses a reserve price to order the ads (not just to filter them) and give conditions under which it raises more revenue than simply applying that reserve price. Finally, we conduct extensive simulations examining the tradeoffs enabled by different ranking algorithms and show that our proposed algorithm enables superior operating points by a variety of metrics.
<id="bib_RobGuKaKe:2016" class="bibtex">
BibTeX:
@article{RobGuKaKe:2016,
author = {Roberts, Ben and Gunawardena, Dinan and Kash, Ian A. and Key, Peter},
title = {Ranking and Tradeoffs in Sponsored Search Auctions},
journal = {ACM Trans. Econ. Comput.},
publisher = {ACM},
year = {2016},
volume = {4},
number = {3},
pages = {17:1--17:21},
url = {http://doi.acm.org/10.1145/2910576},
doi = {10.1145/2910576}
}
[DOI] [PDF from ACM]
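A toy contrast (hypothetical quality scores and bids, not the paper's exact mechanism) between the conventional use of a reserve price as a filter and using it inside the ranking score, which is the idea the paper explores:

```python
# Illustrative sketch: a reserve r can be used to *order* ads
# (score q * (b - r)) rather than only to filter them
# (rank by q * b, dropping ads with b < r). The two rules can disagree.

def rank(ads, score):
    return sorted(ads, key=score, reverse=True)

ads = [  # (name, quality q, bid b) -- hypothetical numbers
    ("A", 1.0, 1.05),
    ("B", 0.5, 2.00),
]
r = 0.5

filter_then_rank = rank([a for a in ads if a[2] >= r],
                        score=lambda a: a[1] * a[2])
reserve_scored = rank(ads, score=lambda a: a[1] * (a[2] - r))

print([a[0] for a in filter_then_rank])  # ['A', 'B']
print([a[0] for a in reserve_scored])    # ['B', 'A']
```

With these numbers the filter-only rule places the high-quality, low-bid ad first, while scoring with the reserve promotes the high-bid ad; which operating point is preferable is exactly the revenue/welfare trade-off the paper studies.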
Mechanism Design for Mixed Bidders
Yoram Bachrach, Sofia Ceppi, Ian A. Kash, Peter Key and Mohammad Reza Khani.
In Proceedings of the 25th International Conference on World Wide Web (WWW), April 2016.
<id=”absBaCeKaKeK:2016″ class=”abstract”>Abstract: The Generalized Second Price (GSP) auction has appealing properties when ads are simple (text based and identical in size), but does not generalize to richer ad settings, whereas truthful mechanisms such as VCG do. However, a straight switch from GSP to VCG incurs significant revenue loss for the search engine. We introduce a transitional mechanism which encourages advertisers to update their bids to their valuations, while mitigating revenue loss. In this setting, it is easier to propose first a payment function rather than an allocation function, so we give a general framework which guarantees incentive compatibility by requiring that the payment functions satisfy two specific properties. Finally, we analyze the revenue impacts of our mechanism on a sample of Bing data.
<id="bib_BaCeKaKeK:2016" class="bibtex">
BibTeX:
@inproceedings{BaCeKaKeK:2016,
author = {Yoram Bachrach and Sofia Ceppi and Ian A. Kash and Peter Key and Mohammad Reza Khani},
title = {Mechanism Design for Mixed Bidders},
booktitle = {Proceedings of the 25th International Conference on World Wide Web},
publisher = {WWW --- World Wide Web Consortium (W3C)},
year = {2016},
doi = {10.1145/2872427.2882983}
}
[DOI] [PDF]
Using Convolutional Neural Networks to Analyze Function Properties from Images
Yoad Lewenberg, Yoram Bachrach, Ian Kash and Peter Key.
In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 2016, pp. 4363-4364. AAAI Press.
<id=”absLewBaKaKe:2016″ class=”abstract”>Abstract: We propose a system for determining properties of mathematical functions given an image of their graph representation. We demonstrate our approach for two dimensional graphs (curves of single variable functions) and three-dimensional graphs (surfaces of two variable functions), studying the properties of convexity and symmetry. Our method uses a Convolutional Neural Network which classifies functions according to these properties, without using any hand-crafted features. We propose algorithms for randomly constructing functions with convexity or symmetry properties, and use the images generated by these algorithms to train our network. Our system achieves a high accuracy on this task, even for functions where humans find it difficult to determine the function’s properties from its image.
<id="bib_LewBaKaKe:2016" class="bibtex">
BibTeX:
@inproceedings{LewBaKaKe:2016,
author = {Yoad Lewenberg and Yoram Bachrach and Ian Kash and Peter Key},
title = {Using Convolutional Neural Networks to Analyze Function Properties from Images},
booktitle = {Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence},
publisher = {AAAI Press},
year = {2016},
pages = {4363--4364}
}
[PDF]
Pricing the Cloud
Ian Kash and Peter Key.
IEEE Internet Computing, January 2016, Vol. 20, pp. 36-43. IEEE.
<id=”absKashKey:2016″ class=”abstract”>Abstract: Current cloud pricing schemes are fairly simple, but what’s the future of cloud pricing? Here, the authors discuss the economic issues shaping the cloud marketplace and address the open questions these issues yield. They then explore what the current state of research in economics and computer science has to say about some of these questions and how this relates to the future evolution of cloud pricing.
<id="bib_KashKey:2016" class="bibtex">
BibTeX:
@article{KashKey:2016,
author = {Ian Kash and Peter Key},
title = {Pricing the Cloud},
journal = {IEEE Internet Computing},
publisher = {IEEE – Institute of Electrical and Electronics Engineers},
year = {2016},
volume = {20},
pages = {36-43},
doi = {10.1109/MIC.2016.4}
}
[DOI] [PDF Preprint]
Non-Myopic Negotiators See What's Best
Yair Zick, Yoram Bachrach, Ian Kash and Peter Key.
In Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI), July 2015.
<id="absZicBaKaKe:2015" class="abstract">Abstract: We consider revenue negotiation problems in iterative settings. In our model, a group of agents has some initial resources, used in order to generate revenue. Agents must agree on some way of dividing resources, but there's a twist. At every time-step, the revenue shares received at time t are agent resources at time t + 1, and the game is repeated. The key issue here is that the way resources are shared has a dramatic effect on long-term social welfare, so in order to maximize individual long-term revenue one must consider the welfare of others, a behavior not captured by other models of cooperation and bargaining. Our work focuses on homogeneous production functions. We identify conditions that ensure that the socially optimal outcome is an ε-Nash equilibrium. We apply our results to some families of utility functions, and discuss their strategic implications.
<id="bib_ZicBaKaKe:2015" class="bibtex">
BibTeX:
@inproceedings{ZicBaKaKe:2015,
author = {Yair Zick and Yoram Bachrach and Ian Kash and Peter Key},
title = {Non-Myopic Negotiators See What's Best},
booktitle = {Proceedings of the 24th International Joint Conference on Artificial Intelligence},
year = {2015}
}
[PDF]
Mechanism Design for Mixed Ads
Yoram Bachrach, Sofia Ceppi, Ian Kash, Peter Key and Mohammad Reza Khani.
In Ad Auctions Workshop, January 2015.
<id="absBaCeKaKeK:2015" class="abstract">Abstract: We propose a hybrid auction mechanism for sponsored search, where bidders can be truthful or not, and are accordingly treated differently. Our class of hybrid mechanisms gives incentives for non-truthful bidders to bid truthfully, while behaving as a non-truthful auction if no bidders are truthful. Our motivation is that the Generalized Second Price (GSP) auction (the current mechanism of choice) has appealing properties when ads are simple (text based and identical in size). But GSP does not generalize to richer ad settings, whereas truthful mechanisms, such as VCG, do. Hence there are incentives for search platforms to migrate to truthful mechanisms, but a straight switch from GSP to VCG either requires all bidders to instantly bid truthfully or incurs significant revenue loss. We introduce a transitional mechanism which encourages advertisers to update their bids to their valuations, while mitigating revenue loss. The mechanism is equivalent to GSP when nobody has updated their bid, is equivalent to VCG when everybody has updated, and it has the same allocation and payments as the original GSP if bids were in the minimum symmetric Nash equilibrium. In settings where both GSP ads and truthful (TF) ads exist, it is easier to propose a payment function than an allocation function. We give a general framework for these settings to characterize payment functions which guarantee incentive compatibility of truthful ads, by requiring that the payment functions satisfy two properties. Finally, we compare the revenue of our transitional mechanism with revenues of GSP and VCG mechanisms when run on a sample of Bing data.
<id="bib_BaCeKaKeK:2015" class="bibtex">
BibTeX:
@inproceedings{BaCeKaKeK:2015,
author = {Yoram Bachrach and Sofia Ceppi and Ian Kash and Peter Key and Mohammad Reza Khani},
title = {Mechanism Design for Mixed Ads},
booktitle = {Ad Auctions Workshop},
year = {2015}
}
[PDF]
Optimising Trade-offs Among Stakeholders in Ad Auctions
Yoram Bachrach, Sofia Ceppi, Ian A. Kash, Peter Key and David Kurokawa.
In Proceedings of the Fifteenth ACM Conference on Economics and Computation, 2014, pp. 75-92. ACM, New York, NY, USA.
<id="absBaCeKaKeK:2014" class="abstract">Abstract: We examine trade-offs among stakeholders in ad auctions. Our metrics are the revenue for the utility of the auctioneer, the number of clicks for the utility of the users and the welfare for the utility of the advertisers. We show how to optimize linear combinations of the stakeholder utilities, showing that these can be tackled through a GSP auction with a per-click reserve price. We then examine constrained optimization of stakeholder utilities. We use simulations and analysis of real-world sponsored search auction data to demonstrate the feasible trade-offs, examining the effect of changing the allowed number of ads on the utilities of the stakeholders. We investigate both short term effects, when the players do not have the time to modify their behavior, and long term equilibrium conditions. Finally, we examine a combinatorially richer constrained optimization problem, where there are several possible allowed configurations (templates) of ad formats. This model captures richer ad formats, which allow using the available screen real estate in various ways. We show that two natural generalizations of the GSP auction rules to this domain are poorly behaved, resulting in not having a symmetric Nash equilibrium or having one with poor welfare. We also provide positive results for restricted cases.
<id="bib_BaCeKaKeK:2014" class="bibtex">
BibTeX:
@inproceedings{BaCeKaKeK:2014,
author = {Bachrach, Yoram and Ceppi, Sofia and Kash, Ian A. and Key, Peter and Kurokawa, David},
title = {Optimising Trade-offs Among Stakeholders in Ad Auctions},
booktitle = {Proceedings of the Fifteenth ACM Conference on Economics and Computation},
publisher = {ACM},
year = {2014},
pages = {75--92},
doi = {10.1145/2600057.2602879}
}
[DOI] [PDF] [PDF from ACM]
Building a personalized tourist attraction recommender system using crowdsourcing
Yoram Bachrach, Sofia Ceppi, Ian A. Kash, Peter Key, Filip Radlinski, Ely Porat, Michael Armstrong and Vijay Sharma.
In Proceedings of the 2014 International Conference on Autonomous Agents and Multi-agent Systems, 2014, pp. 1631-1632.
<id="absBaCKKRPAS:2014" class="abstract">Abstract: We demonstrate how crowdsourcing can be used to automatically build a personalized tourist attraction recommender system, which tailors recommendations to specific individuals, so different people who use the system each get their own list of recommendations, appropriate to their own traits. Recommender systems crucially depend on the availability of reliable and large-scale data that allows predicting how a new individual is likely to rate items from the catalog of possible items to recommend. We show how to automate the process of generating this data using crowdsourcing, so that such a system can be built even when such a dataset is not initially available. We first find possible tourist attractions to recommend by scraping such information from Wikipedia. Next, we use crowdsourced workers to filter the data and then provide their opinions on these items. Finally, we use machine learning methods to predict how new individuals are likely to rate each attraction, and recommend the items with the highest predicted ratings.
<id="bib_BaCKKRPAS:2014" class="bibtex">
BibTeX:
@inproceedings{BaCKKRPAS:2014,
author = {Bachrach, Yoram and Ceppi, Sofia and Kash, Ian A. and Key, Peter and Radlinski, Filip and Porat, Ely and Armstrong, Michael and Sharma, Vijay},
title = {Building a personalized tourist attraction recommender system using crowdsourcing},
booktitle = {Proceedings of the 2014 international conference on Autonomous agents and multi-agent systems},
year = {2014},
pages = {1631--1632},
url = {http://www.aamas-conference.org/AAMAS/aamas2014/proceedings/aamas/p1631.pdf}
}
[External PDF] [PDF]
The Shared Assignment Game and Applications to Pricing in Cloud Computing
Gideon Blocq, Yoram Bachrach and Peter Key.
In Proceedings of the 2014 International Conference on Autonomous Agents and Multi-agent Systems, 2014, pp. 605-612. International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC.
<id=”absBloBacKey:2014″ class=”abstract”>Abstract: We propose an extension to the Assignment Game in which sellers provide indivisible heterogeneous goods to their buyers. Each good takes up various amounts of resources and each seller has capacity constraints with respect to the total amount of resources it can provide. Hence, the total amount of goods that the seller can provide is dependent on the set of buyers. In this model, we first demonstrate that the core is empty and proceed to suggest a fair allocation of the resulting utility of an optimal match, using the Shapley value. We then examine scenarios where the worth and resource demands of each good are private information of selfish buyers and consider ways in which they can manipulate the system. We show that such Shapley value manipulations are bounded in terms of the gain an agent can achieve by using them. Finally, since this model can be of use when considering elastic resource allocation and utility sharing in cloud computing domains, we provide simulation results which show our approach maximizes welfare and, when used as a pricing scheme, can also increase the revenue of the cloud server providers over what is achieved with the widely-used fixed pricing scheme.
<id="bib_BloBacKey:2014" class="bibtex">
BibTeX:
@inproceedings{BloBacKey:2014,
author = {Blocq, Gideon and Bachrach, Yoram and Key, Peter},
title = {The Shared Assignment Game and Applications to Pricing in Cloud Computing},
booktitle = {Proceedings of the 2014 International Conference on Autonomous Agents and Multi-agent Systems},
publisher = {International Foundation for Autonomous Agents and Multiagent Systems},
year = {2014},
pages = {605--612},
url = {http://dl.acm.org/citation.cfm?id=2615731.2615829}
}
[URL] [PDF]
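The Shapley value underlying the paper's fair allocation can be computed by brute force for small games. A self-contained sketch with a hypothetical three-player characteristic function (not the paper's shared assignment model):

```python
# Background sketch: the Shapley value averages each player's marginal
# contribution over all orders in which the players could join.
from itertools import permutations

def shapley(players, v):
    """Exact Shapley value of characteristic function v by enumeration."""
    value = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        seen = frozenset()
        for p in order:
            value[p] += v(seen | {p}) - v(seen)  # marginal contribution
            seen = seen | {p}
    return {p: value[p] / len(orders) for p in players}

# Hypothetical game: any coalition of two or more players has worth 1.
v = lambda S: 1.0 if len(S) >= 2 else 0.0
print(shapley(["a", "b", "c"], v))  # each player gets 1/3 by symmetry
```

Enumeration costs n! orders, so it only scales to small games; the paper's interest is in the fairness and (bounded) manipulability properties of this value, not in computing it at scale.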
The Architecture of Innovation: Tracking Face-to-face Interactions with Ubicomp Technologies
Chloë Brown, Christos Efstratiou, Ilias Leontiadis, Daniele Quercia, Cecilia Mascolo, James Scott and Peter Key.
In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2014, pp. 811-822. ACM, New York, NY, USA.
<id="absBrEfLQMSK:2014" class="abstract">Abstract: The layouts of the buildings we live in shape our everyday lives. In office environments, building spaces affect employees' communication, which is crucial for productivity and innovation. However, accurate measurement of how spatial layouts affect interactions is a major challenge, and traditional techniques may not give an objective view. We measure the impact of building spaces on social interactions using wearable sensing devices. We study a single organization that moved between two different buildings, affording a unique opportunity to examine how space alone can affect interactions. The analysis is based on two large-scale deployments of wireless sensing technologies: short-range, lightweight RFID tags capable of detecting face-to-face interactions. We analyze the traces to study the impact of the building change on social behavior, which represents a first example of using ubiquitous sensing technology to study how the physical design of two workplaces combines with organizational structure to shape contact patterns.
<id="bib_BrEfLQMSK:2014" class="bibtex">
BibTeX:
@inproceedings{BrEfLQMSK:2014,
author = {Brown, Chloë and Efstratiou, Christos and Leontiadis, Ilias and Quercia, Daniele and Mascolo, Cecilia and Scott, James and Key, Peter},
title = {The Architecture of Innovation: Tracking Face-to-face Interactions with Ubicomp Technologies},
booktitle = {Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing},
publisher = {ACM},
year = {2014},
pages = {811--822},
note = {arXiv preprint arXiv:1406.6829},
url = {http://doi.acm.org/10.1145/2632048.2632056},
doi = {10.1145/2632048.2632056}
}
[DOI] [PDF] [PDF from ACM]
Incentivized optimal advert assignment via utility decomposition
Frank Kelly, Peter Key and Neil Walton.
In Proceedings of the fifteenth ACM conference on Economics and computation, 2014, p. 527.
<id=”bib_KelKeyWal:2014″ class=”bibtex”>
BibTeX:
@inproceedings{KelKeyWal:2014,
author = {Kelly, Frank and Key, Peter and Walton, Neil},
title = {Incentivized optimal advert assignment via utility decomposition},
booktitle = {Proceedings of the fifteenth ACM conference on Economics and computation},
year = {2014},
pages = {527--527}
}
PDF (arXiv:1304.7642v1)
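The per-click reserve price analyzed in the abstract above can be illustrated with a toy sketch (my own illustration, not the paper's mechanism in full: advertiser names and bids are invented, and click-through-rate weighting is ignored). Ads below the reserve are filtered out; each winner pays the maximum of the next-highest eligible bid and the reserve:

```python
def gsp_with_reserve(bids, slots, reserve):
    """Toy GSP auction with a per-click reserve price.

    bids: {advertiser: bid}; returns {winning advertiser: price per click}.
    """
    # Filter out bids below the reserve, then rank by bid (descending).
    eligible = sorted(
        ((b, a) for a, b in bids.items() if b >= reserve), reverse=True
    )
    winners = eligible[:slots]
    prices = {}
    for i, (_, adv) in enumerate(winners):
        # Pay the next-highest eligible bid, floored at the reserve.
        next_bid = eligible[i + 1][0] if i + 1 < len(eligible) else reserve
        prices[adv] = max(next_bid, reserve)
    return prices

# "c" is filtered by the reserve; "a" pays b's bid, "b" pays the reserve.
print(gsp_with_reserve({"a": 3.0, "b": 2.0, "c": 0.5}, slots=2, reserve=1.0))
```

Raising the reserve here trades clicks (fewer eligible ads) against revenue per click, which is the kind of stakeholder trade-off the paper optimizes over.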
Efficient regret bounds for online bid optimisation in budget-limited sponsored search auctions Long Tran-Thanh, Lampros Stavrogiannis, Victor Naroditskiy, Valentin Robu, Nicholas R. Jennings and Peter Key. In Proceedings of the 30th Conference on Uncertainty in AI. Quebec, Canada July 2014.
<id=”bib_TrStNaRJK:2014″ class=”bibtex”>
BibTeX:
@inproceedings{TrStNaRJK:2014,
author = {Tran-Thanh, Long and Stavrogiannis, Lampros and Naroditskiy, Victor and Robu, Valentin and Jennings, Nicholas R and Key, Peter},
title = {Efficient regret bounds for online bid optimisation in budget-limited sponsored search auctions},
booktitle = {Proceedings of the 30th Conference on Uncertainty in AI},
year = {2014}
}
PDF
Ranking and Tradeoffs in Sponsored Search Auctions Ben Roberts, Dinan Gunawardena, Ian A. Kash and Peter Key. In Proc. of the Fourteenth ACM Conf. on Electronic Commerce, 2013, pp. 751-766. ACM.
<id=”absRobGuKaKe:2013″ class=”abstract”>Abstract: In a sponsored search auction, decisions about how to rank ads impose tradeoffs between objectives such as revenue and welfare. In this paper, we examine how these tradeoffs should be made. We begin by arguing that the most natural solution concept to evaluate these tradeoffs is the lowest symmetric Nash equilibrium (SNE). As part of this argument, we generalize the well-known connection between the lowest SNE and the VCG outcome. We then propose a new ranking algorithm, loosely based on the revenue-optimal auction, that uses a reserve price to order the ads (not just to filter them) and give conditions under which it raises more revenue than simply applying that reserve price. Finally, we conduct extensive simulations examining the tradeoffs enabled by different ranking algorithms and show that our proposed algorithm enables superior operating points by a variety of metrics.
<id=”bib_RobGuKaKe:2013″ class=”bibtex”>
BibTeX:
@inproceedings{RobGuKaKe:2013,
author = {Roberts, Ben and Gunawardena, Dinan and Kash, Ian A. and Key, Peter},
title = {Ranking and Tradeoffs in Sponsored Search Auctions},
booktitle = {Proc. of the Fourteenth ACM Conf. on Electronic Commerce},
publisher = {ACM},
year = {2013},
pages = {751--766},
url = {http://doi.acm.org/10.1145/2482540.2482568},
doi = {10.1145/2482540.2482568}
}
[DOI] PDF PDF from ACM
Dwelling on the Negative: Incentivizing Effort in Peer Prediction. Jens Witkowski, Yoram Bachrach, Peter Key and David C. Parkes. In HCOMP. 2013. AAAI.
<id=”absWitBaKePa:2013″ class=”abstract”>Abstract: Agents are asked to rank two objects in a setting where effort is costly and agents differ in quality (which is the probability that they can identify the correct, ground truth, ranking). We study simple output-agreement mechanisms that pay an agent in the case she agrees with the report of another, and potentially penalizes for disagreement through a negative payment. Assuming access to a quality oracle, able to determine whether an agent’s quality is above a given threshold, we design a payment scheme that aligns incentives so that agents whose quality is above this threshold participate and invest effort. Precluding negative payments leads the expected cost of this quality-oracle mechanism to increase by a factor of 2 to 5 relative to allowing both positive and negative payments. Dropping the assumption about access to a quality oracle, we further show that negative payments can be used to make agents with quality lower than the quality threshold choose not to participate, while those above continue to participate and invest effort. Through the appropriate choice of payments, any design threshold can be achieved. This self-selection mechanism has the same expected cost as the cost-minimal quality-oracle mechanism, and thus when using the self-selection mechanism, perfect screening comes for free.
<id=”bib_WitBaKePa:2013″ class=”bibtex”>
BibTeX:
@inproceedings{WitBaKePa:2013,
author = {Witkowski, Jens and Bachrach, Yoram and Key, Peter and Parkes, David C.},
editor = {Hartman, Björn and Horvitz, Eric},
title = {Dwelling on the Negative: Incentivizing Effort in Peer Prediction.},
booktitle = {HCOMP},
publisher = {AAAI},
year = {2013}
}
PDF
Hotspotting-A Probabilistic Graphical Model For Image Object Localization Through Crowdsourcing. Mahyar Salek, Yoram Bachrach and Peter Key. In AAAI-13. July 2013. AAAI.
<id=”absSalBacKey:2013″ class=”abstract”>Abstract: Object localization is an image annotation task which consists of finding the location of a target object in an image. It is common to crowdsource annotation tasks and aggregate responses to estimate the true annotation. While for other kinds of annotations consensus is simple and powerful, it cannot be applied to object localization as effectively due to the task’s rich answer space and inherent noise in responses. We propose a probabilistic graphical model to localize objects in images based on responses from the crowd. We improve upon natural aggregation methods such as the mean and the median by simultaneously estimating the difficulty level of each question and skill level of every participant. We empirically evaluate our model on crowdsourced data and show that our method outperforms simple aggregators both in estimating the true locations and in ranking participants by their ability. We also propose a simple adaptive sourcing scheme that works well for very sparse datasets.
<id=”bib_SalBacKey:2013″ class=”bibtex”>
BibTeX:
@inproceedings{SalBacKey:2013,
author = {Salek, Mahyar and Bachrach, Yoram and Key, Peter},
title = {Hotspotting-A Probabilistic Graphical Model For Image Object Localization Through Crowdsourcing.},
booktitle = {AAAI-13},
publisher = {AAAI},
year = {2013}
}
PDF
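The baseline the Hotspotting abstract improves on, aggregating crowd clicks by coordinate-wise mean or median, is easy to demonstrate (a toy illustration of mine, not the paper's graphical model; the click coordinates are invented). The median's robustness to a careless worker's outlier is exactly why a model that also estimates worker skill can do better still:

```python
import statistics

def aggregate(points, how):
    """Aggregate crowd-sourced (x, y) click positions for one image.

    how: "median" for coordinate-wise median, anything else for the mean.
    """
    xs, ys = zip(*points)
    f = statistics.median if how == "median" else statistics.fmean
    return (f(xs), f(ys))

clicks = [(10, 10), (11, 9), (9, 11), (90, 95)]  # one wild outlier
print(aggregate(clicks, "mean"))    # dragged toward the outlier
print(aggregate(clicks, "median"))  # robust: stays near the target
```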
Fixed and market pricing for cloud services Vineet Abhishek, Ian A. Kash and Peter Key. In NetEcon. 2012.
<id=”absAbhKasKey:2012″ class=”abstract”>Abstract: This paper considers two simple pricing schemes for selling cloud instances and studies the trade-off between them. We characterize the equilibrium for the hybrid system where arriving jobs can choose between fixed or the market based pricing. We provide theoretical and simulation based evidence suggesting that fixed price generates a higher expected revenue than the hybrid system.
<id=”bib_AbhKasKey:2012″ class=”bibtex”>
BibTeX:
@inproceedings{AbhKasKey:2012,
author = {Abhishek, Vineet and Kash, Ian A and Key, Peter},
title = {Fixed and market pricing for cloud services},
booktitle = {NetEcon},
year = {2012},
note = {arXiv preprint arXiv:1201.5621}
}
PDF Updated version on arXiv
Budget optimization for sponsored search: Censored learning in MDPs Kareem Amin, Michael Kearns, Peter Key and Anton Schwaighofer. In Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence (UAI2012). 2012.
<id=”absAmiKeKeSc:2012a” class=”abstract”>Abstract: We consider the budget optimization problem faced by an advertiser participating in repeated sponsored search auctions, seeking to maximize the number of clicks attained under that budget. We cast the budget optimization problem as a Markov Decision Process (MDP) with censored observations, and propose a learning algorithm based on the well-known Kaplan-Meier or product-limit estimator. We validate the performance of this algorithm by comparing it to several others on a large set of search auction data from Microsoft adCenter, demonstrating fast convergence to optimal performance.
<id=”bib_AmiKeKeSc:2012a” class=”bibtex”>
BibTeX:
@inproceedings{AmiKeKeSc:2012a,
author = {Amin, Kareem and Kearns, Michael and Key, Peter and Schwaighofer, Anton},
title = {Budget optimization for sponsored search: Censored learning in MDPs},
booktitle = {Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence (UAI2012)},
year = {2012},
note = {arXiv preprint arXiv:1210.4847}
}
[PDF] Poster in PDF
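The Kaplan-Meier (product-limit) estimator that the algorithm above builds on can be hand-rolled in a few lines. This is a generic sketch of the estimator itself, not the paper's learning algorithm, and the sample data are invented; in the sponsored-search setting, a day on which the budget ran out yields a censored observation of click demand:

```python
def kaplan_meier(times, observed):
    """Estimate a survival curve from possibly right-censored data.

    times:    observation times
    observed: True if the event occurred, False if censored at that time
    Returns a list of (t, S(t)) pairs at each distinct event time.
    """
    data = sorted(zip(times, observed))
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        at_risk = len(data) - i          # subjects still under observation
        events = 0
        while i < len(data) and data[i][0] == t:
            events += data[i][1]         # count events (censorings add 0)
            i += 1
        if events:
            # Product-limit update: S(t) shrinks by the hazard d/n at t.
            survival *= 1.0 - events / at_risk
            curve.append((t, survival))
    return curve

# Times 1 and 3 are events; time 2 is censored (e.g. budget exhausted).
print(kaplan_meier([1, 2, 3], [True, False, True]))
```

The censored point at time 2 contributes to the at-risk count at time 1 but triggers no update of its own, which is how the estimator uses partial information without biasing the curve.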
Quality Expectation-Variance Tradeoffs in Crowdsourcing Contests. Xi Alice Gao, Yoram Bachrach, Peter Key and Thore Graepel. In AAAI. 2012.
<id=”absGaoBaKeGr:2012″ class=”abstract”>Abstract: We examine designs for crowdsourcing contests, where participants compete for rewards given to superior solutions of a task. We theoretically analyze tradeoffs between the expectation and variance of the principal’s utility (i.e. the best solution’s quality), and empirically test our theoretical predictions using a controlled experiment on Amazon Mechanical Turk. Our evaluation method is also crowdsourcing based and relies on the peer prediction mechanism. Our theoretical analysis shows an expectation-variance tradeoff of the principal’s utility in such contests through a Pareto efficient frontier. In particular, we show that the simple contest with 2 authors and the 2-pair contest have good theoretical properties. In contrast, our empirical results show that the 2-pair contest is the superior design among all designs tested, achieving the highest expectation and lowest variance of the principal’s utility.
<id=”bib_GaoBaKeGr:2012″ class=”bibtex”>
BibTeX:
@inproceedings{GaoBaKeGr:2012,
author = {Gao, Xi Alice and Bachrach, Yoram and Key, Peter and Graepel, Thore},
title = {Quality Expectation-Variance Tradeoffs in Crowdsourcing Contests.},
booktitle = {AAAI},
year = {2012}
}
PDF
Repeated auctions under budget constraints: Optimal bidding strategies and equilibria Ramakrishna Gummadi, Peter Key and Alexandre Proutiere. In Eighth Workshop on Ad Auctions. 2012.
<id=”absGumKeyPro:2012″ class=”abstract”>Abstract: How should agents bid in repeated sequential auctions when they are budget constrained? A motivating example is that of sponsored search auctions, where advertisers bid in a sequence of generalized second price (GSP) auctions. These auctions, specifically in the context of sponsored search, have many idiosyncratic features that distinguish them from other models of sequential auctions: First, each bidder competes in a large number of auctions, where each auction is worth very little. Second, the total bidder population is often large, which means it is unrealistic to assume that the bidders could possibly optimize their strategy by modeling specific opponents. Third, the presence of a virtually unlimited supply of these auctions means bidders are necessarily expense constrained. Motivated by these three factors, we first frame the generic problem as a discounted Markov Decision Process for which the environment is independent and identically distributed over time. We also allow the agents to receive income to augment their budget at a constant rate. We first provide a structural characterization of the associated value function and the optimal bidding strategy, which specifies the extent to which agents underbid from their true valuation due to long term budget constraints. We then provide an explicit characterization of the optimal bid shading factor in the limiting regime where the discount rate tends to zero, by identifying the limit of the value function in terms of the solution to a differential equation that can be solved efficiently. Finally, we prove the existence of Mean Field Equilibria for both the repeated second price and GSP auctions with a large number of bidders.
<id=”bib_GumKeyPro:2012″ class=”bibtex”>
BibTeX:
@inproceedings{GumKeyPro:2012,
author = {Gummadi, Ramakrishna and Key, Peter and Proutiere, Alexandre},
title = {Repeated auctions under budget constraints: Optimal bidding strategies and equilibria},
booktitle = {Eighth Workshop on Ad Auctions},
year = {2012}
}
PDF at SSRN
Congestion Games with Agent Failures. Reshef Meir, Moshe Tennenholtz, Yoram Bachrach and Peter Key. In AAAI. 2012.
<id=”absMeiTeBaKe:2012″ class=”abstract”>Abstract: We propose a natural model for agent failures in congestion games. In our model, each of the agents may fail to participate in the game, introducing uncertainty regarding the set of active agents. We examine how such uncertainty may change the Nash equilibria (NE) of the game. We prove that although the perturbed game induced by the failure model is not always a congestion game, it still admits at least one pure Nash equilibrium. Then, we turn to examine the effect of failures on the maximal social cost in any NE of the perturbed game. We show that in the limit case where failure probability is negligible new equilibria never emerge, and that the social cost may decrease but it never increases. For the case of non-negligible failure probabilities, we provide a full characterization of the maximal impact of failures on the social cost under worst-case equilibrium outcomes.
<id=”bib_MeiTeBaKe:2012″ class=”bibtex”>
BibTeX:
@inproceedings{MeiTeBaKe:2012,
author = {Meir, Reshef and Tennenholtz, Moshe and Bachrach, Yoram and Key, Peter},
title = {Congestion Games with Agent Failures.},
booktitle = {AAAI},
year = {2012}
}
PDF
Stochastic Variability in Sponsored Search Auctions: Observations and Models Furcy Pin and Peter Key. In Proceedings of the 12th ACM Conference on Electronic Commerce, New York, NY, USA, 2011, pp. 61-70. ACM.
<id=”absPinKey:2011″ class=”abstract”>Abstract: Sponsored search advertisement slots are currently sold via Generalized Second Price (GSP) auctions. Despite the simplicity of their rules, these auctions are far from being fully understood. Our observations on real ad-auction data show that advertisers usually enter many distinct auctions with different opponents and with varying parameters. We describe some of our findings from these observations and propose a simple probabilistic model taking them into account. This model can be used to predict the number of clicks received by the advertisers and the total price they can expect to pay depending on their bid, or even to estimate the players’ valuations, all at a very low computational cost.
<id=”bib_PinKey:2011″ class=”bibtex”>
BibTeX:
@inproceedings{PinKey:2011,
author = {Pin, Furcy and Key, Peter},
title = {Stochastic Variability in Sponsored Search Auctions: Observations and Models},
booktitle = {Proceedings of the 12th ACM Conference on Electronic Commerce},
publisher = {ACM},
year = {2011},
pages = {61--70},
url = {http://doi.acm.org/10.1145/1993574.1993586},
doi = {10.1145/1993574.1993586}
}
[DOI] PDF PDF from ACM
Dynamic Channel, Rate Selection and Scheduling for White Spaces Bozidar Radunovic, Alexandre Proutiere, Dinan Gunawardena and Peter Key. In Proceedings of the Seventh COnference on Emerging Networking EXperiments and Technologies, New York, NY, USA, 2011, pp. 2:1-2:12. ACM.
<id=”absRadPrGuKe:2011″ class=”abstract”>Abstract: We investigate dynamic channel, rate selection and scheduling for wireless systems which exploit the large number of channels available in the White-space spectrum. We first present measurements of radio channel characteristics from an indoor testbed operating in the 500 to 600 MHz band and comprising 11 channels. We observe significant and unpredictable (non-stationary) variations in the quality of these channels, and demonstrate the potential benefit in throughput from tracking the best channel and also from optimally adapting the transmission rate. We propose adaptive learning schemes able to efficiently track the best channel and rate for transmission, even in scenarios with non-stationary channel condition variations. We also describe a joint scheduling scheme for providing fairness in an Access Point scenario. Finally, we implement the proposed adaptive scheme in our testbed, and demonstrate that it achieves significant throughput improvement (typically from 40% to 100%) compared to traditional fixed channel selection schemes.
<id=”bib_RadPrGuKe:2011″ class=”bibtex”>
BibTeX:
@inproceedings{RadPrGuKe:2011,
author = {Radunovic, Bozidar and Proutiere, Alexandre and Gunawardena, Dinan and Key, Peter},
title = {Dynamic Channel, Rate Selection and Scheduling for White Spaces},
booktitle = {Proceedings of the Seventh COnference on Emerging Networking EXperiments and Technologies},
publisher = {ACM},
year = {2011},
pages = {2:1--2:12},
url = {http://doi.acm.org/10.1145/2079296.2079298},
doi = {10.1145/2079296.2079298}
}
[DOI] PDF PDF from ACM
Efficient and Fair MAC for Wireless Networks with Self-interference Cancellation Nikhil Singh, Dinan Gunawardena, Alexandre Proutiere, Bozidar Radunovic, Horia Vlad Balan and Peter Key. In Proc. of the 2011 International Symposium on Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks, 2011, pp. 94-101. IEEE.
<id=”absSiGuPrRBK:2011″ class=”abstract”>Abstract: Recent advances in PHY layer design demonstrated efficient self-interference cancellation and full-duplex in a single band. Building a MAC that exploits self-interference cancellation is a challenging task. Links can be scheduled concurrently, but only if they either (i) don’t interfere or (ii) allow for self-interference cancellation. Two issues arise: Firstly, it is difficult to construct a schedule that fully exploits the potentials for self-interference cancellation for arbitrary traffic patterns. Secondly, designing an efficient and fair distributed MAC is a daunting task; the issues become even more pronounced when scheduling under the constraints. We propose ContraFlow, a novel MAC that exploits the benefits of self-interference cancellation and increases spatial reuse. We use full-duplex to eliminate hidden terminals, and we rectify decentralized coordination inefficiencies among nodes, thereby improving fairness. Using measurements and simulations we illustrate the performance gains achieved when ContraFlow is used and we obtain both a throughput increase over current systems, as well as a significant improvement in fairness.
<id=”bib_SiGuPrRBK:2011″ class=”bibtex”>
BibTeX:
@inproceedings{SiGuPrRBK:2011,
author = {Singh, Nikhil and Gunawardena, Dinan and Proutiere, Alexandre and Radunovic, Bozidar and Balan, Horia Vlad and Key, Peter},
title = {Efficient and Fair MAC for Wireless Networks with Self-interference Cancellation},
booktitle = {Proc. of the 2011 International Symposium on Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks},
publisher = {IEEE},
year = {2011},
pages = {94-101},
doi = {10.1109/WIOPT.2011.5930070}
}
[DOI] PDF
A Cooperative Approach to Collusion in Auctions Yoram Bachrach, Morteza Zadimoghaddam and Peter Key. SIGecom Exch., New York, NY, USA, March 2011, Vol. 10(1), pp. 17-22. ACM.
<id=”absBacZadKey:2011″ class=”abstract”>Abstract: The elegant Vickrey-Clarke-Groves (VCG) mechanism is well-known for the strong properties it offers: dominant truth-revealing strategies, efficiency and weak budget-balance in quite general settings. Despite this, it suffers from several drawbacks, prominently susceptibility to collusion. By jointly setting their bids, colluders may increase their utility by achieving lower prices for their items. The colluders can use monetary transfers to share this utility, but they must reach an agreement regarding their actions. We analyze the agreements that are likely to arise through a cooperative game theoretic approach, transforming the auction setting into a cooperative game. We examine both the setting of a multi-unit auction as well as path procurement auctions.
<id=”bib_BacZadKey:2011″ class=”bibtex”>
BibTeX:
@article{BacZadKey:2011,
author = {Bachrach, Yoram and Zadimoghaddam, Morteza and Key, Peter},
title = {A Cooperative Approach to Collusion in Auctions},
journal = {SIGecom Exch.},
publisher = {ACM},
year = {2011},
volume = {10},
number = {1},
pages = {17--22},
url = {http://doi.acm.org/10.1145/1978721.1978726},
doi = {10.1145/1978721.1978726}
}
[DOI] PDF PDF from ACM
Path Selection and Multipath Congestion Control Peter Key, Laurent Massoulié and Don Towsley. Commun. ACM, New York, NY, USA, January 2011, Vol. 54(1), pp. 109-116. ACM.
<id=”absKeyMasTow:2011″ class=”abstract”>Abstract: In this paper, we investigate the benefits that accrue from the use of multiple paths by a session coupled with rate control over those paths. In particular, we study data transfers under two classes of multipath control, coordinated control where the rates over the paths are determined as a function of all paths, and uncoordinated control where the rates are determined independently over each path. We show that coordinated control exhibits desirable load balancing properties; for a homogeneous static random paths scenario, we show that the worst-case throughput performance of uncoordinated control behaves as if each user has but a single path (scaling like log(log(N))/log(N) where N is the system size, measured in number of resources), whereas coordinated control yields a worst-case throughput allocation bounded away from zero. We then allow users to change their set of paths and introduce the notion of a Nash equilibrium. We show that both coordinated and uncoordinated control lead to Nash equilibria corresponding to desirable welfare maximizing states, provided in the latter case, the rate controllers over each path do not exhibit any round-trip time (RTT) bias (unlike TCP Reno). Finally, we show in the case of coordinated control that more paths are better, leading to greater welfare states and throughput capacity, and that simple path reselection policies that shift to paths with higher net benefit can achieve these states.
<id=”bib_KeyMasTow:2011″ class=”bibtex”>
BibTeX:
@article{KeyMasTow:2011,
author = {Key, Peter and Massoulié, Laurent and Towsley, Don},
title = {Path Selection and Multipath Congestion Control},
journal = {Commun. ACM},
publisher = {ACM},
year = {2011},
volume = {54},
number = {1},
pages = {109--116},
url = {http://doi.acm.org/10.1145/1866739.1866762},
doi = {10.1145/1866739.1866762}
}
[DOI] PDF from ACM
Who’s Hogging the Bandwidth?: The Consequences Of Revealing The Invisible In the Home Marshini Chetty, Richard Banks, Richard Harper, Tim Regan, Abigail Sellen, Christos Gkantsidis, Thomas Karagiannis and Peter Key. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 2010, pp. 659-668. ACM.
<id=”absChBHRSGKK:2010″ class=”abstract”>Abstract: As more technologies enter the home, householders are burdened with the task of digital housekeeping—managing and sharing digital resources like bandwidth. In response to this, we created a domestic tool for bandwidth management called Home Watcher. Our field trial of Home Watcher showed that when resource contention amongst different household members is made visible, people’s understanding of bandwidth changes and household politics are revealed. In this paper, we describe the consequences of showing real time resource usage in a home, and how this varies depending on the social make up of the household.
<id=”bib_ChBHRSGKK:2010″ class=”bibtex”>
BibTeX:
@inproceedings{ChBHRSGKK:2010,
author = {Marshini Chetty and Richard Banks and Richard Harper and Tim Regan and Abigail Sellen and Christos Gkantsidis and Thomas Karagiannis and Peter Key},
title = {Who's Hogging the Bandwidth?: The Consequences Of Revealing The Invisible In the Home},
booktitle = {Proceedings of the SIGCHI Conference on Human Factors in Computing Systems},
publisher = {ACM},
year = {2010},
pages = {659--668},
url = {http://doi.acm.org/10.1145/1753326.1753423},
doi = {10.1145/1753326.1753423}
}
[DOI] PDF PDF from ACM
Rethinking Indoor Wireless Mesh Design: Low Power, Low Frequency, Full-duplex Bozidar Radunovic, Dinan Gunawardena, Peter Key, Alexandre Proutiere, Nikhil Singh, Vlad Balan and Gerald Dejean. In WiMesh (SECON Workshop). 2010. IEEE.
<id=”absRaGuKPSBD:2010″ class=”abstract”>Abstract: Existing indoor WiFi networks in the 2.4 GHz and 5 GHz bands use too much transmit power, needed because the high carrier frequency limits signal penetration and connectivity. Instead, we propose a novel indoor wireless mesh design paradigm, based on Low Frequency, using the newly freed white spaces previously used as analogue TV bands, and Low Power – 100 times less power than currently used. Preliminary experiments show that this maintains a similar level of connectivity and performance to existing networks. It also yields more uniform connectivity, thus simplifying MAC and routing protocol design. We also advocate full-duplex networking in a single band, which becomes possible in this setting (because we operate at low frequencies). It potentially doubles the throughput of each link and eliminates hidden terminals.
<id=”bib_RaGuKPSBD:2010″ class=”bibtex”>
BibTeX:
@inproceedings{RaGuKPSBD:2010,
author = {Bozidar Radunovic and Dinan Gunawardena and Peter Key and Alexandre Proutiere and Nikhil Singh and Vlad Balan and Gerald Dejean},
title = {Rethinking Indoor Wireless Mesh Design: Low Power, Low Frequency, Full-duplex},
booktitle = {WiMesh (SECON Workshop)},
publisher = {IEEE},
year = {2010}
}
PDF