When you say (DCOP) privacy, what do you mean? Categorization of DCOP privacy and insights on internal constraint privacy

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

15 Scopus citations

Abstract

Privacy preservation is a main motivation for using the DCOP model, and as such it has been the subject of extensive research. The present paper provides, for the first time, a categorization of all possible DCOP privacy types. The paper focuses on one specific type, internal constraint privacy, which is highly relevant for models that enable asymmetric payoffs (PEAV-DCOP and ADCOP). An analysis of the run of two algorithms, one for ADCOP and one for PEAV, reveals that both models lose some internal constraint privacy.
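The asymmetric-payoff setting the abstract refers to can be sketched as follows. This is a minimal illustration with made-up cost tables and agent names, not the paper's notation or algorithms: in a standard DCOP a binary constraint has one shared cost table, while in asymmetric models (such as ADCOP or PEAV-DCOP) each endpoint holds a private side of the constraint.

```python
# Hedged sketch (agents, domains, and costs are illustrative, not from the paper):
# a binary constraint between agents a1 and a2, each choosing a value in {0, 1}.

# Symmetric DCOP: one shared cost table, known to both endpoints.
symmetric_cost = {(0, 0): 3, (0, 1): 1, (1, 0): 1, (1, 1): 4}

# Asymmetric setting: each agent holds a private side of the constraint;
# the joint cost of an assignment is the sum of the two private sides.
cost_a1 = {(0, 0): 2, (0, 1): 0, (1, 0): 1, (1, 1): 3}  # a1's private payoffs
cost_a2 = {(0, 0): 1, (0, 1): 1, (1, 0): 0, (1, 1): 1}  # a2's private payoffs

def joint_cost(v1, v2):
    """Total cost of the joint assignment (a1=v1, a2=v2)."""
    return cost_a1[(v1, v2)] + cost_a2[(v1, v2)]

# Internal constraint privacy concerns the private tables themselves: if,
# during search, a1 learns that joint_cost(0, 0) == 3 and already knows
# its own entry cost_a1[(0, 0)] == 2, it can deduce a2's private entry.
# This inference is the kind of leakage the paper's analysis examines.
assert joint_cost(0, 0) == 3
assert joint_cost(1, 1) == 4
```

The point of the sketch is only that exchanging joint (aggregated) costs can still reveal a neighbor's private side by subtraction, which is why algorithms for asymmetric models can lose some internal constraint privacy.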

Original language: English
Title of host publication: ICAART 2012 - Proceedings of the 4th International Conference on Agents and Artificial Intelligence
Pages: 380-386
Number of pages: 7
State: Published - 2012
Externally published: Yes
Event: 4th International Conference on Agents and Artificial Intelligence, ICAART 2012 - Vilamoura, Algarve, Portugal
Duration: 6 Feb 2012 - 8 Feb 2012

Publication series

Name: ICAART 2012 - Proceedings of the 4th International Conference on Agents and Artificial Intelligence
Volume: 1

Conference

Conference: 4th International Conference on Agents and Artificial Intelligence, ICAART 2012
Country/Territory: Portugal
City: Vilamoura, Algarve
Period: 6/02/12 - 8/02/12

Keywords

  • Constraint privacy
  • Distributed constraint optimization
