BEGIN:VCALENDAR
PRODID:-//eluceo/ical//2.0/EN
VERSION:2.0
CALSCALE:GREGORIAN
BEGIN:VEVENT
UID:www.tcs.tifr.res.in/event/834
DTSTAMP:20230914T125940Z
SUMMARY:Generative Adversarial Privacy: A Context-Aware Approach to Privacy
 -Guaranteed Data Publishing
DESCRIPTION:Speaker: Lalitha Sankar (Arizona State University\nSchool of El
 ectrical\, Computer and Energy Engineering\n551 E. Tyler Mall\, Room 585\n
 Tempe\, AZ 85281\nUnited States of America)\n\nAbstract: \nPreserving the 
 utility of published datasets while simultaneously providing provable priv
 acy guarantees is a well-known challenge. On the one hand\, context-free p
 rivacy solutions\, such as differential privacy\, provide strong privacy g
 uarantees\, but often lead to a significant reduction in utility. On the o
 ther hand\, context-aware privacy solutions\, such as information theoreti
 c privacy\, achieve an improved privacy-utility tradeoff\, but assume that
  the data curator has access to dataset statistics. We circumvent these li
 mitations by introducing a novel context-aware data-driven privacy framewo
 rk called generative adversarial privacy (GAP). GAP leverages recent advan
 cements in generative adversarial networks (GANs) to allow the data holder
  to learn privatization schemes from the dataset itself. Under GAP\, learn
 ing the privacy mechanism is formulated as a constrained minimax game betw
 een two players: a privatizer that sanitizes the dataset in a way that lim
 its the risk of inference attacks on the private variables\, and an advers
 ary that tries to infer the private variables from the sanitized dataset.
   To evaluate the performance of GAP\, we investigate two simple (yet can
 onical) statistical dataset models: (a) the binary data model\, and (b) th
 e binary Gaussian mixture model. For both models\, we derive game-theoreti
 cally optimal minimax privacy mechanisms\, and show that the privacy mecha
 nisms learned from data (in a generative adversarial fashion) match the th
 eoretically optimal ones. This demonstrates that our framework can be easi
 ly applied in practice\, even in the absence of dataset statistics (joint 
 work with Chong Huang (ASU)\, Peter Kairouz (Stanford)\, Xiao Chen (Stanfo
 rd)\, and Ram Rajagopal (Stanford)).\nBio: Lalitha Sankar received the B.Te
 ch degree from the Indian Institute of Technology\, Bombay\, the M.S. degr
 ee from the University of Maryland\, and the Ph.D. degree from Rutgers Univ
 ersity. She is presently an Assistant Professor in the ECEE department at 
 Arizona State University. Prior to this\, she was an Associate Research Sc
 holar at Princeton University. Following her doctorate\, Dr. Sankar was a r
 ecipient of a three-year Science and Technology teaching postdoctoral fell
 owship from the Council on Science and Technology at Princeton University.
  Her research interests include information privacy and cybersecurity in d
 istributed and cyber-physical systems. For her doctoral work\, she receive
 d the 2007-2008 Electrical Engineering Academic Achievement Award from Rut
 gers University. She received the IEEE Globecom 2011 Best Paper award for 
 her work on side-information privacy and the US National Science Foundatio
 n CAREER award in 2014.\n
URL:https://www.tcs.tifr.res.in/web/events/834
DTSTART;TZID=Asia/Kolkata:20171221T110000
DTEND;TZID=Asia/Kolkata:20171221T120000
LOCATION:A-201 (STCS Seminar Room)
END:VEVENT
END:VCALENDAR
