Computations are often outsourced by computationally weak clients to computationally powerful external entities. Cloud computing is an obvious example of outsourced computation; outsourced chip manufacturing at off-shore foundries, or "fabs," is another (perhaps less obvious) example. Indeed, many major semiconductor design companies have now adopted the so-called "fabless" model. However, outsourcing raises a fundamental question of trust: how can the client ascertain that the outsourced computations were performed correctly? First, we describe the design of "verifiable ASICs" to address the problem of secure chip fabrication at off-shore foundries. Leveraging interactive proof (IP) protocols, we enable untrusted chips to provide run-time proofs of the correctness of the computations they perform. These proofs are checked by a slower verifier chip fabricated at a trusted foundry. The proposed approach is the first to defend against arbitrary Trojan misbehaviors (Trojans are malicious modifications of a chip's blueprint by the foundry) while providing formal and comprehensive soundness guarantees.
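To make the interactive-proof idea concrete, the sketch below implements the classic sum-check protocol, the core subroutine of many IP systems, for a small multilinear polynomial over a prime field. The example polynomial, the field modulus, and the dimensions are illustrative assumptions for this sketch, not details of the verifiable-ASICs design itself: the point is only that a verifier can check a claimed sum far more cheaply than recomputing it.

```python
import random

P = 2**61 - 1  # prime field modulus (illustrative choice)
n, d = 3, 1    # number of variables; max degree per variable

def g(x):
    # example multilinear polynomial: g(x1,x2,x3) = 2*x1*x2 + x2*x3 + x1
    x1, x2, x3 = x
    return (2*x1*x2 + x2*x3 + x1) % P

def lagrange_eval(evals, r):
    # evaluate the univariate polynomial given by its values at 0..d, at point r
    total = 0
    for i, yi in enumerate(evals):
        num, den = 1, 1
        for j in range(len(evals)):
            if i != j:
                num = num * (r - j) % P
                den = den * (i - j) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def round_poly(fixed, i):
    # prover's message in round i: values at X = 0..d of the partial sum
    # over the remaining boolean variables
    evals = []
    for t in range(d + 1):
        s = 0
        for mask in range(2 ** (n - i - 1)):
            rest = [(mask >> k) & 1 for k in range(n - i - 1)]
            s = (s + g(fixed + [t] + rest)) % P
        evals.append(s)
    return evals

# prover's claimed sum of g over the boolean hypercube {0,1}^3
H = sum(g([a, b, c]) for a in (0, 1) for b in (0, 1) for c in (0, 1)) % P

claim, fixed = H, []
for i in range(n):
    evals = round_poly(fixed, i)
    # verifier's cheap consistency check: g_i(0) + g_i(1) must equal the claim
    assert (evals[0] + evals[1]) % P == claim
    r = random.randrange(P)          # verifier's random challenge
    claim = lagrange_eval(evals, r)  # reduced claim for the next round
    fixed.append(r)

# final check: one evaluation of g at a random point
assert claim == g(fixed) % P
```

Note that the verifier's work per round is constant (two additions and one interpolation), while the prover does the exponential-size summation; this asymmetry is what lets a slow trusted verifier chip check a fast untrusted one.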
Next, we examine the "MLaaS" setting, in which the training and/or inference of machine learning models is outsourced to the cloud. MLaaS introduces both integrity and privacy risks. On the integrity side, ML models can be maliciously trained, or can provide incorrect outputs during inference. We describe tailored IP protocols for a special class of deep networks that use only polynomial activation functions. Finally, MLaaS also introduces privacy risks for users, since they share their sensitive data with untrusted cloud applications. Privacy-preserving cryptographic methods provide a way out, but are exorbitantly expensive. We show how deep network architectures can be tailored to reduce cryptographic costs by up to two orders of magnitude.
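As a minimal illustration of why polynomial activations matter, the sketch below (with hypothetical layer sizes and a squaring activation, chosen for this example rather than taken from the text) builds a tiny two-layer network whose forward pass is a low-degree polynomial of its input. Such a network is directly expressible as an arithmetic circuit, the representation that both IP protocols and privacy-preserving cryptography operate on, whereas non-polynomial activations like ReLU are not.

```python
import numpy as np

rng = np.random.default_rng(0)

def square_act(x):
    # polynomial activation: x^2 elementwise, a low-degree stand-in for ReLU
    return x * x

# toy two-layer network (illustrative sizes: 4 -> 8 -> 3)
W1 = rng.standard_normal((4, 8))
W2 = rng.standard_normal((8, 3))

def forward(x):
    # the whole pass is a degree-2 polynomial in x: linear, square, linear
    return square_act(x @ W1) @ W2

x = rng.standard_normal((1, 4))
y = forward(x)
```

Because the forward pass is exactly quadratic in the input, scaling the input by a scales the output by a^2, a property one can check numerically; more generally, each polynomial-activation layer multiplies the circuit's degree by a constant, keeping the whole network a bounded-degree arithmetic circuit.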