On (not) Trusting Trusted Hardware
- Name: On (not) Trusting Trusted Hardware
- Venue: 252 / BBB
- Date: 2025-10-07
- Speaker:
- Time: 15:45

This presentation shows how the advantages of secure multi-party computation (MPC) and enclaves can be combined to obtain protocols that offer better performance and/or better security than pure MPC protocols, without relying entirely on trust in the enclaves used. This trust is not a binary choice: depending on the level of trust one assumes, a protocol can offer different security properties, and different performance improvements become possible. The presentation further shows that complete trust in one's own hardware, as is commonly assumed for MPC protocols, is not necessary, and that secure computation can still be realized even when one's own hardware is not fully trusted. Specifically, we have constructed the currently fastest protocol for private set intersection using only enclaves with reduced trust assumptions. Beyond its many applications, private set intersection is an interesting problem for this study because many specialized, high-performance MPC protocols exist for it, providing a strong baseline for comparison. Our construction reduces the performance cost by utilizing enclaves while assuming only fairly low confidence in their security. The presentation will focus on this application.
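To make the private set intersection functionality concrete, here is a minimal, deliberately naive sketch: two parties exchange only hashes of their elements, so matching hashes reveal the intersection and nothing else is exchanged in the clear. This is illustration only, not the protocol from the talk: plain hashing is brute-forceable over small input domains, which is exactly why specialized MPC or enclave-based PSI protocols are needed.

```python
import hashlib

def hash_set(elements):
    # Map SHA-256 digest -> element for one party's set.
    return {hashlib.sha256(e.encode()).hexdigest(): e for e in elements}

def intersect(my_hashed, their_digests):
    # Only elements whose digests appear on both sides are revealed.
    return {e for h, e in my_hashed.items() if h in their_digests}

mine = hash_set({"alice@example.org", "bob@example.org", "carol@example.org"})
theirs = hash_set({"bob@example.org", "dave@example.org"})
print(intersect(mine, set(theirs)))  # -> {'bob@example.org'}
```

A secure PSI protocol provides the same input/output behavior as this sketch while also hiding the non-matching elements from a computationally unbounded or brute-forcing adversary.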
For another application, federated learning of decision trees, we construct a specific MPC protocol that deliberately leaves a well-defined amount of information unprotected. We show how its security can be strengthened by enclaves, leading to different levels of security depending on the assumptions one is willing to make about the enclave. If the enclaves are completely secure, the enhanced protocol keeps all information secret; if the enclaves have memory-access side channels, the protocol leaks the same well-defined amount of information; and if the enclaves suffer the worst possible side channels, the loss of secrecy is no greater than without enclaves. In addition, the dissertation shows that even the trust in one's own hardware that is normally taken for granted need not simply be assumed. We show how secure computation can be realized with even fewer trust assumptions than normal MPC requires, at the cost of performance. For this result, we take an arbitrary execution of a virtual machine and record its non-deterministic behavior. A second system, possibly from a different vendor, then verifies this computation, so that security is achieved as long as at least one of the two systems behaves honestly.
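The record-and-verify idea can be sketched in a few lines (a toy illustration under our own assumptions, not the actual system): the first machine logs every non-deterministic value it consumes, and the second machine replays the deterministic computation with those recorded values substituted in, accepting only if the results match.

```python
import random

def record_run(seed):
    # Untrusted machine A: execute the computation and log every
    # non-deterministic value consumed (random inputs stand in here
    # for real-world non-determinism such as I/O or interrupts).
    rng = random.Random(seed)
    log, total = [], 0
    for _ in range(5):
        v = rng.randint(0, 9)  # a non-deterministic input
        log.append(v)
        total += v             # the deterministic part of the computation
    return total, log

def verify_run(claimed_total, log):
    # Machine B (possibly from a different vendor): replay the
    # deterministic computation with the recorded values substituted,
    # and accept only if the result matches the claim.
    return sum(log) == claimed_total

result, log = record_run(2025)
print(verify_run(result, log))      # honest execution verifies
print(verify_run(result + 1, log))  # a tampered result is rejected
```

Because the replay is fully determined by the log, a dishonest result from machine A is caught by machine B, and vice versa; only if both machines misbehave does verification fail to help.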