Removing Trust Bottlenecks with TEEs

July 21, 2025

A resilient cryptocurrency application resists failure even if one of its components is compromised. In theory, each part can be swapped out for another with no loss of functionality. This reduces the risk of censorship or harm from any compromised party. In practice, however, security breaches occur when no such alternatives are available. The solution is to build failsafes or redundant components so that more than one piece of a system must be compromised for real harm to occur. Yet these defenses should not rely solely on software safeguards.

For applications that handle large amounts of capital or highly sensitive information, secure software alone is not enough. Skilled attackers are known to breach developers’ machines, build infrastructure, and even entire software supply chains to reach high-value targets like centralised exchanges or DeFi protocols. As of 2025, losses from such hacks range from millions to more than a billion US dollars. Our defenses must go deeper.

Trusted hardware is a powerful solution. Also known as secure enclaves or trusted execution environments (TEEs), these devices isolate computation and data to ensure confidentiality and integrity.

For some applications, secure hardware is itself a trust bottleneck: users must either trust hardware manufacturers and operators, or manage their own private keys. Many end-users of TEE-based applications understand and accept these risks, and are happy with the tradeoffs. This post, however, proposes use cases where secure hardware can strictly improve security without introducing new trust bottlenecks: multisignature wallets and transaction simulation.

Use case #1: Multisignature wallets

User interfaces are a common single point of failure in otherwise secure multisignature (multisig) wallet systems. Although these wallets require multiple signers to approve transactions, the signers often use the same user interface (UI). If this UI is compromised, signers may inadvertently sign a malicious transaction that moves funds to unauthorised parties.

In a February 2025 hack, US$1.5 billion worth of funds was stolen from a centralised exchange’s multisig, even though multiple signers had approved the transaction and the on-chain contract was secure. The attacker had compromised the multisig’s official UI, and one or more signers from the victim organisation did not check the calldata on their hardware wallets before approving the transaction. Instead, the compromised UI displayed what appeared to be a routine internal transaction.

The affected multisig product had a strong reputation for security, so this hack surprised the industry. The attackers were extremely sophisticated: they had compromised a developer’s machine with code that specifically targeted the victim. The incident was unfortunate, but we can learn from it. Specifically, multisigs can be redesigned to include trusted hardware signers that monitor for suspicious transactions and cancel them if needed.

This approach provides defense in depth. The software that runs on trusted hardware would not share the same supply chain as regular signer UIs, so attackers would have to compromise multiple independent systems to successfully steal funds. Furthermore, a well-built secure enclave is already challenging to breach, since it separates sensitive data and computation from its host at the hardware level.

How it works

To see how secure enclaves might be able to mitigate such attacks, let us first establish some assumptions about the problem domain:

  1. Hardware wallets do not leak private keys, but they do not always display readable transaction data.
  2. All user interfaces outside of a hardware wallet are inherently insecure.
  3. Users will always blind-sign transactions even if they are able to verify calldata on their hardware wallets.

These assumptions held during the February 2025 exchange hack described above: even though secure hardware wallets were used (1), the attackers compromised the multisig UI (2), the signers did not verify the calldata they signed (3), and a malicious transaction was executed.

Now, let us propose a new design for a multisig wallet. We dub it an NMPQ multisig. It has:

  1. M human signers, of whom at least N must approve each transaction.
  2. Q TEE-based signers, of whom at least P must co-approve a transaction for it to execute immediately.
  3. A time delay, after which a transaction with sufficient human approvals executes even without TEE approval.

Consider this example. Alice, Bob, and Charlie are the human signers, and at least two of the three must approve any transaction. Suppose Alice and Bob sign a transaction. The transaction, however, cannot execute until some time period (e.g. 12 hours) elapses, unless the single TEE signer also approves it.

In this scenario, the TEE signer acts as a failsafe. It can prevent theft if Alice and Bob fall for a compromised multisig UI. Furthermore, the TEE signer can be programmed with policies by which it automatically approves or rejects transactions, reducing the need for human intervention. Finally, in the unlikely event that the TEE signer fails to sign a legitimate transaction, the transaction can still be executed after the time delay.
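To make the rule concrete, here is a minimal Python sketch of the execution logic just described. It assumes the parameters of the example above (two-of-three human signers, a single TEE signer, and a 12-hour delay); all names and values are illustrative, not a reference implementation.

```python
import time

# Illustrative NMPQ parameters from the example above (assumptions):
# 2-of-3 human signers, one TEE signer, and a 12-hour fallback delay.
N_REQUIRED = 2
HUMAN_SIGNERS = {"alice", "bob", "charlie"}
DELAY_SECONDS = 12 * 60 * 60

def can_execute(approvals: set, tee_approved: bool, queued_at: float, now: float) -> bool:
    """Decide whether a queued transaction may execute under the NMPQ rule."""
    # A transaction always needs N valid human approvals.
    if len(approvals & HUMAN_SIGNERS) < N_REQUIRED:
        return False
    # The TEE signer can co-approve for immediate execution...
    if tee_approved:
        return True
    # ...otherwise the transaction must wait out the delay, so a failed or
    # offline TEE signer cannot censor a legitimate transaction forever.
    return now - queued_at >= DELAY_SECONDS

# Alice and Bob approve; without the TEE signer, execution waits 12 hours.
t0 = time.time()
assert not can_execute({"alice", "bob"}, tee_approved=False, queued_at=t0, now=t0)
assert can_execute({"alice", "bob"}, tee_approved=True, queued_at=t0, now=t0)
assert can_execute({"alice", "bob"}, tee_approved=False, queued_at=t0, now=t0 + DELAY_SECONDS)
```

Note how the time delay is what keeps the TEE signer from becoming a new trust bottleneck: it can only delay a legitimate transaction, never block it outright.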

Use case #2: Transaction simulation

Modern wallet UIs, including those of multisigs, simulate transactions to highlight the expected effects on balances or contract state. Such UIs, however, are a single point of failure. A malicious UI could falsely report that the outcome of a transaction is within normal bounds, leading to theft.

Trusted hardware could mitigate this risk by acting as a second channel for transaction simulation. The system would be built with a separate, minimal, and hardened software supply chain, and would route results to users via a device separate from the one that runs their primary multisig UI. If the simulation results differ between the two devices, users will know that the transaction payload is not what it seems.
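As a rough illustration, the Python sketch below compares the balance changes reported by the two channels. The function name, account labels, and report format are hypothetical; real simulators would report richer state diffs.

```python
def diff_simulations(ui_result: dict, tee_result: dict) -> dict:
    """Return every account whose reported balance change differs between
    the primary UI's simulation and the TEE-based simulation."""
    accounts = set(ui_result) | set(tee_result)
    return {
        acct: (ui_result.get(acct, 0), tee_result.get(acct, 0))
        for acct in accounts
        if ui_result.get(acct, 0) != tee_result.get(acct, 0)
    }

# Hypothetical example: a compromised UI reports a routine internal transfer,
# while the independent TEE channel reveals funds flowing to an unknown address.
ui_report = {"exchange_hot": -1000, "exchange_cold": 1000}
tee_report = {"exchange_hot": -1000, "attacker": 1000}
print(diff_simulations(ui_report, tee_report))
# e.g. {'exchange_cold': (1000, 0), 'attacker': (0, 1000)}
```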

How it works

A transaction simulation system that runs in a secure enclave would access the latest block of a blockchain and the current state of relevant accounts. It would verify that each block it sees is canonical according to the consensus rules of the chain, and that the state data it accesses is cryptographically valid. Unlike a full node, it does not need to replay every transaction on the chain; it only needs to verify state and execute the specific transactions and contract calls being simulated. Finally, it would expose an authenticated API for users to query.
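A minimal Python sketch of this flow follows. Every helper here is a placeholder assumption: a real system would embed light-client logic for the consensus check, Merkle (or Verkle) proof verification for state, and remote attestation for its API responses.

```python
from dataclasses import dataclass

@dataclass
class BlockHeader:
    number: int
    state_root: bytes
    finality_proof: bytes  # e.g. validator signatures attesting to the block

def light_client_verify(header: BlockHeader) -> bool:
    """Placeholder consensus check: a real enclave runs light-client
    verification so it need not trust the host that relayed the header."""
    return bool(header.finality_proof)

def verify_state_proof(state_root: bytes, account: str, balance: int, proof: bytes) -> bool:
    """Placeholder for a Merkle/Verkle proof check against the state root."""
    return bool(proof)

def simulate_in_enclave(header: BlockHeader, states: dict, proofs: dict, tx: dict) -> dict:
    # 1. Reject headers that are not canonical under the chain's consensus rules.
    if not light_client_verify(header):
        raise ValueError("header failed consensus verification")
    # 2. Reject state the host cannot prove against the header's state root,
    #    so a malicious host cannot feed the enclave fabricated balances.
    for account, balance in states.items():
        if not verify_state_proof(header.state_root, account, balance, proofs[account]):
            raise ValueError(f"invalid state proof for {account}")
    # 3. Execute only the transaction being simulated (a bare value transfer
    #    here); unlike a full node, nothing else is replayed.
    result = dict(states)
    result[tx["from"]] -= tx["value"]
    result[tx["to"]] = result.get(tx["to"], 0) + tx["value"]
    # 4. In practice, the result would be served over an authenticated API,
    #    e.g. signed with a key bound to the enclave's attestation.
    return result
```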

At the time of writing, there are no off-the-shelf solutions, but the component pieces exist. Teams that build such systems would repurpose existing full node and light client code for secure hardware platforms, which have networking and storage architectures that differ from those of regular servers.

Special mention: ZK provers

Another use case for TEEs is generating zero-knowledge proofs. This can provide performance benefits for privacy-preserving software, as well as redundancy and extra security for layer-2 blockchains. This topic, however, is lengthy and complex, and beyond the scope of this post.

Conclusion

Secure hardware can increase the security of distributed systems even though it requires trust in manufacturers and operators. Since it can provide alternative paths to mission-critical computation, it can thwart supply-chain attacks and remove single points of failure. This post has described two common applications where trusted hardware adds security and reduces trust bottlenecks, and readers may find it worthwhile to discover further use cases where the same holds.

Koh Wei Jie