One of the biggest challenges to adopting new technology is the friction of changing standards and interfaces. Getting people - especially diverse groups in an ecosystem - to rewrite software is hard. In permissionless systems, it’s even harder since you may need to convince users to migrate individually. We’re seeing this challenge play out now with private payments and adoption by wallets. I’m writing down some early thoughts here to gather input from others thinking about similar problems.
A risk I see with the growing interest in TEEs is that they’ll become the default way to achieve technically enforced programmable privacy. If that happens, it might make it much harder for better cryptographic alternatives (like FHE or MPC) to gain adoption later, even once they’re mature and performant. To avoid this, it would be ideal if our software layers were designed so that the “backend” providing security could be swapped out easily.
I’m not yet sure what this entails—partly because I’m still working out what it means to design software for TEEs. I’d love input from others on how to think about this problem.
Memory Trace Obliviousness (MTO) and TEEs
Memory Trace Obliviousness (MTO) means that an adversary observing a program’s memory accesses cannot infer its secret inputs. Since program instructions live in memory, we assume the adversary can see which instructions and memory locations are accessed (though not the encrypted data itself).
There’s already extensive academic work on efficient MTO techniques and compilers that automatically transform ordinary programs into MTO ones. Oblivious Labs is working on productionizing such a compiler, and Elaine Shi gave an excellent talk on this topic at MEV-SBC. The compiler only needs the developer to specify which inputs are confidential—otherwise, it must conservatively treat everything as private.
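To make the compiler's job concrete, here is a toy Python sketch (not a real MTO compiler output, and Python itself offers no constant-time guarantees) of the kind of transformation involved: a secret-dependent branch is rewritten into a branchless select so the sequence of executed instructions no longer depends on the secret.

```python
def leaky_max(a, b):
    # Non-oblivious: which branch executes reveals whether a > b,
    # even if a and b themselves are encrypted.
    if a > b:
        return a
    return b

def oblivious_max(a, b):
    # Oblivious version: no secret-dependent branch. The comparison
    # result is turned into a 0/1 selector and both inputs are
    # combined arithmetically, so the instruction trace is fixed.
    gt = int(a > b)            # 1 if a > b, else 0
    return gt * a + (1 - gt) * b
```

A compiler targeting a TEE or MPC backend would apply this kind of rewrite automatically to every branch that depends on data marked confidential.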
MTO is necessary for TEEs (especially when assuming a physical adversary) and for certain forms of interactive MPC. Since thresholdized FHE is technically a form of MPC, I’m not sure whether MTO is universally required there—would appreciate input from others more familiar with this.
Available Instructions
The efficiency of MTO programs depends heavily on the instruction set available to the compiler or programmer. For instance, we could expose an oblivious_load instruction that implements an optimized ORAM for a specific memory region (example), reducing the need for software-level access indirection.
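For intuition, here is the baseline software technique that such an `oblivious_load` instruction would replace: a linear scan that touches every slot of the memory region, so the access pattern is independent of the secret index. (This is an illustrative Python sketch; a hardware ORAM would achieve the same property in sublinear time.)

```python
def oblivious_load(memory, secret_index):
    # Touch every slot so an observer of the access pattern learns
    # nothing about secret_index. Cost is O(n) per load, which is
    # exactly what a hardware oblivious_load / ORAM would improve on.
    result = 0
    for i, value in enumerate(memory):
        match = int(i == secret_index)          # 1 only on the wanted slot
        result = match * value + (1 - match) * result
    return result
```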
Other details of the backend could also come into play. For example, in the TEE setting, we may want the compiled program to split secrets into shares, a technique known as "software masking," as a means to obscure power side channels. Depending on the hardware, there may be safe and unsafe orders in which these shares can be processed (although ideally we encapsulate this complexity in the hardware so the compiler doesn't need to worry about it).
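As a minimal sketch of the masking idea (Boolean/XOR masking of a single byte; real masked implementations must also mask every intermediate computation, which this omits):

```python
import secrets

def mask(secret_byte, n_shares=2):
    # Split a secret byte into XOR shares. Each share on its own is
    # uniformly random, so power leakage from any single share
    # reveals nothing about the secret.
    shares = [secrets.randbits(8) for _ in range(n_shares - 1)]
    last = secret_byte
    for s in shares:
        last ^= s
    shares.append(last)
    return shares

def unmask(shares):
    # XOR-ing all shares recovers the original secret.
    out = 0
    for s in shares:
        out ^= s
    return out
```

The hardware-dependent subtlety mentioned above is precisely about *when* these shares may safely coexist in registers or on a bus without recombining in a way that leaks.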
Confidential Programs
So far we’ve only discussed hiding inputs, but sometimes we also want to hide the program logic itself (example). Conceptually, sensitive program logic can be treated as another “confidential input,” while the public program becomes a generic interpreter or universal circuit (e.g., a RISC-V processor).
For example:
- A DEX smart contract might hide only the traded assets and amounts from an adversary running the TEE or participating in the MPC.
- An “oblivious EVM” could go further, hiding even which contract logic is being executed (leaking only the maximum instruction count). This would, of course, be much slower.
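The "program as confidential input" idea can be sketched with a toy universal interpreter (illustrative Python only; the opcode set here is hypothetical): every candidate operation is evaluated on each step, and the secret opcode merely selects the result, so the trace is identical regardless of which instruction the hidden program actually contains.

```python
# Hypothetical 4-instruction machine operating on bytes.
OPS = [
    lambda a, b: (a + b) & 0xFF,   # opcode 0: add
    lambda a, b: (a - b) & 0xFF,   # opcode 1: sub
    lambda a, b: a & b,            # opcode 2: and
    lambda a, b: a ^ b,            # opcode 3: xor
]

def oblivious_step(secret_opcode, a, b):
    # Evaluate every operation unconditionally; the secret opcode
    # only picks which result survives, so an observer of the
    # instruction trace cannot tell which op was "really" executed.
    result = 0
    for i, op in enumerate(OPS):
        sel = int(i == secret_opcode)
        result = sel * op(a, b) + (1 - sel) * result
    return result
```

This is also why the oblivious-EVM variant leaks only the maximum instruction count, and why it pays the cost of evaluating all candidate operations on every step.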
Retrofitting MTO
It may not be too difficult to confidentially execute existing programs that weren’t designed for confidentiality. Consider blockchain pre-execution privacy: multiple users’ transactions must be processed privately, but once a block is finalized, its output must be public so others can verify it (either through re-execution or succinct proofs).
In this setting, TEEs could execute MTO-compiled versions of existing smart contracts (like Uniswap V3), assuming conservative confidentiality defaults or user-provided annotations. Since these modified contracts are functionally equivalent to the originals, the public chain could still treat the canonical contracts as authoritative—the state root would match.
Questions
- What properties must a program satisfy to be securely implementable using interactive MPC, FHE, or other primitives (e.g., iO)?
- Does it make sense to ask developers to label inputs as confidential, and later retarget compilers to different “secure backends”?
- Can these primitives be combined easily—e.g., marking certain data to be TEE-protected only, and other data to remain confidential even if the TEE is compromised?
- Are there ongoing projects targeting this “pluggable backend” abstraction? I know of Phantom Zone and Enclav3, but I’m unsure how applicable their work is.
- Is the concept of MTO relevant to FHE at all?
- What did I miss?