Abstract
Intel Trust Domain Extensions (TDX) is an architectural extension in the 4th Generation Intel Xeon Scalable Processors that supports confidential computing. TDX allows the deployment of virtual machines in Secure-Arbitration Mode (SEAM) with encrypted CPU state and memory, integrity protection, and remote attestation. TDX aims at enforcing hardware-assisted isolation for virtual machines and minimizing the attack surface exposed to host platforms, which are considered untrustworthy or adversarial in confidential computing’s new threat model. TDX can be leveraged by regulated industries or sensitive data holders to outsource their computations and data with end-to-end protection in public cloud infrastructures.
This article aims at providing a comprehensive understanding of TDX to potential adopters, domain experts, and security researchers looking to leverage the technology for their own purposes. We adopt a top-down approach, starting with high-level security principles and moving to low-level technical details of TDX. Our analysis is based on publicly available documentation and source code, offering insights from security researchers outside of Intel.
1 INTRODUCTION
Deploying computations to cloud infrastructures can reduce costs, but regulated industries have concerns about moving sensitive data to third-party cloud service providers. Confidential computing aims at providing end-to-end protection for outsourced computations by reducing the root of trust to the processors and their vendors. All data must be protected throughout its life cycle, from leaving its owners’ premises to entering certified CPU packages in the cloud. Adversaries, such as those intercepting data on the network, in disk storage, or in main memory, should not be able to access the data in clear form.
Cryptographic mechanisms, such as storage encryption and secure communication channels, protect the confidentiality, integrity, and authenticity of data both at rest and in transit. The emerging CPU-based Trusted Execution Environment (TEE) techniques aim at protecting data in use, i.e., data loaded into main memory.
Intel Trust Domain Extensions (TDX) is an architectural extension that provides TEE capabilities in the 4th Generation Intel Xeon Scalable Processors. TDX introduces SEAM to offer cryptographic isolation and protection for Virtual Machines (VMs), which are called Trust Domains (TDs) in TDX terminology. The threat model assumes that privileged software, such as the hypervisor or the host Operating System (OS), may be untrustworthy or adversarial. TDX aims at protecting the confidentiality and integrity of CPU state and memory for designated TDs and also enables TD owners to verify the authenticity of remote platforms. TDX is built using a combination of techniques, including Virtualization Technology (VT) [64], Multi-key Total Memory Encryption (MKTME) [31], and the TDX Module [40]. TDX also relies on Software Guard Extensions (SGX) [53] and Data Center Attestation Primitives (DCAP) [58] for remote attestation.
Throughout the article, we aim at giving an objective review of TDX. Our goal is to provide a thorough understanding of TDX to potential adopters, domain experts, and security researchers who want to leverage or investigate the technology for their own purposes. All the information is based on publicly available documentation [26, 32, 39, 40, 42] and source code [27, 28, 29].
The following is a roadmap of this article. We begin by outlining the security principles (Section 2) and the threat model (Section 3) of TDX. Next, we provide a comprehensive comparison of existing confidential computing technologies on the market (Section 4) and examine the existing Intel technologies that serve as the building blocks for TDX (Section 5). Once the background knowledge is established, we offer a high-level overview of TDX (Section 6) and then delve into the technical details of the TDX Module (Section 7), memory protection mechanisms (Section 8), and remote attestation (Section 9). Finally, we conclude with a summary (Section 10). To assist readers in navigating the numerous terms and abbreviations used in this article, a list of acronyms is also provided (Section A).
2 SECURITY PRINCIPLES
In cloud computing, multiple security domains, e.g., a hypervisor managed by a cloud service provider and VMs owned by different tenants, coexist on a shared physical machine. While hardware-assisted virtualization can isolate tenants’ workloads, the security model still relies on a privileged hypervisor to provide trustworthy VM management. To address this issue, TDX enforces cryptographic isolation among the security domains, thereby mitigating cross-domain attacks. This eliminates hierarchical dependencies on untrusted/privileged host software and excludes hypervisors and cloud operators from the Trusted Computing Base (TCB), allowing tenants to securely provision and run their computations with confidence.
TDX guarantees the confidentiality and integrity of a TD’s memory and virtual CPU state, ensuring that they cannot be accessed or tampered with by other security domains executing on the same machine. This is achieved through a combination of (1) memory access control, (2) runtime memory encryption, and (3) an Intel-signed TDX Module that handles security-sensitive TD management operations.
In addition, remote attestation provides tenants with proof of the authenticity of TDs executing on genuine TDX-enabled Intel processors. These guarantees are based on a specific threat model and require certain trust assumptions.
Memory Confidentiality. A TD’s data residing inside the processor package is stored in clear text. However, when the data is offloaded from the processor to the main memory, the processor encrypts it using a TD-specific cryptographic key known only to the processor. The encryption is performed at cache-line granularity, preventing peripheral devices from reading the TD’s private memory in clear form or tampering with it undetected: the processor detects any tampering that may occur when loading data back from the main memory.
CPU State Confidentiality. TDX protects against concurrently executing processes by managing the virtual CPU states of TDs during all context switches between security domains. The states are stored in the TD’s metadata, which is protected while in main memory using the TD’s key. During context switches, TDX clears or isolates the TD-specific states from internal processor registers and buffers, such as Translation Lookaside Buffer (TLB) entries or branch prediction buffers, to maintain the protection of the TD’s information.
Execution Integrity. TDX protects the integrity of a TD’s execution from host interference, ensuring that the TD resumes its computation after an interrupt at the expected instruction and with the expected state. It is capable of detecting malicious changes in the virtual CPU states, as well as the injection, modification, or removal of instructions located in private memory. However, TDX does not provide additional guarantees for control-flow integrity. It is the responsibility of the TD owner to use existing compiler-based or hardware-assisted control-flow integrity enforcement techniques, such as Control Flow Enforcement Technology (CET) [25].
I/O Protection. Peripheral devices or accelerators are outside the trust boundaries of TDs and should not be allowed to access a TD’s private memory. To support virtualized I/O, a TD can choose to explicitly share memory for data transfer purposes. However, TDX does not provide any confidentiality or integrity protection for the data located in shared memory regions. It is the responsibility of TD owners to implement proper mechanisms, such as secure communication channels like Transport Layer Security (TLS), to protect the data that leaves the TD’s trust boundary. In the future, TDX 2.0 is planned to include TDX Connect [35, 38] to address the trusted I/O issue.
3 THREAT MODEL
TDX operates on the assumption that adversaries may have physical or remote access to a computer and may be able to gain control over the boot firmware, System Management Mode (SMM), host OS, hypervisor, and peripheral devices. The primary objective of these adversaries is to obtain confidential data or interfere with the execution of a TD. It is important to note that TDX cannot guarantee availability, as adversaries can control all the compute resources for TDs and launch Denial of Service (DoS) attacks. It is crucial for the TDX design to prevent adversaries from conducting actions that compromise the TDX security guarantees outlined in Section 2. Below, we summarize the capabilities of adversaries and identify potential attack vectors and scenarios.
Adversaries can interact with the TDX Module through its host-side interface functions, which allow them to build, initialize, measure, and tear down TDs. These interface functions can be invoked in an arbitrary order with inputs that are semantically and syntactically valid or invalid.
Adversaries can control the compute resources assigned to TDs, including physical memory pages, processor time, and physical/virtual devices. They can interrupt TDs at any point, attempt to read and write arbitrary memory locations, and reconfigure the Input/Output Memory Management Unit (IOMMU).
Adversaries are capable of manipulating the input data for TDs [30], including Advanced Configuration and Power Interface (ACPI) tables, Peripheral Component Interconnect (PCI) configuration space, Model-Specific Registers (MSRs), Memory-Mapped Input/Output (MMIO), Direct Memory Access (DMA), emulated devices, hypercalls handled by the host, sources of randomness, and the notion of time.
Adversaries can conduct physical and hardware attacks, for instance, by probing buses or accessing main memory through malicious DMA. There is no defense against physical attacks that roll back arbitrary memory regions. However, it should not be possible for adversaries to extract the secret key material baked into the processor chip’s fuses. The scope of the threat model does not cover fault-injection attacks such as power glitches, or side-channel attacks such as timing and power analysis.
Attacking TDX attestation is within the scope as it undermines the trust model and may enable adversaries to forge a counterfeit TEE for collecting confidential information from tenants.
3.1 TCB
The TCB of TDX consists of the TDX-enabled Intel processors and the built-in technologies, such as VT, MKTME, and SGX. The TCB also includes software modules signed by Intel, including the TDX Module, the NP/P-SEAM Loaders, and the architectural SGX enclaves for remote attestation. The software stacks running within TDs are owned by the tenants and are considered part of the TCB. The cryptographic primitives used in TDX are assumed to be sound and securely implemented, including the generation of random numbers, and free of side channels such as timing leaks.
Tenants must trust the processor manufacturer, Intel, for developing, manufacturing, building, and signing the hardware/software components used by TDX. The source code packages for the TDX Module, the NP/P-SEAM Loaders, and the DCAP for attestation are publicly available for audit purposes, allowing tenants to assess their trustworthiness. However, tenants must also trust that the version signed by Intel is equivalent to the one they have reviewed, which involves placing trust in the compilation process to protect against supply chain attacks.
Moreover, tenants are required to trust Intel’s Provisioning Certification Service (PCS) for remote attestation. The PCS, which originally supported SGX attestation, has been expanded to include retrieval of Provisioning Certification Key (PCK) certificates, revocation lists, and TCB information for TDX.
4 COMPARISON OF CONFIDENTIAL COMPUTING TECHNOLOGIES
Confidential computing technologies share a common objective of protecting outsourced sensitive data and computations from unauthorized access, tampering, and disclosure on untrusted third-party infrastructures. Major processor vendors are competing to incorporate confidential computing capabilities into their chips. Despite differences in implementation and terminology, these technologies share fundamental security principles with similar system designs, such as introducing new execution modes or privilege levels, migrating VM management functions to attested firmware/software, ensuring secure or measured launch of trusted components, enforcing memory access control, and providing memory encryption protection.
In addition to Intel TDX, we provide a brief overview of the confidential computing technologies from other vendors, including AMD Secure Encrypted Virtualization (SEV), IBM Secure Execution and Protected Execution Facility (PEF), Arm Confidential Compute Architecture (CCA), and RISC-V Confidential VM Extension (CoVE), for comparison purposes. We have summarized the distinct features of these technologies in Table 1. Readers already familiar with these technologies can skip this section and proceed directly to Section 5, where we explain the existing Intel technologies that support TDX.
Technology | Summary |
---|---|
AMD SEV [3, 43, 44] | - enforces cryptographic VM isolation via AMD PSP |
IBM Secure Execution [24] | - protects SVMs on IBM Z and LinuxONE
IBM PEF [22] | - protects SVMs on Power ISA |
Arm CCA [50] | - introduces Realm world for running confidential VMs |
RISC-V CoVE [57] | - introduces the TSM to manage TVM life cycles |
4.1 AMD SEV
SEV [44] is a confidential computing feature in AMD EPYC processors. It protects sensitive data stored within VMs from privileged software or administrators in a multi-tenant cloud environment. SEV relies on AMD Secure Memory Encryption (SME) and AMD Virtualization (AMD-V) to enforce cryptographic isolation between VMs and the hypervisor. Each VM is assigned a unique ephemeral Advanced Encryption Standard (AES) key, which is used for runtime memory encryption. The AES engine in the on-die memory controller encrypts or decrypts data written to or read from the main memory. The per-VM keys are managed by the AMD Platform Security Processor (PSP), which is a 32-bit Arm Cortex-A5 micro-controller integrated within the AMD System-on-Chip (SoC). The C-bit (bit 47) in physical addresses determines memory page encryption. SEV also provides a remote attestation mechanism that allows the VM owners to verify the trustworthiness of VMs’ launch measurements and the SEV platforms. The PSP generates the attestation report signed by an AMD-certified attestation key. The VM owners can verify the authenticity of the attestation report and the embedded platform/guest measurements.
AMD has released three generations of SEV. The first generation, SEV [44], only protects the confidentiality of a VM’s memory. The second generation, SEV-ES (Encrypted State) [43], adds protection for the CPU register state during hypervisor transitions, and the third generation, SEV-SNP (Secure Nested Paging) [3], adds integrity protection to prevent memory corruption, replay, and remapping attacks. In particular, SEV-SNP provides memory integrity protection using a Reverse Mapping Table (RMP), which tracks each page’s ownership and permissions to prevent unauthorized access. SEV-SNP also introduces the Virtual Machine Privilege Level (VMPL) feature, dividing the guest address space into four levels and providing additional security isolation within a VM. The privilege levels range from zero to three, where VMPL0 is the highest level of privilege and VMPL3 is the lowest. For instance, the Linux Secure VM Service Module (SVSM) [61] makes extensive use of the RMP and VMPL features to perform sensitive services, e.g., live migration and vTPM support, in a secure manner.
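For illustration, the C-bit convention can be sketched with simple bit operations (a toy model; the bit-47 position is taken from the description above):

```python
# Toy illustration (not AMD's implementation) of the SEV C-bit: a bit in
# the physical address -- bit 47 on the processors discussed above -- that
# marks a page as encrypted.

C_BIT = 47  # C-bit position assumed from the text above

def set_encrypted(paddr: int) -> int:
    """Mark a physical address as referring to encrypted (private) memory."""
    return paddr | (1 << C_BIT)

def is_encrypted(paddr: int) -> bool:
    """Check whether the C-bit is set in a physical address."""
    return bool(paddr & (1 << C_BIT))

def strip_c_bit(paddr: int) -> int:
    """Recover the raw physical address used to index DRAM."""
    return paddr & ~(1 << C_BIT)
```

The hardware AES engine consults this bit on every memory transaction to decide whether to encrypt or bypass.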
4.2 IBM Confidential Computing
IBM’s early exploration of confidential computing can be traced back to the research on SecureBlue++ [10, 70], which ran on an emulated POWER processor in the Mambo CPU simulator [9]. Today, IBM systems support two architectures for confidential computing: Secure Execution [24], offered on IBM Z and LinuxONE, and PEF [22], released as an open source project on OpenPOWER systems.
IBM Secure Execution. IBM Secure Execution has supported Secure Virtual Machines (SVMs) that run inside isolated TEEs since IBM Z15 and LinuxONE III. Secure Execution protects the confidentiality, integrity, and authenticity of the code and data in an SVM from any unauthorized access, snooping, or tampering. It leverages trusted firmware, called the Ultravisor, to perform the security-sensitive tasks of bootstrapping and running SVMs. The Ultravisor shields the SVM’s memory and its state during context switches and protects the SVM from a potentially compromised or malicious hypervisor. Tenants using Secure Execution can embed their encrypted sensitive data in the VM images and rely on the Ultravisor to decrypt and expose them to the SVMs executing inside the TEEs. Specifically, tenants encrypt their confidential data with a symmetric data key, which they embed in the IBM Secure Execution Header. They further encrypt this header with the key obtained from the verified Host Key Document and embed the header in their VM image. The header can contain multiple key slots, allowing an image to run on multiple target hosts. The Host Key Document, signed by the hardware manufacturer, contains the public key linked with the private key embedded in the hardware of IBM Z or LinuxONE. The Ultravisor, the only component with access to the hardware private key and the data key, enforces that only the expected tenant’s SVM executing inside the TEE has access to the unencrypted data. In addition to embedding built-in secrets within the VM image, Secure Execution also supports remote attestation starting from IBM Z16 and LinuxONE Emperor 4. This allows tenants to verify the SVM’s measurements before releasing their secrets.
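The multi-slot header idea can be sketched as a toy envelope-encryption scheme (our illustration, not IBM’s actual format; a hash-derived XOR pad stands in for the public-key wrap performed with keys from the Host Key Document):

```python
import hashlib

# Toy sketch of the multi-slot header: the tenant's data key is wrapped
# once per target host, so one image can run on several machines. Real
# Secure Execution uses public-key cryptography; here a hash-derived XOR
# pad is an illustrative stand-in for the wrap operation.

def _wrap(host_key: bytes, data_key: bytes) -> bytes:
    pad = hashlib.sha256(host_key).digest()[: len(data_key)]
    return bytes(a ^ b for a, b in zip(data_key, pad))

def build_header(data_key: bytes, host_keys: dict) -> dict:
    """One key slot per target host, keyed by a host identifier."""
    return {host_id: _wrap(k, data_key) for host_id, k in host_keys.items()}

def ultravisor_unwrap(header: dict, host_id: str, host_key: bytes) -> bytes:
    """The Ultravisor, holding the hardware key, recovers the data key
    from its own slot (the XOR wrap is its own inverse)."""
    return _wrap(host_key, header[host_id])
```

Only a host possessing the matching hardware key recovers the correct data key; any other slot or key yields garbage.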
IBM PEF. PEF provides a VM-based TEE using extensions to the IBM Power Instruction Set Architecture (ISA) that are supported in most POWER9 and POWER10 processors. The PEF firmware, the tooling to prepare SVMs, and the OS extensions were released as open source software [23]. To protect sensitive data and code, PEF introduces a trusted firmware called the Protected Execution Ultravisor (Ultravisor) that shields SVM execution and enforces the security guarantees with the help of CPU architectural changes. PEF relies on the secure and trusted boot of the system and on the Ultravisor executing in a new, highest-privileged CPU state called Secure State. The hypervisor starts the VM, which invokes the Ultravisor to transition to an SVM using the Enter Secure Mode (ESM) call. The Ultravisor converts the VM into an SVM by moving it to secure memory that is inaccessible to untrusted code. Before executing the SVM, the Ultravisor performs integrity checking: it decrypts the payload attached to the SVM image to decode the integrity information and a passphrase for the encrypted file system. After ensuring the integrity of the SVM, the Ultravisor exposes the passphrase to the SVM’s booting system, which decrypts the tenant’s file system. The Ultravisor uses the Trusted Platform Module (TPM) to get access to the symmetric seed required to check integrity and decrypt the payload. The symmetric seed is guarded using the Platform Configuration Register (PCR) sealing mechanism and accessed by establishing a secure channel to the TPM, which only grants access to an Ultravisor on a correctly booted system. Once the Ultravisor obtains the symmetric seed, it derives the HMAC key and the symmetric key that are used to verify integrity and decrypt the passphrase.
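The final key-derivation step can be modeled as follows (a simplified sketch; the derivation labels are illustrative, as the real scheme is firmware-defined):

```python
import hashlib
import hmac

# Simplified sketch of the step above: from the TPM-guarded symmetric seed,
# the Ultravisor derives an HMAC key to verify the SVM payload's integrity
# and a symmetric key to decrypt the passphrase. The domain-separation
# labels below are illustrative assumptions.

def derive_keys(seed: bytes):
    hmac_key = hashlib.sha256(seed + b"integrity").digest()
    sym_key = hashlib.sha256(seed + b"decryption").digest()
    return hmac_key, sym_key

def verify_payload(hmac_key: bytes, payload: bytes, tag: bytes) -> bool:
    """Check the integrity tag attached to the SVM payload."""
    expected = hmac.new(hmac_key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

Because the seed is released only on a correctly booted system, a tampered boot chain never obtains the keys.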
4.3 Arm CCA
CCA [50] was introduced in the Armv9 architecture. Traditionally, Arm TrustZone enables secure execution by providing two separated worlds, the Normal World and the Secure World, and prevents software in the Normal World from accessing data in the Secure World. CCA introduces the Realm Management Extension (RME) with two additional worlds, the Realm World and the Root World. The Realm World provides mutually distrusting execution environments for confidential VMs, isolating workloads from all other security domains, including the host OS, the hypervisor, other Realms, and TrustZone. To enforce the isolation of address spaces, CCA uses a Granule Protection Table (GPT), an extension to the page table that tracks the ownership of each page across the different worlds. The Monitor in the Root World handles the creation and management of the GPT, preventing a hypervisor or an OS from directly changing it. The Monitor can dynamically move physical memory between worlds by updating the GPT. CCA also supports attestation to measure and verify the CCA platform and the initial state of the Realms.
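A minimal model of the GPT’s ownership checks might look as follows (our illustration, not Arm’s data layout):

```python
# Toy model (not Arm's data layout) of a Granule Protection Table: each
# physical granule is owned by one world, and only the Monitor, running in
# the Root World, may change ownership.

NORMAL, SECURE, REALM, ROOT = "normal", "secure", "realm", "root"

class GranuleProtectionTable:
    def __init__(self):
        self._owner = {}  # granule address -> owning world

    def assign(self, caller: str, granule: int, world: str):
        # Only the Monitor may update the GPT.
        if caller != ROOT:
            raise PermissionError("only the Monitor can modify the GPT")
        self._owner[granule] = world

    def check_access(self, world: str, granule: int) -> bool:
        # A granule may only be accessed from its owning world
        # (unassigned granules default to the Normal World here).
        return self._owner.get(granule, NORMAL) == world
```

The Monitor’s `assign` models moving physical memory between worlds; `check_access` models the per-access ownership check the hardware performs.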
4.4 RISC-V CoVE
CoVE [57] is a reference confidential computing architecture for RISC-V. Its protected instance is called a TEE Virtual Machine (TVM). The architecture introduces the TEE Security Manager (TSM) driver, which is an M-mode (highest privilege level in RISC-V) firmware component for switching between confidential and non-confidential environments. The TSM driver tracks the assignment of memory pages to TVMs through the Memory Tracking Table (MTT). The TSM driver measures and loads the TSM, which is a trusted intermediary between the hypervisor and the TVMs. CoVE defines the Application Binary Interface (ABI) for the hypervisor to request virtual machine management services from the TSM. CoVE adopts a layered attestation architecture, which begins with the hardware and progresses through the TSM driver, TSM, and TVM. Each layer is loaded, measured, and certified by the previous layer. This approach provides a secure chain of trust that can be used to verify the integrity of the system. The TVM can obtain a certificate from the TSM that contains attestation evidence rooted back to the hardware. This certificate provides a mechanism for verifying the authenticity of the TVM and the software it runs.
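The layered measurement chain can be sketched as a running hash rooted in hardware (formats and labels here are illustrative, not CoVE’s ABI):

```python
import hashlib

# Sketch of layered attestation: each layer measures the next before
# handing off control, extending a running digest rooted in hardware.

def extend(chain: bytes, component: bytes) -> bytes:
    """Fold the next layer's measurement into the chain."""
    return hashlib.sha256(chain + hashlib.sha256(component).digest()).digest()

def measure_boot(layers):
    chain = b"\x00" * 32  # hardware root of trust
    for layer in layers:  # e.g., TSM driver, then TSM, then TVM
        chain = extend(chain, layer)
    return chain
```

Any modification to any layer changes the final digest, so a verifier comparing it against reference values detects tampering anywhere in the chain.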
5 BUILDING BLOCKS FOR TDX
TDX relies on a combination of existing Intel technologies, including VT, Total Memory Encryption (TME)/MKTME, and SGX. In this section, we provide an overview of these underpinning technologies and explain how they are used in TDX. A summary of these technologies can be found in Table 2.
Technology | Summary |
---|---|
Intel VT [64] | - provides hardware-assisted virtualization for CPU, memory, and I/O |
Intel TME | - encrypts the entire main memory
Intel MKTME [31] | - supports multiple keys for memory encryption |
Intel SGX [53] | - encloses sensitive code and data of an application within an enclave |
5.1 Intel VT
Intel VT [64] is a set of hardware-assisted virtualization features in Intel processors. Using VT, Virtual Machine Monitors (VMMs) or hypervisors can achieve better performance, isolation, and security compared to software-based virtualization. Intel’s VT portfolio includes, among others, the virtualization of CPU, memory, and I/O.
Processors with VT-x technology have a special instruction set, called Virtual Machine Extensions (VMX), which enables control of virtualization. Processors with VT-x technology can operate in two modes: VMX root mode and VMX non-root mode. The hypervisor runs in VMX root mode while the guest VMs run in the VMX non-root mode. VT-x defines two new transitions, VM entry and VM exit, to switch between the guest and the hypervisor. The Virtual Machine Control Structure (VMCS) is a data structure that stores VM and host state information for mode transitions. It also controls which guest operations can cause VM exits.
Intel VT-x utilizes the Extended Page Table (EPT) to implement Second Level Address Translation (SLAT). Each guest kernel maintains its own page table to translate a Guest Virtual Address (GVA) to a Guest Physical Address (GPA), and the hypervisor manages the EPT to map GPAs to Host Physical Addresses (HPAs).
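For illustration, the two-stage translation can be sketched as a pair of table lookups (a toy model with single-level tables and 4 KiB pages; real page tables are multi-level):

```python
# Toy two-stage address walk: the guest's page table maps GVA to GPA, and
# the hypervisor-managed EPT maps GPA to HPA. Tables are flat dicts here;
# a missing entry stands in for a page fault.

PAGE = 0x1000  # 4 KiB pages assumed

def translate(addr: int, table: dict) -> int:
    frame = table[addr // PAGE]      # KeyError models a page fault
    return frame * PAGE + addr % PAGE

def gva_to_hpa(gva: int, guest_pt: dict, ept: dict) -> int:
    gpa = translate(gva, guest_pt)   # first level: guest page table
    return translate(gpa, ept)       # second level: EPT

# Example mapping: GVA page 2 -> GPA page 5 -> HPA page 9
guest_pt, ept = {2: 5}, {5: 9}
```

The page offset survives both stages unchanged; only the frame number is remapped at each level.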
VMs can use different I/O models, including software-based and hardware-based models, to access I/O devices. Software-based I/O models involve emulated devices or para-virtualized devices, while hardware-based I/O models include direct device assignment, Single Root I/O virtualization (SR-IOV) devices, and Scalable I/O virtualization (S-IOV) devices.
Intel VT for Directed I/O (VT-d) enables the isolation and restriction of device accesses to entities managing the device. It includes I/O device assignment, DMA remapping, interrupt remapping, and interrupt posting. With the support of VT-d, VMs can directly access physical I/O memory through virtual-to-physical address translation with the help of the IOMMU. VT-d also provides flexibility in I/O device assignments to VMs and eliminates the need for the hypervisor to handle interrupts and DMA transfers. Overall, VT-d enhances the performance and security of virtualized environments that require direct access to I/O devices.
VT \(\Rightarrow\) TDX. TDX is a VM-based TEE. It relies on VT to provide isolation among TDs. As the hypervisor is no longer trusted in the new threat model, the functionality for managing TDs has been enclosed within the TDX Module. The TDX Module and TDs run in the new SEAM VMX root/non-root modes with additional protection. TDX still leverages EPT to manage GPA-to-HPA translation, but it currently maintains two EPTs for each TD: a protected one for private (encrypted) memory and another for shared (unencrypted) memory. We provide a detailed explanation of TDX’s architecture and the TDX Module in Sections 6.1 and 7.
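As an illustration of the dual-EPT design, the following sketch dispatches a GPA to one of the two EPTs based on a shared-bit convention (the fixed bit position here is an assumption for illustration; in TDX the bit that distinguishes shared from private GPAs depends on the configured guest physical address width):

```python
# Toy dispatch between the two EPTs of a TD: private GPAs go through the
# protected EPT, shared GPAs through the hypervisor-visible one.
# SHARED_BIT is an illustrative assumption, not TDX's actual layout.

SHARED_BIT = 47

def select_ept(gpa: int, secure_ept: dict, shared_ept: dict) -> dict:
    """Choose which EPT translates this guest physical address."""
    if gpa & (1 << SHARED_BIT):
        return shared_ept
    return secure_ept
```

This is why a TD must explicitly tag memory it wants to expose for I/O: only addresses routed through the shared EPT are visible to the untrusted host.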
It is worth noting that nested virtualization is currently not supported in TDX 1.0, which means that running VMs within a TD is not allowed. Attempting to use VMX instructions within a TD results in Undefined Instruction (UD) exceptions. However, the TD partitioning architecture specification draft [37] indicates that nested virtualization will be supported in TDX 1.5.
5.2 Intel TME/MKTME
Total Memory Encryption (TME) was first introduced with the Intel 11th Generation Core vPro mobile processor. This feature is designed to protect against attackers who have physical access to a computer’s memory and attempt to steal data. TME encrypts the computer’s entire memory using a single transient key. The key is generated at boot time through a combination of hardware-based random number generators and security measures integrated into the system’s chipset. Memory encryption is performed by encryption engines on each memory controller. The encryption process uses the NIST-standard AES-XTS algorithm with 128-bit or 256-bit keys.
MKTME [31] extends TME to support multiple keys and memory encryption at page granularity. For each memory transaction, MKTME extracts a Host Key Identifier (HKID) from the physical memory address and selects the corresponding key to encrypt/decrypt memory. The HKID occupies a configurable number of bits at the top of the physical address, and the HKID range is set by the BIOS during system boot. MKTME allows for software-provided keys and introduces a new instruction, PCONFIG, for programming keys and encryption modes into the memory encryption engine.
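The HKID placement described above can be illustrated with simple bit arithmetic (the widths below are example values, since the actual split is configured by the BIOS at boot):

```python
# Toy extraction of the HKID field from a physical address. PA_WIDTH and
# HKID_BITS are illustrative example values; the real split is
# platform-configured at boot.

PA_WIDTH = 46   # example physical address width in bits
HKID_BITS = 6   # example number of top bits reserved for HKIDs

def split_hkid(paddr: int):
    """Return (hkid, raw_address) for a platform physical address."""
    raw_bits = PA_WIDTH - HKID_BITS
    hkid = paddr >> raw_bits
    raw = paddr & ((1 << raw_bits) - 1)
    return hkid, raw
```

The memory controller uses the extracted HKID to select the key for the transaction, while the remaining bits index DRAM as usual.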
MKTME \(\Rightarrow\) TDX. To use MKTME in virtualized environments, the hypervisor must be trusted to control the memory encryption, which violates the new threat model for confidential computing. Therefore, in TDX, the TDX Module is responsible for controlling memory encryption for TDs. The HKID space is partitioned into private HKIDs and shared HKIDs. The TDX Module ensures that a unique private HKID is assigned to each TD, so this HKID can be used to represent the identity of a specific TD. Private HKIDs can only be used for encrypting the private memory of TDs. The TDX Module still leverages MKTME to protect TD memory. More information about how TDX uses MKTME can be found in Sections 6.2 and 8.
5.3 Intel SGX
Intel introduced SGX [53] in 2015 with the 6th Generation Core processors to protect against memory bus snooping and cold boot attacks. It enables developers to partition their applications and protect selected code and data within enclaves. The memory of an enclave can only be accessed by authorized code. SGX uses hardware-based memory encryption to protect the enclave’s contents, and any unauthorized attempts to access or tamper with the enclave’s memory trigger exceptions. SGX adds 18 new instructions to Intel’s ISA and enables secure offloading of computations to environments where the underlying host components (such as the host application, host kernel, SMM, and peripheral devices) are untrustworthy. SGX’s security ultimately depends on the security of the firmware and microcode that implement its features.
The Enclave Page Cache (EPC) is a special memory region that contains the enclave’s code and data, where each page is encrypted using the Memory Encryption Engine (MEE). The Enclave Page Cache Map (EPCM) stores the page metadata, such as the configuration, permissions, and type of each page. At boot time, keys are generated and used for decrypting the contents of encrypted pages inside the CPU. The keys are controlled by the MEE and never exposed to the outside, so only this particular CPU can decrypt the memory. The CPU stores these keys internally and prevents access to them by any software. Additionally, privileged software outside of enclaves is not allowed to read or write the EPC or EPCM pages.
SGX offers both local and remote attestation to verify the integrity and authenticity of enclaves. Local attestation is used to establish trust between two enclaves within the same platform, while remote attestation verifies the trustworthiness of an enclave to a third-party entity off the platform. In local attestation, an enclave can verify another enclave’s integrity and the genuineness of the underlying hardware platform. To do so, the first enclave generates a report and uses the identity information of the second enclave to sign it. The second enclave retrieves its Report Key and verifies the report using this Report Key. A third party may want to establish trust with a remotely executed enclave before provisioning it with secrets. In this scenario, remote attestation is necessary. To perform remote attestation, SGX utilizes a special architectural enclave known as the Quoting Enclave (QE). The QE is developed and signed by Intel. The QE receives a report from another enclave, locally verifies it, and transforms it into a remotely verifiable quote by signing it with the Attestation Key. The relying party can send this quote to the Intel Attestation Service (IAS), which verifies the quote to identify and assess the trustworthiness of the SGX enclave. The QE’s role is to provide a secure and trustworthy environment for the transformation of a report into a quote and to ensure the quote cannot be modified or falsified. Intel also provides DCAP [58], which is a composition of software packages, for data centers to deploy their own ECDSA attestation infrastructures for SGX enclave attestation.
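The local-attestation flow can be modeled in a heavily simplified form (the key derivation and report format here are illustrative stand-ins, not Intel’s actual scheme): the CPU derives the target’s Report Key from platform secrets, so only the target enclave can verify the MAC over the report.

```python
import hashlib
import hmac

# Heavily simplified model of SGX local attestation. PLATFORM_SECRET
# stands in for the key material held by the CPU; the derivation and
# report format are illustrative assumptions.

PLATFORM_SECRET = b"per-cpu-fused-secret"

def report_key_for(target_identity: bytes) -> bytes:
    """The CPU derives the Report Key from the target enclave's identity."""
    return hmac.new(PLATFORM_SECRET, target_identity, hashlib.sha256).digest()

def make_report(body: bytes, target_identity: bytes) -> tuple:
    """Reporting enclave: MAC the report body for the named target."""
    key = report_key_for(target_identity)
    return body, hmac.new(key, body, hashlib.sha256).digest()

def verify_report(report: tuple, own_identity: bytes) -> bool:
    """Target enclave: retrieve its own Report Key and check the MAC."""
    body, mac = report
    key = report_key_for(own_identity)
    return hmac.compare_digest(mac, hmac.new(key, body, hashlib.sha256).digest())
```

In remote attestation, the QE plays the target role: it verifies the report locally and then re-signs it with the Attestation Key to produce a quote verifiable off-platform.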
Researchers have used SGX to provide secure containers (e.g., Scone [4]) and shielded execution for unmodified applications (e.g., Haven [6]). Graphene [63], an SGX-based framework, provides techniques for running unmodified applications as well as dynamic libraries inside SGX enclaves. Besides, SGX has a wide spectrum of applications ranging from function encryption systems (e.g., Iron [16]), source code partitioning to protect security-sensitive data and functions (e.g., Glamdring [51]), machine learning [20, 21, 55, 62], network security [7], fault tolerance [8], encrypted data search (e.g., HardIDX [17]), secure databases (e.g., EnclaveDB [56]), and secure coordination for distributed systems (e.g., SecureKeeper [12]), to secure distributed computations (e.g., VC3 [59]).
Identifying vulnerabilities in SGX is another important line of research. Researchers have identified a wide range of attack vectors targeting SGX, such as controlled-channel attacks [66, 67, 69, 71], cache attacks [11, 18, 54, 60], branch prediction attacks [15, 49], and speculative execution attacks [13, 48].
SGX \(\Rightarrow\) TDX. SGX and TDX protect memory at different granularities. But on the same platform, TDX and SGX are within the same TCB. Thus, they can locally attest to each other. TDX leverages the remote attestation mechanism provided by SGX. The attestation report of a TDX platform can be verified and signed within a QE. More details about TDX’s remote attestation can be found in Sections 6.4 and 9.
It is worth noting that at the moment, running an SGX enclave within a TD is not allowed, as invoking the SGX enclave instructions inside a TD raises an exception.
6 OVERVIEW OF TDX
In this section, we give an overview of TDX, discussing its system architecture, memory protection mechanisms, I/O model, attestation, and features that have been planned for the future. Each topic also includes pointers to subsequent sections that provide more technical details.
6.1 TDX System Architecture
Figure 1 illustrates the runtime architecture of TDX. It is composed of two key components: (1) TDX-enabled processors, which offer architectural functionalities such as hardware-assisted virtualization, memory encryption/integrity protection, and the ability to certify TEE platforms; and (2) the TDX Module, an Intel-signed and CPU-attested software module that leverages the features of TDX-enabled processors to facilitate the construction, execution, and termination of TDs while enforcing the security guarantees. The TDX Module provides two sets of interface functions: host-side interface functions for a TDX-enlightened hypervisor and guest-side interface functions for TDs. It is loaded and executed in the SEAM Range, which is a portion of system memory reserved via UEFI/BIOS. The P-SEAM Loader, which also resides in the SEAM Range, can install and update the TDX Module. More information on the loading process of the TDX Module can be found in Section 7.1.
Secure-Arbitration Mode (SEAM) is an extension of the VMX architecture and provides two new execution modes: SEAM VMX root mode and SEAM VMX non-root mode. A TDX-enlightened hypervisor operates in the traditional VMX root mode and utilizes the SEAMCALL instruction to invoke the host-side interface functions of the TDX Module, which executes in SEAM VMX root mode.
On the other hand, TDs run in the SEAM VMX non-root mode. TDX supports the execution of unmodified user-level applications within a TD, much like in a standard VM. However, the guest OS kernel, illustrated as the TDX-enlightened OS in Figure 1, must undergo modifications to align with the underlying TDX platform, accommodating both the architectural paradigms and the security imperatives of TDX. These modifications include managing new TDX exceptions via an in-guest Virtualization Exception (VE) handler, implementing a hypercall-like mechanism for communication between a TD and the TDX Module, transitioning memory pages from private to shared for I/O operations, and integrating attestation support. The specific implementation details may vary depending on the OS type. For instance, the detailed implementation of the enlightened guest Linux kernel has been described in the kernel documentation [14]. TDs can trap into the TDX Module either through a TD exit or by invoking the TDCALL instruction.
The confidentiality assurances offered by confidential computing render it a prime target for research on side-channel information leakage. The unveiling of a succession of micro-architectural attacks [5, 47, 52, 65, 68] exploiting the speculative execution of CPUs highlights a concerning issue: the isolation of security domains enforced in the architectural states may not be consistently maintained in the micro-architectural states. As TDX becomes more widely available in the market, it is expected to attract increased attention from security researchers. Our primary emphasis lies in examining the existing defenses integrated into the TDX Module to address known attack vectors. For detailed information, please refer to Section 7.8.
6.2 TDX Memory Protection
TDX leverages VMX to enforce memory isolation for TDs. Similar to legacy VMs, TDs are unable to access the memory of other security domains, such as SMM, hypervisors, the TDX Module, and other VMs/TDs. With VMX, hypervisors maintain EPTs to enforce memory isolation. However, since hypervisors are no longer trusted, TDX has moved the tasks of memory management to the TDX Module, which controls the address translation of TD’s private memory.
A more intriguing aspect of TDX’s security model is its protection of TD’s memory from privileged software, corrupted devices, and unprincipled administrators on the host. TDX achieves this by implementing access control and cryptographic isolation. Access control prevents other security domains on the same computer from accessing a TD’s data. Cryptographic isolation is utilized to prevent malicious DMA devices or adversaries with physical access to the main memory from directly reading or corrupting TD’s private memory.
Memory Partitioning. With TDX enabled, the entire physical memory space is partitioned into two parts: normal memory and secure memory. The sensitive data of TDs, including the private memory, virtual CPU state, and its associated metadata, should be stored in secure memory. TDs can also specify memory regions as shared memory for I/O, which is not protected by TDX; these memory regions thus belong to normal memory. All other software, which does not execute in SEAM mode, resides in normal memory and is not allowed to access secure memory, regardless of its privilege level. The memory controller, an architectural component inside the processor, enforces memory access checks.
To make a physical page part of the secure memory, the TD Owner Bit is enabled (Section 8.2). Each TD Owner Bit is associated with a memory segment corresponding to a cache line.1 The TD Owner Bits are stored in the Error Correction Code (ECC) memory associated with these segments. The TDX Module controls the conversion of physical memory pages to secure memory by attaching private HKIDs to their physical addresses. The HKID is encoded in the upper bits of the physical address. The set of private HKIDs is controlled by TDX and can only be used for TDs and the TDX Module. When the memory controller writes to a physical address with a private HKID, it sets the TD Owner Bit to 1. When it writes to an address that does not have a private HKID, it clears the TD Owner Bit. Access control is enforced on each cache line read. The read request passes through the memory controller, which permits only processes executing in SEAM mode to read a cache line with a TD Owner Bit set to 1. Any read request not in the SEAM mode receives all zeros when trying to read such a cache line.
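The read-path access check enforced by the memory controller can be modeled as follows. This is an illustrative sketch, not Intel's implementation; the `CacheLine`, `read_line`, and `write_line` names are hypothetical:

```python
# Illustrative model of the memory controller's TD Owner Bit checks
# (hypothetical names; not Intel's implementation).

class CacheLine:
    def __init__(self, data: bytes, td_owner_bit: int):
        self.data = data                  # 64-byte cache line content
        self.td_owner_bit = td_owner_bit  # stored in ECC metadata

def write_line(line: CacheLine, data: bytes, private_hkid: bool) -> None:
    """Writes through a private HKID set the TD Owner Bit; all other
    writes clear it, making later tampering detectable."""
    line.data = data
    line.td_owner_bit = 1 if private_hkid else 0

def read_line(line: CacheLine, in_seam_mode: bool) -> bytes:
    """Reading a secure line (TD Owner Bit = 1) outside SEAM mode
    returns all zeros instead of the (encrypted) content."""
    if line.td_owner_bit == 1 and not in_seam_mode:
        return bytes(len(line.data))
    return line.data
```

The model captures the two invariants stated above: only SEAM-mode reads see secure lines, and non-SEAM writes silently demote a line out of secure memory.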
When building a TD, the (untrusted) hypervisor selects the memory pages from the normal memory to become part of the secure memory. The TDX Module gradually moves these pages to the secure memory. It uses them for the metadata (Section 7.4) and the main memory of each TD. A TD must explicitly accept these pages before they can be used for its main memory. The TDX Module performs sanity checks of the secure memory setup by maintaining a Physical Address Metadata Table (PAMT), which is described in more detail in Section 7.7.
Memory Confidentiality. TDX leverages MKTME (Section 5.2) for encrypting TD’s private memory and its metadata. MKTME is responsible for transparent memory encryption and decryption of data passing through the memory controller. The TDX Module programs the keys used by the MKTME to encrypt specific cache lines when they are written to memory. The keys are associated with the HKIDs embedded in the physical addresses. MKTME decodes HKIDs and uses the referenced cryptographic keys to perform the cryptographic operations.
MKTME stores cryptographic keys in its internal memory, never exposing them to the outside. The cryptographic keys can only be referenced by their HKIDs. When building a new TD, the hypervisor selects an unused private HKID, and the TDX Module requests the processor to generate a new cryptographic key related to this HKID. The TDX Module binds this HKID to the TD, so that the generated key serves as the TD’s private key for encrypting its private memory and metadata.
MKTME encrypts memory at the cache line granularity using AES-128 XTS cryptography when the cache line is being written back to main memory. The encryption can prevent some physical attacks, like the cold boot attack. Please see Section 8.1 for more details on MKTME and HKIDs.
Memory Integrity. TDX provides two distinct mechanisms for ensuring memory integrity: Logical Integrity (Li) and Cryptographic Integrity (Ci).
Li protects the integrity against unauthorized writes at the software level by using the TD Owner Bit. Since TDX only allows the use of private HKIDs in the SEAM mode, any unauthorized write to a TD’s private memory from outside the SEAM mode will clear the TD Owner Bit. When the modified private memory is read, the cleared TD Owner Bit will trigger an exception. However, this feature cannot prevent adversaries from flipping bits in the main memory (e.g., via a Rowhammer attack [45]).
Ci is a more advanced mechanism that addresses the limitations of Li. In addition to the TD Owner Bit, Ci also computes a Message Authentication Code (MAC) on a cache line when it is being written back to memory. The MAC is computed using a 128-bit MAC key generated during system initialization and is stored as part of the memory metadata during the write-back. When the memory is read, the MAC is recalculated. Any tampering with the memory content will be detected by Ci if the TD Owner Bit or the recalculated MAC mismatches the stored metadata. However, neither Li nor Ci can detect a memory replay attack if the adversary can roll back both the memory content and the metadata. We provide a more detailed technical discussion of the memory integrity protection in Section 8.2.
6.3 TDX I/O Model
According to the TDX threat model, hypervisors and peripheral devices are considered untrusted and are prohibited from directly accessing the private memory of TDs. It is the responsibility of TDs and their owners to secure I/O data before it leaves the trust boundary. This requires sealing the I/O data buffers and placing them in shared memory, which is identified by the shared bit in the GPA. Hypervisors or peripheral devices can then move the data in and out of the shared memory. This necessitates modifications to the guest kernel to support this I/O model. Furthermore, all I/O data that is transferred into the TDs from hypervisors or peripheral devices must be thoroughly examined and validated, as it is no longer considered trustworthy.
In the Linux guest support for TDX, all MMIO regions and DMA buffers are mapped as shared memory within the TDs. The Linux guest is required to use SWIOTLB to allocate and convert DMA buffers in unified locations. To protect against malicious inputs from I/O, only a limited number of hardened drivers [30] are allowed within TDs.
6.4 TDX Attestation
Remote attestation is a method for verifying the identity and trustworthiness of a TEE. The attester can provide proof to a challenger to show that computations are being executed within protected domains. The challenger validates the evidence by checking the digital signatures and comparing the measurements to reference values.
On a TDX-enabled machine, the attester operates within a TD and is responsible for handling remote attestation requests. When a request is received from a challenger, such as a tenant, the attester provides evidence of proper instantiation of the TD through the generation of a TD quote. This quote, which serves as the evidence, is produced by the TDX module and signed by the Quoting Enclave. It contains measurements of the TDX’s TCB and the software components loaded in the TD. The quote also includes a certificate chain anchored by a certificate issued by Intel. Upon receipt of the quote, the challenger verifies its authenticity by checking the quote and determining if the attester is running on a genuine TDX-enabled platform and if the TD has the expected software measurements. If the quote is successfully validated, the challenger can proceed to establish a secure channel with the attester or release secrets to the attester. We provide a more detailed technical discussion of remote attestation in Section 9.
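A challenger's verification of a TD quote follows the steps above: check the certificate chain, check the signature, and compare the measurements against reference values. The sketch below models this logic with hypothetical structures; real TD quotes follow Intel's DCAP quote format and are verified with Intel's quote verification library:

```python
# Sketch of a challenger's quote-verification decision logic.
# The TDQuote fields are simplified stand-ins for the real quote
# format; signature and certificate checks are abstracted to booleans.

from dataclasses import dataclass

@dataclass
class TDQuote:
    tcb_measurements: dict            # platform TCB versions
    td_measurements: dict             # MRTD / RTMR values of the TD
    signature_valid: bool             # stand-in for ECDSA verification
    cert_chain_rooted_at_intel: bool  # stand-in for chain validation

def verify_quote(quote: TDQuote, reference: dict) -> bool:
    # 1. The certificate chain must be anchored at an Intel-issued cert.
    if not quote.cert_chain_rooted_at_intel:
        return False
    # 2. The Quoting Enclave's signature over the quote must verify.
    if not quote.signature_valid:
        return False
    # 3. Platform TCB and TD software measurements must match the
    #    challenger's reference values.
    if quote.tcb_measurements != reference["tcb"]:
        return False
    return quote.td_measurements == reference["td"]
```

Only after all three checks pass should the challenger establish a secure channel or release secrets to the attester.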
6.5 Future Features
Live migration and trusted I/O are crucial features for confidential VMs but are currently not supported in TDX 1.0. However, according to documents [35, 36, 38], Intel is planning to include the support for live migration in TDX 1.5 and trusted I/O in TDX 2.0. These plans are still in progress and may be subject to change in the future. Here we provide a brief overview of these two features and explain their design.
Live Migration. Live migration is an essential feature for cloud service providers as it enables them to transfer running VMs from one physical host to another without any service interruptions. This functionality is important for maintenance tasks such as hardware upgrades, software patches, and load balancing. However, migrating a TD is more complex than migrating a traditional VM due to the security concerns of confidential computing. Since the hypervisor is considered untrusted, it is not allowed to directly access and transfer the CPU state and private memory of the TD from the source to the destination platform. Furthermore, tenants should have the ability to define and enforce migration policies. For instance, if the destination platform does not meet the TCB requirements specified in the policy, the migration should be canceled.
Intel introduces Service TDs to expand the trust boundary of the TDX Module. Rather than making the TDX Module overly complex and bloated, it is more convenient and flexible to add customized and specialized functionalities into a Service TD. A Service TD can be bound to regular TDs via the TDX Module with access privileges to their assets.
Migration TD (MigTD) is a Service TD that is specifically designed for live migration. The entire live migration session is under the control of the TDX Module and the MigTDs. The untrusted hypervisor, which is controlled by the cloud service provider, is only responsible for transferring the encrypted TD’s assets over networks. These assets include the TD’s metadata, CPU state, and private memory, and are protected by a Migration Session Key (MSK) that is only accessible by the MigTDs and TDX Module.
Both the source and destination platforms have a running MigTD. MigTDs are respectively bound to the source TD (to be migrated) and the destination TD (initially a TD template waiting for migration). The MigTDs are responsible for remote attestation between the source and destination platforms and evaluate their TCB levels based on security policies. Once the platforms are deemed acceptable for migration, a secure channel is established between the two MigTDs. The source MigTD generates an MSK, which is shared with the destination MigTD through this secure channel. Both MigTDs program the MSK into the corresponding TDX Modules. The source TDX Module exports and encrypts the TD’s assets with the MSK, while the destination TDX Module decrypts the assets with the same key and imports them into the destination TD. It is worth noting that the source and destination TDs have their HKIDs assigned independently and are thus protected with different TD private keys.
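The key flow described above can be modeled as follows. This toy sketch abstracts away mutual attestation and the secure channel; the `TdxModule` and `migrate` names are hypothetical, and the XOR "encryption" merely stands in for authenticated encryption under the MSK:

```python
# Simplified model of the Migration Session Key (MSK) flow.
# Attestation and the MigTD-to-MigTD secure channel are elided;
# XOR with a one-time key stands in for real authenticated encryption.

import secrets

class TdxModule:
    def __init__(self):
        self.msk = None

    def program_msk(self, msk: bytes) -> None:
        # A MigTD programs the MSK into its local TDX Module.
        self.msk = msk

    def export_assets(self, assets: bytes) -> bytes:
        # Source side: encrypt TD metadata/CPU state/memory with the MSK.
        return bytes(a ^ k for a, k in zip(assets, self.msk))

    def import_assets(self, blob: bytes) -> bytes:
        # Destination side: decrypt with the same MSK.
        return bytes(b ^ k for b, k in zip(blob, self.msk))

def migrate(src: TdxModule, dst: TdxModule, td_assets: bytes) -> bytes:
    # The source MigTD generates the MSK and shares it over the secure
    # channel; both MigTDs program it into their TDX Modules.
    msk = secrets.token_bytes(len(td_assets))
    src.program_msk(msk)
    dst.program_msk(msk)
    # The untrusted hypervisor only ever carries the encrypted blob.
    blob = src.export_assets(td_assets)
    return dst.import_assets(blob)
```

The point of the model is the trust split: the hypervisor moves only ciphertext, while the MSK exists solely inside the MigTDs and TDX Modules.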
Trusted I/O. A computer consists of various functional components. However, confidential computing has conceptually shattered the unified trust model. As a result, each component, made by different vendors, can no longer trust each other. This creates a serious impediment to efficient I/O, as untrusted devices cannot read and write data in the private memory of TEEs. To address this issue, Intel has proposed TDX Connect in TDX 2.0, aiming to extend the trust from a TD to external devices. This requires changes to the devices and the TDX platform to use a compatible protocol to establish mutual trust and enable secure communication channels. The key principle is that a TD and a device should be able to securely exchange and verify their identities and measurements. Additionally, the data paths between a TD and a device are not trusted and may be vulnerable to interception by attackers. Therefore, an end-to-end secure channel is necessary to protect the data transmitted between a TD and a device. The detailed protocols for TDX Connect can be found in the proposals [35, 38].
7 TDX MODULE
This section provides an in-depth analysis of the TDX Module. We first discuss its loading process in Section 7.1, followed by an explanation of the physical and linear memory layout in Section 7.2 and the initialization and configuration process in Section 7.3. We then describe the metadata created by the TDX Module to manage TDs in Section 7.4, and the process of context switching across security domains in Section 7.5. Additionally, we provide details about the Keyhole structure (Section 7.6) and memory management (Section 7.7) of the TDX Module.
7.1 Loading TDX Module
Figure 2 illustrates the two-stage process of loading the TDX Module. The process begins with the loading of the Intel Non-Persistent SEAM Loader (NP-SEAM Loader), which is an Intel Authenticated Code Module (ACM). ACMs are Intel-signed modules that run within the internal RAM of the processor. The NP-SEAM Loader is authenticated and loaded by the Intel Trusted Execution Technology (TXT) [19] through the GETSEC[ENTERACCS] instruction. The NP-SEAM Loader in turn verifies and loads the P-SEAM Loader.
It is important to note that both the P-SEAM Loader and the TDX Module are loaded in the SEAM Range, which is a portion of system memory reserved via UEFI/BIOS. The range’s base address and size are specified by the IA32_SEAMRR_PHYS_BASE and IA32_SEAMRR_PHYS_MASK MSRs.
The P-SEAM Loader provides a SEAMLDR.INSTALL interface function for installing or updating the TDX Module.
The SEAMLDR.INSTALL flow consists of:
(1) checking the parameters passed to the install function,
(2) verifying the signature of the TDX Module,
(3) checking the SVN of the to-be-loaded image and comparing it with the resident TDX Module,
(4) determining the physical and linear addresses and sizes of the TDX Module’s various memory regions in the SEAM Range: code, data, stack, page table, Sysinfo_Table, Keyhole, and Keyhole-Edit (Section 7.2),
(5) mapping the regions’ physical addresses to their linear addresses (Section 7.2),
(6) loading the TDX Module’s binary image into the SEAM Range, measuring the image, and computing and verifying the TDX Module’s hash value,
(7) setting up the TDX Module’s Sysinfo_Table,
(8) setting up a SEAM Transfer VMCS on each LP (Section 7.5),
(9) recording the TDX Module’s hash and SVN in the P-SEAM Loader’s data region.
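Steps (2) and (3) above are the security-critical gate: the signature check authenticates the image, and the SVN comparison prevents rollback to an older, potentially vulnerable module. A minimal sketch of that decision logic, with hypothetical helper names (the real checks are internal to the P-SEAM Loader and use signature verification rather than a bare digest comparison):

```python
# Minimal sketch of the image-acceptance decision in steps (2)-(3).
# A SHA-384 digest comparison stands in for real signature
# verification; names are hypothetical.

import hashlib

def install_allowed(image, expected_hash, image_svn, resident_svn=None):
    # (2) Stand-in for signature verification: the image's SHA-384
    #     digest must match the expected (signed) value.
    if hashlib.sha384(image).hexdigest() != expected_hash:
        return False
    # (3) Anti-rollback: reject images whose SVN is lower than the
    #     currently resident TDX Module's SVN (if one is loaded).
    if resident_svn is not None and image_svn < resident_svn:
        return False
    return True
```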
In addition to the SEAMLDR.INSTALL function, the P-SEAM Loader provides the SEAMLDR.INFO function to report information about the P-SEAM Loader and the loaded TDX Module, and the SEAMLDR.SHUTDOWN function to shut the P-SEAM Loader down.
7.2 Memory Layout of TDX Module
Here we discuss the physical and linear memory layout for the TDX Module, respectively.
Physical Memory Layout. Figure 3 depicts the physical memory layout of the TDX Module within the Module_Range. The layout starts with a 4 KB page that holds the Sysinfo_Table of the TDX Module. The first 2 KB of the Sysinfo_Table contain platform information populated by Mcheck from the NP-SEAM Loader; the next 2 KB are populated by the P-SEAM Loader with the TDX Module’s information, such as the SEAM Range base address and size, the base linear addresses of the memory regions, the number of LPs, and the range of private HKIDs. After the Sysinfo_Table, there is the per-LP VMCS region, in which each LP has a 4 KB SEAM Transfer VMCS (see Section 7.5). Following the per-LP VMCS region is the data region, which is partitioned into per-LP data regions and a global data region. Next comes the TDX Module’s 4-level page table, followed by the per-LP stack regions, and finally the code region for the TDX Module’s executable code.

Linear Memory Layout. The TDX Module has its own linear address space and maintains a page table to translate addresses. Figure 4 illustrates the layout of the TDX Module’s linear address space, which is established by the P-SEAM Loader through the construction of the TDX Module’s page table. To hinder memory corruption attacks, the P-SEAM Loader randomizes bits 34 to 46 of the linear addresses, which are represented by the boxes in Figure 4. The linear addresses and the sizes of all regions are recorded in the fields of the Sysinfo_Table. The Page Table Entries (PTEs) for code, stack, data, and the Sysinfo_Table can be statically populated in advance and require no changes to the page table at runtime. However, the Keyhole region serves to map data passed from external software dynamically during the execution of the TDX Module. This requires the addition of the Keyhole-Edit region to allow runtime editing of the PTEs for the Keyhole’s mapping. A detailed discussion of the Keyhole and Keyhole-Edit regions can be found in Section 7.6.
7.3 Initialization and Configuration of TDX Module
After the TDX Module is loaded, the host kernel is responsible for initializing and configuring the TDX Module. The host kernel makes a sequence of SEAMCALLs to do so: it performs global initialization (TDH.SYS.INIT), initializes the module on each LP (TDH.SYS.LP.INIT), configures the TDMRs and PAMTs (TDH.SYS.CONFIG, Section 7.7), and programs the TDX Module’s global private key (TDH.SYS.KEY.CONFIG).
7.4 Metadata for TDs
The TDX Module is responsible for managing the entire life cycle of TDs. As such, it needs to maintain metadata for each TD instance. The TDX Module ensures that the memory encryption is applied to the metadata to prevent the hypervisor from accessing or modifying it.
Each TD’s metadata consists of the following control structures: TDR, Trust Domain Control Structure (TDCS), Trust Domain Virtual Processor State (TDVPS), and Secure EPT (SEPT). Figure 5 illustrates the relationships between these control structures.
TDR. TDR is the initial structure that is created at the inception of a TD and is destroyed when the TD is terminated. During the entire life cycle of the TD, the TDR serves as the root of the TD’s other control structures; it is encrypted with the TDX Module’s global private key, since it holds the TD’s key management information before the TD’s own private key becomes usable.
TDCS. TDCS is a control structure that manages the operations and stores the state at the scope of a TD. It consists of four contiguous TDCX memory pages, each allocated for a specific purpose, such as the TD’s management structures, MSR bitmaps, the SEPT root page, and a special zero page. TDCS is encrypted with the TD’s private key, which is generated when the TDR is created.
TDVPS. TDVPS is a control structure for each virtual CPU of a TD. It consists of six memory pages, starting from a TDVPR page that contains references to multiple TDVPX pages. The first TDVPR page holds the fields for VE information, virtual CPU management, guest state, and guest MSR state. The second page is for the TD Transfer VMCS (Section 7.5), which controls the TD’s entry and exit. The third page is a Virtual APIC (VAPIC) page, followed by three pages for guest extension information. Like the TDCS, the TDVPS is also protected by the TD’s private key.
SEPT. For legacy VMs, hypervisors manage address translations from GPA to HPA using EPT. However, in TDX, guest address translations must be protected from untrusted hypervisors. To achieve this, TDX has two types of EPT: SEPT and Shared EPT. SEPT is used to translate addresses of a TD’s private memory and is protected by the TD’s private key. The reference to the SEPT and the SEPT root page are stored in the TDCS. Shared EPT, on the other hand, is used to translate addresses for memory explicitly shared by the TD with a hypervisor, such as in the case of virtualized I/O. It remains under the control of the hypervisor. The guest kernel in the TD can determine which memory pages to share by setting the shared bit in the GPA. Shared memory pages are not encrypted with the TD’s private key.
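The dispatch between the two EPTs is driven by the shared bit in the GPA, which can be sketched with simple bit arithmetic. The bit position below (bit 47, the top bit of a 48-bit GPA) is an assumed example; the actual position depends on the configured GPA width:

```python
# Sketch of private vs. shared GPA dispatch. Bit 47 is assumed as
# the shared bit for a 48-bit GPA width (illustrative only).

SHARED_BIT = 1 << 47

def is_shared(gpa: int) -> bool:
    # The guest kernel marks a page shared by setting this GPA bit.
    return bool(gpa & SHARED_BIT)

def select_ept(gpa: int) -> str:
    """Private GPAs are translated through the Secure EPT managed by
    the TDX Module; shared GPAs through the hypervisor's Shared EPT."""
    return "shared_ept" if is_shared(gpa) else "secure_ept"
```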
7.5 Context Switches
There are two types of context switches for TDX: the first occurs between the hypervisor and the TDX Module, while the second occurs between TDs and the TDX Module. We delve into each of these in more detail.
Hypervisor \(\leftrightarrow\) TDX Module.
In TDX, a hypervisor is prohibited from directly managing TDs. Instead, it must interact with the TDX Module through the SEAMCALL instruction, which transfers control to the host-side interface functions of the TDX Module.
The repurposing of VMCS for context switches between the hypervisor and the TDX Module may seem confusing initially, as the hypervisor is not a “guest VM” and the TDX Module is not a “host hypervisor.” We can disregard the guest/host concept and only view the SEAM Transfer VMCS as a means of switching the execution context between the hypervisor and the TDX Module.
Figure 6 depicts the location and layout of the SEAM Transfer VMCS regions in the Module_Range, which begin at the 4 KB page following the Sysinfo_Table (Section 7.2).
When a SEAMCALL is executed on an LP, the processor performs a context switch through that LP’s SEAM Transfer VMCS: the hypervisor’s state is saved into the VMCS, and the TDX Module’s execution context is loaded from it. The SEAMRET instruction performs the reverse transition, returning the LP to the hypervisor.
When the LP transitions into the TDX Module through the SEAMCALL instruction, the TDX Module identifies the current LP, switches to that LP’s own stack and data regions (Section 7.2), and dispatches the requested interface function.
TD \(\leftrightarrow\) TDX Module.
In traditional virtualization, the hypervisor handles VM exits, which are controlled by the VM’s Transfer VMCS. Each VMCS is associated with one virtual CPU and stores the virtual CPU state for recovering the guest execution in the next VM resume. However, this operation leaks the virtual CPU state, as the VMCS is visible to hypervisors. In TDX, synchronous TD exits (initiated by the TD via TDCALL) and asynchronous TD exits (triggered by events such as exceptions and interrupts) both trap into the TDX Module instead. The virtual CPU state is saved into the TD Transfer VMCS within the TDVPS (Section 7.4), which is encrypted with the TD’s private key and therefore inaccessible to the hypervisor.
In TDX, certain TD exits cannot be fully handled by the TDX Module and instead require a hypervisor to emulate certain operations, such as port I/O, HLT, CPUID, and more. However, traditional hypervisors have access to the entire virtual CPU state and memory, exposing more information than necessary to handle these exits. TDX addresses this issue by introducing a new mechanism for handling TD exits. All TD exits first trap into the TDX Module, which injects a VE into the TD to handle the exit. The TD’s guest kernel includes a corresponding VE handler that prepares a minimized set of parameters and invokes a hypercall (TDG.VP.VMCALL), so that only the information necessary for the emulation is exposed to the hypervisor.
7.6 Keyholes
All memory buffers passed through the SEAMCALL and TDCALL interface functions are referenced by their physical addresses, whereas the TDX Module accesses memory through its own linear address space. Before the TDX Module can access such an external buffer, the buffer’s physical page must be dynamically mapped into the module’s linear address space; the Keyhole mechanism serves this purpose.
The Keyhole region is a reserved linear address range specifically for address mapping. The region is comprised of an array of Keyholes. This array is further divided into 128-Keyhole segments, with each segment assigned to one LP. The TDX Module organizes free Keyholes in an LRU list when setting up per-LP data structures. Each Keyhole corresponds to a 4 KB-aligned linear address and links to a physical memory page. Since multiple memory buffers can exist within the same memory page, each Keyhole maintains a reference count to track the number of referenced buffers on the page.
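The Keyhole bookkeeping, per-LP segments of 128 Keyholes, an LRU list of free Keyholes, and per-page reference counts, can be modeled as a toy structure. The names below are hypothetical, and the actual PTE editing through the Keyhole-Edit region is omitted:

```python
# Toy model of one LP's Keyhole segment: 128 Keyholes, an LRU free
# list, and a reference count per mapped physical page.

from collections import OrderedDict

KEYHOLES_PER_LP = 128
PAGE = 4096

class KeyholeSegment:
    def __init__(self):
        # Free keyholes kept in LRU order (front = least recently used).
        self.free = OrderedDict((i, None) for i in range(KEYHOLES_PER_LP))
        # Active mappings: physical page number -> [keyhole index, refcount]
        self.mapped = {}

    def map_buffer(self, phys_addr: int) -> int:
        page = phys_addr // PAGE
        if page in self.mapped:
            # Another buffer on the same page: just bump the refcount.
            self.mapped[page][1] += 1
        else:
            # Take the LRU free keyhole (real code would also edit
            # the keyhole's PTE via the Keyhole-Edit region here).
            idx, _ = self.free.popitem(last=False)
            self.mapped[page] = [idx, 1]
        return self.mapped[page][0]

    def unmap_buffer(self, phys_addr: int) -> None:
        page = phys_addr // PAGE
        entry = self.mapped[page]
        entry[1] -= 1
        if entry[1] == 0:
            # Last reference gone: return the keyhole to the LRU list.
            self.free[entry[0]] = None
            del self.mapped[page]
```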
When the TDX Module is installed by the P-SEAM Loader, all the linear addresses of Keyholes are mapped to an empty physical address. This is achieved by setting all the leaf-level PTEs for the Keyhole region in the TDX Module’s page table to zero. Simultaneously, the physical addresses of the corresponding PTEs for the Keyholes are mapped to the Keyhole-Edit region. This enables the TDX Module to locate and modify the Keyhole’s address mappings in its page table during runtime.
When processing an interface function that references an external memory buffer, the TDX Module first checks whether the buffer’s physical page is already mapped by a Keyhole of the current LP. If so, it increments the Keyhole’s reference count and reuses the mapping; otherwise, it takes a free Keyhole from the LRU list and edits the corresponding PTE through the Keyhole-Edit region to map the page. When the buffer is no longer needed, the reference count is decremented, and the Keyhole is returned to the LRU list once the count reaches zero.
7.7 Physical Memory Management
The TDX Module manages physical memory by using a set of TDMRs and their control structures, PAMTs. TDMRs are constructed by the hypervisor based on a list of Convertible Memory Regions (CMRs), which are the memory regions that can be used for a TD’s private memory or metadata. These regions are subject to MKTME encryption and TDX memory integrity protection. The list of CMRs is prepared by the UEFI/BIOS.
Each TDMR is a single range of physical memory that is 1 GB-aligned and has a size that is an integral multiple of 1 GB, but does not necessarily need to be a power of two. Two TDMRs cannot overlap. A TDMR may contain reserved areas that cannot be used by the TDX Module. A reserved area is an array of 4 KB-aligned memory pages (each page is 4 KB). Memory in a TDMR, except for the reserved areas, must be convertible. It should be noted that TDMR configuration is managed by software without using hardware range registers.
The TDX Module uses the PAMT to track the page attributes of each physical memory page in a TDMR. The attributes contain information about the page owner, page type, and page size. The page attributes allow the TDX Module to ensure that a physical memory page in a TDMR has a proper type and is assigned to at most one TD. When a page is assigned to a TD’s private memory, the TDX Module can check whether the page sizes in the SEPT and the PAMT are consistent.
A PAMT is divided into blocks, where each block tracks page addresses within the 1 GB size range. Each block has three levels to track metadata for pages with sizes 4 KB, 2 MB, and 1 GB, respectively. The first level tracks a single 1 GB page, the second level tracks 512 2 MB pages, and the third level tracks \(512\times 512\) 4 KB pages. Given a physical address, the TDX Module can perform a PAMT hierarchical walk to retrieve its page attributes for a sanity check.
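The index arithmetic of such a walk can be sketched as follows. This is illustrative only; real PAMT entries also carry the owner, type, and size attributes described above:

```python
# Index arithmetic for a PAMT hierarchical walk (illustrative).
# Each PAMT block covers 1 GB: one 1 GB entry, 512 entries for
# 2 MB pages, and 512*512 entries for 4 KB pages.

GB = 1 << 30
MB2 = 1 << 21
KB4 = 1 << 12

def pamt_indices(phys_addr: int):
    """Return (block, 2 MB index, 4 KB index) for a physical address
    inside a TDMR."""
    block = phys_addr // GB        # which 1 GB PAMT block
    off = phys_addr % GB
    idx_2m = off // MB2            # 0 .. 511
    idx_4k = off // KB4            # 0 .. 512*512 - 1
    return block, idx_2m, idx_4k
```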
The TDX Module manages the data structure by updating the attributes of each page it uses during runtime. Any operation that requires accessing, removing, or adding a page causes the TDX Module to walk through PAMT to adjust the page attributes and check corresponding access rights. The memory for PAMT is allocated by the hypervisor and is encrypted with the TDX Module’s global private key.
7.8 Side Channel Mitigation
Some of the known CPU vulnerabilities have been addressed in hardware fixes. During the initialization of the TDX Module, it reads the IA32_ARCH_CAPABILITIES MSR to determine which vulnerabilities have already been mitigated by the hardware, and enables software mitigations for those that have not.
Furthermore, to counteract the Bounds Check Bypass (BCB) [1] vulnerability, a software-level mitigation strategy is deployed, employing memory barriers such as LFENCE after bounds checks to constrain speculative execution.
8 MEMORY PROTECTION
A TD’s memory is divided into private memory and shared memory. The private memory is only accessible by the TD and the TDX Module. The shared memory is also accessible by the hypervisor and is used for operations that require cooperation from the hypervisor, such as networking, I/O, and DMA. TDX protects the confidentiality and integrity of a TD’s private memory.
8.1 HKID Space Partitioning
The HKID space is partitioned once during the boot process into two ranges, private HKIDs and shared HKIDs. Only software in the SEAM mode, namely the TDX Module and TDs, can read and write memory whose contents are encrypted by keys associated with private HKIDs. Keys associated with shared HKIDs can be used to encrypt memory outside the SEAM mode, such as the memory of legacy VMs and the host kernel.
When the hypervisor requests the TDX Module to establish a TD, it allocates a private HKID for the TD. The TDX Module, using the PCONFIG instruction, requests the processor to generate an ephemeral cryptographic key and associate it with this HKID.
A physical memory page associated with an HKID stores the HKID in the upper bits of the page’s physical address, as shown in Figure 7. At boot time, the number of physical address bits used for HKIDs, as well as the partitioning of the HKID space into shared and private ranges, is configured through the IA32_TME_ACTIVATE MSR.
The hypervisor and the TDX Module configure the memory encryption by setting the HKID in the upper bits of the physical address of a memory page. The hypervisor can only use shared HKIDs, while the TDX Module can use both shared and private HKIDs. An exception will be raised if any software executing outside SEAM mode tries to access memory through a physical address with a private HKID.
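The HKID encoding and the shared/private partition can be illustrated with simple bit arithmetic. The widths and the partition point below are assumed example values; the real values are platform-configured at boot:

```python
# Sketch of HKID encoding in the upper physical address bits.
# 52 physical address bits with 6 HKID bits, and a shared/private
# split at HKID 32, are assumed example values.

PA_BITS = 52
HKID_BITS = 6
DATA_BITS = PA_BITS - HKID_BITS
FIRST_PRIVATE_HKID = 32

def encode(pa: int, hkid: int) -> int:
    # Place the HKID in the upper bits of the physical address.
    assert pa < (1 << DATA_BITS) and hkid < (1 << HKID_BITS)
    return (hkid << DATA_BITS) | pa

def decode(addr: int):
    # Recover (hkid, physical address) from an encoded address.
    return addr >> DATA_BITS, addr & ((1 << DATA_BITS) - 1)

def is_private(hkid: int) -> bool:
    # Private HKIDs may only be used from SEAM mode.
    return hkid >= FIRST_PRIVATE_HKID
```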
8.2 TD Memory Integrity Protection
TDX always protects the integrity of the TD’s private memory content. This protection is required because an entity outside the SEAM mode, e.g., a malicious hypervisor or a DMA device, can write to the TD’s private memory. TDX cannot prevent such modification, but it can detect and flag it. It prevents a TD or the TDX Module from reading the tampered content. To detect such tampering, TDX supports two memory integrity modes that can be configured on a system:
(1) Logical Integrity (Li): memory integrity is protected by a TD Owner Bit.
(2) Cryptographic Integrity (Ci): memory integrity is protected by a MAC and a TD Owner Bit.
Both Li and Ci apply to physical memory segments that are the size of a cache line and are cache-line aligned. Ci can detect modifications made by direct physical access to the memory or bit flips, such as the Rowhammer attack [45], which Li cannot detect.
In addition to Li and Ci, if a program outside the SEAM mode reads the private memory of a TD or the TDX Module, the read will always return zeros. This is to prevent ciphertext cryptanalysis and side channels in which a program outside the SEAM mode could determine whether a program in the SEAM mode changes the memory content.
If a TD or the TDX Module writes to a memory segment belonging to a TD’s private memory, the corresponding TD Owner Bit is set to 1. Due to the way a TD’s memory is set up, all TD Owner Bits of a TD’s private memory should be set to 1. However, if an entity outside the SEAM mode writes to a segment belonging to the private memory, the corresponding TD Owner Bit is cleared to 0. Later, when the TD or the TDX Module reads the segment, the segment is marked as poisoned. If the reader is the TD, this poisoned marking causes a TD exit for the TD. The TDX Module can capture this TD exit and put the TD into a fatal state, which prevents any further entry into the TD and leads to the tearing down of the TD. If the TDX Module reads the poisoned content, the TDX Module and the TDX’s hardware extension in the processor are marked as disabled. Any further invocations of the TDX Module will fail.
If Ci is enabled, the processor generates a 128-bit MAC key during system initialization. On each write, TDX uses this key to calculate and store a 28-bit MAC in the ECC memory corresponding to the cache line. On each read, the memory controller recalculates the MAC and compares it with the value read from the ECC memory. A mismatch indicates an integrity or authenticity violation and results in the cache line being marked as poisoned. The MAC is calculated over (1) the ciphertext (encrypted content of the cache line), (2) the tweak values used for AES-XTS encryption, (3) the TD Owner Bit, and (4) the 128-bit MAC key.
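Intel does not publish the exact MAC construction used by the memory controller; the following Python sketch only illustrates the inputs listed above and the 28-bit truncation, using HMAC-SHA-256 as a stand-in for the real keyed MAC:

```python
import hashlib
import hmac
import os

def ci_mac(mac_key: bytes, ciphertext: bytes, tweak: bytes, owner_bit: int) -> int:
    """Illustrative only: a 28-bit tag over the inputs named in the text
    (ciphertext, AES-XTS tweak, TD Owner Bit), keyed with the 128-bit MAC
    key. The real hardware construction is not published; HMAC-SHA-256 is
    merely a stand-in."""
    msg = ciphertext + tweak + bytes([owner_bit & 1])
    tag = hmac.new(mac_key, msg, hashlib.sha256).digest()
    return int.from_bytes(tag[:4], "big") >> 4   # truncate to 28 bits

key = os.urandom(16)        # 128-bit MAC key generated at system initialization
line = os.urandom(64)       # one encrypted (ciphertext) cache line
tweak = os.urandom(16)
stored = ci_mac(key, line, tweak, owner_bit=1)

# On read, the memory controller recomputes and compares:
assert ci_mac(key, line, tweak, 1) == stored
# Any modification (here: a single bit flip) is detected with high probability:
flipped = bytes([line[0] ^ 1]) + line[1:]
assert ci_mac(key, flipped, tweak, 1) != stored
```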
9 REMOTE ATTESTATION
The attestation of a TD consists of generating a local attestation report, which can be verified on the platform, and then extending this report with digital signatures and certificates to enable remote attestation of the TD off the platform. We first describe the overall process of generating and extending a local TD report in Section 9.1. Then we review the setup and the configuration of the host TDX platform to enable remote attestation in Section 9.2. Finally, we provide details on using remote attestation for establishing a secure channel and encrypted boot in Section 9.3.
9.1 Attestation Process
Several steps are involved when generating and extending a local report of a TD to enable remote attestation. The first step is to take measurements of the loaded software during the build-time and runtime of the TD. The next step is to retrieve the TD’s measurements and platform TCB information, i.e., generating a TD report. The final step is to derive a quote from the TD report. A third party can use the quote to verify whether the TD runs on a genuine TDX platform with the expected TCB versions and software measurements.
Taking Measurements. TDX provides two types of measurement registers for each TD: a build-time measurement register called Measurement of Trust Domain (MRTD) and four Runtime Measurement Registers (RTMRs). These measurement registers are comparable to the TPM's PCRs; Table 3, derived from [41], shows the mapping between TDX measurement registers and TPM PCRs.
The MRTD contains a measurement of the TD build process. At TD creation, when the hypervisor adds initial memory pages to the TD, it extends the MRTD in the TDCS with measurements of these pages. The hypervisor calls the TDX Module's TDH.MR.EXTEND function to perform these extensions and TDH.MR.FINALIZE to seal the MRTD before the TD first runs.
RTMRs are general measurement registers labeled 0 through 3 for a TD's runtime measurements. A TD can use these registers to provide a measured boot, i.e., measuring all software loaded after booting. These measurement registers are initialized to zero. The TD calls the TDX Module's TDG.MR.RTMR.EXTEND function to extend an RTMR with a new measurement.
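The RTMRs follow the familiar PCR extension pattern: the new register value is a hash over the old value concatenated with the new measurement. A minimal Python sketch, assuming SHA-384 (the digest size of TDX measurement registers) and using our own function name:

```python
import hashlib

RTMR_SIZE = 48  # TDX measurement registers hold SHA-384 digests

def rtmr_extend(rtmr: bytes, measurement: bytes) -> bytes:
    """PCR-style extend, as a sketch: the new register value is the SHA-384
    hash of the old value concatenated with the 48-byte measurement."""
    assert len(rtmr) == RTMR_SIZE and len(measurement) == RTMR_SIZE
    return hashlib.sha384(rtmr + measurement).digest()

# Measured boot: a register starts at zero and each loaded component is
# folded in, in load order.
rtmr2 = bytes(RTMR_SIZE)
for component in (b"kernel image", b"initrd", b"cmdline"):
    digest = hashlib.sha384(component).digest()
    rtmr2 = rtmr_extend(rtmr2, digest)
```

The final value depends on every component and on their order, so a verifier holding the same event log can recompute and compare it.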
Generating TD Reports.
A report is generated inside a TD. The TD calls the TDX Module's TDG.MR.REPORT function, providing 64 bytes of user-defined report data to be embedded in the report, e.g., a nonce or a hash of additional information.
Figure 8 illustrates a TD report consisting of three components: a MAC-protected header (REPORTMACSTRUCT), the platform's TCB information (TEE_TCB_INFO), and the TD's measurements and attributes (TDINFO).
Deriving Quotes.
To enable verification off the platform by a third party, the TD report must be converted into a quote. TDX largely reuses the remote attestation mechanism of SGX. A TD makes a call to request the QE running on the host platform to sign the TD report. This call can be implemented over a VSOCK connection to the host or a GetQuote hypercall defined by the guest-hypervisor communication interface [39].
9.2 Platform Setup
Configuring the attestation infrastructure involves registering the platform with the Intel PCS, running architectural enclaves for generating quotes, and retrieving certificates required for verifying quotes. Intel extends the existing DCAP [58] to support remote attestation for TDX.
Registration. On multiple-package platforms, platform keys are derived at platform assembly time. These keys are shared between CPU packages and are encrypted by the CPU's unique hardware key. Provisioning Certification Keys (PCKs) are derived from the platform keys and used for certifying (signing) attestation keys. Since PCKs are not initially recognized by the attestation infrastructure, they must be registered with the Intel PCS.
To register a platform, we need to run the PCK Cert ID Retrieval Tool to extract a manifest from the platform. This manifest contains information on the CPU packages, e.g., CPU ID (128-bit), SVN, and hardware TCB information, and is signed with keys derived from the CPU packages' hardware keys. When the Intel PCS receives the registration request, it verifies these signatures and checks whether the CPUs and TCB are in good standing. If registration succeeds, the Intel PCS returns an Intel-issued certificate for the PCK.
Typically in DCAP, a Provisioning Certification Caching Service (PCCS) runs on the host platform to facilitate PCK certificate retrieval, although it can run anywhere. It forwards PCK requests from the PCK Cert ID Retrieval Tool to the Intel PCS and caches the returned PCK certificates locally. The Intel PCS also provides certificates and revocation lists for PCKs on all genuine Intel platforms; the PCCS maintains local caches of these artifacts as well.
Before registering, a platform must have the appropriate UEFI/BIOS settings and access to the Intel PCS. Both TDX and SGX must be enabled in the UEFI/BIOS on the host platform. An Intel account is required for retrieving API keys for registering a platform with the Intel PCS. If the PCCS is utilized, it must be configured with the API keys and Intel PCS server’s address.
Architectural Enclaves.
To enable quote generation on the platform, Intel provides two architectural enclaves: the Provisioning Certificate Enclave (PCE) and the QE. The PCE acts as a local certification authority for the QE. During its initialization, the QE generates an attestation key pair and sends the public part to the PCE. The PCE authenticates that this is a legitimate QE on the platform and then signs the QE attestation public key certificate with the PCK. This signature creates a quote certificate chain from an Intel-issued PCK certificate to the QE attestation public key. Figure 9 illustrates the quote's certificate chain: the PCK certificate is used to verify the QE attestation public key certificate, and the QE attestation public key is used in turn to verify the signature on the quote.
Remote Attestation Flow.
Figure 10 shows a remote third party performing attestation with an attestation agent running in a TD. The remote party sends an attestation request providing a nonce to the attestation agent (Step 1). The nonce provides freshness to the request and prevents replay attacks. The attestation agent retrieves a TD report from the TDX Module, providing the nonce as the report data (Step 2). The agent then requests the QE to sign the report into a quote (Step 3) and returns the quote to the remote party (Step 4).
The remote party requires the platform's PCK certificate to verify the quote, so it may download the PCK certificate from a PCCS (Step 5) or retrieve it directly from the Intel PCS (Step 6). The party then proceeds to validate the quote (Step 7). It checks for the nonce in the quote and verifies the integrity of the signature chain from the Intel-issued PCK certificate to the signed quote, walking the certificate chain to determine whether the quote has a valid signature. The party also checks that no keys in the chain have been revoked and that the TCB is up to date. Finally, the party checks whether the measurements in the quote, i.e., the values in the MRTD and RTMRs, match a set of reference values. If it successfully validates the quote, the remote party can trust that the TD has been properly instantiated on a TDX platform.
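The validation in Step 7 amounts to a checklist. The following Python sketch encodes that checklist; the `Quote` structure and the `chain_ok`/`tcb_ok` flags are our own simplifications standing in for the actual quote format and for the PCK certificate chain and TCB checks, which require Intel's certificates and revocation lists:

```python
from dataclasses import dataclass

@dataclass
class Quote:
    """Simplified stand-in for a TDX quote; the signature chain is elided."""
    nonce: bytes
    mrtd: bytes
    rtmrs: list

def validate_quote(quote: Quote, expected_nonce: bytes, reference: dict,
                   chain_ok: bool, tcb_ok: bool) -> bool:
    """Step 7 as a checklist. `chain_ok`/`tcb_ok` summarize the certificate
    chain walk, revocation check, and TCB status check described in the text."""
    if quote.nonce != expected_nonce:
        return False            # stale or replayed quote
    if not chain_ok:
        return False            # broken chain back to Intel-issued PCK cert
    if not tcb_ok:
        return False            # revoked key or outdated TCB
    if quote.mrtd != reference["mrtd"]:
        return False            # unexpected TD build measurement
    if quote.rtmrs != reference["rtmrs"]:
        return False            # unexpected runtime measurements
    return True

quote = Quote(nonce=b"fresh-nonce", mrtd=b"\xaa" * 48, rtmrs=[bytes(48)] * 4)
reference = {"mrtd": b"\xaa" * 48, "rtmrs": [bytes(48)] * 4}
assert validate_quote(quote, b"fresh-nonce", reference, chain_ok=True, tcb_ok=True)
assert not validate_quote(quote, b"stale-nonce", reference, chain_ok=True, tcb_ok=True)
```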
9.3 Use Cases
Secure Channel Establishment. Remote attestation can be integrated with establishing a secure channel [46], linking channel setup with the endpoint’s TEE identity, state, and configuration. This integration prevents relay attacks since an attacker cannot forward a challenger’s attestation request from a compromised system to a trusted system to service the request.
In a typical scenario where a client negotiates a secure channel with a server running in a TEE, it wants to ensure a connection with a properly instantiated server. The server, serving as the attester, generates an ephemeral public and private key pair. It computes the hash of the public key and then creates a TD report providing this hash as the report data, thereby binding the key pair to the TD. The report is converted into a quote and presented to the client, which verifies the quote and checks that the hash in the report data matches the public key offered during channel establishment; only then does it complete the handshake.
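The binding between the ephemeral key and the TD can be sketched as follows. Which hash fills the 64-byte report-data field is an implementation choice of the attester; we use SHA-512 here only because its digest is exactly 64 bytes, and `public_key_bytes` is a stand-in for the serialized ephemeral public key (key generation and quote signing are elided):

```python
import hashlib
import os

# Attester side: hash the ephemeral public key into the 64-byte report data.
public_key_bytes = os.urandom(32)                        # stand-in key bytes
report_data = hashlib.sha512(public_key_bytes).digest()  # exactly 64 bytes
assert len(report_data) == 64

# Client side: after verifying the quote, recompute the hash over the public
# key received in the handshake and compare it with the report data carried
# in the quote. A match proves the key originates from the attested TD.
def key_matches_quote(pubkey: bytes, quote_report_data: bytes) -> bool:
    return hashlib.sha512(pubkey).digest() == quote_report_data

assert key_matches_quote(public_key_bytes, report_data)
assert not key_matches_quote(os.urandom(32), report_data)
```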
Tenants also have the flexibility to integrate the attestation agent in the virtual BIOS and wrap the entire VM workload within an encrypted image. In this case, the virtual BIOS needs to be extended to support the functionalities of retrieving the quote and fetching the key for mounting the encrypted disk image.
10 CONCLUSION
In this article, we provide a top-down review of Intel TDX, covering its security principles, threat model, underpinning technologies, system architecture, and future features. We then dive deeper into the design of the TDX Module, memory protection mechanisms, and remote attestation. The review is based on publicly available documentation and source code. As confidential computing is a fast-evolving field, we highlight ongoing challenges and efforts, including the need to support live migration and trusted I/O. We will continue to conduct in-depth security analysis as the technology progresses.
ACKNOWLEDGMENTS
We would like to extend our sincere thanks to Guerney Hunt, Rick Boivie, Dimitrios Pendarakis, and Jonathan Bradbury for taking the time to read our draft and providing their invaluable feedback and suggestions.
APPENDIX
A LIST OF ACRONYMS
ABI Application Binary Interface
ACM Authenticated Code Module
ACPI Advanced Configuration and Power Interface
AES Advanced Encryption Standard
BCB Bounds Check Bypass
CCA Confidential Compute Architecture
CET Control Flow Enforcement Technology
Ci Cryptographic Integrity
CMR Convertible Memory Region
CoVE Confidential VM Extension
DCAP Data Center Attestation Primitives
DMA Direct Memory Access
DoS Denial of Service
ECC Error Correction Code
EPC Enclave Page Cache
EPCM Enclave Page Cache Map
EPT Extended Page Table
ESM Enter Secure Mode
GPA Guest Physical Address
GPT Granule Protection Table
GVA Guest Virtual Address
HKID Host Key Identifier
HPA Host Physical Address
IAS Intel Attestation Service
IOMMU Input/Output Memory Management Unit
ISA Instruction Set Architecture
KET Key Encryption Table
Li Logical Integrity
LP Logical Processor
MAC Message Authentication Code
MEE Memory Encryption Engine
MigTD Migration TD
MKTME Multi-Key Total Memory Encryption
MMIO Memory-Mapped Input/Output
MMU Memory Management Unit
MRTD Measurement of Trust Domain
MSK Migration Session Key
MSR Model-Specific Register
MTT Memory Tracking Table
OS Operating System
PAMT Physical Address Metadata Table
PCCS Provisioning Certification Caching Service
PCE Provisioning Certificate Enclave
PCI Peripheral Component Interconnect
PCK Provisioning Certification Key
PCR Platform Configuration Register
PCS Provisioning Certification Service
PEF Protected Execution Facility
PSP Platform Security Processor
PTE Page Table Entry
QE Quoting Enclave
RDCL Rogue Data Cache Load
RME Realm Management Extension
RMP Reverse Mapping Table
RTMR Runtime Measurement Register
S-IOV Scalable I/O Virtualization
SEAM Secure-Arbitration Mode
SEPT Secure EPT
SEV Secure Encrypted Virtualization
SGX Software Guard Extensions
SLAT Second Level Address Translation
SME Secure Memory Encryption
SMM System Management Mode
SoC System-on-Chip
SR-IOV Single Root I/O Virtualization
SVM Secure Virtual Machine
SVN Security Version Number
SVSM Secure VM Service Module
TCB Trusted Computing Base
TD Trust Domain
TDCS Trust Domain Control Structure
TDMR Trust Domain Memory Region
TDR Trust Domain Root
TDVPS Trust Domain Virtual Processor State
TDX Trust Domain Extensions
TEE Trusted Execution Environment
TLB Translation Lookaside Buffer
TLS Transport Layer Security
TME Total Memory Encryption
TPM Trusted Platform Module
TSM TEE Security Manager
TVM TEE Virtual Machine
TXT Trusted Execution Technology
UD Undefined Instruction
VAPIC Virtual APIC
VE Virtualization Exception
VM Virtual Machine
VMCS Virtual Machine Control Structure
VMM Virtual Machine Monitor
VMPL Virtual Machine Privilege Level
VMX Virtual Machine Extensions
VT Virtualization Technology
Footnotes
1 At the time of this writing, the size of the processor’s cache line is 64 bytes; thus the address of such a memory segment is 64 B-aligned.
REFERENCES
- [1] 2018. CVE-2017-5753. Retrieved March 29, 2024 from https://nvd.nist.gov/vuln/detail/CVE-2017-5753
- [2] 2018. CVE-2017-5754. Retrieved March 29, 2024 from https://nvd.nist.gov/vuln/detail/CVE-2017-5754
- [3] 2020. Strengthening VM isolation with integrity protection and more. AMD (2020).
- [4] Sergei Arnautov, Bohdan Trach, Franz Gregor, Thomas Knauth, Andre Martin, Christian Priebe, Joshua Lind, Divya Muthukumaran, Dan O'Keeffe, Mark L. Stillwell, David Goltzsche, Dave Eyers, Rüdiger Kapitza, Peter Pietzuch, and Christof Fetzer. 2016. SCONE: Secure Linux containers with Intel SGX. In Proceedings of the OSDI. 689–703.
- [5] 2022. Branch history injection: On the effectiveness of hardware mitigations against cross-privilege Spectre-v2 attacks. In Proceedings of the 31st USENIX Security Symposium (USENIX Security 22). 971–988.
- [6] 2015. Shielding applications from an untrusted cloud with Haven. ACM Transactions on Computer Systems 33, 3 (2015), 1–26.
- [7] 2016. Attestation transparency: Building secure internet services for legacy clients. In Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security. 687–698.
- [8] 2017. Hybrids on steroids: SGX-based high performance BFT. In Proceedings of the 12th European Conference on Computer Systems. 222–237.
- [9] Patrick Bohrer, James Peterson, Mootaz Elnozahy, Ram Rajamony, Ahmed Gheith, Ron Rockhold, Charles Lefurgy, Hazim Shafi, Tarun Nakra, Rick Simpson, Evan Speight, Kartik Sudeep, Eric Van Hensbergen, and Lixin Zhang. 2004. Mambo: A full system simulator for the PowerPC architecture. ACM SIGMETRICS Performance Evaluation Review 31, 4 (2004), 8–12.
- [10] 2012. SecureBlue++: CPU support for secure execution. IBM Research Division, RC25287 (WAT1205-070) (2012), 1–9.
- [11] 2017. Software grand exposure: SGX cache attacks are practical. In Proceedings of the WOOT. 11–11.
- [12] 2016. SecureKeeper: Confidential ZooKeeper using Intel SGX. In Proceedings of the 17th International Middleware Conference. 1–13.
- [13] 2019. SgxPectre: Stealing Intel secrets from SGX enclaves via speculative execution. In Proceedings of the 2019 IEEE European Symposium on Security and Privacy (EuroS&P). IEEE, 142–157.
- [14] 2023. Retrieved March 29, 2024 from https://www.kernel.org/doc/Documentation/x86/tdx.rst
- [15] 2018. BranchScope: A new side-channel attack on directional branch predictor. ACM SIGPLAN Notices 53, 2 (2018), 693–707.
- [16] 2017. Iron: Functional encryption using Intel SGX. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 765–782.
- [17] 2017. HardIDX: Practical and secure index with SGX. In Proceedings of the Data and Applications Security and Privacy XXXI: 31st Annual IFIP WG 11.3 Conference (DBSec 2017). Springer, 386–408.
- [18] 2017. Cache attacks on Intel SGX. In Proceedings of the 10th European Workshop on Systems Security. 1–6.
- [19] 2012. Intel Trusted Execution Technology: Hardware-based technology for enhancing server platform security. Intel Corporation (2012).
- [20] Zhongshu Gu, Heqing Huang, Jialong Zhang, Dong Su, Hani Jamjoom, Ankita Lamba, Dimitrios Pendarakis, and Ian Molloy. 2020. Confidential inference via ternary model partitioning.
- [21] 2019. Reaching data confidentiality and model accountability on the CalTrain. In Proceedings of the 2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN). IEEE, 336–348.
- [22] Guerney D. H. Hunt, Ramachandra Pai, Michael V. Le, Hani Jamjoom, Sukadev Bhattiprolu, Rick Boivie, Laurent Dufour, Brad Frey, Mohit Kapur, Kenneth A. Goldman, Ryan Grimm, Janani Janakirman, John M. Ludden, Paul Mackerras, Cathy May, Elaine R. Palmer, Bharata Bhasker Rao, Lawrence Roy, William A. Starke, Jeff Stuecheli, Enriquillo Valdez, and Wendel Voigt. 2021. Confidential computing for OpenPOWER. In Proceedings of the 16th European Conference on Computer Systems. 294–310.
- [23] 2020. Retrieved March 29, 2024 from https://github.com/open-power/ultravisor
- [24] 2022. Introducing IBM Secure Execution for Linux 1.3.0. Retrieved from https://www.ibm.com/docs/en/linuxonibm/pdf/l130se03.pdf
- [25] Vedvyas Shanbhogue, Deepak Gupta, and Ravi Sahita. 2019. Security analysis of processor instruction set architecture for enforcing control-flow integrity. In Proceedings of the 8th International Workshop on Hardware and Architectural Support for Security and Privacy. 1–11.
- [26] 2021. Intel trust domain CPU architectural extensions specification. Retrieved March 29, 2024 from https://cdrdv2.intel.com/v1/dl/getContent/733582
- [27] 2022. Retrieved from https://www.intel.com/content/www/us/en/download/738875/738876/intel-trust-domain-extension-intel-tdx-module.html
- [28] 2022. Retrieved March 29, 2024 from https://www.intel.com/content/www/us/en/download/738874/intel-trust-domain-extension-intel-tdx-loader.html
- [29] 2022. Retrieved March 29, 2024 from https://github.com/intel/tdx/
- [30] 2022. Retrieved March 29, 2024 from https://intel.github.io/ccc-linux-guest-hardening-docs/security-spec.html
- [31] 2022. Intel architecture memory encryption technologies. Intel Corporation (2022).
- [32] 2022. Intel TDX loader interface specification. Retrieved March 29, 2024 from https://cdrdv2.intel.com/v1/dl/getContent/733584
- [33] 2022. TDX Virtual Firmware (TDVF). Retrieved March 29, 2024 from https://github.com/tianocore/edk2-staging/tree/TDVF
- [34] 2023. CPUID enumeration and architectural MSRs. Retrieved March 29, 2024 from https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/technical-documentation/cpuid-enumeration-and-architectural-msrs.html
- [35] 2023. Device attestation model in confidential computing environment. Retrieved March 29, 2024 from https://cdrdv2.intel.com/v1/dl/getContent/742533
- [36] 2023. Intel TDX Module v1.5 TD migration architecture specification. Retrieved March 29, 2024 from https://cdrdv2.intel.com/v1/dl/getContent/733578
- [37] 2023. Intel TDX Module v1.5 TD partitioning architecture specification. Retrieved March 29, 2024 from https://cdrdv2.intel.com/v1/dl/getContent/773039
- [38] 2023. Intel TDX Connect architecture specification. Retrieved March 29, 2024 from https://cdrdv2.intel.com/v1/dl/getContent/773614
- [39] 2023. Intel TDX guest-hypervisor communication interface. Retrieved March 29, 2024 from https://cdrdv2.intel.com/v1/dl/getContent/726790
- [40] 2023. Intel TDX Module 1.0 specification. Retrieved March 29, 2024 from https://cdrdv2.intel.com/v1/dl/getContent/733568
- [41] 2023. Intel TDX virtual firmware design guide. Retrieved March 29, 2024 from https://cdrdv2.intel.com/v1/dl/getContent/733585
- [42] 2023. Intel trust domain extensions. Retrieved March 29, 2024 from https://cdrdv2.intel.com/v1/dl/getContent/690419
- [43] 2017. Protecting VM register state with SEV-ES. AMD (2017).
- [44] 2016. AMD memory encryption. AMD (2016).
- [45] 2014. Flipping bits in memory without accessing them: An experimental study of DRAM disturbance errors. In Proceedings of the 2014 ACM/IEEE 41st International Symposium on Computer Architecture (ISCA). 361–372.
- [46] Thomas Knauth, Michael Steiner, Somnath Chakrabarti, Li Lei, Cedric Xing, and Mona Vij. 2019. Integrating remote attestation with transport layer security.
- [47] Paul Kocher, Jann Horn, Anders Fogh, Daniel Genkin, Daniel Gruss, Werner Haas, Mike Hamburg, Moritz Lipp, Stefan Mangard, Thomas Prescher, Michael Schwarz, and Yuval Yarom. 2020. Spectre attacks: Exploiting speculative execution. Communications of the ACM 63, 7 (2020), 93–101.
- [48] 2018. Spectre returns! Speculation attacks using the return stack buffer. In Proceedings of the 12th USENIX Workshop on Offensive Technologies.
- [49] 2017. Inferring fine-grained control flow inside SGX enclaves with branch shadowing. In Proceedings of the USENIX Security Symposium. 16–18.
- [50] 2022. Design and verification of the Arm Confidential Compute Architecture. In Proceedings of the 16th USENIX Symposium on Operating Systems Design and Implementation (OSDI 22). 465–484.
- [51] 2017. Glamdring: Automatic application partitioning for Intel SGX. In Proceedings of the 2017 USENIX Annual Technical Conference (USENIX ATC'17). USENIX Association, Santa Clara, CA, 285–298.
- [52] Moritz Lipp, Michael Schwarz, Daniel Gruss, Thomas Prescher, Werner Haas, Jann Horn, Stefan Mangard, Paul Kocher, Daniel Genkin, Yuval Yarom, Mike Hamburg, and Raoul Strackx. 2020. Meltdown: Reading kernel memory from user space. Communications of the ACM 63, 6 (2020), 46–56.
- [53] Frank McKeen, Ilya Alexandrovich, Alex Berenzon, Carlos V. Rozas, Hisham Shafi, Vedvyas Shanbhogue, and Uday R. Savagaonkar. 2013. Innovative instructions and software model for isolated execution. In Proceedings of the 2nd International Workshop on Hardware and Architectural Support for Security and Privacy (HASP'13).
- [54] 2017. CacheZoom: How SGX amplifies the power of cache attacks. In Proceedings of Cryptographic Hardware and Embedded Systems (CHES 2017). Springer, 69–90.
- [55] 2016. Oblivious multi-party machine learning on trusted processors. In Proceedings of the USENIX Security Symposium. 10–12.
- [56] 2018. EnclaveDB: A secure database using SGX. In Proceedings of the 2018 IEEE Symposium on Security and Privacy (SP). IEEE, 264–278.
- [57] 2023. CoVE: Towards confidential computing on RISC-V platforms. In Proceedings of the 20th ACM International Conference on Computing Frontiers. 315–321.
- [58] 2018. Supporting third party attestation for Intel SGX with Intel data center attestation primitives. Retrieved March 29, 2024 from https://cdrdv2-public.intel.com/671314/intel-sgx-support-for-third-party-attestation.pdf
- [59] 2015. VC3: Trustworthy data analytics in the cloud using SGX. In Proceedings of the 2015 IEEE Symposium on Security and Privacy. IEEE, 38–54.
- [60] 2017. Malware guard extension: Using SGX to conceal cache attacks. In Proceedings of Detection of Intrusions and Malware, and Vulnerability Assessment (DIMVA 2017). Springer, 3–24.
- [61] 2023. Linux SVSM (Secure VM Service Module). Retrieved March 29, 2024 from https://github.com/AMDESE/linux-svsm
- [62] Florian Tramèr and Dan Boneh. 2019. Slalom: Fast, verifiable and private execution of neural networks in trusted hardware.
- [63] 2017. Graphene-SGX: A practical library OS for unmodified applications on SGX. In Proceedings of the USENIX Annual Technical Conference. 645–658.
- [64] Rich Uhlig, Gil Neiger, Dion Rodgers, Amy L. Santoni, Fernando C. M. Martins, Andrew V. Anderson, Steven M. Bennett, Alain Kagi, Felix H. Leung, and Larry Smith. 2005. Intel virtualization technology. Computer 38, 5 (2005), 48–56.
- [65] 2018. Foreshadow: Extracting the keys to the Intel SGX kingdom with transient out-of-order execution. In Proceedings of the 27th USENIX Security Symposium. USENIX, 991–1008.
- [66] 2017. SGX-Step: A practical attack framework for precise enclave execution control. In Proceedings of the 2nd Workshop on System Software for Trusted Execution. 1–6.
- [67] 2017. Telling your secrets without page faults: Stealthy page table-based attacks on enclaved execution. In Proceedings of the 26th USENIX Security Symposium. USENIX Association, 1041–1056.
- [68] 2019. RIDL: Rogue in-flight data load. In Proceedings of the 2019 IEEE Symposium on Security and Privacy (SP). IEEE, 88–105.
- [69] 2017. Leaky cauldron on the dark land: Understanding memory side-channel hazards in SGX. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 2421–2434.
- [70] 2011. CPU support for secure executables. In Proceedings of Trust and Trustworthy Computing (TRUST 2011). Springer, 172–187.
- [71] 2015. Controlled-channel attacks: Deterministic side channels for untrusted operating systems. In Proceedings of the 2015 IEEE Symposium on Security and Privacy. IEEE, 640–656.