Intel announces Data Streaming Accelerator (DSA)

Published: Nov 21st 2019, 08:54 GMT

« press release »


Intel® DSA is a high-performance data copy and transformation accelerator that will be integrated into future Intel® processors. It is targeted at optimizing streaming data movement and transformation operations common in high-performance storage, networking, persistent memory, and various data processing applications.

Intel® DSA replaces the Intel® QuickData Technology, which is a part of Intel® I/O Acceleration Technology.

The goal is to provide higher overall system performance for data mover and transformation operations, while freeing up CPU cycles for higher-level functions. Intel® DSA enables high-performance data mover capability to/from volatile memory, persistent memory, memory-mapped I/O, and, through a Non-Transparent Bridge (NTB) device, to/from remote volatile and persistent memory on another node in a cluster. Enumeration and configuration are done with a PCI Express*-compatible programming interface to the Operating System (OS) and can be controlled through a device driver.

Besides the basic data mover operations, Intel® DSA supports a set of transformation operations on memory. For example:

  • Generate and test CRC checksum, or Data Integrity Field (DIF) to support storage and networking applications.
  • Memory compare and delta generate/merge to support VM migration, VM fast check-pointing, and software-managed memory deduplication usages.

Each SoC may support any number of Intel® DSA device instances. A multi-socket server platform may support multiple such SoCs. From a software perspective, each instance is exposed as a PCI-Express Root Complex Integrated Endpoint. Each instance is under the scope of a DMA Remapping hardware unit [also called an input–output memory management unit (IOMMU)]. Depending on the SoC design, different instances can be behind the same or different DMA Remapping hardware units.

Intel® DSA supports a variety of PCI-SIG* defined services to provide highly scalable configurations, including:

  • Address Translation Services (ATS)
  • Process Address Space ID (PASID)
  • Page Request Services (PRS)
  • Message Signaled Interrupts Extended (MSI-X)
  • Advanced Error Reporting (AER)

The above capabilities enable Intel® DSA to support Shared Virtual Memory (SVM) operation, allowing the device to operate directly in the application’s virtual address space without requiring pinned memory. Intel® DSA also supports Intel® Scalable I/O Virtualization (Intel® Scalable IOV) to support hyperscale virtualization. In addition to traditional MSI-X, it also supports device specific Interrupt Message Store (IMS).

Figure 1:  Abstracted Internal Block Diagram of Intel® DSA

Figure 1 illustrates the high-level blocks within the device at a conceptual level. The I/O fabric interface is used for receiving downstream work requests from clients and for upstream read, write, and address translation operations.

Each device contains the following basic components:

  • Work Queues (WQ) – On-device storage used to queue descriptors to the device. Requests are added to a WQ by using new instructions to write to the memory-mapped “portal” associated with each WQ (a simplified descriptor sketch follows this list).
  • Groups – Abstract containers that can include one or more engines and work queues.
  • Engines – Pull work submitted to the WQs and process it.
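
To make the descriptor concept concrete, below is a simplified C sketch of what a 64-byte work descriptor for a memory-move operation might contain. The field names, ordering, and sizes are assumptions made purely for illustration; the authoritative layout is defined in the Intel® DSA specification.

```c
#include <stdint.h>

/* Illustrative only: a 64-byte DSA-style work descriptor for a memory move.
 * Field names and ordering are assumptions for this sketch; the real layout
 * is defined in the Intel(R) DSA specification. */
struct dsa_desc_sketch {
    uint32_t pasid_and_flags;  /* PASID, privilege and control bits          */
    uint8_t  opcode;           /* operation, e.g. memory move, CRC, compare  */
    uint8_t  reserved[3];
    uint64_t completion_addr;  /* where the device writes the completion record */
    uint64_t src_addr;         /* virtual address when SVM is in use         */
    uint64_t dst_addr;
    uint32_t xfer_size;        /* number of bytes to move / transform        */
    uint8_t  pad[28];          /* remaining bytes of the 64-byte descriptor  */
} __attribute__((packed, aligned(64)));

/* A descriptor occupies exactly one 64-byte cache line. */
_Static_assert(sizeof(struct dsa_desc_sketch) == 64, "descriptor must be 64 bytes");
```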

Two types of WQs are supported:

  • Dedicated WQ (DWQ) – A single client owns this exclusively and can submit work to it.
  • Shared WQ (SWQ) – Multiple clients can submit work to the SWQ.

A client using DWQ submits work descriptors using the MOVDIR64B instruction. This is a posted write, so the client must track the number of descriptors submitted to ensure that it does not exceed the configured work queue length as any additional descriptors would be dropped.
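
Below is a minimal sketch of dedicated-WQ submission, assuming the WQ portal has already been mapped into the process (for example via the IDXD character device) and that `desc` holds a fully built 64-byte descriptor per the DSA specification. It uses the _movdir64b compiler intrinsic, available in recent compilers when built with -mmovdir64b.

```c
/* Build with: gcc -mmovdir64b -c dwq_submit.c */
#include <immintrin.h>
#include <stdint.h>

/* Descriptor buffer built elsewhere according to the DSA spec (illustrative). */
static uint8_t desc[64] __attribute__((aligned(64)));

static void submit_to_dwq(volatile void *wq_portal)
{
    /* MOVDIR64B performs a 64-byte atomic, posted write of the descriptor
     * to the memory-mapped portal. No status comes back, so the caller must
     * track WQ occupancy itself to avoid overflowing the dedicated WQ. */
    _movdir64b((void *)wq_portal, desc);
}
```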

Clients using shared work queues submit work descriptors using either ENQCMDS (from supervisor mode) or ENQCMD (from user mode). These instructions indicate via the EFLAGS.ZF bit whether the request was accepted.

Refer to the Intel® Software Developer’s Manual (SDM) or the Intel® Instruction Set Extensions (ISE) for more details on these instructions.
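
A similar sketch for shared-WQ submission from user mode with the _enqcmd intrinsic (built with -menqcmd). Note that user-mode ENQCMD additionally relies on PASID setup by the kernel, which is part of the Phase 2 enabling described later in this post; this sketch simply shows the retry pattern implied by the EFLAGS.ZF result.

```c
/* Build with: gcc -menqcmd -c swq_submit.c */
#include <immintrin.h>
#include <stdint.h>

/* Descriptor buffer built elsewhere according to the DSA spec (illustrative). */
static uint8_t desc[64] __attribute__((aligned(64)));

static int submit_to_swq(volatile void *wq_portal)
{
    int retries = 1000;  /* arbitrary bound for this sketch */

    while (retries--) {
        /* ENQCMD is a non-posted 64-byte write; the intrinsic's return value
         * mirrors EFLAGS.ZF: zero means the descriptor was accepted, non-zero
         * means the shared WQ was full and the submission should be retried. */
        if (_enqcmd((void *)wq_portal, desc) == 0)
            return 0;   /* accepted */
    }
    return -1;          /* gave up after too many retries */
}
```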

SOFTWARE ARCHITECTURE

Figure 2 below shows the software architecture. The kernel driver, the Intel® Data Accelerator Driver (IDXD), is a typical kernel driver that identifies device instances in the system. It also implements the component referred to in the Intel® Scalable IOV specification as the Virtual Device Composition Module (VDCM), which composes virtual device instances so that a virtual Intel® DSA instance can be exposed to a guest OS.

Figure 2:  Software Architecture

The kernel driver provides the following services:

  • Character device interface for each configured WQ; native applications mmap(2) this device to get access to the WQ portal (see the sketch after this list).
  • API to provide access to the WQ portals for in-kernel use.
  • VDCM to compose virtual devices to provide Intel® DSA instances to a guest OS.
  • User interface via the sysfs filesystem that allows tools to discover the topology and configure the work queues.
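
As a sketch of the character-device interface above, the snippet below opens a user-type WQ device node and maps its submission portal. The device path (/dev/dsa/wq0.0) and the one-page mapping size are assumptions for illustration; the actual node name depends on how the sysadmin configured the WQ.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Assumed device node for an example user-type WQ. */
    int fd = open("/dev/dsa/wq0.0", O_RDWR);
    if (fd < 0) {
        perror("open wq");
        return 1;
    }

    /* The portal is write-only from the application's point of view:
     * descriptors are pushed into it with MOVDIR64B or ENQCMD. */
    void *portal = mmap(NULL, 0x1000, PROT_WRITE, MAP_SHARED, fd, 0);
    if (portal == MAP_FAILED) {
        perror("mmap portal");
        close(fd);
        return 1;
    }

    /* ... build a descriptor and submit it through the portal ... */

    munmap(portal, 0x1000);
    close(fd);
    return 0;
}
```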

The system administrator can configure devices in a variety of ways. Refer to the Intel® DSA specification for details on programming and configuring work queues into the different modes.

ACCELERATOR CONFIGURATOR (ACCEL-CONFIG)

accel-config is a utility that allows system administrators to configure groups, work queues, and engines. The utility parses the topology and capabilities exposed via sysfs and provides a command line interface to configure resources. Some of the capabilities of accel-config are listed below:

  • Display the device hierarchy.
  • Configure attributes and provide access for the kernel or for applications.
  • Provide an API library (libaccel) that applications can link against to perform operations through a standard ‘C’ library.
  • Control devices, including starting and stopping interfaces.
  • Create VFIO mediated devices to expose virtual Intel® DSA instances to Guest OSes.

For more information, refer to accel-config.

USING INTEL® DSA IN NATIVE KERNEL

The sysfs attributes allow the sysadmin to specify a type and name for each WQ, so that a WQ can be reserved for a specific purpose. Three types are supported in the driver:

  • Kernel – Reserved for native kernel use.
  • User – Reserved for native user space use, for tools such as DPDK, etc.
  • Mdev – For exposing mediated devices that provide Intel® DSA functionality to a guest OS.

For the user and mdev types, the sysadmin can specify a string to identify the work queue being provisioned. For example, the strings mysql or DPDK can be used to uniquely identify the resource reserved for a specific use.
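
As a rough illustration, an application could locate "its" reserved WQ by scanning sysfs for a matching name. The /sys/bus/dsa/devices layout and the name/type attribute files used below are assumptions for this sketch; accel-config is the supported way to query and manage the topology.

```c
#include <dirent.h>
#include <stdio.h>
#include <string.h>

/* Read a single sysfs attribute of a WQ into buf (assumed path layout). */
static int read_attr(const char *wq, const char *attr, char *buf, int len)
{
    char path[256];
    snprintf(path, sizeof(path), "/sys/bus/dsa/devices/%s/%s", wq, attr);
    FILE *f = fopen(path, "r");
    if (!f)
        return -1;
    if (!fgets(buf, len, f)) {
        fclose(f);
        return -1;
    }
    fclose(f);
    buf[strcspn(buf, "\n")] = '\0';  /* strip trailing newline */
    return 0;
}

int main(void)
{
    DIR *d = opendir("/sys/bus/dsa/devices");
    struct dirent *de;
    char name[64], type[64];

    if (!d)
        return 1;
    while ((de = readdir(d)) != NULL) {
        if (strncmp(de->d_name, "wq", 2) != 0)
            continue;
        if (read_attr(de->d_name, "type", type, sizeof(type)) ||
            read_attr(de->d_name, "name", name, sizeof(name)))
            continue;
        /* Look for a user-type WQ provisioned with the name "mysql". */
        if (strcmp(type, "user") == 0 && strcmp(name, "mysql") == 0)
            printf("found reserved WQ: %s\n", de->d_name);
    }
    closedir(d);
    return 0;
}
```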

Figure 3:  Using Intel® DSA in the Kernel

The IDXD driver utilizes the Linux* kernel DMA engine subsystem to serve kernel work requests.

Examples of such in-kernel users include the ClearPage engine, the Non-Transparent Bridge (NTB) driver, and persistent memory handling.
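
For a sense of how in-kernel users consume the accelerator, the sketch below shows the generic Linux dmaengine client pattern for offloading a memory copy. It is not IDXD-specific code, merely the standard API that a DSA-backed memcpy-capable channel would plug into once the driver registers its channels.

```c
#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>

/* Sketch: offload a memory copy to any memcpy-capable dmaengine channel. */
static int offload_copy(void *dst, void *src, size_t len)
{
	dma_cap_mask_t mask;
	struct dma_chan *chan;
	struct dma_async_tx_descriptor *tx;
	dma_addr_t dma_src, dma_dst;
	dma_cookie_t cookie;
	int ret = 0;

	/* Ask the dmaengine core for any channel that can do MEMCPY. */
	dma_cap_zero(mask);
	dma_cap_set(DMA_MEMCPY, mask);
	chan = dma_request_channel(mask, NULL, NULL);
	if (!chan)
		return -ENODEV;

	dma_src = dma_map_single(chan->device->dev, src, len, DMA_TO_DEVICE);
	dma_dst = dma_map_single(chan->device->dev, dst, len, DMA_FROM_DEVICE);

	tx = dmaengine_prep_dma_memcpy(chan, dma_dst, dma_src, len,
				       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
	if (!tx) {
		ret = -EIO;
		goto unmap;
	}

	cookie = dmaengine_submit(tx);
	dma_async_issue_pending(chan);

	/* Wait synchronously for completion; real users would use a callback. */
	if (dma_sync_wait(chan, cookie) != DMA_COMPLETE)
		ret = -EIO;

unmap:
	dma_unmap_single(chan->device->dev, dma_dst, len, DMA_FROM_DEVICE);
	dma_unmap_single(chan->device->dev, dma_src, len, DMA_TO_DEVICE);
	dma_release_channel(chan);
	return ret;
}
```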

UPSTREAMING INTEL® DSA SUPPORT IN LINUX*

Intel® DSA relies on several new CPU and platform features that interact to provide the required functionality. Because of the number of components and the complexity of their interactions, the code and the blog sections are broken into smaller pieces to ease the introduction of the different technologies and their Linux support. Here is a breakdown of the currently planned phases:

  • Phase 1: Bare metal driver, user space tools. Targeting in-kernel and native user space usages.
  • Phase 2: Native support for ENQCMD and Interrupt Message Store (IMS). This will show native use of the shared work queue configuration.
  • Phase 3: Constructing Intel® DSA mediated devices, guest support, Virtual IOMMU (vIOMMU) support in QEMU.
  • Phase 4: Handling ENQCMD in a guest OS and associated enabling in KVM, QEMU.

People who are interested in the whole picture can take a look at this tree to keep up with the development progress as each stage is posted and discussed in the community. We will make every effort to keep this blog updated with references as they develop.

REFERENCES

Source: 01.org


« end of the press release »



