The Environment Abstraction Layer (EAL) is responsible for gaining access to low-level resources such as hardware and memory space. It provides a generic interface that hides the environment specifics from the applications and libraries. It is the responsibility of the initialization routine to decide how to allocate these resources (that is, memory space, PCI devices, timers, consoles, and so on).
The services typically expected from the EAL are as follows:
In the Linux user space environment, the DPDK application runs as a user-space application using the pthread library. PCI information about devices and address space is discovered through the /sys kernel interface and through kernel modules such as uio_pci_generic or igb_uio. Refer to the UIO: User-space drivers documentation in the Linux kernel. This memory is mmap'd in the application.
The EAL performs physical memory allocation using mmap() in hugetlbfs (using huge page sizes to increase performance). This memory is exposed to DPDK service layers such as the Mempool Library.
At this point, the DPDK services layer is initialized, and then, through pthread setaffinity calls, each execution unit is assigned to a specific logical core (lcore) to run as a user-level thread.
The time reference is provided by the CPU Time-Stamp Counter (TSC) or by the HPET kernel API through a mmap() call.
Part of the initialization is done by the start function of glibc. A check is also performed at initialization time to ensure that the micro-architecture type chosen in the config file is supported by the CPU. Then, the main() function is called. The core initialization and launch is done in rte_eal_init() (see the API documentation). It consists of calls to the pthread library (more specifically, pthread_self(), pthread_create(), and pthread_setaffinity_np()).
![Figure 2](./programming_guid/2015-05-29 19:36:22屏幕截图.png)
*Figure 2. EAL Initialization in a Linux Application Environment*
Note: Initialization of objects, such as memory zones, rings, memory pools, LPM tables and hash tables, should be done as part of the overall application initialization on the master lcore. The creation and initialization functions for these objects are not multi-thread safe. However, once initialized, the objects themselves can safely be used in multiple threads simultaneously.
Linuxapp EAL allows a multi-process as well as a multi-threaded (pthread) deployment model. See Chapter 2.20 Multi-process Support for more details.
The allocation of large contiguous physical memory is done using the hugetlbfs kernel filesystem. The EAL provides an API to reserve named memory zones in this contiguous memory. The physical address of the reserved memory for a memory zone is also returned to the user by the memory zone reservation API.
Note: Memory reservations done using the APIs provided by the rte_malloc library are also backed by pages from the hugetlbfs filesystem. However, physical address information is not available for the blocks of memory allocated in this way.
The existing memory management implementation is based on the Linux kernel hugepage mechanism. However, Xen Dom0 does not support hugepages, so a new Linux kernel module rte_dom0_mm is added to work around this limitation.
== The EAL uses the IOCTL interface to notify the Linux kernel module rte_dom0_mm to allocate memory of the specified size, and gets all memory segment information from the module; the EAL then uses the MMAP interface to map the allocated memory. == For each memory segment, the physical addresses are contiguous within it, but the actual hardware addresses are contiguous only within 2 MB.
The EAL uses the /sys/bus/pci utilities provided by the kernel to scan the content on the PCI bus. To access PCI memory, a kernel module called uio_pci_generic provides a /dev/uioX device file and resource files in /sys that can be mmap'd to obtain access to PCI address space from the application. The DPDK-specific igb_uio module can also be used for this. Both drivers use the uio kernel feature (userland driver).
Note: lcore refers to a logical execution unit of the processor, sometimes called a hardware thread.
Shared variables are the default behavior. Per-lcore variables are implemented using Thread Local Storage (TLS) to provide per-thread local storage.
A logging API is provided by the EAL. By default, in a Linux application, logs are sent to syslog and also to the console. However, the log function can be overridden by the user to use a different logging mechanism.
There are some debug functions to dump the stack in glibc. The rte_panic() function can voluntarily provoke a SIGABRT, which can trigger the generation of a core file, readable by gdb.
The EAL can query the CPU at runtime (using the rte_cpu_get_feature() function) to determine which CPU features are available.
The EAL creates a host thread to poll the UIO device file descriptors to detect the interrupts. Callbacks can be registered or unregistered by the EAL functions for a specific interrupt event and are called in the host thread asynchronously. The EAL also allows timed callbacks to be used in the same way as for NIC interrupts.
Note: The only interrupts supported by the DPDK Poll-Mode Drivers are those for link status change, i.e. link up and link down notification.
The EAL PCI device blacklist functionality can be used to mark certain NIC ports as blacklisted, so they are ignored by the DPDK. The ports to be blacklisted are identified using the PCIe* description (Domain:Bus:Device.Function).
Locks and atomic operations are per-architecture (i686 and x86_64).
The mapping of physical memory is provided by this feature in the EAL. As physical memory can have gaps, the memory is described in a table of descriptors, and each descriptor (called rte_memseg) describes a contiguous portion of memory.
On top of this, the memzone allocator’s role is to reserve contiguous portions of physical memory. These zones are identified by a unique name when the memory is reserved.
The rte_memzone descriptors are also located in the configuration structure. This structure is accessed using rte_eal_get_configuration(). The lookup (by name) of a memory zone returns a descriptor containing the physical address of the memory zone.
Memory zones can be reserved with specific start address alignment by supplying the align parameter (by default, they are aligned to cache line size). The alignment value should be a power of two and not less than the cache line size (64 bytes). Memory zones can also be reserved from either 2 MB or 1 GB hugepages, provided that both are available on the system.
DPDK usually pins one pthread per core to avoid the overhead of task switching. This allows for significant performance gains, but lacks flexibility and is not always efficient.
Power management helps to improve CPU efficiency by limiting the CPU runtime frequency. However, it is alternately possible to utilize the idle cycles available to take advantage of the full capability of the CPU.
By taking advantage of cgroups, the CPU utilization quota can simply be assigned. This gives another way to improve CPU efficiency; however, there is a prerequisite: DPDK must handle the context switching between multiple pthreads per core.
For further flexibility, it is useful to set pthread affinity not only to a CPU but to a CPU set.
The term "lcore" refers to an EAL thread, which is really a Linux/FreeBSD pthread. EAL pthreads are created and managed by the EAL and execute the tasks issued by remote launch. In each EAL pthread, there is a TLS (Thread Local Storage) variable called _lcore_id for unique identification. As EAL pthreads usually bind 1:1 to a physical CPU, the _lcore_id is typically equal to the CPU ID.
When using multiple pthreads, however, the binding is no longer always 1:1 between an EAL pthread and a specified physical CPU. The EAL pthread may have affinity to a CPU set, and as such the _lcore_id will not be the same as the CPU ID. For this reason, there is an EAL long option '--lcores' defined to assign the CPU affinity of lcores. For a specified lcore ID or ID group, the option allows setting the CPU set for that EAL pthread.
The format pattern: --lcores='<lcore_set>[@cpu_set][,<lcore_set>[@cpu_set],...]'

'lcore_set' and 'cpu_set' can be a single number, a range or a group.

A number is a "digit([0-9]+)"; a range is "<number>-<number>"; a group is "(<number|range>[,<number|range>,...])".
It is possible to use the DPDK execution context with any user pthread (non-EAL pthreads). In a non-EAL pthread, the _lcore_id is always LCORE_ID_ANY, which identifies it as a thread that is not an EAL thread with a valid, unique _lcore_id. Some libraries will use an alternative unique ID (e.g. TID), some will not be impacted at all, and some will work but with limitations (e.g. the timer and mempool libraries).
All these impacts are mentioned in the Known Issues section.
There are two public APIs, rte_thread_set_affinity() and rte_thread_get_affinity(), introduced for threads. When they are used in any pthread context, the Thread Local Storage (TLS) will be set/get.
Those TLS variables include _cpuset and _socket_id.
rte_mempool
The rte_mempool uses a per-lcore cache inside the mempool. For non-EAL pthreads, rte_lcore_id() will not return a valid number, so for now, when rte_mempool is used by a non-EAL pthread, the put/get operations bypass the mempool cache, and there is a performance penalty because of this bypass. Support for a non-EAL mempool cache is currently being enabled.
rte_ring
rte_ring supports multi-producer enqueue and multi-consumer dequeue. However, it is non-preemptive; this has a knock-on effect of making rte_mempool non-preemptible.
Note: The "non-preemptive" constraint means:

* a pthread doing multi-producer enqueues on a given ring must not be preempted by another pthread doing a multi-producer enqueue on the same ring.
* a pthread doing multi-consumer dequeues on a given ring must not be preempted by another pthread doing a multi-consumer dequeue on the same ring.

== Bypassing this constraint may cause the 2nd pthread to spin until the 1st one is scheduled again. == Moreover, if the 1st pthread is preempted by a context that has a higher priority, it may even cause a deadlock.
This does not mean it cannot be used; it simply means there is a need to narrow down the situations in which it is used by multiple pthreads on the same core.
RTE_RING_PAUSE_REP_COUNT is defined for rte_ring to reduce contention. It is mainly for case 2: a yield is issued after a number of pause repetitions.
It adds a sched_yield() syscall if the thread spins for too long while waiting for the other thread to finish its operations on the ring. This gives the preempted thread a chance to proceed and finish the ring enqueue/dequeue operation.
rte_timer
Running rte_timer_manage() on a non-EAL pthread is not allowed. However, resetting/stopping a timer from a non-EAL pthread is allowed.
The following is a simple example of cgroup control usage: there are two pthreads (t0 and t1) doing packet I/O on the same core ($cpu). We expect only 50% of the CPU to be spent on packet I/O.

```shell
mkdir /sys/fs/cgroup/cpu/pkt_io
mkdir /sys/fs/cgroup/cpuset/pkt_io

echo $cpu > /sys/fs/cgroup/cpuset/pkt_io/cpuset.cpus
echo 0    > /sys/fs/cgroup/cpuset/pkt_io/cpuset.mems  # a memory node must be set before tasks can attach

echo $t0 > /sys/fs/cgroup/cpu/pkt_io/tasks
echo $t0 > /sys/fs/cgroup/cpuset/pkt_io/tasks

echo $t1 > /sys/fs/cgroup/cpu/pkt_io/tasks
echo $t1 > /sys/fs/cgroup/cpuset/pkt_io/tasks

cd /sys/fs/cgroup/cpu/pkt_io
echo 100000 > cpu.cfs_period_us  # 100 ms period
echo 50000  > cpu.cfs_quota_us   # 50 ms quota -> 50% of one CPU
```