I. Test command: ./cyclictest -p 80 -t5 -n
1. By default this creates 5 real-time threads with the SCHED_FIFO policy at priority 80, with cycle intervals of 1000, 1500, 2000, 2500, and 3000 microseconds. (Figure: test results without interference.)
As the results show, on the AdvLinux3.0.2 real-time system the minimum latency is 2-3 microseconds, the average is 9-11 microseconds, and the maximum falls between 24 and 29 microseconds.
2. Running the same test, but introducing more interference while it runs, such as serial communication between this device and another, changes the results as follows. (Figure: test results with interference.)
With the serial communication in progress the maximum latency is 34 us. The 1219 us maximum seen on the non-real-time AdvLinux3.0.2 system did not occur.
II. Test command: ./cyclictest --smp -p95 -m
This result shows Cyclictest running on a quad-core system with all memory locked: one measurement thread per core, each at SCHED_FIFO priority 95 with its memory allocations locked. In the results, CPU0 has a maximum latency of 33 us and an average of 9 us; CPU1 a maximum of 33 us and an average of 9 us; CPU2 a maximum of 32 us and an average of 12 us; CPU3 a maximum of 29 us and an average of 13 us.
Run cat /proc/cpuinfo to check how many cores the system has.
III. Test command: ./cyclictest -t1 -p 80 -n -i number -l10000
Figure 1
Results with thread priority 80 at different time intervals. C: 9397 is the loop counter; it increments by one each time the thread's interval elapses.
Min: minimum latency (us); Act: latency of the most recent loop (us); Avg: average latency (us); Max: maximum latency (us).
With an interval of 500 us the minimum latency is 2, the average 11, and the maximum 26. With an interval of 10000 us the minimum is 4, the average 17, and the maximum 33.
Enabling the RT-Preempt Patch
The RT-Preempt Patch's main changes to the Linux kernel include:
Making in-kernel locking primitives (using spinlocks) preemptible through reimplementation with rtmutexes:
Critical sections protected by e.g. spinlock_t and rwlock_t are now preemptible. The creation of non-preemptible sections (in kernel) is still possible with raw_spinlock_t (same APIs as spinlock_t).
Implementing priority inheritance for in-kernel spinlocks and semaphores. For more information on priority inversion and priority inheritance please consult Introduction to Priority Inversion.
Converting interrupt handlers into preemptible kernel threads: the RT-Preempt patch runs soft interrupt handlers in kernel thread context, represented by a task_struct just like a common userspace process. However, it is also possible to register an IRQ in kernel context.
Converting the old Linux timer API into separate infrastructures, one for high-resolution kernel timers and one for timeouts, which gives userspace POSIX timers high resolution.
1. What is "latency"?
------------------------------------------------------------------------------
The term latency, when used in the context of the RT kernel, is the
time interval between the occurrence of an event and the time when that
event is "handled" (typically "handled" means running some thread as a
result of the event). Latencies that are of interest to kernel
programmers (and application programmers) are:
- the time between when an interrupt occurs and the thread
waiting for that interrupt is run
- the time between a timer expiration and the thread waiting for
that timer to run
- The time between the receipt of a network packet and when the
thread waiting for that packet runs
Yes, the timer and network examples above are usually instances of the
more general interrupt case (most timers signal expiration with an
interrupt and most network interface cards signal packet arrival with
an interrupt as well), but the main idea is that an "event" occurs and
there is some elapsed time interval which concludes with the kernel
successfully handling the event.
So, latency in and of itself is not a bad thing; there is always a
delay between the occurrence and completion of an event. What is bad is
when latency becomes excessive, meaning that the delay exceeds some
arbitrary threshold. What is this threshold? That's for each
application to define. A threshold or "deadline" is what defines a
real time application: meeting deadlines means success, missing
deadlines (exceeding the threshold) means failing to be real time.
https://rt.wiki.kernel.org/index.php/Cyclictest