Page Frame Reclaiming (the Linux Page Frame Reclaiming Mechanism)

桂杰
2023-12-01

The Page Frame Reclaiming Algorithm

Why is Page Frame Reclaiming needed?

In Linux, cache systems keep grabbing more and more page frames but never release them on their own, because they cannot know if and when a process will reuse the cached data, and are therefore unable to identify which portions of the cache should be released. Moreover, the demand paging mechanism lets user processes acquire more and more page frames, but provides no way to release them while the processes keep executing. Therefore, without a mechanism that forces page frames to be released, the system's memory would sooner or later be exhausted. This is why page frame reclaiming is needed.

Selecting a Target Page

Types of Pages:

- Unreclaimable
- Swappable
- Syncable
- Discardable

Mapped page: it maps a portion of a file

- Pages belonging to file memory mappings in User Mode address spaces
- Pages in the page cache

Anonymous page: it belongs to an anonymous region of a process

- User Mode heap and stack

The tmpfs special filesystem is used for the IPC shared memory mechanism.
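To make the distinction concrete, the difference between mapped and anonymous pages can be observed from User Mode with mmap(): a file-backed mapping produces mapped pages, while MAP_ANONYMOUS produces anonymous pages. A minimal userspace sketch (the file name data.bin is just a placeholder):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Mapped pages: back a region with a file ("data.bin" is a placeholder). */
    int fd = open("data.bin", O_RDONLY);
    void *file_map = MAP_FAILED;
    if (fd >= 0)
        file_map = mmap(NULL, 4096, PROT_READ, MAP_PRIVATE, fd, 0);

    /* Anonymous pages: no file behind them, like User Mode heap and stack. */
    void *anon_map = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (anon_map != MAP_FAILED)
        memset(anon_map, 0, 4096);   /* touching it allocates a page frame */

    printf("file-backed mapping: %p, anonymous mapping: %p\n", file_map, anon_map);

    if (file_map != MAP_FAILED) munmap(file_map, 4096);
    if (anon_map != MAP_FAILED) munmap(anon_map, 4096);
    if (fd >= 0) close(fd);
    return 0;
}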

Design of PFRA

Finding a good page frame reclaiming algorithm is a rather empirical job, with little support in theory.

A few general rules:

1. Free the "harmless" pages first

Pages included in disk and memory caches and not referenced by any process should be reclaimed before pages belonging to the User Mode address spaces of processes.

2. Make all pages of User Mode processes reclaimable

The PFRA must be able to steal any page of a User Mode process, including anonymous pages. In this way, processes that have been sleeping for a long time will progressively lose all their page frames.

3. Reclaim a shared page frame by unmapping at once all the page table entries that refer to it

4. Reclaim "unused" pages only

The PFRA uses a simplified Least Recently Used (LRU) algorithm to classify pages as in-use or unused.

Reverse Mapping

Reverse mapping: quickly locate all the page table entries that point to the same page frame

Object-based reverse mapping

Reverse mapping for anonymous pages

Reverse mapping for mapped pages
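As an illustration of object-based reverse mapping only (the structures below are invented for this sketch and are much simpler than the kernel's real address_space/anon_vma machinery): each page records the object it belongs to and its index within it, and the reclaim path walks the object's list of memory regions to compute every virtual address, and hence every page table entry, that maps the page.

#include <stdio.h>

/* Hypothetical, simplified structures -- not the real kernel ones. */
struct vma {
    unsigned long vm_start;     /* first virtual address of the mapping        */
    unsigned long vm_pgoff;     /* index of the first mapped page in the object */
    unsigned long vm_npages;    /* length of the mapping in pages              */
    struct vma *next;           /* next region mapping the same object         */
};

struct object {                 /* stands in for address_space / anon_vma      */
    struct vma *vmas;
};

struct page {
    struct object *mapping;     /* object this page belongs to                 */
    unsigned long index;        /* page's offset within the object             */
};

/* Reverse mapping: from a page, find every virtual address that maps it. */
static void rmap_walk(const struct page *page)
{
    for (struct vma *v = page->mapping->vmas; v; v = v->next) {
        if (page->index >= v->vm_pgoff &&
            page->index <  v->vm_pgoff + v->vm_npages) {
            unsigned long addr =
                v->vm_start + ((page->index - v->vm_pgoff) << 12);
            printf("page %lu is mapped at virtual address 0x%lx\n",
                   page->index, addr);
            /* the kernel would now look up and clear the PTE for 'addr' */
        }
    }
}

int main(void)
{
    struct vma v2 = { 0x80000000UL, 0, 16, NULL };
    struct vma v1 = { 0x40000000UL, 8, 8, &v2 };
    struct object file = { &v1 };
    struct page p = { &file, 10 };

    rmap_walk(&p);   /* prints one address per region that maps page 10 */
    return 0;
}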

The priority search tree (PST)

To improve performance, priority queues typically use a heap as their backbone, giving O(log n) performance for inserts and removals, and O(n) to build the heap initially from a set of n elements. Alternatively, when a self-balancing binary search tree is used, insertion and removal also take O(log n) time, although building the tree from an existing sequence of elements takes O(n log n) time.

There are several specialized heap data structures that either supply additional operations or outperform these approaches. The binary heap uses O(log n) time for both operations, but also allows the element of highest priority to be queried in constant time without removing it. Binomial heaps add several more operations, but still require O(log n) time for some requests. Fibonacci heaps can insert elements, query the highest-priority element, and increase an element's priority in amortized constant time (though deletions are still O(log n)). Brodal queues can do this in worst-case constant time.

While relying on a heap is a common way to implement priority queues, faster implementations exist for integer data. This can even apply to data types that have a finite range, such as floats.

The semantics of priority queues naturally suggest a sorting method: insert all the elements to be sorted into a priority queue and sequentially remove them; they will come out in sorted order.
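As a concrete illustration of the heap-backed priority queue described above (a generic sketch, unrelated to kernel code), a binary min-heap supports O(log n) insertion and removal and O(1) access to the minimum element; popping every element also demonstrates the sorting method just mentioned.

#include <stdio.h>

#define CAP 64

/* Binary min-heap over longs: parent of i is (i-1)/2, children are 2i+1 and 2i+2. */
static long heap[CAP];
static int n;

static void swap(int a, int b) { long t = heap[a]; heap[a] = heap[b]; heap[b] = t; }

static void push(long key)               /* O(log n): bubble the new key up */
{
    int i = n++;
    heap[i] = key;
    while (i > 0 && heap[(i - 1) / 2] > heap[i]) {
        swap(i, (i - 1) / 2);
        i = (i - 1) / 2;
    }
}

static long pop_min(void)                /* O(log n): move last key to root, sift down */
{
    long min = heap[0];
    heap[0] = heap[--n];
    int i = 0;
    for (;;) {
        int l = 2 * i + 1, r = 2 * i + 2, s = i;
        if (l < n && heap[l] < heap[s]) s = l;
        if (r < n && heap[r] < heap[s]) s = r;
        if (s == i) break;
        swap(i, s);
        i = s;
    }
    return min;
}

int main(void)
{
    long keys[] = { 42, 7, 19, 3, 25 };
    for (int i = 0; i < 5; i++)
        push(keys[i]);
    while (n > 0)                         /* keys come out in ascending order */
        printf("%ld ", pop_min());
    printf("\n");
    return 0;
}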

The Least Recently Used Lists

All pages belonging to the User Mode address spaces of processes or to the page cache are grouped into two lists called the inactive list and the active list; together they are also denoted as the LRU lists.
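A rough sketch of the two-list idea (the structures and names below are invented simplifications, not the kernel's implementation): pages referenced while on the inactive list are promoted to the active list, unreferenced active pages are demoted, and reclaim candidates are taken from the inactive list.

#include <stdbool.h>
#include <stdio.h>

#define NPAGES 6

/* Toy page descriptor: 'referenced' stands in for the PG_referenced flag. */
struct page { int id; bool referenced; bool active; };

static struct page pages[NPAGES];

/* One pass of a simplified two-list LRU: promote referenced inactive pages,
 * demote unreferenced active pages, and report inactive pages as reclaimable. */
static void refill_and_scan(void)
{
    for (int i = 0; i < NPAGES; i++) {
        struct page *p = &pages[i];
        if (p->referenced && !p->active)
            p->active = true;                 /* promote to the active list  */
        else if (!p->referenced && p->active)
            p->active = false;                /* demote to the inactive list */
        else if (!p->referenced && !p->active)
            printf("page %d is a reclaim candidate\n", p->id);
        p->referenced = false;                /* clear the aging bit         */
    }
}

int main(void)
{
    for (int i = 0; i < NPAGES; i++)
        pages[i] = (struct page){ .id = i, .referenced = (i % 2 == 0), .active = (i < 3) };
    refill_and_scan();
    return 0;
}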

Implementing the PFRA

- Low on memory reclaiming
- Reclaiming pages of shrinkable disk caches
- Periodic reclaiming
- The out-of-memory killer: despite the PFRA's effort to keep a reserve of free memory for the system, the pressure on the virtual memory subsystem may become so high that all available memory is exhausted: the swap areas are full and the disk caches have already been shrunk. As a consequence, no process can proceed with its execution, and thus no process will ever free up its page frames. To cope with this dramatic situation, the PFRA makes use of a so-called Out-of-Memory (OOM) Killer, which selects a process in the system and abruptly kills it to free its page frames.
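A toy sketch of the victim-selection idea (the scoring below is a made-up simplification, not the kernel's actual badness heuristics): prefer the process whose death frees the most memory, while a negative adjustment shields critical tasks.

#include <stdio.h>

/* Toy process table: rss_pages stands in for resident memory, oom_adj for a
 * user-supplied bias; both fields are invented for this sketch. */
struct task { const char *comm; unsigned long rss_pages; int oom_adj; };

/* Very rough badness score: more resident memory means a better victim,
 * a negative oom_adj shields a task. The real kernel heuristic is richer. */
static long badness(const struct task *t)
{
    return (long)t->rss_pages + (long)t->oom_adj * 1024;
}

static const struct task *select_bad_process(const struct task *tasks, int n)
{
    const struct task *victim = NULL;
    long worst = -1;
    for (int i = 0; i < n; i++) {
        long score = badness(&tasks[i]);
        if (score > worst) {
            worst = score;
            victim = &tasks[i];
        }
    }
    return victim;
}

int main(void)
{
    struct task tasks[] = {
        { "init",     200,   -17 },
        { "database", 90000,   0 },
        { "editor",   3000,    0 },
    };
    const struct task *v = select_bad_process(tasks, 3);
    printf("OOM killer would target: %s\n", v->comm);   /* "database" */
    return 0;
}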

Swap Thrashing and the Swap Token

Swap thrashing: when the system is short of free memory, the PFRA tries hard to free memory by writing pages to disk and stealing the underlying page frames from processes; at the same time, the processes want to proceed with their execution, so they try hard to access their pages. As a consequence, the kernel keeps assigning to processes the page frames just freed by the PFRA. The net result is that pages are continuously written to disk and read back from it; most of the time is spent accessing the disk, so no process makes substantial progress toward its termination.

Swap token: to reduce the likelihood of swap thrashing, the swap token was introduced. It is assigned to a single process in the system and exempts that process from page frame reclaiming, so that the process can make progress and terminate.

When the PFRA reaches its hardest level of reclaiming, however, the swap token is not considered.
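A minimal sketch of the swap-token policy (all names below are invented): a single address space holds the token and is skipped by reclaim, except when reclaiming runs at its most aggressive priority.

#include <stdbool.h>
#include <stdio.h>

struct mm { const char *owner; };

static struct mm *swap_token_mm;   /* the one address space holding the token */

/* Hand the token to the faulting process; a real implementation would also
 * rate-limit how often the token may change hands. */
static void grab_swap_token(struct mm *mm)
{
    swap_token_mm = mm;
}

/* Should reclaim skip this address space?  Only if it holds the token and we
 * are not yet at the most aggressive reclaim priority (priority 0 here). */
static bool skip_reclaim(const struct mm *mm, int priority)
{
    return mm == swap_token_mm && priority > 0;
}

int main(void)
{
    struct mm a = { "interactive-task" }, b = { "batch-task" };
    grab_swap_token(&a);

    printf("priority 6: skip %s? %d, skip %s? %d\n",
           a.owner, skip_reclaim(&a, 6), b.owner, skip_reclaim(&b, 6));
    printf("priority 0: skip %s? %d (token ignored at the hardest level)\n",
           a.owner, skip_reclaim(&a, 0));
    return 0;
}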

Swapping

Swapping is the crowning feature of page frame reclaiming. The kinds of pages that can be swapped out are:

- Anonymous memory regions of a process
- Private memory mappings
- IPC shared memory regions

Each swap area consists of a sequence of page slots: 4096-byte blocks, each used to contain the contents of a swapped-out page.
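For example, assuming the 4096-byte page slots just described, the byte offset of a slot within its swap area is simply the slot index multiplied by the slot size:

#include <stdio.h>

#define PAGE_SIZE 4096UL   /* page slot size assumed in the text above */

/* Byte offset of a given page slot inside its swap area. */
static unsigned long slot_offset(unsigned long slot_index)
{
    return slot_index * PAGE_SIZE;
}

int main(void)
{
    /* Slot 3 starts 12288 bytes (3 * 4096) into the swap area. */
    printf("slot 3 -> byte offset %lu\n", slot_offset(3));
    return 0;
}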

The Swap Cache

Race condition: concurrent swap-ins and swap-outs

Solution: swap cache + PG_locked flag

The swap cache is implemented by the page cache data structure and procedures.
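A toy illustration of that idea (not the kernel's radix-tree code): the swap cache maps each swap entry to at most one page, and the page's lock bit serializes concurrent swap-in and swap-out on the same entry.

#include <stdbool.h>
#include <stdio.h>

#define NSLOTS 16

/* Toy page with a lock bit standing in for PG_locked. */
struct page { unsigned long swap_entry; bool locked; bool in_use; };

/* Toy swap cache: at most one page per swap entry. */
static struct page swap_cache[NSLOTS];

/* Look up the page for a swap entry; returns NULL if not cached. */
static struct page *swap_cache_lookup(unsigned long entry)
{
    struct page *p = &swap_cache[entry % NSLOTS];
    return (p->in_use && p->swap_entry == entry) ? p : NULL;
}

/* Add a page for a swap entry and lock it before any I/O starts. */
static struct page *swap_cache_add_locked(unsigned long entry)
{
    struct page *p = &swap_cache[entry % NSLOTS];
    *p = (struct page){ .swap_entry = entry, .locked = true, .in_use = true };
    return p;
}

int main(void)
{
    swap_cache_add_locked(5);                    /* swap-out of entry 5 in flight */

    struct page *p = swap_cache_lookup(5);       /* concurrent swap-in of entry 5 */
    if (p && p->locked)
        printf("entry 5 found locked: wait for the in-flight I/O instead of racing\n");
    return 0;
}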

Swapping Out Pages

- Insert the page frame into the swap cache
- Update the page table entry
- Write the page into the swap area
- Remove the page frame from the swap cache
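The four steps above can be sketched as follows (a simplified userspace model with invented helpers, not the actual kernel swap-out path):

#include <stdio.h>

/* Hypothetical page descriptor for this sketch only. */
struct page { int id; unsigned long swap_entry; int in_swap_cache; int pte_present; };

static void add_to_swap_cache(struct page *p, unsigned long entry)
{ p->in_swap_cache = 1; p->swap_entry = entry; }

static void unmap_page(struct page *p)          /* point the PTE at the swap entry */
{ p->pte_present = 0; }

static void write_to_swap_area(struct page *p)  /* block I/O in the real kernel */
{ printf("writing page %d to swap slot %lu\n", p->id, p->swap_entry); }

static void remove_from_swap_cache(struct page *p)
{ p->in_swap_cache = 0; }

/* The four swap-out steps from the list above, in order. */
static void swap_out(struct page *p, unsigned long free_slot)
{
    add_to_swap_cache(p, free_slot);   /* 1. insert into the swap cache        */
    unmap_page(p);                     /* 2. update the page table entry       */
    write_to_swap_area(p);             /* 3. write the page into the swap area */
    remove_from_swap_cache(p);         /* 4. remove from the swap cache        */
}

int main(void)
{
    struct page p = { .id = 42, .pte_present = 1 };
    swap_out(&p, 7);
    return 0;
}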

Swapping In Pages

A swap-in takes place when a process references a page that has been swapped out to disk.
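In rough outline (a simplified sketch rather than the kernel's actual fault-handling code), the swap-in path first checks the swap cache and only falls back to allocating a new page frame and reading the page slot from disk when the page is not cached:

#include <stdio.h>

/* Toy state for this sketch: whether a given swap entry is still in the swap cache. */
static int entry_in_swap_cache(unsigned long entry) { return entry == 3; }

/* Handle a fault on a swapped-out page identified by its swap entry. */
static void swap_in(unsigned long entry)
{
    if (entry_in_swap_cache(entry)) {
        /* Fast path: the page never left memory, just remap it. */
        printf("entry %lu found in swap cache, reusing the page frame\n", entry);
    } else {
        /* Slow path: allocate a page frame and read the slot back from disk. */
        printf("entry %lu not cached: allocate a frame, read slot %lu, set the PTE\n",
               entry, entry);
    }
}

int main(void)
{
    swap_in(3);   /* still in the swap cache */
    swap_in(9);   /* must be read back from the swap area */
    return 0;
}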
