| author | Minchan Kim <minchan@kernel.org> | 2012-10-08 16:32:16 -0700 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2012-10-09 16:22:46 +0900 |
| commit | 435b405c06119d93333738172b8060b0ed12af41 | |
| tree | a87f9a493f5c677ab23eeab1eab2e45caeb79bc3 | /include/linux/page-isolation.h |
| parent | 41d575ad4a511b71a4a41c8313004212f5c229b1 | |
memory-hotplug: fix pages missed by race rather than failing
If a race between allocation and isolation during memory-hotplug offline
happens, some pages could sit on the MIGRATE_MOVABLE free_list even though
the pageblock's migratetype is MIGRATE_ISOLATE.

The race can be detected by get_freepage_migratetype in
__test_page_isolated_in_pageblock.  Currently, when it is detected, EBUSY
gets bubbled all the way up and the hotplug operation fails.

A better idea is, instead of returning and failing memory-hotremove, to
move the free page to the correct list at the time the race is detected.
This improves the memory-hotremove success ratio, even though the race is
really rare.
Suggested by Mel Gorman.
[akpm@linux-foundation.org: small cleanup]
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Reviewed-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Cc: Xishi Qiu <qiuxishi@huawei.com>
Cc: Wen Congyang <wency@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'include/linux/page-isolation.h')
| -rw-r--r-- | include/linux/page-isolation.h | 4 | 
1 file changed, 4 insertions, 0 deletions
diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index 105077aa768..fca8c0a5c18 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -6,6 +6,10 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count);
 void set_pageblock_migratetype(struct page *page, int migratetype);
 int move_freepages_block(struct zone *zone, struct page *page,
 				int migratetype);
+int move_freepages(struct zone *zone,
+			  struct page *start_page, struct page *end_page,
+			  int migratetype);
+
 /*
  * Changes migrate type in [start_pfn, end_pfn) to be MIGRATE_ISOLATE.
  * If specified range includes migrate types other than MOVABLE or CMA,
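To make the purpose of the newly exported move_freepages() concrete, here is a minimal, illustrative sketch of the idea described in the commit message, not the exact hunk applied to mm/page_isolation.c. It assumes the mm-internal helpers available in this series (PageBuddy(), page_order(), get_freepage_migratetype()); the helper name fixup_misplaced_freepage() is hypothetical.

```c
/*
 * Sketch only: in the real fix this logic lives inside the pageblock
 * scan of __test_page_isolated_in_pageblock() in mm/page_isolation.c.
 */
#include <linux/mm.h>
#include <linux/page-isolation.h>
#include "internal.h"		/* page_order(), mm-internal */

/*
 * Hypothetical helper: repair a free page hit by the allocation vs.
 * isolation race.  The caller is expected to hold zone->lock, as the
 * isolation test does while it walks the pageblock.
 */
static void fixup_misplaced_freepage(struct zone *zone, struct page *page)
{
	if (PageBuddy(page) &&
	    get_freepage_migratetype(page) != MIGRATE_ISOLATE) {
		/*
		 * Relink the whole buddy chunk, not just the head page,
		 * onto the MIGRATE_ISOLATE free list instead of bubbling
		 * EBUSY up and failing the hotremove.
		 */
		struct page *end_page = page + (1 << page_order(page)) - 1;

		move_freepages(zone, page, end_page, MIGRATE_ISOLATE);
	}
}
```

Declaring move_freepages() in page-isolation.h, as the hunk above does, is presumably what lets mm/page_isolation.c reach a helper that was previously used only inside mm/page_alloc.c.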