| field | value | date |
|---|---|---|
| author | Andi Kleen <ak@linux.intel.com> | 2012-06-09 02:40:03 -0700 |
| committer | Pekka Enberg <penberg@kernel.org> | 2012-06-20 10:01:04 +0300 |
| commit | e7b691b085fda913830e5280ae6f724b2a63c824 | |
| tree | 9fbd380538f1c3fd5e36c5beeac35041351baf40 /mm/slab.c | |
| parent | 8c138bc00925521c4e764269db3a903bd2a51592 | |
slab/mempolicy: always use local policy from interrupt context
slab_node() could access current->mempolicy from interrupt context.
However, there is a race condition during exit where the mempolicy
is first freed and then the pointer is zeroed.

Using this from interrupts seems bogus anyway: the interrupt
will interrupt a random process and therefore get a random
mempolicy. Many times this will be idle's, which no one can change.

Just disable this here and always use the local policy for slab
from interrupts. I also cleaned up the callers of slab_node(),
which always passed the same argument.

I believe the original mempolicy code did that in fact,
so this is likely a regression.
v2: send version with correct logic
v3: simplify. fix typo.
Reported-by: Arun Sharma <asharma@fb.com>
Cc: penberg@kernel.org
Cc: cl@linux.com
Signed-off-by: Andi Kleen <ak@linux.intel.com>
[tdmackey@twitter.com: Rework control flow based on feedback from
cl@linux.com, fix logic, and cleanup current task_struct reference]
Acked-by: David Rientjes <rientjes@google.com>
Acked-by: Christoph Lameter <cl@linux.com>
Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: David Mackey <tdmackey@twitter.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
Diffstat (limited to 'mm/slab.c')
| -rw-r--r-- | mm/slab.c | 4 | 
1 file changed, 2 insertions(+), 2 deletions(-)
```diff
diff --git a/mm/slab.c b/mm/slab.c
index fc4a7744670..dd607a8e670 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3310,7 +3310,7 @@ static void *alternate_node_alloc(struct kmem_cache *cachep, gfp_t flags)
 	if (cpuset_do_slab_mem_spread() && (cachep->flags & SLAB_MEM_SPREAD))
 		nid_alloc = cpuset_slab_spread_node();
 	else if (current->mempolicy)
-		nid_alloc = slab_node(current->mempolicy);
+		nid_alloc = slab_node();
 	if (nid_alloc != nid_here)
 		return ____cache_alloc_node(cachep, flags, nid_alloc);
 	return NULL;
@@ -3342,7 +3342,7 @@ static void *fallback_alloc(struct kmem_cache *cache, gfp_t flags)
 retry_cpuset:
 	cpuset_mems_cookie = get_mems_allowed();
-	zonelist = node_zonelist(slab_node(current->mempolicy), flags);
+	zonelist = node_zonelist(slab_node(), flags);
 retry:
 	/*
```