| author | Peter Zijlstra <a.p.zijlstra@chello.nl> | 2010-03-11 13:40:30 +0100 |
|---|---|---|
| committer | Ingo Molnar <mingo@elte.hu> | 2010-03-11 15:21:28 +0100 |
| commit | 45e16a6834b6af098702e5ea6c9a40de42ff77d8 | |
| tree | 401649ce862d60960b47c9261a16346a51a72c14 | |
| parent | 85cfabbcd10f8d112feee6e2ec64ee78033b6d3c | |
perf, x86: Fix hw_perf_enable() event assignment
What happens is that we schedule badly like:
```
<...>-1987  [019]   280.252808: x86_pmu_start: event-46/1300c0: idx: 0
<...>-1987  [019]   280.252811: x86_pmu_start: event-47/1300c0: idx: 1
<...>-1987  [019]   280.252812: x86_pmu_start: event-48/1300c0: idx: 2
<...>-1987  [019]   280.252813: x86_pmu_start: event-49/1300c0: idx: 3
<...>-1987  [019]   280.252814: x86_pmu_start: event-50/1300c0: idx: 32
<...>-1987  [019]   280.252825: x86_pmu_stop: event-46/1300c0: idx: 0
<...>-1987  [019]   280.252826: x86_pmu_stop: event-47/1300c0: idx: 1
<...>-1987  [019]   280.252827: x86_pmu_stop: event-48/1300c0: idx: 2
<...>-1987  [019]   280.252828: x86_pmu_stop: event-49/1300c0: idx: 3
<...>-1987  [019]   280.252829: x86_pmu_stop: event-50/1300c0: idx: 32
<...>-1987  [019]   280.252834: x86_pmu_start: event-47/1300c0: idx: 1
<...>-1987  [019]   280.252834: x86_pmu_start: event-48/1300c0: idx: 2
<...>-1987  [019]   280.252835: x86_pmu_start: event-49/1300c0: idx: 3
<...>-1987  [019]   280.252836: x86_pmu_start: event-50/1300c0: idx: 32
<...>-1987  [019]   280.252837: x86_pmu_start: event-51/1300c0: idx: 32 *FAIL*
```
This happens because the first pass iterates only the n_running events, resetting their index to -1 when they no longer match their previous assignment, to force a re-assignment (see the sketch below).
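For context, the pre-patch step1 loop looked roughly like this (a simplified sketch reconstructed from the lines this patch removes, not the verbatim kernel source):

```c
/* step1 (pre-patch): only the first n_running events are visited */
for (i = 0; i < n_running; i++) {
	event = cpuc->event_list[i];
	hwc = &event->hw;

	/* same counter as before: keep the event running untouched */
	if (match_prev_assignment(hwc, cpuc, i))
		continue;

	x86_pmu_stop(event);
	hwc->idx = -1;	/* mark for re-assignment in step2 */
}
```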
Now, in our round-robin (RR) example, n_running == 0 because we fully unscheduled, so event-50 retains its stale idx == 32 (a fixed-counter index) even though the new schedule would have given it idx 0, and we never trigger the re-assign path.
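Concretely, the pre-patch step2 loop (again reconstructed from the removed diff lines) shows why: with n_running == 0 neither guard fires, so an event with a stale hwc->idx is restarted on its old counter:

```c
/* step2 (pre-patch): re-assign only events flagged with idx == -1 */
for (i = 0; i < cpuc->n_events; i++) {
	event = cpuc->event_list[i];
	hwc = &event->hw;

	if (i < n_running &&		/* never true when n_running == 0 */
	    match_prev_assignment(hwc, cpuc, i))
		continue;

	if (hwc->idx == -1)		/* false: step1 never reset it */
		x86_assign_hw_event(event, cpuc, i);

	x86_pmu_start(event);		/* event-50 restarts on stale idx 32 */
}
```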
The easiest way to fix this is the patch below, which simply validates the full assignment in the second pass.
Reported-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1268311069.5037.31.camel@laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
| -rw-r--r-- | arch/x86/kernel/cpu/perf_event.c | 12 |
|---|---|---|

1 file changed, 3 insertions(+), 9 deletions(-)
```diff
diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c
index c6bde7d7afd..5fb490c6ee5 100644
--- a/arch/x86/kernel/cpu/perf_event.c
+++ b/arch/x86/kernel/cpu/perf_event.c
@@ -811,7 +811,6 @@ void hw_perf_enable(void)
 		 * step2: reprogram moved events into new counters
 		 */
 		for (i = 0; i < n_running; i++) {
-
 			event = cpuc->event_list[i];
 			hwc = &event->hw;
 
@@ -826,21 +825,16 @@ void hw_perf_enable(void)
 				continue;
 
 			x86_pmu_stop(event);
-
-			hwc->idx = -1;
 		}
 
 		for (i = 0; i < cpuc->n_events; i++) {
-
 			event = cpuc->event_list[i];
 			hwc = &event->hw;
 
-			if (i < n_running &&
-			    match_prev_assignment(hwc, cpuc, i))
-				continue;
-
-			if (hwc->idx == -1)
+			if (!match_prev_assignment(hwc, cpuc, i))
 				x86_assign_hw_event(event, cpuc, i);
+			else if (i < n_running)
+				continue;
 
 			x86_pmu_start(event);
 		}
```
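After the patch, the step2 loop reduces to the shape below (a reconstruction of the resulting code from the hunks above): every event's assignment is validated, and only events that both kept their previous assignment and were already running are skipped:

```c
/* step2 (post-patch): validate the full assignment for all events */
for (i = 0; i < cpuc->n_events; i++) {
	event = cpuc->event_list[i];
	hwc = &event->hw;

	if (!match_prev_assignment(hwc, cpuc, i))
		x86_assign_hw_event(event, cpuc, i);	/* stale or new: reprogram */
	else if (i < n_running)
		continue;				/* unchanged and still running */

	x86_pmu_start(event);
}
```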