Using TopDown metrics
---------------------

TopDown metrics break apart performance bottlenecks. Starting at level
1 it is typical to get metrics on retiring, bad speculation, frontend
bound, and backend bound. Higher levels provide more detail into the
level 1 bottlenecks, such as at level 2: core bound, memory bound,
heavy operations, light operations, branch mispredicts, machine
clears, fetch latency and fetch bandwidth. For more details see [1][2][3].

perf stat --topdown implements this using available metrics that vary
per architecture.

% perf stat -a --topdown -I1000
#           time      %  tma_retiring %  tma_backend_bound %  tma_frontend_bound %  tma_bad_speculation
     1.001141351                 11.5                 34.9                  46.9                    6.7
     2.006141972                 13.4                 28.1                  50.4                    8.1
     3.010162040                 12.9                 28.1                  51.1                    8.0
     4.014009311                 12.5                 28.6                  51.8                    7.2
     5.017838554                 11.8                 33.0                  48.0                    7.2
     5.704818971                 14.0                 27.5                  51.3                    7.3
...

New Topdown features in Intel Ice Lake
======================================

With Ice Lake CPUs the TopDown metrics are directly available as
fixed counters and do not require generic counters. This allows
TopDown to be collected at all times in addition to other events.

Using TopDown through RDPMC in applications on Intel Ice Lake
=============================================================

For more fine grained measurements it can be useful to
access the new counters directly from user space. This is more complicated,
but drastically lowers overhead.

On Ice Lake, there is a new fixed counter 3: SLOTS, which reports
"pipeline SLOTS" (cycles multiplied by core issue width) and a
metric register that reports slots ratios for the different bottleneck
categories.

The metrics counter is CPU model specific and is not available on older
CPUs.

Example code
============

Library functions providing the functionality described below
are also available in libjevents [4].

The application opens a group with fixed counter 3 (SLOTS) and any
metric event, and allows user programs to read the performance counters.

Fixed counter 3 is mapped to a pseudo event event=0x00, umask=0x04,
so the perf_event_attr structure should be initialized with
{ .config = 0x0400, .type = PERF_TYPE_RAW }.
The metric events are mapped to the pseudo event event=0x00, umask=0x8X.
For example, the perf_event_attr structure can be initialized with
{ .config = 0x8000, .type = PERF_TYPE_RAW } for the Retiring metric event.
Fixed counter 3 must be the leader of the group.

#include <linux/perf_event.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Provide own perf_event_open stub because glibc doesn't */
__attribute__((weak))
int perf_event_open(struct perf_event_attr *attr, pid_t pid,
		    int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

/* Open slots counter file descriptor for current task. */
struct perf_event_attr slots = {
	.type = PERF_TYPE_RAW,
	.size = sizeof(struct perf_event_attr),
	.config = 0x400,
	.exclude_kernel = 1,
};

int slots_fd = perf_event_open(&slots, 0, -1, -1, 0);
if (slots_fd < 0)
	... error ...

/* Memory mapping the fd permits _rdpmc calls from userspace */
void *slots_p = mmap(0, getpagesize(), PROT_READ, MAP_SHARED, slots_fd, 0);
if (slots_p == MAP_FAILED)
	... error ...

/*
 * Open metrics event file descriptor for current task.
 * Set slots event as the leader of the group.
 */
struct perf_event_attr metrics = {
	.type = PERF_TYPE_RAW,
	.size = sizeof(struct perf_event_attr),
	.config = 0x8000,
	.exclude_kernel = 1,
};

int metrics_fd = perf_event_open(&metrics, 0, -1, slots_fd, 0);
if (metrics_fd < 0)
	... error ...

/* Memory mapping the fd permits _rdpmc calls from userspace */
void *metrics_p = mmap(0, getpagesize(), PROT_READ, MAP_SHARED, metrics_fd, 0);
if (metrics_p == MAP_FAILED)
	... error ...

Note: the file descriptors returned by the perf_event_open calls must be memory
mapped to permit calls to the RDPMC instruction. Permission may also be granted
by writing to the /sys/devices/cpu/rdpmc sysfs node.
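
Whether RDPMC access is actually enabled for a mapped event can be checked
from the mapped page itself. A minimal sketch, assuming the slots_p mapping
from the example above, tests the cap_user_rdpmc bit of
struct perf_event_mmap_page:

struct perf_event_mmap_page *slots_page = slots_p;

/* The kernel sets cap_user_rdpmc when RDPMC may be used from user space */
if (!slots_page->cap_user_rdpmc)
	... error ...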

The RDPMC instruction (or _rdpmc compiler intrinsic) can now be used
to read slots and the topdown metrics at different points of the program:

#include <stdint.h>
#include <x86intrin.h>

#define RDPMC_FIXED	(1 << 30)	/* return fixed counters */
#define RDPMC_METRIC	(1 << 29)	/* return metric counters */

#define FIXED_COUNTER_SLOTS		3
#define METRIC_COUNTER_TOPDOWN_L1_L2	0

static inline uint64_t read_slots(void)
{
	return _rdpmc(RDPMC_FIXED | FIXED_COUNTER_SLOTS);
}

static inline uint64_t read_metrics(void)
{
	return _rdpmc(RDPMC_METRIC | METRIC_COUNTER_TOPDOWN_L1_L2);
}

Then the program can be instrumented to read these metrics at different
points.

It's not a good idea to do this with too short code regions,
as the parallelism and overlap in the CPU program execution will
cause too much measurement inaccuracy. For example, instrumenting
individual basic blocks is definitely too fine grained.

_rdpmc calls should not be mixed with reading the metrics and slots counters
through system calls, as the kernel will reset these counters after each system
call.

Decoding metrics values
=======================

The value reported by read_metrics() contains four 8 bit fields, each
representing a scaled ratio for one of the Level 1 bottleneck categories.
All four fields add up to 0xff (= 100%).

The binary ratios in the metric value can be converted to float ratios:

#define GET_METRIC(m, i) (((m) >> (i*8)) & 0xff)

/* L1 Topdown metric events */
#define TOPDOWN_RETIRING(val)	((float)GET_METRIC(val, 0) / 0xff)
#define TOPDOWN_BAD_SPEC(val)	((float)GET_METRIC(val, 1) / 0xff)
#define TOPDOWN_FE_BOUND(val)	((float)GET_METRIC(val, 2) / 0xff)
#define TOPDOWN_BE_BOUND(val)	((float)GET_METRIC(val, 3) / 0xff)

/*
 * L2 Topdown metric events.
 * Available on Sapphire Rapids and later platforms.
 */
#define TOPDOWN_HEAVY_OPS(val)		((float)GET_METRIC(val, 4) / 0xff)
#define TOPDOWN_BR_MISPREDICT(val)	((float)GET_METRIC(val, 5) / 0xff)
#define TOPDOWN_FETCH_LAT(val)		((float)GET_METRIC(val, 6) / 0xff)
#define TOPDOWN_MEM_BOUND(val)		((float)GET_METRIC(val, 7) / 0xff)

and then converted to percent for printing.
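
For example, a single snapshot of the metrics register could be decoded and
printed as percentages; a minimal sketch using the read_metrics() helper and
the macros above:

	uint64_t metrics = read_metrics();

	printf("Retiring %.2f%% Bad Speculation %.2f%% FE Bound %.2f%% BE Bound %.2f%%\n",
		TOPDOWN_RETIRING(metrics) * 100.,
		TOPDOWN_BAD_SPEC(metrics) * 100.,
		TOPDOWN_FE_BOUND(metrics) * 100.,
		TOPDOWN_BE_BOUND(metrics) * 100.);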

The ratios in the metric accumulate for the time when the counter
is enabled. For measuring programs it is often useful to measure
specific sections. For this, deltas on the metrics are needed.

This can be done by scaling the metrics with the slots counter
read at the same time.

Then it's possible to take deltas of these slots counts
measured at different points, and determine the metrics
for that time period.

	slots_a = read_slots();
	metric_a = read_metrics();

	... larger code region ...

	slots_b = read_slots();
	metric_b = read_metrics();

	# compute scaled metrics for measurement a
	retiring_slots_a = GET_METRIC(metric_a, 0) * slots_a
	bad_spec_slots_a = GET_METRIC(metric_a, 1) * slots_a
	fe_bound_slots_a = GET_METRIC(metric_a, 2) * slots_a
	be_bound_slots_a = GET_METRIC(metric_a, 3) * slots_a

	# compute delta scaled metrics between b and a
	retiring_slots = GET_METRIC(metric_b, 0) * slots_b - retiring_slots_a
	bad_spec_slots = GET_METRIC(metric_b, 1) * slots_b - bad_spec_slots_a
	fe_bound_slots = GET_METRIC(metric_b, 2) * slots_b - fe_bound_slots_a
	be_bound_slots = GET_METRIC(metric_b, 3) * slots_b - be_bound_slots_a

Later the individual ratios of L1 metric events for the measurement period can
be recreated from these counts.

	slots_delta = slots_b - slots_a
	retiring_ratio = (float)retiring_slots / slots_delta
	bad_spec_ratio = (float)bad_spec_slots / slots_delta
	fe_bound_ratio = (float)fe_bound_slots / slots_delta
	be_bound_ratio = (float)be_bound_slots / slots_delta

	printf("Retiring %.2f%% Bad Speculation %.2f%% FE Bound %.2f%% BE Bound %.2f%%\n",
		retiring_ratio * 100.,
		bad_spec_ratio * 100.,
		fe_bound_ratio * 100.,
		be_bound_ratio * 100.);

The individual ratios of L2 metric events for the measurement period can be
recreated from L1 and L2 metric counters. (Available on Sapphire Rapids and
later platforms.)

	# compute scaled metrics for measurement a
	heavy_ops_slots_a = GET_METRIC(metric_a, 4) * slots_a
	br_mispredict_slots_a = GET_METRIC(metric_a, 5) * slots_a
	fetch_lat_slots_a = GET_METRIC(metric_a, 6) * slots_a
	mem_bound_slots_a = GET_METRIC(metric_a, 7) * slots_a

	# compute delta scaled metrics between b and a
	heavy_ops_slots = GET_METRIC(metric_b, 4) * slots_b - heavy_ops_slots_a
	br_mispredict_slots = GET_METRIC(metric_b, 5) * slots_b - br_mispredict_slots_a
	fetch_lat_slots = GET_METRIC(metric_b, 6) * slots_b - fetch_lat_slots_a
	mem_bound_slots = GET_METRIC(metric_b, 7) * slots_b - mem_bound_slots_a

	slots_delta = slots_b - slots_a
	heavy_ops_ratio = (float)heavy_ops_slots / slots_delta
	light_ops_ratio = retiring_ratio - heavy_ops_ratio;

	br_mispredict_ratio = (float)br_mispredict_slots / slots_delta
	machine_clears_ratio = bad_spec_ratio - br_mispredict_ratio;

	fetch_lat_ratio = (float)fetch_lat_slots / slots_delta
	fetch_bw_ratio = fe_bound_ratio - fetch_lat_ratio;

	mem_bound_ratio = (float)mem_bound_slots / slots_delta
	core_bound_ratio = be_bound_ratio - mem_bound_ratio;

	printf("Heavy Operations %.2f%% Light Operations %.2f%% "
	       "Branch Mispredict %.2f%% Machine Clears %.2f%% "
	       "Fetch Latency %.2f%% Fetch Bandwidth %.2f%% "
	       "Mem Bound %.2f%% Core Bound %.2f%%\n",
		heavy_ops_ratio * 100.,
		light_ops_ratio * 100.,
		br_mispredict_ratio * 100.,
		machine_clears_ratio * 100.,
		fetch_lat_ratio * 100.,
		fetch_bw_ratio * 100.,
		mem_bound_ratio * 100.,
		core_bound_ratio * 100.);

Resetting metrics counters
==========================

Since the individual metrics are only 8 bits wide, they lose precision for
short regions over time because the number of cycles covered by each
fraction bit shrinks. So the counters need to be reset regularly.

When using the kernel perf API the kernel resets the counters on every read.
So as long as the reading is at reasonable intervals (every few
seconds) the precision is good.

When using perf stat it is recommended to always use the -I option,
with an interval no longer than a few seconds:

	perf stat -I 1000 --topdown ...

For user programs using RDPMC directly the counter can
be reset explicitly using ioctl:

	ioctl(perf_fd, PERF_EVENT_IOC_RESET, 0);

This "opens" a new measurement period.

A program using RDPMC for TopDown should schedule such a reset
regularly, as in every few seconds.

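A minimal sketch of such periodic resetting, assuming the slots_fd descriptor
opened in the example above and a hypothetical two second interval; the whole
group (SLOTS leader plus the metrics event) is reset through the leader with
PERF_IOC_FLAG_GROUP:

#include <sys/ioctl.h>
#include <time.h>

static time_t last_reset;

static void maybe_reset_topdown(int slots_fd)
{
	time_t now = time(NULL);

	/* Start a new measurement period every few seconds */
	if (now - last_reset >= 2) {
		ioctl(slots_fd, PERF_EVENT_IOC_RESET, PERF_IOC_FLAG_GROUP);
		last_reset = now;
	}
}
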
Limits on Intel Ice Lake
========================

Four pseudo TopDown metric events are exposed to end users:
topdown-retiring, topdown-bad-spec, topdown-fe-bound and topdown-be-bound.
They can be used to collect the TopDown value under the following
rules:
- All the TopDown metric events must be in a group with the SLOTS event.
- The SLOTS event must be the leader of the group.
- The PERF_FORMAT_GROUP flag must be applied to each TopDown metric
  event.
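
For counting from a user program, these rules translate into something like
the following sketch, which assumes the raw pseudo-event encodings from the
example above (SLOTS = 0x400, topdown-retiring = 0x8000) and reads the whole
group with one read() call:

struct perf_event_attr slots = {
	.type = PERF_TYPE_RAW,
	.size = sizeof(struct perf_event_attr),
	.config = 0x400,		/* SLOTS, group leader */
	.read_format = PERF_FORMAT_GROUP,
	.exclude_kernel = 1,
};
struct perf_event_attr retiring = {
	.type = PERF_TYPE_RAW,
	.size = sizeof(struct perf_event_attr),
	.config = 0x8000,		/* topdown-retiring */
	.read_format = PERF_FORMAT_GROUP,
	.exclude_kernel = 1,
};

int slots_fd = perf_event_open(&slots, 0, -1, -1, 0);
int retiring_fd = perf_event_open(&retiring, 0, -1, slots_fd, 0);

/* PERF_FORMAT_GROUP layout: number of events, then one value per member */
struct { uint64_t nr, slots, retiring; } counts;

if (read(slots_fd, &counts, sizeof(counts)) < 0)
	... error ...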

The SLOTS event and the TopDown metric events can be counting members of
a sampling read group. Since the SLOTS event must be the leader of a TopDown
group, the second event of the group is the sampling event.
For example, perf record -e '{slots, $sampling_event, topdown-retiring}:S'

Extension on Intel Sapphire Rapids Server
=========================================

The metrics counter is extended to support TMA method level 2 metrics.
The lower half of the register is the TMA level 1 metrics (legacy).
The upper half is also divided into four 8-bit fields for the new level 2
metrics. Four more TopDown metric events are exposed to end users:
topdown-heavy-ops, topdown-br-mispredict, topdown-fetch-lat and
topdown-mem-bound.

Each of the new level 2 metrics in the upper half is a subset of the
corresponding level 1 metric in the lower half. Software can deduce the
other four level 2 metrics by subtracting corresponding metrics as below.

    Light_Operations = Retiring - Heavy_Operations
    Machine_Clears = Bad_Speculation - Branch_Mispredicts
    Fetch_Bandwidth = Frontend_Bound - Fetch_Latency
    Core_Bound = Backend_Bound - Memory_Bound

TPEBS in TopDown
================

TPEBS (Timed PEBS) is one of the new Intel PMU features provided since the
Granite Rapids microarchitecture. The TPEBS feature adds a 16 bit
retire_latency field in the Basic Info group of the PEBS record. It records
the core cycles from the retirement of the previous instruction to the
retirement of the current instruction. Please refer to Section 8.4.1 of
"Intel® Architecture Instruction Set Extensions Programming Reference" for
more details about this feature. Because this feature extends the PEBS
record, sampling with the weight option is required to get the
retire_latency value.

	perf record -e event_name -W ...

In the most recent release of TMA, the metrics begin to use event retire_latency
values in some of the metrics' formulas on processors that support the TPEBS
feature. For previous generations that do not support TPEBS, the values are
static and predefined per processor family by the hardware architects. Due to
the diversity of workloads in execution environments, retire_latency values
measured at run time are more accurate. Therefore, new TMA metrics that use
TPEBS will provide more accurate performance analysis results.

To support TPEBS in TMA metrics, a new event modifier, :R, is added. Perf
captures the retire_latency values of the required events (events with :R in
the metric formula) with perf record. The retire_latency values are then used
in the metric calculation. Currently, this feature is supported through
perf stat:

	perf stat -M metric_name --record-tpebs ...


[1] https://software.intel.com/en-us/top-down-microarchitecture-analysis-method-win
[2] https://sites.google.com/site/analysismethods/yasin-pubs
[3] https://perf.wiki.kernel.org/index.php/Top-Down_Analysis
[4] https://github.com/andikleen/pmu-tools/tree/master/jevents