undefined symbol: glGenQueries

The problem here is code in libfips directly calling OpenGL
functions like glGenQueries without linking against any OpenGL
library.

We don't want to link against any OpenGL library, so that the
application itself can choose which OpenGL to use (and how to
load it).

The trick is to instead make these calls indirectly, by first
calling glXGetProcAddressARB/eglGetProcAddress to look up each
function pointer at run-time. There's some proof-of-concept code
for this in the stash-egl-lookup-fixups branch (which needs some
cleaning up).
Feature requests (small-scale, near-term changes)
=================================================
Report CPU load per frame.

Report GPU load per frame.

Report CPU frequency per frame.

Report GPU frequency per frame.

Report shader compilation time.

Add Eric's tiny hash table for collecting per-shader statistics:

	people.freedesktop.org:~anholt/hash_table

Sort the list of shaders in the output.

Use better units for shader activity (e.g. absolute time, relative percentage).

Capture GPU performance counters.

Allow dumping of shader source for investigation.
Infrastructure (larger-scale things, more future-looking items)
===============================================================
Use ncurses for a better top-like display.

Emit per-frame data in a format suitable for an external timeline viewer.

Allow enabling/disabling of tracing at run-time:

	Such as via signals (with the signal optionally specified by an
	environment variable).
Investigation for other potential features
==========================================
Audit Eric's recipe for performance tuning to see what else fips
should automatically collect:

	http://dri.freedesktop.org/wiki/IntelPerformanceTuning/

Audit existing visualization tools before writing one from scratch.

	Eero suggested that pytimechart might be well-suited:

	http://pythonhosted.org/pytimechart/index.html
Explore using perf/LTTng probing instead of the LD_PRELOAD wrapper.

	This has the advantage of allowing full-system, multi-process
	data collection.