{"id":"src_AvCVX6IMc1zAGjuIGz8uW","slug":"koute-bytehound","name":"koute/bytehound","type":"github","url":"https://github.com/koute/bytehound","orgId":"org__6UF50ygydhlwnAHvxYQP","productId":null,"productSlug":null,"org":{"id":"org__6UF50ygydhlwnAHvxYQP","slug":"koute","name":"koute"},"isPrimary":false,"isHidden":true,"discovery":"on_demand","metadata":"{\"lookup\":{\"coordinate\":\"koute/bytehound\",\"fetchedAt\":\"2026-04-30T01:50:48.770Z\",\"lastRefreshedAt\":\"2026-04-30T01:50:48.770Z\",\"emptyResult\":false}}","releaseCount":12,"releasesLast30Days":0,"avgReleasesPerWeek":0,"latestVersion":"0.11.0","latestDate":"2022-11-23T09:43:06.000Z","changelogUrl":null,"hasChangelogFile":false,"lastFetchedAt":null,"lastPolledAt":null,"trackingSince":"2019-05-18T16:44:23.000Z","releases":[{"id":"rel_IpWY_MwYzX9pOf0QOHHO9","version":"0.11.0","type":"feature","title":"0.11.0","summary":"Major changes:\r\n- Added support for `_rjem_aligned_alloc` (jemalloc variant of `aligned_malloc`)\r\n- The initialization is now done eagerly in the stat...","titleGenerated":null,"titleShort":null,"content":"Major changes:\r\n- Added support for `_rjem_aligned_alloc` (jemalloc variant of `aligned_malloc`)\r\n- The initialization is now done eagerly in the static constructor, in case the profiled executable overrides all of the `LD_PRELOAD`-hooked functions by itself so none actually get called","publishedAt":"2022-11-23T09:43:06.000Z","fetchedAt":"2026-04-30T01:50:49.104Z","url":"https://github.com/koute/bytehound/releases/tag/0.11.0","media":[]},{"id":"rel_2sMkkEdwVrlbPG0Nyu7Iq","version":"0.10.0","type":"feature","title":"0.10.0","summary":"Major changes:\r\n\r\n* Performance improvements; CPU overhead of allocation-heavy heavily multithreaded programs was cut down by up to ~80%\r\n* You can no...","titleGenerated":null,"titleShort":null,"content":"Major changes:\r\n\r\n* Performance improvements; CPU overhead of allocation-heavy heavily multithreaded programs was cut down by up to 
~80%\r\n* You can now control whether child processes are profiled with the `MEMORY_PROFILER_TRACK_CHILD_PROCESSES` environment variable (disabled by default)\r\n* The fragmentation timeline was removed from the UI\r\n* `mmap`/`munmap` calls are now gathered by default (you can disable this with `MEMORY_PROFILER_GATHER_MAPS`)\r\n* Total actual memory usage is now gathered by periodically polling `/proc/self/smaps`\r\n* Maps can now be browsed in the UI and analyzed through the scripting API\r\n* Maps are now named according to their source using `PR_SET_VMA_ANON_NAME` (Linux 5.17 or newer; on older kernels this is emulated in user space)\r\n* Glibc-internal `__mmap` and `__munmap` are now hooked into\r\n* Bytehound-internal allocations now exclusively use mimalloc as their allocator\r\n* New scripting APIs:\r\n   - `AllocationList::only_alive_at`\r\n   - `AllocationList::only_from_maps`\r\n   - `Graph::start_at`\r\n   - `Graph::end_at`\r\n   - `Graph::show_address_space`\r\n   - `Graph::show_rss`\r\n   - `MapList`\r\n   - `Map`\r\n* Removed scripting APIs:\r\n   - `AllocationList::only_not_deallocated_after_at_least`\r\n   - `AllocationList::only_not_deallocated_until_at_most`\r\n   - `Graph::truncate_until`\r\n   - `Graph::extend_until`\r\n* Removed lifetime filters in the UI: `only_not_deallocated_in_current_range`, `only_deallocated_in_current_range`\r\n* Fixed a rare crash when profiling programs using jemalloc\r\n* Added support for `aligned_alloc`\r\n* Added support for `memalign`\r\n* Relative scale in the generated graphs is now always relative to the start of profiling\r\n* Gathered backtraces will now include an extra Bytehound-specific frame on the bottom to indicate which function was called\r\n* Minor improvements to the 
UI","publishedAt":"2022-11-17T06:23:45.000Z","fetchedAt":"2026-04-30T01:50:49.104Z","url":"https://github.com/koute/bytehound/releases/tag/0.10.0","media":[]},{"id":"rel_8sQ7Vhkf58MVd0jl50m7l","version":"0.9.0","type":"feature","title":"0.9.0","summary":"Major changes:\r\n * Deallocation backtraces are now gathered by default; you can use the `MEMORY_PROFILER_GRAB_BACKTRACES_ON_FREE` environment variable...","titleGenerated":null,"titleShort":null,"content":"Major changes:\r\n * Deallocation backtraces are now gathered by default; you can use the `MEMORY_PROFILER_GRAB_BACKTRACES_ON_FREE` environment variable to turn this off\r\n * Deallocation backtraces are now shown in the GUI for each allocation\r\n * Allocations can now be filtered according to where exactly they were deallocated\r\n * Allocations can now be filtered according to whether the last allocation in their realloc chain was leaked or not\r\n * Profiling of executables larger than 4GB is now supported\r\n * Profiling of executables using unprefixed jemalloc is now supported\r\n * New scripting APIs:\r\n      * `AllocationList::only_matching_deallocation_backtraces`\r\n      * `AllocationList::only_not_matching_deallocation_backtraces`\r\n      * `AllocationList::only_position_in_chain_at_least`\r\n      * `AllocationList::only_position_in_chain_at_most`\r\n      * `AllocationList::only_chain_leaked`\r\n  * The `server` subcommand of the CLI should now use less memory when loading large data files\r\n  * The behavior of `malloc_usable_size` when called with a `NULL` argument now matches glibc\r\n  * At minimum Rust 1.62 is now required to build the crates; older versions might still work, but will not be supported\r\n  * The way the profiler is initialized was reworked; this should increase compatibility and might fix some of the crashes seen when trying to profile certain 
programs","publishedAt":"2022-07-25T13:08:13.000Z","fetchedAt":"2026-04-30T01:50:49.104Z","url":"https://github.com/koute/bytehound/releases/tag/0.9.0","media":[]},{"id":"rel_88UxggO7sMNdapARMnyeL","version":"0.8.0","type":"feature","title":"0.8.0","summary":"Major changes:\r\n\r\n  * Significantly lower CPU usage when temporary allocation culling is turned on\r\n  * Each thread now has its own first-level backtr...","titleGenerated":null,"titleShort":null,"content":"Major changes:\r\n\r\n  * Significantly lower CPU usage when temporary allocation culling is turned on\r\n  * Each thread now has its own first-level backtrace cache; this might result in higher memory usage when profiling\r\n  * The `MEMORY_PROFILER_BACKTRACE_CACHE_SIZE` environment variable knob was replaced with `MEMORY_PROFILER_BACKTRACE_CACHE_SIZE_LEVEL_1` and `MEMORY_PROFILER_BACKTRACE_CACHE_SIZE_LEVEL_2` to control the size of the per-thread caches and the global cache respectively\r\n  * The `MEMORY_PROFILER_PRECISE_TIMESTAMPS` environment variable knob was removed (always gathering precise timestamps is fast enough on amd64)\r\n  * The default value of `MEMORY_PROFILER_TEMPORARY_ALLOCATION_PENDING_THRESHOLD` is now unset, which means that allocations will be buffered indefinitely until they're either culled or until they live long enough to not be eligible for culling (might increase memory usage in certain cases)\r\n  * Backtraces are no longer emitted for allocations which were completely culled\r\n  * You can now see whether a given allocation was made through jemalloc, and filter according to that\r\n  * You can now see when a given allocation group reached its maximum memory usage, and filter according to that\r\n  * New scripting APIs:\r\n    - `Graph::show_memory_usage`\r\n    - `Graph::show_live_allocations`\r\n    - `Graph::show_new_allocations`\r\n    - `Graph::show_deallocations`\r\n    - `AllocationList::only_group_max_total_usage_first_seen_at_least`\r\n    - 
`AllocationList::only_jemalloc`\r\n  * New subcommand: `extract` (will unpack all of the files embedded into a given data file)\r\n  * The `strip` subcommand no longer buffers allocations indefinitely when using the `--threshold` option, which results in a significantly lower memory usage when stripping huge data files from long profiling runs\r\n  * `malloc_usable_size` now works properly when compiled with the `jemalloc` feature\r\n  * `reallocarray` doesn't segfault anymore\r\n  * The compilation should now work on distributions with an ancient version of Yarn","publishedAt":"2021-11-16T09:57:40.000Z","fetchedAt":"2026-04-30T01:50:49.104Z","url":"https://github.com/koute/bytehound/releases/tag/0.8.0","media":[]},{"id":"rel_-l9dKnzTvc0qXplxxpQZe","version":"0.7.0","type":"feature","title":"0.7.0","summary":"Major changes:\r\n\r\n   * The project was rebranded from `memory-profiler` to `bytehound`\r\n   * Profiling of applications using jemalloc is now fully sup...","titleGenerated":null,"titleShort":null,"content":"Major changes:\r\n\r\n   * The project was rebranded from `memory-profiler` to `bytehound`\r\n   * Profiling of applications using jemalloc is now fully supported (AMD64-only, `jemallocator` crate only)\r\n   * Added built-in scripting capabilities which can be used for automated analysis and report generation; those can be accessed through the `script` subcommand\r\n   * Added a scripting console to the GUI\r\n   * Added the ability to define programmatic filters in the GUI\r\n   * Allocation graphs are now shown in the GUI when browsing through the allocations grouped by backtraces\r\n   * Improved support for tracking and analyzing reallocations\r\n   * Improved parallelization of the analyzer's internals, which should result in snappier behavior on modern multicore machines\r\n   * The cutoff point for determining allocations' lifetime is now the end of profiling for those allocations which were never deallocated\r\n   * The `squeeze` subcommand 
was renamed to `strip`\r\n   * You can now use the `strip` subcommand to strip away only a subset of temporary allocations\r\n   * Information about allocations culled at runtime is now emitted on a per-backtrace basis during profiling\r\n   * Fixed an issue where the shadow stack based unwinding was incompatible with Rust's ABI in certain rare cases\r\n   * `mmap` calls are now always gathered in order (if you have enabled their gathering)\r\n   * Improved runtime backtrace deduplication which should result in smaller datafiles\r\n   * Many other miscellaneous bugfixes\r\n","publishedAt":"2021-08-18T07:54:10.000Z","fetchedAt":"2026-04-30T01:50:49.104Z","url":"https://github.com/koute/bytehound/releases/tag/0.7.0","media":[]},{"id":"rel_znNHkQvj7UBVuv73viH_b","version":"0.6.1","type":"feature","title":"0.6.1","summary":"This is a bugfix release that fixes a possible deadlock when FDEs are dynamically registered at runtime.","titleGenerated":null,"titleShort":null,"content":"This is a bugfix release that fixes a possible deadlock when FDEs are dynamically registered at runtime.","publishedAt":"2021-06-10T07:58:45.000Z","fetchedAt":"2026-04-30T01:50:49.104Z","url":"https://github.com/koute/bytehound/releases/tag/0.6.1","media":[]},{"id":"rel_4h-UtI_YgGaMo2qgoqSDD","version":"0.6.0","type":"feature","title":"0.6.0","summary":"Major changes:\r\n   * Added a runtime backtrace cache; backtraces are now deduplicated when profiling, which results in less data being generated.\r\n   ...","titleGenerated":null,"titleShort":null,"content":"Major changes:\r\n   * Added a runtime backtrace cache; backtraces are now deduplicated when profiling, which results in less data being generated.\r\n   * Added automatic culling of temporary allocations when running with `MEMORY_PROFILER_CULL_TEMPORARY_ALLOCATIONS` set to `1`.\r\n   * Added support for `reallocarray`.\r\n   * Added support for unwinding through JITed code, provided the JIT compiler registers its unwinding tables through 
`__register_frame`.\r\n   * Added support for unwinding through frames which require arbitrary DWARF expressions to be evaluated when resolving register values.\r\n   * Added support for DWARF expressions that fetch memory.\r\n   * Allocations are not tracked by their addresses anymore; they're now tracked by unique IDs, which fixes a race condition when multiple threads are simultaneously allocating and deallocating memory in quick succession.\r\n   * `mmap` calls are no longer gathered by default.\r\n   * Rewrote TLS state management; some deallocations from TLS destructors which were previously missed by the profiler are now gathered.\r\n   * When profiling is disabled at runtime the profiler doesn't completely shut down anymore, and will keep on gathering data for those allocations which were made before it was disabled; when reenabled it won't create a new file anymore and instead it will keep on writing to the same file as it did before it was disabled.\r\n   * The profiler now requires Rust nightly to compile.","publishedAt":"2021-06-09T08:47:29.000Z","fetchedAt":"2026-04-30T01:50:49.104Z","url":"https://github.com/koute/bytehound/releases/tag/0.6.0","media":[]},{"id":"rel_Jvp0J7Oy2CLvBxL_EpTCy","version":"0.5.0","type":"feature","title":"0.5.0","summary":"Major changes:\r\n\r\n  * Shadow stack based unwinding is now supported on stable Rust and turned on by default.\r\n  * Systems where `perf_event_open` is u...","titleGenerated":null,"titleShort":null,"content":"Major changes:\r\n\r\n  * Shadow stack based unwinding is now supported on stable Rust and turned on by default.\r\n  * Systems where `perf_event_open` is unavailable (e.g. unpatched MIPS64 systems, docker containers, etc.) 
are now supported.\r\n  * The mechanism for exception handling when using shadow stack based unwinding was completely rewritten using proper landing pads.\r\n  * Programs which call `longjmp`/`setjmp` are now partially supported when using shadow stack based unwinding.\r\n  * Shared objects dynamically loaded through `dlopen` are now properly handled.\r\n  * Rust symbol demangling is now supported.\r\n  * Fixed an issue where calling `backtrace` on certain architectures while using shadow stack based unwinding would crash the program.\r\n  * The profiler can now be compiled with the `jemalloc` feature to use jemalloc instead of the system allocator.\r\n  * The profiler can now be started and stopped programmatically through `memory_profiler_start` and `memory_profiler_stop` functions exported by `libmemory_profiler.so`. Those are equivalent to controlling the profiler through signals.","publishedAt":"2019-10-07T13:58:23.000Z","fetchedAt":"2026-04-30T01:50:49.136Z","url":"https://github.com/koute/bytehound/releases/tag/0.5.0","media":[]},{"id":"rel_kFmlJXNpkh1RbM8T2aZC_","version":"0.4.0","type":"feature","title":"0.4.0","summary":"Major changes:\r\n\r\n   * The profiler can now be compiled on Rust stable, with the caveat that the shadow stack based unwinding will be then disabled.\r\n...","titleGenerated":null,"titleShort":null,"content":"Major changes:\r\n\r\n   * The profiler can now be compiled on Rust stable, with the caveat that the shadow stack based unwinding will be then disabled.\r\n   * The profiler is now fully lazily initialized; if disabled with `MEMORY_PROFILER_DISABLE_BY_DEFAULT` the profiler will not initialize itself nor create an output file.\r\n   * The signal handler registration can now be disabled with `MEMORY_PROFILER_REGISTER_SIGUSR1` and `MEMORY_PROFILER_REGISTER_SIGUSR2`.\r\n   * When the profiling is disabled at runtime it will more thoroughly deinitialize itself, and when reenabled it will create a new output file instead of continuing 
to write data to the old one.\r\n   * The embedded server is now disabled by default and can be reenabled with the `MEMORY_PROFILER_ENABLE_SERVER` environment variable.\r\n   * The base port of the embedded server can now be set with the `MEMORY_PROILER_BASE_SERVER_PORT` environment variable.\r\n   * The `MEMORY_PROFILER_OUTPUT` now supports an `%n` placeholder.\r\n   * The GUI now has a graph which shows allocations and deallocations per second.","publishedAt":"2019-07-14T19:24:30.000Z","fetchedAt":"2026-04-30T01:50:49.136Z","url":"https://github.com/koute/bytehound/releases/tag/0.4.0","media":[]},{"id":"rel_I2T0jfBQ-9cEoIXFTvrDj","version":"0.3.0","type":"feature","title":"0.3.0","summary":"Major changes:\r\n\r\n - More performance improvements. In the average case the cost per single allocation was cut down to approximately 75%. Every thre...","titleGenerated":null,"titleShort":null,"content":"Major changes:\r\n\r\n - More performance improvements. In the average case the cost per single allocation was cut down to approximately 75%. Every thread now has its own unwind context, so stack traces can now be gathered in parallel.\r\n- The profiler should no longer crash on systems with a recent version of `libstdc++` when a C++ exception is thrown.","publishedAt":"2019-06-06T15:49:13.000Z","fetchedAt":"2026-04-30T01:50:49.136Z","url":"https://github.com/koute/bytehound/releases/tag/0.3.0","media":[]},{"id":"rel_8SdZiVtKSwS8atD0HrSlI","version":"0.2.0","type":"feature","title":"0.2.0","summary":"Major changes:\r\n  * Massive performance improvements. In the average case on AMD64 the cost per single allocation was cut down to 20%; on ARM it was...","titleGenerated":null,"titleShort":null,"content":"Major changes:\r\n  * Massive performance improvements. 
In the average case on AMD64 the cost per single allocation was cut down to 20%; on ARM it was cut down to less than 50%.\r\n  * The profiler no longer crashes when a memory operation is triggered from a destructor of an object residing in TLS.\r\n  * The gathered timestamps are no longer as precise as they were; they should be off by at most ~250ms if your application isn't making a lot of allocations. You can restore the previous behavior if you need it by setting `MEMORY_PROFILER_PRECISE_TIMESTAMPS` to `1` at the cost of extra CPU time.","publishedAt":"2019-05-28T17:12:01.000Z","fetchedAt":"2026-04-30T01:50:49.136Z","url":"https://github.com/koute/bytehound/releases/tag/0.2.0","media":[]},{"id":"rel_65fK7JApqM0o9B-Jcn2W0","version":"0.1.0","type":"feature","title":"0.1.0","summary":"Initial public release","titleGenerated":null,"titleShort":null,"content":"Initial public release","publishedAt":"2019-05-18T16:44:23.000Z","fetchedAt":"2026-04-30T01:50:49.136Z","url":"https://github.com/koute/bytehound/releases/tag/0.1.0","media":[]}],"pagination":{"page":1,"pageSize":20,"returned":12,"totalItems":12,"totalPages":1,"hasMore":false},"summaries":{"rolling":null,"monthly":[]}}