## Count each bit-position separately over many 64-bit bitmasks, with AVX but not AVX2

(Related: *How to quickly count bits into separate bins in a series of ints on Sandy Bridge?* is an earlier duplicate of this, with some different answers. Editor's note: the answers here are probably better.

Also see an AVX2 version of a similar problem, with many bins for a whole row of bits much wider than one `uint64_t`: *Improve column population count algorithm*.)

I am working on a project in C where I need to go through tens of millions of masks (of type `ulong`, 64-bit) and update an array (called `target`) of 64 short integers (`uint16`) based on a simple rule:

```c
// for any given mask, do the following loop
for (i = 0; i < 64; i++) {
    if (mask & (1ull << i)) {
        target[i]++;
    }
}
```

The problem is that I need to run the above loop on tens of millions of masks and finish in less than a second. I wonder if there is any way to speed it up, like using some special assembly instruction that represents the above loop.

Currently I use gcc 4.8.4 on Ubuntu 14.04 (i7-2670QM, supporting AVX, not AVX2) to compile and run the following code, which takes about 2 seconds. I would love to make it run under 200ms.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <sys/stat.h>

double getTS() {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1000000.0;
}

unsigned int target[64];

int main(int argc, char *argv[]) {
    int i, j;
    unsigned long x = 123;
    unsigned long m = 1;
    char *p = malloc(8 * 10000000);
    if (!p) {
        printf("failed to allocate\n");
        exit(0);
    }
    memset(p, 0xff, 80000000);
    printf("p=%p\n", p);
    unsigned long *pLong = (unsigned long*)p;
    double start = getTS();
    for (j = 0; j < 10000000; j++) {
        m = 1;
        for (i = 0; i < 64; i++) {
            if ((pLong[j] & m) == m) {
                target[i]++;
            }
            m = (m << 1);
        }
    }
    printf("took %f secs\n", getTS() - start);
    return 0;
}
```

Thanks in advance!

On my system, a 4 year old MacBook (2.7 GHz Intel Core i5) with `clang-900.0.39.2 -O3`, your code runs in 500ms.

Just changing the inner test to `if ((pLong[j] & m) != 0)` saves 30%, running in 350ms.

Further simplifying the inner part to `target[i] += (pLong[j] >> i) & 1;` without a test brings it down to 280ms.
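Put together as a minimal, self-contained sketch (the `count_bits_branchless` helper name is mine, not from the original code), the branchless inner loop looks like this:

```c
#include <stddef.h>
#include <stdint.h>

/* Branchless per-position bit counting: for each of the 64 bit
   positions, add the bit's value (0 or 1) directly to the counter
   instead of branching on it. */
static void count_bits_branchless(const uint64_t *masks, size_t n,
                                  unsigned int target[64]) {
    for (size_t j = 0; j < n; j++) {
        for (int i = 0; i < 64; i++) {
            target[i] += (masks[j] >> i) & 1;  /* no data-dependent branch */
        }
    }
}
```

Removing the branch avoids mispredictions on random data, which is where the savings come from.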

Further improvements seem to require more advanced techniques, such as unpacking the bits into bytes spread across a block of 8 `uint64_t` accumulators and adding those in parallel, handling 255 `uint64_t` values at a time.

Here is an improved version using this method. It runs in 45ms on my system.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <sys/stat.h>

double getTS() {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1000000.0;
}

int main(int argc, char *argv[]) {
    unsigned int target[64] = { 0 };
    unsigned long *pLong = malloc(sizeof(*pLong) * 10000000);
    int i, j;
    if (!pLong) {
        printf("failed to allocate\n");
        exit(1);
    }
    memset(pLong, 0xff, sizeof(*pLong) * 10000000);
    printf("p=%p\n", (void*)pLong);
    double start = getTS();
    uint64_t inflate[256];
    for (i = 0; i < 256; i++) {
        uint64_t x = i;
        x = (x | (x << 28));
        x = (x | (x << 14));
        inflate[i] = (x | (x << 7)) & 0x0101010101010101ULL;
    }
    for (j = 0; j < 10000000 / 255 * 255; j += 255) {
        uint64_t b[8] = { 0 };
        for (int k = 0; k < 255; k++) {
            uint64_t u = pLong[j + k];
            for (int kk = 0; kk < 8; kk++, u >>= 8)
                b[kk] += inflate[u & 255];
        }
        for (i = 0; i < 64; i++)
            target[i] += (b[i / 8] >> ((i % 8) * 8)) & 255;
    }
    for (; j < 10000000; j++) {
        uint64_t m = 1;
        for (i = 0; i < 64; i++) {
            target[i] += (pLong[j] >> i) & 1;
            m <<= 1;
        }
    }
    printf("target = {");
    for (i = 0; i < 64; i++)
        printf(" %d", target[i]);
    printf(" }\n");
    printf("took %f secs\n", getTS() - start);
    return 0;
}
```

The technique for inflating a byte into a 64-bit long is investigated and explained in this answer: https://stackoverflow.com/a/55059914/4593267 . I made the `target` array a local variable, as well as the `inflate` array, and I print the results to ensure the compiler does not optimize the computations away. In a production version you would compute the `inflate` array separately.
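The bit-spreading identity used to build the table can be checked in isolation. Here is a small standalone sketch (the `inflate_byte` helper name is mine): bit k of the input byte lands at bit position 8k of the result, i.e. in the low bit of byte k.

```c
#include <stdint.h>

/* Spread the 8 bits of a byte into the 8 bytes of a uint64_t.
   After the three shift-or steps, bit k has copies at positions
   k + {0, 7, 14, 21, 28, 35, 42, 49}; exactly one of those,
   k + 7k = 8k, survives the 0x0101010101010101 mask. */
static uint64_t inflate_byte(uint8_t b) {
    uint64_t x = b;
    x = x | (x << 28);
    x = x | (x << 14);
    return (x | (x << 7)) & 0x0101010101010101ULL;
}
```

With this layout, adding inflated values accumulates a separate byte-wide counter per bit position.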

Using SIMD directly might provide further improvements at the expense of portability and readability. This kind of optimisation is often better left to the compiler as it can generate specific code for the target architecture. Unless performance is critical and benchmarking proves this to be a bottleneck, I would always favor a generic solution.

A different solution by njuffa provides similar performance without the need for a precomputed array. Depending on your compiler and hardware specifics, it might be faster.

One way of speeding this up significantly, even without AVX, is to split the data into blocks of up to 255 elements, and accumulate the bit counts byte-wise in ordinary `uint64_t` variables. Since the source data has 64 bits, we need an array of 8 byte-wise accumulators. The first accumulator counts bits in positions 0, 8, 16, ... 56; the second accumulator counts bits in positions 1, 9, 17, ... 57; and so on. After we are finished processing a block of data, we transfer the counts from the byte-wise accumulators into the `target` counts. A function to update the `target` counts for a block of up to 255 numbers can be coded in a straightforward fashion according to the description above, where `BITS` is the number of bits in the source data:

```c
/* update the counts of 1-bits in each bit position for up to 255 numbers */
void sum_block (const uint64_t *pLong, unsigned int *target, int lo, int hi)
{
    int jj, k, kk;
    uint64_t byte_wise_sum [BITS/8] = {0};
    for (jj = lo; jj < hi; jj++) {
        uint64_t t = pLong[jj];
        for (k = 0; k < BITS/8; k++) {
            byte_wise_sum[k] += t & 0x0101010101010101;
            t >>= 1;
        }
    }
    /* accumulate byte sums into target */
    for (k = 0; k < BITS/8; k++) {
        for (kk = 0; kk < BITS; kk += 8) {
            target[kk + k] += (byte_wise_sum[k] >> kk) & 0xff;
        }
    }
}
```

The entire ISO-C99 program, which should run on at least Windows and Linux platforms, is shown below. It initializes the source data with a PRNG, performs a correctness check against the asker's reference implementation, and benchmarks both the reference code and the accelerated version. On my machine (Intel Xeon E3-1270 v2 @ 3.50 GHz), when compiled with MSVS 2010 at full optimization (`/Ox`), the output of the program is:

```
p=0000000000550040
ref took 2.020282 secs, fast took 0.027099 secs
```

where `ref` refers to the asker's original solution. The speed-up here is about a factor of 74x. Different speed-ups will be observed with other (and especially newer) compilers.

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>

#if defined(_WIN32)
#if !defined(WIN32_LEAN_AND_MEAN)
#define WIN32_LEAN_AND_MEAN
#endif
#include <windows.h>
double second (void)
{
    LARGE_INTEGER t;
    static double oofreq;
    static int checkedForHighResTimer;
    static BOOL hasHighResTimer;

    if (!checkedForHighResTimer) {
        hasHighResTimer = QueryPerformanceFrequency (&t);
        oofreq = 1.0 / (double)t.QuadPart;
        checkedForHighResTimer = 1;
    }
    if (hasHighResTimer) {
        QueryPerformanceCounter (&t);
        return (double)t.QuadPart * oofreq;
    } else {
        return (double)GetTickCount() * 1.0e-3;
    }
}
#elif defined(__linux__) || defined(__APPLE__)
#include <stddef.h>
#include <sys/time.h>
double second (void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (double)tv.tv_sec + (double)tv.tv_usec * 1.0e-6;
}
#else
#error unsupported platform
#endif

/*
  From: geo <gmars...@gmail.com>
  Newsgroups: sci.math,comp.lang.c,comp.lang.fortran
  Subject: 64-bit KISS RNGs
  Date: Sat, 28 Feb 2009 04:30:48 -0800 (PST)

  This 64-bit KISS RNG has three components, each nearly
  good enough to serve alone. The components are:
  Multiply-With-Carry (MWC), period (2^121+2^63-1)
  Xorshift (XSH), period 2^64-1
  Congruential (CNG), period 2^64
*/
static uint64_t kiss64_x = 1234567890987654321ULL;
static uint64_t kiss64_c = 123456123456123456ULL;
static uint64_t kiss64_y = 362436362436362436ULL;
static uint64_t kiss64_z = 1066149217761810ULL;
static uint64_t kiss64_t;
#define MWC64  (kiss64_t = (kiss64_x << 58) + kiss64_c, \
                kiss64_c = (kiss64_x >> 6), kiss64_x += kiss64_t, \
                kiss64_c += (kiss64_x < kiss64_t), kiss64_x)
#define XSH64  (kiss64_y ^= (kiss64_y << 13), kiss64_y ^= (kiss64_y >> 17), \
                kiss64_y ^= (kiss64_y << 43))
#define CNG64  (kiss64_z = 6906969069ULL * kiss64_z + 1234567ULL)
#define KISS64 (MWC64 + XSH64 + CNG64)

#define N          (10000000)
#define BITS       (64)
#define BLOCK_SIZE (255)

/* update the count of 1-bits in each bit position for up to 255 numbers */
void sum_block (const uint64_t *pLong, unsigned int *target, int lo, int hi)
{
    int jj, k, kk;
    uint64_t byte_wise_sum [BITS/8] = {0};
    for (jj = lo; jj < hi; jj++) {
        uint64_t t = pLong[jj];
        for (k = 0; k < BITS/8; k++) {
            byte_wise_sum[k] += t & 0x0101010101010101;
            t >>= 1;
        }
    }
    /* accumulate byte sums into target */
    for (k = 0; k < BITS/8; k++) {
        for (kk = 0; kk < BITS; kk += 8) {
            target[kk + k] += (byte_wise_sum[k] >> kk) & 0xff;
        }
    }
}

int main (void)
{
    double start_ref, stop_ref, start, stop;
    uint64_t *pLong;
    unsigned int target_ref [BITS] = {0};
    unsigned int target [BITS] = {0};
    int i, j;

    pLong = malloc (sizeof(pLong[0]) * N);
    if (!pLong) {
        printf("failed to allocate\n");
        return EXIT_FAILURE;
    }
    printf("p=%p\n", pLong);

    /* init data */
    for (j = 0; j < N; j++) {
        pLong[j] = KISS64;
    }

    /* count bits slowly */
    start_ref = second();
    for (j = 0; j < N; j++) {
        uint64_t m = 1;
        for (i = 0; i < BITS; i++) {
            if ((pLong[j] & m) == m) {
                target_ref[i]++;
            }
            m = (m << 1);
        }
    }
    stop_ref = second();

    /* count bits fast */
    start = second();
    for (j = 0; j < N / BLOCK_SIZE; j++) {
        sum_block (pLong, target, j * BLOCK_SIZE, (j+1) * BLOCK_SIZE);
    }
    sum_block (pLong, target, j * BLOCK_SIZE, N);
    stop = second();

    /* check whether result is correct */
    for (i = 0; i < BITS; i++) {
        if (target[i] != target_ref[i]) {
            printf ("error @ %d: res=%u ref=%u\n", i, target[i], target_ref[i]);
        }
    }

    /* print benchmark results */
    printf("ref took %f secs, fast took %f secs\n",
           stop_ref - start_ref, stop - start);
    return EXIT_SUCCESS;
}
```

For starters, consider the problem of unpacking the bits, because you really do not want to test each bit individually.

So follow this strategy for unpacking the bits into the bytes of a vector: https://stackoverflow.com/a/24242696/2879325

Now that each bit has been padded to 8 bits, you can do this for blocks of up to 255 bitmasks at a time, accumulating them into a single vector register. After that, the byte accumulators could overflow, so you need to transfer the counts to wider counters.

After each block of 255, unpack the byte accumulators to 32 bits and add them into the array. (You don't have to process exactly 255 bitmasks, just some convenient number less than 256 to avoid overflow of the byte accumulators.)

At 8 instructions per bitmask (4 for each of the lower and upper 32 bits with AVX2) - or half that if you have AVX512 available - you should be able to achieve a throughput of about half a billion bitmasks per second and core on a recent CPU.

```cpp
#include <cstddef>
#include <cstdint>
#include <immintrin.h>

typedef uint64_t T;
const size_t bytes = 8;
const size_t bits = bytes * 8;
const size_t block_size = 128;

static inline __m256i expand_bits_to_bytes(uint32_t x)
{
    // we only use the low 32 bits of each lane, but this is fine with AVX2
    __m256i xbcast = _mm256_set1_epi32(x);

    // Each byte gets the source byte containing the corresponding bit
    const __m256i shufmask = _mm256_set_epi64x(
        0x0303030303030303, 0x0202020202020202,
        0x0101010101010101, 0x0000000000000000);
    __m256i shuf = _mm256_shuffle_epi8(xbcast, shufmask);

    // every 8 bits -> 8 bytes, pattern repeats.
    const __m256i andmask = _mm256_set1_epi64x(0x8040201008040201);
    __m256i isolated_inverted = _mm256_andnot_si256(shuf, andmask);

    // this is the extra step: byte == 0 ? 0 : -1
    return _mm256_cmpeq_epi8(isolated_inverted, _mm256_setzero_si256());
}

void bitcount_vectorized(const T *data, uint32_t accumulator[bits], const size_t count)
{
    for (size_t outer = 0; outer < count - (count % block_size); outer += block_size) {
        __m256i temp_accumulator[bits / 32] = { _mm256_setzero_si256() };
        for (size_t inner = 0; inner < block_size; ++inner) {
            for (size_t j = 0; j < bits / 32; j++) {
                const auto unpacked = expand_bits_to_bytes(
                    static_cast<uint32_t>(data[outer + inner] >> (j * 32)));
                temp_accumulator[j] = _mm256_sub_epi8(temp_accumulator[j], unpacked);
            }
        }
        for (size_t j = 0; j < bits; j++) {
            accumulator[j] += ((uint8_t*)(&temp_accumulator))[j];
        }
    }
    for (size_t outer = count - (count % block_size); outer < count; outer++) {
        for (size_t j = 0; j < bits; j++) {
            if (data[outer] & (T(1) << j)) {
                accumulator[j]++;
            }
        }
    }
}

void bitcount_naive(const T *data, uint32_t accumulator[bits], const size_t count)
{
    for (size_t outer = 0; outer < count; outer++) {
        for (size_t j = 0; j < bits; j++) {
            if (data[outer] & (T(1) << j)) {
                accumulator[j]++;
            }
        }
    }
}
```

Depending on the chosen compiler, the vectorized form achieved roughly a 25x speedup over the naive one.

On a Ryzen 5 1600X, the vectorized form roughly achieved the predicted throughput of ~600,000,000 elements per second.

Surprisingly, this is actually still 50% slower than the solution proposed by @njuffa.

See

- *Efficient Computation of Positional Population Counts Using SIMD Instructions* by Marcus D. R. Klarqvist, Wojciech Muła, Daniel Lemire (7 Nov 2019)
- *Faster Population Counts using AVX2 Instructions* by Wojciech Muła, Nathan Kurz, Daniel Lemire (23 Nov 2016)

Basically, each full adder compresses 3 inputs to 2 outputs. So one can eliminate an entire 256-bit word for the price of 5 logic instructions. The full adder operation could be repeated until registers become exhausted. Then results in the registers are accumulated (as seen in most of the other answers).
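As a scalar sketch of the idea (my illustration in plain C, using the textbook sum/majority formulation rather than the in-place register-saving sequence below): the 3:2 compressor turns three words into a bitwise sum and a carry word such that, in every bit lane, `a + b + c == sum + 2*carry`.

```c
#include <stdint.h>

/* Carry-save full adder (3:2 compressor) across 64 independent bit
   lanes: compresses three words into a sum word and a carry word.
   Per lane: a + b + c == sum + 2*carry, so
   popcount(a) + popcount(b) + popcount(c)
       == popcount(sum) + 2*popcount(carry). */
static void csa(uint64_t a, uint64_t b, uint64_t c,
                uint64_t *sum, uint64_t *carry) {
    uint64_t u = a ^ b;
    *sum   = u ^ c;               /* XOR of the three inputs      */
    *carry = (a & b) | (u & c);   /* majority of the three inputs */
}
```

Chaining such compressors (the Harley-Seal approach) is what lets one input word be retired per 5 logic instructions while deferring the expensive accumulation step.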

Positional popcnt for 16-bit subwords is implemented here: https://github.com/mklarqvist/positional-popcount

```c
// Carry-Save Full Adder (3:2 compressor)
b ^= a; a ^= c; c ^= b; // xor sum (ends up in c)
b |= a; b ^= c;         // carry   (ends up in b)
```

*Note: the accumulate step for positional popcount is more expensive than for a normal SIMD popcount, which I believe makes it worthwhile to add a couple of half-adders to the end of the CSA tree; it might pay to go all the way up to 256 words before accumulating.*

##### Comments

- Using `uint16` as counters would likely result in overflow.
- "like using some special assembly instruction that represents the above loop" --> a good compiler will see this. Refer to your compiler's optimization settings to activate such.
- Suggestion: take down the question, add a test harness around it to report time, and post that code asking for improvements. As it is now, the question is too broad.
- You should fill the buffer with realistic data too. The number of set bits and their pattern affects the performance of this code; accidentally having all zeroes would unfairly benefit branchy code, which would perform worse on maximally random bits (50% chance, uncorrelated).
- @Gene: no, `popcnt` is a horizontal popcnt that adds up all the bits in one integer. The OP wants separate counts for every bit-position over multiple integers. You want to unpack bits to something wider (e.g. nibbles or bytes), then vertically add (like `paddb`) as many times as you can without overflow, then expand to wider counters.
- It appears that my CPU doesn't perform as well as yours :-) I tried `if ((pLong[j] & m) != 0)` and it made no difference in time. Tried `target[i] += (pLong[j] >> i) & 1;`, and it is actually worse, since time goes to 2.74 seconds.
- @pktCoder and chqrlie: Your numbers are more useful when you specify the CPU model used to run the experiments and the compiler options used to compile the code.
- @HadiBrais My CPU is `i7-2670QM`, mentioned in the post.
- @pktCoder: Also note that chqrlie is using clang, which normally unrolls inner loops, while gcc doesn't enable loop unrolling unless you use `-fprofile-generate` / run the program / `-fprofile-use`. Also, your gcc 4.8 is quite old, barely newer than your CPU. A newer gcc version would optimize better.
- @phuclv: Thank you. I added your method to this comparative benchmark: stackoverflow.com/a/55059914/4593267 In the case of this question, it is (only) 30% slower than the lookup table. SIMD approaches would probably squeeze more CPU cycles out.
- Thanks @peter-cordes for the writeup. I will have to study it carefully since I believe it's the way to go.