## Sign extension from 24 bit to 32 bit in C++

I have 3 unsigned bytes that are coming over the wire separately.

[byte1, byte2, byte3]

I need to convert these to a signed 32-bit value but I am not quite sure how to handle the sign of the negative values.

I thought of copying the bytes to the upper 3 bytes in the int32 and then shifting everything to the right but I read this may have unexpected behavior.

Is there an easier way to handle this?

The representation is using two's complement.


Assuming both representations are two's complement, simply

```cpp
upper_byte = (Signed_byte(incoming_msb) >= 0 ? 0 : Byte(-1));
```

where

```cpp
using Signed_byte = signed char;
using Byte = unsigned char;
```

and `upper_byte` is a variable representing the missing fourth byte.

The conversion to `Signed_byte` is formally implementation-defined, but a two's complement implementation doesn't have a choice, really.
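A minimal sketch of how that `upper_byte` can be combined with the three wire bytes (the function name and byte parameters are my own; `byte3` is assumed to be the most significant, and the final unsigned-to-signed conversion is implementation-defined before C++20, though two's complement implementations all do the expected thing):

```cpp
#include <cstdint>

using Byte        = unsigned char;
using Signed_byte = signed char;

// Assemble a signed 32-bit value from three big-endian wire bytes,
// filling the missing fourth byte from the sign of the incoming MSB.
std::int32_t from_24_bit(Byte byte3, Byte byte2, Byte byte1)
{
    const Byte upper_byte = (Signed_byte(byte3) >= 0 ? 0 : Byte(-1));
    const std::uint32_t u = (std::uint32_t(upper_byte) << 24)
                          | (std::uint32_t(byte3) << 16)
                          | (std::uint32_t(byte2) << 8)
                          |  std::uint32_t(byte1);
    // Implementation-defined pre-C++20; modular (i.e. what you expect) since C++20.
    return static_cast<std::int32_t>(u);
}
```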


This is a pretty old question, but I recently had to do the same (while dealing with 24-bit audio samples), and wrote my own solution for it. It's using a similar principle as this answer, but more generic, and potentially generates better code after compiling.

```cpp
#include <cstddef>      // size_t
#include <type_traits>  // std::is_integral

template <size_t Bits, typename T>
inline constexpr T sign_extend(const T& v) noexcept
{
    static_assert(std::is_integral<T>::value, "T is not integral");
    static_assert(sizeof(T) * 8u >= Bits, "T is smaller than the specified width");
    if constexpr (sizeof(T) * 8u == Bits)
        return v;
    else
    {
        using S = struct { signed Val : Bits; };
        return reinterpret_cast<const S*>(&v)->Val;
    }
}
```

This has no hard-coded math, it simply lets the compiler do the work and figure out the best way to sign-extend the number. With certain widths, this can even generate a native sign-extension instruction in the assembly, such as MOVSX on x86.

This function assumes you copied your N-bit number into the lower N bits of the type you want to extend it to. So for example:

```cpp
int16_t a = -42;
int32_t b{};
memcpy(&b, &a, sizeof(a));
b = sign_extend<16>(b);
```

Of course it works for any number of bits, extending it to the full width of the type that contained the data.
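Applied to the 24-bit case from the question, the same bitfield principle can be sketched as a standalone function (the name is an assumption; assigning an out-of-range value to a signed bitfield is implementation-defined before C++20, but two's complement compilers keep the low 24 bits as expected):

```cpp
#include <cstdint>

// Sign-extend the low 24 bits of v to a full int32_t via a 24-bit
// bitfield: the compiler emits the sign extension for us.
std::int32_t sign_extend_24(std::uint32_t v)
{
    struct { std::int32_t val : 24; } s;
    s.val = static_cast<std::int32_t>(v & 0xFFFFFF);  // keep only the low 24 bits
    return s.val;
}
```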


You could let the compiler handle the sign extension itself. Assuming that the least significant byte is byte1 and the most significant byte is byte3:

```cpp
int val = (signed char) byte3;              // C guarantees the sign extension
val <<= 16;                                 // shift the byte to its final place
val |= ((int) (unsigned char) byte2) << 8;  // place the second byte
val |= (int) (unsigned char) byte1;         // and the least significant one
```

I have used C style casts here when `static_cast` would have been more C++ish, but as an old dinosaur (and Java programmer) I find C style casts more readable for integer conversions.
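Wrapped into a complete function, the approach looks like this (a sketch; the function name is mine, and note that left-shifting a negative `int` is formally undefined before C++20, even though real two's complement targets behave as intended):

```cpp
// Assemble three wire bytes into a sign-extended int, letting the
// cast to signed char supply the sign extension of the top byte.
int assemble(unsigned char byte3, unsigned char byte2, unsigned char byte1)
{
    int val = (signed char) byte3;              // sign-extended most significant byte
    val <<= 16;                                 // UB pre-C++20 if val is negative
    val |= ((int) (unsigned char) byte2) << 8;
    val |= (int) (unsigned char) byte1;
    return val;
}
```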


Here's a method that works for any bit count, even if it's not a multiple of 8. This assumes you've already assembled the 3 bytes into an integer `value`.

```cpp
const int bits = 24;
int mask = (1 << bits) - 1;
bool is_negative = (value & ~(mask >> 1)) != 0;
value |= -is_negative & ~mask;
```
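For example, the snippet above can be wrapped in a function with `bits` as a parameter (a sketch; the name is an assumption):

```cpp
// Generic sign extension for an N-bit value already assembled into
// the low bits of `value`; works for widths that aren't multiples of 8.
int sign_extend(int value, int bits)
{
    int mask = (1 << bits) - 1;                      // low `bits` bits set
    bool is_negative = (value & ~(mask >> 1)) != 0;  // sign bit (or anything above) set?
    value |= -is_negative & ~mask;                   // fill the high bits with ones
    return value;
}
```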


When a 32-bit signed integer is negative, its highest bit is equal to one. When this 32-bit signed integer is cast to a 64-bit number, the high bits can be set to zero (preserving the unsigned integer and hexadecimal value of the number) or the high bits can be set to one (preserving the signed value of the number).

```
01001000                             <- 8-bit value of 72
00000000 01001000                    <- extended to 16-bit value
00000000 00000000 00000000 01001000  <- extended to 32-bit value
```

Note: it is the programmer's responsibility to keep track of which values are signed or unsigned, and treat them accordingly.

Note that for a 4-bit, 6-bit, 8-bit, 16-bit or 32-bit signed binary number all the bits MUST have a value, therefore “0’s” are used to fill the spaces between the leftmost sign bit and the first or highest value “1”.

##### Comments

- Another benefit of bit-twiddling in this way is that you're not limited to a 32-bit int - it works just as well on a 64-bit int, for example. I'd change the type, perhaps to a template parameter, and make `bits` a function parameter as well.
- @MarkRansom good points, is that approximately what you meant?
- I need a signed 32 not unsigned though
- @Beto you can just use signed types here, at least I see no way for it to break (unless `bits` is something unreasonable). Makes the rest of the code more dangerous though.
- Perfect. I like the way you split the `m` assignment into two parts to make sure the shifting occurs on the proper type.
- Why so complicated though? You could just do `(value ^ m) - m` with `m = 1 << (bits - 1)`
- @harold if you think you have a better answer go ahead and answer the question yourself. I'm having a hard time convincing myself that it works, but if it does you'll get a +1 from me.
- Fair enough, I just thought maybe there's a reason for it
- Placing a value in a bit field so small that the extracted value is not equal, must surely be implementation-defined behavior. Still I like this. :)
- How do you compile this? I get some "error: address of bit-field requested". Works if I remove that `.i24` in the memcpy, maybe that's what you meant?
- @harold yes. This was made up without compiling
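The `(value ^ m) - m` one-liner suggested in the comments can be sketched like this (an illustration under the assumption that the 24-bit value already sits in the low bits of an `int`; the function name is mine):

```cpp
// XOR with the sign-bit mask flips the sign bit; subtracting the mask
// then borrows through the high bits exactly when the sign bit was set.
int sign_extend_xor(int value, int bits)
{
    int m = 1;
    m <<= bits - 1;  // build the sign-bit mask on the proper (int) type
    return (value ^ m) - m;
}
```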