Programmatically determining max value of a signed integer type

This related question is about determining the max value of a signed type at compile-time:

C question: off_t (and other signed integer types) minimum and maximum values

However, I've since realized that determining the max value of a signed type (e.g. time_t or off_t) at runtime seems to be a very difficult task.

The closest thing to a solution I can think of is:

uintmax_t x = (uintmax_t)1<<CHAR_BIT*sizeof(type)-2;
while ((type)x<=0) x>>=1;

This avoids any looping as long as type has no padding bits, but if type does have padding bits, the cast invokes implementation-defined behavior, which could be a signal or a nonsensical implementation-defined conversion (e.g. stripping the sign bit).
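
For concreteness, the fragment above could be wrapped into a compilable program like the following sketch; the choice of off_t (a POSIX type) as the example and the assumption that it has no padding bits are mine, not part of the question:

#include <stdio.h>
#include <stdint.h>
#include <limits.h>
#include <sys/types.h>

int main(void)
{
    /* Start from the highest power of two that could fit in off_t,
     * assuming off_t has no padding bits, then halve until the
     * converted value is positive. */
    uintmax_t x = (uintmax_t)1 << (CHAR_BIT * sizeof(off_t) - 2);
    while ((off_t)x <= 0)
        x >>= 1;
    /* x is now the largest power of two that converts to a positive off_t value. */
    printf("largest positive power of two representable in off_t: %ju\n", x);
    return 0;
}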

I'm beginning to think the problem is unsolvable, which is a bit unsettling and would be a defect in the C standard, in my opinion. Any ideas for proving me wrong?


Mathematically, if you have a finite set X of size n (n a positive integer) and a comparison operator (for x, y, z in X: x<=y and y<=z implies x<=z), finding the maximum value is a very simple problem. (It also always exists.)

The easiest way to solve this problem, but the most computationally expensive, is to generate an array with all possible values of the type, then find the max.

Part 1. For any type with a finite member set, there's a finite number of bits (m) which can be used to uniquely represent any given member of that type. We just make an array which contains all possible bit patterns, where each bit pattern represents some value of the specific type.

Part 2. Next we'd need to convert each binary number into the given type. This task is where my programming inexperience makes me unable to speak to how this may be accomplished. I've read some about casting, maybe that would do the trick? Or some other conversion method?

Part 3. Assuming that the previous step was finished, we now have a finite set of values in the desired type and a comparison operator on that set. Find the max.
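
As a concrete illustration of Parts 1 through 3 for a very small type, here is a sketch (my own, not part of the answer) that enumerates every bit pattern of signed char, reinterprets each one with memcpy, and keeps the largest value; it assumes every bit pattern is a valid, non-trap value of the type:

#include <stdio.h>
#include <string.h>
#include <limits.h>

int main(void)
{
    signed char max = 0;
    /* Part 1: every bit pattern of signed char fits in one unsigned char,
     * so iterating 0..UCHAR_MAX covers all of them. */
    for (unsigned int pattern = 0; pattern <= UCHAR_MAX; pattern++) {
        unsigned char byte = (unsigned char)pattern;
        signed char value;
        /* Part 2: reinterpret the bit pattern as the target type. */
        memcpy(&value, &byte, 1);
        /* Part 3: keep the largest value seen so far. */
        if (value > max)
            max = value;
    }
    printf("max of signed char: %d (SCHAR_MAX is %d)\n", (int)max, SCHAR_MAX);
    return 0;
}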

But what if...

...we don't know the exact number of members of the given type? Then we over-estimate. If we can't produce a reasonable over-estimate, then there should be physical bounds on the number. Once we have an over-estimate, we check all of those possible bit patterns to confirm which bit patterns represent members of the type. After discarding those which aren't used, we now have a set of all possible bit patterns which represent some member of the given type. This most recently generated set is what we'd use now at part 1.

...we don't have a comparison operator in that type? Then the specific problem is not only impossible, but logically irrelevant. That is, if our program has no way to get a meaningful result from comparing two values of our given type, then our given type has no ordering in the context of our program. Without an ordering, there's no such thing as a maximum value.

...we can't convert a given binary number into a given type? Then the method breaks. But similar to the previous exception, if you can't convert types, then our tool-set seems logically very limited.

Technically, you may not need to convert between binary representations and a given type. The entire point of the conversion is to ensure the generated list is exhaustive.

...we want to optimize the problem? Then we need some information about how the given type maps from binary numbers. For example, unsigned int, signed int (2's complement), and signed int (1's complement) each map from bits into numbers in a simple, well-documented way. Thus, if we wanted the highest possible value for unsigned int and we knew we were working with m bits, then we could simply fill each bit with a 1, convert the bit pattern to decimal, then output the number.

This relates to optimization because the most expensive part of this solution is listing all possible answers. If we have some previous knowledge of how the given type maps from bit patterns, we can generate only the plausible candidates instead of the full set of possibilities.
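
For the unsigned int case mentioned above, that shortcut is a one-liner in C (complementing zero sets every value bit, which is well defined for unsigned types):

unsigned int umax = ~0u;  /* all value bits set: this is UINT_MAX */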

Good luck.

Update: Thankfully, my previous answer below was wrong, and there seems to be a solution to this question.

intmax_t x;
for (x=INTMAX_MAX; (T)x!=x; x/=2);

This program either yields x containing the max possible value of type T, or generates an implementation-defined signal.

Working around the signal case may be possible but difficult and computationally infeasible (as in having to install a signal handler for every possible signal number), so I don't think this answer is fully satisfactory. POSIX signal semantics may give enough additional properties to make it feasible; I'm not sure.

The interesting part, especially if you're comfortable assuming you're not on an implementation that will generate a signal, is what happens when (T)x results in an implementation-defined conversion. The trick of the above loop is that it does not rely at all on the implementation's choice of value for the conversion. All it relies upon is that (T)x==x is possible if and only if x fits in type T, since otherwise the value of x is outside the range of possible values of any expression of type T.
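
Wrapped up as a compilable sketch, with time_t as the example target (the macro name is mine, and it assumes time_t is a signed integer type and that the conversion does not raise a signal):

#include <stdio.h>
#include <stdint.h>
#include <time.h>

/* Shrink INTMAX_MAX until it survives a round trip through type T.
 * Relies only on the property that (T)x == x holds iff x fits in T. */
#define MAX_OF_SIGNED(T, out)                        \
    do {                                             \
        intmax_t x_;                                 \
        for (x_ = INTMAX_MAX; (T)x_ != x_; x_ /= 2)  \
            ;                                        \
        (out) = x_;                                  \
    } while (0)

int main(void)
{
    intmax_t time_t_max;
    MAX_OF_SIGNED(time_t, time_t_max);
    printf("max value of time_t: %jd\n", time_t_max);
    return 0;
}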


Old idea, wrong because it does not account for the above (T)x==x property:

I think I have a sketch of a proof that what I'm looking for is impossible:

  1. Let X be a conforming C implementation and assume INT_MAX>32767.
  2. Define a new C implementation Y identical to X, but where the values of INT_MAX and INT_MIN are each divided by 2.
  3. Prove that Y is a conforming C implementation.

The essential idea of this outline is that, due to the fact that everything related to out-of-bound values with signed types is implementation-defined or undefined behavior, an arbitrary number of the high value bits of a signed integer type can be considered as padding bits without actually making any changes to the implementation except the limit macros in limits.h.

Any thoughts on if this sounds correct or bogus? If it's correct, I'd be happy to award the bounty to whoever can do the best job of making it more rigorous.

I might just be writing stupid things here, since I'm relatively new to C, but wouldn't this work for getting the max of a signed type?

unsigned x = ~0;
signed y = x / 2;

This might be a dumb way to do it, but as far as I've seen, the unsigned max value is the signed max * 2 + 1. Won't it work backwards?
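
As a compilable sketch of that idea for int specifically (it relies on the assumption, true on common platforms but not guaranteed by the standard, that unsigned int and int have the same width and no padding bits):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned x = ~0;     /* all value bits set: UINT_MAX */
    signed y = x / 2;    /* UINT_MAX / 2 equals INT_MAX when the widths match */
    printf("computed: %d, INT_MAX: %d\n", y, INT_MAX);
    return 0;
}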

Sorry for the time wasted if this proves to be completely inadequate and incorrect.

Shouldn't something like the following pseudo code do the job?

signed_type_of_max_size test_values =
    [(1<<7)-1, (1<<15)-1, (1<<31)-1, (1<<63)-1];

for test_value in test_values:
    signed_foo_t a = test_value;
    signed_foo_t b = a + 1;
    if (b < a):
        print "Max positive value of signed_foo_t is ", a

Or much simpler, why shouldn't the following work?

signed_foo_t signed_foo_max = (1<<(sizeof(signed_foo_t)*8-1))-1;

For my own code, I would definitely go for a build-time check defining a preprocessor macro, though.
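
A sketch of what such a macro might look like (my own illustration, not from the answer; it assumes two's complement and no padding bits, and builds the value as 2^(n-2) - 1 + 2^(n-2) so that no intermediate step overflows the type):

#include <stdio.h>
#include <limits.h>

/* Maximum value of a signed integer type t, computed from its size alone. */
#define SIGNED_MAX_OF(t) \
    ((((t)1 << (sizeof(t) * CHAR_BIT - 2)) - 1) + ((t)1 << (sizeof(t) * CHAR_BIT - 2)))

int main(void)
{
    printf("int:  %d (INT_MAX is %d)\n", SIGNED_MAX_OF(int), INT_MAX);
    printf("long: %ld (LONG_MAX is %ld)\n", SIGNED_MAX_OF(long), LONG_MAX);
    return 0;
}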
