Unpacking COMP-3 digits using Java


I have a file with some COMP-3 encoded fields. Can someone please tell me how to test the code in the thread below?

How to unpack COMP-3 digits using Java?

Code I tried is

try (BufferedReader br = new BufferedReader(new FileReader(FILENAME))) {

    String sCurrentLine;
    String bf = "";
    while ((sCurrentLine = br.readLine()) != null) {
        System.out.println("FROM BYTES ");
        System.out.println(unpackData(sCurrentLine.getBytes(), 5));

        for (int j = 0; j < sCurrentLine.length(); j++) {
            char c = sCurrentLine.charAt(j);
            bf = bf + (int) c;
        }
    }
}

The above code is not giving the correct result. I also tried converting a single column, but it does not return the correct result either. My input column

input file looks like

I tried out JRecord, passing the cbl copybook and the data file; it generates Java code, but that code does not give the same result. Generated output

required output

The cbl copybook looks like the image below

The accepted answer in How to unpack COMP-3 digits using Java? might work if you are working with ASCII-based Cobol. It will not work when reading mainframe EBCDIC files with a FileReader.

You have marked the question as Mainframe - Ebcdic.

To process the file correctly:

  1. Do a binary transfer from the mainframe (or run on the mainframe). Do not do an ASCII conversion; it will corrupt the COMP-3 fields.
  2. Read the file as a stream and process it as bytes.
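The two steps above can be sketched in Java. This is a minimal illustration, not code from the thread: the method name unpackComp3, the 4-byte field length, and the scale of 1 are assumptions standing in for whatever the copybook defines. It reads raw bytes (never a Reader) and decodes two BCD digits per byte, with the sign in the low nibble of the last byte.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.math.BigDecimal;

public class Comp3Reader {
    // Unpack a COMP-3 field: two BCD digits per byte, sign in the low
    // nibble of the last byte (0xD = negative; 0xC or 0xF = positive).
    static BigDecimal unpackComp3(byte[] field, int scale) {
        StringBuilder digits = new StringBuilder();
        for (int i = 0; i < field.length; i++) {
            digits.append((field[i] >> 4) & 0x0F);   // high nibble
            if (i < field.length - 1) {
                digits.append(field[i] & 0x0F);      // low nibble (not sign)
            }
        }
        int sign = field[field.length - 1] & 0x0F;
        BigDecimal value = new BigDecimal(digits.toString()).movePointLeft(scale);
        return sign == 0x0D ? value.negate() : value;
    }

    public static void main(String[] args) throws IOException {
        // x'01 23 45 7D' = -12345.7 when the copybook gives 1 decimal place
        byte[] record = {0x01, 0x23, 0x45, 0x7D};
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(record));
        byte[] field = new byte[4];   // field length comes from the copybook
        in.readFully(field);
        System.out.println(unpackComp3(field, 1));  // prints -12345.7
    }
}
```

In a real program the DataInputStream would wrap a FileInputStream over the binary-transferred file, and you would readFully each fixed-length record before slicing out the COMP-3 fields at their copybook offsets.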

The answer in COMP-3 data unpacking in Java (Embedded in Pentaho) will work; there are other answers on stackoverflow that will work as well.

Trying to process COMP-3 data as characters is error prone.


If you have a Cobol copybook, the JRecord library will let you read the file using a Cobol copybook. It contains a document ReadMe_NewUsers.html that goes through the basics.


The Generate >>> Java~JRecord code for cobol menu option of the RecordEditor will generate Java~JRecord code from a Cobol copybook (and optionally a data file).

There are details on generating code in this answer: How do I identify the level of a field in copybook using JRecord in Java? or look here

Also in the RecordEditor the Record Layouts >>> Load Cobol Copybook will load a Cobol copybook; you can then use the Layout to view the file.

COMP-3 data unpacking in Java (Embedded in Pentaho): I have a huge mainframe file and there are some packed digits in that file. I would like to know how to unpack these digits using Java.

The best way to manipulate packed decimal is to use the IBM Data Access Accelerator API. It uses an IBM specific JVM optimization called packed objects which is a technology for efficiently working on native data. There's some good Java code on SO for processing packed decimal data but the Data Access Accelerator is the sensible choice. It blows the RYO code away.

packed-decimal: An IBM COBOL COMPUTATIONAL-3 number is a signed binary-coded decimal entity. It consists of pairs of BCD digits packed two per byte, with the sign in the low nibble of the last byte. Since the Data Junction tool (which we used for data extraction) supports the COMP-3 datatype's encoding, it converts the data by default and we use that data for validation. What the Data Junction tool does is what we are trying to achieve using Java, and that is the requirement.
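That layout can be made concrete with a short packing sketch. The method name packComp3 and the unsigned-positive handling (sign nibble 0xC) are illustrative assumptions, not anything from the thread:

```java
import java.util.Arrays;

public class Comp3Pack {
    // Pack a non-negative value into COMP-3 bytes: two BCD digits per byte,
    // with the sign nibble 0xC (positive) in the low nibble of the last byte.
    static byte[] packComp3(long value, int byteLen) {
        byte[] out = new byte[byteLen];
        out[byteLen - 1] = (byte) (((value % 10) << 4) | 0x0C); // last digit + sign
        value /= 10;
        for (int i = byteLen - 2; i >= 0; i--) {
            out[i] = (byte) (value % 10);          // low nibble
            value /= 10;
            out[i] |= (byte) ((value % 10) << 4);  // high nibble
            value /= 10;
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] packed = packComp3(1234, 3);
        // 1234 in 3 bytes: x'01 23 4C'
        System.out.printf("x'%02X %02X %02X'%n", packed[0], packed[1], packed[2]);
    }
}
```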

If you compare the copybook with the data, you will see it does not match.

In particular, Class-Order-edg is defined as pic 9(3), but it looks like it is binary in the file.

Bils-count-edg looks to be shifted 6 bytes. This is consistent with the fields Class-order-edg --> Country-no-edg having been changed to comp-3/comp. The copybook appears to be out of date.

COBOL Comp-3 (Computational-3) Packed Fields: What they are, How to unpack COMP-3 digits using Java?

    Comp-3 value    hex        hex after Ascii conversion
    400             x'400c'    x'200c'

x'40' is the EBCDIC space character; it gets converted to the ASCII space character x'20'. You need to do a binary transfer, keeping the file as EBCDIC. Check the file on the mainframe: if it has RECFM=FB you can do a transfer. If the file is RECFM=VB, make sure you transfer the RDW (Record Descriptor Word), or copy the VB file to an FB file on the mainframe.
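The corruption in that table can be reproduced in a few lines. This sketch assumes the JDK's Cp037 EBCDIC charset is available (it ships in standard JDK builds); the class name is mine. A text-mode transfer effectively decodes each byte as EBCDIC and re-encodes it as ASCII, which silently turns the packed value 400 into 200:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class Comp3Corruption {
    public static void main(String[] args) {
        // COMP-3 value +400 is stored as x'40 0C'.
        byte[] packed = {0x40, 0x0C};

        // Simulate a text-mode (EBCDIC -> ASCII) transfer: decode the bytes
        // as EBCDIC (Cp037), then re-encode in an ASCII-compatible charset.
        String asText = new String(packed, Charset.forName("Cp037"));
        byte[] afterTransfer = asText.getBytes(StandardCharsets.ISO_8859_1);

        // x'40' (EBCDIC space) became x'20' (ASCII space): 400 is now 200.
        System.out.printf("x'%02X %02X'%n", afterTransfer[0], afterTransfer[1]);
    }
}
```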

The meaning of the term "COMP-3 encrypted file" is not clear to me, but I think you are saying that you have a file that was transferred from a zOS (EBCDIC) based system to an ASCII based system and you want to be able to process the values contained in the COMP-3 (packed decimal fields). If that is correct, I have some code and research that is relevant to your need.

I am assuming that the file was converted from EBCDIC to ASCII when it was transferred from zOS.

It is a common misconception that if COMP-3 (packed decimal) data is converted from EBCDIC to ASCII it gets "corrupted". That is not the case. What you get are values ranging from x'00' to x'0F'. Regardless of whether you are on an EBCDIC or ASCII based system, the hexadecimal values in that range are the same.

If the data is viewed outside of a hex editor [on either system] it appears to be corrupt. Depending on the code page, the packed decimal number 01234567890 may display as ⌁杅ྉ. However, using a hex editor you can see that the value is actually x'01 23 45 67 89 0F'. Two digits are stored in each byte (one digit in each nibble, with the last nibble of the last byte being the sign). When each byte is converted from hex, the actual digits are recovered. For example, using Lua, if the variable iChar contains x'23', the function oDec = string.format("%X", iChar) returns the text value "23", which can be converted to a number. By iterating over the entire string x'01 23 45 67 89 0F' the actual number (01234567890) is returned. The number can be "repacked" by reversing the process.

Sample code to unpack a packed decimal field is shown below:

--[[ Lua 5.2.3 ]]
--[[ Author: David Alley
     Written: August 9, 2017 ]]
--[[ Begin Function ]]
--[[ This function reads packed decimal data (converted from EBCDIC to ASCII)
     as input and returns unpacked ASCII decimal numbers. ]]
function xdec_unpack (iHex, lField, lNumber)
    if iHex == nil or iHex == "" then
        return iHex
    end
    local aChar = {}
    local aUnpack = {}
    for i = 1, lField do
        aChar[i] = string.byte(iHex, i)
    end
    for i, iChar in ipairs(aChar) do
        local oDec = string.format("%X", iChar)
        if string.len(oDec) == 1 then
            table.insert(aUnpack, "0" .. oDec) --[[ Handles binary zeros ]]
        else
            table.insert(aUnpack, oDec)
        end
    end
    if string.len(table.concat(aUnpack)) - 1 ~= lNumber then
        aUnpack[1] = string.sub(aUnpack[1], 2, 2)
    end
    return table.concat(aUnpack)
end
--[[ End Function xdec_unpack ]]

--[[ The code below was written for Linux and reads an entire file. It assumes that there is only one field, and that 
         field is in packed decimal format. Packed decimal format means that the file was transferred from a z/OS (EBCDIC) system 
         and the data was converted to ASCII.

         It is necessary to supply the field length because when Lua encounters binary zeros (common in packed decimal), 
       they are treated as an "end of record" indicator. The packed length value is supplied by the variable lField and the
         unpacked length value is supplied by the variable lNumber. 

         Since there is only one field, that field by default, is at the end of each record (the field is the record). Therefore, 
         any "new line" values (0x0a for Linux) must be included when reading records. This is handled by adding 1 to the variable 
         lField when reading records. Therefore, this code must be modified if there are multiple fields, and/or the packed decimal
         field is not the last field in the record.

         The sign is dropped from the unpacked value that is returned from the function xdec_unpack by removing the last byte from the 
         variable Output1 before writing the records. Therefore, this code must be modified if it is necessary to process negative 
         numbers. ]]

local lField = 7      --[[ This variable is the length of the packed decimal field before unpacking and is required by the 
                                                 xdec_unpack function. ]]
local lNumber = 12  --[[ This variable is the length of the unpacked decimal field not including the sign. It is required by the 
                                                 xdec_unpack function. Its purpose is to determine if a high order zero (left zero) is to be removed. This 
                                                 occurs in situations where the packed decimal field contains an even number of digits. For example,
                                                 0123456789. ]]
local sFile = io.open("/home/david/Documents/Lua/Input/Input2.txt", "r")
local oFile = io.open("/home/david/Documents/Lua/Input/Output1.txt", "w")
while true do
    local sLine = sFile:read(lField + 1)
    if sLine == nil then break end
    local Output1 = xdec_unpack(sLine, lField, lNumber) --[[ Call function to unpack ]]
    Output1 = string.sub(Output1, 1, #Output1 - 1) --[[ Remove sign ]]
    oFile:write(Output1, "\n")
end
sFile:close()
oFile:close()

[Solved] How to unpack COMP-3 digits using Java? You can use the IBM Record Generator for Java, a free tool. It allows you to generate a Java class that represents a COBOL or PL/I DSECT. Unpacking COMP-3 digits using Record Editor/JRecord: I have created a layout based on a Cobol copybook. Layout snapshot: I tried to load data selecting the same layout, and it gives me wrong results for some columns.

C++ Decimal unpack - C/C++ compilers for IBM Z: This page discusses how data is stored in COBOL "comp-3", or "packed", fields. Packed data stores two digits (one per nibble) per byte, so it only requires half the storage of unpacked data. COMP-3 is IBM's packed decimal format. On the S/360 and later CPUs of that descent it's directly manipulable via machine instructions. A similar capability exists on Intel's x86 and Motorola's M68K families, but very little code on those machines uses it.

Read Cobol packed (COMP-3) values from a file: I am in need of unpacking a decimal (aka Cobol Comp-3); what is the best way to unpack this comp-3 field fully without any loss of digits? I am looking to code a generic COBOL program which can be used to unpack COMP/COMP-3 fields into non-COMP fields of corresponding length. I will have 2 input files. The first file will be the one to be unpacked. The second file will contain info about the layout of the first file in the below format

  • COMP-3 is binary, so the BufferedReader is already wrong. You need to use an InputStream.
  • Yeah, I have gone through the thread, but we have a compressed file; the data is not in hex format (e.g. x'123f'), it has special characters. How do I convert that file to hex format first? I have the copybook of the data file. Some of the fields are Char and others are fixed.
  • Without seeing the file, I can not help you. If the data is compressed - what is the compression format. The mainframe has its own hardware based compression + the usual compression algorithms (e.g. zip etc) can be run.
  • I am trying to generate equivalent Java code. In the Cobol code, the data type has been changed from Fixed(15) INIT(0) to PIC'-----------9V.999' INIT in the output layout, and they write an output file. I am working on the same input file. For an input file snapshot, please check the image above.
  • On the mainframe side, they are FTPing the mainframe flat file to Windows using binary format. This is the technique they are using to generate the text file. Once the file has been generated, it looks like the image above (edited part).
  • If you have a Cobol copybook, have a look at this answer stackoverflow.com/questions/45529152/… The Generate function should generate Java~JRecord code to read the file for you. There are several templates; the standard template will generate basic JRecord code, and other templates can convert the Cobol records into pojos. You will need the JRecord library: sourceforge.net/projects/jrecord
  • Please check out thread stackoverflow.com/questions/45637188/…
  • The notion of the data not getting "corrupted" in some way is just wrong. x'40' is a perfectly valid byte in a COMP-3 field and will be converted to x'20'. x'30' is a valid byte in a COMP-3 field and does not have a defined character associated with it. How will ASCII-conversion handle it?
  • COMP-3, by definition, is packed decimal. The only valid values are 0 - 9 and the letter that represents the sign (e.g. "F") which is the last nibble in the last byte. On z/OS, if you try to do arithmetic on a COMP-3 field and it contains x'40' or any non-number, the program abends with an S0C7 (data exception). Here is a quick reference: s0c7.blogspot.com. Every byte contains 2 digits with the last byte containing 1 digit and the sign.
  • So if every byte contains two digits, what is wrong about x'40'? One digit 4, one digit 0, everything just as you described it. Note: I'm not talking about the last byte of the field, but a byte somewhere in the middle.
  • While the digits 4 and 0 may appear together in a packed decimal field, that is not equal to x'40' (a space in EBCDIC) and does not display as such. The value is 40, not x'40'. A packed decimal field must be processed in its entirety to be a valid number. The number 40 would be stored as x'04 0F' (assuming an unsigned number).
  • "A packed decimal field must be processed in its entirety" - and exactly that is what regular file-transfer tools don't do. They look at one byte at a time, and when they encounter a packed field with value 400 they will convert the byte sequence x'40 0F' to x'20 0F' when converting EBCDIC to ASCII - assuming they are handling the shift-in character correctly...
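The validity rule discussed in these comments can be sketched as a check, mimicking what z/OS arithmetic enforces (a non-digit nibble in a digit position raises S0C7). The class and method names are mine; note the second case, which shows the other side of the argument above: a field corrupted by ASCII translation can still pass the check while holding the wrong value.

```java
public class PackedValidator {
    // Every nibble except the last must be a digit 0-9; the final (sign)
    // nibble must be A-F. Otherwise arithmetic on the field would abend.
    static boolean isValidPacked(byte[] field) {
        for (int i = 0; i < field.length; i++) {
            int hi = (field[i] >> 4) & 0x0F;
            int lo = field[i] & 0x0F;
            if (hi > 9) return false;
            boolean isSignNibble = (i == field.length - 1);
            if (isSignNibble ? lo < 0x0A : lo > 9) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isValidPacked(new byte[]{0x04, 0x0F}));        // 40: valid
        System.out.println(isValidPacked(new byte[]{0x20, 0x0C}));        // 200 (was 400): still "valid"!
        System.out.println(isValidPacked(new byte[]{(byte) 0xA1, 0x2C})); // nibble A in digit position: invalid
    }
}
```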