Fetch data from 20 related tables (through id), combine them into a JSON file, and leverage Spring Batch for this

I have a Person database in SQL Server with tables like address, license, relatives etc., about 20 of them. All the tables have an id column that is unique per person, and there are millions of records in these tables. I need to combine each person's records using their common id, convert them to a JSON file with some column name changes, and push that JSON file to Kafka through a producer. If I can get an example with a Kafka producer as the item writer, fine, but the real problem is understanding the strategy and the specifics of how to use a Spring Batch item reader, processor, and item writer to create the composite JSON file. This is my first Spring Batch application, so I am relatively new to this.

I am hoping for suggestions on an implementation strategy that uses a composite reader or processor with the person id as the cursor: query each table by that id, convert the resulting records to JSON, and aggregate them into a composite, relational JSON document with root element PersonData that feeds a Kafka cluster.

Basically I have one data source, the same database, for the reader. I plan to use the Person table to fetch the id and the other records unique to the person, use that id in the where clause for the 19 other tables, convert each result set to JSON, then composite the JSON objects at the end and write to Kafka.
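To make the plan concrete, the chunk-oriented step I have in mind looks roughly like this (just a sketch in Spring Batch 4 style; the bean names are made up):

    import org.springframework.batch.core.Step;
    import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
    import org.springframework.batch.item.ItemProcessor;
    import org.springframework.batch.item.ItemReader;
    import org.springframework.batch.item.ItemWriter;
    import org.springframework.context.annotation.Bean;

    @Bean
    public Step personToKafkaStep(StepBuilderFactory steps,
                                  ItemReader<Long> personIdReader,                 // ids from the Person table
                                  ItemProcessor<Long, String> personJsonProcessor, // id -> composite JSON string
                                  ItemWriter<String> kafkaItemWriter) {            // producer that publishes the JSON
        return steps.get("personToKafkaStep")
                .<Long, String>chunk(100)
                .reader(personIdReader)
                .processor(personJsonProcessor)
                .writer(kafkaItemWriter)
                .build();
    }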

We had such a requirement in a project and solved it with the following approach.

  1. In a split flow whose steps run in parallel, we had a step for every table that loaded that table's data into a file, sorted by the common id (this is optional, but it makes testing easier if you have the data in files).

  2. Then we implemented our own "MergeReader". This MergeReader had a FlatFileItemReader for every file/table (let's call them dataReaders). All these FlatFileItemReaders were wrapped with a SingleItemPeekableItemReader. The logic of the MergeReader's read method is as follows:

    public MyContainerPerId read() throws Exception {

       // you need a container to store the items that belong together
       MyContainerPerId container = new MyContainerPerId();

       // peek into all "dataReaders" to find the lowest current key
       int lowestId = searchLowestKey();

       for (SingleItemPeekableItemReader<MyItem> dataReader : dataReaders) {
           // I assume that more than one entry in a table can belong to
           // the same person id
           while (dataReader.peek() != null
                   && dataReader.peek().getId() == lowestId) {
               container.add(dataReader.read());
           }
       }

       // the container now holds all entries from all tables
       // belonging to the same person id
       return container;
    }
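
For illustration, wiring one of those peekable dataReaders, together with the searchLowestKey helper used above, could look roughly like this (MyItem and the file layout are placeholders, not from our actual project):

    import org.springframework.batch.item.file.FlatFileItemReader;
    import org.springframework.batch.item.file.builder.FlatFileItemReaderBuilder;
    import org.springframework.batch.item.support.SingleItemPeekableItemReader;
    import org.springframework.core.io.FileSystemResource;

    // one peekable reader per table file written in step 1
    private SingleItemPeekableItemReader<MyItem> peekableReaderFor(String fileName) {
        FlatFileItemReader<MyItem> fileReader = new FlatFileItemReaderBuilder<MyItem>()
                .name(fileName + "Reader")
                .resource(new FileSystemResource(fileName))
                .delimited()
                .names("id", "payload")              // columns as exported in step 1
                .targetType(MyItem.class)
                .build();

        SingleItemPeekableItemReader<MyItem> peekable = new SingleItemPeekableItemReader<>();
        peekable.setDelegate(fileReader);
        return peekable;
    }

    // find the smallest person id currently visible across all dataReaders
    private int searchLowestKey() throws Exception {
        int lowest = Integer.MAX_VALUE;
        for (SingleItemPeekableItemReader<MyItem> dataReader : dataReaders) {
            MyItem next = dataReader.peek();
            if (next != null && next.getId() < lowest) {
                lowest = next.getId();
            }
        }
        return lowest;
    }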
    

If you need restart capability, you have to implement ItemStream in a way that it keeps track of the current read position for every dataReader.
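A minimal sketch of that, assuming the MergeReader implements ItemStream and simply delegates the stream callbacks to its dataReaders (FlatFileItemReader and SingleItemPeekableItemReader both implement ItemStream):

    import org.springframework.batch.item.ExecutionContext;
    import org.springframework.batch.item.ItemStream;
    import org.springframework.batch.item.ItemStreamException;

    // delegate open/update/close so each underlying reader stores its own
    // read position in the step's ExecutionContext
    @Override
    public void open(ExecutionContext executionContext) throws ItemStreamException {
        for (ItemStream dataReader : dataReaders) {
            dataReader.open(executionContext);
        }
    }

    @Override
    public void update(ExecutionContext executionContext) throws ItemStreamException {
        for (ItemStream dataReader : dataReaders) {
            dataReader.update(executionContext);
        }
    }

    @Override
    public void close() throws ItemStreamException {
        for (ItemStream dataReader : dataReaders) {
            dataReader.close();
        }
    }

Each delegate then records its own position in the ExecutionContext; just give every reader a distinct name so their context keys do not collide.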

I used the Driving Query Based ItemReaders usage pattern described here to solve this issue.

Reader: just a default implementation of JdbcCursorItemReader with SQL to fetch the unique relational id (e.g. select id from person)

Processor: uses this long id as the input; a DAO I implemented with Spring's JdbcTemplate fetches data by querying each of the tables for that specific id (e.g. select * from license where id = ?), maps the results to a Person POJO, then converts it to a JSON object (using Jackson) and finally to a string

Writer: either writes the JSON string out to a file, or publishes it to a topic in the Kafka case
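As a rough sketch of the processor half, assuming a plain JdbcTemplate and Jackson's ObjectMapper (the table names are illustrative, and a map stands in for the Person POJO):

    import java.util.LinkedHashMap;
    import java.util.Map;

    import com.fasterxml.jackson.databind.ObjectMapper;
    import org.springframework.batch.item.ItemProcessor;
    import org.springframework.jdbc.core.JdbcTemplate;

    public class PersonJsonProcessor implements ItemProcessor<Long, String> {

        private final JdbcTemplate jdbcTemplate;
        private final ObjectMapper objectMapper = new ObjectMapper();

        public PersonJsonProcessor(JdbcTemplate jdbcTemplate) {
            this.jdbcTemplate = jdbcTemplate;
        }

        @Override
        public String process(Long id) throws Exception {
            // the root "PersonData" object, one child list per table
            Map<String, Object> personData = new LinkedHashMap<>();
            personData.put("addresses", jdbcTemplate.queryForList(
                    "select * from address where id = ?", id));
            personData.put("licenses", jdbcTemplate.queryForList(
                    "select * from license where id = ?", id));
            // ... one query per remaining table ...
            return objectMapper.writeValueAsString(personData);
        }
    }

On the writer side, Spring Batch 4.2+ ships a KafkaItemWriter that wraps a KafkaTemplate, which covers the publish-to-topic case.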

We went through a similar exercise migrating 100mn+ rows from multiple tables as JSON so that we could post it to a message bus.

The idea is to create a view, de-normalize the data, and read from that view using JdbcPagingItemReader. Reading from one source has less overhead.

When you de-normalize the data, make sure you do not get multiple rows for the master table.

Example - SQL server -

create or alter view viewName as
select master.col1,
       master.col2,
       (select dep1.col1,
               dep1.col2
          from dependent1 dep1
         where dep1.col3 = master.col3
           for json path) as dep1
from master master;

The above gives you the dependent table data as a JSON string, with one row per master table row. Once you retrieve the data you can use GSON or Jackson to convert it to a POJO.

We tried to avoid JdbcCursorItemReader, as it pulls all the data into memory and reads it one by one. It does not support pagination.
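
For reference, a paging reader over such a view could be configured roughly like this (view and column names follow the example above; each row comes back as a map, with the dep1 column holding the nested JSON string for Jackson):

    import java.util.Map;

    import javax.sql.DataSource;
    import org.springframework.batch.item.database.JdbcPagingItemReader;
    import org.springframework.batch.item.database.Order;
    import org.springframework.batch.item.database.builder.JdbcPagingItemReaderBuilder;
    import org.springframework.jdbc.core.ColumnMapRowMapper;

    public JdbcPagingItemReader<Map<String, Object>> viewReader(DataSource dataSource) {
        return new JdbcPagingItemReaderBuilder<Map<String, Object>>()
                .name("viewReader")
                .dataSource(dataSource)
                .selectClause("select col1, col2, dep1")
                .fromClause("from viewName")
                .sortKeys(Map.of("col1", Order.ASCENDING)) // paging requires a unique sort key
                .pageSize(1000)
                .rowMapper(new ColumnMapRowMapper())
                .build();
    }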

Comments
  • did you try your options? what is your specific question?
  • I guess the stored procedure option seems to be discouraged, and the huge join query option is also out, so I have edited my question. The specific question is how to create a composite item reader that queries all the tables and builds a complex JSON file out of them for PersonData