Update table every 1000 rows

I am trying to do an update on a specific record every 1000 rows using Postgres. I am looking for a better way to do that. My function is described below:

CREATE OR REPLACE FUNCTION update_row()
 RETURNS void AS
$BODY$
declare
  myUID  integer;
  nRow   integer;
  maxUid integer;
BEGIN
  nRow := 1000;
  select max(uid_atm_inp) into maxUid from tab where field1 = '1240200';

  loop
    if (nRow > 1000 and nRow < maxUid) then

      select uid into myUID from tab
      where field1 = '1240200' and uid >= nRow limit 1;

      update tab
      set field = 'xxx'
      where field1 = '1240200' and uid = myUID;

      nRow := nRow + 1000;
    end if;
  end loop;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;

How can I improve this procedure? I think something is wrong: the loop never ends and it takes too much time.

To perform this task in SQL, you could use the row_number window function and update only those rows where the number is divisible by 1000.

Your loop doesn't finish because there is no EXIT or RETURN in it.
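
For example, a loop needs an explicit exit condition to terminate. A minimal sketch, with an assumed upper bound for illustration:

DO $$
declare
  nRow   integer := 1000;
  maxUid integer := 10000;  -- assumed bound, for illustration only
begin
  loop
    exit when nRow >= maxUid;  -- without this, the loop runs forever
    -- ... per-batch work goes here ...
    nRow := nRow + 1000;
  end loop;
end;
$$;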

I doubt you could ever rival the performance of a standard SQL update with a procedural loop. Instead of doing it a row at a time, just do it all as a single statement:

with t2 as (
  select
    uid, row_number() over (order by 1) as rn
  from tab
  where field1 = '1240200'
)
update tab t1
set field = 'xxx'
from t2
where
  t1.uid = t2.uid and
  mod(t2.rn, 1000) = 0;

Per my comment, I am presupposing what you mean by "every 1000th row," since there is no designation of how to determine which tuple is which row number; the order by 1 makes the numbering arbitrary. That is easily changed by editing the "order by" criteria.

Adding a second condition to the update's where clause (t1.field1 = '1240200') can't hurt, but it might not be necessary if the plan joins these with a nested loop.

This might be notionally similar to what Laurenz has in mind.
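
For instance, if uid reflects the intended order (an assumption; the question never states one), the numbering becomes deterministic:

with t2 as (
  select
    uid, row_number() over (order by uid) as rn  -- numbered by uid instead of arbitrarily
  from tab
  where field1 = '1240200'
)
update tab t1
set field = 'xxx'
from t2
where
  t1.uid = t2.uid and
  mod(t2.rn, 1000) = 0;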

I solved it this way:

CREATE OR REPLACE FUNCTION update_row()
 RETURNS void AS
$BODY$
declare
  myUID    integer;
  rowNum   integer;
  checkrow integer;
  myString varchar(272);

  cur_check_row cursor for
    select uid, row_number() over (order by uid) as rn, substr(fieldxx, 1, 244)
    from tab
    where field1 = '1240200' and uid >= 1000
    order by uid;
BEGIN
  open cur_check_row;
  loop
    fetch cur_check_row into myUID, rowNum, myString;
    EXIT WHEN NOT FOUND;  -- terminates the loop once the cursor is exhausted

    checkrow := mod(rowNum, 1000);

    if checkrow = 0 then
      update tab
      set fieldxx = myString || 'O'
      where uid = myUID;
    end if;
  end loop;
  close cur_check_row;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;
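
It can then be invoked as usual:

select update_row();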

Comments
  • I can't help but wonder: why every 1000th row? That's not a very transactional operation, which is fine, but I'm having trouble picturing your use case. I'm tempted to say you could collect the rows' ids into a temporary table with a primary key that increments serially by one, then select all rows where the temp table's key is evenly divisible by 1000 (see the sketch after these comments).
  • Because my task is to do that every 1000 rows... The solution of a temporary table seems too tricky for this issue. I will try a while loop.
  • Every 1000th row based on what criteria? If you just take the raw output of a table, then a modified row goes to the end due to MVCC. Is the selection of what defines row 1000 arbitrary, or is there some logical order to it?
  • Worse, the if condition never evaluates to true (nRow starts at 1000, and the condition is nRow > 1000).
  • OK, the condition should be >=, but what return should I add? I am doing an update... What am I missing?
  • I have to check, every 1000 rows, the record '1240200'. For instance, the row could be 1001, then 2001, and then 3005... This is the reason for: select uid from tab into myUID where field1 = '1240200' and uid >= nRow limit 1; I do not know exactly where the record is... I have to check it every "1000" rows and then update it.
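
A minimal sketch of the temporary-table idea from the first comment, reusing the tab and uid names from the question and assuming uid as the ordering criterion:

-- collect matching uids with a serially incrementing key
create temporary table tmp_rows (
  rn  serial primary key,
  uid integer
);

insert into tmp_rows (uid)
select uid
from tab
where field1 = '1240200'
order by uid;  -- assumed ordering; serial values follow insertion order

-- update every row whose position is a multiple of 1000
update tab
set field = 'xxx'
where uid in (select uid from tmp_rows where mod(rn, 1000) = 0);

drop table tmp_rows;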