Avoiding write conflicts while re-sorting a table


I have a large table that I need to re-sort periodically. This is partly based on a suggestion I was given to stay away from cluster keys: I insert data in a different order (by time) than I need it clustered (by ID), and that mismatch can cause re-clustering costs to get out of control.

Since I am writing to the table on an hourly basis, I am wary of these two processes conflicting: if I CTAS into a newly sorted temp table and then swap the table names, it seems like I am opening the door to a write landing on the source table during the copy and never making it into the temp table.
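To be concrete, the CTAS-and-swap pattern I'm describing looks roughly like this (a sketch; `events` and the `id` column are placeholders for my actual schema):

```sql
-- Build a sorted copy; any writes to EVENTS after this statement starts
-- are not captured in the copy (the race I'm worried about)
CREATE TABLE events_sorted AS
  SELECT * FROM events ORDER BY id;

-- Swap the sorted copy into place, then drop the old data
ALTER TABLE events SWAP WITH events_sorted;
DROP TABLE events_sorted;  -- after the swap, this holds the old unsorted data
```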

I figure I could set a flag while re-sorting that tells the ETL to pause its writes, but that seems hacky and fragile.

I was considering leveraging locking and transactions, but this doesn't seem like the right use case, since I don't think I'd be locking the table I am copying from while I write to a new table. Any advice on how to approach this?

There are some reasons to avoid automatic reclustering, but they are essentially the same reasons not to set up a job that re-sorts frequently: you are making the database do all the same work, just without the built-in management of it.

If your table is big enough that clustering by time is causing performance issues, and you know that the ID column is the main way the table is filtered (in JOINs and WHERE clauses), then it is probably a good candidate for automatic clustering.

So I would recommend at least testing out a cluster key on the ID column and then monitoring and comparing performance.
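Setting that up is a one-liner, and you can check how well the table is clustered on the key afterwards (a sketch; `events` and `id` are placeholder names):

```sql
-- Define a cluster key on ID; automatic clustering will maintain it
ALTER TABLE events CLUSTER BY (id);

-- Inspect clustering quality on that key (depth, overlap histogram)
SELECT SYSTEM$CLUSTERING_INFORMATION('events', '(id)');
```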

To briefly answer the question about re-sorting without conflicts as written: I might recommend using a time column to re-sort only records older than a given cutoff (probably into a separate table). While the sort runs, you may get some new records, but you can use that same time column to marry the new records up with the now-sorted older records.
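A sketch of that approach, assuming a load-timestamp column `loaded_at` and placeholder table/column names:

```sql
-- Re-sort only records older than a fixed cutoff into a separate table
CREATE TABLE events_sorted AS
  SELECT * FROM events
  WHERE loaded_at < '2020-01-01'   -- placeholder cutoff
  ORDER BY id;

-- Anything that arrived during or after the sort is identified by the same
-- time column, so it can be appended (or queried via UNION ALL) later
INSERT INTO events_sorted
  SELECT * FROM events
  WHERE loaded_at >= '2020-01-01';
```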


I've asked some clarifying questions in the comments about the clustering you are avoiding, but regarding your sort: have you considered spinning up a 4XL warehouse and leveraging INSERT OVERWRITE back into the table itself? It'd look something like:


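A minimal sketch of that INSERT OVERWRITE, with `events` and `id` standing in for your table and sort column:

```sql
-- Atomically replace the table's contents with a re-sorted copy.
-- Snowflake builds new micropartitions and swaps the metadata at commit,
-- so concurrent inserts queue behind it rather than interleaving.
INSERT OVERWRITE INTO events
  SELECT * FROM events ORDER BY id;
```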
Assuming your table isn't hundreds of TB in size, this should complete fairly quickly (within an hour, I would guess), and any inserts into the table during that period will queue up and wait for it to finish.


You might consider revoking INSERT, UPDATE, and DELETE privileges on the original table within the same script or procedure that performs the CTAS creating the newly sorted copy. After a successful swap, you can re-grant the privileges to the roles that perform the updates.
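A sketch of that sequence, assuming a placeholder ETL role `etl_role` and table `events`:

```sql
-- Block ETL writes for the duration of the rebuild
REVOKE INSERT, UPDATE, DELETE ON TABLE events FROM ROLE etl_role;

-- Build the sorted copy and swap it into place
CREATE TABLE events_sorted AS SELECT * FROM events ORDER BY id;
ALTER TABLE events SWAP WITH events_sorted;
DROP TABLE events_sorted;  -- now holds the old, unsorted data

-- Re-enable ETL writes
GRANT INSERT, UPDATE, DELETE ON TABLE events TO ROLE etl_role;
```

Note the ETL job will see permission errors rather than queued writes during this window, so it needs retry logic.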





  • Why not partition by id?
  • Define "large" in record count and total size. And what is special about the ID that you are sorting on? Do users filter on the ID column? Is it a unique key on the table? Is it sequential (getting larger over time)?
  • Right now it's at 13B records and 0.5 TB, and it does get larger over time. This is about 2 months of data; I already have another table acting as an older shard with 86B rows.
  • @BradKagawa How unique is the ID that you are clustering on, and is that what is used to filter when querying that table?
  • Yeah, the ID is always in my WHERE clause, which is why I would like to order by it or cluster by it. It is not a unique ID for the table, and were the table sorted by time, a given ID would be present throughout it... The best analogy I can think of is tracking purchases of a specific product through a history of transaction records: the transactions come in chronologically, but the product ID I filter on is interspersed, so I need to re-order or cluster... If this doesn't work well I may start sharding the tables, but I don't love the idea of maintaining so many tables.
  • I can try that, I have not seen a way to keep an eye on how much compute time is being attributed to the autoclustering, is this something that Snowflake breaks down in the billing anywhere?
  • Information on Viewing billing for automatic clustering is at the bottom of the page here: docs.snowflake.net/manuals/user-guide/…
  • This is a reasonable idea depending on detailed needs. Potential downsides: Because there is a truncate, this could put your table into a temporarily "invalid" state while records are being re-inserted where your table has some Old records, some new records, and a gap in between. New records could get inserted alongside the re-inserts in the same table and cause micropartitions to have wide ranges of ids, which would defeat the purpose here.
  • This isn't actually how that works. There is no truncate. In Snowflake, the INSERT OVERWRITE creates new micropartitions and when it is complete, creates the new metadata that points to those micropartitions. The metadata is locked during this operation, so no other insert statements would execute. They would queue up behind the commit of the INSERT OVERWRITE.
  • huh. the Snowflake docs say "Overwrite specifies to truncate the target table before inserting into the table, while retaining access control privileges on the table" docs.snowflake.net/manuals/sql-reference/sql/insert.html I'll have to look at that closer I suppose. Thanks.
  • My point was, you'll never see a TRUNCATE statement executed. It all happens within the metadata update, and this is locked during the process. So, there isn't a way for records to "sneak" into the table.
  • Ah. I suppose that makes sense. thank you for the clarifications.
  • This would cause incoming queries to fail unhelpfully, AND it requires higher-level permissions that are otherwise unnecessary. Locking the table serves a similar purpose and is simpler.
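On the billing question raised in the comments: Snowflake breaks out automatic-clustering credit consumption in the ACCOUNT_USAGE share. A sketch of a per-table cost query (the view and its columns are standard; the 30-day window is an arbitrary choice):

```sql
-- Credits consumed by automatic clustering, per table, over the last 30 days
SELECT table_name,
       SUM(credits_used) AS credits
FROM snowflake.account_usage.automatic_clustering_history
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY table_name
ORDER BY credits DESC;
```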