I have an items table as follows:
CREATE TABLE items (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT,
    order_item INTEGER,
    UNIQUE (order_item)
);
The order_item field represents the position of items, and users can reorder them using drag-and-drop functionality. Here’s the API structure for updating the order:
- API:
PUT {{domain}}/v2/items/:id/update_order
- Payload:
{ "order_item": 1 }
Problem:
When a user updates an item’s order_item, I need to handle the following scenarios efficiently, considering there could be millions of records:
- Reordering:
If an item with order_item = 1,000,000 is moved to order_item = 5, the order_item values of every record in the affected range must be incremented or decremented to resolve conflicts and preserve the ordering.
- Deletion Handling:
When an item is deleted, the order_item values of subsequent records must be adjusted to maintain continuous numbering.
- Large-Scale Operations:
Given the large number of records, I want to minimize performance bottlenecks such as lock contention and keep the API responsive.
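For concreteness, here is a minimal sketch of the range-shift approach I have in mind, using Python's sqlite3 against the schema above (the function name and the negate-then-flip trick are my own, not an established API):

```python
import sqlite3

def move_item(conn, item_id, new_pos):
    """Move one item to new_pos, shifting every row in between by one."""
    cur = conn.cursor()
    (old_pos,) = cur.execute(
        "SELECT order_item FROM items WHERE id = ?", (item_id,)
    ).fetchone()
    if new_pos == old_pos:
        return
    # Park the moving row at NULL (UNIQUE permits NULL in SQLite)
    # so its old slot is free while neighbours shift.
    cur.execute("UPDATE items SET order_item = NULL WHERE id = ?", (item_id,))
    if new_pos < old_pos:
        # Moving toward the front: [new_pos, old_pos - 1] shift up by one.
        cur.execute(
            "UPDATE items SET order_item = -(order_item + 1) "
            "WHERE order_item BETWEEN ? AND ?",
            (new_pos, old_pos - 1),
        )
    else:
        # Moving toward the back: [old_pos + 1, new_pos] shift down by one.
        cur.execute(
            "UPDATE items SET order_item = -(order_item - 1) "
            "WHERE order_item BETWEEN ? AND ?",
            (old_pos + 1, new_pos),
        )
    # Flip the temporary negatives back; the two-pass dance avoids
    # transient UNIQUE collisions while contiguous values shift.
    cur.execute("UPDATE items SET order_item = -order_item WHERE order_item < 0")
    cur.execute("UPDATE items SET order_item = ? WHERE id = ?", (new_pos, item_id))
    conn.commit()

# Demo on an in-memory database seeded with five rows at positions 1-5.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE items (id INTEGER PRIMARY KEY AUTOINCREMENT, "
    "name TEXT, order_item INTEGER, UNIQUE (order_item))"
)
conn.executemany(
    "INSERT INTO items (name, order_item) VALUES (?, ?)",
    [(f"item{i}", i) for i in range(1, 6)],
)
move_item(conn, 5, 2)  # the last item jumps to position 2
final_order = [r[0] for r in conn.execute("SELECT id FROM items ORDER BY order_item")]
print(final_order)  # → [1, 5, 2, 3, 4]
```

This works, but it rewrites every row between the two positions, so a move from 1,000,000 to 5 touches roughly a million rows in one transaction. That write amplification is exactly the bottleneck I'm worried about.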
Key Questions:
Efficient Updates:
How can I efficiently update the order_item field for potentially millions of records during a reorder or deletion?
Optimal Data Type:
Should I continue using INTEGER for order_item, or would switching to a different type (e.g., FLOAT) be better for handling reordering scenarios?
API Design:
What strategies should I use to design the API for handling such large-scale updates efficiently?
Considerations & Updates:
Why Adjust on Deletion?:
Continuous numbering helps ensure consistent ordering. If this approach is problematic, I’m open to alternative suggestions.
Handling Conflicts:
When an item’s order_item is updated (e.g., from 1,000,000 to 5), any existing item with order_item = 5 (or in the affected range) must be shifted to accommodate the change.
Order Representation:
If a better representation of “order” exists, such as using intervals or ranges, I’d love to learn more.
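One alternative I've seen mentioned is fractional (midpoint) ordering: store order_item as a REAL and place a moved item halfway between its new neighbours, so only the moved row is written. A hypothetical sketch (helper name and values are mine, purely illustrative):

```python
import sqlite3

# order_item as REAL: a move writes one row, regardless of table size.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE items (id INTEGER PRIMARY KEY AUTOINCREMENT, "
    "name TEXT, order_item REAL, UNIQUE (order_item))"
)
conn.executemany(
    "INSERT INTO items (name, order_item) VALUES (?, ?)",
    [("a", 1.0), ("b", 2.0), ("c", 3.0)],
)

def move_between(conn, item_id, prev_pos, next_pos):
    # Drop the moved item at the midpoint of its new neighbours.
    conn.execute(
        "UPDATE items SET order_item = ? WHERE id = ?",
        ((prev_pos + next_pos) / 2.0, item_id),
    )
    conn.commit()

move_between(conn, 3, 1.0, 2.0)  # "c" lands at 1.5, between "a" and "b"
names = [r[0] for r in conn.execute("SELECT name FROM items ORDER BY order_item")]
print(names)  # → ['a', 'c', 'b']
```

The catch, as I understand it, is that repeated inserts into the same gap halve the available float precision each time, so an occasional background renumbering pass is still needed; I'd welcome guidance on whether this (or a variant like string-based fractional indexing) is the right trade-off at this scale.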