With SQL Server 2022, INSERT INTO ... OUTPUT Inserted.* VALUES ... can return the inserted rows. That statement can insert data into one or two tables (with the same table structure).
Is it possible to insert data into multiple tables with different table structures?
CREATE TABLE tb_Master
(
Id INT IDENTITY(1,1) PRIMARY KEY,
[Master] VARCHAR(50)
)
CREATE TABLE tb_Slave
(
Id INT IDENTITY(1,1) PRIMARY KEY,
ParentId INT NOT NULL, --tb_Master.Id
[Slave] VARCHAR(50)
)
The C# entity models:
internal class MasterMdl
{
    public int Id { get; set; }
    public string Master { get; set; }
    public IEnumerable<SlaveMdl> Slaves { get; set; }
}
internal class SlaveMdl
{
    public int Id { get; set; }
    public int ParentId { get; set; }
    public string Slave { get; set; }
}
In C#, create the data to be inserted:
var list = new List<MasterMdl>();
for (int i = 0; i < 10; i++)
{
    var m = new MasterMdl()
    {
        Master = $"Master {i}",
    };
    SlaveMdl[] slaves = new SlaveMdl[5];
    for (int j = 0; j < 5; j++)
    {
        slaves[j] = new SlaveMdl()
        {
            Slave = $"Slave {j}"
        };
    }
    m.Slaves = slaves;
    list.Add(m);
}
Update: here is my sample C# code:
// Insert the masters and read the generated Ids back from the reader
using var masterCmd = await _dbConnectionContext.CreateCommand<TUserData>(mastSQL, masterParameterItems, token, false);
var reader = await masterCmd.ExecuteReaderAsync(token);
var allSlaves = new List<TSlave>();
var handler = statement.Context.GetSlaveHandler;
var parentId = statement.Context.ParentId;
await reader.ReadAsync(token);
foreach (var data in userData)
{
    var slaves = handler.Invoke(data);
    var id = reader[0]; // generated Id of the current master row
    foreach (var slave in slaves)
    {
        // propagate the master's Id into each slave's ParentId
        (parentId as IPropertyValue<TSlave>).SetValue(slave, id);
    }
    allSlaves.AddRange(slaves);
    await reader.ReadAsync(token);
}
reader.Close();
// Insert all slaves with a second command, then commit
using var slaveCmd = await _dbConnectionContext.CreateCommand<TSlave>(slaveSQL, slaveParameterItems, token, false);
var ret = await slaveCmd.ExecuteNonQueryAsync(token);
await _dbConnectionContext.Commit(token);
Insert Multiple Values Into Multiple Tables in SQL Server
https://www.geeksforgeeks.org/insert-multiple-values-into-multiple-tables-using-a-single-statement-in-sql-server/
CREATE TABLE GeekTable1 (
Id1 INT,
Name1 VARCHAR(200),
City1 VARCHAR(200)
);
CREATE TABLE GeekTable2 (
Id2 INT,
Name2 VARCHAR(200),
City2 VARCHAR(200)
);
INSERT INTO GeekTable1 (Id1, Name1, City1)
OUTPUT inserted.Id1, inserted.Name1, inserted.City1
INTO GeekTable2
VALUES (1, 'Komal', 'Delhi'),
(2, 'Khushi', 'Noida');
SELECT * FROM GeekTable1;
GO
SELECT * FROM GeekTable2;
GO
It is not possible to insert into two tables with the same command in SQL Server. There are various hacks that can allow it (OUTPUT INTO with XML or JSON, triggers, and probably one or two other things), but they are not really intended for that purpose, and are likely just sugar on top of transactions.
I understand the desire for a single command in a few ways:
- Atomicity (solved by transactions)
- Round-trips (solved by batching)
- Master-child relationships (solved by SCOPE_IDENTITY() or OUTPUT INTO + surrogate keys)
Let’s discuss each of the three to look at how they can be solved:
Atomicity
Atomicity refers to the desire to have a mutation occur in whole or not at all (never fractionally). This comes up in parent-child relationships since we may not want to create an invoice, for example, without any invoice lines. The canonical solution for an atomic mutation is to make it a transaction. A transaction once begun can only be committed or rolled back in entirety.
You can create transactions in client code. When your logic is mostly in the client, this is often cleanest since it lets you avoid significant changes.
using (var tscope = new TransactionScope(/* options - especially if using async */))
{
    // perform mutations as needed
    // ...
    // If the transaction should be committed, call Complete then dispose
    tscope.Complete();
}
Note: there are various transaction mechanisms which do not rely upon TransactionScope and can be higher performance since they do not allow fallback to DTC. They're great when you need them, but TransactionScope is still fine – even cross-platform without MSDTC.
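For reference, a minimal sketch of such a connection-local transaction, assuming Microsoft.Data.SqlClient and the tb_Master table from the question (the connection string is a placeholder):

using Microsoft.Data.SqlClient;

// Connection-local transaction: everything runs on a single connection,
// so there is no possibility of escalation to a distributed transaction.
var connectionString = "..."; // placeholder
await using var con = new SqlConnection(connectionString);
await con.OpenAsync();
await using var tran = await con.BeginTransactionAsync();
try
{
    using var cmd = new SqlCommand(
        "INSERT INTO tb_Master ([Master]) VALUES (@m);", con, (SqlTransaction)tran);
    cmd.Parameters.AddWithValue("@m", "hello world");
    await cmd.ExecuteNonQueryAsync();

    // ... further mutations on the same connection/transaction ...

    await tran.CommitAsync();
}
catch
{
    await tran.RollbackAsync();
    throw;
}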
Alternately, you can create a transaction in SQL using BEGIN TRAN; / COMMIT TRAN;. There is an implicit transaction on each command run inside SQL (e.g.: a trigger runs in the implicit transaction of the INSERT it was triggered by).
SET XACT_ABORT ON;
BEGIN TRAN;
INSERT INTO Parent (...) VALUES (...);
DECLARE @ParentID INT = SCOPE_IDENTITY();
INSERT INTO Child (ParentID, ...) VALUES (@ParentID, ...);
COMMIT TRAN;
Transactions are highly affected by options, not least of which are TRANSACTION ISOLATION LEVEL and XACT_ABORT. XACT_ABORT ensures that the transaction will be rolled back if a non-terminating error (e.g.: duplicate key) is encountered; otherwise the error is thrown but the transaction remains open until explicitly committed or rolled back. Programmers from procedural backgrounds are used to the idea that an exception stops execution, and thus should SET XACT_ABORT ON; in most cases.
Batching
There is a cost to submitting a batch (a set of commands) to the server. Each batch has to round-trip across the network to the DB server. By way of example, neglecting other factors, with 100ms round-trip time:
- 10x batches of 1x command will take 1 second
- 1x batch of 10x commands will take 100 milliseconds
In many cases, it is more significant to reduce the number of batches submitted – not the number of commands. Commands execute in-process of the SQL server and are comparatively lightweight.
You can reduce the number of batches by having multiple commands inside a single SqlCommand or equivalent client code. E.g.:
var batch = @"
SET XACT_ABORT ON;
BEGIN TRAN;
INSERT INTO Parent (...) VALUES (...);
DECLARE @ParentID INT = SCOPE_IDENTITY();
INSERT INTO Child (ParentID, ...) VALUES (@ParentID, ...);
COMMIT TRAN;
";
var sqc = new SqlCommand(batch, con);
// ... add parameters ...
sqc.ExecuteNonQuery();
This can get hairy pretty quickly, so stored procedures are a great way to avoid unreadable client code. Stored procedures are a "remembered" batch stored on the server as part of the schema. Table-valued parameters, XML, or JSON can be used to pass parent-child data to the server.
CREATE PROCEDURE InsertInvoice
@InvoiceID int = null OUTPUT,
@CustomerID int,
@Lines dbo.InvoiceLinesTVP READONLY
AS
BEGIN
SET NOCOUNT ON;
SET XACT_ABORT ON;
BEGIN TRAN;
INSERT INTO dbo.Invoices (CustomerID, ...) VALUES (@CustomerID, ...);
SET @InvoiceID = SCOPE_IDENTITY();
INSERT INTO dbo.InvoiceLines (InvoiceID, ...)
SELECT @InvoiceID, ...
FROM @Lines;
COMMIT TRAN;
END;
Stored procedures can be called by using the command type of StoredProcedure (or equivalent in different clients):
var sqc = new SqlCommand("dbo.InsertInvoice", con);
sqc.CommandType = CommandType.StoredProcedure;
// add parameters
sqc.ExecuteNonQuery();
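The "// add parameters" step is where the TVP comes in on the client side. Here is a hedged sketch of passing @Lines to the InsertInvoice procedure above; the column list of the table type is an assumption (match it to your schema), and con is an open SqlConnection:

using System.Data;
using Microsoft.Data.SqlClient;

// Assumed one-time server-side setup (columns are illustrative):
//   CREATE TYPE dbo.InvoiceLinesTVP AS TABLE (ItemID int NOT NULL, Quantity int NOT NULL);

// Shape the child rows as a DataTable whose columns match the table type.
var lines = new DataTable();
lines.Columns.Add("ItemID", typeof(int));
lines.Columns.Add("Quantity", typeof(int));
lines.Rows.Add(204, 3);
lines.Rows.Add(126, 1);

var sqc = new SqlCommand("dbo.InsertInvoice", con);
sqc.CommandType = CommandType.StoredProcedure;
sqc.Parameters.AddWithValue("@CustomerID", 101);

// A structured parameter carries the whole set of child rows in one round-trip.
var tvp = sqc.Parameters.AddWithValue("@Lines", lines);
tvp.SqlDbType = SqlDbType.Structured;
tvp.TypeName = "dbo.InvoiceLinesTVP";

// The OUTPUT parameter returns the generated invoice id.
var invoiceId = sqc.Parameters.Add("@InvoiceID", SqlDbType.Int);
invoiceId.Direction = ParameterDirection.Output;

sqc.ExecuteNonQuery();
var newInvoiceId = (int)invoiceId.Value;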
Parent-child relationships
With a single parent, SCOPE_IDENTITY() is sufficient as shown above.
When there are multiple parents, using surrogate keys with OUTPUT INTO is appropriate. If we are inserting multiple invoices with disparate lines, we might have the client assign locally unique indices (or use the implicit XML indices):
| InvoiceIndex | CustomerID | … |
|---|---|---|
| 0 | 101 | … |
| 1 | 395 | … |
Those indices can be referenced from the lines TVP:
| InvoiceIndex | ItemID | … |
|---|---|---|
| 0 | 204 | … |
| 0 | 126 | … |
| 0 | 239 | … |
| 1 | 493 | … |
The stored procedure then inserts the invoices and captures the index-to-identity map with OUTPUT INTO. One wrinkle: the OUTPUT clause of a plain INSERT cannot reference source columns such as InvoiceIndex, so the insert is phrased as a MERGE that never matches:
CREATE PROCEDURE dbo.InsertInvoices
@Invoices dbo.InsertInvoicesTVP READONLY,
@Lines dbo.InsertInvoicesLinesTVP READONLY
AS
BEGIN
SET NOCOUNT ON;
SET XACT_ABORT ON;
BEGIN TRAN;
DECLARE @InvoiceIndexMap TABLE
(
InvoiceIndex int not null PRIMARY KEY,
InvoiceID int not null
);
MERGE INTO dbo.Invoices AS t
USING @Invoices AS i ON 1 = 0  -- never matches, so every source row is inserted
WHEN NOT MATCHED THEN
    INSERT (CustomerID, ...) VALUES (i.CustomerID, i.[...])
OUTPUT i.InvoiceIndex, INSERTED.InvoiceID INTO @InvoiceIndexMap;
INSERT INTO dbo.InvoiceLines (...)
SELECT m.InvoiceID, l.[...]
FROM @InvoiceIndexMap m
INNER JOIN @Lines l on m.InvoiceIndex = l.InvoiceIndex;
SELECT * FROM @InvoiceIndexMap;
COMMIT TRAN;
END;
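Calling that from the client looks much the same, just with two structured parameters, and the index map comes back as a result set. A sketch under the same assumptions (con is an open SqlConnection; the DataTable columns must match your CREATE TYPE definitions):

using System.Data;
using Microsoft.Data.SqlClient;

var invoices = new DataTable();
invoices.Columns.Add("InvoiceIndex", typeof(int));
invoices.Columns.Add("CustomerID", typeof(int));
invoices.Rows.Add(0, 101);
invoices.Rows.Add(1, 395);

var lines = new DataTable();
lines.Columns.Add("InvoiceIndex", typeof(int));
lines.Columns.Add("ItemID", typeof(int));
lines.Rows.Add(0, 204);
lines.Rows.Add(0, 126);
lines.Rows.Add(0, 239);
lines.Rows.Add(1, 493);

var sqc = new SqlCommand("dbo.InsertInvoices", con) { CommandType = CommandType.StoredProcedure };
sqc.Parameters.Add(new SqlParameter("@Invoices", SqlDbType.Structured)
    { TypeName = "dbo.InsertInvoicesTVP", Value = invoices });
sqc.Parameters.Add(new SqlParameter("@Lines", SqlDbType.Structured)
    { TypeName = "dbo.InsertInvoicesLinesTVP", Value = lines });

// The procedure ends with SELECT * FROM @InvoiceIndexMap, so the
// client-assigned index and the server-generated id come back paired.
using var reader = sqc.ExecuteReader();
while (reader.Read())
{
    var invoiceIndex = reader.GetInt32(0); // index assigned by the client
    var invoiceId = reader.GetInt32(1);    // identity generated by the server
    // map the id back onto the in-memory models as needed
}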
If you are doing this a lot, you might consider using a document database instead of SQL Server, or relying upon the XML or JSON functionality of SQL Server.
In SQL you could use a transaction to make sure both tables are filled, or none. You will still need two INSERT statements, but they will both be applied or rolled back as if they were one.
Here is an example:
declare @newid int;
begin try
begin tran
insert into tb_Master (Master) values ('hello world');
set @newid = Scope_identity();
insert into tb_slave (ParentId, Slave) values (@newid, 'good morning');
commit tran -- apply both insert statements
end try
begin catch
rollback tran; -- undo both insert statements
throw;
end catch;
select * from tb_Master;
select * from tb_Slave;
Because all inserts are done in a transaction (begin tran), the rollback tran will undo all changes in all tables, while the commit tran will apply all changes in all tables. So you are never stuck with half inserts/updates.
See also this dbFiddle where you can test what happens when an error does occur.
If you want a shorter version, you can do it without the try/catch/throw, like this:
SET XACT_ABORT ON;
set nocount on;
declare @newid int;
begin tran
insert into tb_Master (Master) values ('hello world again');
set @newid = Scope_identity();
insert into tb_slave (ParentId, Slave) values (@newid, 'good morning again');
commit tran
In this case there is no need to write a rollback tran, because SET XACT_ABORT ON; causes the transaction to be rolled back automatically after the first error that occurs.
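To see that from the client, here is a hedged sketch (assuming the tb_Master/tb_Slave tables from the question and an open SqlConnection con): the NOT NULL violation in the second insert aborts the batch, and SQL Server rolls the whole transaction back before the exception reaches C#.

using Microsoft.Data.SqlClient;

const string batch = @"
SET XACT_ABORT ON;
BEGIN TRAN;
INSERT INTO tb_Master ([Master]) VALUES ('will be rolled back');
INSERT INTO tb_Slave (ParentId, Slave) VALUES (NULL, 'boom'); -- violates NOT NULL
COMMIT TRAN;";

try
{
    using var cmd = new SqlCommand(batch, con);
    cmd.ExecuteNonQuery();
}
catch (SqlException)
{
    // XACT_ABORT already rolled everything back on the server:
    // neither tb_Master nor tb_Slave contains the new row.
}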