I’m working on a Go application where I need to perform multiple database queries concurrently using goroutines. My goal is to manage the transaction in the main goroutine, ensuring that if any of the queries fail, the entire transaction is rolled back. Here’s the basic idea:
1. Start a transaction in the main goroutine.
2. Execute the queries concurrently in separate goroutines.
3. If any goroutine fails to execute its query, roll back the entire transaction.
The code might look something like this (in my real application it runs inside an API handler):
package main
import (
	"database/sql"
	"fmt"
	"log"
	"sync"

	_ "github.com/lib/pq"
)

type QueryResult struct {
	Err error
}
func main() {
	// In the real application I open a pgx connection pool via
	// pgxpool.NewWithConfig; database/sql with lib/pq is used here
	// to keep the example self-contained.
	db, err := sql.Open("postgres", "postgres://user:pass@localhost/mydb?sslmode=disable") // placeholder DSN
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	tx, err := db.Begin()
	if err != nil {
		log.Fatal(err)
	}
	var wg sync.WaitGroup
	resultCh := make(chan QueryResult, 3) // buffer size matches the number of goroutines

	wg.Add(3)
	go executeQuery(tx, &wg, resultCh)
	go executeQuery(tx, &wg, resultCh)
	go executeQuery(tx, &wg, resultCh)
	wg.Wait()
	close(resultCh)

	commit := true
	for result := range resultCh {
		if result.Err != nil {
			commit = false
			break
		}
	}

	if commit {
		if err := tx.Commit(); err != nil {
			log.Fatal(err)
		}
		fmt.Println("Transaction committed successfully")
	} else {
		if err := tx.Rollback(); err != nil {
			log.Fatal(err)
		}
		fmt.Println("Transaction rolled back due to an error")
	}
}
// executeQuery runs one INSERT on the shared transaction and reports the result.
func executeQuery(tx *sql.Tx, wg *sync.WaitGroup, resultCh chan QueryResult) {
	defer wg.Done()
	_, err := tx.Exec("INSERT INTO table1 (col1) VALUES ($1)", "value1")
	resultCh <- QueryResult{Err: err}
}
Questions:
1. Is starting the transaction in the main goroutine and running the queries in separate goroutines good practice?
2. Are there any potential issues with this method that I should be aware of?