I’m developing a small system with two components: one polls data from an internet resource and translates it into SQL data to persist it locally; the second reads that SQL data from the local instance and serves it as JSON via a RESTful API.
I was originally planning to persist the data with PostgreSQL, but because the application will have a very low volume of data to store and traffic to serve, I thought that was overkill. Is SQLite up to the job? I love the idea of the small footprint and of not having to maintain yet another SQL server for this one task, but I’m concerned about concurrency.
It seems that with write-ahead logging (WAL) enabled, a SQLite database can be read and written concurrently without either process being locked out.
Can a single SQLite database sustain two concurrent processes accessing it, if only one reads and the other writes? I’ve started writing the code, but I’m wondering whether this is a misapplication of SQLite.
You are looking for the File Locking And Concurrency documentation.
SQLite uses a series of file locks to handle concurrency: to read, several processes can each obtain a SHARED lock. A process that wants to write must first obtain a RESERVED lock, and only when it actually has to flush changes to disk does it move to the PENDING state. Every reading process must then release its SHARED lock, after which the writer can move to EXCLUSIVE and write to the actual database file.
Because the writer process only needs to lock the database file for actual writes (memory flushes, commits), a setup with just one reader and just one writer will perform quite well. I’d expect it to perform at least as well as a setup where a single process does all the reading and writing.
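To make this concrete, here is a minimal sketch of your two processes using Python’s standard-library `sqlite3` module, with WAL enabled so the reader never blocks the writer. The file name `data.db`, the `readings` table, and the `42.0` sample value are all illustrative assumptions, not part of your actual schema:

```python
import sqlite3

DB = "data.db"  # hypothetical path shared by the polling writer and the API reader

def writer():
    """Sketch of the polling process: persist one fetched value (42.0 as a stand-in)."""
    con = sqlite3.connect(DB)
    # WAL is a persistent property of the database file; setting it once is enough,
    # and it lets readers proceed while a write transaction is in progress.
    con.execute("PRAGMA journal_mode=WAL")
    con.execute("CREATE TABLE IF NOT EXISTS readings (ts TEXT, value REAL)")
    con.execute("INSERT INTO readings VALUES (datetime('now'), 42.0)")
    con.commit()
    con.close()

def reader():
    """Sketch of the API process: read all rows to serve them (e.g. as JSON)."""
    con = sqlite3.connect(DB)
    rows = con.execute("SELECT ts, value FROM readings").fetchall()
    con.close()
    return rows
```

In your system these two functions would live in separate processes opening their own connections to the same file; that is exactly the one-writer/one-reader shape SQLite handles well.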
SQLite is less suited to workloads where multiple processes frequently write to the same database, because each writer must take the database through the PENDING and EXCLUSIVE states in turn, so all changes are serialized.
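You can see that serialization from a single script by opening two connections and having both try to start a write transaction. This is only a demonstration sketch (the file name `locks_demo.db` and table `t` are made up); `timeout=0` makes the second connection fail immediately instead of retrying while the lock is held:

```python
import os
import sqlite3

DB = "locks_demo.db"  # hypothetical path, used only for this demonstration
if os.path.exists(DB):
    os.remove(DB)  # start from a fresh database file

# isolation_level=None puts the connections in autocommit mode,
# so BEGIN/COMMIT are issued explicitly below.
con1 = sqlite3.connect(DB, isolation_level=None)
con1.execute("CREATE TABLE t (x INTEGER)")

# timeout=0: raise "database is locked" at once instead of waiting for the lock
con2 = sqlite3.connect(DB, isolation_level=None, timeout=0)

con1.execute("BEGIN IMMEDIATE")      # first writer takes the RESERVED lock
con1.execute("INSERT INTO t VALUES (1)")

locked = False
try:
    con2.execute("BEGIN IMMEDIATE")  # second writer asks for RESERVED too
except sqlite3.OperationalError:
    locked = True                    # refused: only one RESERVED lock at a time

con1.execute("COMMIT")               # first writer releases its locks

con2.execute("BEGIN IMMEDIATE")      # now the second writer gets its turn
con2.execute("INSERT INTO t VALUES (2)")
con2.execute("COMMIT")
```

With only your single polling process writing, this contention never arises; it only becomes a concern if you later add more writers.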