I have a MongoDB collection that is almost 1 TB in size and already has indexes in place for faster querying. However, when I insert new data, the process becomes extremely slow because every index must be updated on each write. My current workaround is to drop the indexes before inserting, then recreate them afterward, but that is very time-consuming.
I am limited to a single server, so sharding across multiple machines is not an option.
What is the best approach to efficiently insert new data into a large indexed MongoDB collection without having to repeatedly drop and recreate the indexes?