For my project I have a hash table with 1M buckets (sometimes more).
Each bucket contains many values, which are stored sorted in a dynamic array.
I am looking for a reasonably easy way to write all values to a file (printf will do as well), but in sorted order.
Is there a way to do so in less than N*N complexity (e.g. by merging all 1M arrays/buckets)?
I believe the second half of the mergesort algorithm is what you’re looking for, since the first half is splitting the numbers into subranges, and the second half is merging those subranges. Mergesort as a whole has a complexity of O(n log n), and I believe the second half by itself would also be O(n log n).
To get you started, here’s a reasonably short and readable implementation of mergesort in C, taken from http://www.cquestions.com/2011/07/merge-sort-program-in-c.html
#include <stdio.h>
#define MAX 50

void mergeSort(int arr[], int low, int mid, int high);
void partition(int arr[], int low, int high);

int main() {
    int merge[MAX], i, n;

    printf("Enter the total number of elements: ");
    scanf("%d", &n);

    printf("Enter the elements which to be sort: ");
    for (i = 0; i < n; i++) {
        scanf("%d", &merge[i]);
    }

    partition(merge, 0, n - 1);

    printf("After merge sorting elements are: ");
    for (i = 0; i < n; i++) {
        printf("%d ", merge[i]);
    }
    return 0;
}

/* Recursively split [low, high] in half, sort each half, then merge them.
   (Despite its name, this is the top-level mergesort driver.) */
void partition(int arr[], int low, int high) {
    int mid;

    if (low < high) {
        mid = (low + high) / 2;
        partition(arr, low, mid);
        partition(arr, mid + 1, high);
        mergeSort(arr, low, mid, high);
    }
}

/* Merge the two sorted runs [low, mid] and [mid+1, high] into one.
   (Despite its name, this is only the merge step, not the full sort.) */
void mergeSort(int arr[], int low, int mid, int high) {
    int i, m, k, l, temp[MAX];

    l = low;
    i = low;
    m = mid + 1;

    /* Take the smaller head element of either run until one run is empty. */
    while ((l <= mid) && (m <= high)) {
        if (arr[l] <= arr[m]) {
            temp[i] = arr[l];
            l++;
        } else {
            temp[i] = arr[m];
            m++;
        }
        i++;
    }

    /* Copy whatever remains of the non-empty run. */
    if (l > mid) {
        for (k = m; k <= high; k++) {
            temp[i] = arr[k];
            i++;
        }
    } else {
        for (k = l; k <= mid; k++) {
            temp[i] = arr[k];
            i++;
        }
    }

    /* Copy the merged run back into place. */
    for (k = low; k <= high; k++) {
        arr[k] = temp[k];
    }
}
Sample output:
Enter the total number of elements: 5
Enter the elements which to be sort: 2 5 0 9 1
After merge sorting elements are: 0 1 2 5 9
Note that ratchet freak’s suggestion is basically the heapsort algorithm, which is also O(n log n), so that one should be fine too.
You can build a min-heap of all the buckets, keyed by each bucket's first element.
Then you repeatedly pop the smallest element from the bucket at the top of the heap and re-heapify.
If you are not allowed to modify the buckets, keep a per-bucket index of the next element still to be popped from it.
From what you wrote, I don’t see a reason why the original information about the hashtable and the buckets has to be used – it just overcomplicates things.
So flatten all buckets into a single list (complexity O(N)) and apply any sorting algorithm with complexity O(N * log(N)), for example quicksort or merge sort. Then write the results into a file.