I have a dataset of more than 16 million records, stored in a CSV file of about 1 GB. After generating the nanocubes index (roughly 8 GB on disk) and starting the server, the nanocubes server crashes as soon as the nanoviewer connects. The machine running the nanocubes server has 32 GB of RAM. Have you run into this problem before, and how did you solve it?
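As a quick diagnostic, it may help to downsample the CSV before rebuilding the index, to check whether the crash is driven by memory pressure at this data volume. Below is a minimal sketch using pandas; the file paths, sample fraction, and chunk size are placeholders, not part of the nanocubes tooling:

```python
import pandas as pd

# Placeholder paths; adjust to the actual dataset location.
INPUT_CSV = "full_dataset.csv"      # ~16M rows, ~1 GB
OUTPUT_CSV = "sampled_dataset.csv"

SAMPLE_FRACTION = 0.1    # keep ~10% of rows for a smaller test index
CHUNK_SIZE = 1_000_000   # stream in chunks to keep memory usage low

# Read the CSV in chunks and sample each chunk, so the full file
# never has to fit in memory at once.
first_chunk = True
for chunk in pd.read_csv(INPUT_CSV, chunksize=CHUNK_SIZE):
    sampled = chunk.sample(frac=SAMPLE_FRACTION, random_state=42)
    sampled.to_csv(
        OUTPUT_CSV,
        mode="w" if first_chunk else "a",
        header=first_chunk,
        index=False,
    )
    first_chunk = False
```

If an index built from the sampled file serves the nanoviewer without crashing, that points to the full index plus query working set exceeding the 32 GB available rather than a bug in the viewer itself.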