Is this a bug report or feature request?
Bug report.
Deviation from expected behavior:
I've deployed rook-nfs using the quick-start guide, then followed the "create and initialize NFS server" section to set up two NFS servers. One NFS server is backed by HDD storage and the other by SSD storage. The operator built the NFS servers successfully.
Next, I created a deployment and a PVC that used the StorageClass for the NFS server. When the pod first started, the PV was created fine and bound correctly in the pod. Everything worked as expected for a little while (maybe a week?). Then, all of a sudden, the pods were unable to access the volumes anymore: opening a shell and running 'ls' on the NFS volume would just hang.
When I restarted the pod that has the NFS volume, the pod failed to start. It never passes the "init" stage, and it eventually errors out because it is unable to mount the volume backed by the NFS server.
I've attempted to restart all the nodes and to schedule the pod on another node, but the issue persists.
The only way I was able to get the pod to mount the volume again was to change the volume spec in the deployment from a PVC to a direct NFS mount:
volumes:
  - name: gold-nfs-mount
    nfs:
      path: /gold-scratch/dir   # export
      server: 172.30.17.118     # service IP address of the NFS server
The weird thing is that this has happened once before, and the problem eventually went away on its own.
Expected behavior:
Being able to keep using a persistentVolumeClaim for the volume instead of mounting it with an nfs volume source.
How to reproduce it (minimal and precise):
Deploy the rook-nfs operator using the quick-start guide, then follow the "create and initialize NFS server" section to set up the NFS servers.
To make it easier, these are my manifests:
Persistent Volume:
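The original PV manifest was not preserved in this copy of the issue; a minimal sketch of a local PV backing one of the NFS servers could look like the following (the name, size, node, and SSD path are all assumptions):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gold-local-pv                    # hypothetical name
spec:
  capacity:
    storage: 100Gi                       # assumed size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gold-local-backing   # hypothetical class, used only to pair this PV with its PVC
  local:
    path: /mnt/ssd/gold                  # assumed mount point of the SSD on the node
  nodeAffinity:                          # local PVs must pin to the node that holds the disk
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1                 # assumed node name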
PVC + NFS Server:
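The exact manifest is likewise missing; following the shape of the rook-nfs quick-start, the backing PVC plus the NFSServer CR would look roughly like this (names and sizes are assumptions; the export name matches the /gold-scratch path in the workaround above):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gold-nfs-claim                   # hypothetical name
  namespace: rook-nfs
spec:
  storageClassName: gold-local-backing   # matches the PV sketch above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi                     # assumed size
---
apiVersion: nfs.rook.io/v1alpha1
kind: NFSServer
metadata:
  name: gold-nfs                         # hypothetical name
  namespace: rook-nfs
spec:
  replicas: 1
  exports:
    - name: gold-scratch                 # export seen in the workaround above
      server:
        accessMode: ReadWrite
        squash: "none"
      persistentVolumeClaim:
        claimName: gold-nfs-claim        # the backing PVC defined above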
StorageClass:
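A StorageClass for the rook-nfs provisioner, per the quick-start, would be along these lines (gold-local is the class name referenced in the verify step below; the server and export names follow the sketches above and are assumptions):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  labels:
    app: rook-nfs
  name: gold-local
parameters:
  exportName: gold-scratch       # assumed, matching the NFSServer sketch
  nfsServerName: gold-nfs        # assumed, matching the NFSServer sketch
  nfsServerNamespace: rook-nfs
provisioner: nfs.rook.io/rook-nfs-provisioner
reclaimPolicy: Delete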
Verify:
Deploy an app whose PVC uses the gold-local StorageClass, then wait (the failure only shows up after some time).
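As a concrete example of that step, a test claim and a pod mounting it might look like this (the claim name, image, and mount path are assumptions):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gold-scratch-claim       # hypothetical name
spec:
  storageClassName: gold-local
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi              # assumed size
---
apiVersion: v1
kind: Pod
metadata:
  name: gold-test
spec:
  containers:
    - name: app
      image: busybox             # assumed image
      command: ["sh", "-c", "ls /mnt/gold && sleep 3600"]
      volumeMounts:
        - name: gold-nfs-mount
          mountPath: /mnt/gold   # assumed mount path
  volumes:
    - name: gold-nfs-mount
      persistentVolumeClaim:
        claimName: gold-scratch-claim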
File(s) to submit:
The NFS server does not show any errors in its logs.
Environment:
Kernel (e.g. uname -a): Linux 5.11.0-43-generic
Rook version (use rook version inside of a Rook Pod): Rook NFS 1.7.3
Storage backend version (e.g. for ceph do ceph -v): Rook NFS 1.7.3
Kubernetes version (use kubectl version): v1.23.1