Can't output results of FIO test for rook-ceph-block storage #74
Hi @vkamlov,
I added some debugging that should print the original string. We can figure out why it's failing to parse it.
The output is the same. There is no debug information in this build.
Except for empty brackets: ./kubestr fio -s rook-ceph-block
Can you try again? I added additional debugging. It looks like the FIO results are coming up empty, which is why it failed to parse.
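The failure mode described here, empty fio output reaching the JSON parser, can be sketched in Python (the helper name is hypothetical, not kubestr's actual code; kubestr itself is written in Go, where the same situation produces "unexpected end of JSON input"):

```python
import json

def parse_fio_output(raw: str) -> dict:
    """Hypothetical sketch: fail loudly when fio produced no output,
    instead of handing an empty string to the JSON parser."""
    if not raw.strip():
        raise ValueError("fio produced no output; nothing to parse")
    return json.loads(raw)

# Parsing an empty string is exactly what trips the JSON decoder:
try:
    json.loads("")
except json.JSONDecodeError as exc:
    print("parse failed:", exc.msg)
```

Checking for an empty result before decoding turns an opaque parser error into a message that points at fio itself.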
./kubestr fio -s rook-ceph-block goroutine 54 [running]:
Heh, that made things worse. Can we try again?
./kubestr fio -s rook-ceph-block |
Any updates?
Hey @vkamlov, sorry for the delay. |
If not, can you try deploying the following pod and PVC -
After doing that we can run
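For reference, a minimal sketch of such a pod and PVC (the names, image, and mount path are illustrative, not the exact manifest from this thread; it assumes the rook-ceph-block StorageClass):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fio-debug-pvc            # illustrative name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: rook-ceph-block
  resources:
    requests:
      storage: 100Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: fio-debug-pod            # illustrative name
spec:
  containers:
  - name: fio
    image: fio-debug-image       # placeholder: substitute any image with fio installed
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: fio-debug-pvc
```

With the pod running, fio can be invoked manually via kubectl exec to see the raw output before kubestr tries to parse it.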
I deployed the container and ran some tests I found in the official fio documentation.
So you have to run it against the directory
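The point about running fio against the mounted directory can be sketched as a job file (the parameters are illustrative, not kubestr's exact defaults; it assumes the PVC is mounted at /data):

```
; run with: fio --output-format=json randread.fio
[global]
directory=/data        ; target the mounted volume, not a raw device
ioengine=libaio
direct=1
runtime=15
time_based

[randread_iops]
bs=4K
iodepth=64
rw=randread
```

Using directory= makes fio create its test files on the mounted filesystem, which is what kubestr's pod-based test does.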
I also ran all the tests from the source file fio_jobs.go. The results look good, like this last test:
Only this string looks strange, but I don't know if it matters
Hmm, the last thing that's missing is the
I tried on a cluster created with kubeadm (k8s v1.15) and Rancher (k8s v1.15-1.21)
Which is why this is perplexing. From within the pod it should seem like the same image with a volume attached at /data.
Earlier you offered a call. How can we set that up?
@bathina2 & @vkamlov -- Interestingly, I've run into the same problem with v0.4.18 using ceph-block storage with Rook. The FIO tests run and display results successfully when I use Kubestr v0.4.17. I am running kubestr from a CentOS 7 VM against a Kubernetes cluster v1.21.3, with Rook v1.7 deployed with block storage.
Alright, I spoke too soon. I started changing some parameters and ran into issues once I adjusted the runtime parameter on a seq-read fio test. The FIO test runs without issues with this setting:
However, the following causes the error condition when the job finishes:
Result:
I've seen similar issues because I've had to turn the
Which makes the output from fio not valid JSON and therefore causes errors.
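One way this mixed output can be handled, as a sketch (the helper name is hypothetical, not kubestr's actual code): skip any non-JSON lines fio prints before the JSON body begins.

```python
import json

def extract_json(raw: str) -> dict:
    """Hypothetical cleanup: fio can emit warning lines before the JSON
    body; skip everything before the first '{' so the rest parses."""
    start = raw.find("{")
    if start == -1:
        raise ValueError("no JSON object found in fio output")
    return json.loads(raw[start:])

mixed = 'fio: warning: example non-JSON preamble\n{"jobs": []}'
print(extract_json(mixed))  # {'jobs': []}
```

This is a sketch under the assumption that the JSON body itself is intact; if fio also interleaves output after the closing brace, a stricter extraction would be needed.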
Info:
Version kubestr - any
Workstation where kubestr runs - macOS Big Sur 11.2.3 / Debian 10 (buster)
shell - bash/zsh
Scenario:
$ ./kubestr
Kubernetes Version Check:
Valid kubernetes version (v1.16.15) - OK
RBAC Check:
Kubernetes RBAC is enabled - OK
Aggregated Layer Check:
The Kubernetes Aggregated Layer is enabled - OK
Available Storage Provisioners:
rook-ceph.rbd.csi.ceph.com:
Cluster is not CSI snapshot capable. Requires VolumeSnapshotDataSource feature gate.
This is a CSI driver!
(The following info may not be up to date. Please check with the provider for more information.)
Provider: Ceph RBD
Website: https://github.com/ceph/ceph-csi
Description: A Container Storage Interface (CSI) Driver for Ceph RBD
Additional Features: Raw Block, Snapshot, Expansion, Topology, Cloning
$ ./kubestr fio -s rook-ceph-block
PVC created kubestr-fio-pvc-xvbxj
Pod created kubestr-fio-pod-vkzb8
Running FIO test (default-fio) on StorageClass (rook-ceph-block) with a PVC of Size (100Gi)
Elapsed time- 50.430796167s
FIO test results:
Failed while running FIO test.: Unable to parse fio output into json.: unexpected end of JSON input - Error