This repository has been archived by the owner on Oct 24, 2023. It is now read-only.
My pipeline is IO-bound, and we are migrating our bare VMs to Kubernetes with the same SKUs, but the temporary disk is missing.
I can't seem to make this work on my end. I have a Standard_F32s_v2, which comes with 256 GB of temporary storage. I even passed --node-osdisk-size 1024, and the temporary disk is still not shown.
This is how we created the cluster:
az aks create -g test -n test --node-vm-size Standard_F32s_v2 --node-count 1 --attach-acr test --node-osdisk-size 1024
When I query Kubernetes, it reports the OS disk as 1 TB, but the temporary disk is still not listed.
When running lsblk, the temporary disk is there, but we cannot mount it:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 256G 0 disk
`-sdb1 8:17 0 256G 0 part
sr0 11:0 1 690K 0 rom
sda 8:0 0 1T 0 disk
|-sda14 8:14 0 4M 0 part
|-sda15 8:15 0 106M 0 part
`-sda1 8:1 0 1023.9G 0 part /etc/hosts
Mounting it fails because the device node is reported as a missing "special device":
mount: special device /dev/sdb1 does not exist
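On AKS Ubuntu nodes, the temporary (resource) disk is normally already formatted and mounted on the node itself, typically at /mnt, so instead of trying to mount /dev/sdb1 from inside a container, a pod can reach it through a hostPath volume. A minimal sketch, assuming the node mounts the temp disk at /mnt (the pod name, image, and /scratch mount path are placeholders, not from this issue):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tempdisk-demo            # hypothetical name
spec:
  containers:
  - name: worker
    image: busybox
    command: ["sh", "-c", "df -h /scratch && sleep 3600"]
    volumeMounts:
    - name: tempdisk
      mountPath: /scratch        # the node's temp disk appears here in the container
  volumes:
  - name: tempdisk
    hostPath:
      path: /mnt                 # where the Azure agent typically mounts the resource disk
      type: Directory
```

Note that emptyDir volumes do not help here: by default they live under the kubelet directory on the OS disk, not on the temporary disk.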
Our pipeline depends on disk IO: previously it sustained 400 MB/s, and now it does 75 MB/s, even with a P30 Premium SSD that is rated at 200 MB/s.
We don't want a 1 TB Premium SSD (P30); I was only using it to test the IO in this container.
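For comparing throughput between the VMs and the containers, a quick sequential-write check with dd is enough (the /data path is a placeholder for wherever the disk under test is mounted):

```shell
# Write 1 GiB and report the rate; conv=fdatasync flushes to disk before
# dd exits, so the figure reflects the disk rather than the page cache.
# /data is a placeholder for the mount point of the disk being tested.
dd if=/dev/zero of=/data/ddtest bs=1M count=1024 conv=fdatasync
rm /data/ddtest
```

Running the same command on the old VM and inside the pod makes the 400 MB/s vs. 75 MB/s comparison reproducible.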
How do we use Temporary Disk in Kubernetes?
We decided not to pursue AKS. Although it would have made everything cleaner and more manageable than maintaining a bunch of VMs, right out of the box the disk IO performance was not there, and the temporary disk size was wrong for a single-node cluster.
I was expecting that right after provisioning we would get the 256 GB temporary disk and the 400 MB/s disk IO speed of the same SKU we use for the VMs, but apparently it requires more configuration to make it work. Is there a reason the defaults don't disable caching on the OS disk?
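One related AKS knob worth noting: az aks create accepts --node-osdisk-type Ephemeral, which places the OS disk on the VM's local storage instead of a cached managed disk. A sketch, assuming the resource-group and cluster names from the issue (an ephemeral OS disk must fit within the VM's cache/local storage, so the 1024 GB size used above would not work here):

```
# Create a cluster whose nodes use an ephemeral OS disk on local VM storage
# (default OS disk size; 1024 GB exceeds what Standard_F32s_v2 can host ephemerally).
az aks create -g test -n test \
  --node-vm-size Standard_F32s_v2 \
  --node-count 1 \
  --node-osdisk-type Ephemeral
```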