
Invalid character 'E' looking for beginning of value #19

Open
asc-adean opened this issue Jan 14, 2019 · 53 comments

@asc-adean

If I set the 'host' portion of secret.yml to a base64-encoded IP address or DNS hostname, I get this error in the logs:

kubectl -n kube-system logs -f freenas-nfs-provisioner-5559b967df-js7sb

I0114 21:55:22.266603       1 controller.go:926] provision "default/freenas-test-pvc" class "freenas-nfs": started
I0114 21:55:22.271765       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"freenas-test-pvc", UID:"a2d0a669-1846-11e9-b420-000c29289e1c", APIVersion:"v1", ResourceVersion:"10976528", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/freenas-test-pvc"
W0114 21:55:22.310721       1 dataset.go:60] invalid character 'E' looking for beginning of value
W0114 21:55:22.310772       1 controller.go:685] Retrying syncing claim "default/freenas-test-pvc" because failures 14 < threshold 15
E0114 21:55:22.310798       1 controller.go:700] error syncing claim "default/freenas-test-pvc": failed to provision volume with StorageClass "freenas-nfs": invalid character 'E' looking for beginning of value
I0114 21:55:22.310998       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"freenas-test-pvc", UID:"a2d0a669-1846-11e9-b420-000c29289e1c", APIVersion:"v1", ResourceVersion:"10976528", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "freenas-nfs": invalid character 'E' looking for beginning of value
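
(Side note on the message itself: "invalid character 'E' looking for beginning of value" is what Go's encoding/json reports when it is asked to decode a response body that is not JSON at all, e.g. a plain-text or HTML error page whose first byte happens to be 'E'. A minimal sketch reproducing the exact string, purely for illustration:)

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Decoding a non-JSON body (for example an error page) whose first
	// character is 'E' produces exactly this error from encoding/json.
	var out map[string]interface{}
	err := json.Unmarshal([]byte("Error: something went wrong"), &out)
	fmt.Println(err) // invalid character 'E' looking for beginning of value
}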
@travisghansen
Collaborator

What version are you running and can you send the secret (minus creds of course)?

@asc-adean
Author

Kubernetes v1.12.1
FreeNAS-9.10.2.U1

---
apiVersion: v1
kind: Secret
metadata:
  name: freenas-nfs
  namespace: kube-system
type: Opaque
data:
  # all values should be base64 encoded (ie: echo -n 'https' | base64)
  #
  # note that all values are relative to where the provisioner is running
  # ie: if running the provisioner directly on the FreeNAS server 'localhost'
  # would be a valid 'host' value

  # http|https
  # default: http

  # default: localhost (for running the provisioner out of cluster directly on FreeNAS node)
  host: MTkyLjE2OC4wLjIxMQ==

  # default: 80
  port: NDQz

  # default: root (api is only available for root currently)
  #username: root
  password: *redacted*

  # allow for self-signed/untrusted certs if using https
  # true|false
  # default: false
  allowInsecure: dHJ1ZQ==
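
(For reference, those values decode as shown below; the commands are just a sanity check with standard base64 tooling. Note the protocol key is commented out, which per the comments defaults to http, while the port decodes to 443, so it may be worth confirming that the protocol, port, and the FreeNAS UI settings line up.)

echo 'MTkyLjE2OC4wLjIxMQ==' | base64 --decode   # 192.168.0.211
echo 'NDQz' | base64 --decode                   # 443
echo 'dHJ1ZQ==' | base64 --decode               # true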

@travisghansen
Collaborator

Can you send the version of the controller as well?

@asc-adean
Author

Hi,

I'm not sure what you mean by the controller version; I cloned the repo yesterday and applied the configuration as stated in the README. Where would I find the controller version you need? Thanks

@travisghansen
Collaborator

That's enough info actually. The controller version is in the deployment manifest. Thanks!
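
(For anyone else hitting this: assuming the deployment is named after the pod shown earlier, something like the following should print the image tag the controller is running; the deployment name here is inferred, not confirmed.)

kubectl -n kube-system get deployment freenas-nfs-provisioner \
  -o jsonpath='{.spec.template.spec.containers[0].image}'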

@travisghansen
Collaborator

@asc-adean looking at the code, this appears to be the response to an API call made to FreeNAS, but I'm not certain yet. Did you use the default class.yaml as well? If not, can you send over what you used?

Also, what's the name of your pool in FreeNAS? Specifically, is it tank?

@travisghansen
Collaborator

@asc-adean any luck?

@totallyGreg

I just upgraded the deployment image from 2.3 to 2.4 and now I am having this issue as well. I've modified the two storage class providers I have to match the newer class.yaml.
I am now running FreeNAS 11.2U3 (I had this issue on 11.2U2 as well).

I0328 15:31:43.268383 1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"testspace", Name:"newtest", UID:"080c2369-516e-11e9-9b2b-a01e0b0bb9d5", APIVersion:"v1", ResourceVersion:"33798872", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "freenas-test": EOF

One thing to note: the new class.yaml has mountOptions as an empty map, but kubectl apply complains unless it's an empty list, so I changed it to mountOptions: []

error: error validating "freenas-test.yaml": error validating data: ValidationError(StorageClass.mountOptions): invalid type for io.k8s.api.storage.v1.StorageClass.mountOptions: got "map", expected "array"; if you choose to ignore these errors, turn validation off with --validate=false

Let me know if I can provide anything to help out. -g

@nmaupu
Owner

nmaupu commented Mar 28, 2019

Can you share the storage class used?

According to https://godoc.org/k8s.io/api/storage/v1, mountOptions is of type []string, so indeed it will fail if you use a map instead...
So I just realized that in class.yaml it should be declared as an empty slice ([]) and not a dict ({}):
https://github.com/nmaupu/freenas-provisioner/blob/master/deploy/class.yaml#L11
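
(Minimal illustration of the two forms, matching the validation error quoted above:)

mountOptions: {}   # empty map  -> rejected: got "map", expected "array"
mountOptions: []   # empty list -> valid; matches the []string field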

@totallyGreg

Sure, here is one. I have tried various configuration changes, including commenting out the mount options as well as using an empty list []. Currently I'm working on the assumption that it wants something in that field, so I added nolock. I have also tried a new secret file, although the old default is the one that has been working fine for months with the older 2.3 image.

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: freenas-test
#  annotations:
#    storageclass.kubernetes.io/is-default-class: "true"
provisioner: freenas.org/nfs
allowVolumeExpansion: true
reclaimPolicy: Delete
mountOptions:
  - nolock
parameters:
  # namespace of the secret which contains FreeNAS server connection details
  # default: kube-system
  #serverSecretNamespace:

  # name of the secret which contains FreeNAS server connection details
  # default: freenas-nfs
  # serverSecretName: 

  # the name of the parent dataset (or simply pool) where all resources will
  # be created, it *must* exist before provisioner will work
  # example: tank/k8s/mycluster
  # default: tank
  datasetParentName: Hitachi/test

  # whether to enforce quotas for each dataset
  # if enabled each newly provisioned dataset will set the appropriate quota
  # per the PVC
  # default: true
  #datasetEnableQuotas:

  # whether to reserve space when the dataset is created
  # if enabled each newly provisioned dataset will set the appropriate value
  # per the PVC
  # default: true
  #datasetEnableReservation:

  # if enabled provisioner will create parent datasets for each namespace
  # otherwise, all datasets will be provisioned in a flat manner
  # default: true
  #datasetEnableNamespaces:

  # if datasetEnableNamespaces is enabled, sets a per-namespace quota
  # example: 5M | 10G | 1T  (M, Mi, MB, MiB, G, Gi, GB, GiB, T, Ti, TB, or TiB)
  # default: 0 (no quota)
  #datasetNamespaceQuota:

  # if datasetEnableNamespaces is enabled, sets a per-namespace reservation
  # this should not be greater than the datasetNamespaceQuota
  # example: 5M | 10G | 1T  (M, Mi, MB, MiB, G, Gi, GB, GiB, T, Ti, TB, or TiB)
  # default: 0
  #datasetNamespaceReservation:

  # if enabled created datasets will adhere to reliable pattern
  # if datasetNamespaces == true dataset pattern is: <datasetParentName>/<namespace>/<PVC Name>
  # if datasetNamespaces == false dataset pattern is: <datasetParentName>/<namespace>-<PVC Name>
  # if disabled, datasets will be created with a name pvc-<uid> (the name of the provisioned PV)
  # default: true
  #datasetDeterministicNames:

  # if enabled and datasetDeterministicNames is enabled then datasets that
  # already exist (pre-provisioned out of band) will be retained by the
  # provisioner during deletion of the reclaim process
  # ignored if datasetDeterministicNames is disabled (collisions result in failure)
  # default: true
  #datasetRetainPreExisting:

  # the following parameters determine permissions and ownership of the
  # dataset mount directory (on FreeNAS) immediately upon creation
  # default: 0777, root, wheel
  #datasetPermissionsMode:
  #datasetPermissionsUser:
  #datasetPermissionsGroup:

  # this determines what the 'server' property of the NFS share will be
  # in kubernetes; its purpose is to provide flexibility between the control
  # and data planes of FreeNAS
  # default: uses the 'host' value from the secret
  #shareHost:

  # determines if newly created NFS shares will have the 'All Directories'
  # option checked
  # default: true
  #shareAlldirs:

  # Authorized hosts/networks (space-separated) allowed to access the shares
  # default: ""
  #shareAllowedHosts:
  #shareAllowedNetworks:

  # Determines root mapping
  # cannot be used simultaneously with shareMapAll{User,Group}
  # default: root:wheel
  #shareMaprootUser:
  #shareMaprootGroup:

  # Determines user mapping for all access (not recommended)
  # cannot be used simultaneously with shareMaproot{User,Group}
  # default: ""
  #shareMapallUser:
  #shareMapallGroup:

  # if enabled and datasetDeterministicNames is enabled then shares that
  # already exist (pre-provisioned out of band) will be retained by the
  # provisioner during deletion of the reclaim process
  # ignored if datasetDeterministicNames is disabled (collisions result in failure)
  # default: true
  #shareRetainPreExisting:

@totallyGreg

Any thoughts on how I can debug this?

@travisghansen
Collaborator

@totallyGreg are you running (or can you run) over http for the FreeNAS API? If so we can use tcpdump to observe the traffic.
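
(Something along these lines from the node running the provisioner pod should do it; the host is a placeholder:)

tcpdump -i any -A -s 0 host <freenas-host> and tcp port 80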

@totallyGreg

totallyGreg commented Apr 4, 2019

Weirdly, FreeNAS is responding with 401 Unauthorized. The secret and the credentials therein are the same as what I set up last fall.
I am also able to hit the endpoint manually in my browser using those credentials.

16:37:38.300058 IP minix2.home.<domain>.us.46686 > freenasty.home.<domain>.us.http: Flags [P.], seq 2436945427:2436945617, ack 2470212457, win 229, options [nop,nop,TS val 3551132271 ecr 4189113551], length 190: HTTP: GET /api/v1.0/storage/dataset/Hitachi/test/ HTTP/1.1
GET /api/v1.0/storage/dataset/Hitachi/test/ HTTP/1.1
Host: freenasty.home.<domain>.us:80
User-Agent: Go-http-client/1.1
Authorization: Basic ####################=
Accept-Encoding: gzip

16:37:38.321268 IP freenasty.home.<domain>.us.http > minix2.home.<domain>.us.46686: Flags [P.], seq 1:281, ack 190, win 1026, options [nop,nop,TS val 4189113572 ecr 3551132271], length 280: HTTP: HTTP/1.1 401 Unauthorized
HTTP/1.1 401 Unauthorized
Server: nginx
Date: Thu, 04 Apr 2019 20:37:38 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
WWW-Authenticate: Basic Realm="django-tastypie"
Vary: Accept-Language, Cookie
Content-Language: en

@totallyGreg

I could be wrong, but it looks like it's sending the base64-encoded password from the secret instead of the actual password, and thus FreeNAS is responding with 401 Unauthorized.
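
(One way to confirm that from the capture: HTTP Basic auth sends base64 of "user:password", so decoding the value after "Basic " in the Authorization header should show the plaintext credentials. The value below is a placeholder since the header is redacted above:)

echo -n '<value-after-Basic-from-the-capture>' | base64 --decode
# expected:            root:<actual password>
# if it instead shows  root:<base64 string from the secret>
# then the password is being double-encoded somewhere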

@travisghansen
Collaborator

@totallyGreg interesting, what's the kubernetes version/environment you're running?

@totallyGreg

1.13.4 as deployed by Rancher's RKE

k version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-26T00:04:52Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:30:26Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

I'm working on trying to understand how I can recompile the Go executable with debug info so I can attach Squash to it to see exactly what's happening. Or simply add a few more informative error statements somewhere.
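
(For reference, and assuming a standard Go toolchain: building with optimizations and inlining disabled is usually enough for Delve/Squash to get usable debug info. The output name here is arbitrary.)

go build -gcflags="all=-N -l" -o freenas-provisioner .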

@travisghansen
Collaborator

@totallyGreg did you end up discovering anything? I also use rke so we're pretty close. I haven't updated to the latest build yet, but I have some additional tweaks to make to the provisioner soon and I'll keep an eye out for this.

@totallyGreg

No, unfortunately I haven't had a chance to work on it for the last few weeks, but I am motivated to figure it out since I can't add any more storage to my cluster (automatically) until I do. Yeah, I don't understand where the EOF is coming from, and my efforts to set up a Go debug environment that could actually read the debug information weren't very successful.

@lmorningstar116

Is there any solution to this problem?

@travisghansen
Collaborator

I haven't looked into it for a long time but may get a chance. I've actually started work on a csi driver to replace this project longer term but that's probably a ways out before it's ready.

If you find something let me know or if you can share your config files etc that would be helpful.

@lmorningstar116

lmorningstar116 commented Jul 3, 2019

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: freenas-nfs
#  annotations:
#    storageclass.kubernetes.io/is-default-class: "true"
provisioner: freenas.org/nfs
allowVolumeExpansion: true
reclaimPolicy: Delete
mountOptions: []
parameters:
  # namespace of the secret which contains FreeNAS server connection details
  # default: kube-system
  #serverSecretNamespace:

  # name of the secret which contains FreeNAS server connection details
  # default: freenas-nfs
  #serverSecretName:

  # the name of the parent dataset (or simply pool) where all resources will
  # be created, it *must* exist before provisioner will work
  # example: tank/k8s/mycluster
  # default: tank
  # mnt/vl0
  datasetParentName: vl0/k8s_storage

  # whether to enforce quotas for each dataset
  # if enabled each newly provisioned dataset will set the appropriate quota
  # per the PVC
  # default: true
  #datasetEnableQuotas:

  # whether to reserve space when the dataset is created
  # if enabled each newly provisioned dataset will set the appropriate value
  # per the PVC
  # default: true
  #datasetEnableReservation:

  # if enabled provisioner will create parent datasets for each namespace
  # otherwise, all datasets will be provisioned in a flat manner
  # default: true
  #datasetEnableNamespaces:

  # if datasetEnableNamespaces is enabled, sets a per-namespace quota
  # example: 5M | 10G | 1T  (M, Mi, MB, MiB, G, Gi, GB, GiB, T, Ti, TB, or TiB)
  # default: 0 (no quota)
  #datasetNamespaceQuota:

  # if datasetEnableNamespaces is enabled, sets a per-namespace reservation
  # this should not be greater than the datasetNamespaceQuota
  # example: 5M | 10G | 1T  (M, Mi, MB, MiB, G, Gi, GB, GiB, T, Ti, TB, or TiB)
  # default: 0
  #datasetNamespaceReservation:

  # if enabled created datasets will adhere to reliable pattern
  # if datasetNamespaces == true dataset pattern is: <datasetParentName>/<namespace>/<PVC Name>
  # if datasetNamespaces == false dataset pattern is: <datasetParentName>/<namespace>-<PVC Name>
  # if disabled, datasets will be created with a name pvc-<uid> (the name of the provisioned PV)
  # default: true
  #datasetDeterministicNames:

  # if enabled and datasetDeterministicNames is enabled then datasets that
  # already exist (pre-provisioned out of band) will be retained by the
  # provisioner during deletion of the reclaim process
  # ignored if datasetDeterministicNames is disabled (collisions result in failure)
  # default: true
  #datasetRetainPreExisting:

  # the following parameters determine permissions and ownership of the
  # dataset mount directory (on FreeNAS) immediately upon creation
  # default: 0777, root, wheel
  #datasetPermissionsMode:
  #datasetPermissionsUser:
  #datasetPermissionsGroup:

  # this determines what the 'server' property of the NFS share will be
  # in kubernetes; its purpose is to provide flexibility between the control
  # and data planes of FreeNAS
  # default: uses the 'host' value from the secret
  #shareHost:

  # determines if newly created NFS shares will have the 'All Directories'
  # option checked
  # default: true
  #shareAlldirs:

  # Authorized hosts/networks (space-separated) allowed to access the shares
  # default: ""
  #shareAllowedHosts:
  #shareAllowedNetworks:

  # Determines root mapping
  # cannot be used simultaneously with shareMapAll{User,Group}
  # default: root:wheel
  #shareMaprootUser:
  #shareMaprootGroup:

  # Determines user mapping for all access (not recommended)
  # cannot be used simultaneously with shareMaproot{User,Group}
  # default: ""
  #shareMapallUser:
  #shareMapallGroup:

  # if enabled and datasetDeterministicNames is enabled then shares that
  # already exist (pre-provisioned out of band) will be retained by the
  # provisioner during deletion of the reclaim process
  # ignored if datasetDeterministicNames is disabled (collisions result in failure)
  # default: true
  #shareRetainPreExisting:

@travisghansen
Collaborator

@lmorningstar116 can you send your secrets file over as well (redact the password) and give me some details about the version of FreeNAS you have running? Just spew out as much information about the env as you can including maybe what you can share of zpool status etc.

I'll see what I can figure out..

@travisghansen
Collaborator

@totallyGreg and @lmorningstar116 can you attempt to curl the api directly and see what you get?

Something like:

curl -v -u 'root:password' "http://<host>/api/v1.0/storage/dataset/Hitachi/test/"
curl -v -u 'root:password' "http://<host>/api/v1.0/storage/dataset/vl0/k8s_storage/"

Also, if possible can both of you share whether your root password has special characters and what they are? Not sure if some weird escaping thing is going on with the Go http client...
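
(If special characters are in play, one way to take shell quoting out of the picture is to construct the Basic header yourself instead of using -u; the credentials and URL below are placeholders:)

curl -v -H "Authorization: Basic $(printf '%s' 'root:YOUR-PASSWORD' | base64)" \
  "http://<host>/api/v1.0/storage/dataset/<parent-dataset>/"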

@wisepotato

Any update on this? I'm running into the same problem.

@travisghansen
Collaborator

Can you send the output of the appropriate curl command above for your setup?

I'm 75% done writing a csi driver at this point too which will support resizing/snapshots/etc. Should be another couple weeks assuming I have some time to have it ready for some beta testing.

@wisepotato

Cool! Although I'm using Rancher, and I don't know if I have the same API; I'm going to assume I need the bare-metal k8s output?

@wisepotato

Thanks for the fast response btw.

@travisghansen
Collaborator

Nah, this has nothing to do with k8s; it goes directly to your FreeNAS install (you can run the command from your workstation).

Rancher should work fine generally. All my clusters are rke.

@wisepotato

Can't seem to log in for some reason. I'm going to guess that the "datasetname" is "mnt/{datasetname}"?

@travisghansen
Collaborator

Yeah, send over maybe what you have in the config map for parent and I'll send the exact command...
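
(Template based on the earlier commands, which reference datasets by pool path without the /mnt prefix; substitute your own datasetParentName:)

curl -v -u 'root:password' "http://<freenas-host>/api/v1.0/storage/dataset/<datasetParentName>/"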

@wisepotato

With root the curl request works, yeah. But not with the user I want to use, named "share".

@wisepotato

I don't want my kubernetes to have root access...

@travisghansen
Collaborator

You don't want it to have root access to FreeNAS?

@wisepotato

Well, I'll try to make the secret use the root user then. Let's see if that works.

@travisghansen
Collaborator

The FreeNAS API only works as root...sorry.

@travisghansen
Collaborator

I only left it as a configuration option in case they change that policy some day.

@wisepotato

Darn, annoying. I got further now though.

image

Am I missing a service?

@travisghansen
Collaborator

What are you using for host nodes? Looks to me like you simply need to enable that service and you'll be good to go..

@travisghansen
Collaborator

Any luck? Get the service running etc?

@wisepotato

Sorry, I haven't had the time; I appreciate your help very much.

I am using a Docker VM (RancherOS) to run the master node (only one node atm).

@wisepotato

I'll check on running that service first. Thanks; if I can't get it to work, I'll make an issue. What exactly are you working on atm for the next version? What's different?

@travisghansen
Collaborator

csi is an orchestrator-agnostic spec that kubernetes, mesos, etc. are adopting. The spec is a bit of a hot mess IMO but it is the way forward. In either case your host will need to be properly set up for nfs and/or iscsi connections, so this same issue (statd, that is) will apply there as well.

It looks like you're not alone with rancheros missing the necessary stuff out of the box, but I'm sure it could be added. I'm actually a little perplexed that it's not a standard service for rancheros.

Some are from the context of just mounting stuff generally (outside of kubernetes) but the trident link is a pretty direct parallel.

Not recommended, but you could add the nolock option to the storage class mount options and try that too.
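
(i.e. in the StorageClass, the same field shown earlier in the thread:)

mountOptions:
  - nolock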

@wisepotato

Hi, I've read all the topics, and

rancher/os#1601 (comment)

gives some nice content, but I have no idea how this works.

Am I enabling nfs on rancheros, freenas, or the docker instance that runs the nodes? Or on the node VM itself?

@travisghansen
Collaborator

You're trying to get the nfs statd service running on the nodes that run kubelet (i.e. rancheros, which I assume is a VM in your case).

@lmorningstar116

Hey.
Reinstalled FreeNAS from version 9 to 11.2.
PersistentVolumeClaim is successfully created, but when creating the Pod we get an error.

MountVolume.SetUp failed for volume "pvc-f12af1c5-e430-4aa3-a560-6526f36ce7be" : mount failed: exit status 32 Mounting command: systemd-run Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/90cb8377-b988-4c7b-bffb-07096e9f44e2/volumes/kubernetes.io~nfs/pvc-f12af1c5-e430-4aa3-a560-6526f36ce7be --scope -- mount -t nfs 10.0.0.221:/mnt/vl0/iocage/k8storage/default/freenas-test-pvc /var/lib/kubelet/pods/90cb8377-b988-4c7b-bffb-07096e9f44e2/volumes/kubernetes.io~nfs/pvc-f12af1c5-e430-4aa3-a560-6526f36ce7be Output: Running scope as unit: run-rccd5cf80e58c4e51bcc374320f0b654b.scope mount.nfs: access denied by server while mounting 10.0.0.221:/mnt/vl0/iocage/k8storage/default/freenas-test-pvc

Judging by the error in the Pod, there are not enough rights to mount the NFS share, but it is not clear how to fix the error.

@travisghansen
Collaborator

Is the share created properly in FreeNAS? Does it limit networks/ips for some reason?

@lmorningstar116

There are no network or IP restrictions.
The share was created automatically when applying the claim file.

@travisghansen
Collaborator

Are you certain you have all the appropriate nfs utils installed, with statd running etc., on the node(s)? What OS is running on your nodes?

@travisghansen
Collaborator

@lmorningstar116

Node OS: Ubuntu 18.04
nfs-common already installed

@travisghansen
Collaborator

Hmm, can you mount that same share (or a different, manually created share) manually from that node to the same FreeNAS server?
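
(A quick manual test from the node, using the share path from the error above; the local mount point is arbitrary:)

showmount -e 10.0.0.221
sudo mkdir -p /mnt/pvc-test
sudo mount -t nfs 10.0.0.221:/mnt/vl0/iocage/k8storage/default/freenas-test-pvc /mnt/pvc-test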

@lmorningstar116

Hmm, on FreeNAS we opened the share properties and saved; after that the pod began to be created correctly.

@travisghansen
Collaborator

Interesting, delete the pod and pvc and try again and see what happens..
