
Pillar unavailable in orchestrate runner #34169

Closed
rbjorklin opened this issue Jun 21, 2016 · 4 comments
Comments

@rbjorklin (Contributor)

Description of Issue/Question

It is not possible to use pillar values from /srv/pillar with the orchestrate runner. However, pillar values passed on the command line, as in the example below, are available. Pillar values provided by the ext_pillar consul are also available. Is there some kind of (un?)documented way of distributing pillars to the master?

Setup

Run with: salt-run --out=json state.orchestrate orch.application.upgrade pillar='{"data":{"organization":"company","cluster":"demo","version":"1.0"}}'

# in /srv/pillar/application/managers.sls
managers:
  company:
    hostname: demo.example.org

# in /srv/salt/orch/application/upgrade.sls
upgrade-application-manager-{{ pillar['data']['organization'] }}-{{ pillar['data']['cluster'] }}:
  salt.state:
    - tgt: {{ pillar['managers'][pillar['data']['organization']]['hostname'] }}
    - sls:
      - test.ping
    - failhard: True

Running the above results in:

[ERROR   ] Rendering exception occurred: Jinja variable 'salt.utils.context.NamespacedDictWrapper object' has no attribute 'managers'
[CRITICAL] Rendering SLS 'base:orch.application.upgrade' failed: Jinja variable 'salt.utils.context.NamespacedDictWrapper object' has no attribute 'managers'
{
    "retcode": 1,
    "salt-master.localdomain_master": [
        "Rendering SLS 'base:orch.application.upgrade' failed: Jinja variable 'salt.utils.context.NamespacedDictWrapper object' has no attribute 'managers'"
    ]
}

Steps to Reproduce Issue

(Include debug logs if possible and relevant.)

Versions Report

Salt Version:
           Salt: 2016.3.0

Dependency Versions:
           cffi: 0.8.6
       cherrypy: 3.2.2
       dateutil: 1.5
          gitdb: Not Installed
      gitpython: Not Installed
          ioflo: Not Installed
         Jinja2: 2.7.2
        libgit2: Not Installed
        libnacl: Not Installed
       M2Crypto: 0.21.1
           Mako: Not Installed
   msgpack-pure: Not Installed
 msgpack-python: 0.4.7
   mysql-python: Not Installed
      pycparser: 2.14
       pycrypto: 2.6.1
         pygit2: Not Installed
         Python: 2.7.5 (default, Nov 20 2015, 02:00:19)
   python-gnupg: Not Installed
         PyYAML: 3.11
          PyZMQ: 14.7.0
           RAET: Not Installed
          smmap: Not Installed
        timelib: Not Installed
        Tornado: 4.2.1
            ZMQ: 4.0.5

System Versions:
           dist: centos 7.2.1511 Core
        machine: x86_64
        release: 3.10.0-327.18.2.el7.x86_64
         system: Linux
        version: CentOS Linux 7.2.1511 Core
@rbjorklin (Contributor, Author)

This seems to be heavily related to #9442. Is there any way to call a runner from within an orchestrate state? I.e.

{% set master_pillar = salt-run['pillar.show_pillar']('salt.example.org') %}

See https://docs.saltstack.com/en/latest/ref/runners/all/salt.runners.pillar.html
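
For reference, a minimal sketch of what calling that runner during orchestration rendering could look like, using the saltutil.runner execution module. This is an untested sketch, not something confirmed in this thread: the target salt.example.org and the pillar keys are the ones from the issue, and depending on the Salt version the runner's keyword arguments may need to be wrapped in a kwarg={...} dict.

# in /srv/salt/orch/application/upgrade.sls (sketch only)
# Pull the master's own pillar via the pillar.show_pillar runner while the SLS is rendered.
{% set master_pillar = salt['saltutil.runner']('pillar.show_pillar', minion='salt.example.org') %}

upgrade-application-manager-{{ pillar['data']['organization'] }}-{{ pillar['data']['cluster'] }}:
  salt.state:
    - tgt: {{ master_pillar['managers'][pillar['data']['organization']]['hostname'] }}
    - sls:
      - test.ping
    - failhard: True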

@Ch3LL (Contributor) commented Jun 21, 2016

@rbjorklin I think there are two issues going on here.

First, {{ pillar['managers'][pillar['data']['organization']]['hostname'] }} will cause some issues. I'm guessing what you want is this: {{ pillar['managers']['company']['hostname'] }}, maybe? If you want to call two pillars you would have to separate them out.

Also, I think you might be running into this issue: #33647, which caused pillars passed on the command line to override pillar in /srv/pillar. This has been fixed in 2016.3.1. Once you fix that pillar, is it working?
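
A minimal sketch of separating the two pillar lookups with intermediate Jinja variables, using the same keys as in the issue description (untested, for illustration only):

# in /srv/salt/orch/application/upgrade.sls (sketch only)
# Resolve the nested pillar lookups step by step instead of inline.
{% set org = pillar['data']['organization'] %}
{% set cluster = pillar['data']['cluster'] %}

upgrade-application-manager-{{ org }}-{{ cluster }}:
  salt.state:
    - tgt: {{ pillar['managers'][org]['hostname'] }}
    - sls:
      - test.ping
    - failhard: True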

@Ch3LL Ch3LL added the info-needed waiting for more info label Jun 21, 2016
@Ch3LL Ch3LL added this to the Blocked milestone Jun 21, 2016
@rbjorklin (Contributor, Author) commented Jun 22, 2016

@Ch3LL I use nested pillar calls in several other states, so unless there is something different with orchestration, that should be fine, I think.

It does indeed look like I'm running into #33647; however, upgrading to 2016.3.1 seems to make me run into #29028, which is actually a lot worse in my case.

EDIT: Just upgraded to 2016.3.1 and it seems to work, so far #29028 hasn't resurfaced.

@rbjorklin (Contributor, Author)

Looks like #33647 was my issue. Distributing pillars to the master is done by appending _master to the minion name of the minion running on the master (if you have one). Closing this issue.
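
For anyone landing here, a minimal sketch of what that could look like in the pillar top file, assuming the master ID seen in the error output above (salt-master.localdomain_master) and the pillar file from the setup section; this is only an illustration of the mechanism described in the comment, not a verified configuration:

# in /srv/pillar/top.sls (sketch only)
base:
  # The orchestrate runner compiles pillar for the master's ID with "_master" appended,
  # e.g. "salt-master.localdomain_master" in the error output above.
  'salt-master.localdomain_master':
    - application.managers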
