Switch to oc set env/volume, since oc env/volume is now removed #11449

Merged: 1 commit, Aug 14, 2018
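The pattern of the change, using commands that appear in the diff below (a before/after sketch, not part of the diff itself):

----
# Before (top-level forms, now removed from oc):
$ oc env dc/router ROUTER_ENABLE_INGRESS=true
$ oc volume dc/registry --add --mount-path=/opt

# After (the oc set subcommands):
$ oc set env dc/router ROUTER_ENABLE_INGRESS=true
$ oc set volume dc/registry --add --mount-path=/opt
----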
6 changes: 3 additions & 3 deletions admin_guide/high_availability.adoc
@@ -369,7 +369,7 @@ decimal) is typical.
----
$ oc set env dc/ipf-ha-router \
OPENSHIFT_HA_CHECK_SCRIPT=/etc/keepalive/mycheckscript.sh
-$ oc volume dc/ipf-ha-router --add --overwrite \
+$ oc set volume dc/ipf-ha-router --add --overwrite \
--name=config-volume \
--mount-path=/etc/keepalive \
--source='{"configMap": { "name": "mycustomcheck", "defaultMode": 493}}'
@@ -595,12 +595,12 @@ IP failover management is limited to 254 groups of VIP addresses. By default
[product-title] assigns one IP address to each group. You can use the
`virtual-ip-groups` option to change this so multiple IP addresses are in each
group and define the number of VIP groups available for each VRRP instance when
xref:configuring-ip-failover[configuring IP failover].

Grouping VIPs creates a wider range of allocation of VIPs per VRRP in the case
of VRRP failover events, and is useful when all hosts in the cluster have access
to a service locally. For example, when a service is being exposed with an
`ExternalIP`.
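As an illustration, grouping would be set up when the IP failover deployment is created. In this sketch, `--virtual-ip-groups` is assumed to be the CLI spelling of the `virtual-ip-groups` option named above, and the VIP range is a placeholder:

----
$ oc adm ipfailover ipf-ha-router \
    --virtual-ips=192.168.1.100-120 \
    --virtual-ip-groups=3 \
    --create
----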

[NOTE]
====
2 changes: 1 addition & 1 deletion admin_guide/managing_networking.adoc
@@ -201,7 +201,7 @@ name of `router` for the deployment configuration and the service-account):
. Set the `ROUTER_ENABLE_INGRESS` environment variable to `true`:
+
----
-$ oc env dc router ROUTER_ENABLE_INGRESS=true
+$ oc set env dc router ROUTER_ENABLE_INGRESS=true
----

. Add the `cluster-reader` role to the router, where `-z` is the service
4 changes: 2 additions & 2 deletions admin_guide/pruning_resources.adoc
@@ -609,7 +609,7 @@ To switch the registry to read-only mode:
.. Set the following environment variable:
+
----
-$ oc env -n default \
+$ oc set env -n default \
dc/docker-registry \
'REGISTRY_STORAGE_MAINTENANCE_READONLY={"enabled":true}'
----
@@ -706,7 +706,7 @@ Freed up 2.835 GiB of disk space
finished, the registry can be switched back to read-write mode by executing:
+
----
-$ oc env -n default dc/docker-registry REGISTRY_STORAGE_MAINTENANCE_READONLY-
+$ oc set env -n default dc/docker-registry REGISTRY_STORAGE_MAINTENANCE_READONLY-
----
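Note the trailing hyphen in the command above: with `oc set env`, an argument of the form `VAR-` removes that variable from the deployment configuration instead of setting it. A minimal sketch of both directions, reusing the variable from the commands above:

----
# Enable read-only mode by setting the variable:
$ oc set env -n default dc/docker-registry \
    'REGISTRY_STORAGE_MAINTENANCE_READONLY={"enabled":true}'

# Return to read-write mode; the trailing "-" unsets the variable:
$ oc set env -n default dc/docker-registry \
    REGISTRY_STORAGE_MAINTENANCE_READONLY-
----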
endif::[]

4 changes: 2 additions & 2 deletions architecture/networking/routes.adoc
@@ -805,7 +805,7 @@ To implement both scenarios, run:

----
$ oc adm router adrouter ...
-$ oc env dc/adrouter ROUTER_ALLOWED_DOMAINS="okd.io, kates.net" \
+$ oc set env dc/adrouter ROUTER_ALLOWED_DOMAINS="okd.io, kates.net" \
ROUTER_DENIED_DOMAINS="ops.openshift.org, metrics.kates.net"
----

@@ -960,6 +960,6 @@ $ oc adm router ... --disable-namespace-ownership-check=true
----

----
-$ oc env dc/router ROUTER_DISABLE_NAMESPACE_OWNERSHIP_CHECK=true
+$ oc set env dc/router ROUTER_DISABLE_NAMESPACE_OWNERSHIP_CHECK=true
----
endif::openshift-origin,openshift-enterprise[]
10 changes: 5 additions & 5 deletions cli_reference/basic_cli_operations.adoc
@@ -235,7 +235,7 @@ $ oc edit <object_type>/<object_name> \
=== volume
Modify a xref:../dev_guide/volumes.adoc#dev-guide-volumes[volume]:
----
-$ oc volume <object_type>/<object_name> [--option]
+$ oc set volume <object_type>/<object_name> [--option]
----
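For example, to add a new *emptyDir* volume to the *registry* deployment configuration and mount it at *_/opt_* inside the containers (the same example appears in the by-example reference):

----
$ oc set volume dc/registry --add --mount-path=/opt
----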

[[oc-label]]
@@ -655,22 +655,22 @@ $ oc debug -h
----

When debugging images and setup problems, you can get an exact copy of a
running pod configuration and troubleshoot with a shell. Since a failing pod
may not be started and not accessible to `rsh` or `exec`, running the `debug`
command creates a carbon copy of that setup.

The default mode is to start a shell inside of the first container of the
referenced pod, replication controller, or deployment configuration. The started pod
will be a copy of your source pod, with labels stripped, the command changed to
`/bin/sh`, and readiness and liveness checks disabled. If you just want to run a
command, add `--` and a command to run. Passing a command will not create a TTY
or send STDIN by default. Other flags are supported for altering the container
or pod in common ways.

A common problem running containers is a security policy that prohibits you from
running as a root user on the cluster. You can use this command to test running
a pod as non-root (with `--as-user`) or to run a non-root pod as root (with
`--as-root`).

The debug pod is deleted when the remote command completes or you interrupt
the shell.
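A few illustrative invocations, assuming a deployment configuration named `router` (the object name here is a placeholder):

----
# Start an interactive shell in a copy of the first container:
$ oc debug dc/router

# Run a single command instead of a shell; no TTY or STDIN is allocated:
$ oc debug dc/router -- cat /etc/resolv.conf

# Test the pod as a specific non-root user:
$ oc debug dc/router --as-user=1000000
----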
24 changes: 12 additions & 12 deletions cli_reference/cli_by_example_content.adoc
@@ -387,40 +387,40 @@
----
// Add new volume of type 'emptyDir' for deployment config 'registry' and mount under /opt inside the containers
// The volume name is auto generated
-$ oc volume dc/registry --add --mount-path=/opt
+$ oc set volume dc/registry --add --mount-path=/opt

// Add new volume 'v1' with secret 'magic' for pod 'p1'
-$ oc volume pod/p1 --add --name=v1 -m /etc --type=secret --secret-name=magic
+$ oc set volume pod/p1 --add --name=v1 -m /etc --type=secret --secret-name=magic

// Add new volume to pod 'p1' based on gitRepo (or other volume sources not supported by --type)
-$ oc volume pod/p1 --add -m /repo --source=<json-string>
+$ oc set volume pod/p1 --add -m /repo --source=<json-string>

// Add emptyDir volume 'v1' to a pod definition on disk and update the pod on the server
-$ oc volume -f pod.json --add --name=v1
+$ oc set volume -f pod.json --add --name=v1

// Create a new persistent volume and overwrite existing volume 'v1' for replication controller 'r1'
-$ oc volume rc/r1 --add --name=v1 -t persistentVolumeClaim --claim-name=pvc1 --overwrite
+$ oc set volume rc/r1 --add --name=v1 -t persistentVolumeClaim --claim-name=pvc1 --overwrite

// Change pod 'p1' mount point to /data for volume v1
-$ oc volume pod p1 --add --name=v1 -m /data --overwrite
+$ oc set volume pod p1 --add --name=v1 -m /data --overwrite

// Remove all volumes for pod 'p1'
-$ oc volume pod/p1 --remove --confirm
+$ oc set volume pod/p1 --remove --confirm

// Remove volume 'v1' from deployment config 'registry'
-$ oc volume dc/registry --remove --name=v1
+$ oc set volume dc/registry --remove --name=v1

// Unmount volume v1 from container c1 on pod p1 and remove the volume v1 if it is not referenced by any containers on pod p1
-$ oc volume pod/p1 --remove --name=v1 --containers=c1
+$ oc set volume pod/p1 --remove --name=v1 --containers=c1

// List volumes defined on replication controller 'r1'
-$ oc volume rc r1 --list
+$ oc set volume rc r1 --list

// List volumes defined on all pods
-$ oc volume pods --all --list
+$ oc set volume pods --all --list

// Output json object with volume info for pod 'p1' but don't alter the object on server
-$ oc volume pod/p1 --add --name=v1 --mount=/opt -o json
+$ oc set volume pod/p1 --add --name=v1 --mount=/opt -o json
----
====

2 changes: 1 addition & 1 deletion day_two_guide/topics/proc_restoring-data-new-pvc.adoc
@@ -18,7 +18,7 @@ The following steps assume that a new `pvc` has been created.
. Overwrite the currently defined `claim-name`:
+
----
-$ oc volume dc/demo --add --name=persistent-volume \
+$ oc set volume dc/demo --add --name=persistent-volume \
    --type=persistentVolumeClaim --claim-name=filestore \
    --mount-path=/opt/app-root/src/uploaded --overwrite
----

6 changes: 3 additions & 3 deletions day_two_guide/topics/pvc_backup_and_restore.adoc
@@ -16,7 +16,7 @@ claim.
Depending on the provider that is hosting the {product-title} environment, the
ability to launch third party snapshot services for backup and restore purposes
also exists. As {product-title} does not have the ability to launch these
services, this guide does not describe these steps.
====

Consult any product documentation for the correct backup procedures of specific
@@ -73,7 +73,7 @@ Containers:
----
+
The above shows that the persistent data is currently located in the
`/opt/app-root/src/uploaded` directory.

. Copy the data locally:
+
@@ -152,7 +152,7 @@ The following steps assume that a new `pvc` has been created.
. Overwrite the currently defined `claim-name`:
+
----
-$ oc volume dc/demo --add --name=persistent-volume \
+$ oc set volume dc/demo --add --name=persistent-volume \
    --type=persistentVolumeClaim --claim-name=filestore \
    --mount-path=/opt/app-root/src/uploaded --overwrite
----

2 changes: 1 addition & 1 deletion dev_guide/dev_tutorials/maven_tutorial.adoc
@@ -118,7 +118,7 @@ endif::[]
Add a PVC to the Nexus deployment configuration.

----
-$ oc volumes dc/nexus --add \
+$ oc set volume dc/nexus --add \
--name 'nexus-volume-1' \
--type 'pvc' \
--mount-path '/sonatype-work/' \
36 changes: 18 additions & 18 deletions dev_guide/volumes.adoc
@@ -34,7 +34,7 @@ to your pods.
FSGroup, if the FSGroup parameter is enabled by your cluster administrator.
====

-You can use the CLI command `oc volume` to xref:adding-volumes[add],
+You can use the CLI command `oc set volume` to xref:adding-volumes[add],
xref:updating-volumes[update], or xref:removing-volumes[remove] volumes and
volume mounts for any object that has a pod template like
xref:../architecture/core_concepts/deployments.adoc#replication-controllers[replication
@@ -47,10 +47,10 @@ object that has a pod template.

== General CLI Usage

-The `oc volume` command uses the following general syntax:
+The `oc set volume` command uses the following general syntax:

----
-$ oc volume <object_selection> <operation> <mandatory_parameters> <optional_parameters>
+$ oc set volume <object_selection> <operation> <mandatory_parameters> <optional_parameters>
----
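As a concrete instance of this syntax, in the following command (taken from the examples later in this topic), `dc/registry` is the `_<object_selection>_`, `--add` is the `_<operation>_`, and `--mount-path=/opt` is an optional parameter:

----
$ oc set volume dc/registry --add --mount-path=/opt
----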

This topic uses the form `_<object_type>_/_<name>_` for `_<object_selection>_`
@@ -98,7 +98,7 @@ selected operation and are discussed in later sections.
To add a volume, a volume mount, or both to pod templates:

----
-$ oc volume <object_type>/<name> --add [options]
+$ oc set volume <object_type>/<name> --add [options]
----

[[add-options]]
@@ -165,22 +165,22 @@ values: `json`, `yaml`.
Add a new volume source *emptyDir* to deployment configuration *registry*:

----
-$ oc volume dc/registry --add
+$ oc set volume dc/registry --add
----

Add volume *v1* with secret *$ecret* for replication controller *r1* and mount
inside the containers at *_/data_*:

----
-$ oc volume rc/r1 --add --name=v1 --type=secret --secret-name='$ecret' --mount-path=/data
+$ oc set volume rc/r1 --add --name=v1 --type=secret --secret-name='$ecret' --mount-path=/data
----

Add existing persistent volume *v1* with claim name *pvc1* to deployment
configuration *_dc.json_* on disk, mount the volume on container *c1* at
*_/data_*, and update the deployment configuration on the server:

----
-$ oc volume -f dc.json --add --name=v1 --type=persistentVolumeClaim \
+$ oc set volume -f dc.json --add --name=v1 --type=persistentVolumeClaim \
--claim-name=pvc1 --mount-path=/data --containers=c1
----

@@ -189,7 +189,7 @@ Add volume *v1* based on Git repository
all replication controllers:

----
-$ oc volume rc --all --add --name=v1 \
+$ oc set volume rc --all --add --name=v1 \
--source='{"gitRepo": {
"repository": "https://github.com/namespace1/project1",
"revision": "5125c45f9f563"
@@ -202,7 +202,7 @@ Updating existing volumes or volume mounts is the same as
xref:adding-volumes[adding volumes], but with the `--overwrite` option:

----
-$ oc volume <object_type>/<name> --add --overwrite [options]
+$ oc set volume <object_type>/<name> --add --overwrite [options]
----

[discrete]
@@ -213,21 +213,21 @@ Replace existing volume *v1* for replication controller *r1* with existing
persistent volume claim *pvc1*:

----
-$ oc volume rc/r1 --add --overwrite --name=v1 --type=persistentVolumeClaim --claim-name=pvc1
+$ oc set volume rc/r1 --add --overwrite --name=v1 --type=persistentVolumeClaim --claim-name=pvc1
----

Change deployment configuration *d1* mount point to *_/opt_* for volume *v1*:

----
-$ oc volume dc/d1 --add --overwrite --name=v1 --mount-path=/opt
+$ oc set volume dc/d1 --add --overwrite --name=v1 --mount-path=/opt
----

[[removing-volumes]]
== Removing Volumes
To remove a volume or volume mount from pod templates:

----
-$ oc volume <object_type>/<name> --remove [options]
+$ oc set volume <object_type>/<name> --remove [options]
----

.Supported Options for Removing Volumes
@@ -265,28 +265,28 @@ values: `json`, `yaml`.
Remove a volume *v1* from deployment configuration *d1*:

----
-$ oc volume dc/d1 --remove --name=v1
+$ oc set volume dc/d1 --remove --name=v1
----

Unmount volume *v1* from container *c1* for deployment configuration *d1* and
remove the volume *v1* if it is not referenced by any containers on *d1*:

----
-$ oc volume dc/d1 --remove --name=v1 --containers=c1
+$ oc set volume dc/d1 --remove --name=v1 --containers=c1
----

Remove all volumes for replication controller *r1*:

----
-$ oc volume rc/r1 --remove --confirm
+$ oc set volume rc/r1 --remove --confirm
----

[[listing-volumes]]
== Listing Volumes
To list volumes or volume mounts for pods or pod templates:

----
-$ oc volume <object_type>/<name> --list [options]
+$ oc set volume <object_type>/<name> --list [options]
----

List volume supported options:
@@ -312,12 +312,12 @@ character.
List all volumes for pod *p1*:

----
-$ oc volume pod/p1 --list
+$ oc set volume pod/p1 --list
----

List volume *v1* defined on all deployment configurations:
----
-$ oc volume dc --all --name=v1
+$ oc set volume dc --all --name=v1
----

[[volumes-specifying-a-subpath]]
4 changes: 2 additions & 2 deletions getting_started/configure_openshift.adoc
@@ -87,7 +87,7 @@ $ oc adm policy add-cluster-role-to-user cluster-admin admin
----
+
// tag::ocadm-note[]
When running `oc adm` commands, you should run them only from
the first master listed in the Ansible host inventory file,
by default *_/etc/ansible/hosts_*.
// end::ocadm-note[]
@@ -246,7 +246,7 @@ now need to add this claim to the registry.

[subs="verbatim,macros"]
----
-$ oc volume dc/docker-registry \
+$ oc set volume dc/docker-registry \
--add --overwrite -t persistentVolumeClaim \
--claim-name=pass:quotes[_registry-volume-claim_] \
--name=registry-storage
----
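To confirm the claim is attached, the registry's volumes could then be listed with the `--list` operation described in *_dev_guide/volumes.adoc_* (a sketch):

----
$ oc set volume dc/docker-registry --list
----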