diff --git a/CHANGELOG.md b/CHANGELOG.md
index e1cfc3f0c819e..9ee26633013d5 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1231,9 +1231,6 @@ The “teleport-cluster” Helm chart underwent significant refactoring in Telep
 deployments and the new “scratch” chart mode makes it easier to provide a
 custom Teleport config.
 
-“Custom” mode users should follow the [migration
-guide](docs/pages/admin-guides/deploy-a-cluster/helm-deployments/migration-v12.mdx).
-
 ### Dropped support for SHA1 in Teleport-protected servers
 
 Newer OpenSSH clients connecting to Teleport 12 clusters no longer need the
@@ -1256,10 +1253,7 @@ Teleport 12 before upgrading.
 
 #### Helm charts
 
-The teleport-cluster Helm chart underwent significant changes in Teleport 12. To
-upgrade from an older version of the Helm chart deployed in “custom” mode,
-follow
-the [migration guide](docs/pages/admin-guides/deploy-a-cluster/helm-deployments/migration-v12.mdx).
+The teleport-cluster Helm chart underwent significant changes in Teleport 12.
 
 Additionally, PSPs are removed from the chart when installing on Kubernetes 1.23
 and higher to account for the deprecation/removal of PSPs by Kubernetes.
 
diff --git a/docs/pages/admin-guides/access-controls/access-lists/access-lists.mdx b/docs/pages/admin-guides/access-controls/access-lists/access-lists.mdx
index 0fca45b8f7297..7e8706407b256 100644
--- a/docs/pages/admin-guides/access-controls/access-lists/access-lists.mdx
+++ b/docs/pages/admin-guides/access-controls/access-lists/access-lists.mdx
@@ -5,7 +5,7 @@ layout: tocless-doc
 ---
 
 Access Lists allow Teleport users to be granted long term access to resources
-managed within Teleport. With Access Lists, administrators and access list
+managed within Teleport. With Access Lists, administrators and Access List
 owners can regularly audit and control membership to specific roles and traits,
 which then tie easily back into Teleport's existing RBAC system.
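Reviewer note: the CHANGELOG hunk above mentions the chart's new “scratch” mode without showing what it looks like. A purely illustrative `values.yaml` sketch follows — the field names (`chartMode`, `auth.teleportConfig`) are assumptions about the chart's values schema, not part of this patch; verify them against the `teleport-cluster` chart reference before use.

```yaml
# Illustrative only — not part of this patch. Assumes the teleport-cluster
# chart's "scratch" mode accepts raw Teleport config fragments under
# auth.teleportConfig / proxy.teleportConfig; check the chart's values docs.
chartMode: scratch
clusterName: teleport.example.com   # hypothetical cluster name
auth:
  teleportConfig:
    # Hypothetical teleport.yaml fragment merged into the Auth Service config
    teleport:
      log:
        severity: DEBUG
```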
diff --git a/docs/pages/admin-guides/access-controls/access-lists/guide.mdx b/docs/pages/admin-guides/access-controls/access-lists/guide.mdx
index 72ebd74d7efb2..cc2fc58e1ec3d 100644
--- a/docs/pages/admin-guides/access-controls/access-lists/guide.mdx
+++ b/docs/pages/admin-guides/access-controls/access-lists/guide.mdx
@@ -4,7 +4,7 @@ description: Learn how to use Access Lists to manage and audit long lived access
 ---
 
 This guide will help you:
-- Create an access list
+- Create an Access List
 - Assign a member to it
 - Verify permissions granted through the list membership
 
@@ -47,7 +47,7 @@ Try logging into the cluster with the test user to verify that no resources show
 
 ## Step 3/4. Create an Access List
 
-Next, we'll create a simple access list that will grant the `access` role to its members.
+Next, we'll create a simple Access List that will grant the `access` role to its members.
 Login as the administrative user mentioned in the prerequisites. Click on "Add New" in the left pane, and then "Create an Access List."
 
 ![Navigate to create new Access List](../../../../img/access-controls/access-lists/create-new-access-list.png)
@@ -64,10 +64,10 @@ not be able to manage the list, though they will still be reflected as an owner.
 
 ![Select an owner](../../../../img/access-controls/access-lists/select-owner.png)
 
-Under "Members" select `requester` as a required role, then add your test user to the access list. Similar to
+Under "Members" select `requester` as a required role, then add your test user to the Access List. Similar to
 the owner requirements, this will ensure that any member of the list must have the `requester` role in order
 to be granted the access described in this list. If the user loses this role later, they will not be granted the
-roles or traits described in the access list.
+roles or traits described in the Access List.
 ![Add a member](../../../../img/access-controls/access-lists/add-member.png)
 
diff --git a/docs/pages/admin-guides/access-controls/access-request-plugins/ssh-approval-msteams.mdx b/docs/pages/admin-guides/access-controls/access-request-plugins/ssh-approval-msteams.mdx
index b915535ddd584..02b3703690465 100644
--- a/docs/pages/admin-guides/access-controls/access-request-plugins/ssh-approval-msteams.mdx
+++ b/docs/pages/admin-guides/access-controls/access-request-plugins/ssh-approval-msteams.mdx
@@ -35,7 +35,7 @@ Once enrolled you can download the required `app.zip` file from the integrations
 - An Azure resource group in the same directory. This will host resources for the
   Microsoft Teams Access Request plugin. You should have enough permissions to
   create and edit Azure Bot Services in this resource group.
-- Someone with Global Admin rights on the Azure Active Directory that will grant
+- Someone with Global Admin rights on Microsoft Entra ID in order to grant
   permissions to the plugin.
 - Someone with the `Teams administrator` role that can approve installation
   requests for Microsoft Teams Apps.
diff --git a/docs/pages/admin-guides/access-controls/device-trust/enforcing-device-trust.mdx b/docs/pages/admin-guides/access-controls/device-trust/enforcing-device-trust.mdx
index 389ac3cff2faa..1481070376f79 100644
--- a/docs/pages/admin-guides/access-controls/device-trust/enforcing-device-trust.mdx
+++ b/docs/pages/admin-guides/access-controls/device-trust/enforcing-device-trust.mdx
@@ -35,11 +35,10 @@ by the `device_trust_mode` authentication setting:
 
 (!docs/pages/includes/device-trust/prereqs.mdx!)
 
-- We expect your Teleport cluster to be on version 13.3.6 and above, which has
-  the preset `require-trusted-device` role. The preset `require-trusted-device`
-  role does not enforce the use of a trusted device for
-  [Apps](#web-application-support) or [Desktops](#desktop-support). Refer to
-  their corresponding sections for instructions.
+This guide makes use of the preset `require-trusted-device` role, which does not
+enforce the use of a trusted device for [Apps](#web-application-support) or
+[Desktops](#desktop-support). Refer to their corresponding sections for
+instructions.
 
 ## Role-based trusted device enforcement
 
@@ -111,7 +110,7 @@ metadata:
   name: cluster-auth-preference
 spec:
   type: local
-  second_factor: "on"
+  second_factors: ["webauthn"]
   webauthn:
     rp_id: (=clusterDefaults.clusterName=)
   device_trust:
@@ -140,8 +139,8 @@ leaf clusters.
 
 ## Web application support
 
-The Teleport App Service may enforce Device Trust via [role-based enforcement](
-#role-based-trusted-device-enforcement).
+The Teleport App Service may enforce Device Trust via [role-based
+enforcement](#role-based-trusted-device-enforcement).
 
 To access apps protected by Device Trust using the Web UI (Teleport v16 or
 later), make sure your device is [registered and enrolled](
diff --git a/docs/pages/admin-guides/access-controls/guides/hardware-key-support.mdx b/docs/pages/admin-guides/access-controls/guides/hardware-key-support.mdx
index 7749f44ee445f..281b31fced18b 100644
--- a/docs/pages/admin-guides/access-controls/guides/hardware-key-support.mdx
+++ b/docs/pages/admin-guides/access-controls/guides/hardware-key-support.mdx
@@ -255,7 +255,7 @@ Make sure that the touch and PIN policy satisfy the hardware key requirement for
 
 ### `ERROR: private key policy not met`
 
-This error is returned by the Auth and Proxy services if a user does not meet the required private key policy.
+This error is returned by the Auth Service and Proxy Service if a user does not meet the required private key policy.
 
 Both `tsh` and Teleport Connect automatically catch these errors and require the user to sign in again with a valid hardware-based private key.
 ### `ERROR: authenticating with management key: auth challenge: smart card error 6982: security status not satisfied`
 
diff --git a/docs/pages/admin-guides/access-controls/guides/headless.mdx b/docs/pages/admin-guides/access-controls/guides/headless.mdx
index 195d7c6a8e0dd..7716c0e19d8ba 100644
--- a/docs/pages/admin-guides/access-controls/guides/headless.mdx
+++ b/docs/pages/admin-guides/access-controls/guides/headless.mdx
@@ -26,7 +26,7 @@ For example:
 ## Prerequisites
 
 - A Teleport cluster with WebAuthn configured.
-  See the [Second Factor: WebAuthn](./webauthn.mdx) guide.
+  See the [Harden your Cluster Against IdP Compromises](./webauthn.mdx) guide.
 - WebAuthn hardware device, such as YubiKey.
 - Machines for Headless WebAuthn activities have [Linux](../../../installation.mdx), [macOS](../../../installation.mdx) or [Windows](../../../installation.mdx) `tsh` binary installed.
 - Machines used to approve Headless WebAuthn requests have a Web browser with [WebAuthn support](
diff --git a/docs/pages/admin-guides/access-controls/guides/locking.mdx b/docs/pages/admin-guides/access-controls/guides/locking.mdx
index 69567f86022f3..e9dd44c9847e8 100644
--- a/docs/pages/admin-guides/access-controls/guides/locking.mdx
+++ b/docs/pages/admin-guides/access-controls/guides/locking.mdx
@@ -3,7 +3,7 @@ title: Session and Identity Locking
 description: How to lock compromised users or agents
 ---
 
-System administrators can disable a compromised user or Teleport agent—or
+System administrators can disable a compromised user or Teleport Agent—or
 prevent access during cluster maintenance—by placing a lock on a session, user
 or host identity.
@@ -19,7 +19,7 @@ A lock can target the following objects or attributes:
   ../device-trust/enforcing-device-trust.mdx#locking-a-device) by the device ID
 - an MFA device by the device's UUID
 - an OS/UNIX login
-- a Teleport agent by the agent's server UUID (effectively unregistering it from the
+- a Teleport Agent by the Agent's server UUID (effectively unregistering it from the
   cluster)
 - a Windows desktop by the desktop's name
 - an [Access Request](../access-requests/access-requests.mdx) by UUID
diff --git a/docs/pages/admin-guides/access-controls/guides/mfa-for-admin-actions.mdx b/docs/pages/admin-guides/access-controls/guides/mfa-for-admin-actions.mdx
index 809a3b193ba8d..242f392aba388 100644
--- a/docs/pages/admin-guides/access-controls/guides/mfa-for-admin-actions.mdx
+++ b/docs/pages/admin-guides/access-controls/guides/mfa-for-admin-actions.mdx
@@ -13,7 +13,7 @@ Examples of administrative actions include, but are not limited to:
 - Inviting new users
 - Updating cluster configuration resources
 - Modifying access management resources
-- Approving access requests
+- Approving Access Requests
 - Generating new join tokens
 - Impersonation
 - Creating new bots for Machine ID
@@ -41,7 +41,7 @@ their on-disk Teleport certificates.
 
 - (!docs/pages/includes/tctl.mdx!)
 - [WebAuthn configured](webauthn.mdx) on this cluster
-- Second factor hardware device, such as YubiKey or SoloKey
+- Multi-factor authentication hardware device, such as YubiKey or SoloKey
 - A Web browser with [WebAuthn support](
   https://developers.yubico.com/WebAuthn/WebAuthn_Browser_Support/) (if using
   SSH or desktop sessions from the Teleport Web UI).
@@ -49,7 +49,7 @@ their on-disk Teleport certificates.
 
 ## Require MFA for administrative actions
 
 MFA for administrative actions is automatically enforced for clusters where
-WebAuthn is the only form of second factor allowed.
+WebAuthn is the only form of multi-factor authentication allowed.
 In a future major version, Teleport may enforce MFA for administrative actions
 
diff --git a/docs/pages/admin-guides/access-controls/guides/passwordless.mdx b/docs/pages/admin-guides/access-controls/guides/passwordless.mdx
index a2931cda9ed6c..8efe117fe62cd 100644
--- a/docs/pages/admin-guides/access-controls/guides/passwordless.mdx
+++ b/docs/pages/admin-guides/access-controls/guides/passwordless.mdx
@@ -11,16 +11,18 @@ usernameless authentication for Teleport.
 
 (!docs/pages/includes/edition-prereqs-tabs.mdx!)
 
-- Teleport must be configured for WebAuthn. See the [Second Factor:
-  WebAuthn](./webauthn.mdx) guide.
-- A hardware device with support for WebAuthn and resident keys.
-  As an alternative, you can use a Mac with biometrics / Touch ID or device that
+- Teleport must be configured for WebAuthn. See the [Harden your Cluster Against
+  IdP Compromises](./webauthn.mdx) guide.
+- A hardware device with support for WebAuthn and resident keys. As an
+  alternative, you can use a Mac with biometrics / Touch ID or device that
   supports Windows Hello (Windows 10 19H1 or later).
-- A web browser with WebAuthn support. To see if your browser supports
-  WebAuthn, check the [WebAuthn
-  Compatibility](https://developers.yubico.com/WebAuthn/WebAuthn_Browser_Support/) page.
-- A signed and notarized version of `tsh` is required for Touch ID. This means versions
-  installed from Homebrew or compiled from source will not work. [Download the macOS tsh installer](../../../installation.mdx#macos).
+- A web browser with WebAuthn support. To see if your browser supports WebAuthn,
+  check the [WebAuthn
+  Compatibility](https://developers.yubico.com/WebAuthn/WebAuthn_Browser_Support/)
+  page.
+- A signed and notarized version of `tsh` is required for Touch ID. This means
+  versions installed from Homebrew or compiled from source will not work.
+  [Download the macOS tsh installer](../../../installation.mdx#macos).
 - (!docs/pages/includes/tctl.mdx!)
 A Teleport cluster capable of WebAuthn is automatically capable of passwordless.
 
@@ -46,8 +48,8 @@ If you are using a hardware device, a passwordless registration will occupy a
 resident key slot. Resident keys, also called discoverable credentials, are
 stored in persistent memory in the authenticator (i.e., the device that is used
 to authenticate). In contrast, MFA keys are encrypted by the authenticator and
-stored in the Teleport Auth Server. Regardless of your device type, passwordless
-registrations may also be used for regular MFA.
+stored in the Teleport Auth Service backend. Regardless of your device type,
+passwordless registrations may also be used for regular MFA.
 
 If you plan on relying exclusively on passwordless, it's recommended to register
diff --git a/docs/pages/admin-guides/access-controls/guides/per-session-mfa.mdx b/docs/pages/admin-guides/access-controls/guides/per-session-mfa.mdx
index 1c6b1c99d4152..2ec097e4f97a7 100644
--- a/docs/pages/admin-guides/access-controls/guides/per-session-mfa.mdx
+++ b/docs/pages/admin-guides/access-controls/guides/per-session-mfa.mdx
@@ -29,7 +29,7 @@ their on-disk Teleport certificates.
 
 - (!docs/pages/includes/tctl.mdx!)
 - [WebAuthn configured](webauthn.mdx) on this cluster
-- Second factor hardware device, such as YubiKey or SoloKey
+- Hardware device for multi-factor authentication, such as YubiKey or SoloKey
 - A Web browser with [WebAuthn support](
   https://developers.yubico.com/WebAuthn/WebAuthn_Browser_Support/) (if using
   SSH or desktop sessions from the Teleport Web UI).
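Reviewer note: several hunks above replace `second_factor: "on"` with `second_factors: ["webauthn"]`. Drawing only on the `cluster_auth_preference` fragment visible in the enforcing-device-trust hunk, the resulting resource would look roughly like the sketch below; the `rp_id` value is a placeholder, and fields outside the diff are omitted.

```yaml
# Sketch of a cluster_auth_preference resource using the plural
# second_factors field shown in this patch; rp_id is a placeholder.
kind: cluster_auth_preference
version: v2
metadata:
  name: cluster-auth-preference
spec:
  type: local
  second_factors: ["webauthn"]
  webauthn:
    rp_id: teleport.example.com
```

Saved to a file, this could presumably be applied with `tctl create -f`, the same command the SSO hunks below use for role resources.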
diff --git a/docs/pages/admin-guides/access-controls/idps/saml-gcp-workforce-identity-federation.mdx b/docs/pages/admin-guides/access-controls/idps/saml-gcp-workforce-identity-federation.mdx
index c28c2b66d96df..6b1c20624ae4c 100644
--- a/docs/pages/admin-guides/access-controls/idps/saml-gcp-workforce-identity-federation.mdx
+++ b/docs/pages/admin-guides/access-controls/idps/saml-gcp-workforce-identity-federation.mdx
@@ -79,7 +79,7 @@ resource ID for workforce pool and workforce pool provider, respectively.
 
-## Step 2/3 Add workforce pool To Teleport
+## Step 2/3. Add workforce pool To Teleport
 
 Proceed to the next step in the UI by clicking the **Next** button.
 
@@ -95,7 +95,7 @@ values or attribute mapping in GCP, you must also updated the respective SAML se
 
-## Step 3/3 Create GCP IAM policy
+## Step 3/3. Create GCP IAM policy
 
 Once a pool and pool provider is configured in the GCP, and its respective configuration is
 added to Teleport as a SAML service provider resource, users can sign in into the GCP web console, as
@@ -252,7 +252,7 @@ Save the spec as **pool_provider_name.yaml** file. And create the saml service p
 $ tctl create pool_provider_name.yaml
 ```
 
-## Step 3/3: Create GCP IAM policy
+## Step 3/3. Create GCP IAM policy
 
 This step is similar to Step 3 in the guided configuration flow. You will need to create
 a GCP IAM policy representing the workforce principal.
diff --git a/docs/pages/admin-guides/access-controls/sso/azuread.mdx b/docs/pages/admin-guides/access-controls/sso/azuread.mdx
index 52dd8cc7701c5..fe8ba5eaa43b0 100644
--- a/docs/pages/admin-guides/access-controls/sso/azuread.mdx
+++ b/docs/pages/admin-guides/access-controls/sso/azuread.mdx
@@ -3,26 +3,28 @@ title: Teleport Authentication with Azure Active Directory (AD)
 description: How to configure Teleport access with Azure Active Directory.
 ---
 
-This guide will cover how to configure Microsoft Azure Active Directory to issue
-credentials to specific groups of users with a SAML Authentication Connector.
-When used in combination with role-based access control (RBAC), it allows Teleport
+This guide will cover how to configure Microsoft Entra ID to issue credentials
+to specific groups of users with a SAML Authentication Connector. When used in
+combination with role-based access control (RBAC), it allows Teleport
 administrators to define policies like:
 
-- Only members of the "DBA" Azure AD group can connect to PostgreSQL databases.
+- Only members of the "DBA" Microsoft Entra ID group can connect to PostgreSQL
+  databases.
 - Developers must never SSH into production servers.
 
 The following steps configure an example SAML authentication connector matching
-Azure AD groups with security roles. You can choose to configure other options.
+Microsoft Entra ID groups with security roles. You can choose to configure other
+options.
 
 ## Prerequisites
 
 Before you get started, you’ll need:
 
-- An Azure AD admin account with access to creating non-gallery applications
-  (P2 License).
+- A Microsoft Entra ID admin account with access to creating non-gallery
+  applications (P2 License).
 - To register one or more users in the directory.
-- To create at least two security groups in Azure AD and assign one or more
-  users to each group.
+- To create at least two security groups in Microsoft Entra ID and assign one or
+  more users to each group.
 - A Teleport role with access to maintaining `saml` resources. This is available
   in the default `editor` role.
@@ -30,7 +32,7 @@ Before you get started, you’ll need:
 
 - (!docs/pages/includes/tctl.mdx!)
 
-## Step 1/3. Configure Azure AD
+## Step 1/3. Configure Microsoft Entra ID
 
 ### Create an enterprise application
 
diff --git a/docs/pages/admin-guides/access-controls/sso/gitlab.mdx b/docs/pages/admin-guides/access-controls/sso/gitlab.mdx
index b404383ba2650..fcb18323910f4 100644
--- a/docs/pages/admin-guides/access-controls/sso/gitlab.mdx
+++ b/docs/pages/admin-guides/access-controls/sso/gitlab.mdx
@@ -183,7 +183,7 @@ spec:
 - Developers also do not have any "allow rules" i.e. they will not be able to
   see/replay past sessions or re-configure the Teleport cluster.
 
-Create both roles on the auth server:
+Create both roles on the Auth Service:
 
 ```code
 $ tctl create -f admin.yaml
diff --git a/docs/pages/admin-guides/access-controls/sso/sso.mdx b/docs/pages/admin-guides/access-controls/sso/sso.mdx
index 26c0003ea9128..316e5505c20e8 100644
--- a/docs/pages/admin-guides/access-controls/sso/sso.mdx
+++ b/docs/pages/admin-guides/access-controls/sso/sso.mdx
@@ -7,7 +7,7 @@ Teleport users can log in to servers, Kubernetes clusters, databases, web
 applications, and Windows desktops through their organization's Single Sign-On
 (SSO) provider.
 
-- [Azure Active Directory (AD)](azuread.mdx): Configure Azure Active Directory SSO for SSH, Kubernetes, databases, desktops and web apps.
+- [Microsoft Entra ID](azuread.mdx): Configure Microsoft Entra ID SSO for SSH, Kubernetes, databases, desktops and web apps.
 - [Active Directory (ADFS)](adfs.mdx): Configure Windows Active Directory SSO for SSH, Kubernetes, databases, desktops and web apps.
 - [Google Workspace](google-workspace.mdx): Configure Google Workspace SSO for SSH, Kubernetes, databases, desktops and web apps.
 - [GitHub](github-sso.mdx): Configure GitHub SSO for SSH,
@@ -449,7 +449,7 @@ Teleport can also support multiple connectors. For example, a Teleport
 administrator can define and create multiple connector resources using
 `tctl create` as shown above.
-To see all configured connectors, execute this command on the Auth Server:
+To see all configured connectors, execute this command on the Auth Service:
 
 ```code
 $ tctl get connectors
diff --git a/docs/pages/admin-guides/api/getting-started.mdx b/docs/pages/admin-guides/api/getting-started.mdx
index cfdbe207dedc5..34d8438868ec7 100644
--- a/docs/pages/admin-guides/api/getting-started.mdx
+++ b/docs/pages/admin-guides/api/getting-started.mdx
@@ -113,7 +113,7 @@ func main() {
 }
 ```
 
-Now you can run the program and connect the client to the Teleport Auth Server to fetch the server version.
+Now you can run the program and connect the client to the Teleport Auth Service to fetch the server version.
 
 ```code
 $ go run main.go
diff --git a/docs/pages/admin-guides/deploy-a-cluster/deployments/aws-gslb-proxy-peering-ha-deployment.mdx b/docs/pages/admin-guides/deploy-a-cluster/deployments/aws-gslb-proxy-peering-ha-deployment.mdx
index 78bac12f33752..420b3018ae535 100644
--- a/docs/pages/admin-guides/deploy-a-cluster/deployments/aws-gslb-proxy-peering-ha-deployment.mdx
+++ b/docs/pages/admin-guides/deploy-a-cluster/deployments/aws-gslb-proxy-peering-ha-deployment.mdx
@@ -5,7 +5,7 @@ description: "Deploying a high-availability Teleport cluster using Proxy Peering
 
 This deployment architecture features two important design decisions:
 
-- AWS Route 53 latency-based routing is used for global server load balancing
+- Amazon Route 53 latency-based routing is used for global server load balancing
   ([GSLB](https://www.cloudflare.com/learning/cdn/glossary/global-server-load-balancing-gslb/)).
   This allows for efficient distribution of traffic across resources that are globally distributed.
 - Teleport's [Proxy Peering](../../../reference/architecture/proxy-peering.mdx) is used to reduce the total number of tunnel connections in the Teleport cluster.
@@ -22,12 +22,12 @@ entry while also ensuring minimal latency when accessing connected resources.
 - Deployed exclusively in the AWS ecosystem
 - High-availability Auto Scaling group of Auth Service instances that must remain in a single region
 - High-availability Auto Scaling group of Proxy Service instances deployed across multiple regions
-- [AWS Route 53 latency-based routing](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy-latency.html)
+- [Amazon Route 53 latency-based routing](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy-latency.html)
 - [GSLB](https://www.cloudflare.com/learning/cdn/glossary/global-server-load-balancing-gslb/)
 - [Teleport TLS Routing](../../../reference/architecture/tls-routing.mdx) to reduce the number of ports needed to use Teleport
 - [Teleport Proxy Peering](../../../reference/architecture/proxy-peering.mdx) for reducing the number of resource connections
 - [AWS Network Load Balancing](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html)
-- [AWS DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) for cluster state storage
+- [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) for cluster state storage
 - [AWS S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) for session recording storage
 
 ## Advantages of this deployment architecture
 
@@ -37,7 +37,7 @@ entry while also ensuring minimal latency when accessing connected resources.
 - Provides a highly resilient, redundant HA architecture for Teleport that can
   quickly scale with an organization's needs.
 - All required Teleport components can be provisioned within the AWS ecosystem.
-- Using load balancers for the Proxy and Auth Services allows for increased availability
+- Using load balancers for the Proxy Service and Auth Service allows for increased availability
## Disadvantages of this deployment architecture @@ -61,7 +61,7 @@ In other words, this must be a Layer 4 load balancer, not a Layer 7 type="warning" title="Note" > -Cross-zone load balancing is required for the Auth and Proxy service NLB configurations to route +Cross-zone load balancing is required for the Auth Service and Proxy Service NLB configurations to route traffic across multiple zones. Doing this improves resiliency against localized AWS zone outages. @@ -182,7 +182,7 @@ additional settings. In this deployment architecture, [Proxy Peering](../../../reference/architecture/proxy-peering.mdx) is used to restrict the number of connections made from resources to proxies in the Teleport Cluster. -This guide covers the necessary Proxy Peering settings for deploying an HA Teleport Cluster routing resource +This guide covers the necessary Proxy Peering settings for deploying an HA Teleport cluster routing resource traffic with GSLB. ### Auth Service Proxy Peering configuration @@ -196,7 +196,7 @@ auth_service: type: proxy_peering agent_connection_count: 2 ``` -Reference the [Auth Server configuration](../../../reference/config.mdx) reference page +Reference the [Auth Service configuration](../../../reference/config.mdx) reference page for additional settings. ### Proxy Service Proxy Peering configuration diff --git a/docs/pages/admin-guides/deploy-a-cluster/deployments/aws-ha-autoscale-cluster-terraform.mdx b/docs/pages/admin-guides/deploy-a-cluster/deployments/aws-ha-autoscale-cluster-terraform.mdx index cb6bb3e42a5c7..421b7645decbe 100644 --- a/docs/pages/admin-guides/deploy-a-cluster/deployments/aws-ha-autoscale-cluster-terraform.mdx +++ b/docs/pages/admin-guides/deploy-a-cluster/deployments/aws-ha-autoscale-cluster-terraform.mdx @@ -219,7 +219,7 @@ here. The license file isn't used in Teleport Community Edition installs.) 
 $ export TF_VAR_route53_zone="example.com"
 ```
 
-Our Terraform setup requires you to have your domain provisioned in AWS Route 53 - it will automatically add
+Our Terraform setup requires you to have your domain provisioned in Amazon Route 53 - it will automatically add
 DNS records for [`route53_domain`](#route53\_domain) as set up below. You can list these with this command:
 
 ```code
@@ -367,7 +367,7 @@ $ export TF_VAR_enable_auth_asg_instance_refresh="false"
 ```
 
 This variable can be used to enable automatic instance refresh on the Teleport
-**auth server** AWS Autoscaling Group (ASG) - the refresh is triggered by
+**Auth Service** AWS Autoscaling Group (ASG) - the refresh is triggered by
 changes to the launch template or configuration.
 
 Enable the auth ASG instance refresh with caution - upgrading the version of
 Teleport will trigger an instance refresh and **auth servers must be scaled down
diff --git a/docs/pages/admin-guides/deploy-a-cluster/deployments/aws-starter-cluster-terraform.mdx b/docs/pages/admin-guides/deploy-a-cluster/deployments/aws-starter-cluster-terraform.mdx
index 9a3556b489d07..e5cd6d02c83e9 100644
--- a/docs/pages/admin-guides/deploy-a-cluster/deployments/aws-starter-cluster-terraform.mdx
+++ b/docs/pages/admin-guides/deploy-a-cluster/deployments/aws-starter-cluster-terraform.mdx
@@ -9,14 +9,14 @@ and describe how to manage the resulting Teleport deployment.
 This module will deploy the following components:
 
 - One AWS EC2 instance running the Teleport Auth Service, Proxy Service and SSH Service components
-- AWS DynamoDB tables for storing the Teleport backend database and audit events
+- Amazon DynamoDB tables for storing the Teleport backend database and audit events
 - An AWS S3 bucket for storing Teleport session recordings
 - A minimal AWS IAM role granting permissions for the EC2 instance to use DynamoDB and S3
 - An AWS security group restricting inbound traffic to the EC2 instance
-- An AWS Route 53 DNS record pointing to the subdomain you control and choose during installation
+- An Amazon Route 53 DNS record pointing to the subdomain you control and choose during installation
 
 It also optionally deploys the following components when ACM is enabled:
-- An AWS ACM certificate for the subdomain in AWS Route 53 that you control and choose during installation
+- An Amazon ACM certificate for the subdomain in Amazon Route 53 that you control and choose during installation
 - An AWS Application Load Balancer using the above ACM certificate to secure incoming traffic
 
 More details are [provided below](#reference-deployment-defaults).
@@ -232,8 +232,10 @@ here. The license file isn't used in Teleport Community Edition installs.)
 
 $ export TF_VAR_route53_zone="example.com"
 ```
 
-Our Terraform setup requires you to have your domain provisioned in AWS Route 53 - it will automatically add
-DNS records for [`route53_domain`](#route53\_domain) as set up below. You can list these with this command:
+Our Terraform setup requires you to have your domain provisioned in Amazon Route
+53 - it will automatically add DNS records for
+[`route53_domain`](#route53\_domain) as set up below. You can list these with
+this command:
 
 ```code
 $ aws route53 list-hosted-zones --query "HostedZones[*].Name" --output json
diff --git a/docs/pages/admin-guides/deploy-a-cluster/deployments/gcp.mdx b/docs/pages/admin-guides/deploy-a-cluster/deployments/gcp.mdx
index aee699f103a1d..24ed2c6fcac79 100644
--- a/docs/pages/admin-guides/deploy-a-cluster/deployments/gcp.mdx
+++ b/docs/pages/admin-guides/deploy-a-cluster/deployments/gcp.mdx
@@ -74,7 +74,7 @@ updates to keep individual Auth Servers in sync, and requires Firestore configur
 in native mode.
 
 To configure Teleport to store audit events in Firestore, add the following to
-the teleport section of your Auth Server's config file (by default it's `/etc/teleport.yaml`):
+the teleport section of your Auth Service's config file (by default it's `/etc/teleport.yaml`):
 
 ```yaml
 teleport:
@@ -123,7 +123,7 @@ Cloud DNS is used to set up the public URL of the Teleport Proxy.
 
 ### Access: Service accounts
 
-The Teleport Auth Server will need to read and write to Firestore and
+The Teleport Auth Service will need to read and write to Firestore and
 Google Cloud Storage. For this you will need a Service Account with the correct
 permissions.
 
@@ -160,7 +160,7 @@ custom role and must be used in later steps.
 $ export IAM_ROLE=
 ```
 
-If you don't already have a GCP service account for your Teleport Auth Server
+If you don't already have a GCP service account for your Teleport Auth Service
 you can create one with the following command, otherwise use your existing
 service account.
 
@@ -219,8 +219,8 @@ Follow install instructions from our [installation page](../../../installation.m
 
 We recommend configuring Teleport as per the below steps:
 
-
-**1. Configure Teleport Auth Server** using the below example `teleport.yaml`,and start it
+
+**1. Configure Teleport Auth Service** using the below example `teleport.yaml`,and start it
 using [systemd](../../management/admin/daemon.mdx).
 The DEB/RPM installations will automatically include the `systemd` configuration.
 
@@ -255,7 +255,7 @@ ssh_service:
 ```
 
-**1. Configure Teleport Auth Server** using the below example `teleport.yaml`, and start it
+**1. Configure Teleport Auth Service** using the below example `teleport.yaml`, and start it
 using [systemd](../../management/admin/daemon.mdx).
 The DEB/RPM installations will automatically include the `systemd` configuration.
diff --git a/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/aws.mdx b/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/aws.mdx
index de375be78b5df..5338a038ae712 100644
--- a/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/aws.mdx
+++ b/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/aws.mdx
@@ -446,7 +446,7 @@ You'll need to set up a DNS `A` record for `teleport.example.com`. In our exampl
 
 (!docs/pages/includes/dns-app-access.mdx!)
 
-Here's how to do this in a hosted zone with AWS Route 53:
+Here's how to do this in a hosted zone with Amazon Route 53:
 
@@ -576,7 +576,7 @@ $ aws route53 get-change --id "${CHANGEID?}" | jq '.ChangeInfo.Status'
 
 ## Step 7/7. Create a Teleport user
 
-Create a user to be able to log into Teleport. This needs to be done on the Teleport auth server,
+Create a user to be able to log into Teleport. This needs to be done on the Teleport Auth Service,
 so we can run the command using `kubectl`:
 
diff --git a/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/azure.mdx b/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/azure.mdx
index 201cf6fd92bc5..0f469787bfe05 100644
--- a/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/azure.mdx
+++ b/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/azure.mdx
@@ -252,7 +252,7 @@ $ az network dns record-set a add-record --resource-group ${ZONE_RG} --zone-name
 
 ## Step 5/5. Create a Teleport user
 
-Create a user to be able to log into Teleport. This needs to be done on the Teleport auth server,
+Create a user to be able to log into Teleport. This needs to be done on the Teleport Auth Service,
 so we can run the command using `kubectl`:
 
diff --git a/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/custom.mdx b/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/custom.mdx
index 494e295fb9f7c..68f50615d6452 100644
--- a/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/custom.mdx
+++ b/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/custom.mdx
@@ -204,7 +204,7 @@ replicaset.apps/teleport-proxy-c6bf55cfc 2 2 2 22h
 
 If you're not migrating an existing Teleport cluster, you'll need to create a
 user to be able to log into Teleport. This needs to be done on the Teleport
-auth server, so we can run the command using `kubectl`:
+Auth Service, so we can run the command using `kubectl`:
 
diff --git a/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/digitalocean.mdx b/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/digitalocean.mdx
index b689c0e518cdd..8503dc760c106 100644
--- a/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/digitalocean.mdx
+++ b/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/digitalocean.mdx
@@ -278,5 +278,5 @@ Teleport:
 - Connect another Kubernetes cluster to Teleport by [deploying the Teleport Kubernetes Service](../../../enroll-resources/kubernetes-access/getting-started.mdx)
 - [Set up Machine ID with Kubernetes](../../../enroll-resources/machine-id/access-guides/kubernetes.mdx)
-- [Single-Sign On and Kubernetes Access Control](../../../enroll-resources/kubernetes-access/controls.mdx)
+- [Single-Sign On and RBAC for Kubernetes Clusters](../../../enroll-resources/kubernetes-access/controls.mdx)
 
diff --git a/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/gcp.mdx b/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/gcp.mdx
index 024f982a16ed5..5a4b44f4c499a 100644
--- a/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/gcp.mdx
+++ b/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/gcp.mdx
@@ -416,7 +416,7 @@ $ gcloud dns record-sets transaction execute --zone="${MYZONE?}"
 
 ## Step 6/6. Create a Teleport user
 
-Create a user to be able to log into Teleport. This needs to be done on the Teleport auth server,
+Create a user to be able to log into Teleport. This needs to be done on the Teleport Auth Service,
 so we can run the command using `kubectl`:
 
diff --git a/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/helm-deployments.mdx b/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/helm-deployments.mdx
index e35528834cf72..b11f11f774436 100644
--- a/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/helm-deployments.mdx
+++ b/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/helm-deployments.mdx
@@ -34,5 +34,4 @@ our `teleport-cluster` Helm chart.
 
 ## Migration Guides
 
-- [Migrating from v11 to v12](migration-v12.mdx)
 - [Kubernetes 1.25 and PSP removal](migration-kubernetes-1-25-psp.mdx)
diff --git a/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/migration-v12.mdx b/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/migration-v12.mdx
deleted file mode 100644
index 7b504b29de8bc..0000000000000
--- a/docs/pages/admin-guides/deploy-a-cluster/helm-deployments/migration-v12.mdx
+++ /dev/null
@@ -1,234 +0,0 @@
----
-title: Migrating to teleport-cluster v12
-description: How to upgrade to teleport-cluster Helm chart version 12
----
-
-This guide covers the major changes of the `teleport-cluster` v12 chart
-and how to upgrade existing releases from version 11 to version 12.
-
-{/*TODO(ptgott): Remove this guide in v16*/}
-
-## Changes summary
-
-The main changes in version 12 of the `teleport-cluster` chart are:
-
-- PodSecurityPolicy has been removed on Kubernetes 1.23 and 1.24
-- Teleport now deploys its Auth and Proxy Services as separate pods.
- Running Teleport with this new topology allows it to be more resilient to - disruptions and scale better. -- Proxies are now deployed as stateless workloads. The `proxy` session recording - mode uploads recordings asynchronously. Non-uploaded records might be lost - during rollouts (config changes or version upgrades for example). - `proxy-sync` ensures consistency and does not have this limitation. -- `custom` mode has been removed as it was broken by the topology change. - It is replaced by a new configuration override mechanism allowing you to pass - arbitrary Teleport configuration values. -- The values `standalone.*` that were previously deprecated in favor of `persistence` - have been removed. -- The chart can now be scaled up in `standalone` mode. Proxy replication requires - a TLS certificate; Auth replication requires using [HA storage backends](../../../reference/backends.mdx). - - -The chart has always been versioned with Teleport but was often compatible with -the previous Teleport major version. This is not the case for v12. Using the chart -v12 requires at least Teleport v12. - - -## How to upgrade - -If you are running Kubernetes 1.23 and above, follow our -[Kubernetes 1.25 PSP removal guide](./migration-kubernetes-1-25-psp.mdx). - -Then, the upgrade path mainly depends on the `chartMode` used. If you used a "managed" -mode like `aws`, `gcp` or `standalone` it should be relatively straightforward. -If you relied on the `custom` chart mode, you will have to perform configuration changes. - -Before upgrading, always: - -- [backup the cluster content](../../management/operations/backup-restore.mdx), -- test the upgrade in a non-production environment. - - -During the upgrade, Kubernetes will delete existing deployments and create new ones. -**This is not seamless and will cause some downtime** until the new pods are up and all health checks are passing. -This usually takes around 5 minutes. 
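The session-recording caveat in the changes summary above can be addressed directly in the chart values. As a sketch (the `sessionRecording` value also appears in the conversion example later in this guide):

```yaml
# Helm values for the teleport-cluster chart: prefer the synchronous
# upload mode so a rollout of the stateless proxy pods cannot drop
# recordings that have not been uploaded yet.
sessionRecording: proxy-sync
```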
- - -### If you use `gcp`, `aws` or `standalone` mode - -The upgrade should not require configuration changes. Make sure you don't rely -on `standalone.*` for storage configuration (if you do, switch to using -`persistence` values instead). - -Upgrading to v12 will increase the amount of pods deployed as it will deploy auth -and proxies separately. The chart will try to deploy multiple proxy replicas when -possible (proxies can be replicated if certs are provided through a secret or -`cert-manager`). Make sure you have enough room in your Kubernetes cluster to run -the additional Teleport pods: - -- `aws` and `gcp` will deploy twice the amount of pods -- `standalone` will deploy 1 or 2 additional pods (depending if the proxy can be replicated) - -The additional pods might take more time than before to deploy and become ready. -If you are running helm with `--wait` or `--atomic` make sure to increase your -timeouts to at least 10 minutes. - -### If you use `custom` mode - -The `custom` mode worked by passing the Teleport configuration through a ConfigMap. -Due to the version 12 topology change, existing `custom` configuration won't work -as-is and will need to be split in two separate configurations: one for the proxies -and one for the auths. - -To avoid a surprise breaking upgrade, the `teleport-cluster` v12 chart will refuse -to deploy in `custom` mode and point you to this migration guide. - -Version 12 has introduced a new way to pass arbitrary configuration to Teleport -without having to write a full configuration file. If you were using `custom` mode -because of a missing chart feature (like etcd backend support for example) this -might be a better fit for you than managing a fully-custom config. - -#### If you deploy a Teleport cluster - -You can now use the existing modes `aws`, `gcp` and `standalone` and pass your custom -configuration overrides through the `auth.teleportConfig` and `proxy.teleportConfig` -values. 
For most use-cases this is the recommended setup as you will automatically -benefit from future configuration upgrades. - -You must split the configuration in two configurations, one for each node type: - -- The `proxy` configuration must contain at least the `proxy_service` section - and the `teleport` section without the `storage` part. -- The `auth` configuration must contain at least the `auth_service` and - `teleport` sections. - -For example - a v11 custom configuration that looked like this: - -```yaml -teleport: - log: - output: stderr - severity: INFO -auth_service: - enabled: true - cluster_name: custom.example.com - tokens: # This is custom configuration - - "proxy,node:(=presets.tokens.first=)" - - "trusted_cluster:(=presets.tokens.second=)" - listen_addr: 0.0.0.0:3025 - public_addr: custom.example.com:3025 - session_recording: node-sync -proxy_service: - enabled: true - listen_addr: 0.0.0.0:3080 - public_addr: custom.example.com:443 - ssh_public_addr: ssh-custom.example.com:3023 # This is custom configuration -``` - -Can be converted into these values: - -```yaml -chartMode: standalone -clusterName: custom.example.com - -sessionRecording: node-sync - -auth: - teleportConfig: - auth_service: - tokens: - - "proxy,node:(=presets.tokens.first=)" - - "trusted_cluster:(=presets.tokens.second=)" - -proxy: - teleportConfig: - proxy_service: - ssh_public_addr: ssh-custom.example.com:3023 -``` - - -`teleport.cluster_name` and `teleport.auth_service.authentication.webauthn.rp_id` MUST NOT change. - - -#### If you deploy Teleport nodes - -If you used the `teleport-cluster` chart in `custom` mode to deploy only services -like `app_service`, `db_service`, `kube_service`, `windows_service` or `discovery_service`, -you should use the `teleport-kube-agent` chart for this purpose. - -The chart offers values to configure `app_service`, `kube_service` and `db_service`, -but other services can be configured through the `teleportConfig` value. 
- -To migrate to the `teleport-kube-agent` chart from `teleport-cluster`, -use the following values: - -```yaml -proxyAddr: teleport.example.com -# pass the token through joinParams instead of `teleportConfig` so it lives -# in a Kubernetes Secret instead of a ConfigMap -joinParams: - method: token - tokenName: (=presets.tokens.first=) - -# Roles can be empty if you pass all the configuration through `teleportConfig` -roles: "" - -# Put all your previous `teleport.yaml` values except the `teleport` section below -teleportConfig: - # kubernetes_service: - # enabled: true - # [...] - # discovery_service: - # enabled: true - # [...] -``` - -## Going further - -The new topology allows you to replicate the proxies to increase availability. -You might also want to tune settings like Kubernetes resources or affinities. - -By default, each value applies to both `proxy` and `auth` pods, e.g.: - -```yaml -resources: - requests: - cpu: "1" - memory: "2GiB" - limits: - cpu: "1" - memory: "2GiB" - -highAvailability: - requireAntiAffinity: true -``` - -But you can scope the value to a specific pod set by nesting it under the `proxy` -or `auth` values. 
If both the value at the root and a set-specific value are set, -the specific value takes precedence: - -```yaml -# By default, all pods use those resources -resources: - requests: - cpu: "1" - memory: "2GiB" - limits: - cpu: "1" - memory: "2GiB" - -proxy: - # But the proxy pods have have different resource requests and no cpu limits - resources: - requests: - cpu: "0.5" - memory: "1GiB" - limits: - cpu: ~ # Generic and specific config are merged: if you want to unset a value, you must do it explicitly - memory: "1GiB" - -auth: - # Only auth pods will require an anti-affinity - highAvailability: - requireAntiAffinity: true -``` diff --git a/docs/pages/admin-guides/deploy-a-cluster/hsm.mdx b/docs/pages/admin-guides/deploy-a-cluster/hsm.mdx index 804c0ad9b6cb6..b551372b232fa 100644 --- a/docs/pages/admin-guides/deploy-a-cluster/hsm.mdx +++ b/docs/pages/admin-guides/deploy-a-cluster/hsm.mdx @@ -11,7 +11,7 @@ hardware security module (HSM) to store and handle private keys. - Teleport v(=teleport.version=) Enterprise (self-hosted). - (!docs/pages/includes/tctl.mdx!) -- An HSM reachable from your Teleport auth server. +- An HSM reachable from your Teleport Auth Service. - The PKCS#11 module for your HSM. (!docs/pages/includes/enterprise/hsm-warning.mdx!) @@ -22,12 +22,13 @@ CloudHSM, YubiHSM2, and SoftHSM2. ## Step 1/5. Set up your HSM You will need to set up your HSM and make sure that it is accessible from your -Teleport Auth Server. You should create a unique HSM user or token for Teleport +Teleport Auth Service. You should create a unique HSM user or token for Teleport to use. -1. Create a CloudHSM cluster in the VPC where you will run your Teleport Auth Server. +1. Create a CloudHSM cluster in the VPC where you will run your Teleport Auth + Service. https://docs.aws.amazon.com/cloudhsm/latest/userguide/create-cluster.html 1. Wait for the newly created cluster to enter the "Uninitialized" state. @@ -106,9 +107,9 @@ to use. 1. 
A security group with the same name as your CloudHSM cluster will have been automatically created when you created the cluster. Attach this security group to the EC2 instance where you will run your - Teleport Auth Server to allow traffic between the Auth Server and your HSM. + Teleport Auth Service to allow traffic between the Auth Service and your HSM. -1. On the Auth Server EC2 instance, install the CloudHSM CLI for the CloudHSM +1. On the Auth Service EC2 instance, install the CloudHSM CLI for the CloudHSM Client SDK 5. https://docs.aws.amazon.com/cloudhsm/latest/userguide/gs_cloudhsm_cli-install.html @@ -163,7 +164,7 @@ to use. aws-cloudhsm > quit ``` -1. Install the PKCS#11 library for the Client SDK 5 on the same Auth Server EC2 instance +1. Install the PKCS#11 library for the Client SDK 5 on the same Auth Service EC2 instance https://docs.aws.amazon.com/cloudhsm/latest/userguide/pkcs11-library-install.html Bootstrap the PKCS#11 library by configuring the HSM IP address. @@ -243,7 +244,7 @@ to use. 1. Set the environment variable `YUBIHSM_PKCS11_CONF` to the path of your configuration file. This will be read by the PKCS#11 module and needs to be set in the Teleport - auth server's environment. + Auth Service's environment. ```code $ export YUBIHSM_PKCS11_CONF=/etc/yubihsm_pkcs11.conf ``` @@ -254,7 +255,7 @@ to use. To configure Teleport to use an HSM for all CA private key generation, storage, and signing, include the `ca_key_params` section in `/etc/teleport.yaml` on the -auth server. +Auth Service. 
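A minimal sketch of that `ca_key_params` section, assuming a YubiHSM2-style PKCS#11 module (the module path, token label, and PIN below are placeholders, and the exact field names should be checked against the Teleport configuration reference):

```yaml
auth_service:
  ca_key_params:
    pkcs11:
      # Path to the PKCS#11 module shipped with your HSM's client SDK
      module_path: /usr/local/lib/pkcs11/yubihsm_pkcs11.so
      # Label of the token or partition created for Teleport
      token_label: "teleport"
      # PIN (or HSM credential) for the Teleport HSM user
      pin: "0001password"
```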
diff --git a/docs/pages/admin-guides/infrastructure-as-code/terraform-starter/rbac.mdx b/docs/pages/admin-guides/infrastructure-as-code/terraform-starter/rbac.mdx index 865192382bc8a..22bd412e157b0 100644 --- a/docs/pages/admin-guides/infrastructure-as-code/terraform-starter/rbac.mdx +++ b/docs/pages/admin-guides/infrastructure-as-code/terraform-starter/rbac.mdx @@ -291,7 +291,7 @@ If you plan to skip this step, make sure to remove the `module "saml"` or Workspace](../../../admin-guides/access-controls/sso/google-workspace.mdx#step-14-configure-google-workspace) - [OneLogin](../../../admin-guides/access-controls/sso/one-login.mdx#step-13-create-teleport-application-in-onelogin) - [Azure - AD](../../../admin-guides/access-controls/sso/azuread.mdx#step-13-configure-azure-ad) + AD](../../../admin-guides/access-controls/sso/azuread.mdx#step-13-configure-microsoft-entra-id) - [Okta](../../../admin-guides/access-controls/sso/okta.mdx#step-24-configure-okta) 1. Configure the redirect URL (for OIDC) or assertion consumer service (for SAML): diff --git a/docs/pages/admin-guides/management/admin/troubleshooting.mdx b/docs/pages/admin-guides/management/admin/troubleshooting.mdx index 2f4db10fd9ccc..294e63fb0d02a 100644 --- a/docs/pages/admin-guides/management/admin/troubleshooting.mdx +++ b/docs/pages/admin-guides/management/admin/troubleshooting.mdx @@ -7,7 +7,7 @@ In this guide, we will explain how to address issues or unexpected behavior in y Teleport cluster. You can use these steps to get more visibility into the `teleport` process so -you can troubleshoot the Auth Service, Proxy Service, and Teleport agent +you can troubleshoot the Auth Service, Proxy Service, and Teleport Agent services such as the Application Service and Database Service. ## Prerequisites @@ -208,7 +208,7 @@ purposes and seeing it within your logs is not necessarily an indication that anything is incorrect. 
Firstly, Teleport uses this value within certificates (as a DNS Subject -Alternative Name) issued to the Auth and Proxy Service. Teleport clients can +Alternative Name) issued to the Auth Service and Proxy Service. Teleport clients can then use this value to validate the service's certificates during the TLS handshake regardless of the service address as long as the client already has a copy of the cluster's certificate authorities. This is important as there are @@ -220,7 +220,7 @@ HTTP requests to the Teleport API. This is because the Teleport API client uses special logic to open the connection to the Auth Service to make the request, rather than connecting to a single address as a typical client may do. This special logic is necessary for the client to be able to support connecting to a -list of Auth Services or to be able to connect to the Auth Service through a +list of Auth Service instances or to be able to connect to the Auth Service through a tunnel via the Proxy Service. This means that `teleport.cluster.local` appears in log messages that show the URL of a request made to the Auth Service, and does not explicitly indicate that something is misconfigured. diff --git a/docs/pages/admin-guides/management/admin/trustedclusters.mdx b/docs/pages/admin-guides/management/admin/trustedclusters.mdx index ae6bbe6428094..629b63565e886 100644 --- a/docs/pages/admin-guides/management/admin/trustedclusters.mdx +++ b/docs/pages/admin-guides/management/admin/trustedclusters.mdx @@ -180,7 +180,7 @@ logging in to the server in the leaf cluster. To add a user and role for accessing the trusted cluster: -1. Open a terminal shell on the server running the Teleport agent in the leaf cluster. +1. Open a terminal shell on the server running the Teleport Agent in the leaf cluster. 1. 
Add the local `visitor` user and create a home directory for the user by running the following command: @@ -223,7 +223,7 @@ your Teleport username: ``` You must explicitly allow access to nodes with labels to SSH into the server running - the Teleport agent. In this example, the `visitor` login is allowed access to any server. + the Teleport Agent. In this example, the `visitor` login is allowed access to any server. 1. Create the `visitor` role by running the following command: @@ -570,7 +570,7 @@ running the following command: ``` -1. Confirm that the server running the Teleport agent is joined to the leaf cluster by +1. Confirm that the server running the Teleport Agent is joined to the leaf cluster by running a command similar to the following: ```code @@ -893,8 +893,9 @@ command to start the teleport service: teleport start --debug` ``` -You can also enable verbose output by updating the configuration file for both Auth Services. -Open the `/etc/teleport.yaml` configuration file and add `DEBUG` to the `log` configuration section: +You can also enable verbose output by updating the configuration file for both +Auth Service instances. Open the `/etc/teleport.yaml` configuration file and +add `DEBUG` to the `log` configuration section: ```yaml # Snippet from /etc/teleport.yaml diff --git a/docs/pages/admin-guides/management/admin/users.mdx b/docs/pages/admin-guides/management/admin/users.mdx index 6e22ca3c22c47..0ae4068bcc5e1 100644 --- a/docs/pages/admin-guides/management/admin/users.mdx +++ b/docs/pages/admin-guides/management/admin/users.mdx @@ -61,7 +61,7 @@ NOTE: Make sure :443 points at a Teleport proxy which users can acce The user completes registration by visiting this URL in their web browser, picking a password, and configuring multi-factor authentication. 
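As a sketch of the `log` change described in the trusted-clusters troubleshooting hunk above, consistent with the `teleport.log` configuration shown earlier in this diff (other severity values include `INFO`, `WARN`, and `ERROR`):

```yaml
# Snippet from /etc/teleport.yaml on each Auth Service instance
teleport:
  log:
    output: stderr
    severity: DEBUG
```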
If the -credentials are correct, the Teleport Auth Server generates and signs a new +credentials are correct, the Teleport Auth Service generates and signs a new certificate, and the client stores this key and will use it for subsequent logins. diff --git a/docs/pages/admin-guides/management/guides/ec2-tags.mdx b/docs/pages/admin-guides/management/guides/ec2-tags.mdx index 93338221fd7b6..aab4d5f6c65f1 100644 --- a/docs/pages/admin-guides/management/guides/ec2-tags.mdx +++ b/docs/pages/admin-guides/management/guides/ec2-tags.mdx @@ -27,8 +27,8 @@ fakehost.example.com 127.0.0.1:3022 env=example,hostname=ip-172-31-53-70,aws/Nam ## Prerequisites (!docs/pages/includes/edition-prereqs-tabs.mdx!) -- One Teleport agent running on an Amazon EC2 instance. See - [our guides](../../../enroll-resources/agents/join-services-to-your-cluster/join-services-to-your-cluster.mdx) for how to set up Teleport agents. +- One Teleport Agent running on an Amazon EC2 instance. See + [our guides](../../../enroll-resources/agents/join-services-to-your-cluster/join-services-to-your-cluster.mdx) for how to set up Teleport Agents. ## Enable tags in instance metadata diff --git a/docs/pages/admin-guides/management/guides/gcp-tags.mdx b/docs/pages/admin-guides/management/guides/gcp-tags.mdx index 031f8679b5bef..937d3ef378e68 100644 --- a/docs/pages/admin-guides/management/guides/gcp-tags.mdx +++ b/docs/pages/admin-guides/management/guides/gcp-tags.mdx @@ -35,8 +35,8 @@ fakehost.example.com 127.0.0.1:3022 gcp/label/testing=yes,gcp/tag/environment=st ## Prerequisites (!docs/pages/includes/edition-prereqs-tabs.mdx!) -- One Teleport agent running on a GCP Compute instance. See - [our guides](../../../enroll-resources/agents/join-services-to-your-cluster/join-services-to-your-cluster.mdx) for how to set up Teleport agents. +- One Teleport Agent running on a GCP Compute instance. 
See + [our guides](../../../enroll-resources/agents/join-services-to-your-cluster/join-services-to-your-cluster.mdx) for how to set up Teleport Agents. ## Configure service account on instances with Teleport nodes @@ -76,4 +76,4 @@ $ gcloud projects add-iam-policy-binding -If your `second_factor` configuration is set to `off` and a user creates an account without a second factor, changing `second_factor` to a value that requires MFA will force that user to authenticate with a credential they have not registered. This will lock them out of their account. You have two ways to avoid this scenario: +If your `second_factor` configuration is set to `off` and a user creates an account without a multi-factor credential, changing `second_factor` to a value that requires MFA will force that user to authenticate with a credential they have not registered. This will lock them out of their account. You have two ways to avoid this scenario: - Set `second_factor` to `optional` until you have confirmed that existing users have enrolled their MFA devices. - Run the `tctl users reset ` command to force a user to enter new credentials, including any required MFA device. diff --git a/docs/pages/admin-guides/management/security/restrict-privileges.mdx b/docs/pages/admin-guides/management/security/restrict-privileges.mdx index 006306224cce1..7b2f15dcf6fb4 100644 --- a/docs/pages/admin-guides/management/security/restrict-privileges.mdx +++ b/docs/pages/admin-guides/management/security/restrict-privileges.mdx @@ -28,7 +28,7 @@ To prevent changes to labels from granting elevated privileges, you should: Don't give users permissive roles when a more restrictive role will do. For example, don't assign users the preset `access` or `editor` roles, which give them permission to access and edit all cluster resources. Instead, define roles with the minimum required permissions for each - user and configure **access requests** or **access lists** to provide temporary elevated permissions.
+ user and configure **Access Requests** or **Access Lists** to provide temporary elevated permissions. ## Restrict root access @@ -37,7 +37,7 @@ accounts with administrative privileges. Privileged users could manipulate Teleport agents in ways that affect your authorization controls. For example, a privileged user with access to the Teleport configuration file might modify settings to bypass role-based access controls. Similarly, a user with elevated access to the Teleport Auth Service, -Teleport Proxy Service, or Teleport agent services might replace the Teleport executable to infiltrate or +Teleport Proxy Service, or Teleport Agent services might replace the Teleport executable to infiltrate or exfiltrate cluster systems, manipulate the discovery of dynamic resources, compromise sensitive credentials and sessions, or obscure auditing. diff --git a/docs/pages/admin-guides/teleport-policy/policy-connections.mdx b/docs/pages/admin-guides/teleport-policy/policy-connections.mdx index 85120c116e6f0..890381551a760 100644 --- a/docs/pages/admin-guides/teleport-policy/policy-connections.mdx +++ b/docs/pages/admin-guides/teleport-policy/policy-connections.mdx @@ -44,7 +44,7 @@ External users (created from authentication connectors for GitHub, SAML, etc.) a User Groups are created from Teleport roles and Access Requests. Roles create User Groups where the members are the users that have that role. Access requests create a temporary User Group where the members are the users that -got the access through the accepted access request. +got the access through the accepted Access Request. ### Actions @@ -102,4 +102,4 @@ In the graph, database objects are connected by multiple edges: Resources are created from Teleport resources like nodes, databases, and Kubernetes clusters. ## Next steps -- Uncover [privileges, permissions, and construct SQL queries](./policy-how-to-use.mdx) in Teleport Policy. 
\ No newline at end of file +- Uncover [privileges, permissions, and construct SQL queries](./policy-how-to-use.mdx) in Teleport Policy. diff --git a/docs/pages/reference/helm-reference/teleport-cluster.mdx b/docs/pages/reference/helm-reference/teleport-cluster.mdx index f1dfa6b81cc11..d165ee4274edb 100644 --- a/docs/pages/reference/helm-reference/teleport-cluster.mdx +++ b/docs/pages/reference/helm-reference/teleport-cluster.mdx @@ -41,12 +41,6 @@ Get started with a guide for each mode: | `azure` | Leverages Azure managed services to store data. | [Running an HA Teleport cluster using a Microsoft Azure AKS cluster](../../admin-guides/deploy-a-cluster/helm-deployments/azure.mdx) | | `scratch` (v12 and above) | Generates empty Teleport configuration. User must pass their own config. This is discouraged, use `standalone` mode with [`auth.teleportConfig`](#authteleportconfig) and [`proxy.teleportConfig`](#proxyteleportconfig) instead. | [Running a Teleport cluster with a custom config](../../admin-guides/deploy-a-cluster/helm-deployments/custom.mdx) | - -`custom` mode has been removed in Teleport version 12. See the [version 12 -migration guide](../../admin-guides/deploy-a-cluster/helm-deployments/migration-v12.mdx) for -more information. - - The chart is versioned with Teleport. No compatibility guarantees are ensured between new charts and previous major Teleport versions. It is strongly recommended