<!-- markdownlint-disable MD024 -->
# Configuration
Telegraf's configuration file is written using [TOML][] and is composed of
three sections: [global tags][], [agent][] settings, and [plugins][].
View the default [telegraf.conf][] config file with all available plugins.
## Generating a Configuration File
A default config file can be generated by telegraf:
```sh
telegraf config > telegraf.conf
```
To generate a file with specific inputs and outputs, you can use the
`--input-filter` and `--output-filter` flags:
```sh
telegraf config --input-filter cpu:mem:net:swap --output-filter influxdb:kafka
```
[View the full list][flags] of Telegraf commands and flags, or run `telegraf --help`.
### Windows PowerShell v5 Encoding
In PowerShell 5, the default encoding is UTF-16LE rather than UTF-8. Telegraf
expects a valid UTF-8 file. This is not an issue with PowerShell 6 or newer,
nor with the Command Prompt or the Git Bash shell.
As such, users will need to specify the output encoding when generating a full
configuration file:
```sh
telegraf.exe config | Out-File -Encoding utf8 telegraf.conf
```
This will generate a UTF-8 encoded file with a BOM. However, Telegraf can
handle the leading BOM.
## Configuration Loading
The location of the configuration file can be set via the `--config` command
line flag.
When the `--config-directory` command line flag is used files ending with
`.conf` in the specified directory will also be included in the Telegraf
configuration.
On most systems, the default locations are `/etc/telegraf/telegraf.conf` for
the main configuration file and `/etc/telegraf/telegraf.d` for the directory of
configuration files.
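For example, the following invocation loads a main file and a drop-in directory
explicitly (the paths are illustrative):

```sh
telegraf --config /etc/telegraf/telegraf.conf \
  --config-directory /etc/telegraf/telegraf.d
```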
## Environment Variables
Environment variables can be used anywhere in the config file; simply surround
them with `${}`. Replacement occurs before file parsing. For strings
the variable must be within quotes, e.g., `"${STR_VAR}"`; for numbers and booleans
they should be unquoted, e.g., `${INT_VAR}`, `${BOOL_VAR}`.
When using the `.deb` or `.rpm` packages, you can define environment variables
in the `/etc/default/telegraf` file.
**Example**:
`/etc/default/telegraf`:
For InfluxDB 1.x:
```shell
USER="alice"
INFLUX_URL="http://localhost:8086"
INFLUX_SKIP_DATABASE_CREATION="true"
INFLUX_PASSWORD="monkey123"
```
For InfluxDB OSS 2:
```shell
INFLUX_HOST="http://localhost:8086" # used to be 9999
INFLUX_TOKEN="replace_with_your_token"
INFLUX_ORG="your_username"
INFLUX_BUCKET="replace_with_your_bucket_name"
```
For InfluxDB Cloud 2:
```shell
# For AWS West (Oregon)
INFLUX_HOST="https://us-west-2-1.aws.cloud2.influxdata.com"
# Other Cloud URLs at https://v2.docs.influxdata.com/v2.0/reference/urls/#influxdb-cloud-urls
INFLUX_TOKEN="replace_with_your_token"
INFLUX_ORG="[email protected]"
INFLUX_BUCKET="replace_with_your_bucket_name"
```
`/etc/telegraf.conf`:
```toml
[global_tags]
user = "${USER}"
[[inputs.mem]]
# For InfluxDB 1.x:
[[outputs.influxdb]]
urls = ["${INFLUX_URL}"]
skip_database_creation = ${INFLUX_SKIP_DATABASE_CREATION}
password = "${INFLUX_PASSWORD}"
# For InfluxDB OSS 2:
[[outputs.influxdb_v2]]
urls = ["${INFLUX_HOST}"]
token = "${INFLUX_TOKEN}"
organization = "${INFLUX_ORG}"
bucket = "${INFLUX_BUCKET}"
# For InfluxDB Cloud 2:
[[outputs.influxdb_v2]]
urls = ["${INFLUX_HOST}"]
token = "${INFLUX_TOKEN}"
organization = "${INFLUX_ORG}"
bucket = "${INFLUX_BUCKET}"
```
The above files will produce the following effective configuration file to be
parsed:
```toml
[global_tags]
user = "alice"
[[inputs.mem]]
# For InfluxDB 1.x:
[[outputs.influxdb]]
urls = "http://localhost:8086"
skip_database_creation = true
password = "monkey123"
# For InfluxDB OSS 2:
[[outputs.influxdb_v2]]
urls = ["http://127.0.0.1:8086"] # double check the port. could be 9999 if using OSS Beta
token = "replace_with_your_token"
organization = "your_username"
bucket = "replace_with_your_bucket_name"
# For InfluxDB Cloud 2:
[[outputs.influxdb_v2]]
# For AWS West (Oregon)
urls = ["https://us-west-2-1.aws.cloud2.influxdata.com"]
# Other Cloud URLs at https://v2.docs.influxdata.com/v2.0/reference/urls/#influxdb-cloud-urls
token = "replace_with_your_token"
organization = "[email protected]"
bucket = "replace_with_your_bucket_name"
```
## Secret-store secrets
In addition to, or instead of, environment variables you can use secret-stores
to fill in credentials or similar values. To do so, you need to configure one or more
secret-store plugin(s) and then reference the secret in your plugin
configurations. A reference to a secret is specified in form
`@{<secret store id>:<secret name>}`, where the `secret store id` is the unique
ID you defined for your secret-store and `secret name` is the name of the secret
to use.
**NOTE:** Both the `secret store id` and the `secret name` may only
consist of letters (both upper- and lowercase), numbers and underscores.
**Example**:
This example illustrates the use of secret-stores in plugins:
```toml
[global_tags]
user = "alice"
[[secretstores.os]]
id = "local_secrets"
[[secretstores.jose]]
id = "cloud_secrets"
path = "/etc/telegraf/secrets"
# Optional reference to another secret store to unlock this one.
password = "@{local_secrets:cloud_store_passwd}"
[[inputs.http]]
urls = ["http://server.company.org/metrics"]
username = "@{local_secrets:company_server_http_metric_user}"
password = "@{local_secrets:company_server_http_metric_pass}"
[[outputs.influxdb_v2]]
urls = ["https://us-west-2-1.aws.cloud2.influxdata.com"]
token = "@{cloud_secrets:influxdb_token}"
organization = "[email protected]"
bucket = "replace_with_your_bucket_name"
```
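Newer Telegraf releases also provide a `telegraf secrets` sub-command for
managing the secrets referenced above. The exact behavior depends on your
version; the store ID and key below simply reuse the names from the example:

```sh
# List the secret-stores defined in the config and the keys they hold
telegraf --config /etc/telegraf.conf secrets list

# Store a value under the "cloud_store_passwd" key of the "local_secrets" store
# (Telegraf typically prompts for the secret value)
telegraf --config /etc/telegraf.conf secrets set local_secrets cloud_store_passwd
```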
## Intervals
Intervals are durations of time and can be specified for supporting settings by
combining an integer value and time unit as a string value. Valid time units are
`ns`, `us` (or `µs`), `ms`, `s`, `m`, `h`.
```toml
[agent]
interval = "10s"
```
## Global Tags
Global tags can be specified in the `[global_tags]` table in key="value"
format. All metrics that are gathered will be tagged with the tags specified.
Global tags are overridden by tags set by plugins.
```toml
[global_tags]
dc = "us-east-1"
```
## Agent
The agent table configures Telegraf and the defaults used across all plugins;
an example `[agent]` table follows the list of settings below.
- **interval**: Default data collection [interval][] for all inputs.
- **round_interval**: Rounds collection interval to [interval][],
i.e., if interval="10s" then always collect on :00, :10, :20, etc.
- **metric_batch_size**:
Telegraf will send metrics to outputs in batches of at most
metric_batch_size metrics.
This controls the size of writes that Telegraf sends to output plugins.
- **metric_buffer_limit**:
Maximum number of unwritten metrics per output. Increasing this value
allows for longer periods of output downtime without dropping metrics at the
cost of higher maximum memory usage.
- **collection_jitter**:
Collection jitter is used to jitter the collection by a random [interval][].
Each plugin will sleep for a random time within jitter before collecting.
This can be used to avoid many plugins querying things like sysfs at the
same time, which can have a measurable effect on the system.
- **collection_offset**:
Collection offset is used to shift the collection by the given [interval][].
This can be used to avoid many plugins querying constrained devices
at the same time by manually scheduling them in time.
- **flush_interval**:
Default flushing [interval][] for all outputs. Maximum flush_interval will be
flush_interval + flush_jitter.
- **flush_jitter**:
Default flush jitter for all outputs. This jitters the flush [interval][]
by a random amount. This is primarily to avoid large write spikes for users
running a large number of telegraf instances, e.g., a jitter of 5s and interval
10s means flushes will happen every 10-15s.
- **precision**:
Collected metrics are rounded to the precision specified as an [interval][].
Precision will NOT be used for service inputs. It is up to each individual
service input to set the timestamp at the appropriate precision.
- **debug**:
Log at debug level.
- **quiet**:
Log only error level messages.
- **logtarget**:
Log target controls the destination for logs and can be one of "file",
"stderr" or, on Windows, "eventlog". When set to "file", the output file is
determined by the "logfile" setting.
- **logfile**:
Name of the file to be logged to when using the "file" logtarget. If set to
the empty string then logs are written to stderr.
- **logfile_rotation_interval**:
The logfile will be rotated after the time interval specified. When set to
0 no time based rotation is performed.
- **logfile_rotation_max_size**:
The logfile will be rotated when it becomes larger than the specified size.
When set to 0 no size based rotation is performed.
- **logfile_rotation_max_archives**:
Maximum number of rotated archives to keep, any older logs are deleted. If
set to -1, no archives are removed.
- **log_with_timezone**:
Pick a timezone to use when logging or type 'local' for local time. Example: 'America/Chicago'.
[See this page for options/formats.](https://socketloop.com/tutorials/golang-display-list-of-timezones-with-gmt)
- **hostname**:
Override the default hostname; if empty, os.Hostname() is used.
- **omit_hostname**:
If set to true, do not set the "host" tag in the Telegraf agent.
- **snmp_translator**:
Method of translating SNMP objects. Can be "netsnmp" (deprecated) which
translates by calling external programs `snmptranslate` and `snmptable`,
or "gosmi" which translates using the built-in gosmi library.
- **statefile**:
Name of the file to load the states of plugins from and store the states to.
If uncommented and not empty, this file will be used to save the state of
stateful plugins on termination of Telegraf. If the file exists on start,
the state in the file will be restored for the plugins.
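A sketch of an `[agent]` table combining several of these settings (the values
are illustrative, not recommendations):

```toml
[agent]
  interval = "10s"
  round_interval = true
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "5s"
  precision = "0s"
  logtarget = "file"
  logfile = "/var/log/telegraf/telegraf.log"
  logfile_rotation_interval = "24h"
  logfile_rotation_max_size = "10MB"
  logfile_rotation_max_archives = 5
  omit_hostname = false
```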
## Plugins
Telegraf plugins are divided into 4 types: [inputs][], [outputs][],
[processors][], and [aggregators][].
Unlike the `global_tags` and `agent` tables, any plugin can be defined
multiple times and each instance will run independently. This allows you to
have plugins defined with differing configurations as needed within a single
Telegraf process.
Each plugin has a unique set of configuration options; reference the
sample configuration for details. Additionally, several options are available
on any plugin depending on its type.
### Input Plugins
Input plugins gather and create metrics. They support both polling and
event-driven operation.
Parameters that can be used with any input plugin:
- **alias**: Name an instance of a plugin.
- **interval**:
Overrides the `interval` setting of the [agent][Agent] for the plugin. How
often to gather this metric. Normal plugins use a single global interval, but
if one particular input should be run less or more often, you can configure
that here.
- **precision**:
Overrides the `precision` setting of the [agent][Agent] for the plugin.
Collected metrics are rounded to the precision specified as an [interval][].
When this value is set on a service input, multiple events occurring at the
same timestamp may be merged by the output database.
- **collection_jitter**:
Overrides the `collection_jitter` setting of the [agent][Agent] for the
plugin. Collection jitter is used to jitter the collection by a random
[interval][].
- **collection_offset**:
Overrides the `collection_offset` setting of the [agent][Agent] for the
plugin. Collection offset is used to shift the collection by the given
[interval][].
- **name_override**: Override the base name of the measurement. (Default is
the name of the input).
- **name_prefix**: Specifies a prefix to attach to the measurement name.
- **name_suffix**: Specifies a suffix to attach to the measurement name.
- **tags**: A map of tags to apply to a specific input's measurements.
The [metric filtering][] parameters can be used to limit what metrics are
emitted from the input plugin.
#### Examples
Use the name_suffix parameter to emit measurements with the name `cpu_total`:
```toml
[[inputs.cpu]]
name_suffix = "_total"
percpu = false
totalcpu = true
```
Use the name_override parameter to emit measurements with the name `foobar`:
```toml
[[inputs.cpu]]
name_override = "foobar"
percpu = false
totalcpu = true
```
Emit measurements with two additional tags: `tag1=foo` and `tag2=bar`
> **NOTE**: With TOML, order matters. Parameters belong to the last defined
> table header, so place the `[inputs.cpu.tags]` table at the _end_ of the plugin
> definition.
```toml
[[inputs.cpu]]
percpu = false
totalcpu = true
[inputs.cpu.tags]
tag1 = "foo"
tag2 = "bar"
```
Alternatively, when using the inline table syntax, the tags do not need
to go at the end:
```toml
[[inputs.cpu]]
tags = {tag1 = "foo", tag2 = "bar"}
percpu = false
totalcpu = true
```
Utilize `name_override`, `name_prefix`, or `name_suffix` config options to
avoid measurement collisions when defining multiple plugins:
```toml
[[inputs.cpu]]
percpu = false
totalcpu = true
[[inputs.cpu]]
percpu = true
totalcpu = false
name_override = "percpu_usage"
fielddrop = ["cpu_time*"]
```
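The generic `alias` and per-plugin `interval` options described earlier can be
combined with any input in the same way; a minimal sketch (the 60s interval is
illustrative):

```toml
[[inputs.cpu]]
  alias = "cpu_totals"
  interval = "60s"
  percpu = false
  totalcpu = true
```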
### Output Plugins
Output plugins write metrics to a location. Outputs commonly write to
databases, network services, and messaging systems.
Parameters that can be used with any output plugin:
- **alias**: Name an instance of a plugin.
- **flush_interval**: The maximum time between flushes. Use this setting to
override the agent `flush_interval` on a per plugin basis.
- **flush_jitter**: The amount of time to jitter the flush interval. Use this
setting to override the agent `flush_jitter` on a per plugin basis.
- **metric_batch_size**: The maximum number of metrics to send at once. Use
this setting to override the agent `metric_batch_size` on a per plugin basis.
- **metric_buffer_limit**: The maximum number of unsent metrics to buffer.
Use this setting to override the agent `metric_buffer_limit` on a per plugin
basis.
- **name_override**: Override the original name of the measurement.
- **name_prefix**: Specifies a prefix to attach to the measurement name.
- **name_suffix**: Specifies a suffix to attach to the measurement name.
The [metric filtering][] parameters can be used to limit what metrics are
emitted from the output plugin.
#### Examples
Override flush parameters for a single output:
```toml
[agent]
flush_interval = "10s"
flush_jitter = "5s"
metric_batch_size = 1000
[[outputs.influxdb]]
urls = [ "http://example.org:8086" ]
database = "telegraf"
[[outputs.file]]
files = [ "stdout" ]
flush_interval = "1s"
flush_jitter = "1s"
metric_batch_size = 10
```
### Processor Plugins
Processor plugins perform processing tasks on metrics and are commonly used to
rename or apply transformations to metrics. Processors are applied after the
input plugins and before any aggregator plugins.
Parameters that can be used with any processor plugin:
- **alias**: Name an instance of a plugin.
- **order**: The order in which the processors are executed, starting with 1.
If this is not specified then processor execution order will be the order in
the config. Processors without "order" will take precedence over those
with a defined order.
The [metric filtering][] parameters can be used to limit what metrics are
handled by the processor. Excluded metrics are passed downstream to the next
processor.
#### Examples
If the order in which processors are applied matters, you must set `order` on all involved
processors:
```toml
[[processors.rename]]
order = 1
[[processors.rename.replace]]
tag = "path"
dest = "resource"
[[processors.strings]]
order = 2
[[processors.strings.trim_prefix]]
tag = "resource"
prefix = "/api/"
```
### Aggregator Plugins
Aggregator plugins produce new metrics after examining metrics over a time
period; as the name suggests, they are commonly used to produce new aggregates
such as mean/max/min metrics. Aggregators operate on metrics after any
processors have been applied.
Parameters that can be used with any aggregator plugin:
- **alias**: Name an instance of a plugin.
- **period**: The period on which to flush & clear each aggregator. All
metrics that are sent with timestamps outside of this period will be ignored
by the aggregator.
- **delay**: The delay before each aggregator is flushed. This controls
how long aggregators wait to receive metrics from input
plugins, in the case that aggregators are flushing and inputs are gathering
on the same interval.
- **grace**: The duration for which metrics will still be aggregated
by the plugin, even though they are outside of the aggregation period. This
is needed when the agent is expected to receive late metrics
and it is acceptable to roll them up into the next aggregation period.
- **drop_original**: If true, the original metric will be dropped by the
aggregator and will not get sent to the output plugins.
- **name_override**: Override the base name of the measurement. (Default is
the name of the input).
- **name_prefix**: Specifies a prefix to attach to the measurement name.
- **name_suffix**: Specifies a suffix to attach to the measurement name.
- **tags**: A map of tags to apply to the measurement - behavior varies based on aggregator.
The [metric filtering][] parameters can be used to limit what metrics are
handled by the aggregator. Excluded metrics are passed downstream to the next
aggregator.
#### Examples
Collect and emit the min/max of the system load1 metric every 30s, dropping
the originals.
```toml
[[inputs.system]]
fieldpass = ["load1"] # collects system load1 metric.
[[aggregators.minmax]]
period = "30s" # send & clear the aggregate every 30s.
drop_original = true # drop the original metrics.
[[outputs.file]]
files = ["stdout"]
```
Collect and emit the min/max of the swap metrics every 30s, dropping the
originals. The aggregator will not be applied to the system load metrics due
to the `namepass` parameter.
```toml
[[inputs.swap]]
[[inputs.system]]
fieldpass = ["load1"] # collects system load1 metric.
[[aggregators.minmax]]
period = "30s" # send & clear the aggregate every 30s.
drop_original = true # drop the original metrics.
namepass = ["swap"] # only "pass" swap metrics through the aggregator.
[[outputs.file]]
files = ["stdout"]
```
## Metric Filtering
Metric filtering can be configured per plugin on any input, output, processor,
and aggregator plugin. Filters fall under two categories: Selectors and
Modifiers.
### Selectors
Selector filters include or exclude entire metrics. When a metric is excluded
from an Input or an Output plugin, the metric is dropped. If a metric is
excluded from a Processor or Aggregator plugin, it skips the plugin and is
sent onwards to the next stage of processing.
- **namepass**:
An array of [glob pattern][] strings. Only metrics whose measurement name matches
a pattern in this list are emitted.
- **namedrop**:
The inverse of `namepass`. If a match is found the metric is discarded. This
is tested on metrics after they have passed the `namepass` test.
- **tagpass**:
A table mapping tag keys to arrays of [glob pattern][] strings. Only metrics
that contain a tag key in the table and a tag value matching one of its
patterns are emitted. This can use either the explicit table syntax (e.g.
a subsection using a `[...]` header) or the inline table syntax (e.g. an
inline table with `{...}`).
- **tagdrop**:
The inverse of `tagpass`. If a match is found the metric is discarded. This
is tested on metrics after they have passed the `tagpass` test.
> NOTE: Due to the way TOML is parsed, when using the explicit table
> syntax (with `[...]`) for `tagpass` and `tagdrop` parameters, they
> must be defined at the **end** of the plugin definition, otherwise subsequent
> plugin config options will be interpreted as part of the tagpass/tagdrop
> tables. This limitation does not apply when using the inline table
> syntax (`{...}`).
### Modifiers
Modifier filters remove tags and fields from a metric. If all fields are
removed the metric is removed.
- **fieldpass**:
An array of [glob pattern][] strings. Only fields whose field key matches a
pattern in this list are emitted.
- **fielddrop**:
The inverse of `fieldpass`. Fields with a field key matching one of the
patterns will be discarded from the metric. This is tested on metrics after
they have passed the `fieldpass` test.
- **taginclude**:
An array of [glob pattern][] strings. Only tags with a tag key matching one of
the patterns are emitted. In contrast to `tagpass`, which will pass an entire
metric based on its tag, `taginclude` removes all non-matching tags from the
metric. Any tag can be filtered including global tags and the agent `host`
tag.
- **tagexclude**:
The inverse of `taginclude`. Tags with a tag key matching one of the patterns
will be discarded from the metric. Any tag can be filtered including global
tags and the agent `host` tag.
### Filtering Examples
#### Using tagpass and tagdrop
```toml
[[inputs.cpu]]
percpu = true
totalcpu = false
fielddrop = ["cpu_time"]
# Don't collect CPU data for cpu6 & cpu7
[inputs.cpu.tagdrop]
cpu = [ "cpu6", "cpu7" ]
[[inputs.disk]]
[inputs.disk.tagpass]
# tagpass conditions are OR, not AND.
# If the (filesystem is ext4 or xfs) OR (the path is /opt or /home)
# then the metric passes
fstype = [ "ext4", "xfs" ]
# Globs can also be used on the tag values
path = [ "/opt", "/home*" ]
[[inputs.win_perf_counters]]
[[inputs.win_perf_counters.object]]
ObjectName = "Network Interface"
Instances = ["*"]
Counters = [
"Bytes Received/sec",
"Bytes Sent/sec"
]
Measurement = "win_net"
# Don't send metrics where the Windows interface name (instance) begins with isatap or Local
# This illustrates the inline table syntax
tagdrop = {instance = ["isatap*", "Local*"]}
```
#### Using fieldpass and fielddrop
```toml
# Drop all metrics for guest & steal CPU usage
[[inputs.cpu]]
percpu = false
totalcpu = true
fielddrop = ["usage_guest", "usage_steal"]
# Only store inode related metrics for disks
[[inputs.disk]]
fieldpass = ["inodes*"]
```
#### Using namepass and namedrop
```toml
# Drop all metrics about containers for kubelet
[[inputs.prometheus]]
urls = ["http://kube-node-1:4194/metrics"]
namedrop = ["container_*"]
# Only store rest client related metrics for kubelet
[[inputs.prometheus]]
urls = ["http://kube-node-1:4194/metrics"]
namepass = ["rest_client_*"]
```
#### Using taginclude and tagexclude
```toml
# Only include the "cpu" tag in the measurements for the cpu plugin.
[[inputs.cpu]]
percpu = true
totalcpu = true
taginclude = ["cpu"]
# Exclude the "fstype" tag from the measurements for the disk plugin.
[[inputs.disk]]
tagexclude = ["fstype"]
```
#### Metrics can be routed to different outputs using the metric name and tags
```toml
[[outputs.influxdb]]
urls = [ "http://localhost:8086" ]
database = "telegraf"
# Drop all measurements that start with "aerospike"
namedrop = ["aerospike*"]
[[outputs.influxdb]]
urls = [ "http://localhost:8086" ]
database = "telegraf-aerospike-data"
# Only accept aerospike data:
namepass = ["aerospike*"]
[[outputs.influxdb]]
urls = [ "http://localhost:8086" ]
database = "telegraf-cpu0-data"
# Only store measurements where the tag "cpu" matches the value "cpu0"
[outputs.influxdb.tagpass]
cpu = ["cpu0"]
```
#### Routing metrics to different outputs based on the input
Metrics are tagged with `influxdb_database` in the input, which is then used to
select the output. The tag is removed in the outputs before writing.
```toml
[[outputs.influxdb]]
urls = ["http://influxdb.example.com"]
database = "db_default"
[outputs.influxdb.tagdrop]
influxdb_database = ["*"]
[[outputs.influxdb]]
urls = ["http://influxdb.example.com"]
database = "db_other"
tagexclude = ["influxdb_database"]
[outputs.influxdb.tagpass]
influxdb_database = ["other"]
[[inputs.disk]]
[inputs.disk.tags]
influxdb_database = "other"
```
## Transport Layer Security (TLS)
Reference the detailed [TLS][] documentation.
[TOML]: https://github.com/toml-lang/toml#toml
[global tags]: #global-tags
[interval]: #intervals
[agent]: #agent
[plugins]: #plugins
[inputs]: #input-plugins
[outputs]: #output-plugins
[processors]: #processor-plugins
[aggregators]: #aggregator-plugins
[metric filtering]: #metric-filtering
[telegraf.conf]: /etc/telegraf.conf
[TLS]: /docs/TLS.md
[glob pattern]: https://github.com/gobwas/glob#syntax
[flags]: /docs/COMMANDS_AND_FLAGS.md