
[BUG] Cannot create monitors/alerts on OpenSearch-Dashboards:2.0.0 #254

Closed
dogukanl opened this issue May 30, 2022 · 16 comments
Labels: bug (Something isn't working)
dogukanl commented May 30, 2022

Cannot create monitors/alerts

With the new update I can create a new notification channel just fine, but when I try to create a monitor, I get the exceptions below. The alerting plugin is looking for the .opendistro-alerting-config index, which the cluster does not seem to create automatically the way it does other system indices.

To Reproduce
I am using docker to bring up my cluster:

version: '3.9'
services:
  opensearch-node1:
    image: opensearchproject/opensearch:2.0.0
    container_name: opensearch-node
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-node
      - discovery.type=single-node
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - opensearch-data:/usr/share/opensearch/data
    ports:
      - 9200:9200
      - 9600:9600
    networks:
      - opensearch-net
    
  opensearch-dashboards:
    image: opensearchproject/opensearch-dashboards:2.0.0
    container_name: opensearch-dashboards
    ports:
      - 5601:5601
    environment:
      OPENSEARCH_HOSTS: '["https://opensearch-node:9200"]'
    networks:
      - opensearch-net

volumes:
  opensearch-data:

networks:
  opensearch-net:

OpenSearch Version
opensearchproject/opensearch:2.0.0

Dashboards Version
opensearchproject/opensearch-dashboards:2.0.0

Plugins

  • opensearch-alerting
  • opensearch-anomaly-detection
  • opensearch-asynchronous-search
  • opensearch-cross-cluster-replication
  • opensearch-index-management
  • opensearch-job-scheduler
  • opensearch-knn
  • opensearch-ml
  • opensearch-notifications
  • opensearch-notifications-core
  • opensearch-observability
  • opensearch-performance-analyzer
  • opensearch-reports-scheduler
  • opensearch-security
  • opensearch-sql

Host/Environment

  • centos7
  • ubuntu 21.10
  • Google Chrome 101.0.4951.41 (Official Build) (64-bit)
  • Docker version: 20.10.16

Errors

When I click on "Add Trigger" under "Create Monitor"

opensearch-node          | [2022-05-30T14:16:17,825][ERROR][o.o.a.u.AlertingException] [opensearch-node] Alerting error: [.opendistro-alerting-config] IndexNotFoundException[no such index [.opendistro-alerting-config]]
opensearch-dashboards    | Alerting - MonitorService - searchMonitor: StatusCodeError: [alerting_exception] Configured indices are not found: [.opendistro-alerting-config]

When I try to create the monitor

opensearch-node          | [2022-05-30T14:16:17,870][WARN ][r.suppressed             ] [opensearch-node] path: /_plugins/_alerting/monitors/_execute, params: {dryrun=false}
opensearch-node          | java.lang.IllegalStateException: Can't get text on a START_OBJECT at 1:932
opensearch-node          | 	at org.opensearch.common.xcontent.json.JsonXContentParser.text(JsonXContentParser.java:97) ~[opensearch-x-content-2.0.0.jar:2.0.0]
opensearch-node          | 	at org.opensearch.alerting.model.action.ActionExecutionScope$Companion.parse(ActionExecutionScope.kt:60) ~[?:?]
opensearch-node          | 	at org.opensearch.alerting.model.action.ActionExecutionPolicy$Companion.parse(ActionExecutionPolicy.kt:61) ~[?:?]
opensearch-node          | 	at org.opensearch.alerting.model.action.Action$Companion.parse(Action.kt:156) ~[?:?]
opensearch-node          | 	at org.opensearch.alerting.model.QueryLevelTrigger$Companion.parseInner(QueryLevelTrigger.kt:162) ~[?:?]
opensearch-node          | 	at org.opensearch.alerting.model.Trigger$Companion.parse(Trigger.kt:50) ~[?:?]
opensearch-node          | 	at org.opensearch.alerting.model.Monitor$Companion.parse(Monitor.kt:279) ~[?:?]
opensearch-node          | 	at org.opensearch.alerting.resthandler.RestExecuteMonitorAction.prepareRequest$lambda-0(RestExecuteMonitorAction.kt:67) ~[?:?]
opensearch-node          | 	at org.opensearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:125) ~[opensearch-2.0.0.jar:2.0.0]
opensearch-node          | 	at org.opensearch.security.filter.SecurityRestFilter$1.handleRequest(SecurityRestFilter.java:128) ~[?:?]
opensearch-node          | 	at org.opensearch.rest.RestController.dispatchRequest(RestController.java:311) [opensearch-2.0.0.jar:2.0.0]
opensearch-node          | 	at org.opensearch.rest.RestController.tryAllHandlers(RestController.java:397) [opensearch-2.0.0.jar:2.0.0]
opensearch-node          | 	at org.opensearch.rest.RestController.dispatchRequest(RestController.java:240) [opensearch-2.0.0.jar:2.0.0]
opensearch-node          | 	at org.opensearch.security.ssl.http.netty.ValidatingDispatcher.dispatchRequest(ValidatingDispatcher.java:63) [opensearch-security-2.0.0.0.jar:2.0.0.0]
opensearch-node          | 	at org.opensearch.http.AbstractHttpServerTransport.dispatchRequest(AbstractHttpServerTransport.java:366) [opensearch-2.0.0.jar:2.0.0]
opensearch-node          | 	at org.opensearch.http.AbstractHttpServerTransport.handleIncomingRequest(AbstractHttpServerTransport.java:445) [opensearch-2.0.0.jar:2.0.0]
opensearch-node          | 	at org.opensearch.http.AbstractHttpServerTransport.incomingRequest(AbstractHttpServerTransport.java:356) [opensearch-2.0.0.jar:2.0.0]
opensearch-node          | 	at org.opensearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:55) [transport-netty4-client-2.0.0.jar:2.0.0]
opensearch-node          | 	at org.opensearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:41) [transport-netty4-client-2.0.0.jar:2.0.0]
opensearch-node          | 	at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at org.opensearch.http.netty4.Netty4HttpPipeliningHandler.channelRead(Netty4HttpPipeliningHandler.java:71) [transport-netty4-client-2.0.0.jar:2.0.0]
opensearch-node          | 	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:327) [netty-codec-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:299) [netty-codec-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) [netty-handler-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1371) [netty-handler-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1234) [netty-handler-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1283) [netty-handler-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:510) [netty-codec-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:449) [netty-codec-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:279) [netty-codec-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:623) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:586) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496) [netty-transport-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) [netty-common-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.73.Final.jar:4.1.73.Final]
opensearch-node          | 	at java.lang.Thread.run(Thread.java:833) [?:?]

UPDATE:

Upon further inspection, I've found that the ".opendistro-alerting-config" index not existing before a monitor is created does not seem to be the issue, at least as far as I can tell.

I've cloned every repo under opensearch-project and used ag to find where "system_indices" are mentioned, and I couldn't find a function, class, or piece of code that creates all system indices as a bootstrap step.

Long story short, I think the issue is with the API, because:

This is okay

It automatically creates the ".opendistro-alerting-config" index, and the monitor works.

POST _plugins/_alerting/monitors
{
  "name": "MyTestMonitor",
  "type": "monitor",
  "monitor_type": "cluster_metrics_monitor",
  "enabled": true,
  "schedule": {
    "period": {
      "interval": 1,
      "unit": "MINUTES"
    }
  },
  "inputs": [
    {
      "uri": {
        "api_type": "CLUSTER_HEALTH",
        "path": "_cluster/health/",
        "path_params": "",
        "url": "http://localhost:9200/_cluster/health/"
      }
    }
  ],
  "triggers": [
    {
      "name": "MyTestTrigger",
      "severity": "1",
      "condition": {
        "script": {
          "lang": "painless",
          "source": "return true"
        }
      },
      "actions": [
        {
          "name": "MyTestAction",
          "destination_id": "fEAKFYEBpplJRNa113rm",
          "subject_template": {
            "lang": "mustache",
            "source": "MyTestSubject"
          },
          "message_template": {
            "lang": "mustache",
            "source": "Monitor {{ctx.monitor.name}} just entered alert status. Please investigate the issue.\n  - Trigger: {{ctx.trigger.name}}\n  - Severity: {{ctx.trigger.severity}}\n  - Period start: {{ctx.periodStart}}\n  - Period end: {{ctx.periodEnd}}"
          },
          "throttle_enabled": false
        }
      ],
      "min_time_between_executions": null,
      "rolling_window_size": null
    }
  ]
}

This is NOT OKAY

POST _plugins/_alerting/monitors
{
  "name": "MyTestMonitor",
  "type": "monitor",
  "monitor_type": "cluster_metrics_monitor",
  "enabled": true,
  "schedule": {
    "period": {
      "interval": 1,
      "unit": "MINUTES"
    }
  },
  "inputs": [
    {
      "uri": {
        "api_type": "CLUSTER_HEALTH",
        "path": "_cluster/health/",
        "path_params": "",
        "url": "http://localhost:9200/_cluster/health/"
      }
    }
  ],
  "triggers": [
    {
      "name": "MyTestTrigger",
      "severity": "1",
      "condition": {
        "script": {
          "lang": "painless",
          "source": "return true"
        }
      },
      "actions": [
        {
          "name": "MyTestAction",
          "destination_id": "fEAKFYEBpplJRNa113rm",
          "subject_template": {
            "lang": "mustache",
            "source": "MyTestSubject"
          },
          "message_template": {
            "lang": "mustache",
            "source": "Monitor {{ctx.monitor.name}} just entered alert status. Please investigate the issue.\n  - Trigger: {{ctx.trigger.name}}\n  - Severity: {{ctx.trigger.severity}}\n  - Period start: {{ctx.periodStart}}\n  - Period end: {{ctx.periodEnd}}"
          },
          "throttle_enabled": false,
          "action_execution_policy": {
            "action_execution_scope": {
              "per_alert": {
                "actionable_alerts": [
                  {
                    "value": "DEDUPED",
                    "label": "De-duplicated"
                  },
                  {
                    "value": "NEW",
                    "label": "New"
                  }
                ]
              }
            }
          }
        }
      ],
      "min_time_between_executions": null,
      "rolling_window_size": null
    }
  ]
}

The "action_execution_policy" map somehow creates a problem and results in the following error:

{
  "error" : {
    "root_cause" : [
      {
        "type" : "illegal_state_exception",
        "reason" : "Can't get text on a START_OBJECT at 49:19"
      }
    ],
    "type" : "illegal_state_exception",
    "reason" : "Can't get text on a START_OBJECT at 49:19"
  },
  "status" : 500
}

Similarly

POST _plugins/_alerting/monitors
{
    "name": "MyTestMonitor",
    "type": "monitor",
    "monitor_type": "cluster_metrics_monitor",
    "enabled": true,
    "schedule": {
        "period": {
            "interval": 1,
            "unit": "MINUTES"
        }
    },
    "inputs": [
        {
            "uri": {
                "api_type": "CLUSTER_HEALTH",
                "path": "_cluster/health/",
                "path_params": "",
                "url": "http://localhost:9200/_cluster/health/"
            }
        }
    ],
    "triggers": [
        {
            "name": "MyTestTrigger",
            "severity": "1",
            "condition": {
                "script": {
                    "lang": "painless",
                    "source": "ctx.results[0].status == \"green\""
                }
            },
            "actions": [
                {
                    "name": "MyTestAction",
                    "destination_id": "fEAKFYEBpplJRNa113rm",
                    "subject_template": {
                        "lang": "mustache",
                        "source": "MyTestSubject"
                    },
                    "message_template": {
                        "lang": "mustache",
                        "source": "Monitor {{ctx.monitor.name}} just entered alert status. Please investigate the issue.\n  - Trigger: {{ctx.trigger.name}}\n  - Severity: {{ctx.trigger.severity}}\n  - Period start: {{ctx.periodStart}}\n  - Period end: {{ctx.periodEnd}}"
                    },
                    "throttle_enabled": false,
                    "action_execution_policy": {
                        "action_execution_scope": "per_execution"
                    }
                }
            ],
            "min_time_between_executions": null,
            "rolling_window_size": null
        }
    ]
}

This time "action_execution_scope" is the string "per_execution" instead of a "per_alert" object, and this version causes:

{
  "error" : {
    "root_cause" : [
      {
        "type" : "parsing_exception",
        "reason" : "Failed to parse object: expecting token of type [START_OBJECT] but found [VALUE_STRING]",
        "line" : 46,
        "col" : 51
      }
    ],
    "type" : "parsing_exception",
    "reason" : "Failed to parse object: expecting token of type [START_OBJECT] but found [VALUE_STRING]",
    "line" : 46,
    "col" : 51
  },
  "status" : 400
}

The last 2 erroneous requests are what OpenSearch Dashboards sends (minus the "ui_metadata" key) when you try to create a monitor.

I don't know whether this is Dashboards not adhering to an API change, or an internal problem where OpenSearch is meant to parse the "action_execution_policy" maps above. Just my two cents.
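For what it's worth, the second error ("expecting token of type [START_OBJECT] but found [VALUE_STRING]") suggests the backend parser wants "per_execution" to be an object rather than a string. A guess at the accepted shape, inferred from the error message and not verified against the 2.0.0 source:

```json
{
  "action_execution_policy": {
    "action_execution_scope": {
      "per_execution": {}
    }
  }
}
```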

UPDATE 2

I think I found the problem. OpenSearch expects a list of strings for the "actionable_alerts" key, whereas Dashboards sends a list of objects such as:

[
  {
    "value": "DEDUPED",
    "label": "De-duplicated"
  },
  {
    "value": "NEW",
    "label": "New"
  }
]
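Assuming that diagnosis is correct, here is a minimal sketch (a hypothetical helper, not actual Dashboards code) of rewriting the UI-style entries into the plain list of strings the backend expects:

```python
# Hypothetical workaround helper (not part of Dashboards or the backend):
# rewrite the UI-style [{"value": ..., "label": ...}] entries in
# "actionable_alerts" into the plain list of strings the backend expects.
import copy

def normalize_actionable_alerts(monitor):
    cleaned = copy.deepcopy(monitor)
    for trigger in cleaned.get("triggers", []):
        for action in trigger.get("actions", []):
            scope = (action.get("action_execution_policy", {})
                           .get("action_execution_scope"))
            if isinstance(scope, dict) and "per_alert" in scope:
                alerts = scope["per_alert"].get("actionable_alerts", [])
                # keep just the "value" strings, e.g. ["DEDUPED", "NEW"]
                scope["per_alert"]["actionable_alerts"] = [
                    a["value"] if isinstance(a, dict) else a
                    for a in alerts
                ]
    return cleaned
```

Running the failing monitor body through a transform like this before POSTing it would, in theory, sidestep the parse error.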
@dogukanl dogukanl added bug Something isn't working untriaged labels May 30, 2022
@dogukanl dogukanl changed the title [BUG] [BUG] Cannot create monitors/alerts on OpenSearch-Dashboards:2.0.0 May 30, 2022
kavilla (Member) commented May 30, 2022

Hello @dogukanl ,

Thanks for opening this, and thanks for onboarding onto 2.0.0 so quickly! This index is hidden by default, and I know the security demo makes these system indices by default [ref]. With that said, and looking at the stack trace, I wonder if the issue is coming from the OpenSearch Security plugin.

I will re-route this to the alerting plugin dashboards repo first and defer to the maintainers' insight on whether it belongs in their repo or the security plugin repo.

Thanks again!

@kavilla kavilla transferred this issue from opensearch-project/OpenSearch-Dashboards May 30, 2022
dogukanl (Author) commented May 31, 2022

Hey @kavilla,

Thanks for getting back to me! To give you a little more insight:

I have experienced the same issue on my test cluster, which I updated from 1.3.2 to 2.0.0 with no apparent problems by changing the container images and some breaking config settings in opensearch.yml. Prior to the update, there was already an .opendistro_security index created by securityadmin.sh, so neither the securityadmin script nor the demo script has run on that cluster since the update.

Thinking this could be caused by the update, I have deployed the fresh 2.0.0 cluster above and still got the same errors.

ravikiranvuppu commented:

Hello,

I have run into the same issue and am currently stuck with the same error no matter what I try. I tried multiple notification types, and it looks like the monitor doesn't like the action block when a notification channel is set; it won't let you save the config or test-fire the action with 'Send message', and keeps throwing the same java.lang.IllegalStateException: Can't get text on a START_OBJECT error.

kavilla (Member) commented Jun 3, 2022

@AWSHurneyt, @lezzago, it seems a duplicate came up: #260. This could be due to the backend plugin, but I was trying to see if any breaking changes were made in the configs and wasn't seeing anything. This wasn't caught in the sanity tests either (at least in the one that I checked prior to something delaying the initial release).

Do y'all have some insight here?

dogukanl (Author) commented Jun 3, 2022

@kavilla I think I identified the problem. I updated the issue itself rather than adding comments; you can check out the updates above.

lezzago (Member) commented Jun 3, 2022

Hi @dogukanl, thanks for bringing up this issue.

Looking more into this, it seems there is an issue with the frontend plugin. The action_execution_policy is only supported for bucket-level and document-level monitors, but the front end is requesting that information and passing it to the backend, which naturally rejects it. As a temporary workaround until this is fixed, please use the APIs to create/update query-level monitors or cluster metrics monitors that have actions configured for them.

Once we have a fix in place, we will provide a way to build the artifact here for you to update your OpenSearch 2.0 clusters with to no longer have this issue.
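As an illustration of that workaround, here is a minimal sketch (a hypothetical helper, not code from either plugin) that strips the unsupported field from a monitor body before you POST it to _plugins/_alerting/monitors yourself, assuming the body has been loaded as a Python dict:

```python
# Hypothetical helper (not from either plugin): remove the unsupported
# "action_execution_policy" field from every action so the monitor body
# can be POSTed to _plugins/_alerting/monitors directly.
import copy

def strip_action_execution_policy(monitor):
    cleaned = copy.deepcopy(monitor)
    for trigger in cleaned.get("triggers", []):
        for action in trigger.get("actions", []):
            # drop the field entirely; query-level and cluster metrics
            # monitors don't support it anyway
            action.pop("action_execution_policy", None)
    return cleaned
```

The deep copy keeps the original dict intact in case you still want to inspect what Dashboards generated.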

lezzago (Member) commented Jun 3, 2022

The attached plugin artifact (alertingDashboards-2.0.0.0.zip) can be used to reinstall the alerting plugin.

To reinstall the plugin, please follow these steps outlined here.

To build the plugin artifact manually, please follow these steps.

pietrogu commented:

The attached plugin artifact (alertingDashboards-2.0.0.0.zip) can be used to reinstall the alerting plugin.

To reinstall the plugin, please follow these steps outlined here.

To build the plugin artifact manually, please follow these steps.

Hi,

could you give the instruction to use this in a docker environment?

Thank you
Pietro

dogukanl (Author) commented Jun 13, 2022

@lezzago @AWSHurneyt

I still have the same issue creating a cluster metrics monitor after applying the patch.

My Dockerfile:

FROM opensearchproject/opensearch-dashboards:2.0.0
RUN bin/opensearch-dashboards-plugin remove alertingDashboards \
    && bin/opensearch-dashboards-plugin install https://github.com/opensearch-project/alerting-dashboards-plugin/files/8835333/alertingDashboards-2.0.0.0.zip

My compose file is the same as above, except I'm using the new image.

Edit: I accidentally clicked Close. Can you please re-open it?

@lezzago lezzago reopened this Jun 14, 2022
lezzago (Member) commented Jun 14, 2022

@pietrogu, you can reinstall it using: https://opensearch.org/docs/latest/opensearch/install/docker#customize-the-docker-image

@dogukanl, did you restart OpenSearch Dashboards? Also, you may need to refresh your browser cache for the fix to show up.

kavilla (Member) commented Jun 14, 2022

@pietrogu, you can reinstall it using: https://opensearch.org/docs/latest/opensearch/install/docker#customize-the-docker-image

@dogukanl, did you restart the OpenSearch-Dashboards?

I'm not positive how the zip was built, but we should also try opening it in incognito. The cache-busting mechanism might only be based on the OpenSearch Dashboards build number.

dogukanl (Author) commented:

@lezzago @kavilla

Okay, sorry, my bad. It still shows "Failed to load destinations" when you click "Add trigger" until after you've created your first monitor, so I assumed it didn't work. When I clicked Create, the monitor was created successfully, and subsequent create actions no longer show the error pop-up in the bottom right. The issue seems to be solved, except for the misleading error message.

pietrogu commented:

@lezzago the issue was accidentally closed by another user. Could you please re-open it?
Also, could you share the steps to implement the workaround in a Docker environment?

mateuszk-gain commented Jun 15, 2022

lezzago (Member) commented Jun 17, 2022

Hi everyone, we have now released OpenSearch 2.0.1, which includes the patch that fixes this problem. We recommend upgrading 2.0 clusters to 2.0.1, as there are other fixes mentioned here.

@lezzago lezzago closed this as completed Jun 17, 2022
anubisg1 commented Sep 23, 2022

I'm using 2.2.1 and I still see this.

The cluster was created using the opensearch-k8s-operator, installed at version 1.3.4, then upgraded to 2.2.1.

Looking at my browser, the reason seems to be this:

Request URL: https://opensearch.xxx.com/api/alerting/monitors/_search

and the response being

{"ok":false,"resp":"[alerting_exception] Configured indices are not found: [.opendistro-alerting-config]"}
