diff --git a/doc/additional-plugins.md b/doc/additional-plugins.md deleted file mode 100755 index 8bb52b536e9f9..0000000000000 --- a/doc/additional-plugins.md +++ /dev/null @@ -1,22 +0,0 @@ ---- -title: Additional Plugins -keywords: plugins, plugin, plug -last_updated: Feb 1, 2018 -sidebar: mydoc_sidebar -permalink: additionalplugins.html -disqus: false ---- - -## Third-party agents/plugins - -There may be agents, and plugins that are being developed and managed by other individuals/organizations. - -Below include agents and plugins that are not merged into this repository. -Take a look at them if you are interested and would like to help out. -* Agents - * NodeJS agent - https://github.com/peaksnail/pinpoint-node-agent -* Plugins - * Websphere - https://github.com/sjmittal/pinpoint/tree/cpu_monitoring_fix/plugins/websphere - * RocketMQ - https://github.com/ruizlake/pinpoint/tree/master/plugins/rocketmq - -If you are working on an agent or a plugin and want to add it to this list, please feel free to [contact us](mailto:roy.kim@navercorp.com) anytime. diff --git a/doc/alarm.md b/doc/alarm.md deleted file mode 100644 index 3c95e002c19c2..0000000000000 --- a/doc/alarm.md +++ /dev/null @@ -1,1106 +0,0 @@ ---- -title: Setting Alarm -keywords: alarm -last_updated: June 02, 2021 -sidebar: mydoc_sidebar -permalink: alarm.html -disqus: false ---- - -[English](#alarm) | [한글](#alarm-1) - -# Alarm - -Application's status is periodically checked and alarm is triggered if certain pre-configured conditions (rules) are satisfied. - -pinpoint-batch server checks every 3 minutes based on the last 5 minutes of data. And if the conditions are satisfied, it sends sms/email/webhook to the users listed in the user group. - -> If an email/sms/webhook is sent everytime when a threshold is exceeded, we felt that alarm message would be spammable.
-> Therefore we decided to gradually increase the interval between alarm transmissions.<br/>
-> ex) If an alarm condition persists, the transmission interval is doubled each time: 3 min -> 6 min -> 12 min -> 24 min
-
-> NOTICE!<br/>
->
-> batch was run in the background of the pinpoint-web server until v2.2.0. From v2.2.1 it is handled by the pinpoint-batch server.
-> Since the batch logic(code) in pinpoint-web will be deprecated in the future, we advise you to transfer the execution of batch to the pinpoint-batch server.
-
-## 1. User Guide
-
-1) Configuration menu
-![alarm_figure01.gif](images/alarm/alarm_figure01.gif)
-
-2) Registering users
-![alarm_figure02.gif](images/alarm/alarm_figure02.gif)
-
-3) Creating user groups
-![alarm_figure03.gif](images/alarm/alarm_figure03.gif)
-
-4) Adding users to user group
-![alarm_figure04.gif](images/alarm/alarm_figure04.gif)
-
-5) Setting alarm rules
-![alarm_figure05.gif](images/alarm/alarm_figure05.gif)
-
-**Alarm Rules**
-```
-SLOW COUNT
-    Triggered when the number of slow requests sent to the application exceeds the configured threshold.
-
-SLOW RATE
-    Triggered when the percentage(%) of slow requests sent to the application exceeds the configured threshold.
-
-ERROR COUNT
-    Triggered when the number of failed requests sent to the application exceeds the configured threshold.
-
-ERROR RATE
-    Triggered when the percentage(%) of failed requests sent to the application exceeds the configured threshold.
-
-TOTAL COUNT
-    Triggered when the number of all requests sent to the application exceeds the configured threshold.
-
-SLOW COUNT TO CALLEE
-    Triggered when the number of slow requests sent by the application exceeds the configured threshold.
-    You must specify the domain or the address(ip, port) in the configuration UI's "Note..." box
-    ex) www.naver.com, 127.0.0.1:8080
-
-SLOW RATE TO CALLEE
-    Triggered when the percentage(%) of slow requests sent by the application exceeds the configured threshold.
-    You must specify the domain or the address(ip, port) in the configuration UI's "Note..." box
-    ex) www.naver.com, 127.0.0.1:8080
-
-ERROR COUNT TO CALLEE
-    Triggered when the number of failed requests sent by the application exceeds the configured threshold. <br/>
-    You must specify the domain or the address(ip, port) in the configuration UI's "Note..." box
-    ex) www.naver.com, 127.0.0.1:8080
-
-ERROR RATE TO CALLEE
-    Triggered when the percentage(%) of failed requests sent by the application exceeds the configured threshold.
-    You must specify the domain or the address(ip, port) in the configuration UI's "Note..." box
-    ex) www.naver.com, 127.0.0.1:8080
-
-TOTAL COUNT TO CALLEE
-    Triggered when the number of all requests sent by the application exceeds the configured threshold.
-    You must specify the domain or the address(ip, port) in the configuration UI's "Note..." box
-    ex) www.naver.com, 127.0.0.1:8080
-
-HEAP USAGE RATE
-    Triggered when the application's heap usage(%) exceeds the configured threshold.
-
-JVM CPU USAGE RATE
-    Triggered when the application's CPU usage(%) exceeds the configured threshold.
-
-SYSTEM CPU USAGE RATE
-    Triggered when the system's CPU usage(%) exceeds the configured threshold.
-
-DATASOURCE CONNECTION USAGE RATE
-    Triggered when the application's DataSource connection usage(%) exceeds the configured threshold.
-
-FILE DESCRIPTOR COUNT
-    Triggered when the number of open file descriptors exceeds the configured threshold.
-```
-
-
-## 2. Configuration & Implementation
-
-Alarms generated by Pinpoint can be configured to be sent over email, sms, and webhook.
-
-Sending alarms over email is simple - you only need to configure the property file.
-Sending alarms over sms requires some implementation. Read on to find out how to do this.
-Sending alarms over webhook requires a webhook receiver service that accepts the webhook messages.
-Pinpoint does not provide this receiver service - you should implement it yourself, or you can use [the sample project](https://github.com/doll6777/slack-receiver).
-
-A few modifications are required in pinpoint-batch and pinpoint-web to use the alarm feature.
-Add the required implementations and settings to pinpoint-batch. <br/>
-Configure Pinpoint-web for user to set an alarm settings. - - -## 2.1 Configuration & Implementation in pinpoint-batch - -### 2.1.1) Email configuration, sms and webhook implementation - -**A. Email alarm service** - -To use the mailing feature, you need to configure the SMTP server information and information to be included in the email in the [batch-root.properties](https://github.com/pinpoint-apm/pinpoint/blob/master/batch/src/main/resources/batch-root.properties) file. - -``` -pinpoint.url= #pinpoint-web server url -alarm.mail.server.url= #smtp server address -alarm.mail.server.port= #smtp server port -alarm.mail.server.username= #username for smtp server authentication -alarm.mail.server.password= #password for smtp server authentication -alarm.mail.sender.address= #sender's email address - -ex) -pinpoint.url=http://pinpoint.com -alarm.mail.server.url=stmp.server.com -alarm.mail.server.port=587 -alarm.mail.server.username=pinpoint -alarm.mail.server.password=pinpoint -alarm.mail.sender.address=pinpoint_operator@pinpoint.com -``` - -The class that sends emails is already registered as Spring bean in [applicationContext-batch-sender.xml](https://github.com/pinpoint-apm/pinpoint/blob/master/batch/src/main/resources/applicationContext-batch-sender.xml). - -``` - - - - - - - - - - - - - - ${alarm.mail.transport.protocol:} - ${alarm.mail.smtp.port:} - ${alarm.mail.sender.address:} - ${alarm.mail.smtp.auth:false} - ${alarm.mail.smtp.starttls.enable:false} - ${alarm.mail.smtp.starttls.required:false} - ${alarm.mail.debug:false} - - - -``` - -If you would like to implement your own mail sender, simply replace the `SpringSmtpMailSender`, `JavaMailSenderImpl` beans above with your own implementation that implements `com.navercorp.pinpoint.web.alarm.MailSender` interface. - -``` -public interface MailSender { - void sendEmail(AlarmChecker checker, int sequenceCount); -} -``` - -**B. 
Sms alarm service** - -To send alarms over sms, you will need to implement your own sms sender by implementing `com.navercorp.pinpoint.batch.alarm.SmsSender` interface. -If there is no `SmsSender` implementation, then alarms will not be sent over sms. - -``` -public interface SmsSender { - public void sendSms(AlarmChecker checker, int sequenceCount); -} -``` - -**C. Webhook alarm service** - -Webhook alarm service is a feature that can transmit Pinpoint's alarm message through Webhook API. - -The webhook receiver service that receives the webhook message should be implemented by your own, or use [a sample project](https://github.com/doll6777/slack-receiver) provided (in this case Slack). - -The alarm messages(refer to as payloads) sent to webhook receiver have the different schema depending on the Alarm Checker type. -You can see the payload schemas in [3.Others - The Specification of webhook payloads and the examples](##3.Others). - -To enable the webhook alarm service, -You need to configure *webhook.enable* and *webhook.receiver.url* in [batch-root.properties](https://github.com/pinpoint-apm/pinpoint/blob/master/batch/src/main/resources/batch-root.properties) file. - -```properties -# webhook config -webhook.enable=true -webhook.receiver.url=http://www.webhookexample.com/alarm/ -``` - ->**NOTICE!**
->
->As the webhook alarm service has been available since Pinpoint 2.1.1, you should add the 'webhook_send' column to the 'alarm_rule' table in Pinpoint's MYSQL database if you upgraded from a release earlier than Pinpoint 2.1.1.
->
->SQL : ALTER TABLE `alarm_rule` ADD COLUMN `webhook_send` CHAR(1) DEFAULT NULL;
-
-The class in charge of sending the webhook is WebhookSenderImpl, which Pinpoint provides.
-
-The WebhookSender class is registered as a bean in [applicationContext-batch-sender.xml](https://github.com/pinpoint-apm/pinpoint/blob/master/batch/src/main/resources/applicationContext-batch-sender.xml) of pinpoint-batch.
-
-```xml
-
-
-
-
-```
-
-### 2.1.2) Configuring MYSQL
-
-**step 1**
-
-Prepare a MYSQL instance to persist the alarm service metadata.
-
-**step 2**
-
-Set up the MYSQL server and configure the connection information in the [jdbc-root.properties](https://github.com/pinpoint-apm/pinpoint/blob/master/batch/src/main/resources/jdbc-root.properties) file.
-
-```properties
-jdbc.driverClassName=com.mysql.jdbc.Driver
-jdbc.url=jdbc:mysql://localhost:13306/pinpoint
-jdbc.username=admin
-jdbc.password=admin
-```
-**step 3**
-
-Create the tables for the alarm service using the DDL files below.
-
-- [CreateTableStatement-mysql.sql](https://github.com/pinpoint-apm/pinpoint/blob/master/web/src/main/resources/sql/CreateTableStatement-mysql.sql)
-- [SpringBatchJobRepositorySchema-mysql.sql](https://github.com/pinpoint-apm/pinpoint/blob/master/web/src/main/resources/sql/SpringBatchJobRepositorySchema-mysql.sql)
-
-
-### 2.1.3) How to execute pinpoint-batch
-
-The pinpoint-batch project is based on Spring Boot and can be executed with the following command.
-After build, the executable file is placed under the target/deploy folder of the pinpoint-batch module. <br/>
- -``` -java -Dspring.profiles.active=XXXX -jar pinpoint-batch-VERSION.jar - -ex) java -Dspring.profiles.active=local -jar pinpoint-batch-2.1.1.jar -``` - -## 2.2 How to configure pinpoint-web - -### 2.2.1) Configuring MYSQL Server IP - -In order to persist user alarm settings, set the mysql connection information in [jdbc-root.properties](https://github.com/pinpoint-apm/pinpoint/blob/master/web/src/main/resources/jdbc-root.properties) file in pinpoint-web. - -``` -jdbc.driverClassName=com.mysql.jdbc.Driver -jdbc.url=jdbc:mysql://localhost:13306/pinpoint -jdbc.username=admin -jdbc.password=admin -``` - -### 2.2.2) Enabling Webhook Alarm Service - -Set *webhook.enable* in [batch-root.properties](https://github.com/pinpoint-apm/pinpoint/blob/master/web/src/main/resources/batch-root.properties) as *true* for user to configure the webhook alarm in *Alarm* menu. - -```properties -# webhook config -webhook.enable=true -``` - -As you enable the webhook alarm service, You can set the webhook as alarm type. See the below. - -![alarm_figure06](images/alarm/alarm_figure06.png) - -## 3. Others - -## 3.1 Configuration, Execution, Performance. - -**1) You may change the batch execution period by modifying the cron expression in *[applicationContext-batch-schedule.xml](https://github.com/pinpoint-apm/pinpoint/blob/master/batch/src/main/resources/applicationContext-batch-schedule.xml)* file** - -``` - - - -``` - -**2) Ways to improve alarm batch performance** -The alarm batch was designed to run concurrently. If you have a lot of applications with alarms registered, you may increase the size of the executor's thread pool by modifying `pool-size` in *[applicationContext-alarmJob.xml](https://github.com/pinpoint-apm/pinpoint/blob/master/batch/src/main/resources/job/applicationContext-alarmJob.xml)* file. - -Note that increasing this value will result in higher resource usage. 
-``` - -``` - -If there are a lot of alarms registered to applications, you may set the `alarmStep` registered in *[applicationContext-alarmJob.xml](https://github.com/pinpoint-apm/pinpoint/blob/master/batch/src/main/resources/job/applicationContext-alarmJob.xml)* file to run concurrently. -``` - - - - - - -``` - -**3) Use quickstart's web** -Pinpoint Web uses Mysql to persist users, user groups, and alarm configurations.
-However Quickstart uses MockDAO to reduce memory usage.
-Therefore if you want to use Mysql for Quickstart, please refer to Pinpoint Web's [applicationContext-dao-config.xml](https://github.com/pinpoint-apm/pinpoint/blob/master/web/src/main/resources/applicationContext-dao-config.xml) and [jdbc.properties](https://github.com/pinpoint-apm/pinpoint/blob/master/web/src/main/resources/jdbc.properties).
-
-## 3.2 Details on Webhook
-
-### 3.2.1) webhook receiver sample project
-
-[Slack-Receiver](https://github.com/doll6777/slack-receiver) is an example project of a webhook receiver.
-The project receives Pinpoint webhook alarms and forwards them to Slack as messages.
-For more details, see [the project repository](https://github.com/doll6777/slack-receiver).
-
-### 3.2.2) The Specification of webhook payloads and the examples
-
-**The Schemas of webhook payloads**
-
-Key
-
-| Name | Type | Description | Nullable |
-| ------------- | --------- | ------------------------------------------------------------ | -------- |
-| pinpointUrl | String | Pinpoint-web server URL | O |
-| batchEnv | String | Batch server environment variable | X |
-| applicationId | String | Alarm target application Id | X |
-| serviceType | String | Alarm target application service type | X |
-| userGroup | UserGroup | The UserGroup in the user group page | X |
-| checker | Checker | The checker info in the alarm setting page | X |
-| unit | String | The unit of the value detected by the checker | O |
-| threshold | Integer | The threshold of the value detected by the checker during a set time | X |
-| notes | String | The notes in the alarm setting page | O |
-| sequenceCount | Integer | The number of alarm occurrences | X |
-
-
-
-UserGroup
-
-| Name | Type | Description | Nullable |
-| ---------------- | ------------ | ---------------------------------------- | -------- |
-| userGroupId | String | The user group id in the user group page | X |
-| userGroupMembers | UserMember[] | Members info of a specific user group | X |
-
-
-
-Checker
-
-| Name | Type | Description | Nullable |
-| ------------- | -------------------------- | ------------------------------------------------------------ | -------- |
-| name | String | The name of the checker in the alarm setting page | X |
-| type | String | The type of checker abstracted by the value detected by the checker<br/>
"LongValueAlarmChecker" type is the abstracted checker type of “Slow Count”, “Slow Rate”, “Error Count”, “Error Rate”, “Total Count”, “Slow Count To Callee”, “Slow Rate To Callee”, “Error Count To Callee”, “Error Rate To Callee”, “Total Count to Callee”.
"LongValueAgentChecker" type is the abstracted checker type of "Heap Usage Rate", "Jvm Cpu Usage Rate", "System Cpu Usage Rate", "File Descriptor Count".
"BooleanValueAgentChecker" type is the abstracted checker type of "Deadlock or not".
"DataSourceAlarmListValueAgentChecker" type is the abstracted checker type of "DataSource Connection Usage Rate". | X | -| detectedValue | Integer or DetectedAgent[] | The value detected by checker
If “type” is “LongValueAlarmChecker”, “detectedValue” is Integer type.
If "type" is not "LongValueAlarmChecker", "detectedValue" is DetectedAgent[] type. | X |
-
-
-UserMember
-
-| Name | Type | Description | Nullable |
-| ---------------- | ------ | ------------------------- | -------- |
-| id | String | Member id | X |
-| name | String | Member name | X |
-| email | String | Member email | O |
-| department | String | Member department | O |
-| phoneNumber | String | Member phone number | O |
-| phoneCountryCode | String | Member phone country code | O |
-
-
-
-DetectedAgent
-
-| Name | Type | Description | Nullable |
-| ---------- | ----------------------------------------------- | ------------------------------------------------------------ | -------- |
-| agentId | String | Agent id detected by checker | X |
-| agentValue | Integer or<br/>
Boolean or
DataSourceAlarm[] | The value of Agent detected by checker
If “type” is “LongValueAgentChecker”, “agentValue” is Integer type.
If “type” is “BooleanValueAgentChecker”,“agentValue” is Boolean type.
If “type” is “DataSourceAlarmListValueAgentChecker”, “agentValue” is DataSourceAlarm[] type | X | - - - -DataSourceAlarm - -| Name | Type | Description | Nullable | -| --------------- | ------- | --------------------------------------------- | -------- | -| databaseName | String | The database name connected to application | X | -| connectionValue | Integer | The application's DataSource connection usage | X | - -**The Examples of the webhook Payload** - -LongValueAlarmChecker - -```json -{ - "pinpointUrl": "http://pinpoint.com", - "batchEnv": "release", - "applicationId": "TESTAPP", - "serviceType": "TOMCAT", - "userGroup": { - "userGroupId": "Group-1", - "userGroupMembers": [ - { - "id": "msk1111", - "name": "minsookim", - "email": "pinpoint@naver.com", - "department": "Platform", - "phoneNumber": "01012345678", - "phoneCountryCode": 82 - } - ] - }, - "checker": { - "name": "TOTAL COUNT", - "type": "LongValueAlarmChecker", - "detectedValue": 33 - }, - "unit": "", - "threshold": 15, - "notes": "Note Example", - "sequenceCount": 4 -} -``` - - - -LongValueAgentChecker - -```json -{ - "pinpointUrl": "http://pinpoint.com", - "batchEnv": "release", - "applicationId": "TESTAPP", - "serviceType": "TOMCAT", - "userGroup": { - "userGroupId": "Group-1", - "userGroupMembers": [ - { - "id": "msk1111", - "name": "minsookim", - "email": "pinpoint@naver.com", - "department": "Platform", - "phoneNumber": "01012345678", - "phoneCountryCode": 82 - } - ] - }, - "checker": { - "name": "HEAP USAGE RATE", - "type": "LongValueAgentChecker", - "detectedValue": [ - { - "agentId": "test-agent", - "agentValue": 8 - } - ] - }, - "unit": "%", - "threshold": 5, - "notes": "Note Example", - "sequenceCount": 4 -} -``` - - - -BooleanValueAgentChecker - -```json -{ - "pinpointUrl": "http://pinpoint.com", - "batchEnv": "release", - "applicationId": "TESTAPP", - "serviceType": "TOMCAT", - "userGroup": { - "userGroupId": "Group-1", - "userGroupMembers": [ - { - "id": "msk1111", - "name": "minsookim", 
- "email": "pinpoint@naver.com", - "department": "Platform", - "phoneNumber": "01012345678", - "phoneCountryCode": 82 - } - ] - }, - "checker": { - "name": "DEADLOCK OCCURRENCE", - "type": "BooleanValueAgentChecker", - "detectedValue": [ - { - "agentId": "test-agent", - "agentValue": true - } - ] - }, - "unit": "BOOLEAN", - "threshold": 1, - "notes": "Note Example", - "sequenceCount": 4 -} - - -``` - - - -DataSourceAlarmListValueAgentChecker - -```json -{ - "pinpointUrl": "http://pinpoint.com", - "batchEnv": "release", - "applicationId": "TESTAPP", - "serviceType": "TOMCAT", - "userGroup": { - "userGroupId": "Group-1", - "userGroupMembers": [ - { - "id": "msk1111", - "name": "minsookim", - "email": "pinpoint@naver.com", - "department": "Platform", - "phoneNumber": "01012345678", - "phoneCountryCode": 82 - } - ] - }, - "checker": { - "name": "DATASOURCE CONNECTION USAGE RATE", - "type": "DataSourceAlarmListValueAgentChecker", - "detectedValue": [ - { - "agentId": "test-agent", - "agentValue": [ - { - "databaseName": "test", - "connectionValue": 32 - } - ] - } - ] - }, - "unit": "%", - "threshold": 16, - "notes": "Note Example", - "sequenceCount": 4 -} -``` - - - ---- - -# Alarm - -pinpoint는 application 상태를 주기적으로 체크하여 application 상태의 수치가 임계치를 초과할 경우 사용자에게 알람을 전송하는 기능을 제공한다. - -application 상태 값이 사용자가 설정한 임계치를 초과하는지 판단하는 batch는 [pinpoint-batch](https://github.com/pinpoint-apm/pinpoint/tree/master/batch)에서 동작 한다. -alarm batch는 기본적으로 3분에 한번씩 동작이 된다. 최근 5분동안의 데이터를 수집해서 alarm 조건을 만족하면 user group에 속한 user 들에게 sms/email/webhook message를 전송한다. - -> 연속적으로 알람 조건이 임계치를 초과한 경우에 매번 sms/email/webhook를 전송하지 않는다.
-> 알람 조건이 만족할때마다 매번 sms/email/webhook이 전송되는것은 오히려 방해가 된다고 생각하기 때문이다. 그래서 연속해서 알람이 발생할 경우 sms/email/webhook 전송 주기가 점증적으로 증가된다.
-> 예) 알람이 연속해서 발생할 경우, 전송 주기는 3분 -> 6분 -> 12분 -> 24분 으로 증가한다. - -> **알림**
->
-> batch는 pinpoint 2.2.0 버전까지는 [pinpoint-web](https://github.com/pinpoint-apm/pinpoint/tree/master/web)에서 동작되었지만, 2.2.1 버전 부터는 batch가 [pinpoint-batch](https://github.com/pinpoint-apm/pinpoint/tree/master/batch)에서 동작되도록 로직을 분리했다.
-> 앞으로 pinpoint-web의 batch로직은 제거를 할 예정이므로, pinpoint-web에서 batch를 동작시키는 경우 pinpoint-batch에서 batch를 실행하도록 구성하는것을 추천한다. - -## 1. Alarm 기능 사용 방법 - -1) 설정 화면으로 이동 -![alarm_figure01.gif](images/alarm/alarm_figure01.gif) -2) user를 등록 -![alarm_figure02.gif](images/alarm/alarm_figure02.gif) -3) userGroup을 생성 -![alarm_figure03.gif](images/alarm/alarm_figure03.gif) -4) userGroup에 member를 등록 -![alarm_figure04.gif](images/alarm/alarm_figure04.gif) -5) alarm rule을 등록 -![alarm_figure05.gif](images/alarm/alarm_figure05.gif) - -alarm rule에 대한 설명은 아래를 참고하시오. - -``` - -SLOW COUNT - 외부에서 application을 호출한 요청 중에 외부서버로 응답을 늦게 준 요청의 개수가 임계치를 초과한 경우 알람이 전송된다. - -SLOW RATE - 외부에서 application을 호출한 요청 중에 외부서버로 응답을 늦게 준 요청의 비율(%)이 임계치를 초과한 경우 알람이 전송된다. - -ERROR COUNT - 외부에서 application을 호출한 요청 중에 에러가 발생한 요청의 개수가 임계치를 초과한 경우 알람이 전송된다. - -ERROR RATE - 외부에서 application을 호출한 요청 중에 에러가 발생한 요청의 비율(%)이 임계치를 초과한 경우 알람이 전송된다. - -TOTAL COUNT - 외부에서 application을 호출한 요청 개수가 임계치를 초과한 경우 알람이 전송된다. - -SLOW COUNT TO CALLEE - application 내에서 외부서버를 호출한 요청 중 slow 호출의 개수가 임계치를 초과한 경우 알람이 전송된다. - 설정 화면의 Note 항목에 외부서버의 도메인 이나 주소(ip, port)를 입력해야 합니다. ex) naver.com, 127.0.0.1:8080 - -SLOW RATE TO CALLEE - application 내에서 외부서버를 호출한 요청 중 slow 호출의 비율(%)이 임계치를 초과한 경우 알람이 전송된다. - 설정 화면의 Note 항목에 외부서버의 도메인 이나 주소(ip, port)를 입력해야 합니다. ex) naver.com, 127.0.0.1:8080 - -ERROR COUNT TO CALLEE - application 내에서 외부서버를 호출한 요청 중 error 가 발생한 호출의 개수가 임계치를 초과한 경우 알람이 전송된다. - 설정 화면의 Note 항목에 외부서버의 도메인 이나 주소(ip, port)를 입력해야 합니다. ex) naver.com, 127.0.0.1:8080 - -ERROR RATE TO CALLEE - application 내에서 외부서버를 호출한 요청 중 error 가 발생한 호출의 비율이 임계치를 초과한 경우 알람이 전송된다. - 설정 화면의 Note 항목에 외부서버의 도메인 이나 주소(ip, port)를 입력해야 합니다. ex) naver.com, 127.0.0.1:8080 - -TOTAL COUNT TO CALLEE - application 내에서 외부서버를 호출한 요청의 개수가 임계치를 초과한 경우 알람이 전송된다. - 설정 화면의 Note 항목에 외부서버의 도메인 이나 주소(ip, port)를 입력해야 합니다. ex) naver.com, 127.0.0.1:8080 - -HEAP USAGE RATE - heap의 사용률이 임계치를 초과한 경우 알람이 전송된다. - -JVM CPU USAGE RATE - applicaiton의 CPU 사용률이 임계치를 초과한 경우 알람이 전송된다. 
-
-SYSTEM CPU USAGE RATE
-    서버의 CPU 사용률이 임계치를 초과한 경우 알람이 전송된다.
-
-DATASOURCE CONNECTION USAGE RATE
-    application의 DataSource내의 Connection 사용률이 임계치를 초과한 경우 알람이 전송된다.
-
-FILE DESCRIPTOR COUNT
-    열려있는 File Descriptor 개수가 임계치를 초과한 경우 알람이 전송된다.
-```
-
-
-## 2. 설정 및 구현 방법
-
-알람을 전송하는 방법은 총 3가지로서, email, sms와 webhook으로 알람을 전송할 수 있다.<br/>
- -email 전송은 설정만 추가하면 기능을 사용할 수 있고, sms 전송을 하기 위해서는 직접 전송 로직을 구현해야 한다.
-webhook 전송은 webhook message를 받는 webhook receiver 서비스를 별도로 준비해야한다. -webhook receiver 서비스는 [샘플 프로젝트](https://github.com/doll6777/slack-receiver)를 사용하거나 직접 구현해야 한다. - -alarm 기능을 사용하려면 pinpoint-batch와 pinpoint-web를 수정해야한다. -pinpoint-batch에는 alarm batch 동작을 위해서 설정 및 구현체를 추가해야 한다. -pinpoint-web에는 사용자가 알람을 추가할 수 있도록 설정해야한다. - -## 2.1 pinpoint-batch 설정 및 구현 방법 - -### 2.1.1) email/sms/webhook 전송 설정 및 구현 - -**A. email 전송** - -email 전송 기능을 사용하기 위해서 [batch-root.properties](https://github.com/pinpoint-apm/pinpoint/blob/master/batch/src/main/resources/batch-root.properties)파일에 smtp 서버 정보와 email에 포함될 정보들을 설정해야 한다. - -``` -pinpoint.url= #pinpoint-web 서버의 url -alarm.mail.server.url= #smtp 서버 주소 -alarm.mail.server.port= #smtp 서버 port -alarm.mail.server.username= #smtp 인증을 위한 userName -alarm.mail.server.password= #smtp 인증을 위한 password -alarm.mail.sender.address= # 송신자 email - -ex) -pinpoint.url=http://pinpoint.com -alarm.mail.server.url=stmp.server.com -alarm.mail.server.port=587 -alarm.mail.server.username=pinpoint -alarm.mail.server.password=pinpoint -alarm.mail.sender.address=pinpoint_operator@pinpoint.com -``` - -참고로
-[applicationContext-batch-sender.xml](https://github.com/pinpoint-apm/pinpoint/blob/master/batch/src/main/resources/applicationContext-batch-sender.xml) 파일에 email을 전송하는 class가 bean으로 등록 되어있다. - -``` - - - - - - - - - - - - - - ${alarm.mail.transport.protocol:} - ${alarm.mail.smtp.port:} - ${alarm.mail.sender.address:} - ${alarm.mail.smtp.auth:false} - ${alarm.mail.smtp.starttls.enable:false} - ${alarm.mail.smtp.starttls.required:false} - ${alarm.mail.debug:false} - - - -``` - -만약 email 전송 로직을 직접 구현하고 싶다면 위의 SpringSmtpMailSender, JavaMailSenderImpl bean 선언을 제거하고 com.navercorp.pinpoint.web.alarm.MailSender interface를 구현해서 bean을 등록하면 된다. - -``` -public interface MailSender { - void sendEmail(AlarmChecker checker, int sequenceCount); -} -``` - -**B. sms 전송** - -sms 전송 기능을 사용 하려면 com.navercorp.pinpoint.batch.alarm.SmsSender interface를 구현하고 bean으로 등록해야 한다. -SmsSender 구현 class가 없는 경우 sms는 전송되지 않는다. - -``` -public interface SmsSender { - public void sendSms(AlarmChecker checker, int sequenceCount); -} -``` - -**C. webhook 전송** - -Webhook 전송 기능은 Pinpoint의 Alarm message를 Webhook API로 전송 할 수 있는 기능이다. - -**webhook message를 전송받는 webhook receiver 서비스는 [샘플 프로젝트](https://github.com/doll6777/slack-receiver)를 사용하거나 직접 구현해야 한다.** -Webhook Receiver 서버에 전송되는 Alarm message(이하 payload)는 Alarm Checker 타입에 따라 스키마가 다르다. -Checker 타입에 따른 payload 스키마는 [**3.기타** - webhook 페이로드 스키마 명세, 예시](##3.기타)에서 설명한다. - -webhook 기능을 활성화 하기위해서, -[batch-root.properties](https://github.com/pinpoint-apm/pinpoint/blob/master/batch/src/main/resources/batch-root.properties) 파일에 Webhook 전송 여부(webhook.enable)와 receiver 서버 정보(webhook.receiver.url)를 설정한다. - -```properties -# webhook config -webhook.enable=true -webhook.receiver.url=http://www.webhookexample.com/alarm/ -``` - - ->**알림**
->webhook 기능이 추가되면서 mysql 테이블 스키마가 수정되었기 때문에, Pinpoint 2.1.1 미만 버전에서 2.1.1 버전 이상으로 업그레이드한 경우 Mysql의 'alarm_rule' 테이블에 'webhook_send' 컬럼을 추가해야한다. -> ->SQL : ALTER TABLE `alarm_rule` ADD COLUMN `webhook_send` CHAR(1) DEFAULT NULL; - -참고로
-Webhook을 전송하는 클래스는 Pinpoint가 제공하는 WebhookSenderImpl가 담당한다. -WebhookSender 클래스는 Pinpoint-batch의 [applicationContext-batch-sender.xml](https://github.com/pinpoint-apm/pinpoint/blob/master/batch/src/main/resources/applicationContext-batch-sender.xml) 파일에 bean으로 등록 되어있다. - -```xml - - - - - -``` - -### 2.1.2) MYSQL 서버 IP 주소 설정 & table 생성 - -**step 1** - -알람에 관련된 데이터를 저장하기 위해 Mysql 서버를 준비한다. - -**step 2** - -mysql 접근을 위해서 pinpoint-batch의 [jdbc-root.properties](https://github.com/pinpoint-apm/pinpoint/blob/master/batch/src/main/resources/jdbc-root.properties) 파일에 접속 정보를 설정한다. - -```properties -jdbc.driverClassName=com.mysql.jdbc.Driver -jdbc.url=jdbc:mysql://localhost:13306/pinpoint -jdbc.username=admin -jdbc.password=admin -``` -**step 3** - -mysql에 Alarm 기능에 필요한 table을 생성한다. table 스키마는 아래 파일을 참조한다. -- *[CreateTableStatement-mysql.sql](https://github.com/pinpoint-apm/pinpoint/blob/master/web/src/main/resources/sql/CreateTableStatement-mysql.sql)* -- *[SpringBatchJobRepositorySchema-mysql.sql](https://github.com/pinpoint-apm/pinpoint/blob/master/web/src/main/resources/sql/SpringBatchJobRepositorySchema-mysql.sql)* - - -### 2.1.3) pinpoint-batch 실행 방법 - -pinpoint-batch 프로젝트는 spring boot기반으로 되어있고 아래와 같은 명령어로 실행하면 된다. -빌드후 실행파일은 pinpoint-batch 모듈의 target/deploy 폴더 하위에서 확인할 수 있다. - -``` -java -Dspring.profiles.active=XXXX -jar pinpoint-batch-VERSION.jar - -ex) java -Dspring.profiles.active=local -jar pinpoint-batch-2.1.1.jar -``` - -## 2.2 pinpoint-web 설정 방법 - - -### 2.2.1) MYSQL 서버 IP 주소 설정 -사용자 알람 설정을 저장하기 위해서 pinpoint-web의 [jdbc-root.properties](https://github.com/pinpoint-apm/pinpoint/blob/master/web/src/main/resources/jdbc-root.properties) 파일에 mysql 접속 정보를 설정한다. 
- -``` -jdbc.driverClassName=com.mysql.jdbc.Driver -jdbc.url=jdbc:mysql://localhost:13306/pinpoint -jdbc.username=admin -jdbc.password=admin -``` - - -### 2.2.2) webhook 기능 활성화 - -사용자가 알람 설정에 webhook 기능을 적용할수 있도록 [batch-root.properties](https://github.com/pinpoint-apm/pinpoint/blob/master/web/src/main/resources/batch-root.properties) 파일에 webhook 기능을 활성화한다. - -```properties -# webhook config -webhook.enable=true -``` - -webhook 기능을 활성화하면, 아래 그림처럼 알람 설정 화면에서 webhook을 알람 타입으로 선택할 수 있다. - -![alarm_figure06](images/alarm/alarm_figure06.png) - -## 3. 기타 - -## 3.1 설정, 실행, 성능 - -**1) batch의 동작 주기를 조정하고 싶다면 *[applicationContext-batch-schedule.xml](https://github.com/pinpoint-apm/pinpoint/blob/master/batch/src/main/resources/applicationContext-batch-schedule.xml)* 파일의 cron expression을 수정하면 된다.** -``` - - - -``` - -**2) alarm batch 성능을 높이는 방법은 다음과 같다.** -alarm batch 성능 튜닝을 위해서 병렬로 동작이 가능하도록 구현을 해놨다. -그래서 아래에서 언급된 조건에 해당하는 경우 설정값을 조정한다면 성능을 향상 시킬수 있다. 단 병렬성을 높이면 리소스의 사용률이 높아지는것은 감안해야한다. - -alarm이 등록된 application의 개수가 많다면 *[applicationContext-alarmJob.xml](https://github.com/pinpoint-apm/pinpoint/blob/master/batch/src/main/resources/job/applicationContext-alarmJob.xml)* 파일의 poolTaskExecutorForPartition의 pool size를 늘려주면 된다. - -``` - -``` - -application 각각마다 등록된 alarm의 개수가 많다면 *[applicationContext-alarmJob.xml](https://github.com/pinpoint-apm/pinpoint/blob/master/batch/src/main/resources/job/applicationContext-alarmJob.xml)* 파일에 선언된 alarmStep이 병렬로 동작되도록 설정하면 된다. -``` - - - - - - -``` - -**3) quickstart web을 사용한다면.** -pinpoint web은 mockDAO를 사용하기 때문에 pinpont web의 설정들을 참고해서 기능을 사용해야한다. -[applicationContext-dao-config.xml](https://github.com/pinpoint-apm/pinpoint/blob/master/web/src/main/resources/applicationContext-dao-config.xml), [jdbc.properties](https://github.com/pinpoint-apm/pinpoint/blob/master/web/src/main/resources/jdbc.properties). 
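참고로, webhook receiver가 받는 payload는 아래 3.2.2의 스키마처럼 checker의 type에 따라 detectedValue의 형태가 달라진다. 이를 처리하는 간단한 분기 예시는 다음과 같다(함수 이름과 출력 형식은 설명을 위한 가상의 예시이며, Pinpoint가 제공하는 코드가 아니다).

```python
import json

# 설명용 가상 예시: 3.2.2의 payload 스키마에 따라 checker.type으로 분기한다.
def summarize_alarm(payload: dict) -> str:
    checker = payload["checker"]
    app = payload["applicationId"]
    if checker["type"] == "LongValueAlarmChecker":
        # application 단위 checker: detectedValue는 정수 하나
        return f'{app}: {checker["name"]} = {checker["detectedValue"]} (threshold {payload["threshold"]})'
    # agent 단위 checker: detectedValue는 agent별 값의 배열
    agents = ", ".join(f'{d["agentId"]}={d["agentValue"]}' for d in checker["detectedValue"])
    return f'{app}: {checker["name"]} [{agents}]'

sample = json.loads('{"applicationId": "TESTAPP", "threshold": 15, '
                    '"checker": {"name": "TOTAL COUNT", "type": "LongValueAlarmChecker", "detectedValue": 33}}')
print(summarize_alarm(sample))  # TESTAPP: TOTAL COUNT = 33 (threshold 15)
```

LongValueAlarmChecker 계열은 정수 하나, 그 외 agent 계열 checker는 agent별 값의 배열이라는 점만 처리하면 된다.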
## 3.2 webhook 상세

### 3.2.1 webhook receiver sample project

**webhook receiver 프로젝트 예시**

[Slack-Receiver](https://github.com/doll6777/slack-receiver)는 webhook receiver의 예제 프로젝트이다.
이 프로젝트는 Pinpoint의 webhook 알람을 받아서 Slack으로 메시지를 전송할 수 있는, 스프링 부트로 구현된 서비스이다.
이 프로젝트의 자세한 사항은 [해당 GitHub 저장소](https://github.com/doll6777/slack-receiver)를 참고하면 된다.

### 3.2.2 webhook 페이로드 스키마 및 예시

**페이로드 스키마**

Key

| Name | Type | Description | Nullable |
| ------------- | --------- | ----------------------------------------- | -------- |
| pinpointUrl | String | Pinpoint-web의 서버 URL 주소 | O |
| batchEnv | String | Batch 서버의 환경 변수 | X |
| applicationId | String | 타겟 애플리케이션 ID | X |
| serviceType | String | 타겟 애플리케이션 서비스 타입 | X |
| userGroup | UserGroup | 유저 그룹 페이지의 유저 그룹 | X |
| checker | Checker | alarm 설정 페이지의 checker 정보 | X |
| unit | String | checker가 감지한 값의 단위 | O |
| threshold | Integer | 설정된 시간동안 체커가 감지한 값의 임계치 | X |
| notes | String | 알람 설정 페이지의 notes | O |
| sequenceCount | Integer | 알람 발생 횟수 | X |

UserGroup

| Name | Type | Description | Nullable |
| ---------------- | ------------ | ------------------------------- | -------- |
| userGroupId | String | 유저 그룹 페이지의 유저 그룹 ID | X |
| userGroupMembers | UserMember[] | 특정 유저 그룹의 멤버 정보 | X |

Checker

| Name | Type | Description | Nullable |
| ------------- | -------------------------- | ------------------------------------------------------------ | -------- |
| name | String | 알람 설정 페이지의 checker 이름 | X |
| type | String | 체커가 감지한 값의 추상 타입, 다음 중 하나에 해당됨<br>"LongValueAlarmChecker" 타입은 "Slow Count", "Slow Rate", "Error Count", "Error Rate", "Total Count", "Slow Count To Callee", "Slow Rate To Callee", "Error Count To Callee", "Error Rate To Callee", "Total Count To Callee"의 추상 타입이다.<br>"LongValueAgentChecker" 타입은 "Heap Usage Rate", "Jvm Cpu Usage Rate", "System Cpu Usage Rate", "File Descriptor Count"의 추상 타입이다.<br>"BooleanValueAgentChecker" 타입은 "Deadlock or not"의 추상 타입이다.<br>"DataSourceAlarmListValueAgentChecker" 타입은 "DataSource Connection Usage Rate"의 추상 타입이다. | X |
| detectedValue | Integer or DetectedAgent[] | Checker가 감지한 값<br>"LongValueAlarmChecker" 타입은 "detectedValue"가 Integer 타입이다.<br>"LongValueAlarmChecker"가 아닌 타입은 "detectedValue"가 DetectedAgent[] 타입이다. | X |

UserMember

| Name | Type | Description | Nullable |
| ---------------- | ------ | ------------------------- | -------- |
| id | String | 멤버의 id | X |
| name | String | 멤버의 name | X |
| email | String | 멤버의 email | O |
| department | String | 멤버의 department | O |
| phoneNumber | String | 멤버의 phone number | O |
| phoneCountryCode | String | 멤버의 phone country code | O |

DetectedAgent

| Name | Type | Description | Nullable |
| ---------- | ----------------------------------------------- | ------------------------------------------------------------ | -------- |
| agentId | String | Checker가 감지한 에이전트 ID | X |
| agentValue | Integer or<br>Boolean or<br>DataSourceAlarm[] | 체커가 감지한 에이전트의 값<br>"LongValueAgentChecker" 타입은 "agentValue"가 Integer 타입이다.<br>"BooleanValueAgentChecker" 타입은 "agentValue"가 Boolean 타입이다.<br>"DataSourceAlarmListValueAgentChecker" 타입은 "agentValue"가 DataSourceAlarm[] 타입이다. | X |

DataSourceAlarm

| Name | Type | Description | Nullable |
| --------------- | ------- | ---------------------------------------------- | -------- |
| databaseName | String | 애플리케이션에 접속한 데이터베이스 이름 | X |
| connectionValue | Integer | Application의 DataSource 내의 Connection 사용률 | X |

**webhook Payload 예제**

LongValueAlarmChecker

```json
{
    "pinpointUrl": "http://pinpoint.com",
    "batchEnv": "release",
    "applicationId": "TESTAPP",
    "serviceType": "TOMCAT",
    "userGroup": {
        "userGroupId": "Group-1",
        "userGroupMembers": [
            {
                "id": "msk1111",
                "name": "minsookim",
                "email": "pinpoint@naver.com",
                "department": "Platform",
                "phoneNumber": "01012345678",
                "phoneCountryCode": 82
            }
        ]
    },
    "checker": {
        "name": "TOTAL COUNT",
        "type": "LongValueAlarmChecker",
        "detectedValue": 33
    },
    "unit": "",
    "threshold": 15,
    "notes": "Note Example",
    "sequenceCount": 4
}
```

LongValueAgentChecker

```json
{
    "pinpointUrl": "http://pinpoint.com",
    "batchEnv": "release",
    "applicationId": "TESTAPP",
    "serviceType": "TOMCAT",
    "userGroup": {
        "userGroupId": "Group-1",
        "userGroupMembers": [
            {
                "id": "msk1111",
                "name": "minsookim",
                "email": "pinpoint@naver.com",
                "department": "Platform",
                "phoneNumber": "01012345678",
                "phoneCountryCode": 82
            }
        ]
    },
    "checker": {
        "name": "HEAP USAGE RATE",
        "type": "LongValueAgentChecker",
        "detectedValue": [
            {
                "agentId": "test-agent",
                "agentValue": 8
            }
        ]
    },
    "unit": "%",
    "threshold": 5,
    "notes": "Note Example",
    "sequenceCount": 4
}
```

BooleanValueAgentChecker

```json
{
    "pinpointUrl": "http://pinpoint.com",
    "batchEnv": "release",
    "applicationId": "TESTAPP",
    "serviceType": "TOMCAT",
    "userGroup": {
        "userGroupId": "Group-1",
        "userGroupMembers": [
            {
                "id": "msk1111",
                "name": "minsookim",
                "email": "pinpoint@naver.com",
                "department":
"Platform",
                "phoneNumber": "01012345678",
                "phoneCountryCode": 82
            }
        ]
    },
    "checker": {
        "name": "DEADLOCK OCCURRENCE",
        "type": "BooleanValueAgentChecker",
        "detectedValue": [
            {
                "agentId": "test-agent",
                "agentValue": true
            }
        ]
    },
    "unit": "BOOLEAN",
    "threshold": 1,
    "notes": "Note Example",
    "sequenceCount": 4
}
```

DataSourceAlarmListValueAgentChecker

```json
{
    "pinpointUrl": "http://pinpoint.com",
    "batchEnv": "release",
    "applicationId": "TESTAPP",
    "serviceType": "TOMCAT",
    "userGroup": {
        "userGroupId": "Group-1",
        "userGroupMembers": [
            {
                "id": "msk1111",
                "name": "minsookim",
                "email": "pinpoint@naver.com",
                "department": "Platform",
                "phoneNumber": "01012345678",
                "phoneCountryCode": 82
            }
        ]
    },
    "checker": {
        "name": "DATASOURCE CONNECTION USAGE RATE",
        "type": "DataSourceAlarmListValueAgentChecker",
        "detectedValue": [
            {
                "agentId": "test-agent",
                "agentValue": [
                    {
                        "databaseName": "test",
                        "connectionValue": 32
                    }
                ]
            }
        ]
    },
    "unit": "%",
    "threshold": 16,
    "notes": "Note Example",
    "sequenceCount": 4
}
```
diff --git a/doc/application-inspector.md b/doc/application-inspector.md
deleted file mode 100644
index bb81b185bbacf..0000000000000
--- a/doc/application-inspector.md
+++ /dev/null
@@ -1,212 +0,0 @@
---
title: How to use Application Inspector
keywords: inspector, how, how-to
last_updated: Feb 1, 2018
sidebar: mydoc_sidebar
permalink: applicationinspector.html
disqus: false
---

[English](#application-inspector) | [한글](#application-inspector-1)
# Application Inspector

## 1. Introduction

Application inspector provides an aggregate view of all agents' resource data (cpu, memory, tps, datasource connection count, etc.) registered under the same application name. A separate view is provided for the application inspector, with stat charts similar to the agent inspector.
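Conceptually, the application-level rollup works like the minimal sketch below. This is an illustrative assumption, not the actual flink streaming job: per-agent samples at one point in time are reduced to an application-wide Avg/Min/Max, remembering which agent produced each extreme.

```python
# Illustrative sketch of application-level stat aggregation (not the real flink job):
# per-agent values are rolled up into Avg/Min/Max, keeping the id of the agent
# that produced the smallest/largest value.
def aggregate(samples):
    """samples: dict mapping agentId -> resource value (e.g. heap usage)."""
    min_agent = min(samples, key=samples.get)
    max_agent = max(samples, key=samples.get)
    return {
        "avg": sum(samples.values()) / len(samples),
        "min": (min_agent, samples[min_agent]),
        "max": (max_agent, samples[max_agent]),
    }

stats = aggregate({"agent-a": 120, "agent-b": 300, "agent-c": 180})
print(stats)  # avg is 200.0; min comes from agent-a, max from agent-b
```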
To access application inspector, click on the application inspector menu on the left side of the screen.

- 1 : application inspector menu, 2 : application stat data
![inspector_view.jpg](images/applicationInspector/inspector_view.jpg)

The Heap Usage chart above, for example, shows the average (Avg), minimum (Min), and maximum (Max) heap usage of the agents registered under the same application name, along with the id of the agent that had the smallest/greatest heap usage at a certain point in time. The application inspector also provides the other statistics found in the agent inspector in a similar fashion.

![graph.jpg](images/applicationInspector/graph.jpg)


Application inspector requires [flink](https://flink.apache.org) and [zookeeper](https://zookeeper.apache.org/). Please read on for more detail.

## 2. Architecture

![execute_flow.jpg](images/applicationInspector/execute_flow.jpg)

**A.** Run a streaming job on [flink](https://flink.apache.org).
**B.** The taskmanager server is registered to zookeeper as a data node once the job starts.
**C.** The Collector obtains the flink server info from zookeeper, creates a tcp connection to it, and starts sending agent data.
**D.** The flink server aggregates the data sent by the Collector and stores them into hbase.

## 3. Configuration

In order to enable application inspector, you will need to do the following and run Pinpoint.

**A.** Create the **ApplicationStatAggre** table (refer to the [create table script](https://github.com/pinpoint-apm/pinpoint/tree/master/hbase/scripts)), which stores application stat data.

**B.** Configure the zookeeper address in [Pinpoint-flink.properties](https://github.com/pinpoint-apm/pinpoint/blob/master/flink/src/main/resources/pinpoint-flink.properties), which will be used to store flink's taskmanager server information.
```properties
 flink.cluster.enable=true
 flink.cluster.zookeeper.address=YOUR_ZOOKEEPER_ADDRESS
 flink.cluster.zookeeper.sessiontimeout=3000
 flink.cluster.zookeeper.retry.interval=5000
 flink.cluster.tcp.port=19994
```

**C.** Configure the job execution type and the number of listeners that receive data from the Collector in [Pinpoint-flink.properties](https://github.com/pinpoint-apm/pinpoint/blob/master/flink/src/main/resources/profiles/release/pinpoint-flink.properties).
* If you are running a flink cluster, set *flink.StreamExecutionEnvironment* to **server**.
* If you are running flink as a standalone, set *flink.StreamExecutionEnvironment* to **local**.
```properties
 flink.StreamExecutionEnvironment=server
```

**D.** Configure the hbase address in [hbase.properties](https://github.com/pinpoint-apm/pinpoint/blob/master/flink/src/main/resources/profiles/release/hbase.properties), which will be used to store the aggregated application data.
```properties
 hbase.client.host=YOUR_HBASE_ADDRESS
 hbase.client.port=2181
```

**E.** Build [Pinpoint-flink](https://github.com/pinpoint-apm/pinpoint/tree/master/flink) and run the streaming job file created under the *target* directory on the flink server.
 - The name of the streaming job is `pinpoint-flink-job-{pinpoint.version}.jar`.
 - For details on how to run the job, please refer to the [flink website](https://flink.apache.org).
 - You must pass `spring.profiles.active release` or `spring.profiles.active local` as a job parameter so that the job can refer to the configuration files set up above; the job uses the spring profile feature internally to locate them, so this parameter is required.

**F.** Configure the zookeeper address in [Pinpoint-Collector.properties](https://github.com/pinpoint-apm/pinpoint/blob/master/collector/src/main/resources/profiles/release/pinpoint-collector.properties) so that the Collector can connect to the flink server.
```properties
 flink.cluster.enable=true
 flink.cluster.zookeeper.address=YOUR_ZOOKEEPER_ADDRESS
 flink.cluster.zookeeper.sessiontimeout=3000
```

**G.** Enable application inspector in the web-ui by enabling the following configuration in [pinpoint-web.properties](https://github.com/pinpoint-apm/pinpoint/blob/master/web/src/main/resources/pinpoint-web-root.properties).

```properties
 config.show.applicationStat=true
```

## 4. Monitoring Streaming Jobs

There is a batch job that monitors whether the Pinpoint streaming jobs are running. To enable this batch job, configure the following files for *Pinpoint-web*.

**batch.properties**
```properties
batch.flink.server=FLINK_MANAGER_SERVER_IP_LIST
# Flink job manager server IPs, separated by ','.
# ex) batch.flink.server=123.124.125.126,123.124.125.127
```
**applicationContext-batch-schedule.xml**
```xml
 ...
```

If you would like to send alarms in case of batch job failure, you must implement the `com.navercorp.pinpoint.web.batch.JobFailMessageSender` class and register it as a Spring bean.

## 5. Others

For more details on how to install and operate flink, please refer to the [flink website](https://flink.apache.org).


# Application Inspector

## 1. 기능 설명

application inspector 기능은 agent들의 리소스 데이터(stat : cpu, memory, tps, datasource connection count)를 집계하여 보여주는 기능이다. 참고로 application은 agent의 그룹으로 이뤄진다. 그리고 agent 각각의 리소스 데이터는 agent inspector 화면에서 볼 수 있으며, application inspector 기능 또한 별도의 화면에서 확인할 수 있다.

inspector 화면 왼쪽 메뉴의 application inspector 버튼을 클릭하면 데이터를 볼 수 있다.

- 1 : application inspector menu, 2: application stat data
![inspector_view.jpg](images/applicationInspector/inspector_view.jpg)

예를 들면 A라는 application에 포함된 agent들의 heap 사용량을 모아서, heap 사용량의 평균값, heap 사용량이 가장 높은 agent id와 그 사용량, heap 사용량이 가장 적은 agent id와 그 사용량을 보여준다. 이외에도 agent inspector에서 제공하는 다른 데이터들도 집계하여 application inspector에서 제공한다.
![graph.jpg](images/applicationInspector/graph.jpg)


application inspector 기능을 동작시키기 위해서는 [flink](https://flink.apache.org)와 [zookeeper](https://zookeeper.apache.org/)가 필요하다. 기능의 동작 구조와 구성 및 설정 방법을 아래에서 설명한다.

## 2. 동작 구조

application inspector 기능의 동작 및 구조를 그림과 함께 보자.

![execute_flow.jpg](images/applicationInspector/execute_flow.jpg)



**A.** [flink](https://flink.apache.org)에 streaming job을 실행시킨다.
**B.** job이 실행되면 taskmanager 서버의 정보가 zookeeper의 데이터 노드로 등록된다.
**C.** Collector는 zookeeper에서 flink 서버의 정보를 가져와서 flink 서버와 tcp 연결을 맺고 agent stat 데이터를 전송한다.
**D.** flink 서버에서는 agent 데이터를 집계하여 통계 데이터를 hbase에 저장한다.

## 3. 기능 실행 방법

application inspector 기능을 실행하기 위해서는 아래와 같이 설정을 변경하고 Pinpoint를 실행해야 한다.

**A.** [테이블 생성 스크립트를 참조](https://github.com/pinpoint-apm/pinpoint/tree/master/hbase/scripts)하여 application 통계 데이터를 저장하는 **ApplicationStatAggre** 테이블을 생성한다.

**B.** flink 프로젝트 설정파일([Pinpoint-flink.properties](https://github.com/pinpoint-apm/pinpoint/blob/master/flink/src/main/resources/profiles/release/pinpoint-flink.properties))에 taskmanager 서버 정보를 저장하는 zookeeper 주소를 설정한다.
```properties
 flink.cluster.enable=true
 flink.cluster.zookeeper.address=YOUR_ZOOKEEPER_ADDRESS
 flink.cluster.zookeeper.sessiontimeout=3000
 flink.cluster.zookeeper.retry.interval=5000
 flink.cluster.tcp.port=19994
```

**C.** flink 프로젝트 설정파일([Pinpoint-flink.properties](https://github.com/pinpoint-apm/pinpoint/blob/master/flink/src/main/resources/profiles/release/pinpoint-flink.properties))에 job의 실행 방법과 Collector에서 데이터를 받는 listener의 개수를 설정한다.
- flink를 cluster로 구축해서 사용한다면 *flink.StreamExecutionEnvironment*에 **server**를 설정한다.
- flink를 standalone 형태로 실행한다면 *flink.StreamExecutionEnvironment*에 **local**을 설정한다.
```properties
 flink.StreamExecutionEnvironment=server
 flink.sourceFunction.Parallel=1
```

**D.** flink 프로젝트 설정파일([hbase.properties](https://github.com/pinpoint-apm/pinpoint/blob/master/flink/src/main/resources/profiles/release/hbase.properties))에 집계 데이터를 저장하는 hbase 주소를 설정한다.
```properties
 hbase.client.host=YOUR_HBASE_ADDRESS
 hbase.client.port=2181
```

**E.** [flink 프로젝트](https://github.com/pinpoint-apm/pinpoint/tree/master/flink)를 빌드하여 target 폴더 하위에 생성된 streaming job 파일을 flink 서버에서 실행한다.
 - streaming job 파일 이름은 `pinpoint-flink-job-{pinpoint.version}.jar` 이다.
 - 실행 방법은 [flink 사이트](https://flink.apache.org)를 참조한다.
 - 반드시 실행 시 job이 위에서 설정한 설정파일을 참고할 수 있도록 job parameter로 `spring.profiles.active release` 또는 `spring.profiles.active local`을 넣어줘야 한다. job 내부에서 spring profile 기능을 사용하여 설정파일을 참고하고 있기 때문에 반드시 입력해야 한다.

**F.** Collector에서 flink와 연결을 맺을 수 있도록 설정파일([Pinpoint-Collector.properties](https://github.com/pinpoint-apm/pinpoint/blob/master/collector/src/main/resources/pinpoint-collector.properties))에 zookeeper 주소를 설정한다.

```properties
 flink.cluster.enable=true
 flink.cluster.zookeeper.address=YOUR_ZOOKEEPER_ADDRESS
 flink.cluster.zookeeper.sessiontimeout=3000
```

**G.** web에서 application inspector 버튼을 활성화하기 위해서 설정파일(pinpoint-web.properties)을 수정한다.

```properties
 config.show.applicationStat=true
```

## 4. streaming job 동작 확인 모니터링 batch

Pinpoint streaming job이 실행되고 있는지 확인하는 batch job이 있다.
batch job을 동작시키고 싶다면 Pinpoint web 프로젝트의 설정 파일을 수정하면 된다.

**batch.properties**
```properties
batch.flink.server=FLINK_MANAGER_SERVER_IP_LIST
# `batch.flink.server` 속성 값에 flink job manager 서버 IP를 입력하면 된다. 서버 리스트의 구분자는 ','이다.
# ex) batch.flink.server=123.124.125.126,123.124.125.127
```
**applicationContext-batch-schedule.xml**
```xml
 ...
```

batch job이 실패할 경우 알람이 전송되도록 기능을 추가하고 싶다면 `com.navercorp.pinpoint.web.batch.JobFailMessageSender` class의 구현체를 만들고 bean으로 등록하면 된다.

## 5. 기타

자세한 flink 설치 및 운영에 대한 내용은 [flink 사이트](https://flink.apache.org)를 참고하자.
diff --git a/doc/compatibilityHbase.md b/doc/compatibilityHbase.md
deleted file mode 100755
index 3fe553f0265e6..0000000000000
--- a/doc/compatibilityHbase.md
+++ /dev/null
@@ -1,7 +0,0 @@
| Pinpoint Version | HBase 0.98.x | HBase 1.0.x | HBase 1.2.x | HBase 2.0.x |
| :--- | :--- | :--- | :--- | :--- |
| 1.5.x | not tested | yes | not tested | no |
| 1.6.x | not tested | not tested | yes | no |
| 1.7.x | not tested | not tested | yes | no |
| 1.8.x | not tested | not tested | yes | no |
| 2.0.x | not tested | not tested | yes | optional |
\ No newline at end of file
diff --git a/doc/compatibilityJava.md b/doc/compatibilityJava.md
deleted file mode 100755
index 4f44d8ea9817b..0000000000000
--- a/doc/compatibilityJava.md
+++ /dev/null
@@ -1,8 +0,0 @@
| Pinpoint Version | Agent | Collector | Web |
| :--- | :--- | :--- | :--- |
| 1.5.x | 6-8 | 7-8 | 7-8 |
| 1.6.x | 6-8 | 7-8 | 7-8 |
| 1.7.x | 6-8 | 8 | 8 |
| 1.8.0 | 6-10 | 8 | 8 |
| 1.8.1+ | 6-11 | 8 | 8 |
| 2.0.x | 6-11 | 8 | 8 |
\ No newline at end of file
diff --git a/doc/compatibilityPinpoint.md b/doc/compatibilityPinpoint.md
deleted file mode 100755
index 48da2bd4d916e..0000000000000
--- a/doc/compatibilityPinpoint.md
+++ /dev/null
@@ -1,7 +0,0 @@
| Agent Version | Collector 1.5.x | Collector 1.6.x | Collector 1.7.x | Collector 1.8.x | Collector 2.0.x |
| :--- | :--- | :--- | :--- | :--- | :--- |
| 1.5.x | yes | yes | yes | yes | yes |
| 1.6.x | not tested | yes | yes | yes | yes |
| 1.7.x | no | no | yes | yes | yes |
| 1.8.x | no | no | no | yes | yes |
| 2.0.x | no | no | no | no | yes |
\ No newline at end of file
diff --git a/doc/contribution.md b/doc/contribution.md
deleted file mode 100755
index 9a6011e7b9055..0000000000000
--- a/doc/contribution.md
+++ /dev/null
@@ -1,95 +0,0 @@
---
title: Contribution
keywords: help
last_updated: Feb 1, 2018
sidebar: mydoc_sidebar
permalink: contribution.html
disqus: false
---

Thank you very much for choosing to share your contribution with us. Please read this page before making your contribution.

Before making your first pull request, please make sure you've signed the [Contributor License Agreement](http://goo.gl/forms/A6Bp2LRoG3). This isn't a copyright assignment - it simply gives us (NAVER) permission to use and redistribute your code as part of the project.

## Making Pull Requests
Apart from trivial fixes such as typos or formatting, all pull requests should have a corresponding issue associated with them. It is always helpful to know what people are working on, and different (often better) ideas may pop up while discussing them.
Please keep these in mind before you create a pull request:
* Every new java file must have a copy of the license comment. You may copy this from an existing file.
* Make sure you've tested your code thoroughly. For plugins, please try your best to include integration tests if possible.
* Before submitting your code, make sure any changes introduced by your code do not break the build, or any tests.
* Clean up your commit log into logical chunks of work to make it easier for us to figure out what and why you've changed something. (`git rebase -i` helps)
* Please try your best to keep your code updated against the master branch before creating a pull request.
* Make sure you create the pull request against our master branch.
* If you've created your own plugin, please take a look at the [plugin contribution guideline](#plugin-contribution-guideline).


## Plugin Contribution Guideline
We welcome your plugin contribution.
Currently, we would love to see additional tracing support for libraries such as [Storm](https://storm.apache.org "Apache Storm") and [HBase](http://hbase.apache.org "Apache HBase"), as well as profiler support for additional languages (.NET, C++).
### Technical Guide

**For technical guides for developing plug-ins,** take a look at our [plugin development guide](https://pinpoint-apm.github.io/pinpoint/plugindevguide.html "Pinpoint Plugin Development Guide"), along with the [plugin samples](https://github.com/pinpoint-apm/pinpoint-plugin-sample "Pinpoint Plugin Samples project") project, to get an idea of how we do instrumentation. The samples will provide you with example code to help you get started.

### Contributing Plugin
If you want to contribute your plugin, it has to satisfy the following requirements:

1. Configuration key names must start with `profiler.[pluginName]`.
2. At least 1 plugin integration test.

Once your plugin is complete, please open an issue to contribute the plugin as below:

```
Title: [Target Library Name] Plugin Contribution

Link: Plugin Repository URL
Target: Target Library Name
Supported Version:
Description: Simple description about the target library and/or target library homepage URL

ServiceTypes: List of service type names and codes the plugin adds
Annotations: List of annotation key names and codes the plugin adds
Configurations: List of configuration keys and descriptions the plugin adds.
```

Our team will review the plugin, and your plugin repository will be linked on the third-party plugin list page if everything checks out. If the plugin is for a widely used library, and if we feel confident that we can continuously provide support for it, you may be asked to send us a PR. Should you choose to accept it, your plugin will be merged into the Pinpoint repository.

As much as we'd love to merge all the plugins into the source repository, we do not have the man power to manage all of them yet. We are a very small team, and we certainly are not experts in all of the target libraries. We feel that it would be better not to merge a plugin if we are not confident in our ability to provide continuous support for it.
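As a purely illustrative example of requirement 1 above (`mylib` is a hypothetical plugin name, not an actual plugin), the configuration keys would be namespaced like this:

```properties
# every key is prefixed with profiler.[pluginName]
profiler.mylib.enable=true
profiler.mylib.trace.param=true
```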
To send a PR, you have to modify your plugin like this:

* Fork the Pinpoint repository
* Copy your plugin under the /plugins directory
* Set the parent pom
```xml
<parent>
    <groupId>com.navercorp.pinpoint</groupId>
    <artifactId>pinpoint-plugins</artifactId>
    <version>Current Version</version>
</parent>
```
* Add your plugin to *plugins/pom.xml* as a sub-module.
* Add your plugin to *plugins/assembly/pom.xml* as a dependency.
* Copy your plugin integration tests under the /agent-it/src/test directory.
* Add your configurations to the /agent/src/main/resources/*.config files.
* Insert the following license header into all java source files.
```
/*
 * Copyright 2018 Pinpoint contributors and NAVER Corp.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
```

If you do not want to be bothered with a PR, you may choose to tell us to do it ourselves. However, please note that your contribution will not be visible through the git history or your GitHub profile.

diff --git a/doc/dev-architecture.md b/doc/dev-architecture.md
deleted file mode 100644
index 69aac83d39ee9..0000000000000
--- a/doc/dev-architecture.md
+++ /dev/null
@@ -1,24 +0,0 @@
# Architecture

Pinpoint is comprised of 3 main components (Agent, Collector, Web UI) and an HBase storage.

![Pinpoint Architecture](images/pinpoint-architecture.png)

## Components

### Pinpoint Agent
Pinpoint Agent attaches itself to a host application (such as Tomcat) as a java agent to instrument various classes for tracing.
When a class marked for tracing is loaded into the JVM, the Agent injects code around pre-defined methods to collect and send trace data to the Collector.

In addition to trace data, the agent also collects various information about the host application such as JVM arguments, loaded libraries, CPU usage, memory usage, and garbage collection.

For more detailed information, please take a look [here](dev-profiler.md).

### Pinpoint Collector
The Collector listens for data sent by the Agents and writes them into the HBase storage.

Click [here](dev-collector.md) for more information.

### Pinpoint Web
The Web provides users with various information collected by the Agents. These include an automatically generated server map, call stacks of distributed transactions, and various information on the host applications.

Click [here](dev-web.md) for more information.
diff --git a/doc/docker.md b/doc/docker.md
deleted file mode 100755
index 08ac2ba09a510..0000000000000
--- a/doc/docker.md
+++ /dev/null
@@ -1,15 +0,0 @@
---
title: Pinpoint on Docker
keywords: docker pinpoint, pinpoint install
last_updated: May 14, 2018
sidebar: mydoc_sidebar
permalink: docker.html
disqus: false
---

## Want to install Pinpoint inside docker?

We've created docker files to support docker.
Installing Pinpoint with these docker files takes approximately 10 minutes, after which you can check out the features of Pinpoint.

Visit the [Official Pinpoint-Docker repository](https://github.com/pinpoint-apm/pinpoint-docker) for more information.
\ No newline at end of file
diff --git a/doc/faq.md b/doc/faq.md
deleted file mode 100755
index 16bee869cf677..0000000000000
--- a/doc/faq.md
+++ /dev/null
@@ -1,77 +0,0 @@
---
title: FAQ
sidebar: mydoc_sidebar
keywords: faq, question, answer, frequently asked questions, FAQ, question and answer
last_updated: Feb 1, 2018
permalink: faq.html
toc: false
disqus: false
---

[Github issues](https://github.com/pinpoint-apm/pinpoint/issues)
[Google group](https://groups.google.com/forum/#!forum/pinpoint_user)
[Gitter](https://gitter.im/naver/pinpoint)

Chinese groups

QQ Group: 897594820 | DING Group
:----------------: | :-----------:
![QQ Group](images/NAVERPinpoint.png) | ![DING Group](images/NaverPinpoint交流群-DING.jpg)


### How do I get the call stack view?
Click on a server node, which will populate the scatter chart on the right. This chart shows all succeeded/failed requests that went through the server. If there are any requests that pique your interest, simply **drag on the scatter chart** to select them. This will bring up the call stack view containing the requests you've selected.

### How do I change the agent's log level?
You can change the log level by modifying the agent's *log4j.xml* located in the *PINPOINT_AGENT/lib* directory.

### Why are only the first/some of the requests traced?
There is a sampling rate option in the agent's pinpoint.config file (profiler.sampling.rate).
If this value is set to N, the Pinpoint agent samples 1 trace every N transactions.
Changing this value to 1 will allow you to trace every transaction.

### The request count in the Scatter Chart is different from the one in the Response Summary chart. Why is this?
The Scatter Chart data have a second granularity, so the requests counted here can be differentiated at one-second intervals.
On the other hand, the Server Map, Response Summary, and Load Chart data are stored in a minute granularity (the collector aggregates these in memory and flushes them every minute for performance reasons).
For example, if the data is queried from 10:00:30 to 10:05:30, the Scatter Chart will show the requests counted between 10:00:30 and 10:05:30, whereas the server map, response summary, and load chart will show the requests counted between 10:00:00 and 10:05:59.

### How do I delete an application name and/or agent id from HBase?
Application names and agent ids, once registered, stay in HBase until their TTL expires (default 1 year).
You may, however, delete them proactively using the [admin APIs](https://github.com/pinpoint-apm/pinpoint/blob/master/web/src/main/java/com/navercorp/pinpoint/web/controller/AdminController.java) once they are no longer used.
* Remove application name - `/admin/removeApplicationName.pinpoint?applicationName=$APPLICATION_NAME&password=$PASSWORD`
* Remove agent id - `/admin/removeAgentId.pinpoint?applicationName=$APPLICATION_NAME&agentId=$AGENT_ID&password=$PASSWORD`

Note that the value for the password parameter is what you defined in the `admin.password` property in *pinpoint-web.properties*. Leaving this blank will allow you to call admin APIs without the password parameter.

### What are the criteria for the application name?
Pinpoint's applicationName doesn't support special characters such as @, #, $, %, *.
Pinpoint's applicationName only supports [a-zA-Z0-9], '.', '-', '_' characters.

### HBase is taking up too much space, which data should I delete first?
Hbase is very scalable, so you can always add more region servers if you're running out of space. Shortening the TTL values, especially for **AgentStatV2** and **TraceV2**, can also help (though you might have to wait for a major compaction before space is reclaimed).
For details on how to run a major compaction, please refer to [this](https://github.com/pinpoint-apm/pinpoint/blob/master/hbase/scripts/hbase-major-compact-htable.hbase) script.

However, if you **must** make space asap, data in the **AgentStatV2** and **TraceV2** tables are probably the safest to delete. You will lose agent statistic data (inspector view) and call stack data (transaction view), but deleting these will not break anything.

Note that deleting ***MetaData** tables will result in *-METADATA-NOT-FOUND* being displayed in the call stack, and the only way to "fix" this is to restart all the agents, so it is generally a good idea to leave these tables alone.

### My custom jar application is not being traced. Help!
Pinpoint Agent needs an entry point to start off a new trace for a transaction. This is usually done by the various WAS plugins (such as Tomcat, Jetty, etc.), in which a new trace is started when they receive an RPC request.
For custom jar applications, you need to set this manually, as the Agent does not know when and where to start a trace.
You can set this by configuring `profiler.entrypoint` in the *pinpoint.config* file.

### Building is failing after a new release. Help!
Please remember to run the command `./mvnw clean verify -DskipTests=true` if you've used a previous version before, and replace `./mvnw` with `./mvnw.cmd` if you are using Windows.

### How to set the java runtime option when using atlassian OSGi
`-Datlassian.org.osgi.framework.bootdelegation=sun.,com.sun.,com.navercorp.*,org.apache.xerces.*`

### Why do I see the UI send requests to https://www.google-analytics.com/collect?
The Pinpoint Web module has google analytics attached, which tracks the number and the order of button clicks in the Server Map, Transaction List, and the Inspector View.
This data is used to better understand how users interact with the Web UI, which gives us valuable information on improving Pinpoint Web's user experience.
To disable this for any reason, set the following option to false in pinpoint-web.properties for your web instance.
```
config.sendUsage=false
```

### I'd like to use Hbase 2.x for Pinpoint.
If you'd like to use Hbase 2.x for the Pinpoint database, check out the [Hbase2-module](https://github.com/pinpoint-apm/pinpoint/tree/master/hbase2-module).


diff --git a/doc/hbase-upgrade.md b/doc/hbase-upgrade.md
deleted file mode 100755
index b65e778cf4ed2..0000000000000
--- a/doc/hbase-upgrade.md
+++ /dev/null
@@ -1,16 +0,0 @@
---
title: Hbase Upgrade
keywords: hbase, upgrade
last_updated: Mar 8, 2019
sidebar: mydoc_sidebar
permalink: hbaseupgrade.html
disqus: false
---

## Would you like to use Hbase 2.x for Pinpoint?

The default settings of the current releases are for Hbase 1.x.

If you'd like to use Hbase 2.x for the Pinpoint database, check out the [Hbase2-module](https://github.com/pinpoint-apm/pinpoint/tree/master/hbase2-module).
diff --git a/doc/history.md b/doc/history.md
deleted file mode 100755
index c2878586360b7..0000000000000
--- a/doc/history.md
+++ /dev/null
@@ -1,29 +0,0 @@
---
title: History
keywords: history
last_updated: Feb 1, 2018
sidebar: mydoc_sidebar
permalink: history.html
disqus: false
---

Pinpoint is a platform that analyzes large-scale distributed systems and provides a solution to handle large collections of trace data. It has been developed since July 2012 and was launched as an open-source project on January 9, 2015.

This article introduces Pinpoint; it describes what motivated us to start this project, which technologies are used, and how the Pinpoint Agent can be optimized.

> 本文的中文翻译版本 [请见这里](https://github.com/skyao/leaning-pinpoint/blob/master/design/technical_overview.md)

## Motivation to Get Started & Pinpoint Characteristics

In the early days of Internet services, compared to nowadays, the number of Internet users was relatively small and the architecture of Internet services was less complex.
Web services were generally configured using a 2-tier (web server and database) or 3-tier (web server, application server, and database) architecture. Today, however, as the Internet has grown, services must support a large number of concurrent connections and integrate functionalities and services organically, which results in much more complex combinations of the software stack. That is, n-tier architectures with more than three tiers have become more widespread. A service-oriented architecture (SOA) or the [microservices](http://en.wikipedia.org/wiki/Microservices) architecture is now a reality.

The system's complexity has consequently increased. The more complex a system is, the more difficult it is to solve problems such as system failures or performance issues. For example, finding a solution in a 3-tier system is far less complicated: you only need to analyze 3 main components (a web server, an application server, and a database), and the number of servers is small. In contrast, if a problem occurs in an n-tier architecture, a large number of components and servers must be investigated. Another problem is that it is difficult to see the big picture by analyzing individual components alone; this raises a low-visibility issue. The higher the degree of system complexity, the longer it takes to find the cause. Even worse, the probability increases that the cause is never found at all.

Such problems have occurred in the systems at NAVER. A variety of tools, like Application Performance Management (APM) solutions, were used, but they were not enough to handle the problems effectively; so we finally ended up developing a new tracing platform which can provide solutions to systems with an n-tier architecture.

Pinpoint, which began development in July 2012 and was launched as an open-source project in January 2015, is an n-tier architecture tracing platform for large-scale distributed systems.
The characteristics of Pinpoint are as follows: -* Distributed transaction tracing to trace messages across distributed applications -* Automatic detection of the application topology, helping you figure out how an application is configured -* Horizontal scalability to support large-scale server groups -* Code-level visibility to easily identify points of failure and bottlenecks -* Adding new functionality without code modifications, using the bytecode instrumentation technique diff --git a/doc/http-status-code-failure.md b/doc/http-status-code-failure.md deleted file mode 100644 index e40d697461292..0000000000000 --- a/doc/http-status-code-failure.md +++ /dev/null @@ -1,70 +0,0 @@ ---- -title: Marking Transaction as Fail -keywords: http, code fail, failure, http status -last_updated: Feb 1, 2018 -sidebar: mydoc_sidebar -permalink: httpstatuscodefailure.html -disqus: false ---- - -# HTTP Status Codes for Request Failure - -![overview](images/http-status-code-failure-overview.png) - -## Pinpoint Configuration - -pinpoint.config -~~~ -profiler.http.status.code.errors=5xx, 401, 403, 406 -~~~ -A comma-separated list of HTTP status codes to be marked as failed; an entry can be an exact code (e.g. 401) or a whole class (e.g. 5xx). -* 1xx: Informational (100 ~ 199). - * 100: Continue - * 101: Switching Protocols -* 2xx: Successful (200 ~ 299). - * 200: OK - * 201: Created - * 202: Accepted - * 203: Non-Authoritative Information - * 204: No Content - * 205: Reset Content - * 206: Partial Content -* 3xx: Redirection (300 ~ 399). - * 300: Multiple Choices - * 301: Moved Permanently - * 302: Found - * 303: See Other - * 304: Not Modified - * 305: Use Proxy - * 307: Temporary Redirect -* 4xx: Client Error (400 ~ 499). 
- * 400: Bad Request - * 401: Unauthorized - * 402: Payment Required - * 403: Forbidden - * 404: Not Found - * 405: Method Not Allowed - * 406: Not Acceptable - * 407: Proxy Authentication Required - * 408: Request Time-out - * 409: Conflict - * 410: Gone - * 411: Length Required - * 412: Precondition Failed - * 413: Request Entity Too Large - * 414: Request-URI Too Large - * 415: Unsupported Media Type - * 416: Requested range not satisfiable - * 417: Expectation Failed -* 5xx: Server Error (500 ~ 599). - * 500: Internal Server Error - * 501: Not Implemented - * 502: Bad Gateway - * 503: Service Unavailable - * 504: Gateway Time-out - * 505: HTTP Version not supported - -Resources -* https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html - - diff --git a/doc/installation.md b/doc/installation.md deleted file mode 100644 index 8ce9bc95c59c6..0000000000000 --- a/doc/installation.md +++ /dev/null @@ -1,376 +0,0 @@ ---- -title: Installation -keywords: pinpoint, pinpoint homepage, install, start, installation -last_updated: Feb 1, 2018 -sidebar: mydoc_sidebar -permalink: installation.html -disqus: false ---- - -To set up your very own Pinpoint instance you can either **download the build results** from our [**latest release**](https://github.com/pinpoint-apm/pinpoint/releases/latest), or manually build from your Git clone. -In order to run your own Pinpoint instance, you will need to run the components below: - -* **HBase** (for storage) -* **Pinpoint Collector** (deployed on a web container) -* **Pinpoint Web** (deployed on a web container) -* **Pinpoint Agent** (attached to a Java application for profiling) - -To try out a simple quickstart project, please refer to the [quick-start guide](./quickstart.html). - -## Quick Overview of Installation -1. HBase ([details](#1-hbase)) - 1. Set up an HBase cluster - [Apache HBase](http://hbase.apache.org) - 2. Create HBase Schemas - feed `/scripts/hbase-create.hbase` to the hbase shell. -2. 
Build Pinpoint (Optional) ([details](#2-building-pinpoint-optional)) - Not needed if you use the release binaries ([here](https://github.com/pinpoint-apm/pinpoint/releases)). - 1. Clone Pinpoint - `git clone $PINPOINT_GIT_REPOSITORY` - 2. Set the JAVA_HOME environment variable to the JDK 8 home directory. - 3. Set the JAVA_7_HOME environment variable to the JDK 7 home directory ([Zulu jdk7](https://www.azul.com/downloads/zulu-community/?version=java-7-lts) recommended). - 4. Set the JAVA_8_HOME environment variable to the JDK 8 home directory. - 5. Set the JAVA_9_HOME environment variable to the JDK 9 home directory. - 6. Run `./mvnw clean install -DskipTests=true` (or `./mvnw.cmd` for Windows) -3. Pinpoint Collector ([details](#3-pinpoint-collector)) - 1. Start *pinpoint-collector-boot-$VERSION.jar* with the java -jar command. - - e.g.) `java -jar -Dpinpoint.zookeeper.address=localhost pinpoint-collector-boot-2.2.1.jar` - - 2. It will start with default settings. To learn more about default values or how to override them, please see the details below. -4. Pinpoint Web ([details](#4-pinpoint-web)) - 1. Start *pinpoint-web-boot-$VERSION.jar* with the java -jar command. - - e.g.) `java -jar -Dpinpoint.zookeeper.address=localhost pinpoint-web-boot-2.2.1.jar` - - 2. It will start with default settings. To learn more about default values or how to override them, please see the details below. -5. Pinpoint Agent ([details](#5-pinpoint-agent)) - 1. Extract/move *pinpoint-agent/* to a convenient location (`$AGENT_PATH`). - 2. Set the `-javaagent:$AGENT_PATH/pinpoint-bootstrap-$VERSION.jar` JVM argument to attach the agent to a Java application. - 3. Set the `-Dpinpoint.agentId` and `-Dpinpoint.applicationName` command-line arguments. - a) If you're launching an agent in a containerized environment with a dynamically changing *agent id*, consider adding the `-Dpinpoint.container` command-line argument. - 4. Launch the Java application with the options above. - -## 1. 
HBase -Pinpoint uses HBase as its storage backend for the Collector and the Web. - -To set up your own cluster, take a look at the [HBase website](http://hbase.apache.org) for instructions. The HBase compatibility table is given below: - -{% include_relative compatibilityHbase.md %} - -Once you have HBase up and running, make sure the Collector and the Web are configured properly and are able to connect to HBase. - -### Creating Schemas for HBase -There are 2 scripts available to create tables for Pinpoint: *hbase-create.hbase* and *hbase-create-snappy.hbase*. Use *hbase-create-snappy.hbase* for snappy compression (requires [snappy](http://google.github.io/snappy/)); otherwise use *hbase-create.hbase*. - -To run these scripts, feed them into the HBase shell like below: - -`$HBASE_HOME/bin/hbase shell hbase-create.hbase` - -See [here](https://github.com/pinpoint-apm/pinpoint/tree/master/hbase/scripts "Pinpoint HBase scripts") for a complete list of scripts. - -## 2. Building Pinpoint - -There are two options: - -1. Download the build results from our [**latest release**](https://github.com/pinpoint-apm/pinpoint/releases/latest) and skip the building process. **(Recommended)** - -2. Build Pinpoint manually from the Git clone. **(Optional)** - - In order to do so, the following **requirements** must be met: - - * JDK 7 installed ([jdk1.7.0_80](http://www.oracle.com/technetwork/java/javase/downloads/java-archive-downloads-javase7-521261.html#jdk-7u80-oth-JPR) recommended) - * JDK 8 installed - * JDK 9 installed - * JAVA_HOME environment variable set to the JDK 8 home directory. - * JAVA_7_HOME environment variable set to the JDK 7 home directory. - * JAVA_8_HOME environment variable set to the JDK 8 home directory. - * JAVA_9_HOME environment variable set to the JDK 9 home directory. 
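On a Unix-like shell, the JDK requirements above might be exported as follows; the install paths are illustrative assumptions, so substitute the locations on your machine:

```shell
# Illustrative JDK install paths; substitute the locations on your machine
export JAVA_7_HOME=/usr/lib/jvm/zulu-7
export JAVA_8_HOME=/usr/lib/jvm/jdk1.8.0
export JAVA_9_HOME=/usr/lib/jvm/jdk-9
# The build itself runs on JDK 8, so JAVA_HOME points at the JDK 8 home
export JAVA_HOME="$JAVA_8_HOME"
```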
- - Agent compatibility to Collector table: - - {% include_relative compatibilityPinpoint.md %} - - Once the above requirements are met, simply run the command below (you may need to make **mvnw** executable): - - `./mvnw install -DskipTests=true` - - The agent built this way will have its log level set to DEBUG by default. If you're building an agent for release and need a higher log level, set the maven profile to *release* when building: - `./mvnw install -Prelease -DskipTests=true` - - Note that having multibyte characters in the maven local repository path or any class paths may cause the build to fail. - - The guide will refer to the full path of the pinpoint home directory as `$PINPOINT_PATH`. - - -Regardless of your method, you should end up with the files and directories mentioned in the following sections. - -## 3. Pinpoint Collector -You should have the following **executable jar** file. - -*pinpoint-collector-boot-$VERSION.jar* - -The path to this file should look like *$PINPOINT_PATH/collector/target/deploy/pinpoint-collector-boot-$VERSION.jar* if you built it manually. - -### Installation -Since Pinpoint Collector is packaged as an executable jar file, you can start the Collector by running it directly. - -e.g.) `java -jar -Dpinpoint.zookeeper.address=localhost pinpoint-collector-boot-2.2.1.jar` - -### Configuration -There are 3 configuration files used for Pinpoint Collector: *pinpoint-collector-root.properties*, *pinpoint-collector-grpc.properties*, and *hbase.properties*. - -* pinpoint-collector-root.properties - contains configurations for the collector. 
Check the following values with the agent's configuration options: - * `collector.receiver.base.port` (agent's *profiler.collector.tcp.port* - default: 9994/TCP) - * `collector.receiver.stat.udp.port` (agent's *profiler.collector.stat.port* - default: 9995/UDP) - * `collector.receiver.span.udp.port` (agent's *profiler.collector.span.port* - default: 9996/UDP) -* pinpoint-collector-grpc.properties - contains configurations for gRPC. - * `collector.receiver.grpc.agent.port` (agent's *profiler.transport.grpc.agent.collector.port*, *profiler.transport.grpc.metadata.collector.port* - default: 9991/TCP) - * `collector.receiver.grpc.stat.port` (agent's *profiler.transport.grpc.stat.collector.port* - default: 9992/TCP) - * `collector.receiver.grpc.span.port` (agent's *profiler.transport.grpc.span.collector.port* - default: 9993/TCP) -* hbase.properties - contains configurations to connect to HBase. - * `hbase.client.host` (default: localhost) - * `hbase.client.port` (default: 2181) - -You may take a look at the full list of default configurations here: -- [pinpoint-collector-root.properties](https://github.com/pinpoint-apm/pinpoint/blob/master/collector/src/main/resources/pinpoint-collector-root.properties) -- [pinpoint-collector-grpc.properties](https://github.com/pinpoint-apm/pinpoint/blob/master/collector/src/main/resources/profiles/local/pinpoint-collector-grpc.properties) -- [hbase.properties](https://github.com/pinpoint-apm/pinpoint/blob/master/collector/src/main/resources/profiles/local/hbase.properties) - -#### When Building Manually -You can modify default configuration values or add new profiles under `collector/src/main/resources/profiles/`. - -#### When Using Released Binary **(Recommended)** -- You can override any configuration values with the `-D` option. 
For example, - - `java -jar -Dspring.profiles.active=release -Dpinpoint.zookeeper.address=localhost -Dhbase.client.port=1234 pinpoint-collector-boot-2.2.1.jar` - -- To import a list of your customized configuration values from a file, you can use the `--spring.config.additional-location` option. For example, - - Create a file `./config/collector.properties`, and list the configuration values you want to override. - > - > spring.profiles.active=release - > - > pinpoint.zookeeper.address=localhost - > - > collector.receiver.grpc.agent.port=9999 - > - > collector.receiver.stat.udp.receiveBufferSize=1234567 - > - - - Execute with `java -jar pinpoint-collector-boot-2.2.1.jar --spring.config.additional-location=./config/collector.properties` - -- To further explore how to use externalized configurations, refer to the [Spring Boot Reference Document](https://docs.spring.io/spring-boot/docs/2.2.x/reference/html/spring-boot-features.html#boot-features-external-config-application-property-files). - -### Profiles -Pinpoint Collector provides two profiles: [release](https://github.com/pinpoint-apm/pinpoint/tree/master/collector/src/main/resources/profiles/release) and [local](https://github.com/pinpoint-apm/pinpoint/tree/master/collector/src/main/resources/profiles/local) (default). - -To specify which profile to use, configure the `spring.profiles.active` value as described in the previous section. - -#### Adding a custom profile - -To add a custom profile, you need to rebuild the `pinpoint-collector` module. - - 1. Add a new folder under `collector/src/main/resources/profiles` with a profile name. - 2. Copy files from the local or release profiles folder, and modify configuration values as needed. - 3. To use the new profile, rebuild the `pinpoint-collector` module and configure `spring.profiles.active` as described in the previous section. - -When using the released binary, you cannot add a custom profile. 
Instead, you can manage your configuration values in separate files and use them to override default values as described in the [previous section](#3-pinpoint-collector). - - -## 4. Pinpoint Web -You should have the following **executable jar** file. - -*pinpoint-web-boot-$VERSION.jar* - -The path to this file should look like *$PINPOINT_PATH/web/target/deploy/pinpoint-web-boot-$VERSION.jar* if you built it manually. - -Pinpoint Web Supported Browsers: - -* Chrome - -### Installation -Since Pinpoint Web is packaged as an executable jar file, you can start the Web by running it directly. - -e.g.) `java -jar -Dpinpoint.zookeeper.address=localhost pinpoint-web-boot-2.2.1.jar` - -### Configuration -There are 2 configuration files used for Pinpoint Web: *pinpoint-web-root.properties* and *hbase.properties*. - -* hbase.properties - contains configurations to connect to HBase. - * `hbase.client.host` (default: localhost) - * `hbase.client.port` (default: 2181) - -You may take a look at the default configuration files here - - [pinpoint-web-root.properties](https://github.com/pinpoint-apm/pinpoint/blob/master/web/src/main/resources/pinpoint-web-root.properties) - - [hbase.properties](https://github.com/pinpoint-apm/pinpoint/blob/master/web/src/main/resources/profiles/release/hbase.properties) - - [pinpoint-web.properties](https://github.com/pinpoint-apm/pinpoint/blob/master/web/src/main/resources/profiles/release/pinpoint-web.properties) - -#### When Building Manually -You can modify default configuration values or add new profiles under `web/src/main/resources/profiles/`. - -#### When Using Released Binary **(Recommended)** -- You can override any configuration values with the `-D` option. 
For example, - - `java -jar -Dspring.profiles.active=release -Dpinpoint.zookeeper.address=localhost -Dhbase.client.port=1234 pinpoint-web-boot-2.2.1.jar` - -- To import a list of your customized configuration values from a file, you can use the `--spring.config.additional-location` option. For example, - - Create a file `./config/web.properties`, and list the configuration values you want to override. - > - > spring.profiles.active=release - > - > pinpoint.zookeeper.address=localhost - > - > cluster.zookeeper.sessiontimeout=10000 - > - - - Execute with `java -jar pinpoint-web-boot-2.2.1.jar --spring.config.additional-location=./config/web.properties` - -- To further explore how to use externalized configurations, refer to the [Spring Boot Reference Document](https://docs.spring.io/spring-boot/docs/2.2.x/reference/html/spring-boot-features.html#boot-features-external-config-application-property-files). - -### Profiles - -Pinpoint Web provides two profiles: [release](https://github.com/pinpoint-apm/pinpoint/tree/master/web/src/main/resources/profiles/release) (default) and [local](https://github.com/pinpoint-apm/pinpoint/tree/master/web/src/main/resources/profiles/local). - -To specify which profile to use, configure the `spring.profiles.active` value as described in the previous section. - -#### Adding a custom profile - -To add a custom profile, you need to rebuild the `pinpoint-web` module. - - 1. Add a new folder under `web/src/main/resources/profiles` with a profile name. - 2. Copy files from the local or release profiles folder, and modify configuration values as needed. - 3. To use the new profile, rebuild the `pinpoint-web` module and configure `spring.profiles.active` as described in the previous section. - -When using the released binary, you cannot add a custom profile. Instead, you can manage your configuration values in separate files and use them to override default values as described in the [previous section](#4-pinpoint-web). - -## 5. 
Pinpoint Agent -If downloaded, unzip the Pinpoint Agent file. You should have a **pinpoint-agent** directory with the layout below: - -``` -pinpoint-agent -|-- boot -| |-- pinpoint-annotations-$VERSION.jar -| |-- pinpoint-bootstrap-core-$VERSION.jar -| |-- pinpoint-bootstrap-java8-$VERSION.jar -| |-- pinpoint-bootstrap-java9-$VERSION.jar -| |-- pinpoint-commons-$VERSION.jar -|-- lib -| |-- pinpoint-profiler-$VERSION.jar -| |-- pinpoint-profiler-optional-$VERSION.jar -| |-- pinpoint-rpc-$VERSION.jar -| |-- pinpoint-thrift-$VERSION.jar -| |-- ... -|-- plugin -| |-- pinpoint-activemq-client-plugin-$VERSION.jar -| |-- pinpoint-tomcat-plugin-$VERSION.jar -| |-- ... -|-- profiles -| |-- local -| | |-- log4j.xml -| | |-- pinpoint.config -| |-- release -| |-- log4j.xml -| |-- pinpoint.config -|-- pinpoint-bootstrap-$VERSION.jar -|-- pinpoint-root.config -``` -The path to this directory should look like *$PINPOINT_PATH/agent/target/pinpoint-agent* if you built it manually. - -You may move/extract the contents of the **pinpoint-agent** directory to any location of your choice. The guide will refer to the full path of this directory as `$AGENT_PATH`. - -> Note that you may change the agent's log level by modifying the *log4j.xml* file located in the *profiles/$PROFILE/* directory above. - -Agent compatibility to Collector table: - -{% include_relative compatibilityJava.md %} - -### Installation -Pinpoint Agent runs as a Java agent attached to an application to be profiled (such as Tomcat). 
 - -To wire up the agent, pass *$AGENT_PATH/pinpoint-bootstrap-$VERSION.jar* to the *-javaagent* JVM argument when running the application: - -* `-javaagent:$AGENT_PATH/pinpoint-bootstrap-$VERSION.jar` - -Additionally, Pinpoint Agent requires 2 command-line arguments in order to identify itself in the distributed system: - -* `-Dpinpoint.agentId` - uniquely identifies the application instance in which the agent is running -* `-Dpinpoint.applicationName` - groups a number of identical application instances as a single service - -Note that *pinpoint.agentId* must be globally unique to identify an application instance, and all applications that share the same *pinpoint.applicationName* are treated as multiple instances of a single service. - -If you're launching the agent in a containerized environment, you might have set your *agent id* to be auto-generated every time the container is launched. With frequent deployment and auto-scaling, this will lead to the Web UI being cluttered with the list of agents that were launched and destroyed previously. For such cases, you might want to add `-Dpinpoint.container` in addition to the 2 required command-line arguments above when launching the agent. - -**Tomcat Example** - -Add *-javaagent*, *-Dpinpoint.agentId*, *-Dpinpoint.applicationName* to *CATALINA_OPTS* in the Tomcat startup script (*catalina.sh*). - 
-CATALINA_OPTS="$CATALINA_OPTS -javaagent:$AGENT_PATH/pinpoint-bootstrap-$VERSION.jar"
-CATALINA_OPTS="$CATALINA_OPTS -Dpinpoint.agentId=$AGENT_ID"
-CATALINA_OPTS="$CATALINA_OPTS -Dpinpoint.applicationName=$APPLICATION_NAME"
-
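Alternatively, the same three options can be kept out of *catalina.sh* by placing them in *bin/setenv.sh*, which Tomcat sources on startup. The agent path, version, agent id, and application name below are illustrative placeholders:

```shell
# Hypothetical bin/setenv.sh; AGENT_PATH, VERSION, and the id/name are placeholders
AGENT_PATH=/opt/pinpoint-agent
VERSION=2.2.2
CATALINA_OPTS="$CATALINA_OPTS -javaagent:$AGENT_PATH/pinpoint-bootstrap-$VERSION.jar"
CATALINA_OPTS="$CATALINA_OPTS -Dpinpoint.agentId=order-api-01"
CATALINA_OPTS="$CATALINA_OPTS -Dpinpoint.applicationName=order-api"
export CATALINA_OPTS
```

Remember that *pinpoint.agentId* must stay unique per instance, so a fixed value like the one above only suits a single-instance setup.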
 - -Start up Tomcat to begin profiling your web application. - -Some application servers require additional configuration and/or may have caveats. Please take a look at the links below for further details. -* [JBoss](https://github.com/pinpoint-apm/pinpoint/tree/master/plugins/jboss#pinpoint-jboss-plugin-configuration) -* [Jetty](https://github.com/pinpoint-apm/pinpoint/blob/master/plugins/jetty/README.md) -* [Resin](https://github.com/pinpoint-apm/pinpoint/tree/master/plugins/resin#pinpoint-resin-plugin-configuration) - -### Configuration - -There are various configuration options for Pinpoint Agent available in *$AGENT_PATH/pinpoint-root.config*. - -Most of these options are self-explanatory, but the most important options to check are the **Collector IP address** and the **TCP/UDP ports**. These values are required for the agent to establish a connection to the *Collector* and function correctly. - -Set these values appropriately in *pinpoint-root.config*: - -**THRIFT** -* `profiler.collector.ip` (default: 127.0.0.1) -* `profiler.collector.tcp.port` (collector's *collector.receiver.base.port* - default: 9994/TCP) -* `profiler.collector.stat.port` (collector's *collector.receiver.stat.udp.port* - default: 9995/UDP) -* `profiler.collector.span.port` (collector's *collector.receiver.span.udp.port* - default: 9996/UDP) - -**GRPC** -* `profiler.transport.grpc.collector.ip` (default: 127.0.0.1) -* `profiler.transport.grpc.agent.collector.port` (collector's *collector.receiver.grpc.agent.port* - default: 9991/TCP) -* `profiler.transport.grpc.metadata.collector.port` (collector's *collector.receiver.grpc.agent.port* - default: 9991/TCP) -* `profiler.transport.grpc.stat.collector.port` (collector's *collector.receiver.grpc.stat.port* - default: 9992/TCP) -* `profiler.transport.grpc.span.collector.port` (collector's *collector.receiver.grpc.span.port* - default: 9993/TCP) - -You may take a look at the default *pinpoint-root.config* file 
[here](https://github.com/pinpoint-apm/pinpoint/blob/master/agent/src/main/resources/pinpoint-root.config "pinpoint.config") along with all the available configuration options. - -### Profiles -The agent profile is selected by adding `-Dkey=value` Java system properties: -* Built-in profiles under $PINPOINT_AGENT_DIR/profiles/$PROFILE - - Select with `-Dpinpoint.profiler.profiles.active=release` or `-Dpinpoint.profiler.profiles.active=local` - - Alternatively, modify `pinpoint.profiler.profiles.active=release` in $PINPOINT_AGENT_DIR/pinpoint-root.config - - Default profile: `release` -* Custom profile - 1. Create a custom profile directory $PINPOINT_AGENT_DIR/profiles/MyProfile - - Add pinpoint.config & log4j.xml - 2. Add `-Dpinpoint.profiler.profiles.active=MyProfile` -* External config support - - `-Dpinpoint.config=$MY_EXTERNAL_CONFIG_PATH` - -## Miscellaneous - -### HBase region servers hostname resolution -Please note that the collector/web must be able to resolve the hostnames of HBase region servers. -This is because HBase region servers are registered to ZooKeeper by their hostnames, so when the collector/web asks ZooKeeper for a list of region servers to connect to, it receives their hostnames. -Please ensure that these hostnames are in your DNS server, or add these entries to the collector/web instances' *hosts* file. - -### Routing Web requests to Agents - -Starting from 1.5.0, Pinpoint can send requests from the Web to Agents directly via the Collector (and vice-versa). To make this possible, we use Zookeeper to coordinate the communication channels established between Agents and Collectors, and those between Collectors and Web instances. With this addition, real-time communication (for things like active thread count monitoring) is now possible. - -We typically use the Zookeeper instance provided by the HBase backend so no additional Zookeeper configuration is required. Related configuration options are shown below. 
 - -* **Collector** - *pinpoint-collector.properties* - * `cluster.enable` - * `cluster.zookeeper.address` - * `cluster.zookeeper.sessiontimeout` - * `cluster.listen.ip` - * `cluster.listen.port` -* **Web** - *pinpoint-web.properties* - * `cluster.enable` - * `cluster.web.tcp.port` - * `cluster.zookeeper.address` - * `cluster.zookeeper.sessiontimeout` - * `cluster.zookeeper.retry.interval` - * `cluster.connect.address` - diff --git a/doc/main.md b/doc/main.md deleted file mode 100755 index 5140602f305dc..0000000000000 --- a/doc/main.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: "Pinpoint 2.2.2" -keywords: pinpoint release, 2.2.2 -permalink: main.html -sidebar: mydoc_sidebar ---- - -# What's New in 2.2.2 - -v2.2.2 is a bug-fix release of 2.2.1. - -There is a bug in the Reactor-netty plugin (v2.0.0 ~ 2.2.1) which inserts an incorrect endPoint value. -It only occurs in certain circumstances related to high overload in the Pinpoint Collector. -To prevent this, it is recommended to upgrade to version 2.2.2 or higher when using the Reactor-netty plugin. - -## Upgrade consideration - -HBase compatibility table: - -{% include_relative compatibilityHbase.md %} - -Agent compatibility to Collector table: - -{% include_relative compatibilityPinpoint.md %} - -Additionally, the required Java version to run each Pinpoint component is given below: - -{% include_relative compatibilityJava.md %} - -## Supported Modules - -* JDK 6+ -* Supported versions of libraries marked with \* may differ from the actual version. 
 - -{% include_relative modules.md %} - - diff --git a/doc/overview.md b/doc/overview.md deleted file mode 100755 index 6c8bb14b7a9bc..0000000000000 --- a/doc/overview.md +++ /dev/null @@ -1,30 +0,0 @@ ---- -title: Overview -keywords: overview, architecture -last_updated: Feb 1, 2018 -sidebar: mydoc_sidebar -permalink: overview.html -disqus: false ---- - - -## Overview -Services nowadays often consist of many different components, communicating amongst themselves as well as making API calls to external services. How each and every transaction gets executed is often left as a black box. Pinpoint traces transaction flows between these components and provides a clear view to identify problem areas and potential bottlenecks.
 - - -* **ServerMap** - Understand the topology of any distributed system by visualizing how its components are interconnected. Clicking on a node reveals details about the component, such as its current status and transaction count. -* **Realtime Active Thread Chart** - Monitor active threads inside applications in real time. -* **Request/Response Scatter Chart** - Visualize request count and response patterns over time to identify potential problems. Transactions can be selected for additional detail by **dragging over the chart**. - - ![Server Map](images/ss_server-map.png) - -* **CallStack** - Gain code-level visibility to every transaction in a distributed environment, identifying bottlenecks and points of failure in a single view. - - ![Call Stack](images/ss_call-stack.png) - -* **Inspector** - View additional details on the application such as CPU usage, Memory/Garbage Collection, TPS, and JVM arguments. - - ![Inspector](images/ss_inspector.png) - -## Architecture -![Pinpoint Architecture](images/pinpoint-architecture.png) diff --git a/doc/per-request_feature_guide.md b/doc/per-request_feature_guide.md deleted file mode 100644 index f635f81375043..0000000000000 --- a/doc/per-request_feature_guide.md +++ /dev/null @@ -1,575 +0,0 @@ ---- -title: Separate Logging Per Request -keywords: history -last_updated: Feb 1, 2018 -sidebar: mydoc_sidebar -permalink: perrequestfeatureguide.html -disqus: false ---- - -# ENGLISH GUIDE - -## Per-request logging - -### 1. Description -Pinpoint saves additional information (transactionId, spanId) in log messages to classify them by request. - -When Tomcat processes multiple requests concurrently, we can see log messages printed in chronological order. -But we cannot classify them by request. -For example, when an exception message is logged, we cannot easily identify all the logs related to the request that threw the exception. 
 - -Pinpoint is able to classify logs by request by storing additional information (transactionId, spanId) in the MDC of each request. -The transactionId printed in the log message is the same as the transactionId in Pinpoint Web’s Transaction List view. - -Let’s take a look at a more specific example. -The log below is from an exception that occurred without using Pinpoint. -As you can see, it is hard to identify the logs related to the request that threw the exception. -ex) Without Pinpoint -``` -2015-04-04 14:35:20 [INFO](ContentInfoCollector:76 ) get content name : TECH -2015-04-04 14:35:20 [INFO](ContentInfoCollector:123 ) get content name : OPINION -2015-04-04 14:35:20 [INFO](ContentInfoCollector:12) get content name : SPORTS -2015-04-04 14:35:20 [INFO](ContentInfoCollector:25 ) get content name : TECH -2015-04-04 14:35:20 [INFO](ContentInfoCollector:56 ) get content name : NATIONAL -2015-04-04 14:35:20 [INFO](ContentInfoCollector:34 ) get content name : OPINION -2015-04-04 14:35:20 [INFO](ContentInfoService:55 ) check authorization of user -2015-04-04 14:35:20 [INFO](ContentInfoService:14 ) get title of content -2015-04-04 14:35:21 [INFO](ContentDAOImpl:14 ) execute query ... -2015-04-04 14:35:21 [INFO](ContentDAOImpl:114 ) execute query ... -2015-04-04 14:35:20 [INFO](ContentInfoService:74 ) get top linking for content -2015-04-04 14:35:21 [INFO](ContentDAOImpl:14 ) execute query ... -2015-04-04 14:35:21 [INFO](ContentDAOImpl:114 ) execute query ... -2015-04-04 14:35:22 [INFO](ContentDAOImpl:186 ) execute query ... -2015-04-04 14:35:22 [ERROR](ContentDAOImpl:158 ) - com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure - at example.ContentDAO.executequery(ContentDAOImpl.java:152) - ... - at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) - at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) - at com.mysql.jdbc.ConnectionImpl.(ConnectionImpl.java:787) - ... 
 - Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: - Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. - The driver has not received any packets from the server. - at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2181) - ... 12 more - Caused by: java.net.ConnectException: Connection refused - at java.net.PlainSocketImpl.socketConnect(Native Method) - at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333) - at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195) - at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182) - at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:432) - at java.net.Socket.connect(Socket.java:529) - ...13 more -2015-04-04 14:35:22 [INFO](ContentDAO:145 ) execute query ... -2015-04-04 14:35:20 [INFO](ContentInfoService:38 ) update hits for content -2015-04-04 14:35:20 [INFO](ContentInfoService:89 ) check of user -2015-04-04 14:35:24 [INFO](ContentDAO:146 ) execute query ... -2015-04-04 14:35:25 [INFO](ContentDAO:123 ) execute query ... -``` - -Pinpoint classifies logs by request by storing additional information (transactionId, spanId) in the MDC of each request. 
-ex) With Pinpoint -``` -2015-04-04 14:35:20 [INFO](ContentInfoCollector:76) [txId : agent^14252^17 spanId : 1224] get content name : TECH -2015-04-04 14:35:20 [INFO](ContentInfoCollector:123) [txId : agent^142533^18 spanId : 1231] get content name : OPINION -2015-04-04 14:35:20 [INFO](ContentInfoCollector:12) [txId : agent^142533^19 spanId : 1246] get content name : SPORTS -2015-04-04 14:35:20 [INFO](ContentInfoCollector:25) [txId : agent^142533^20 spanId : 1263] get content name : TECH -2015-04-04 14:35:20 [INFO](ContentInfoCollector:56) [txId : agent^142533^21 spanId : 1265] get content name : NATIONAL -2015-04-04 14:35:20 [INFO](ContentInfoCollector:34) [txId : agent^142533^22 spanId : 1278] get content name : OPINION -2015-04-04 14:35:20 [INFO](ContentInfoService:55) [txId : agent^14252^18 spanId : 1231] check authorization of user -2015-04-04 14:35:20 [INFO](ContentInfoService:14) [txId : agent^14252^17 spanId : 1224] get title of content -2015-04-04 14:35:21 [INFO](ContentDAOImpl:14) [txId : agent^14252^17 spanId : 1224] execute query ... -2015-04-04 14:35:21 [INFO](ContentDAOImpl:114) [txId : agent^142533^19 spanId : 1246] execute query ... -2015-04-04 14:35:20 [INFO](ContentInfoService:74) [txId : agent^14252^17 spanId : 1224] get top linking for content -2015-04-04 14:35:21 [INFO](ContentDAOImpl:14) [txId : agent^142533^18 spanId : 1231] execute query ... -2015-04-04 14:35:21 [INFO](ContentDAOImpl:114) [txId : agent^142533^21 spanId : 1265] execute query ... -2015-04-04 14:35:22 [INFO](ContentDAOImpl:186) [txId : agent^142533^22 spanId : 1278] execute query ... -2015-04-04 14:35:22 [ERROR](ContentDAOImpl:158) [txId : agent^142533^18 spanId : 1231] - com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure - at com.pinpoint.example.dao.ContentDAO.executequery(ContentDAOImpl.java:152) - ... 
- at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) - at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) - at com.mysql.jdbc.ConnectionImpl.(ConnectionImpl.java:787) - ... - Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: - Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. - The driver has not received any packets from the server. - ... - at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2181) - ... 12 more - Caused by: java.net.ConnectException: Connection refused - at java.net.PlainSocketImpl.socketConnect(Native Method) - at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333) - at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195) - at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182) - at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:432) - at java.net.Socket.connect(Socket.java:529) - ... 13 more -2015-04-04 14:35:22 [INFO](ContentDAO:145) [txId : agent^14252^17 spanId : 1224] execute query ... -2015-04-04 14:35:20 [INFO](ContentInfoService:38) [txId : agent^142533^19 spanId : 1246] update hits for content -2015-04-04 14:35:20 [INFO](ContentInfoService:89) [txId : agent^142533^21 spanId : 1265] check of user -2015-04-04 14:35:24 [INFO](ContentDAO:146) [txId : agent^142533^22 spanId : 1278] execute query ... -2015-04-04 14:35:25 [INFO](ContentDAO:123) [txId : agent^14252^17 spanId : 1224] execute query ... -``` - -The transactionId printed in the log message is the same as the transactionId in Pinpoint Web’s Transaction List view. -![per-request_feature_1.jpg](images/per-request_feature_1.jpg) - -### 2. How to configure - -**2-1 Pinpoint agent configuration** - -To enable this feature, set the logging property corresponding to the logging library in use to true in *pinpoint.config*. 
-For example,
-
-ex) pinpoint.config when using log4j
-```
-###########################################################
-# log4j
-###########################################################
-profiler.log4j.logging.transactioninfo=true
-```
-
-ex) pinpoint.config when using log4j2
-```
-###########################################################
-# log4j2
-###########################################################
-profiler.log4j2.logging.transactioninfo=true
-```
-
-ex) pinpoint.config when using logback
-```
-###########################################################
-# logback
-###########################################################
-profiler.logback.logging.transactioninfo=true
-```
-
-**2-2 log4j, log4j2, logback configuration**
-
-Change the log message format to print the transactionId and spanId saved in the MDC.
-
-ex) log4j : log4j.xml
-```xml
-Before
-
-<appender name="console" class="org.apache.log4j.ConsoleAppender">
-    <layout class="org.apache.log4j.EnhancedPatternLayout">
-        <param name="ConversionPattern" value="%d{yyyy-MM-dd HH:mm:ss} [%-5p](%-30c{1}) %m%n" />
-    </layout>
-</appender>
-
-After
-
-<appender name="console" class="org.apache.log4j.ConsoleAppender">
-    <layout class="org.apache.log4j.EnhancedPatternLayout">
-        <param name="ConversionPattern" value="%d{yyyy-MM-dd HH:mm:ss} [%-5p](%-30c{1}) [TxId : %X{PtxId} , SpanId : %X{PspanId}] %m%n" />
-    </layout>
-</appender>
-```
-
-ex) log4j2 - log4j2.xml
-```xml
-Before
-
-<Console name="console" target="SYSTEM_OUT">
-    <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss} [%-5p](%-30c{1}) %m%n"/>
-</Console>
-
-After
-
-<Console name="console" target="SYSTEM_OUT">
-    <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss} [%-5p](%-30c{1}) [TxId : %X{PtxId} , SpanId : %X{PspanId}] %m%n"/>
-</Console>
-```
-
-ex) logback : logback.xml
-```xml
-Before
-
-<encoder>
-    <pattern>%d{HH:mm} %-5level %logger{36} - %msg%n</pattern>
-</encoder>
-
-After
-
-<encoder>
-    <pattern>%d{HH:mm} %-5level %logger{36} - [TxId : %X{PtxId} , SpanId : %X{PspanId}] %msg%n</pattern>
-</encoder>
-```
-
-**2-3 Checking log message**
-
-If per-request logging is correctly configured, the transactionId and spanId are printed in the log file.
-
-```
-2015-04-04 14:35:20 [INFO](ContentInfoCollector:76 ) [txId : agent^14252^17 spanId : 1224] get content name : TECH
-2015-04-04 14:35:20 [INFO](ContentInfoCollector:123 ) [txId : agent^142533^18 spanId : 1231] get content name : OPINION
-2015-04-04 14:35:20 [INFO](ContentInfoCollector:12) [txId : agent^142533^19 spanId : 1246] get content name : SPORTS
-```
-
-### 3. Exposing logs in Pinpoint Web
-
-If you want to add links to the logs in the transaction list view, configure and implement the logic as described below.
-Pinpoint Web only adds the link buttons - you should implement the logic to retrieve the log messages yourself.
-
-If you want to expose your agent’s log messages, please follow the steps below.
-
-**step 1**
-Implement a controller that receives transactionId, spanId, and the transaction start time as parameters, and retrieve the logs yourself.
-We do not yet provide a way to retrieve the logs.
-
-example)
-```java
-@RestController
-public class Nelo2LogController {
-
-    @RequestMapping(value = "/????")
-    public String NeloLogForTransactionId(@RequestParam(value = "transactionId", required = true) String transactionId,
-                                          @RequestParam(value = "spanId", required = false) String spanId,
-                                          @RequestParam(value = "time", required = true) long time) {
-
-        // you should implement the logic to retrieve your agent’s logs.
-    }
-}
-```
-
-**step 2**
-In the *pinpoint-web.properties* file, set `log.enable` to true, and `log.page.url` to the url of the controller above.
-The value set in `log.button.name` will show up as the button text in the Web UI.
-```properties
-log.enable=true
-log.page.url=XXXX.pinpoint
-log.button.name=log
-```
-
-**step 3**
-Since Pinpoint 1.5.0, the log button is enabled or disabled depending on whether any log was actually written for the transaction.
-To support this, you should write a plugin that adds an interceptor to your logging appender, recording whether logging occurred.
-Please refer to the Pinpoint Profiler Plugin Sample([Link](https://github.com/pinpoint-apm/pinpoint-plugin-sample)).
-The interceptor logic belongs in the appender method that writes out the LoggingEvent data, so review your appender class to find that method.
-Below is an interceptor example.
-
-```java
-public class AppenderInterceptor implements AroundInterceptor0 {
-
-    private final TraceContext traceContext;
-
-    public AppenderInterceptor(TraceContext traceContext) {
-        this.traceContext = traceContext;
-    }
-
-    @Override
-    public void before(Object target) {
-        Trace trace = traceContext.currentTraceObject();
-
-        if (trace != null) {
-            SpanRecorder recorder = trace.getSpanRecorder();
-            recorder.recordLogging(LoggingInfo.LOGGED);
-        }
-    }
-
-    @IgnoreMethod
-    @Override
-    public void after(Object target, Object result, Throwable throwable) {
-
-    }
-}
-```
-
-If those are correctly configured, the buttons are added in the transaction list view.
-![per-request_feature_2.jpg](images/per-request_feature_2.jpg)
-
-For details on how the log buttons are generated, please refer to Pinpoint Web’s BusinessTransactionController and ScatterChartController.
-
----------------------
-
-# Korean Guide
-
-## Per-request logging
-
-### 1. Feature description
-
-Pinpoint stores additional information in each log message so that log messages can be classified by request.
-
-When using Tomcat, which handles many requests at once, the log file shows messages in chronological order.
-However, you cannot separate the logs of the many requests processed concurrently.
-For example, when an exception message appears in the log, it is hard to find all the logs of the request that caused the exception.
-
-Pinpoint puts request-related information (transactionId, spanId) into the MDC of each log message so that log messages can be classified by request.
-The transactionId printed in the log matches the transactionId shown in the Transaction List view of Pinpoint Web.
-
-Let's look at a concrete example.
-Below are the log messages printed when an exception occurred without Pinpoint.
-You cannot tell which of the concurrent requests each log line belongs to.
- -ex) Without Pinpoint -``` -2015-04-04 14:35:20 [INFO](ContentInfoCollector:76 ) get content name : TECH -2015-04-04 14:35:20 [INFO](ContentInfoCollector:123 ) get content name : OPINION -2015-04-04 14:35:20 [INFO](ContentInfoCollector:12) get content name : SPORTS -2015-04-04 14:35:20 [INFO](ContentInfoCollector:25 ) get content name : TECH -2015-04-04 14:35:20 [INFO](ContentInfoCollector:56 ) get content name : NATIONAL -2015-04-04 14:35:20 [INFO](ContentInfoCollector:34 ) get content name : OPINION -2015-04-04 14:35:20 [INFO](ContentInfoService:55 ) check authorization of user -2015-04-04 14:35:20 [INFO](ContentInfoService:14 ) get title of content -2015-04-04 14:35:21 [INFO](ContentDAOImpl:14 ) execute query ... -2015-04-04 14:35:21 [INFO](ContentDAOImpl:114 ) execute query ... -2015-04-04 14:35:20 [INFO](ContentInfoService:74 ) get top linking for content -2015-04-04 14:35:21 [INFO](ContentDAOImpl:14 ) execute query ... -2015-04-04 14:35:21 [INFO](ContentDAOImpl:114 ) execute query ... -2015-04-04 14:35:22 [INFO](ContentDAOImpl:186 ) execute query ... -2015-04-04 14:35:22 [ERROR]ContentDAOImpl:158 ) - com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure - at example.ContentDAO.executequery(ContentDAOImpl.java:152) - ... - at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) - at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) - at com.mysql.jdbc.ConnectionImpl.(ConnectionImpl.java:787) - ... - Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: - Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. - The driver has not received any packets from the server. - at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2181) - ... 
12 more
- Caused by: java.net.ConnectException: Connection refused
- at java.net.PlainSocketImpl.socketConnect(Native Method)
- at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
- at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
- at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
- at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:432)
- at java.net.Socket.connect(Socket.java:529)
- ...13 more
-2015-04-04 14:35:22 [INFO](ContentDAO:145 ) execute query ...
-2015-04-04 14:35:20 [INFO](ContentInfoService:38 ) update hits for content
-2015-04-04 14:35:20 [INFO](ContentInfoService:89 ) check of user
-2015-04-04 14:35:24 [INFO](ContentDAO:146 ) execute query ...
-2015-04-04 14:35:25 [INFO](ContentDAO:123 ) execute query ...
-```
-
-Pinpoint puts request-related information (transactionId, spanId) into the MDC of each log message, classifying log messages by request.
-
-ex) With Pinpoint
-
-```
-2015-04-04 14:35:20 [INFO](ContentInfoCollector:76) [txId : agent^14252^17 spanId : 1224] get content name : TECH
-2015-04-04 14:35:20 [INFO](ContentInfoCollector:123) [txId : agent^142533^18 spanId : 1231] get content name : OPINION
-2015-04-04 14:35:20 [INFO](ContentInfoCollector:12) [txId : agent^142533^19 spanId : 1246] get content name : SPORTS
-2015-04-04 14:35:20 [INFO](ContentInfoCollector:25) [txId : agent^142533^20 spanId : 1263] get content name : TECH
-2015-04-04 14:35:20 [INFO](ContentInfoCollector:56) [txId : agent^142533^21 spanId : 1265] get content name : NATIONAL
-2015-04-04 14:35:20 [INFO](ContentInfoCollector:34) [txId : agent^142533^22 spanId : 1278] get content name : OPINION
-2015-04-04 14:35:20 [INFO](ContentInfoService:55) [txId : agent^14252^18 spanId : 1231] check authorization of user
-2015-04-04 14:35:20 [INFO](ContentInfoService:14) [txId : agent^14252^17 spanId : 1224] get title of content
-2015-04-04 14:35:21 [INFO](ContentDAOImpl:14) [txId : agent^14252^17 spanId : 1224] execute query ...
-2015-04-04 14:35:21 [INFO](ContentDAOImpl:114) [txId : agent^142533^19 spanId : 1246] execute query ... -2015-04-04 14:35:20 [INFO](ContentInfoService:74) [txId : agent^14252^17 spanId : 1224] get top linking for content -2015-04-04 14:35:21 [INFO](ContentDAOImpl:14) [txId : agent^142533^18 spanId : 1231] execute query ... -2015-04-04 14:35:21 [INFO](ContentDAOImpl:114) [txId : agent^142533^21 spanId : 1265] execute query ... -2015-04-04 14:35:22 [INFO](ContentDAOImpl:186) [txId : agent^142533^22 spanId : 1278] execute query ... -2015-04-04 14:35:22 [ERROR](ContentDAOImpl:158) [txId : agent^142533^18 spanId : 1231] - com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure - at com.pinpoint.example.dao.ContentDAO.executequery(ContentDAOImpl.java:152) - ... - at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) - at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) - at com.mysql.jdbc.ConnectionImpl.(ConnectionImpl.java:787) - ... - Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: - Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. - The driver has not received any packets from the server. - ... - at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2181) - ... 12 more - Caused by: java.net.ConnectException: Connection refused - at java.net.PlainSocketImpl.socketConnect(Native Method) - at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333) - at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195) - at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182) - at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:432) - at java.net.Socket.connect(Socket.java:529) - ... 13 more -2015-04-04 14:35:22 [INFO](ContentDAO:145) [txId : agent^14252^17 spanId : 1224] execute query ... 
-2015-04-04 14:35:20 [INFO](ContentInfoService:38) [txId : agent^142533^19 spanId : 1246] update hits for content
-2015-04-04 14:35:20 [INFO](ContentInfoService:89) [txId : agent^142533^21 spanId : 1265] check of user
-2015-04-04 14:35:24 [INFO](ContentDAO:146) [txId : agent^142533^22 spanId : 1278] execute query ...
-2015-04-04 14:35:25 [INFO](ContentDAO:123) [txId : agent^14252^17 spanId : 1224] execute query ...
-```
-
-The transactionId printed in the log message matches the transactionId in the transaction list of Pinpoint Web.
-![per-request_feature_1.jpg](images/per-request_feature_1.jpg)
-
-### 2. Configuration
-
-**2-1 Pinpoint Agent configuration**
-
-To use this feature, set the logging option in the Pinpoint Agent configuration file (pinpoint.config) to true.
-You only need to set the option for the logging library you use.
-Examples are shown below.
-
-ex) pinpoint.config when using log4j
-```
-###########################################################
-# log4j
-###########################################################
-profiler.log4j.logging.transactioninfo=true
-```
-
-ex) pinpoint.config when using log4j2
-```
-###########################################################
-# log4j2
-###########################################################
-profiler.log4j2.logging.transactioninfo=true
-```
-
-ex) pinpoint.config when using logback
-```
-###########################################################
-# logback
-###########################################################
-profiler.logback.logging.transactioninfo=true
-```
-
-**2-2 log4j, log4j2, logback configuration**
-
-Add settings to the log message pattern in your logging configuration file so that the transactionId and spanId values that Pinpoint stores in the MDC are printed.
-
-ex) log4j - log4j.xml
-```xml
-Before
-
-<appender name="console" class="org.apache.log4j.ConsoleAppender">
-    <layout class="org.apache.log4j.EnhancedPatternLayout">
-        <param name="ConversionPattern" value="%d{yyyy-MM-dd HH:mm:ss} [%-5p](%-30c{1}) %m%n" />
-    </layout>
-</appender>
-
-After
-
-<appender name="console" class="org.apache.log4j.ConsoleAppender">
-    <layout class="org.apache.log4j.EnhancedPatternLayout">
-        <param name="ConversionPattern" value="%d{yyyy-MM-dd HH:mm:ss} [%-5p](%-30c{1}) [TxId : %X{PtxId} , SpanId : %X{PspanId}] %m%n" />
-    </layout>
-</appender>
-```
-
-ex) log4j2 - log4j2.xml
-```xml
-Before
-
-<Console name="console" target="SYSTEM_OUT">
-    <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss} [%-5p](%-30c{1}) %m%n"/>
-</Console>
-
-After
-
-<Console name="console" target="SYSTEM_OUT">
-    <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss} [%-5p](%-30c{1}) [TxId : %X{PtxId} , SpanId : %X{PspanId}] %m%n"/>
-</Console>
-```
-
-ex) logback - logback.xml
-```xml
-Before
-
-<encoder>
-    <pattern>%d{HH:mm} %-5level %logger{36} - %msg%n</pattern>
-</encoder>
-
-After
-
-<encoder>
-    <pattern>%d{HH:mm} %-5level %logger{36} - [TxId : %X{PtxId} , SpanId : %X{PspanId}] %msg%n</pattern>
-</encoder>
-```
-
-**2-3 Checking log output**
-
-Run the service with the Pinpoint Agent applied and check that the transactionId and spanId information is printed in the log messages as below.
-
-```
-2015-04-04 14:35:20 [INFO](ContentInfoCollector:76 ) [txId : agent^14252^17 spanId : 1224] get content name : TECH
-2015-04-04 14:35:20 [INFO](ContentInfoCollector:123 ) [txId : agent^142533^18 spanId : 1231] get content name : OPINION
-2015-04-04 14:35:20 [INFO](ContentInfoCollector:12) [txId : agent^142533^19 spanId : 1246] get content name : SPORTS
-```
-
-### 3. Checking logs in Pinpoint Web
-If you want to provide a link that shows the logs in the transaction list view of Pinpoint Web, add the configuration and implementation below.
-Pinpoint Web only adds the button; you have to implement the logic to retrieve the logs yourself.
-
-
-To show log messages in Pinpoint Web, follow the steps below.
-
-**step 1**
-Implement a controller that receives transactionId, spanId, and the transaction start time as parameters and retrieves the log messages.
-
-example)
-```java
-@RestController
-public class Nelo2LogController {
-
-    @RequestMapping(value = "/XXXX")
-    public String NeloLogForTransactionId(@RequestParam(value = "transactionId", required = true) String transactionId,
-                                          @RequestParam(value = "spanId", required = false) String spanId,
-                                          @RequestParam(value = "time", required = true) long time) {
-
-        // you should implement the logic to retrieve your agent’s logs.
-    }
-}
-```
-
-
-**step 2**
-In the pinpoint-web.properties file, set the value of log.enable to true to activate the button feature,
-and add the url of the controller implemented above and the name of the button.
-
-```properties
-log.enable=true
-log.page.url=XXXX.Pinpoint
-log.button.name=log
-```
-
-
-**step 3**
-Since Pinpoint 1.5, the log button is enabled based on whether logging actually occurred,
-so you need to develop a plugin that adds an interceptor recording whether logging occurred to the logging method of the appender you use.
-For how to develop such a plugin, refer to the following link ([Link](https://github.com/pinpoint-apm/pinpoint-plugin-sample)). The interceptor logic should be added to the method in the appender class that performs logging using the LoggingEvent data.
-Below is an interceptor example.
-```java
-public class AppenderInterceptor implements AroundInterceptor0 {
-
-    private final TraceContext traceContext;
-
-    public AppenderInterceptor(TraceContext traceContext) {
-        this.traceContext = traceContext;
-    }
-
-    @Override
-    public void before(Object target) {
-        Trace trace = traceContext.currentTraceObject();
-
-        if (trace != null) {
-            SpanRecorder recorder = trace.getSpanRecorder();
-            recorder.recordLogging(LoggingInfo.LOGGED);
-        }
-    }
-
-    @IgnoreMethod
-    @Override
-    public void after(Object target, Object result, Throwable throwable) {
-
-    }
-}
-```
-
-
-With the configuration and implementation above in place, running Pinpoint Web adds a button as shown below.
-![per-request_feature_2.jpg](images/per-request_feature_2.jpg)
-To see how the log button is generated, refer to Pinpoint Web's BusinessTransactionController and ScatterChartController classes.
diff --git a/doc/performance.md b/doc/performance.md
deleted file mode 100755
index 5d62a675deefd..0000000000000
--- a/doc/performance.md
+++ /dev/null
@@ -1,67 +0,0 @@
----
-title: Performance Analysis
-keywords: performance, test
-last_updated: Oct 23, 2019
-sidebar: mydoc_sidebar
-permalink: performance.html
-disqus: false
----
-
-# Introduction
-
- The Pinpoint team is always mindful of performance and stability issues.
- We've adopted technologies that reduce elements hindering performance, and we always examine the code carefully when there is a plugin pull request (plugin code affects performance the most).
-
- While we have been testing internally every day for the last few years, we've finally had the chance to make the data presentable.
-
- This article doesn't include results compared with other APMs. It's pointless to compare with others due to the difference in collected data.
- Pinpoint collects massive data to enhance observability as much as possible. 
But still with minimal impact on performance.
-
-# Test Environment
-
- JVM : 1.8.0_77 (G1, -Xms4g, -Xmx4g)
- Server : Tomcat
- Database : Cubrid
- Stress test generator : [NGrinder](https://github.com/pinpoint-apm/ngrinder)
-
-# Test Result
-
- ![Test Result](images/20191022_Performance.png)
-
- *off : non traced
- *on-20 : trace 5% of transactions using Thrift
- *grpc-on-20 : trace 5% of transactions using gRPC
- *on-1 : trace 100% of transactions using Thrift
- *grpc-on-1 : trace 100% of transactions using gRPC
-
- **Test results show that Pinpoint affects performance and memory by less than 3%**
- **TPS is affected by various factors, so results may not always be exact**
- **gRPC is a little slower than Thrift in this test; the performance gap between the two is expected to be reduced, or even reversed, in the v1.9.0 release**
-
-
-# Conclusion
-
- Pinpoint is already being used by dozens of global companies around the world.
- With the right environment and configuration, it has proved its worth.
- We believe most services can spare 3% of their resources to gain high observability with Pinpoint.
-
-# Check List
-
- If you still have performance issues that seem to be caused by Pinpoint, here are several items to check first.
-
- 1. Check the default log option for Pinpoint-Agent (the default was `DEBUG` prior to v1.8.1)
- 2. JVM options
-    - use G1 as the GC type
-    - set the initial and maximum memory allocation pool to the same size, e.g. -Xms4g -Xmx4g
- 3. Change the [sampling rate](https://pinpoint-apm.github.io/pinpoint/faq.html#why-is-only-the-firstsome-of-the-requests-traced). Even 1~2% would be enough if you are dealing with big data.
-
- When a transaction doesn't involve a database, Pinpoint may appear to consume much more than 3% of resources, since the instrumentation overhead is not relative but absolute.
- This phenomenon appears in all APMs, not only Pinpoint.
-
-# Reference Data
-
- We ran the tests with various technology stacks, and plan to expand coverage as we go.
- [Full Result](images/20191022_Perf_Full.html)
-
-
\ No newline at end of file
diff --git a/doc/plugin-dev-guide.md b/doc/plugin-dev-guide.md
deleted file mode 100755
index 3786357c5afd5..0000000000000
--- a/doc/plugin-dev-guide.md
+++ /dev/null
@@ -1,305 +0,0 @@
-
----
-title: Plugin Developer Guide
-keywords: plugin, plug-in, plug
-last_updated: Jan 21, 2019
-sidebar: mydoc_sidebar
-permalink: plugindevguide.html
-disqus: false
----
-
-You can write Pinpoint profiler plugins to extend profiling target coverage. It is highly advisable to look into the trace data recorded by Pinpoint plugins before jumping into plugin development.
-
- * There is a third-party [pinpoint agent plugin generator tool](https://github.com/bbossgroups/pinpoint-plugin-generate) for quickly creating a simple plugin, if you'd like to check it out.
-
-## I. Trace Data
-In Pinpoint, a transaction consists of a group of `Spans`. Each `Span` represents a trace of a single logical node that the transaction has gone through.
-
-To aid in visualization, let's suppose that there is a system like below. The *FrontEnd* server receives requests from users, then sends requests to the *BackEnd* server, which queries a DB. Among these nodes, let's assume only the *FrontEnd* and *BackEnd* servers are profiled by the Pinpoint Agent.
-
-![trace](https://user-images.githubusercontent.com/10043788/133535491-adafcd89-c04e-49af-9ad7-f7746bb9c95c.PNG)
-
-When a request arrives at the *FrontEnd* server, Pinpoint Agent generates a new transaction id and creates a `Span` with it. To handle the request, the *FrontEnd* server then invokes the *BackEnd* server. At this point, Pinpoint Agent injects the transaction id (plus a few other values for propagation) into the invocation message. When the *BackEnd* server receives this message, it extracts the transaction id (and the other values) from the message and creates a new `Span` with them. 
As a result, all `Spans` in a single transaction share the same transaction id.
-
-A `Span` records important method invocations and their related data (arguments, return value, etc.) before encapsulating them as `SpanEvents` in a call-stack-like representation. The `Span` itself and each of its `SpanEvents` represents a method invocation.
-
-`Span` and `SpanEvent` have many fields, but most of them are handled internally by Pinpoint Agent and most plugin developers won't need to worry about them. But the fields and data that must be handled by plugin developers will be listed throughout this guide.
-
-
-## II. Pinpoint Plugin Structure
-A Pinpoint plugin consists of *type-provider.yml* and `ProfilerPlugin` implementations. *type-provider.yml* defines the `ServiceTypes` and `AnnotationKeys` that will be provided by the plugin, and provides them to Pinpoint Agent, Web and Collector. `ProfilerPlugin` implementations are used by Pinpoint Agent to transform target classes to record trace data.
-
-Plugins are deployed as jar files. These jar files are packaged under the *plugin* directory for the agent, while the collector and web have them deployed under *WEB-INF/lib*.
-On start up, Pinpoint Agent, Collector, and Web iterate through each of these plugins, parse *type-provider.yml*, and load `ProfilerPlugin` implementations using `ServiceLoader` from the following locations:
-
-* META-INF/pinpoint/type-provider.yml
-* META-INF/services/com.navercorp.pinpoint.bootstrap.plugin.ProfilerPlugin
-
-Here is a [template plugin project](https://github.com/pinpoint-apm/pinpoint-plugin-template) you can use to start creating your own plugin.
-
-
-### 1. type-provider.yml
-*type-provider.yml* defines the `ServiceTypes` and `AnnotationKeys` that will be used by the plugin and provided to the agent, collector and web; the format is outlined below.
-
-```yaml
-serviceTypes:
-  - code: # Unique int value of the ServiceType
-    name: # Unique name of the ServiceType
-    desc: # May be omitted, defaulting to the same value as name.
-    property: # May be omitted, all properties defaulting to false.
-      terminal: # May be omitted, defaulting to false.
-      queue: # May be omitted, defaulting to false.
-      recordStatistics: # May be omitted, defaulting to false.
-      includeDestinationId: # May be omitted, defaulting to false.
-      alias: # May be omitted, defaulting to false.
-    matcher: # May be omitted
-      type: # Any one of 'args', 'exact', 'none'
-      code: # Annotation key code - required only if type is 'exact'
-
-annotationKeys:
-  - code: # Unique int value of the AnnotationKey
-    name: # Name of the AnnotationKey
-    property: # May be omitted, all properties defaulting to false.
-      viewInRecordSet:
-```
-
-`ServiceType` and `AnnotationKey` defined here are instantiated when the agent loads, and can be obtained using `ServiceTypeProvider` and `AnnotationKeyProvider` like below.
-```java
-// ServiceType
-ServiceType serviceType = ServiceTypeProvider.getByCode(1000); // by ServiceType code
-ServiceType serviceType = ServiceTypeProvider.getByName("NAME"); // by ServiceType name
-// AnnotationKey
-AnnotationKey annotationKey = AnnotationKeyProvider.getByCode(100);
-```
-
-#### 1.1 ServiceType
-
-Every `Span` and `SpanEvent` contains a `ServiceType`. The `ServiceType` represents which library the traced method belongs to, as well as how the `Span` and `SpanEvent` should be handled.
-
-The table below shows the `ServiceType`'s properties.
-
-property | description
---- | ---
-name | name of the `ServiceType`. Must be unique
-code | short type code value of the `ServiceType`. Must be unique
-desc | description
-properties | additional properties (see below)
-
-`ServiceType` code must use a value from its appropriate category. The table below shows these categories and their range of codes.
-
-category | range
---- | ---
-Internal Use | 0 ~ 999
-Server | 1000 ~ 1999
-DB Client | 2000 ~ 2999
-Cache Client | 8000 ~ 8999
-RPC Client | 9000 ~ 9999
-Others | 5000 ~ 7999
-
-
-`ServiceType` code must be unique. 
Therefore, if you are writing a plugin that will be shared publicly, **you must** contact the Pinpoint dev. team to get a `ServiceType` code assigned. If your plugin is for private use, you may freely pick a value for the `ServiceType` code from the table below.
-
-category | range
---- | ---
-Server | 1900 ~ 1999
-DB client | 2900 ~ 2999
-Cache client | 8900 ~ 8999
-RPC client | 9900 ~ 9999
-Others | 7500 ~ 7999
-
-
-`ServiceTypes` can have the following properties.
-
-property | description
---- | ---
-TERMINAL | This `Span` or `SpanEvent` invokes a remote node, but the target node is not traceable with Pinpoint
-QUEUE | This `Span` or `SpanEvent` consumes/produces a message from/to a message queue.
-INCLUDE_DESTINATION_ID | This `Span` or `SpanEvent` records a `destination id`, and the remote server is not a traceable type.
-RECORD_STATISTICS | Pinpoint Collector should collect execution time statistics of this `Span` or `SpanEvent`
-ALIAS | The following node may or may not have a Pinpoint Agent attached, but this node nevertheless knows what follows it. (e.g. an Elasticsearch client)
-
-
-#### 1.2 AnnotationKey
-You can annotate spans and span events with more information. An **Annotation** is a key-value pair where the key is an `AnnotationKey` type and the value is a primitive type, String or a byte[]. There are pre-defined `AnnotationKeys` for commonly used annotation types, but you can define your own keys in *type-provider.yml* if these are not enough.
-
-
-property | description
---- | ---
-name | Name of the `AnnotationKey`
-code | int type code value of the `AnnotationKey`. Must be unique.
-properties | properties
-
-If you are writing a plugin for public use, and are looking to add a new `AnnotationKey`, you must contact the Pinpoint dev. team to get an `AnnotationKey` code assigned. If your plugin is for private use, you may safely pick a value between 900 and 999 to use as the `AnnotationKey` code.
-
-The table below shows the `AnnotationKey` properties. 
-
-property | description
---- | ---
-VIEW_IN_RECORD_SET | Show this annotation in the transaction call tree.
-ERROR_API_METADATA | This property is not for plugins.
-
-
-#### Example
-You can find a *type-provider.yml* sample [here](https://github.com/pinpoint-apm/pinpoint-plugin-sample/blob/master/plugin/src/main/resources/META-INF/pinpoint/type-provider.yml).
-
-You may also define and attach an `AnnotationKeyMatcher` to a `ServiceType` (the `matcher` element in the sample *type-provider* code above). If you attach an `AnnotationKeyMatcher` this way, matching annotations will be displayed as the representative annotation when the `ServiceType`'s `Span` or `SpanEvent` is displayed in the transaction call tree.
-
-
-
-### 2. ProfilerPlugin
-`ProfilerPlugin` modifies target library classes to collect trace data.
-
-`ProfilerPlugin` works in the following order:
-
-1. Pinpoint Agent is started when the JVM starts.
-2. Pinpoint Agent loads all plugins under the `plugin` directory.
-3. Pinpoint Agent invokes `ProfilerPlugin.setup(ProfilerPluginSetupContext)` for each loaded plugin.
-4. In the `setup` method, the plugin registers a `TransformerCallback` for each class that is going to be transformed.
-5. Target application starts.
-6. Every time a class is loaded, Pinpoint Agent looks for the `TransformerCallback` registered to the class.
-7. If a `TransformerCallback` is registered, the Agent invokes its `doInTransform` method.
-8. `TransformerCallback` modifies the target class' byte code. (e.g. add interceptors, add fields, etc.)
-9. The modified byte code is returned to the JVM, and the class is loaded with the returned byte code.
-10. Application continues running.
-11. When a modified method is invoked, the injected interceptor's `before` and `after` methods are invoked.
-12. The interceptor records the trace data.
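The before/after wrapping in steps 11–12 can be illustrated with a self-contained sketch. No Pinpoint APIs are used here: the interface below merely mimics the shape of an around-interceptor, and in a real plugin the agent injects the interceptor by rewriting the target method's bytecode rather than by explicit wrapping.

```java
import java.util.ArrayList;
import java.util.List;

public class InterceptorSketch {
    // Minimal stand-in for an around-interceptor contract.
    interface AroundInterceptor {
        void before(Object target);
        void after(Object target, Object result, Throwable throwable);
    }

    static final List<String> events = new ArrayList<>();

    static class RecordingInterceptor implements AroundInterceptor {
        public void before(Object target) { events.add("before"); }
        public void after(Object target, Object result, Throwable throwable) {
            events.add("after result=" + result);
        }
    }

    // After transformation, the target method behaves as if wrapped like this.
    static Object invokeTraced(AroundInterceptor interceptor, Object target) {
        interceptor.before(target);
        Object result = null;
        Throwable error = null;
        try {
            result = "query-result";          // stands in for the original method body
        } catch (Throwable t) {
            error = t;
        }
        interceptor.after(target, result, error);
        return result;
    }

    public static void main(String[] args) {
        invokeTraced(new RecordingInterceptor(), "dao");
        System.out.println(events);
    }
}
```

Note that `after` runs whether the original body returns normally or throws, which is why both the result and the throwable are passed to it.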
-
-The most important points to consider when writing a plugin are 1) figuring out which methods are interesting enough to warrant tracing, and 2) injecting interceptors to actually trace these methods.
-These interceptors are used to extract, store, and pass trace data around before they are sent off to the Collector. Interceptors may even cooperate with each other, sharing context between them. Plugins may also aid in tracing by adding getters or even custom fields to the target class so that the interceptors may access them during execution. [Pinpoint plugin sample](https://github.com/pinpoint-apm/pinpoint-plugin-sample) shows you how the `TransformerCallback` modifies classes and what the injected interceptors do to trace a method.
-
-We will now describe what interceptors must do to trace different kinds of methods.
-
-#### 2.1 Plain method
-*Plain method* refers to any method that is neither a top-level method of a node nor related to remote or asynchronous invocation. [Sample 2](https://github.com/pinpoint-apm/pinpoint-plugin-sample/tree/master/plugin/src/main/java/com/navercorp/pinpoint/plugin/sample/_02_Injecting_Custom_Interceptor) shows you how to trace these plain methods.
-
-#### 2.2 Top level method of a node
-A *top-level method of a node* is a method whose interceptor begins a new trace in a node. These methods are typically acceptors for RPCs, and the trace is recorded as a `Span` with a `ServiceType` categorized as a server.
-
-How the `Span` is recorded depends on whether the transaction has already begun at any previous node.
-
-##### 2.2.1 New transaction
-If the current node is the first one that is recording the transaction, you must issue a new transaction id and record it. `TraceContext.newTraceObject()` will handle this task automatically, so you will simply need to invoke it.
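For illustration, the transaction ids seen in the log samples earlier follow an agentId^startTimestamp^sequence shape (e.g. `agent^14252^17`). A hypothetical generator in that spirit — the real id is issued internally by `TraceContext.newTraceObject()`, and this sketch is not Pinpoint's implementation — could look like:

```java
import java.util.concurrent.atomic.AtomicLong;

public class TransactionIdSketch {
    private final String agentId;
    private final long agentStartTime;
    // Monotonically increasing sequence makes each id unique within this agent.
    private final AtomicLong sequence = new AtomicLong();

    public TransactionIdSketch(String agentId, long agentStartTime) {
        this.agentId = agentId;
        this.agentStartTime = agentStartTime;
    }

    // Issue a new transaction id: agentId^startTimestamp^sequence.
    public String next() {
        return agentId + "^" + agentStartTime + "^" + sequence.incrementAndGet();
    }

    public static void main(String[] args) {
        TransactionIdSketch ids = new TransactionIdSketch("agent", 14252L);
        System.out.println(ids.next()); // agent^14252^1
        System.out.println(ids.next()); // agent^14252^2
    }
}
```

Combining the agent id and its start time keeps ids globally unique across agents without any coordination between them.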
-
-##### 2.2.2 Continue Transaction
-If the request came from another node traced by a Pinpoint Agent, the transaction will already have a transaction id issued, and you will have to record the data below to the `Span`. (Most of these data are sent from the previous node, usually packed in the request message.)
-
-name | description
---- | ---
-transactionId | Transaction ID
-parentSpanId | Span ID of the previous node
-parentApplicationName | Application name of the previous node
-parentApplicationType | Application type of the previous node
-rpc | Procedure name (Optional)
-endPoint | Server(current node) address
-remoteAddr | Client address
-acceptorHost | Server address that the client used
-
-Pinpoint finds the caller-callee relation between nodes using *acceptorHost*. In most cases, *acceptorHost* is identical to *endPoint*. However, the address the client sent the request to may sometimes be different from the address at which the server received the request (e.g. behind a proxy). To handle such cases, you have to record the actual address the client used to send the request as *acceptorHost*. Normally, the client plugin will have added this address into the request message along with the transaction data.
-
-Moreover, you must also use the span id issued and sent by the previous node.
-
-Sometimes, the previous node marks the transaction as not to be traced. In this case, you must not trace the transaction.
-
-As you can see, the client plugin must be able to pass trace data to the server plugin, and how to do this is protocol dependent.
-
-You can find an example of a top-level method server interceptor [here](https://github.com/pinpoint-apm/pinpoint-plugin-sample/tree/master/plugin/src/main/java/com/navercorp/pinpoint/plugin/sample/_14_RPC_Server).
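As a sketch of such propagation, the snippet below packs the trace fields into a plain header map on the client side and reads them back on the server side. The header names and the map carrier are hypothetical illustrations, not Pinpoint's wire format; real plugins use protocol-specific carriers such as HTTP headers or message attachments.

```java
import java.util.HashMap;
import java.util.Map;

public class PropagationSketch {
    // Hypothetical header names for illustration only.
    static final String TX_ID = "X-Trace-TxId";
    static final String PARENT_SPAN = "X-Trace-ParentSpanId";
    static final String PARENT_APP = "X-Trace-ParentApp";

    // Client side: pack the current trace into the outgoing request.
    static Map<String, String> inject(String txId, long spanIdForNextNode, String appName) {
        Map<String, String> headers = new HashMap<>();
        headers.put(TX_ID, txId);
        headers.put(PARENT_SPAN, Long.toString(spanIdForNextNode));
        headers.put(PARENT_APP, appName);
        return headers;
    }

    // Server side: if an id is present, continue that transaction; otherwise
    // (null) the server should start a brand-new transaction instead.
    static String extractTxId(Map<String, String> headers) {
        return headers.get(TX_ID);
    }

    public static void main(String[] args) {
        Map<String, String> headers = inject("agent^14252^17", 1231L, "FrontEnd");
        System.out.println(extractTxId(headers)); // agent^14252^17
    }
}
```

The span id injected by the client is the one the server must use for its own `Span`, which is how the caller-callee link between the two nodes is established.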
-
-#### 2.3 Methods invoking a remote node
-
-An interceptor of a method that invokes a remote node has to record the following data:
-
-name | description
---- | ---
-endPoint | Target server address
-destinationId | Logical name of the target
-rpc | Invoked target procedure name (optional)
-nextSpanId | Span id that will be used by the next node's span (if the next node is traceable by Pinpoint)
-
-
-Whether or not the next node is traceable by Pinpoint affects how the interceptor is implemented. The term "traceable" here refers to possibility. For example, an HTTP client's next node is an HTTP server. Pinpoint does not trace all HTTP servers, but it is possible to trace them (and there already are HTTP server plugins). In this case, the HTTP client's next node is traceable. On the other hand, MySQL JDBC's next node, a MySQL database server, is not traceable.
-
-##### 2.3.1 If the next node is traceable
-If the next node is traceable, the interceptor must propagate the following data to the next node. How to pass them is protocol dependent, and in the worst case it may be impossible to pass them at all.
-
-name | description
---- | ---
-transactionId | Transaction ID
-parentApplicationName | Application name of the current node
-parentApplicationType | Application type of the current node
-parentSpanId | Span id of the trace at the current node
-nextSpanId | Span id that will be used by the next node's span (same value as nextSpanId in the table above)
-
-Pinpoint finds the caller-callee relation by matching the *destinationId* of the client trace with the *acceptorHost* of the server trace. Therefore the client plugin has to record *destinationId* and the server plugin has to record *acceptorHost* with the same value. If the server cannot acquire the value by itself, the client plugin has to pass it to the server.
-
-The interceptor's recorded `ServiceType` must be from the RPC client category. 
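The client-side counterpart can be sketched the same way: record where the call is going, pick the next node's span id, and pack everything into the outgoing request. Again, the header names echo Pinpoint's `Header` class but are illustrative here, and `ThreadLocalRandom` merely stands in for however span ids are actually issued.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ThreadLocalRandom;

// Conceptual sketch of what a remote-invocation (client) interceptor
// propagates when the next node is traceable. Header names are illustrative.
public class ClientCallSketch {

    static Map<String, String> propagate(String txId, long currentSpanId,
                                         String appName, String destinationId) {
        long nextSpanId = ThreadLocalRandom.current().nextLong(); // next node's span id
        Map<String, String> headers = new HashMap<>();
        headers.put("Pinpoint-TraceID", txId);
        headers.put("Pinpoint-SpanID", Long.toString(nextSpanId));
        headers.put("Pinpoint-pSpanID", Long.toString(currentSpanId));
        headers.put("Pinpoint-pAppName", appName);
        // The destinationId recorded by the client must match the acceptorHost
        // recorded by the server, so pass it along explicitly.
        headers.put("Pinpoint-Host", destinationId);
        return headers;
    }
}
```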
-
-You can find an example for these interceptors [here](https://github.com/pinpoint-apm/pinpoint-plugin-sample/tree/master/plugin/src/main/java/com/navercorp/pinpoint/plugin/sample/_13_RPC_Client).
-
-##### 2.3.2 If the next node is not traceable
-If the next node is not traceable, your `ServiceType` must have the `TERMINAL` property.
-
-If you want to record the *destinationId*, it must also have the `INCLUDE_DESTINATION_ID` property. If you record *destinationId*, the server map will show one node per destinationId even if they share the same *endPoint*.
-
-Also, the `ServiceType` must be from the DB client or Cache client category. Note that you do not need to concern yourself with the terms "DB" or "Cache", as any plugin tracing a client library with a non-traceable target server may use them. The only difference between "DB" and "Cache" is the time range of the response time histogram ("Cache" having smaller intervals for the histogram).
-
-
-#### 2.4 Asynchronous task
-
-Trace objects are bound to the thread that first created them via **ThreadLocal**, and whenever the execution crosses a thread boundary, trace objects are *lost* to the new thread. Therefore, in order to trace tasks across thread boundaries, you must take care of passing the current trace context over to the new thread. This is done by injecting an **AsyncContext** into an object shared by both the invocation thread and the execution thread.
-The invocation thread creates an **AsyncContext** from the current trace, and injects it into an object that will be passed over to the execution thread. The execution thread then retrieves the **AsyncContext** from the object, creates a new trace out of it, and binds it to its own **ThreadLocal**.
-You must therefore create interceptors for two methods: i) one that initiates the task (invocation thread), and ii) one that actually handles the task (execution thread).
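The hand-off described above can be modeled without any Pinpoint classes at all: in the sketch below, a plain `ThreadLocal` plays the role of the trace binding, and a wrapped `Runnable` plays the role of the shared object carrying the **AsyncContext** snapshot. All names here are illustrative, not the real API.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Self-contained model of the async hand-off: the invocation thread snapshots
// its trace, attaches it to the task, and the execution thread rebinds it to
// its own ThreadLocal before running.
public class AsyncHandOffSketch {

    static final ThreadLocal<String> CURRENT_TRACE = new ThreadLocal<>();

    static Runnable wrap(Runnable task) {
        final String snapshot = CURRENT_TRACE.get(); // captured on the invocation thread
        return () -> {
            CURRENT_TRACE.set(snapshot);             // rebound on the execution thread
            try {
                task.run();
            } finally {
                CURRENT_TRACE.remove();
            }
        };
    }

    // Runs an empty task on a worker thread and reports which trace it saw.
    static String runOnWorker(String traceId, boolean wrapped) throws Exception {
        CURRENT_TRACE.set(traceId);
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            final String[] seen = new String[1];
            Runnable task = () -> seen[0] = CURRENT_TRACE.get();
            pool.submit(wrapped ? wrap(task) : task).get();
            return seen[0];
        } finally {
            pool.shutdown();
            CURRENT_TRACE.remove();
        }
    }
}
```

Without the wrapper, the worker thread sees no trace at all — which is exactly why the real plugin must inject the context into an object both threads share.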
-
-The initiating method's interceptor has to issue an **AsyncContext** and pass it to the handling method. How to pass this value depends on the target library, and in the worst case you may not be able to pass it at all.
-
-The handling method's interceptor must then continue the trace using the propagated **AsyncContext** and bind it to its own thread. However, it is very strongly recommended that you simply extend the **AsyncContextSpanEventSimpleAroundInterceptor** so that you do not have to handle this manually.
-
-Keep in mind that since the shared object must be able to have an **AsyncContext** injected into it, you have to add a field using `AsyncContextAccessor` during its class transformation.
-You can find an example for tracing asynchronous tasks [here](https://github.com/pinpoint-apm/pinpoint-plugin-sample/tree/master/plugin/src/main/java/com/navercorp/pinpoint/plugin/sample/_12_Asynchronous_Trace).
-
-#### 2.5 Case Study: HTTP
-An HTTP client is an example of _a method invoking a remote node_ (client), and an HTTP server is an example of a _top level method of a node_ (server). As mentioned before, client plugins must have a way to pass transaction data to server plugins to continue the trace. The implementation is protocol dependent; [HttpMethodBaseExecuteMethodInterceptor](https://github.com/pinpoint-apm/pinpoint/blob/master/plugins/httpclient3/src/main/java/com/navercorp/pinpoint/plugin/httpclient3/interceptor/HttpMethodBaseExecuteMethodInterceptor.java) of the [HttpClient3 plugin](https://github.com/pinpoint-apm/pinpoint/tree/master/plugins/httpclient3) and [StandardHostValveInvokeInterceptor](https://github.com/pinpoint-apm/pinpoint/blob/master/plugins/tomcat/src/main/java/com/navercorp/pinpoint/plugin/tomcat/interceptor/StandardHostValveInvokeInterceptor.java) of the [Tomcat plugin](https://github.com/pinpoint-apm/pinpoint/tree/master/plugins/tomcat) show a working example of this for HTTP:
-
-1. Pass transaction data as HTTP headers. 
You can find the header names [here](https://github.com/pinpoint-apm/pinpoint/blob/master/bootstrap-core/src/main/java/com/navercorp/pinpoint/bootstrap/context/Header.java).
-2. The client plugin records the `IP:PORT` of the server as `destinationId`.
-3. The client plugin passes the `destinationId` value to the server as the `Header.HTTP_HOST` header.
-4. The server plugin records the `Header.HTTP_HOST` header value as `acceptorHost`.
-
-One more thing to remember is that all clients and servers using the same protocol must pass the transaction data in the same way to ensure compatibility. So if you are writing a plugin for some other HTTP client or server, your plugin has to record and pass transaction data as described above.
-
-### 3. Plugin Integration Test
-You can run plugin integration tests (`mvn integration-test`) with [PinpointPluginTestSuite](https://github.com/pinpoint-apm/pinpoint/blob/master/test/src/main/java/com/navercorp/pinpoint/test/plugin/PinpointPluginTestSuite.java), which is a *JUnit Runner*. It downloads all the required dependencies from Maven repositories and launches a new JVM with the Pinpoint Agent and the aforementioned dependencies; the JUnit tests are executed in this JVM.
-
-Running the plugin integration tests requires a complete agent distribution, which is why the integration tests are in the *plugin-sample-agent* module and why they are run in the **integration-test** phase.
-
-For the actual integration test, you will want to first invoke the method you are tracing, and then use [PluginTestVerifier](https://github.com/pinpoint-apm/pinpoint/blob/master/bootstrap-core/src/main/java/com/navercorp/pinpoint/bootstrap/plugin/test/PluginTestVerifier.java) to check that the trace data is correctly recorded.
-
-
-#### 3.1 Test Dependency
-`PinpointPluginTestSuite` doesn't use the project's dependencies (configured in pom.xml). It uses the dependencies listed in the `@Dependency` annotation. 
This way, you may test multiple versions of the target library using the same test class.
-
-Dependencies are declared as follows. You may specify versions or version ranges for a dependency library.
-```
-@Dependency({"some.group:some-artifact:1.0", "another.group:another-artifact:2.1-RELEASE"})
-@Dependency({"some.group:some-artifact:[1.0,)"})
-@Dependency({"some.group:some-artifact:[1.0,1.9]"})
-@Dependency({"some.group:some-artifact:[1.0],[2.1],[3.2]"})
-```
-By default, `PinpointPluginTestSuite` searches the local repository and the Maven central repository. You may also add your own repositories using the `@Repository` annotation.
-
-#### 3.2 JVM Version
-You can specify the JVM version for a test using `@JvmVersion`. If `@JvmVersion` is not present, the JVM at the `java.home` property will be used.
-
-#### 3.3 Application Test
-`PinpointPluginTestSuite` is not for applications that have to be launched by their own main class. You can extend [AbstractPinpointPluginTestSuite](https://github.com/pinpoint-apm/pinpoint/blob/master/test/src/main/java/com/navercorp/pinpoint/test/plugin/AbstractPinpointPluginTestSuite.java) and related types to test such applications.
-
-
-### 4. Adding Images
-
-If you're developing a plugin for an application, you need to add images so that the server map can render the corresponding node. The plugin jar itself cannot provide these image files, so for now you will have to add the image files to the web module manually.
-
-First, put the PNG files in the following directories:
-
-* web/src/main/webapp/images/icons (25x25)
-* web/src/main/webapp/images/servermap (80x40)
-
-Then, add the `ServiceType` name and the image file name to `htIcons` in *web/src/main/webapp/components/server-map2/jquery.ServerMap2.js*. 
diff --git a/doc/powered-by.md b/doc/powered-by.md
deleted file mode 100755
index 22138c85c5731..0000000000000
--- a/doc/powered-by.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: Powered by Pinpoint
-keywords: pinpoint, used, working, use, inuse, poweredby
-last_updated: Feb 25, 2020
-sidebar: mydoc_sidebar
-permalink: poweredby.html
-disqus: false
----
-
-# Powered by Pinpoint
-This page documents an **alphabetical list** of organizations using Pinpoint.
-
-## Sites using Pinpoint
-
-1. Coupang (www.coupang.com)
-1. Echemi (https://www.echemi.com)
-1. NAVER (www.naver.com)
-1. NHN Entertainment
-1. Pikicast (www.pikicast.com)
-1. SKPlanet (www.skplanet.com)
-1. XLGAMES (http://www.xlgames.com)
-1. Toss (https://toss.im/)
-
-## Naver
-Naver Co., Ltd. uses Pinpoint as its primary APM, monitoring 2k+ applications with 10k+ instances. It supports 870k+ TPS with only 17 Pinpoint Collectors, collecting around 70 billion span chunks per day, which is equivalent to 10 billion transactions.
-
-
-
-
-
diff --git a/doc/proxy-http-header.md b/doc/proxy-http-header.md
deleted file mode 100644
index 1b82ff36c84a6..0000000000000
--- a/doc/proxy-http-header.md
+++ /dev/null
@@ -1,88 +0,0 @@
----
-title: Monitoring Proxy Server
-keywords: proxy, http, header
-last_updated: Feb 1, 2018
-sidebar: mydoc_sidebar
-permalink: proxyhttpheader.html
-disqus: false
---- 
-
-# Proxy monitoring using HTTP headers
-This feature is used to measure the elapsed time between a proxy server and the WAS.
-
-![overview](images/proxy-http-header-overview.png)
-
-## Pinpoint Configuration
-
-pinpoint.config
-~~~
-profiler.proxy.http.header.enable=true
-~~~
-
-## Proxy Configuration
-### Apache HTTP Server
-* http://httpd.apache.org/docs/2.4/en/mod/mod_headers.html
-
-Add the HTTP header.
-~~~
-Pinpoint-ProxyApache: t=991424704447256 D=3775428 i=51 b=49
-~~~
-
-e.g.
-
-httpd.conf
-~~~
-
-...
-RequestHeader set Pinpoint-ProxyApache "%t %D %i %b"
-... 
-
-~~~
-**%t is a required value.**
-
-### Nginx
-* http://nginx.org/en/docs/http/ngx_http_core_module.html
-* http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_set_header
-
-Add the HTTP header.
-~~~
-Pinpoint-ProxyNginx: t=1504248328.423 D=0.123
-~~~
-
-e.g.
-
-nginx.conf
-~~~
-...
-    server {
-        listen 9080;
-        server_name localhost;
-
-        location / {
-            ...
-            set $pinpoint_proxy_header "t=$msec D=$request_time";
-            proxy_set_header Pinpoint-ProxyNginx $pinpoint_proxy_header;
-        }
-    }
-...
-~~~
-or
-~~~
-http {
-...
-
-    proxy_set_header Pinpoint-ProxyNginx t=$msec;
-
-...
-}
-~~~
-**t=$msec is a required value.**
-
-### App
-Provide milliseconds since the epoch (13 digits) and app information.
-
-Add the HTTP header.
-~~~
-Pinpoint-ProxyApp: t=1594316309407 app=foo-bar
-~~~
-**t=epoch is a required value.**
diff --git a/doc/quickstart.Win.en.md b/doc/quickstart.Win.en.md
deleted file mode 100644
index 62a1525d2864f..0000000000000
--- a/doc/quickstart.Win.en.md
+++ /dev/null
@@ -1,46 +0,0 @@
-# Running QuickStart on Windows
-
-## Starting
-Download Pinpoint with `git clone https://github.com/pinpoint-apm/pinpoint.git` or [download](https://github.com/pinpoint-apm/pinpoint/archive/master.zip) the project as a zip file and unzip.
-
-Install Pinpoint by running `mvnw.cmd install -DskipTests=true`.
-
-### Notice
-If you run QuickStart's cmd files on Windows, you must run them from the `quickstart\bin` directory.
-
-If you want to run them from a different directory, you need to set the absolute path of the `quickstart\bin` directory in the `QUICKSTART_BIN_PATH` environment variable.
-
-### Install & Start HBase
-Download `HBase-1.0.x-bin.tar.gz` from the [Apache download site](http://apache.mirror.cdnetworks.com/hbase/) and unzip it to the `quickstart\hbase` directory.
-
-Rename the unzipped directory to `hbase` so that the final HBase directory looks like `quickstart\hbase\hbase`. 
-
-**Start HBase** - Run `start-hbase.cmd`
-
-**Initialize Tables** - Run `init-hbase.cmd`
-
-### Start Pinpoint Daemons
-
-**Collector** - Run `start-collector.cmd`
-
-**TestApp** - Run `start-testapp.cmd`
-
-**Web UI** - Run `start-web.cmd`
-
-### Check Status
-Once HBase and the 3 daemons are running, you may visit the following addresses to test out your very own Pinpoint instance.
-
-* Web UI - http://localhost:28080
-* TestApp - http://localhost:28081
-
-You can feed trace data to Pinpoint using the TestApp UI, and check it using the Pinpoint Web UI. TestApp registers itself as *test-agent* under *TESTAPP*.
-
-## Stopping
-
-**Web UI** - Run `stop-web.cmd`
-
-**TestApp** - Run `stop-testapp.cmd`
-
-**Collector** - Run `stop-collector.cmd`
-
-**HBase** - Run `stop-hbase.cmd`
diff --git a/doc/quickstart.Win.ko.md b/doc/quickstart.Win.ko.md
deleted file mode 100644
index f1df5169a0f88..0000000000000
--- a/doc/quickstart.Win.ko.md
+++ /dev/null
@@ -1,48 +0,0 @@
-# Running QuickStart on Windows
-Pinpoint officially supports Linux and OS X. However, because both Pinpoint and HBase are built on Java, QuickStart can also run on Windows. This document describes how to run HBase and Pinpoint on Windows without using Cygwin.
-
-## Starting
-
-Download Pinpoint with `git clone https://github.com/pinpoint-apm/pinpoint.git` or [download](https://github.com/pinpoint-apm/pinpoint/archive/master.zip) the project as a zip file and unzip it.
-
-Install Pinpoint by running `mvnw.cmd install -DskipTests=true`.
-
-### Notice
-If you run QuickStart's cmd files on Windows, you must run them from the `quickstart\bin` directory.
-
-If you want to run them from a different directory, set the absolute path of the `quickstart\bin` directory in the `QUICKSTART_BIN_PATH` environment variable.
-
-### Install & Start HBase
-Download HBase 1.0.x from the [Apache download site](http://apache.mirror.cdnetworks.com/hbase/).
-
-Unzip the downloaded file into the `quickstart\hbase` directory and rename the unzipped directory to `hbase`, making the final path `quickstart\hbase\hbase`. 
-
-**Start HBase** - Run `start-hbase.cmd`
-
-**Initialize Tables** - Run `init-hbase.cmd`
-
-### Start Pinpoint Daemons
-
-**Collector** - Run `start-collector.cmd`
-
-**TestApp** - Run `start-testapp.cmd`
-
-**Web UI** - Run `start-web.cmd`
-
-### Check Status
-After HBase and the 3 daemons are running, visit the following addresses to test your Pinpoint instance.
-
-* Web UI - http://localhost:28080
-* TestApp - http://localhost:28081
-
-You can feed trace data to Pinpoint using the TestApp UI, and check it using the Pinpoint Web UI. TestApp registers itself as *test-agent* under *TESTAPP*.
-
-## Stopping
-
-**Web UI** - Run `stop-web.cmd`
-
-**TestApp** - Run `stop-testapp.cmd`
-
-**Collector** - Run `stop-collector.cmd`
-
-**HBase** - Run `stop-hbase.cmd`
diff --git a/doc/quickstart.md b/doc/quickstart.md
deleted file mode 100755
index 4804c42303630..0000000000000
--- a/doc/quickstart.md
+++ /dev/null
@@ -1,124 +0,0 @@
----
-title: Quick Start Guide
-keywords: start, begin, quickstart, quick
-last_updated: Feb 1, 2018
-sidebar: mydoc_sidebar
-permalink: quickstart.html
-disqus: false
----
-
-# QuickStart
-Pinpoint QuickStart provides a sample TestApp for the Agent.
-
-## Docker
-Installing Pinpoint with these Docker files will take approximately 10 minutes.
-
-Visit the [Official Pinpoint-Docker repository](https://github.com/pinpoint-apm/pinpoint-docker) for more information.
-
-## Installation
-To set up your very own Pinpoint instance, you can either **download the build results** from our [**latest release**](https://github.com/pinpoint-apm/pinpoint/releases/latest) or build the project manually as described below.
-
-### HBase
-Download, configure, and start HBase - [1. HBase](https://pinpoint-apm.github.io/pinpoint/installation.html#1-hbase).
-
-~~~
-$ tar xzvf hbase-x.x.x-bin.tar.gz
-$ cd hbase-x.x.x/
-$ ./bin/start-hbase.sh
-~~~
-
-See the [scripts](https://github.com/pinpoint-apm/pinpoint/tree/master/hbase/scripts) and run:
-
-~~~
-$ ./bin/hbase shell hbase-create.hbase
-~~~
-
-### Pinpoint Collector
-Download and start the Collector - [3. 
Pinpoint Collector](https://pinpoint-apm.github.io/pinpoint/installation.html#3-pinpoint-collector)
-
-~~~
-$ java -jar -Dpinpoint.zookeeper.address=localhost pinpoint-collector-boot-2.2.1.jar
-~~~
-
-### Pinpoint Web
-Download and start the Web - [4. Pinpoint Web](https://pinpoint-apm.github.io/pinpoint/installation.html#4-pinpoint-web)
-
-~~~
-$ java -jar -Dpinpoint.zookeeper.address=localhost pinpoint-web-boot-2.2.1.jar
-~~~
-
-## Java Agent
-
-### Requirements
-In order to build Pinpoint, the following requirements must be met:
-
-* JDK 8 installed
-
-### When Using the Released Binary (Recommended)
-Download Pinpoint from the [latest release](https://github.com/pinpoint-apm/pinpoint/releases/latest).
-
-Extract the downloaded file.
-~~~
-$ tar xvzf pinpoint-agent-2.2.1.tar.gz
-~~~
-
-Run the JAR file as follows:
-~~~
-$ java -jar -javaagent:pinpoint-agent-2.2.1/pinpoint-bootstrap.jar -Dpinpoint.agentId=test-agent -Dpinpoint.applicationName=TESTAPP pinpoint-quickstart-testapp-2.2.1.jar
-~~~
-
-### When Building Manually
-Download Pinpoint with `git clone https://github.com/pinpoint-apm/pinpoint.git` or [download](https://github.com/pinpoint-apm/pinpoint/archive/master.zip) the project as a zip file and unzip.
-
-Change to the pinpoint directory, and build.
-~~~
-$ cd pinpoint
-$ ./mvnw install -DskipTests=true
-~~~
-
-Change to the quickstart testapp directory, and build.
-~~~
-$ cd quickstart/testapp
-$ ./mvnw clean package
-~~~
-
-Change to the pinpoint directory, and run.
-~~~
-$ cd ../../
-$ java -jar -javaagent:agent/target/pinpoint-agent-2.2.1/pinpoint-bootstrap.jar -Dpinpoint.agentId=test-agent -Dpinpoint.applicationName=TESTAPP quickstart/testapp/target/pinpoint-quickstart-testapp-2.2.1.jar
-~~~
-
-### Get Started
-You should see some output that looks very similar to this:
-~~~
-
- . 
____ _ __ _ _ - /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ -( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ - \\/ ___)| |_)| | | | | || (_| | ) ) ) ) - ' |____| .__|_| |_|_| |_\__, | / / / / - =========|_|==============|___/=/_/_/_/ - :: Spring Boot :: (v2.3.2.RELEASE) - -2020-08-06 17:24:59.519 INFO 19236 --- [ main] com.navercorp.pinpoint.testapp.TestApp : Starting TestApp on AD01160256 with PID 19236 (C:\repository\github\pinpoint\quickstart\testapp\target\classes started by Naver in C:\repository\github\pinpoint) -2020-08-06 17:24:59.520 INFO 19236 --- [ main] com.navercorp.pinpoint.testapp.TestApp : No active profile set, falling back to default profiles: default -2020-08-06 17:24:59.520 DEBUG 19236 --- [ main] o.s.boot.SpringApplication : Loading source class com.navercorp.pinpoint.testapp.TestApp -2020-08-06 17:24:59.558 DEBUG 19236 --- [ main] o.s.b.c.c.ConfigFileApplicationListener : Loaded config file 'file:/C:/repository/github/pinpoint/quickstart/testapp/target/classes/application.yml' (classpath:/application.yml) -2020-08-06 17:24:59.558 DEBUG 19236 --- [ main] ConfigServletWebServerApplicationContext : Refreshing org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@46185a1b -08-06 17:24:59.059 [ main] INFO .n.p.p.DefaultDynamicTransformerRegistry:67 -- added dynamic transformer classLoader: sun.misc.Launcher$AppClassLoader@18b4aac2, className: com.navercorp.pinpoint.testapp.controller.ApisController, registry size: 1 -08-06 17:24:59.059 [ main] INFO .n.p.p.DefaultDynamicTransformerRegistry:67 -- added dynamic transformer classLoader: sun.misc.Launcher$AppClassLoader@18b4aac2, className: com.navercorp.pinpoint.testapp.controller.CallSelfController, registry size: 2 -08-06 17:24:59.059 [ main] INFO .n.p.p.DefaultDynamicTransformerRegistry:67 -- added dynamic transformer classLoader: sun.misc.Launcher$AppClassLoader@18b4aac2, className: com.navercorp.pinpoint.testapp.controller.HttpClient4Controller, registry size: 3 
-08-06 17:24:59.059 [ main] INFO .n.p.p.DefaultDynamicTransformerRegistry:67 -- added dynamic transformer classLoader: sun.misc.Launcher$AppClassLoader@18b4aac2, className: com.navercorp.pinpoint.testapp.controller.SimpleController, registry size: 4 -08-06 17:24:59.059 [ main] INFO .n.p.p.DefaultDynamicTransformerRegistry:67 -- added dynamic transformer classLoader: sun.misc.Launcher$AppClassLoader@18b4aac2, className: com.navercorp.pinpoint.testapp.controller.StressController, registry size: 5 -2020-08-06 17:25:00.313 DEBUG 19236 --- [ main] .s.b.w.e.t.TomcatServletWebServerFactory : Code archive: C:\Users\Naver\.m2\repository\org\springframework\boot\spring-boot\2.3.2.RELEASE\spring-boot-2.3.2.RELEASE.jar -2020-08-06 17:25:00.313 DEBUG 19236 --- [ main] .s.b.w.e.t.TomcatServletWebServerFactory : Code archive: C:\Users\Naver\.m2\repository\org\springframework\boot\spring-boot\2.3.2.RELEASE\spring-boot-2.3.2.RELEASE.jar -2020-08-06 17:25:00.314 DEBUG 19236 --- [ main] .s.b.w.e.t.TomcatServletWebServerFactory : None of the document roots [src/main/webapp, public, static] point to a directory and will be ignored. -2020-08-06 17:25:00.347 INFO 19236 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8082 (http) -2020-08-06 17:25:00.355 INFO 19236 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat] -2020-08-06 17:25:00.356 INFO 19236 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.37] -~~~ - -The last couple of lines here tell us that Spring has started. Spring Boot’s embedded Apache Tomcat server is acting as a webserver and is listening for requests on localhost port 8082. 
Open your browser and enter http://localhost:8082 in the address bar at the top.
-
-
diff --git a/doc/resources.md b/doc/resources.md
deleted file mode 100755
index 2ae0353c2622c..0000000000000
--- a/doc/resources.md
+++ /dev/null
@@ -1,28 +0,0 @@
----
-title: Resources
-keywords: resources, other sites
-last_updated: Feb 1, 2018
-sidebar: mydoc_sidebar
-permalink: resources.html
-disqus: false
----
-
-If you have created informative posts on Pinpoint and want a link added here, feel free to contact us anytime. We are glad to add more links.
-
-## Resources (KOREAN)
-* If you have written useful material, please share it with us!
-* [Technical article on Pinpoint, written by the Pinpoint developers (helloworld.naver.com)](http://helloworld.naver.com/helloworld/1194202)
-* [Installation guide video tutorial 1 (by Heo Kwang-nam of okjsp)](https://www.youtube.com/watch?v=hrvKaEaDEGs)
-* [Installation guide video tutorial 2 (by Heo Kwang-nam of okjsp)](https://www.youtube.com/watch?v=fliKPGHGXK4)
-
-## Resources (ENGLISH)
-* Anyone who would like to share any document is always welcome
-* [Technical Overview of Pinpoint](https://github.com/pinpoint-apm/pinpoint/wiki/Technical-Overview-Of-Pinpoint)
-* [Official Docker Repository](https://github.com/pinpoint-apm/pinpoint-docker)
-* [Notes on Jetty Plugin for Pinpoint](https://github.com/cijung/Docs/blob/master/JettyPluginNotes.md) ([@cijung](https://github.com/cijung))
-
-## Resources (中文)
-* [Pinpoint学习笔记](http://skyao.gitbooks.io/leaning-pinpoint/): a collection of Chinese study materials and, more importantly, Chinese translations! 
-* [Pinpoint - 应用性能管理(APM)平台实践之部署篇](https://sconts.com/11) -* [开源APM工具Pinpoint线上部署](https://www.iqarr.com/2018/02/04/java/pinpoint/pinpoint-deploy/) \ No newline at end of file diff --git a/doc/roadmap.md b/doc/roadmap.md deleted file mode 100755 index 2b86d705e716b..0000000000000 --- a/doc/roadmap.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -title: Roadmap -keywords: roadmap, future -last_updated: Feb 1, 2018 -sidebar: mydoc_sidebar -permalink: roadmap.html -disqus: false ---- - -## 2017 Roadmap -* Server Map Enhancement - * Performance - * Improve query speed through parallelism via asynchronous I/O operation, and code optimization - * Change/introduce a new data structure more suitable for dealing with a large number of agents - * Realtime - * Improve realtime update/rendering - * Support grouping of multiple applications -* Scatter Chart Enhancement - * Introduce grouping by type of errors (db access fail, rpc fail, cache access fail, etc) -* Statistics/Aggregation - * Introduce realtime data pipeline (Apache Flink) for statistics and data aggregation - * Application-level min/max statistics and response time histograms - * Statistics by request URLs -* Agent - * Active thread dump - * Collect DataSource information - * Improve asynchronous trace support - * Add vert.x support - * Introduce agent trace data format v2 - * Type optimization & compressed format - * Protocol buffer 3 - * Improve interceptor and thread local lookup performance - * Introduce adapters for different interceptor types/patterns - * Adaptive sampling - * Adaptive callstack tracing - * Discard relatively insignificant method invocations - * Combine multiple highly similar/identical callstacks when sending them over the wire - * Ability to store stack traces - * Introduce log-level histograms -* UI/Usability - * Improve performance - * Migrate to AngularJS 2 - * Improve personalized configuration for users -* HBase - * Data store optimization - Reduce rowkey sizes \ No newline at end of file 
diff --git a/doc/techdetail.md b/doc/techdetail.md
deleted file mode 100755
index 8724f9c656a85..0000000000000
--- a/doc/techdetail.md
+++ /dev/null
@@ -1,259 +0,0 @@
----
-title: Technical Details
-keywords: tech, technology
-last_updated: Feb 1, 2018
-sidebar: mydoc_sidebar
-permalink: techdetail.html
-disqus: false
----
-
-
-In this article, we describe Pinpoint's techniques, such as transaction tracing and bytecode instrumentation, and explain the optimizations applied to the Pinpoint Agent, which modifies bytecode and records performance data.
-
-## Distributed Transaction Tracing, Modeled after Google's Dapper
-
-Pinpoint traces distributed requests in a single transaction, modeled after Google's Dapper.
-
-### How Distributed Transaction Tracing Works in Google's Dapper
-
-The purpose of a distributed tracing system is to identify the relationships between Node 1 and Node 2 in a distributed system when a message is sent from Node 1 to Node 2 (Figure 1).
-
-![Figure 1. Message relationship in a distributed system](images/td_figure1.png)
-
-Figure 1. Message relationship in a distributed system
-
-The problem is that there is no way to identify relationships between messages. For example, we cannot recognize relationships between the N messages sent from Node 1 and the N' messages received in Node 2. In other words, when the X-th message is sent from Node 1, it cannot be identified among the N' messages received in Node 2. Attempts were made to trace messages at the TCP or operating-system level, but implementation complexity was high and performance was low because tracing had to be implemented separately for each protocol. In addition, it was difficult to trace messages accurately.
-
-However, a simple solution to these issues has been implemented in Google's Dapper. The solution is to add application-level tags that can act as a link between messages when sending a message. 
For example, for an HTTP request, tag information is included in the HTTP header, and the message is traced using this tag.
-
-> Google's Dapper
-
-> For more information on Google's Dapper, see "[Dapper, a Large-Scale Distributed Systems Tracing Infrastructure](http://research.google.com/pubs/pub36356.html)."
-
-Pinpoint is modeled on the tracing technique of Google's Dapper, but has been modified to add application-level tag data to the call header in order to trace distributed transactions at a remote call. The tag data consists of a collection of keys, which is defined as a TraceId.
-
-### Data Structure in Pinpoint
-
-In Pinpoint, the core data structure consists of Spans, Traces, and TraceIds.
-* Span: The basic unit of RPC (remote procedure call) tracing; it indicates work processed when an RPC arrives and contains trace data. To ensure code-level visibility, a Span has children labeled SpanEvent as a data structure. Each Span contains a TraceId.
-* Trace: A collection of Spans; it consists of associated RPCs (Spans). Spans in the same Trace share the same TransactionId. A Trace is sorted as a hierarchical tree structure through SpanIds and ParentSpanIds.
-* TraceId: A collection of keys consisting of a TransactionId, SpanId, and ParentSpanId. The TransactionId indicates the message ID, and both the SpanId and the ParentSpanId represent the parent-child relationship of RPCs.
-  - TransactionId (TxId): The ID of the message sent/received across distributed systems in a single transaction; it must be globally unique across the entire group of servers.
-  - SpanId: The ID of a job processed when receiving an RPC message; it is generated when an RPC arrives at a node.
-  - ParentSpanId (pSpanId): The SpanId of the parent span which generated the RPC. If a node is the starting point of a transaction, there will not be a parent span - for these cases, we use a value of -1 to denote that the span is the root span of a transaction. 
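As a small illustration of how Spans sharing a TransactionId can be sorted into a call tree via SpanIds and ParentSpanIds — the `Span` class below is a minimal stand-in for illustration, not Pinpoint's actual type:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal model: spans of one transaction are linked into a hierarchy by
// spanId/parentSpanId, with -1 marking the root span (as described above).
public class SpanTreeSketch {

    static final class Span {
        final long spanId, parentSpanId;
        final String name;
        final List<Span> children = new ArrayList<>();
        Span(long spanId, long parentSpanId, String name) {
            this.spanId = spanId;
            this.parentSpanId = parentSpanId;
            this.name = name;
        }
    }

    static Span buildTree(List<Span> spans) {
        Map<Long, Span> bySpanId = new HashMap<>();
        for (Span s : spans) {
            bySpanId.put(s.spanId, s);
        }
        Span root = null;
        for (Span s : spans) {
            if (s.parentSpanId == -1) {
                root = s;                                // root span of the transaction
            } else {
                bySpanId.get(s.parentSpanId).children.add(s); // attach to parent
            }
        }
        return root;
    }
}
```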
-
-> Differences in terms between Google's Dapper and NAVER's Pinpoint
-
-> The term "TransactionId" in Pinpoint has the same meaning as the term "TraceId" in Google's Dapper, while the term "TraceId" in Pinpoint refers to a collection of keys.
-
-### How TraceId Works
-The figure below illustrates the behavior of a TraceId in which RPCs were made 3 times across 4 nodes.
-
-![Figure 2. Example of a TraceId behavior](images/td_figure2.png)
-
-Figure 2. Example of a TraceId behavior
-
-In Figure 2, the TransactionId (TxId) represents that the three different RPCs are associated with each other as a single transaction. However, a TransactionId itself can't explicitly describe the relationship between RPCs. To identify the relationships between RPCs, a SpanId and a ParentSpanId (pSpanId) are required. Suppose that a node is Tomcat; you can think of a SpanId as a thread which handles HTTP requests. A ParentSpanId indicates the SpanId of the parent that makes the RPC calls.
-
-Pinpoint can find the n associated Spans using a TransactionId and can sort them into a hierarchical tree structure using the SpanIds and ParentSpanIds.
-
-A SpanId and a ParentSpanId are 64-bit long integers. A conflict might arise because these numbers are generated arbitrarily, but considering the range of values from -9223372036854775808 to 9223372036854775807, this is unlikely to happen. If there is a conflict between keys, Pinpoint, like Google's Dapper, lets developers know what happened instead of resolving the conflict.
-
-A TransactionId consists of an AgentId, the JVM (Java virtual machine) startup time, and a SequenceNumber.
-
-* AgentId: A user-created ID assigned when the JVM starts; it must be globally unique across the entire group of servers where Pinpoint has been installed. The easiest way to make it unique is to use the hostname ($HOSTNAME), because hostnames are generally not duplicated. If you need to run multiple JVMs within the server group, add a postfix to the hostname to avoid duplicates. 
-* JVM startup time: Required to guarantee a unique SequenceNumber which starts with zero. This value is used to prevent ID conflicts when a user creates duplicate AgentIds by mistake.
-* SequenceNumber: An ID issued by the Pinpoint Agent, with sequentially increasing numbers that start with zero; it is issued per message.
-
-Dapper and [Zipkin](https://github.com/twitter/zipkin), a distributed systems tracing platform at Twitter, generate random TraceIds (TransactionIds in Pinpoint) and treat conflict situations as a normal case. However, we wanted to avoid such conflicts as much as possible in Pinpoint. We had two options: a method in which the amount of data is small but the probability of conflict is high, and a method in which the amount of data is large but the probability of conflict is low. We chose the second option.
-
-There may be better ways to handle transaction ids. We came up with several ideas, such as issuing keys from a central key server, but did not implement them due to performance issues and potential network errors. We are still considering issuing keys in bulk as an alternative solution, so such methods may be developed later. For now, a simple method is adopted, and in Pinpoint a TransactionId is regarded as changeable data.
-
-## Bytecode Instrumentation, Not Requiring Code Modifications
-
-Earlier, we explained distributed transaction tracing. One way to implement it is to have developers modify their own code, adding tag information whenever an RPC is made. However, modifying code can be a burden even when such functionality is useful to developers.
-
-Twitter's Zipkin provides the functionality of distributed transaction tracing using modified libraries and its container (Finagle), but it still requires the code to be modified where needed. We wanted the functionality to work without code modifications and desired to ensure code-level visibility. 
To solve this problem, Pinpoint adopted the bytecode instrumentation technique. The Pinpoint Agent intervenes in the code that makes RPCs so that tag information is handled automatically.

### Overcoming Disadvantages of Bytecode Instrumentation

There are two approaches to distributed transaction tracing, described below; bytecode instrumentation is an automatic method.
* Manual method: Developers write code that records data at important points, using APIs provided by Pinpoint.
* Automatic method: Developers do not modify any code; Pinpoint decides which APIs are to be intervened and develops the instrumentation.

The advantages and disadvantages of each method are as follows:

Table 1. Advantages and disadvantages of each method

Item | Advantage | Disadvantage
---------|----------|------------
**Manual Tracing** | - Requires fewer development resources.<br/>- The API can be kept simpler, so the number of bugs can be reduced. | - Developers must modify their code.<br/>- Tracing level is low.
**Automatic Tracing** | - Developers are not required to modify their code.<br/>- More precise data can be collected thanks to the information available in bytecode. | - It would cost roughly 10 times more to develop Pinpoint with the automatic method.<br/>- Requires highly competent developers who can instantly recognize the library code to be traced and make decisions on the tracing points.<br/>- Can increase the possibility of bugs, since it relies on advanced techniques such as bytecode instrumentation.

Bytecode instrumentation is a difficult and risky technique. Nevertheless, using it has many benefits.

Although it requires a large amount of development resources, it requires almost none for applying the service. For example, the following compares the cost of an automatic method using bytecode instrumentation with that of a manual method using libraries (the numbers here are illustrative):

* Automatic method: total of 100
  - Cost of Pinpoint development: 100
  - Cost of applying it to a service: 0
* Manual method: total of 30
  - Cost of Pinpoint development: 20
  - Cost of applying it to a service: 10

These numbers suggest that the manual method is more cost-effective than the automatic one. However, that does not hold for NAVER, since we have thousands of services. For example, if 10 services need to be modified, the total cost is calculated as follows:

* Cost of Pinpoint development 20 + cost of applying it 10 × 10 services = 120

As you can see, the automatic method was more cost-efficient for us.

We are lucky to have many developers in the Pinpoint team who are highly competent and specialized in Java. Therefore, we believed that overcoming the technical difficulties in Pinpoint development was only a matter of time.

### The Value of Bytecode Instrumentation

We chose bytecode instrumentation (the automatic method) not only for the reasons explained above, but also for the following points.

#### Hidden API

If a tracing API is exposed for developers to use, we, as API providers, are restricted in how we can modify it, and such restrictions can impose stress on us.

We may want to modify an API to correct a mistaken design or to add new functions.
However, if there is a restriction on doing so, it becomes difficult for us to improve the API. The best answer to this problem is a scalable system design, which, as everyone knows, is not an easy option; it is almost impossible to create a perfect API design, since we cannot predict the future.

With bytecode instrumentation, we do not have to worry about the problems caused by exposing tracing APIs, and we can continuously improve the design without considering dependency relationships. If you are planning to develop an application using Pinpoint's APIs, please note that they can be changed by the Pinpoint developers, since improving performance and design is our first priority.

#### Easy to Enable or Disable

The disadvantage of bytecode instrumentation is that a problem in the profiling section of a library, or in Pinpoint itself, can affect your applications. However, since you do not have to change any code, you can resolve this by simply disabling Pinpoint.

You can enable Pinpoint for your application by adding the three lines below (the Pinpoint Agent configuration) to your JVM startup script:

````
-javaagent:$AGENT_PATH/pinpoint-bootstrap-$VERSION.jar
-Dpinpoint.agentId=<AgentId>
-Dpinpoint.applicationName=<ApplicationName>
````

(Replace the `<AgentId>` and `<ApplicationName>` placeholders with your own values.) If any problem occurs because of Pinpoint, you can simply delete these lines from the JVM startup script.

### How Bytecode Instrumentation Works

Since bytecode instrumentation has to deal with Java bytecode, it tends to increase development risk and decrease productivity, and developers are prone to mistakes. In Pinpoint, we improved productivity and accessibility through abstraction with interceptors. Pinpoint injects the code needed to track distributed transactions and performance information by intervening in application code at class loading time. Because the tracking code is injected directly into the application code, this also improves performance.
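The interceptor abstraction just described can be sketched as plain Java: the agent-injected code wraps the original method body with before() and after() calls, and data recording lives behind the interceptor. All names below are illustrative, not Pinpoint's actual API.

```java
// Minimal sketch of the interceptor abstraction: the injected code only
// delegates to before()/after(); recording happens inside the interceptor.
class InterceptorExample {
    interface Interceptor {
        void before(Object target, Object[] args);
        void after(Object target, Object result, Throwable throwable);
    }

    // What an instrumented method effectively looks like after the agent
    // rewrites it at class loading time.
    static Object tracedCall(Interceptor interceptor, Object target, Object[] args) {
        interceptor.before(target, args);
        Object result = null;
        Throwable error = null;
        try {
            result = doActualWork(args);     // the original method body
            return result;
        } catch (RuntimeException t) {
            error = t;
            throw t;
        } finally {
            interceptor.after(target, result, error);
        }
    }

    // Stand-in for the real method logic being traced.
    static Object doActualWork(Object[] args) {
        return args.length;
    }
}
```

Because only the methods chosen for tracing are rewritten this way, the profiling data stays compact, as the next paragraph explains.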
![Figure 3. Behavior of bytecode instrumentation](images/td_figure3.png)

Figure 3. Basic principle of bytecode instrumentation

In Pinpoint, the API-intercepting part and the data-recording part are separated. An interceptor is injected into each method that we want to track, and its before() and after() methods are called, where data recording is taken care of. Through bytecode instrumentation, the Pinpoint Agent records data only from the necessary methods, which keeps the size of the profiling data compact.

## Optimizing the Performance of the Pinpoint Agent

Finally, we will describe how the performance of the Pinpoint Agent is optimized.

### Using a Binary Format (Thrift)

You can increase encoding speed by using a binary format ([Thrift](https://thrift.apache.org/)). Although it is harder to use and debug, it improves the efficiency of network usage by reducing the size of the generated data.

### Optimizing Recorded Data for a Variable-Length Encoding Format

If you encode a long integer as fixed-length data, it takes 8 bytes. With variable-length encoding, however, the same number can take anywhere from 1 to 10 bytes depending on its magnitude. To reduce data size, Pinpoint encodes data with the variable-length Compact Protocol of Thrift and records values in a form optimized for that encoding; in particular, the Pinpoint Agent reduces data size by recording each timestamp as an offset (a vector value) from the start time of the root method.

> Variable-length encoding
>
> For more information on variable-length encoding, see "[Base 128 Varints](https://developers.google.com/protocol-buffers/docs/encoding#varints)" in Google Developers.

![Figure 4. Comparison between fixed-length encoding and variable-length encoding](images/td_figure4.png)

Figure 4.
Comparison between fixed-length encoding and variable-length encoding

As you can see in Figure 4, to know when three different methods are called and finished, you need to measure the time at six points. With fixed-length encoding, this requires 48 bytes (6 points × 8 bytes).

The Pinpoint Agent, meanwhile, uses variable-length encoding and records the data in the corresponding format: the time at every point other than the root is stored as the difference (a vector value) from the start time of the root method. Since these vector values are small numbers, they take only a few bytes each, so the same information consumes just 13 bytes instead of 48.

If a method takes longer to execute, the number of bytes grows even with variable-length encoding, but it is still more efficient than fixed-length encoding.

### Replacing Repeated API Information, SQLs, and Strings with Constant Tables

We wanted Pinpoint to enable code-level tracing, but this tends to inflate data size: every time high-precision data is sent to a server, network usage increases with the size of the data.

To solve this problem, we created constant tables in the remote HBase server. Since sending the full data for "method A" to the Pinpoint Collector on every call would be an overload, the Pinpoint Agent converts "method A" to an ID, stores the mapping in a constant table in HBase, and continues tracing with the ID. When a user retrieves trace data on the website, Pinpoint Web looks up the method information for each ID in the constant table and reassembles it. The same approach reduces the data size of SQLs and frequently used strings.

### Handling Samples for Bulk Requests

The request volume of the online portal services provided by NAVER is huge: a single service handles more than 20 billion requests a day.
One simple way to trace such traffic is to expand the network infrastructure and servers to match the number of requests, but this is not cost-effective.

In Pinpoint, you can collect sampled data rather than tracking every request. In a development environment, where requests are few, every piece of data is collected; in a production environment, where requests are plentiful, only 1~5% of the data is collected, which is sufficient to analyze the status of the whole application. With sampling, you minimize the network overhead in your applications and reduce the cost of infrastructure such as networks and servers.

> Sampling method in Pinpoint
>
> Pinpoint supports a counting sampler which, when set to 10, collects data for only one out of every 10 requests. We plan to add new samplers that can collect data more effectively.

### Minimizing Application Thread Interruption with Asynchronous Data Transfer

Pinpoint does not interfere with application threads, because encoded data and remote messages are transferred asynchronously by another thread.

#### Transferring Data via UDP

Unlike Google's Dapper, Pinpoint transfers data over the network to ensure data speed. Sharing the network with your service can become an issue when data traffic bursts; in such situations the Pinpoint Agent uses the UDP protocol, yielding network-connection priority to your service.

> Note
>
> The data transfer API is separated behind an interface, so it can be replaced with an implementation that stores data in a different way, such as local files.

## Example of Pinpoint Applied

Here is an example of how data is collected from your application, so that you can comprehensively understand what has been described so far.

Figure 5 shows what you see when Pinpoint is installed on TomcatA and TomcatB.
You can see the trace data of each node as a single transaction, representing the flow of distributed transaction tracing.

![Figure 5. Example 1: Pinpoint applied](images/td_figure5.png)

Figure 5. Example of Pinpoint in action

The following describes what Pinpoint does at each step.

1. The Pinpoint Agent issues a TraceId when a request arrives at TomcatA.
    - TX_ID: TomcatA^TIME^1
    - SpanId: 10
    - ParentSpanId: -1 (Root)

2. Records data from the Spring MVC controllers.

3. Intercepts the call to the HttpClient.execute() method and configures the TraceId in the HttpGet.
    - Creates a child TraceId.
        - TX_ID: TomcatA^TIME^1 -> TomcatA^TIME^1
        - SPAN_ID: 10 -> 20
        - PARENT_SPAN_ID: -1 -> 10 (parent SpanId)
    - Configures the child TraceId in the HTTP header.
        - HttpGet.setHeader(PINPOINT_TX_ID, "TomcatA^TIME^1")
        - HttpGet.setHeader(PINPOINT_SPAN_ID, "20")
        - HttpGet.setHeader(PINPOINT_PARENT_SPAN_ID, "10")

4. Transfers the tagged request to TomcatB.
    - TomcatB checks the header of the transferred request.
        - HttpServletRequest.getHeader(PINPOINT_TX_ID)
    - TomcatB becomes a child node, as it identifies the TraceId in the header.
        - TX_ID: TomcatA^TIME^1
        - SPAN_ID: 20
        - PARENT_SPAN_ID: 10

5. Records data from the Spring MVC controllers and completes the request.

    ![Figure 6. Example 2: Pinpoint applied](images/td_figure6.png)

6. When the request on TomcatB is completed, the Pinpoint Agent sends the trace data to the Pinpoint Collector, which stores it in HBase.

7. After the HTTP call to TomcatB terminates, the request on TomcatA is also complete, and its Pinpoint Agent likewise sends the trace data to the Pinpoint Collector to be stored in HBase.

8. The UI reads the trace data from HBase and creates a call stack by sorting the trees.

## Conclusions

Pinpoint is another application that runs alongside your applications. Thanks to bytecode instrumentation, it appears to require no code modifications.
In general, the bytecode instrumentation technique exposes applications to risk: if a problem occurs in Pinpoint, it affects your applications as well. For now, instead of trying to eliminate such threats, we are focusing on improving Pinpoint's performance and design, because we believe this is what makes Pinpoint valuable. Whether or not to use Pinpoint is for you to decide.

We still have a large amount of work to do to improve Pinpoint. Despite its incompleteness, Pinpoint has been released as an open-source project, and we are continuously developing and improving it to meet your expectations.

> Written by Woonduk Kang
>
> In 2011, I described myself like this: as a developer, I would like to make a software program that users are willing to pay for, like those of Microsoft or Oracle. As Pinpoint was launched as an open-source project, it seems that my dream has somewhat come true. For now, my desire is to make Pinpoint more valuable and attractive to users.
\ No newline at end of file
diff --git a/doc/troubleshooting_network.md b/doc/troubleshooting_network.md
deleted file mode 100755
index 59c1789714155..0000000000000
--- a/doc/troubleshooting_network.md
+++ /dev/null
@@ -1,66 +0,0 @@
---
title: Checking network configuration
keywords: troubleshooting
last_updated: Aug 14, 2018
sidebar: mydoc_sidebar
permalink: troubleshooting_network.html
disqus: false
---

We provide a simple tool that checks your network configuration. It verifies the network status between the Pinpoint Agent and the Pinpoint Collector.

### Testing with binary release

Assuming you have downloaded the build results from our [**latest release**](https://github.com/pinpoint-apm/pinpoint/releases/latest):

1. Start your collector server.
2. In any terminal, go to the *script* folder inside the *pinpoint-agent-VERSION.tar.gz* package that you downloaded.
````
> pwd
/Users/user/Downloads/pinpoint-agent-1.7.2-SNAPSHOT/script
````

and run the *networktest.sh* shell script:

````
> sh networktest.sh
````

You will see the CLASSPATH and the configuration you have made in the *pinpoint.config* file, as below:

````
CLASSPATH=./tools/pinpoint-tools-1.7.2-SNAPSHOT.jar
...Remainder Omitted...
2018-04-10 17:36:30 [INFO ](com.navercorp.pinpoint.bootstrap.config.DefaultProfilerConfig) profiler.enable=true
2018-04-10 17:36:30 [INFO ](com.navercorp.pinpoint.bootstrap.config.DefaultProfilerConfig) profiler.instrument.engine=ASM
2018-04-10 17:36:30 [INFO ](com.navercorp.pinpoint.bootstrap.config.DefaultProfilerConfig) profiler.instrument.matcher.enable=true
...Remainder Omitted...
````

After that, you will see the results. (In this case, the collector server was started locally.)
If you receive all six SUCCESS results as below, you are all set and ready to go.

````
UDP-STAT:// localhost
    => 127.0.0.1:9995 [SUCCESS]
    => 0:0:0:0:0:0:0:1:9995 [SUCCESS]

UDP-SPAN:// localhost
    => 127.0.0.1:9996 [SUCCESS]
    => 0:0:0:0:0:0:0:1:9996 [SUCCESS]

TCP:// localhost
    => 127.0.0.1:9994 [SUCCESS]
    => 0:0:0:0:0:0:0:1:9994 [SUCCESS]
````

### Testing with source code

The idea is basically the same.

1. Start your collector server.
2. Pass the *path* of the pinpoint.config file as a *program argument* and run the ***NetworkAvailabilityChecker*** class.
3. (Only for versions below v1.7.2) If you get a JNI error while running, remove the ````provided```` line from the pom.xml under the *tools* module and try again.

Results should be the same as shown above.
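If you only want a quick manual check that the collector's TCP port is reachable, without running the full tool, a few lines of Java along these lines will do. The host and port below are assumptions matching the example output above; adjust them to the values in your pinpoint.config. Note that the UDP ports cannot be checked this way, since UDP is connectionless.

```java
import java.net.InetSocketAddress;
import java.net.Socket;

// Quick TCP reachability check for the collector; 9994 is the TCP port
// shown in the example output above -- adjust to your own configuration.
class CollectorPortCheck {
    static boolean canConnect(String host, int port, int timeoutMillis) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return true;
        } catch (Exception e) {
            return false;  // refused, timed out, or unresolvable
        }
    }

    public static void main(String[] args) {
        System.out.println("TCP 9994 reachable: " + canConnect("localhost", 9994, 1000));
    }
}
```

This only confirms that a TCP handshake succeeds; the *networktest.sh* tool remains the authoritative check, since it also exercises the UDP channels.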
> If you face an error on v1.7.3, take a look at this [issue](https://github.com/pinpoint-apm/pinpoint/issues/4668)
\ No newline at end of file
diff --git a/doc/ui_guide.md b/doc/ui_guide.md
deleted file mode 100644
index 393306daa7425..0000000000000
--- a/doc/ui_guide.md
+++ /dev/null
@@ -1,63 +0,0 @@
---
title: New UI Guide
sidebar: mydoc_sidebar
tags:
keywords: UI
last_updated: Sep 1, 2018
permalink: ui_v2.html
toc: false
---

## How to test the new UI in development

Our team is redeveloping the UI with a new design, using the latest Angular framework.
If you want to experience the new UI in advance, please follow the instructions below.

* Update the `RewriteForV2Filter` setting in Spring's `applicationContext-web.xml` as follows.

```` xml
<!-- applicationContext-web.xml -->
<!-- Illustrative reconstruction (the original snippet was lost): register
     RewriteForV2Filter with its constructor-arg set to "true"; verify the
     exact bean definition against your Pinpoint version. -->
<bean id="rewriteForV2Filter" class="com.navercorp.pinpoint.web.servlet.RewriteForV2Filter">
    <constructor-arg value="true"/>
</bean>
````

* Add the `-Pv2` option when building with Maven.
> mvn clean install -Pv2
  // Please note that adding the -Pv2 option may increase build time.

* URL where you can check the new UI
  * http://your.domain.name/v2

![UI Example](images/ui.png)

## How to test the new UI under development

The Pinpoint team is redeveloping the UI with a new design, using the latest Angular framework.
If you want to try the new UI in advance, the following settings are required.

* In applicationContext-web.xml, set the `constructor-arg` value to `true`.

```` xml
<!-- applicationContext-web.xml -->
<!-- Illustrative reconstruction, as above: the bean's constructor-arg
     enables the v2 rewrite filter. -->
<bean id="rewriteForV2Filter" class="com.navercorp.pinpoint.web.servlet.RewriteForV2Filter">
    <constructor-arg value="true"/>
</bean>
````
* Add the `-Pv2` option when building with Maven.
> mvn clean install -Pv2
  // Please note that adding the -Pv2 option may increase build time.
* URL to check
  * http://your.domain.name/v2

![UI Example](images/ui.png)

* Running watch & build during development:
run `npm install` in `./web/src/main/webapp/v2/`, then run the following command:

````
> npm run build:watch
````
\ No newline at end of file
diff --git a/doc/videos.md b/doc/videos.md
deleted file mode 100755
index d037f64773187..0000000000000
--- a/doc/videos.md
+++ /dev/null
@@ -1,31 +0,0 @@
---
title: "Videos"
keywords: videos, guide
permalink: videos.html
sidebar: mydoc_sidebar
---

## Speaking at COSCUP2019

Speaking at Taiwan's largest open source conference.

Title: [Naver, monitoring services with trillions of event with open source APM, Pinpoint](https://coscup.org/2019/en/programs/naver-monitoring-services-with-trillions-of-event-with-open-source-apm-pinpoint)
Date: Aug 18, 2019

## Speaking at HKOSCon2019

Speaking at Hong Kong's largest open source conference.

Title: [How we started an open source APM project and troubleshooting with it](https://hkoscon.org/2019/topics/how-we-started-open-source-apm-project-and-troubleshooting-it)
Date: June 15, 2019

## Introduction to Pinpoint v1.5.0

A video introducing Pinpoint.