RHCLOUD-4283 - Enabling stdout logs #650
Conversation
Is this solution any good? Tried hard not to use …
Questions and a simple rewrite suggestion.
@jharting just set the env var to use …
LGTM
I don’t understand how this enables output to stdout. It only changes how things are logged to CloudWatch. Originally we relied on propagating our logs to the root logger; now we use the CloudWatch handler in every logger. I don’t think this approach is any better.
If using the CloudWatch handler for the root logger is not desired, we can at least configure it for the inventory logger. That logger is the parent of all loggers returned by get_logger, and unless overridden by the INI file log configuration, all logs propagate to it automatically.
In this review I focused only on replacing the global Boto3 session with a global handler. Handlers are naturally reused, and here the configuration is the same for every logger.
Okay. Hopefully I found a way to do it.
- Instead of configuring each handler separately in the get_logger function, configure the inventory handler only once.
- Instantiate it in the configure_logging function and set the log level there.
- Pass it to the contextual filter and Watchtower configuration functions.
- In the Watchtower configuration function, read the configuration data and, if it is present, create a handler and add it to the logger.
- Simplify the get_logger function so it only returns a child logger (inventory.*), which propagates to the inventory logger carrying the CloudWatch handler and contextual filter.
- Remove all the garbage from logconfig.ini, keep only the root and library configuration.
- All the per-module inventory sections (app, lib, etc.) should go away.
- Add an inventory logger with some basic configuration (e.g. level=INFO) so it’s properly logged to stdout. See the sketch after this list.
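A minimal sketch of that shape. The environment variable names and the CloudWatchLogHandler keyword arguments are assumptions (the watchtower handler signature changed between releases), so treat this as an illustration rather than the actual module code:

import logging
import os
import sys

import watchtower
from boto3.session import Session


def configure_logging(log_level="INFO"):
    # Configure the single "inventory" parent logger once; every other logger
    # in the application propagates to it.
    logger = logging.getLogger("inventory")
    logger.setLevel(log_level)
    logger.addHandler(logging.StreamHandler(sys.stdout))  # stdout logging
    _configure_watchtower_logging(logger)


def _configure_watchtower_logging(logger):
    # Attach the CloudWatch handler only when credentials are actually present.
    key_id = os.getenv("AWS_ACCESS_KEY_ID")
    secret = os.getenv("AWS_SECRET_ACCESS_KEY")
    region = os.getenv("AWS_REGION_NAME")
    log_group = os.getenv("AWS_LOG_GROUP", "platform")
    stream_name = os.getenv("AWS_LOG_STREAM", "localhost")
    if all((key_id, secret, region, stream_name)):
        session = Session(
            aws_access_key_id=key_id, aws_secret_access_key=secret, region_name=region
        )
        # Keyword names below follow older watchtower releases; newer versions
        # use log_group_name/log_stream_name and a boto3 client instead.
        handler = watchtower.CloudWatchLogHandler(
            boto3_session=session, log_group=log_group, stream_name=stream_name
        )
        logger.addHandler(handler)
    else:
        print("Unable to configure watchtower logging. Please verify watchtower logging configuration!")


def get_logger(name):
    # Child loggers (inventory.*) carry no handlers of their own; their records
    # propagate up to the "inventory" logger configured above.
    return logging.getLogger(f"inventory.{name}")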
app/logging.py
Outdated
global session_boto3
session_boto3 = boto3_session
Suggested change:
- global session_boto3
- session_boto3 = boto3_session
+ handler = _get_cloudwatch_handler(aws_access_key_id, aws_secret_access_key, aws_region_name, aws_log_group, aws_stream_name, create_log_group)
+ logger.addHandler(handler)
see my comment below
There is now a conflict, caused by my #657 being merged in. It shouldn’t affect your code though.
@Glutexo I think I took your suggestions into account, but sadly it does not send any logs to CloudWatch. I will keep digging, but let me know if you see something I did that diverges from the changes you suggested.
Also, regarding changing logconfig.ini: did you mean taking app, api, tasks, mq_service, lib, utils out and replacing them with only one …
Ok, fixed it. 😓
just a qq.
this one lgtm!
Oops! Didn't realize that was causing the issue 😞
@Glutexo In …
^ This simply removes the CloudWatch handler info from the dictionary if creds aren't present. Let me know if you can think of a better way to do this.
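For reference, a rough sketch of that approach; the "cloudwatch" handler key and the dictConfig layout are assumptions about the YAML file, not its actual contents:

import os


def strip_cloudwatch_handler(config):
    # Remove the CloudWatch handler from a dictConfig-style dict when the AWS
    # credentials are missing, so dictConfig never tries to instantiate it.
    if not (os.getenv("AWS_ACCESS_KEY_ID") and os.getenv("AWS_SECRET_ACCESS_KEY")):
        config.get("handlers", {}).pop("cloudwatch", None)
        for logger_config in config.get("loggers", {}).values():
            handlers = logger_config.get("handlers", [])
            if "cloudwatch" in handlers:
                handlers.remove("cloudwatch")
    return config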
@tahmidefaz What’s wrong with the NullHandler? 😦 Now we just need to return an instance instead of a class.
@Glutexo sorry, I misunderstood. I will revert 🤦‍♂️
Makefile
Outdated
run_inv_mq_service:
-	PAYLOAD_TRACKER_SERVICE_NAME=inventory-mq-service INVENTORY_LOGGING_CONFIG_FILE=logconfig.ini \
+	PAYLOAD_TRACKER_SERVICE_NAME=inventory-mq-service INVENTORY_LOGGING_CONFIG_FILE=logconfig.yaml \
Ditto.
Here again, the INVENTORY_LOGGING_CONFIG_FILE is not needed.
app/__init__.py
Outdated
@@ -40,6 +38,7 @@ def create_app(runtime_environment):
     # This feels like a hack but it is needed. The logging configuration
     # needs to be setup before the flask app is initialized.
     configure_logging(runtime_environment)
+    logger = get_logger(__name__)  # we need to call the logger after configuring it, not before
Is the comment actually true? Even if we get the logger before the configuration, its configuration still gets updated. It makes perfect sense to get the logger after the configuration, but technically it’s not necessary.
Suggested change:
- logger = get_logger(__name__)  # we need to call the logger after configuring it, not before
+ logger = get_logger(__name__)
To reproduce, swap these two lines and see the logging still works the same. Compare to how it doesn’t work if you try to log something before the configuration.
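A quick stdlib-only illustration of why the order does not matter (not the project's code): logging.getLogger returns the same named logger object every time, so configuring it later also affects loggers obtained earlier.

import logging
import sys

early = logging.getLogger("inventory.app")  # obtained before any configuration

parent = logging.getLogger("inventory")     # configured afterwards
parent.addHandler(logging.StreamHandler(sys.stdout))
parent.setLevel(logging.INFO)

# The record still reaches the handler added after `early` was created,
# because `early` propagates to the freshly configured parent logger.
early.info("this is printed")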
I believe the reason I moved this line around was that, due to the way logging was set up earlier, some of the logs weren't being produced. However, after the logging refactoring, that seems to be fixed. I will move it back to where it was earlier. 👍
    else:
        print("Unable to configure watchtower logging. Please verify watchtower logging configuration!")
        return NullHandler()
🎉
run_gunicorn.py
Outdated
@@ -30,7 +30,7 @@ def run_server():
     variables.
     """
     bind = f"0.0.0.0:{LISTEN_PORT}"
-    run(("gunicorn", "--log-config=logconfig.ini", "--log-level=debug", f"--bind={bind}", "run"))
+    run(("gunicorn", "--log-config=logconfig.yaml", "--log-level=debug", f"--bind={bind}", "run"))
Does this actually work? Not only am I afraid it doesn’t, but we also found out that loading the configuration through the --log-config argument is what causes problems. Let’s remove it instead.
run(("gunicorn", "--log-config=logconfig.yaml", "--log-level=debug", f"--bind={bind}", "run")) | |
run(("gunicorn", "--log-level=debug", f"--bind={bind}", "run")) |
Confirmed. It tries to use the configparser.
configparser.MissingSectionHeaderError: File contains no section headers.
file: 'logconfig.yaml', line: 1
'---\n'
This may be a solution. We already use gunicorn.conf.py in production. I think we can move the logging configuration there! Not in this PR, but it’d definitely be worth exploring.
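A possible sketch of that idea (not part of this PR): gunicorn 19.8+ applies the logconfig_dict setting through logging.config.dictConfig, so the YAML file could be loaded in gunicorn.conf.py, assuming logconfig.yaml is a dictConfig-compatible document.

# gunicorn.conf.py
import yaml

with open("logconfig.yaml") as config_file:
    # Gunicorn passes this dict to logging.config.dictConfig at startup,
    # replacing the --log-config command line argument.
    logconfig_dict = yaml.safe_load(config_file)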
@tahmidefaz It looks like you already fixed most of my comments. We’re getting there!
@Glutexo sorry for the hiccup earlier, today is a day of major facepalms 🤦‍♂️
@tahmidefaz Do you know what needs to be done there and how? Can you post a link here once you have it?
@Glutexo yes, I will mention this PR in e2e-deploy once I get around to it
^ @Glutexo I think in terms of the changes in e2e-deploy, that should be good. Please take a look.
Removed CloudWatch configuration status logging. It doesn’t work correctly and moreover duplicates existing print statements.
The value of INVENTORY_LOGGING_CONFIG_FILE in the Makefile only repeats the default, so it is redundant and has been removed.
Found some minor issues. Fixing and pushing them right away.
I can finally confirm this works. 🎉 Logging both to standard output and to CloudWatch works. There are no nasty globals or configuration shenanigans. Without CloudWatch credentials, messages are logged only to standard output.
There is only one sad thing. Now that the runtime environment flag is not used when loading the logging configuration, the Flask app tries to configure the CloudWatch handler in tests too. Because there are no credentials provided in the tests, this only results in a message being printed, but it should be addressed in the future somehow.
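One hypothetical direction for that follow-up; the cloudwatch_logging_enabled attribute is a made-up name for illustration:

def _configure_watchtower_logging(logger, runtime_environment):
    # Skip CloudWatch setup entirely when the runtime environment says so,
    # e.g. in tests, instead of falling through on missing credentials.
    if not getattr(runtime_environment, "cloudwatch_logging_enabled", True):
        return
    # ... the existing credential checks and handler setup would follow here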
Makefile
Outdated
	PAYLOAD_TRACKER_SERVICE_NAME=inventory-mq-service INVENTORY_LOGGING_CONFIG_FILE=logconfig.yaml \
		INVENTORY_LOG_LEVEL=DEBUG python inv_mq_service.py
Suggested change:
- 	PAYLOAD_TRACKER_SERVICE_NAME=inventory-mq-service INVENTORY_LOGGING_CONFIG_FILE=logconfig.yaml \
- 		INVENTORY_LOG_LEVEL=DEBUG python inv_mq_service.py
+ 	PAYLOAD_TRACKER_SERVICE_NAME=inventory-mq-service INVENTORY_LOG_LEVEL=DEBUG python inv_mq_service.py
app/logging.py
Outdated
    aws_log_group = os.getenv("AWS_LOG_GROUP", "platform")
    aws_log_stream = os.getenv("AWS_LOG_STREAM", _get_hostname())
    create_log_group = str(os.getenv("AWS_CREATE_LOG_GROUP")).lower() == "true"
    logger.info(f"Configuring watchtower logging (log_group={aws_log_group}, stream_name={aws_log_stream})")
logger.info(f"Configuring watchtower logging (log_group={aws_log_group}, stream_name={aws_log_stream})") |
app/logging.py
Outdated
    if all([aws_access_key_id, aws_secret_access_key, aws_region_name, stream_name]):
        print(f"Configuring watchtower logging (log_group={log_group}, stream_name={stream_name})")
        logger = get_logger(__name__)
Suggested change (remove the line):
- logger = get_logger(__name__)
Tests passed. @jharting, are you okay with merging?