Per-CPU usage is unavailable on some CPUs #19
Labels: anatomy (Changes are more than what meets the eye), bug (Something isn't working), help wanted (Extra attention is needed)

gridhead added the bug, help wanted, and anatomy labels on Apr 2, 2021
This needs to be fixed on the frontend side, as the driver emits exactly the data made available by the docker-py module:

Python 3.9.2 (default, Feb 20 2021, 18:40:11)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.22.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: from docker import DockerClient
In [2]: dock = DockerClient()
In [7]: dock.containers.list()
Out[7]: [<Container: a940a858b8>]
In [8]: a = dock.containers.list()[0]
In [9]: a.stats
Out[9]: <bound method Container.stats of <Container: a940a858b8>>
In [10]: a.stats()
Out[10]: <generator object APIClient._stream_helper at 0x7f54c0299270>
In [11]: a.stats(stream=False)
Out[11]:
{'read': '2021-04-03T01:43:51.278450805Z',
'preread': '2021-04-03T01:43:50.276025491Z',
'pids_stats': {'current': 1, 'limit': 9366},
'blkio_stats': {'io_service_bytes_recursive': [{'major': 8,
'minor': 0,
'op': 'read',
'value': 3973120},
{'major': 8, 'minor': 0, 'op': 'write', 'value': 0}],
'io_serviced_recursive': None,
'io_queue_recursive': None,
'io_service_time_recursive': None,
'io_wait_time_recursive': None,
'io_merged_recursive': None,
'io_time_recursive': None,
'sectors_recursive': None},
'num_procs': 0,
'storage_stats': {},
'cpu_stats': {'cpu_usage': {'total_usage': 35105000,
'usage_in_kernelmode': 27003000,
'usage_in_usermode': 8101000},
'system_cpu_usage': 1983850000000,
'online_cpus': 4,
'throttling_data': {'periods': 0,
'throttled_periods': 0,
'throttled_time': 0}},
'precpu_stats': {'cpu_usage': {'total_usage': 35105000,
'usage_in_kernelmode': 27003000,
'usage_in_usermode': 8101000},
'system_cpu_usage': 1979830000000,
'online_cpus': 4,
'throttling_data': {'periods': 0,
'throttled_periods': 0,
'throttled_time': 0}},
'memory_stats': {'usage': 4902912,
'stats': {'active_anon': 0,
'active_file': 2027520,
'anon': 540672,
'anon_thp': 0,
'file': 3784704,
'file_dirty': 0,
'file_mapped': 2838528,
'file_writeback': 0,
'inactive_anon': 540672,
'inactive_file': 1757184,
'kernel_stack': 49152,
'pgactivate': 396,
'pgdeactivate': 0,
'pgfault': 1716,
'pglazyfree': 0,
'pglazyfreed': 0,
'pgmajfault': 0,
'pgrefill': 0,
'pgscan': 0,
'pgsteal': 0,
'shmem': 0,
'slab': 0,
'slab_reclaimable': 0,
'slab_unreclaimable': 0,
'sock': 0,
'thp_collapse_alloc': 0,
'thp_fault_alloc': 0,
'unevictable': 0,
'workingset_activate': 0,
'workingset_nodereclaim': 0,
'workingset_refault': 0},
'limit': 8199704576},
'name': '/trusting_clarke',
'id': 'a940a858b80c6795051eedec4f7397830290b74b63216de160b00ddd732e0058',
'networks': {'eth0': {'rx_bytes': 4832,
'rx_packets': 40,
'rx_errors': 0,
'rx_dropped': 0,
'tx_bytes': 0,
'tx_packets': 0,
'tx_errors': 0,
'tx_dropped': 0}}}

Notice how docker-py does not provide the per-CPU usage information in the output of the stats() call shown above.
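Even without a percpu_usage list, an overall CPU percentage can still be derived from this payload, using the deltas between cpu_stats and precpu_stats and the online_cpus field. A minimal sketch under that assumption (the function name and structure are illustrative, not taken from the project's code):

```python
def cpu_percent(stats: dict) -> float:
    """Compute overall CPU usage (%) from a docker-py stats snapshot.

    Works even when 'percpu_usage' is absent from 'cpu_usage':
    falls back to the 'online_cpus' field for the CPU count.
    """
    cpu = stats["cpu_stats"]
    precpu = stats["precpu_stats"]

    cpu_delta = cpu["cpu_usage"]["total_usage"] - precpu["cpu_usage"]["total_usage"]
    system_delta = cpu.get("system_cpu_usage", 0) - precpu.get("system_cpu_usage", 0)

    # Prefer the explicit per-CPU list when present; otherwise use online_cpus.
    ncpus = len(cpu["cpu_usage"].get("percpu_usage") or []) or cpu.get("online_cpus", 1)

    if system_delta <= 0 or cpu_delta < 0:
        return 0.0
    return (cpu_delta / system_delta) * ncpus * 100.0
```

Note that in the snapshot above total_usage is identical in cpu_stats and precpu_stats, so this sketch would report 0% for that particular read.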
Fixed on the frontend side with gridhead/supervisor-frontend-service#85. No action is required on the driver side of things.
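In practice, a frontend-side fix amounts to treating percpu_usage as optional rather than assuming it is present. A minimal sketch of that defensive read (helper name is illustrative, not taken from the linked pull request):

```python
def percpu_usage_list(stats: dict) -> list:
    """Return the per-CPU usage list from a docker-py stats snapshot,
    or an empty list when the host does not expose it (as in this issue)."""
    return stats.get("cpu_stats", {}).get("cpu_usage", {}).get("percpu_usage") or []
```

An empty result can then be rendered as "unavailable" on the statistics page instead of raising a KeyError.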
This is a snapshot from the frontend scraping data from the driver running on an Intel CPU.
And this is the output from the container statistics endpoint:
While the following is a snapshot from the frontend scraping data from the driver running on a Broadcom CPU.
And this is the output from the container statistics endpoint:
The reason for this discrepancy is unknown, but it breaks the statistics page, so it should be fixed promptly.