
fix 404 on vmware's site #1477

Merged
mrjones-plip merged 2 commits into main on Aug 14, 2024
Conversation

mrjones-plip
Contributor

Description

per CI link check
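
For illustration only (the repository's actual CI link checker is not shown in this PR), a dead external link like the one fixed here can be caught with a simple HTTP status probe. The snippet below is a hypothetical sketch using the old ESXi URL from the diff, not the real CI job.

```python
# Hypothetical sketch of how a CI link check flags a dead documentation link;
# this is NOT the actual checker used by this repository's CI.
import requests

# The old ESXi URL replaced in this PR (it no longer resolves, per the PR title).
OLD_URL = "https://www.vmware.com/content/vmware/vmware-published-sites/us/products/esxi-and-esx.html.html"

resp = requests.head(OLD_URL, allow_redirects=True, timeout=10)
if resp.status_code >= 400:
    print(f"broken link ({resp.status_code}): {OLD_URL}")
else:
    print(f"ok ({resp.status_code}): {OLD_URL}")
```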

License

The software is provided under AGPL-3.0. Contributions to this project are accepted under the same license.

@@ -22,7 +22,7 @@ CHT Core 4.0.0 introduces [a new architecture]({{< relref "core/overview/archite

Before getting into how the CHT scales horizontally, it is important to understand what vertical scaling is and why it matters. Vertical scaling is the ability of the CHT to support more users by adding more RAM and CPU to either the bare-metal or virtual machine host. This ensures key services like API, Sentinel and, most importantly, CouchDB, can operate without performance degradation.

-When thousands of users are simultaneously trying to synchronize with the CHT, the load can overwhelm CouchDB. As discovered [through extensive research](https://forum.communityhealthtoolkit.org/t/how-we-tested-scalability-of-cht-infrastructure/1532) and [large production deployments](https://github.com/medic/cht-core/issues/8324#issuecomment-1691411542), administrators will start to see errors in their logs and end users will complain of slow sync times. Before moving to more CouchDB nodes, administrators should consider adding more RAM and CPU to the single server where the CHT is hosted. This applies to both CHT 3.x and CHT 4.x. Given the ease of allocating more resources, presumably in virtualized environment like [EC2](https://aws.amazon.com/ec2/), [Proxmox](https://www.vmware.com/content/vmware/vmware-published-sites/us/products/esxi-and-esx.html.html) or [ESXi](https://www.vmware.com/content/vmware/vmware-published-sites/us/products/esxi-and-esx.html.html), this is much easier than moving [from a single to multi-node CouchDB instance]({{< relref "hosting/4.x/data-migration" >}}).
+When thousands of users are simultaneously trying to synchronize with the CHT, the load can overwhelm CouchDB. As discovered [through extensive research](https://forum.communityhealthtoolkit.org/t/how-we-tested-scalability-of-cht-infrastructure/1532) and [large production deployments](https://github.com/medic/cht-core/issues/8324#issuecomment-1691411542), administrators will start to see errors in their logs and end users will complain of slow sync times. Before moving to more CouchDB nodes, administrators should consider adding more RAM and CPU to the single server where the CHT is hosted. This applies to both CHT 3.x and CHT 4.x. Given the ease of allocating more resources, presumably in virtualized environment like [EC2](https://aws.amazon.com/ec2/), [Proxmox](https://www.vmware.com/content/vmware/vmware-published-sites/us/products/esxi-and-esx.html.html) or [ESXi](https://www.vmware.com/products/cloud-infrastructure/esxi-and-esx), this is much easier than moving [from a single to multi-node CouchDB instance]({{< relref "hosting/4.x/data-migration" >}}).
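
To make the "add RAM and CPU before adding CouchDB nodes" guidance in the diff above concrete, here is a minimal sketch (not part of this PR or the CHT docs) that reads the local CouchDB node's memory statistics from its `/_node/_local/_system` endpoint; the URL and admin credentials are placeholders for an assumed single-node deployment.

```python
# Minimal sketch: check the local CouchDB node's memory usage before deciding
# whether more RAM is warranted or a multi-node migration is really needed.
# Host, port, and credentials are placeholders -- adjust for your deployment.
import requests

COUCH_URL = "http://localhost:5984"  # assumed single-node CouchDB endpoint
AUTH = ("admin", "password")         # placeholder admin credentials

# CouchDB's /_node/_local/_system endpoint reports runtime stats for the
# local node, including Erlang VM memory usage.
resp = requests.get(f"{COUCH_URL}/_node/_local/_system", auth=AUTH, timeout=10)
resp.raise_for_status()

memory = resp.json().get("memory", {})
print("total memory (bytes):    ", memory.get("total"))
print("processes memory (bytes):", memory.get("processes"))
```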
Contributor
@mrjones-plip, I am not able to access the Proxmox link.

Contributor Author
ah - thanks! should be fixed in ac1c70b

mrjones-plip merged commit fbe6cd3 into main on Aug 14, 2024
2 checks passed
mrjones-plip deleted the mrjones-plip-patch-6 branch on August 14, 2024 at 15:16