fix: typos #9783

Open · wants to merge 1 commit into main
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -281,7 +281,7 @@
 ## [1.1.8] - 2017-01-22
 ### Fixed
 - Compatibility fixes for Docker for Mac v1.13.0 ([\#272](https://github.com/testcontainers/testcontainers-java/issues/272))
-- Relax docker environment disk space check to accomodate unusual empty `df` output observed on Docker for Mac with OverlayFS ([\#273](https://github.com/testcontainers/testcontainers-java/issues/273), [\#278](https://github.com/testcontainers/testcontainers-java/issues/278))
+- Relax docker environment disk space check to accommodate unusual empty `df` output observed on Docker for Mac with OverlayFS ([\#273](https://github.com/testcontainers/testcontainers-java/issues/273), [\#278](https://github.com/testcontainers/testcontainers-java/issues/278))
 - Fix inadvertent private-scoping of startup checks' `StartupStatus`, which made implementation of custom startup checks impossible ([\#266](https://github.com/testcontainers/testcontainers-java/issues/266))
 - Fix potential resource lead/deadlock when errors are encountered building images from a Dockerfile ([\#274](https://github.com/testcontainers/testcontainers-java/issues/274))

@@ -368,7 +368,7 @@ static class FilterRegistry {
      * Registers the given filters with Ryuk
      *
      * @param filters the filter to register
-     * @return true if the filters have been registered successfuly, false otherwise
+     * @return true if the filters have been registered successfully, false otherwise
      * @throws IOException if communication with Ryuk fails
      */
     protected boolean register(List<Map.Entry<String, String>> filters) throws IOException {
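For orientation, a minimal sketch of what such a filter list looks like at a call site. This is purely illustrative: the label key and session-id value below are invented, not the exact labels Ryuk consumes.

```java
import java.util.AbstractMap;
import java.util.List;
import java.util.Map;

public class FilterExample {
    public static void main(String[] args) {
        // Hypothetical illustration: a filter is a plain key/value pair,
        // e.g. a Docker label filter scoping resources to one test session.
        List<Map.Entry<String, String>> filters = List.of(
            new AbstractMap.SimpleEntry<>("label", "org.testcontainers.sessionId=abc123")
        );
        // FilterRegistry#register(filters) then streams these to the Ryuk
        // sidecar and returns true once Ryuk acknowledges them.
        System.out.println(filters);
    }
}
```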
@@ -37,7 +37,7 @@ public void shouldWorkWithSimpleDependency() {
     }

     @Test
-    public void shouldWorkWithMutlipleDependencies() {
+    public void shouldWorkWithMultipleDependencies() {
         InvocationCountingStartable startable1 = new InvocationCountingStartable();
         InvocationCountingStartable startable2 = new InvocationCountingStartable();
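The renamed test exercises multiple `Startable` dependencies; the public equivalent in user code is `GenericContainer#dependsOn`. A minimal sketch, with placeholder images and commands:

```java
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;

public class DependsOnExample {
    public static void main(String[] args) {
        // A container that must be started before the dependent one.
        GenericContainer<?> dependency = new GenericContainer<>(DockerImageName.parse("alpine:3.17"))
            .withCommand("top");

        // Starting `app` transitively starts `dependency` first; shared
        // dependencies are started exactly once, which is what the
        // InvocationCountingStartable assertions above pin down.
        GenericContainer<?> app = new GenericContainer<>(DockerImageName.parse("alpine:3.17"))
            .withCommand("top")
            .dependsOn(dependency);

        app.start();
        app.stop();
    }
}
```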
@@ -21,7 +21,7 @@
 public class LazyFutureTest {

     @Test
-    public void testLazyness() throws Exception {
+    public void testLaziness() throws Exception {
         AtomicInteger counter = new AtomicInteger();

         Future<Integer> lazyFuture = new LazyFuture<Integer>() {
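For readers unfamiliar with `LazyFuture`, a minimal sketch of the behaviour this test pins down, assuming `resolve()` is the single template method, as the snippet above suggests:

```java
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicInteger;
import org.testcontainers.utility.LazyFuture;

public class LazyFutureExample {
    public static void main(String[] args) throws Exception {
        AtomicInteger counter = new AtomicInteger();

        // resolve() runs at most once, and only on the first get() call.
        Future<Integer> lazy = new LazyFuture<Integer>() {
            @Override
            protected Integer resolve() {
                return counter.incrementAndGet();
            }
        };

        System.out.println(counter.get()); // 0 -- nothing resolved yet
        System.out.println(lazy.get());    // 1 -- resolved on first access
        System.out.println(lazy.get());    // 1 -- cached thereafter
    }
}
```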
4 changes: 2 additions & 2 deletions docs/contributing.md
@@ -1,6 +1,6 @@
 # Contributing

-* Star the project on [Github](https://github.com/testcontainers/testcontainers-java) and help spread the word :)
+* Star the project on [GitHub](https://github.com/testcontainers/testcontainers-java) and help spread the word :)
 * Join our [Slack workspace](http://slack.testcontainers.org)
 * [Start a discussion](https://github.com/testcontainers/testcontainers-java/discussions) if you have an idea, find a possible bug or have a general question.
 * Contribute improvements or fixes using a [Pull Request](https://github.com/testcontainers/testcontainers-java/pulls). If you're going to contribute, thank you! Please just be sure to:
@@ -97,7 +97,7 @@ We will evaluate incubating modules periodically, and remove the label when appr
 Since we generally get a lot of Dependabot PRs, we regularly combine them into single commits.
 For this, we are using the [gh-combine-prs](https://github.com/rnorth/gh-combine-prs) extension for [GitHub CLI](https://cli.github.com/).

-The whole process is as follow:
+The whole process is as follows:

 1. Check that all open Dependabot PRs did succeed their build. If they did not succeed, trigger a rerun if the cause were external factors or else document the reason if obvious.
 2. Run the extension from an up-to-date local `main` branch: `gh combine-prs --query "author:app/dependabot"`
2 changes: 1 addition & 1 deletion docs/contributing_docs.md
@@ -84,7 +84,7 @@ foo.doSomething();

 Note that:

-* Any code included will be have its indentation reduced
+* Any code included will have its indentation reduced
 * Every line in the source file will be searched for an instance of the token (e.g. `doFoo`). If more than one line
   includes that token, then potentially more than one block could be targeted for inclusion. It is advisable to use a
   specific, unique token to avoid unexpected behaviour.
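As an illustration of the token mechanism described above (the class, method, and `doFoo` token are invented for this sketch), a source block is typically fenced by marker comments that `inside_block:` then selects:

```java
public class CodeIncludeExample {
    static class Foo {
        void doSomething() {
            // no-op, for illustration only
        }
    }

    public static void main(String[] args) {
        // doFoo {
        Foo foo = new Foo();
        foo.doSomething();
        // }
    }
}
```

The markdown side would then reference it along the lines of `[Example](path/to/CodeIncludeExample.java) inside_block:doFoo`, with the path being hypothetical here.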
8 changes: 4 additions & 4 deletions docs/modules/azure.md
@@ -19,7 +19,7 @@ CosmosDBEmulatorContainer | [mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emula
 Start Azurite Emulator during a test:

 <!--codeinclude-->
-[Starting a Azurite container](../../modules/azure/src/test/java/org/testcontainers/azure/AzuriteContainerTest.java) inside_block:emulatorContainer
+[Starting an Azurite container](../../modules/azure/src/test/java/org/testcontainers/azure/AzuriteContainerTest.java) inside_block:emulatorContainer
 <!--/codeinclude-->

 !!! note
@@ -29,11 +29,11 @@ If the tested application needs to use more than one set of credentials, the con
 Please see some examples below.

 <!--codeinclude-->
-[Starting a Azurite Blob container with one account and two keys](../../modules/azure/src/test/java/org/testcontainers/azure/AzuriteContainerTest.java) inside_block:withTwoAccountKeys
+[Starting an Azurite Blob container with one account and two keys](../../modules/azure/src/test/java/org/testcontainers/azure/AzuriteContainerTest.java) inside_block:withTwoAccountKeys
 <!--/codeinclude-->

 <!--codeinclude-->
-[Starting a Azurite Blob container with more accounts and keys](../../modules/azure/src/test/java/org/testcontainers/azure/AzuriteContainerTest.java) inside_block:withMoreAccounts
+[Starting an Azurite Blob container with more accounts and keys](../../modules/azure/src/test/java/org/testcontainers/azure/AzuriteContainerTest.java) inside_block:withMoreAccounts
 <!--/codeinclude-->

 #### Using with Blob
@@ -77,7 +77,7 @@ Build Azure Table client:
 Start Azure CosmosDB Emulator during a test:

 <!--codeinclude-->
-[Starting a Azure CosmosDB Emulator container](../../modules/azure/src/test/java/org/testcontainers/containers/CosmosDBEmulatorContainerTest.java) inside_block:emulatorContainer
+[Starting an Azure CosmosDB Emulator container](../../modules/azure/src/test/java/org/testcontainers/containers/CosmosDBEmulatorContainerTest.java) inside_block:emulatorContainer
 <!--/codeinclude-->

 Prepare KeyStore to use for SSL.
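For context, a minimal sketch of starting Azurite from a test. The image tag is a placeholder, and `getConnectionString()` is assumed from the module's documented accessors:

```java
import org.testcontainers.azure.AzuriteContainer;

public class AzuriteExample {
    public static void main(String[] args) {
        // Start the Azurite storage emulator; pin the tag your tests target.
        try (AzuriteContainer azurite =
                 new AzuriteContainer("mcr.microsoft.com/azure-storage/azurite:3.33.0")) {
            azurite.start();
            // The connection string feeds an Azure Storage SDK client
            // (BlobServiceClientBuilder, QueueServiceClientBuilder, ...).
            System.out.println(azurite.getConnectionString());
        }
    }
}
```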
2 changes: 1 addition & 1 deletion docs/modules/databases/jdbc.md
@@ -135,7 +135,7 @@ By default database container is being stopped as soon as last connection is clo

 `jdbc:tc:mysql:8.0.36:///databasename?TC_DAEMON=true`

-With this parameter database container will keep running even when there're no open connections.
+With this parameter, the database container will keep running even when there are no open connections.
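Since this hunk documents a connection-lifecycle switch, a short sketch of the `TC_DAEMON` behaviour may help. The credentials are the Testcontainers JDBC defaults and the query is illustrative:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TcDaemonExample {
    public static void main(String[] args) throws Exception {
        // The Testcontainers JDBC driver intercepts jdbc:tc: URLs and starts
        // the container on first connection. With TC_DAEMON=true the container
        // keeps running instead of stopping when the last connection closes.
        String url = "jdbc:tc:mysql:8.0.36:///databasename?TC_DAEMON=true";
        try (Connection conn = DriverManager.getConnection(url, "test", "test");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            rs.next();
            System.out.println(rs.getInt(1));
        }
        // The MySQL container is still running here, ready for the next
        // connection, and is cleaned up when the JVM exits.
    }
}
```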
2 changes: 1 addition & 1 deletion docs/modules/kafka.md
@@ -43,7 +43,7 @@ Now your tests or any other process running on your machine can get access to ru
 Create a `ConfluentKafkaContainer` to use it in your tests:

 <!--codeinclude-->
-[Creating a ConlfuentKafkaContainer](../../modules/kafka/src/test/java/org/testcontainers/kafka/ConfluentKafkaContainerTest.java) inside_block:constructorWithVersion
+[Creating a ConfluentKafkaContainer](../../modules/kafka/src/test/java/org/testcontainers/kafka/ConfluentKafkaContainerTest.java) inside_block:constructorWithVersion
 <!--/codeinclude-->

 ### Using org.testcontainers.kafka.KafkaContainer
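For context, a minimal sketch of the constructor-with-version pattern the fixed caption refers to; the image tag is a placeholder:

```java
import org.testcontainers.kafka.ConfluentKafkaContainer;

public class KafkaExample {
    public static void main(String[] args) {
        // Pin a specific Confluent Platform version via the image tag.
        try (ConfluentKafkaContainer kafka =
                 new ConfluentKafkaContainer("confluentinc/cp-kafka:7.4.0")) {
            kafka.start();
            // Wire this value into producer/consumer bootstrap.servers configs.
            System.out.println(kafka.getBootstrapServers());
        }
    }
}
```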
2 changes: 1 addition & 1 deletion docs/modules/typesense.md
@@ -4,7 +4,7 @@ Testcontainers module for [Typesense](https://hub.docker.com/r/typesense/typesen

 ## TypesenseContainer's usage examples

-You can start an Typesense container instance from any Java application by using:
+You can start a Typesense container instance from any Java application by using:

 <!--codeinclude-->
 [Typesense container](../../modules/typesense/src/test/java/org/testcontainers/typesense/TypesenseContainerTest.java) inside_block:container
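For context, a minimal sketch of starting the container. The image tag is a placeholder, and only generic `GenericContainer` accessors are used, since the module-specific helpers are not shown in this diff:

```java
import org.testcontainers.typesense.TypesenseContainer;

public class TypesenseExample {
    public static void main(String[] args) {
        try (TypesenseContainer typesense =
                 new TypesenseContainer("typesense/typesense:27.1")) {
            typesense.start();
            // Base URL for a Typesense client pointed at the container.
            System.out.println("http://" + typesense.getHost() + ":" + typesense.getFirstMappedPort());
        }
    }
}
```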
2 changes: 1 addition & 1 deletion docs/test_framework_integration/junit_5.md
@@ -62,7 +62,7 @@ Since this module has a dependency onto JUnit Jupiter and on Testcontainers core
 has a dependency onto JUnit 4.x, projects using this module will end up with both, JUnit Jupiter
 and JUnit 4.x in the test classpath.

-This extension has only be tested with sequential test execution. Using it with parallel test execution is unsupported and may have unintended side effects.
+This extension has only been tested with sequential test execution. Using it with parallel test execution is unsupported and may have unintended side effects.

 ## Adding Testcontainers JUnit 5 support to your project dependencies
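To ground the sequential-execution note, a minimal sketch of typical extension usage; the image and command are placeholders:

```java
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

// The extension finds @Container fields and manages their lifecycle
// around each test, which is where sequential execution is assumed.
@Testcontainers
class SequentialUsageTest {

    @Container
    private GenericContainer<?> container = new GenericContainer<>(DockerImageName.parse("alpine:3.17"))
        .withCommand("top");

    @Test
    void containerIsRunning() {
        Assertions.assertTrue(container.isRunning());
    }
}
```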
@@ -250,7 +250,7 @@ commit_failure_policy: stop
 #
 # Valid values are either "auto" (omitting the value) or a value greater 0.
 #
-# Note that specifying a too large value will result in long running GCs and possbily
+# Note that specifying a too large value will result in long running GCs and possibly
 # out-of-memory errors. Keep the value at a small fraction of the heap.
 #
 # If you constantly see "prepared statements discarded in the last minute because
@@ -259,7 +259,7 @@ commit_failure_policy: stop
 # i.e. use bind markers for variable parts.
 #
 # Do only change the default value, if you really have more prepared statements than
-# fit in the cache. In most cases it is not neccessary to change this value.
+# fit in the cache. In most cases it is not necessary to change this value.
 # Constantly re-preparing statements is a performance penalty.
 #
 # Default value ("auto") is 1/256th of the heap or 10MB, whichever is greater
@@ -309,7 +309,7 @@ key_cache_save_period: 14400
 # Fully off-heap row cache implementation (default).
 #
 # org.apache.cassandra.cache.SerializingCacheProvider
-# This is the row cache implementation availabile
+# This is the row cache implementation available
 # in previous releases of Cassandra.
 # row_cache_class_name: org.apache.cassandra.cache.OHCProvider

@@ -444,7 +444,7 @@ concurrent_counter_writes: 32
 concurrent_materialized_view_writes: 32

 # Maximum memory to use for sstable chunk cache and buffer pooling.
-# 32MB of this are reserved for pooling buffers, the rest is used as an
+# 32MB of this are reserved for pooling buffers, the rest is used as a
 # cache that holds uncompressed sstable chunks.
 # Defaults to the smaller of 1/4 of heap or 512MB. This pool is allocated off-heap,
 # so is in addition to the memory allocated for heap. The cache also has on-heap
@@ -553,7 +553,7 @@ memtable_allocation_type: heap_buffers
 # new space for cdc-tracked tables has been made available. Default to 250ms
 # cdc_free_space_check_interval_ms: 250

-# A fixed memory pool size in MB for for SSTable index summaries. If left
+# A fixed memory pool size in MB for SSTable index summaries. If left
 # empty, this will default to 5% of the heap size. If the memory usage of
 # all index summaries exceeds this limit, SSTables with low read rates will
 # shrink their index summaries in order to meet this limit. However, this
@@ -778,7 +778,7 @@ auto_snapshot: true
 # number of rows per partition. The competing goals are these:
 #
 # - a smaller granularity means more index entries are generated
-# and looking up rows withing the partition by collation column
+# and looking up rows within the partition by collation column
 # is faster
 # - but, Cassandra will keep the collation index in memory for hot
 # rows (as part of the key cache), so a larger granularity means
@@ -1109,7 +1109,7 @@ windows_timer_interval: 1

 # Enables encrypting data at-rest (on disk). Different key providers can be plugged in, but the default reads from
 # a JCE-style keystore. A single keystore can hold multiple keys, but the one referenced by
-# the "key_alias" is the only key that will be used for encrypt opertaions; previously used keys
+# the "key_alias" is the only key that will be used for encrypt operations; previously used keys
 # can still (and should!) be in the keystore and will be used on decrypt operations
 # (to handle the case of key rotation).
 #
@@ -1143,7 +1143,7 @@ transparent_data_encryption_options:
 # tombstones seen in memory so we can return them to the coordinator, which
 # will use them to make sure other replicas also know about the deleted rows.
 # With workloads that generate a lot of tombstones, this can cause performance
-# problems and even exaust the server heap.
+# problems and even exhaust the server heap.
 # (http://www.datastax.com/dev/blog/cassandra-anti-patterns-queues-and-queue-like-datasets)
 # Adjust the thresholds here if you understand the dangers and want to
 # scan more tombstones anyway. These thresholds may also be adjusted at runtime
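These cassandra.yaml hunks look like test fixtures; for context, a sketch of how such an override directory is typically wired into a `CassandraContainer`. The classpath directory name and image tag are placeholders:

```java
import org.testcontainers.containers.CassandraContainer;

public class CassandraConfigExample {
    public static void main(String[] args) {
        // Raw type used deliberately: CassandraContainer's self-type parameter
        // differs across Testcontainers versions.
        try (CassandraContainer cassandra = new CassandraContainer("cassandra:3.11.2")) {
            // Points at a classpath directory containing a cassandra.yaml
            // such as the one edited in this diff.
            cassandra.withConfigurationOverride("cassandra-test-config");
            cassandra.start();
            System.out.println(cassandra.getHost());
        }
    }
}
```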
@@ -250,7 +250,7 @@ commit_failure_policy: stop
 #
 # Valid values are either "auto" (omitting the value) or a value greater 0.
 #
-# Note that specifying a too large value will result in long running GCs and possbily
+# Note that specifying a too large value will result in long running GCs and possibly
 # out-of-memory errors. Keep the value at a small fraction of the heap.
 #
 # If you constantly see "prepared statements discarded in the last minute because
@@ -259,7 +259,7 @@ commit_failure_policy: stop
 # i.e. use bind markers for variable parts.
 #
 # Do only change the default value, if you really have more prepared statements than
-# fit in the cache. In most cases it is not neccessary to change this value.
+# fit in the cache. In most cases it is not necessary to change this value.
 # Constantly re-preparing statements is a performance penalty.
 #
 # Default value ("auto") is 1/256th of the heap or 10MB, whichever is greater
@@ -309,7 +309,7 @@ key_cache_save_period: 14400
 # Fully off-heap row cache implementation (default).
 #
 # org.apache.cassandra.cache.SerializingCacheProvider
-# This is the row cache implementation availabile
+# This is the row cache implementation available
 # in previous releases of Cassandra.
 # row_cache_class_name: org.apache.cassandra.cache.OHCProvider

@@ -444,7 +444,7 @@ concurrent_counter_writes: 32
 concurrent_materialized_view_writes: 32

 # Maximum memory to use for sstable chunk cache and buffer pooling.
-# 32MB of this are reserved for pooling buffers, the rest is used as an
+# 32MB of this are reserved for pooling buffers, the rest is used as a
 # cache that holds uncompressed sstable chunks.
 # Defaults to the smaller of 1/4 of heap or 512MB. This pool is allocated off-heap,
 # so is in addition to the memory allocated for heap. The cache also has on-heap
@@ -553,7 +553,7 @@ memtable_allocation_type: heap_buffers
 # new space for cdc-tracked tables has been made available. Default to 250ms
 # cdc_free_space_check_interval_ms: 250

-# A fixed memory pool size in MB for for SSTable index summaries. If left
+# A fixed memory pool size in MB for SSTable index summaries. If left
 # empty, this will default to 5% of the heap size. If the memory usage of
 # all index summaries exceeds this limit, SSTables with low read rates will
 # shrink their index summaries in order to meet this limit. However, this
@@ -778,7 +778,7 @@ auto_snapshot: true
 # number of rows per partition. The competing goals are these:
 #
 # - a smaller granularity means more index entries are generated
-# and looking up rows withing the partition by collation column
+# and looking up rows within the partition by collation column
 # is faster
 # - but, Cassandra will keep the collation index in memory for hot
 # rows (as part of the key cache), so a larger granularity means
@@ -1109,7 +1109,7 @@ windows_timer_interval: 1

 # Enables encrypting data at-rest (on disk). Different key providers can be plugged in, but the default reads from
 # a JCE-style keystore. A single keystore can hold multiple keys, but the one referenced by
-# the "key_alias" is the only key that will be used for encrypt opertaions; previously used keys
+# the "key_alias" is the only key that will be used for encrypt operations; previously used keys
 # can still (and should!) be in the keystore and will be used on decrypt operations
 # (to handle the case of key rotation).
 #
@@ -1143,7 +1143,7 @@ transparent_data_encryption_options:
 # tombstones seen in memory so we can return them to the coordinator, which
 # will use them to make sure other replicas also know about the deleted rows.
 # With workloads that generate a lot of tombstones, this can cause performance
-# problems and even exaust the server heap.
+# problems and even exhaust the server heap.
 # (http://www.datastax.com/dev/blog/cassandra-anti-patterns-queues-and-queue-like-datasets)
 # Adjust the thresholds here if you understand the dangers and want to
 # scan more tombstones anyway. These thresholds may also be adjusted at runtime
@@ -91,7 +91,7 @@ private void runConsulCommands() {
     /**
      * Run consul commands using the consul cli.
      *
-     * Useful for enableing more secret engines like:
+     * Useful for enabling more secret engines like:
      * <pre>
      * .withConsulCommand("secrets enable pki")
      * .withConsulCommand("secrets enable transit")
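A short usage sketch of `withConsulCommand`; the image tag and the `kv put` command are placeholders (each entry is run via the consul CLI once the container is up):

```java
import org.testcontainers.consul.ConsulContainer;
import org.testcontainers.utility.DockerImageName;

public class ConsulCommandExample {
    public static void main(String[] args) {
        try (ConsulContainer consul =
                 new ConsulContainer(DockerImageName.parse("hashicorp/consul:1.15"))
                     .withConsulCommand("kv put config/testing1 value123")) {
            consul.start();
            // HTTP API endpoint of the started agent.
            System.out.println(consul.getHost() + ":" + consul.getMappedPort(8500));
        }
    }
}
```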
@@ -40,7 +40,7 @@ public void testCommandOverride() throws SQLException {

         ResultSet resultSet = performQuery(cratedb, "select name from sys.cluster");
         String result = resultSet.getString(1);
-        assertThat(result).as("cluster name should be overriden").isEqualTo("testcontainers");
+        assertThat(result).as("cluster name should be overridden").isEqualTo("testcontainers");
     }
 }
@@ -79,7 +79,7 @@ protected Set<Integer> getLivenessCheckPorts() {

     @Override
     protected void configure() {
-        // If license was not accepted programatically, check if it was accepted via resource file
+        // If license was not accepted programmatically, check if it was accepted via resource file
         if (!getEnvMap().containsKey("LICENSE")) {
             LicenseAcceptance.assertLicenseAccepted(this.getDockerImageName());
             acceptLicense();
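For context, a sketch of the programmatic path this comment refers to, assuming the Db2 module's `acceptLicense()` fluent method and the classic `org.testcontainers.containers` package layout; the image tag is a placeholder:

```java
import org.testcontainers.containers.Db2Container;
import org.testcontainers.utility.DockerImageName;

public class LicenseExample {
    public static void main(String[] args) {
        // acceptLicense() sets the LICENSE env var up front, so the
        // classpath-based check in configure() above is skipped.
        try (Db2Container db2 =
                 new Db2Container(DockerImageName.parse("ibmcom/db2:11.5.0.0a")).acceptLicense()) {
            db2.start();
            System.out.println(db2.getJdbcUrl());
        }
    }
}
```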
@@ -79,7 +79,7 @@ public void testMariaDBWithCommandOverride() throws SQLException {
         ResultSet resultSet = performQuery(mariadbCustomConfig, "show variables like 'auto_increment_increment'");
         String result = resultSet.getString("Value");

-        assertThat(result).as("Auto increment increment should be overriden by command line").isEqualTo("10");
+        assertThat(result).as("Auto increment increment should be overridden by command line").isEqualTo("10");
     }
 }
@@ -72,7 +72,7 @@ public Set<Integer> getLivenessCheckPortNumbers() {

     @Override
     protected void configure() {
-        // If license was not accepted programatically, check if it was accepted via resource file
+        // If license was not accepted programmatically, check if it was accepted via resource file
         if (!getEnvMap().containsKey("ACCEPT_EULA")) {
             LicenseAcceptance.assertLicenseAccepted(this.getDockerImageName());
             acceptLicense();
@@ -91,7 +91,7 @@ public void testCommandOverride() throws SQLException {
         ResultSet resultSet = performQuery(mysqlCustomConfig, "show variables like 'auto_increment_increment'");
         String result = resultSet.getString("Value");

-        assertThat(result).as("Auto increment increment should be overriden by command line").isEqualTo("42");
+        assertThat(result).as("Auto increment increment should be overridden by command line").isEqualTo("42");
     }
 }
@@ -35,7 +35,7 @@ public void testCommandOverride() throws SQLException {
             "SELECT current_setting('max_connections')"
         );
         String result = resultSet.getString(1);
-        assertThat(result).as("max_connections should be overriden").isEqualTo("42");
+        assertThat(result).as("max_connections should be overridden").isEqualTo("42");
     }
 }

@@ -54,7 +54,7 @@ public void testUnsetCommand() throws SQLException {
             "SELECT current_setting('max_connections')"
         );
         String result = resultSet.getString(1);
-        assertThat(result).as("max_connections should not be overriden").isNotEqualTo("42");
+        assertThat(result).as("max_connections should not be overridden").isNotEqualTo("42");
     }
 }
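Finally, a sketch of the command-override pattern these database tests share; the image tag is a placeholder, and the flag mirrors the test's assertion:

```java
import org.testcontainers.containers.PostgreSQLContainer;

public class CommandOverrideExample {
    public static void main(String[] args) {
        // Overriding the container command replaces the server's startup
        // flags; here max_connections is pinned to 42, matching the test.
        try (PostgreSQLContainer<?> postgres =
                 new PostgreSQLContainer<>("postgres:16-alpine")
                     .withCommand("postgres -c max_connections=42")) {
            postgres.start();
            System.out.println(postgres.getJdbcUrl());
        }
    }
}
```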