[CodeStyle][Typos][D-[24-30]] Fix typos(defferent,differenciation,diffrent,differnt,difficults,dimensinal,dimenstions,demension,dimention,dimenstion) #70570

Merged 1 commit on Jan 2, 2025
10 changes: 0 additions & 10 deletions _typos.toml
@@ -49,16 +49,6 @@ deciamls = 'deciamls'
decalared = 'decalared'
decompse = 'decompse'
decompositing = 'decompositing'
defferent = 'defferent'
differenciation = 'differenciation'
differnt = 'differnt'
diffrent = 'diffrent'
difficults = 'difficults'
dimensinal = 'dimensinal'
dimenstions = 'dimenstions'
dimenstion = 'dimenstion'
dimention = 'dimention'
demension = 'demension'
Direcly = 'Direcly'
direcly = 'direcly'
direcotory = 'direcotory'
2 changes: 1 addition & 1 deletion paddle/cinn/operator_fusion/graph_transformer/operation.h
@@ -113,7 +113,7 @@ struct MergeReduceTreeAndTrivialOperation {
merged_node->set_fusion_iters(
graph->iters_fusion_policy()->SingleDownstreamItersFusion(node,
downstream));
// TODO(huangjiyi): Support relationship analysis for defferent iters, for
// TODO(huangjiyi): Support relationship analysis for different iters, for
// example the input iters and output iters of reshape op.
auto sig = merged_node->fusion_iters();
const auto upstream_iters = node->fusion_iters();
@@ -19,7 +19,7 @@
namespace cinn::fusion {

// TODO(@wuzhanfei) ops like a = b + b, the Value b is used by AddOp twice
// Currently we can not mark them as two differnt DimUsage
// Currently we can not mark them as two different DimUsage

struct DimUsage {
pir::Value v_;
10 changes: 5 additions & 5 deletions paddle/common/flags.cc
@@ -1106,25 +1106,25 @@ PHI_DEFINE_EXPORTED_string(cinn_subgraph_graphviz_dir,
* Since Version: 3.0 Beta
* Value Range: bool, default=false
* Example: FLAGS_cinn_specify_input_dynamic_dim=true will use file set by
* FLAGS_cinn_input_dynamic_dim_spec_file to specify input dynamic dimention.
* FLAGS_cinn_input_dynamic_dim_spec_file to specify input dynamic dimension.
*/
PHI_DEFINE_EXPORTED_bool(cinn_specify_input_dynamic_dim,
false,
"Whether to specify input dynamic dimention.");
"Whether to specify input dynamic dimension.");

/*
* CINN related FLAG
* Name: FLAGS_cinn_input_dynamic_dim_spec_file
* Since Version: 3.0 Beta
* Value Range: string, default=""
* Example: FLAGS_cinn_input_dynamic_dim_spec_file="./config.json",
* FLAGS_cinn_specify_input_dynamic_dim=true would use input dynamic dimention
* predefined in ./config.json to specify input dynamic dimention.
* FLAGS_cinn_specify_input_dynamic_dim=true would use input dynamic dimension
* predefined in ./config.json to specify input dynamic dimension.
*/
PHI_DEFINE_EXPORTED_string(
cinn_input_dynamic_dim_spec_file,
"",
"File path of predefined input dynamic dimention specification.");
"File path of predefined input dynamic dimension specification.");

#endif

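For context on how the two flags above interact, here is a minimal usage sketch in Python; it assumes the exported FLAGS_* values are picked up from environment variables before Paddle initializes CINN, and that ./config.json follows whatever spec-file schema CINN expects (not shown in this diff):

import os

# Hypothetical setup: enable the feature and point it at a spec file.
# Both flag names come from the hunk above; the env-var mechanism is an assumption.
os.environ["FLAGS_cinn_specify_input_dynamic_dim"] = "true"
os.environ["FLAGS_cinn_input_dynamic_dim_spec_file"] = "./config.json"

import paddle  # import after setting the flags so they are read at startup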
@@ -428,7 +428,7 @@ pir::Attribute AttrTypeReader::ReadPaddleOperatorAttr(
std::vector<int64_t>>(attr_json, ctx);
} else if (attr_name == paddle::dialect::ScalarAttribute::name()) {
VLOG(8) << "Parse ScalarAttribute .";
// this func's return type is pir::Attribute which is diffrent
// this func's return type is pir::Attribute which is different
// from paddle::dialect::ScalarAttribute
return pir::deserializeAttrFromJson_scalarAttr(attr_json, ctx);
} else if (attr_name == paddle::dialect::DataTypeAttribute::name()) {
4 changes: 2 additions & 2 deletions paddle/fluid/pybind/imperative.cc
@@ -1162,7 +1162,7 @@ void BindImperative(py::module *m_ptr) {

count (Tensor): The count tensor, and the data type should be `int64` currently.
Besides, `count` should be placed on CPUPlace. The shape of `count`
should be one-dimensinal.
should be one-dimensional.

Examples:
.. code-block:: python
@@ -1395,7 +1395,7 @@ void BindImperative(py::module *m_ptr) {

count (Tensor): The count tensor, and the data type should be `int64` currently.
Besides, `count` should be placed on CPUPlace. The shape of `count`
should be one-dimensinal.
should be one-dimensional.

Examples:
.. code-block:: python
2 changes: 1 addition & 1 deletion paddle/phi/backends/onednn/onednn_reuse.h
@@ -1751,7 +1751,7 @@ class PoolingOneDNNHandler
const OneDNNContext& dev_ctx, const std::string& unique_name) {
dnnl::memory::desc workspace_md = this->fwd_pd_->workspace_desc();
// Pooling Workspace has to be passed to Grad op that
// may be executed by diffrent thread, hence
// may be executed by different thread, hence
// for that one we use key that does not contain TID
std::string workspace_key = CreateKey(dev_ctx,
workspace_md.get_dims(),
2 changes: 1 addition & 1 deletion paddle/phi/infermeta/spmd_rules/utils.cc
@@ -299,7 +299,7 @@ void AlignDimsSharding(std::vector<TensorDistAttr>* input_attrs_ptr,
return false;
};

// a dim can not be sharded twice along diffrent mesh_dim
// a dim can not be sharded twice along different mesh_dim
std::set<char> sharded_axis;
std::map<int32_t, ReduceType> partial_dim_to_type;
std::map<int32_t, char> mesh_dim_to_axis;
2 changes: 1 addition & 1 deletion paddle/phi/infermeta/unary.cc
@@ -4773,7 +4773,7 @@ void SumInferMeta(const MetaTensor& x,
}

void DetInferMeta(const MetaTensor& x, MetaTensor* out, MetaConfig config) {
// remove the last two demension
// remove the last two dimensions
auto out_dim = common::vectorize<int>(x.dims());
out_dim.pop_back();
out_dim.pop_back();
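As a quick illustration of the shape rule described by the comment above (the determinant consumes the trailing two dims and keeps the batch dims), a NumPy analogue, offered only as a sketch:

import numpy as np

x = np.random.rand(3, 5, 4, 4)   # batch of 3*5 square matrices
print(np.linalg.det(x).shape)    # (3, 5): the last two dims are removed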
12 changes: 6 additions & 6 deletions paddle/phi/kernels/gpu/flash_attn_grad_kernel.cu
@@ -143,11 +143,11 @@ static void kvReduceForGQA(const Context& ctx,
PADDLE_ENFORCE_EQ(
dk->strides()[2],
1,
common::errors::InvalidArgument("headdim dimention must be contiguous"));
common::errors::InvalidArgument("headdim dimension must be contiguous"));
PADDLE_ENFORCE_EQ(
dk_tmp.strides()[3],
1,
common::errors::InvalidArgument("headdim dimention must be contiguous"));
common::errors::InvalidArgument("headdim dimension must be contiguous"));
const int64_t reduceDimSize = dk_tmp.dims()[2];
const size_t blockNum =
std::min((static_cast<int64_t>(dk_tmp.dims()[0] + 31) / 32),
@@ -177,19 +177,19 @@ static void kvReduceBatchedForGQA(const Context& ctx,
PADDLE_ENFORCE_EQ(
dk->strides()[3],
1,
common::errors::InvalidArgument("headdim dimention must be contiguous"));
common::errors::InvalidArgument("headdim dimension must be contiguous"));
PADDLE_ENFORCE_EQ(
dk_tmp.strides()[4],
1,
common::errors::InvalidArgument("headdim dimention must be contiguous"));
common::errors::InvalidArgument("headdim dimension must be contiguous"));
PADDLE_ENFORCE_EQ(dk->strides()[0],
dk->strides()[1] * dk->dims()[1],
common::errors::InvalidArgument(
"batchsize dimention must be contiguous"));
"batchsize dimension must be contiguous"));
PADDLE_ENFORCE_EQ(dk_tmp.strides()[0],
dk_tmp.strides()[1] * dk_tmp.dims()[1],
common::errors::InvalidArgument(
"batchsize dimention must be contiguous"));
"batchsize dimension must be contiguous"));
const int64_t reduceDimSize = dk_tmp.dims()[3];
const size_t blockNum = std::min(
(static_cast<int64_t>(dk_tmp.dims()[0] * dk_tmp.dims()[1] + 31) / 32),
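The enforcement above treats a dimension as contiguous when its stride is one element, and expects the batch stride to equal the next dimension's stride times its extent. A small NumPy illustration of the same stride arithmetic (NumPy reports strides in bytes, so they are compared against itemsize; this is only an analogy, not Paddle's stride API):

import numpy as np

dk = np.zeros((2, 8, 3, 64), dtype=np.float16)       # hypothetical [batch, seqlen, num_heads, head_dim]
print(dk.strides[-1] == dk.itemsize)                  # True: head_dim is contiguous
print(dk.strides[0] == dk.strides[1] * dk.shape[1])   # True: batch stride = seqlen stride * seqlen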
2 changes: 1 addition & 1 deletion paddle/phi/kernels/gpu/flash_attn_utils.h
@@ -100,7 +100,7 @@ static std::vector<int64_t> GetAttnSparseMaskDims(
rank,
4,
common::errors::InvalidArgument(
"The number of dimenstions of startend_row_indices is expected to "
"The number of dimensions of startend_row_indices is expected to "
"be greater or equal to 4, but received %d. The shape of "
"startend_row_indices is [%s]",
rank,
12 changes: 6 additions & 6 deletions test/cpp/inference/api/full_pascalvoc_test_preprocess.py
@@ -81,7 +81,7 @@ def convert_pascalvoc_local2bin(args):

boxes = []
lbls = []
difficults = []
difficulties = []
object_nums = []

for line in lines:
@@ -127,12 +127,12 @@ def convert_pascalvoc_local2bin(args):

lbls.extend(bbox_labels[:, 0])
boxes.extend(bbox_labels[:, 1:5])
difficults.extend(bbox_labels[:, -1])
difficulties.extend(bbox_labels[:, -1])

f1.write(np.array(object_nums).astype('uint64').tobytes())
f1.write(np.array(lbls).astype('int64').tobytes())
f1.write(np.array(boxes).astype('float32').tobytes())
f1.write(np.array(difficults).astype('int64').tobytes())
f1.write(np.array(difficulties).astype('int64').tobytes())
f1.close()

object_nums_sum = sum(object_nums)
@@ -168,7 +168,7 @@ def convert_pascalvoc_tar2bin(tar_path, data_out_path):
gt_labels = {}
boxes = []
lbls = []
difficults = []
difficulties = []
object_nums = []

# map label to number (index)
@@ -254,7 +254,7 @@ def convert_pascalvoc_tar2bin(tar_path, data_out_path):
continue
lbls.extend(bbox_labels[:, 0])
boxes.extend(bbox_labels[:, 1:5])
difficults.extend(bbox_labels[:, -1])
difficulties.extend(bbox_labels[:, -1])

if line_idx % per_percentage:
print_processbar(line_idx / per_percentage)
@@ -265,7 +265,7 @@ def convert_pascalvoc_tar2bin(tar_path, data_out_path):
f1.write(np.array(object_nums).astype('uint64').tobytes())
f1.write(np.array(lbls).astype('int64').tobytes())
f1.write(np.array(boxes).astype('float32').tobytes())
f1.write(np.array(difficults).astype('int64').tobytes())
f1.write(np.array(difficulties).astype('int64').tobytes())
f1.close()
print_processbar(100)
print("Conversion finished!\n")
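The writes above fix the dtype order of the trailing label block (per-image object counts as uint64, then labels as int64, boxes as float32 with four coordinates each, then difficulties as int64). A hedged round-trip sketch of just that block, ignoring the image data and header the real file also contains:

import numpy as np

object_nums = np.array([2, 1], dtype='uint64')   # objects per image
lbls = np.array([3, 7, 11], dtype='int64')
boxes = np.random.rand(3, 4).astype('float32')
difficulties = np.array([0, 1, 0], dtype='int64')

buf = (object_nums.tobytes() + lbls.tobytes()
       + boxes.tobytes() + difficulties.tobytes())

counts = np.frombuffer(buf, dtype='uint64', count=len(object_nums))
total = int(counts.sum())
off = counts.nbytes
read_lbls = np.frombuffer(buf, dtype='int64', count=total, offset=off)
off += read_lbls.nbytes
read_boxes = np.frombuffer(buf, dtype='float32', count=total * 4, offset=off).reshape(total, 4)
off += read_boxes.nbytes
read_difficulties = np.frombuffer(buf, dtype='int64', count=total, offset=off)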
4 changes: 2 additions & 2 deletions test/cpp/pir/cinn/tile_config_performance_test.cc
@@ -631,13 +631,13 @@ void TestPerformanceForTileConfig(int spatial_left_bound,
best_tile_config_map,
iter_space_type);
} // end of r_dimension_lower loop
} // end of s_dimention_lower loop
} // end of s_dimension_lower loop
if (test_single_large) {
// (II) Test in the single large areas,
// i.e., S:[4096-32768]*R:[2-1024], S:[2-1024]*R:[4096-32768]
for (int s_dimension_lower = 2; s_dimension_lower < 1024;
s_dimension_lower += spatial_tile_config) {
// adjust the tile size for the spatial dimension dymaically
// adjust the tile size for the spatial dimension dynamically
spatial_tile_config =
get_tile_size_config_in_large_area(s_dimension_lower);
spatial_tile_width = (is_spatial_dynamic ? spatial_tile_config : 1);
2 changes: 1 addition & 1 deletion test/ipu/test_eval_model_ipu.py
@@ -120,7 +120,7 @@ def _test_optimizer(self, run_ipu=True):
return np.array(result)

def test(self):
# cpu and ipu dimenstion mismatch, cpu:(100, 1, 1), ipu:(100, 1)
# cpu and ipu dimension mismatch, cpu:(100, 1, 1), ipu:(100, 1)
ipu_loss = self._test_optimizer(True).flatten()
cpu_loss = self._test_optimizer(False).flatten()
self.assertTrue(ipu_loss[0] == ipu_loss[99])
2 changes: 1 addition & 1 deletion test/ipu/test_lr_sheduler_ipu.py
@@ -81,7 +81,7 @@ def run_model(self, run_ipu=True):
def test_training(self):
data = np.random.rand(1, 3, 10, 10).astype(np.float32)
self.feed = {'image': data}
# cpu and ipu dimenstion mismatch, cpu:(100, 1, 1), ipu:(100, 1)
# cpu and ipu dimension mismatch, cpu:(100, 1, 1), ipu:(100, 1)
ipu_loss = self.run_model(True).flatten()
cpu_loss = self.run_model(False).flatten()

2 changes: 1 addition & 1 deletion test/ipu/test_weight_decay_ipu.py
@@ -120,7 +120,7 @@ def exclude_fn(param):
return np.array(result)

def test(self):
# cpu and ipu dimenstion mismatch, cpu:(100, 1, 1), ipu:(100, 1)
# cpu and ipu dimension mismatch, cpu:(100, 1, 1), ipu:(100, 1)
ipu_loss = self._test_optimizer(True).flatten()
cpu_loss = self._test_optimizer(False).flatten()

2 changes: 1 addition & 1 deletion test/legacy_test/gradient_checker.py
@@ -541,7 +541,7 @@ def double_grad_check(


# TODO(jiabin): We currently support only triple grad check here, extend this to support
# higher order differenciation later.
# higher order differentiation later.


# check triple grad and two outputs of the triple Kernel
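For context, checks like double_grad_check compare analytic gradients against finite differences. A toy sketch of the underlying numeric idea for a second derivative of a scalar function (only an illustration, not the module's actual API):

import numpy as np

def f(x):
    return x ** 3

x, eps = 1.7, 1e-4
# central second difference approximates f''(x); for x**3 the exact value is 6*x
numeric_2nd = (f(x + eps) - 2 * f(x) + f(x - eps)) / eps**2
assert np.isclose(numeric_2nd, 6 * x, rtol=1e-3)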
2 changes: 1 addition & 1 deletion test/legacy_test/test_group_norm_op_v2.py
@@ -26,7 +26,7 @@ def group_norm_naive_for_general_dimension(
x, scale, bias, epsilon, groups, channel_last=False
):
# original version group norm only support 4-D tensor
# this function generalizes to support differnt dimensions tensor (>= 2-D)
# this function generalizes to support different dimensions tensor (>= 2-D)
if channel_last:
shape = list(range(x.ndim))
shape.insert(1, shape.pop(-1))
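The helper described above generalizes group normalization beyond 4-D inputs. A minimal NumPy sketch of the same computation, assuming a channels-first layout (the test's channel_last branch is not covered here):

import numpy as np

def naive_group_norm(x, scale, bias, epsilon, groups):
    # x: (N, C, ...) channels-first; scale, bias: (C,)
    N, C = x.shape[0], x.shape[1]
    g = x.reshape((N, groups, -1))               # fold C // groups and all spatial dims together
    mean = g.mean(axis=-1, keepdims=True)
    var = g.var(axis=-1, keepdims=True)
    y = ((g - mean) / np.sqrt(var + epsilon)).reshape(x.shape)
    affine_shape = (1, C) + (1,) * (x.ndim - 2)  # broadcast per-channel affine params
    return y * scale.reshape(affine_shape) + bias.reshape(affine_shape)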