Merge pull request #2242 from FedML-AI/charlie/dev/v0.7.0
[update] Upgrade the official website address: https://tensoropera.ai, and the brand: TensorOpera® AI
fedml-alex authored Dec 20, 2024
2 parents 63f2110 + 93f9760 commit ca5e764
Showing 45 changed files with 126 additions and 126 deletions.
4 changes: 2 additions & 2 deletions python/README.md
@@ -43,5 +43,5 @@ Other low-level APIs related to security and privacy are also supported. All alg

**utils**: Common utilities shared by other modules.

## About FedML, Inc.
https://FedML.ai
## About TensorOpera, Inc.
https://tensoropera.ai
14 changes: 7 additions & 7 deletions python/examples/README.md
@@ -2,14 +2,14 @@
# FEDML Examples (Including Prebuilt Jobs in Jobs Store)

- `FedML/python/examples` -- examples for training, deployment, and federated learning
- `FedML/python/examples/launch` -- examples for FEDML®Launch
- `FedML/python/examples/serving` -- examples for FEDML®Deploy
- `FedML/python/examples/train` -- examples for FEDML®Train
- `FedML/python/examples/cross_cloud` -- examples for FEDML®Train cross-cloud distributed training
- `FedML/python/examples/launch` -- examples for TensorOpera®Launch
- `FedML/python/examples/serving` -- examples for TensorOpera®Deploy
- `FedML/python/examples/train` -- examples for TensorOpera®Train
- `FedML/python/examples/cross_cloud` -- examples for TensorOpera®Train cross-cloud distributed training
- `FedML/python/examples/federate/prebuilt_jobs` -- examples for federated learning prebuilt jobs (FedCV, FedNLP, FedGraphNN, Healthcare, etc.)
- `FedML/python/examples/federate/cross_silo` -- examples for cross-silo federated learning
- `FedML/python/examples/federate/cross_device` -- examples for cross-device federated learning
- `FedML/python/examples/federate/simulation` -- examples for federated learning simulation
- `FedML/python/examples/federate/security` -- examples for FEDML®Federate security related features
- `FedML/python/examples/federate/privacy` -- examples for FEDML®Federate privacy related features
- `FedML/python/examples/federate/federated_analytics` -- examples for FEDML®Federate federated analytics (FA)
- `FedML/python/examples/federate/security` -- examples for TensorOpera®Federate security related features
- `FedML/python/examples/federate/privacy` -- examples for TensorOpera®Federate privacy related features
- `FedML/python/examples/federate/federated_analytics` -- examples for TensorOpera®Federate federated analytics (FA)
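
To explore the example groups listed above locally, a minimal sketch (the repository URL is an assumption based on this PR's origin, FedML-AI/FedML):

```bash
# Clone the repository and browse the prebuilt examples
# (repository URL assumed from this PR's origin).
git clone https://github.com/FedML-AI/FedML.git
cd FedML/python/examples
ls launch serving train cross_cloud federate
```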
2 changes: 1 addition & 1 deletion python/examples/deploy/complex_example/README.md
@@ -16,7 +16,7 @@ Use -cf to indicate the configuration file.
curl -XPOST localhost:2345/predict -d '{"text": "Hello"}'
```

## Option 2: Deploy to the Cloud (Using fedml®launch platform)
## Option 2: Deploy to the Cloud (Using TensorOpera®launch platform)
- Uncomment the following line in config.yaml

For information about the configuration, please refer to fedml ® launch.
2 changes: 1 addition & 1 deletion python/examples/deploy/complex_example/config.yaml
@@ -15,7 +15,7 @@ environment_variables:
LOCAL_RANK: "0"

# If you do not have any GPU resource but want to serve the model
# Try FedML® Nexus AI Platform, and Uncomment the following lines.
# Try TensorOpera® Nexus AI Platform, and Uncomment the following lines.
# ------------------------------------------------------------
computing:
minimum_num_gpus: 1 # minimum # of GPUs to provision
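
As a concrete illustration of the "uncomment the computing section" step in the README above, here is a hedged sketch that writes a `computing` block using the field names that appear in the YAML snippets of this commit (the full schema may include more options, and the model name is illustrative):

```bash
# Sketch only: append an uncommented computing section to config.yaml.
# Field names (minimum_num_gpus, device_type, resource_type) are copied from snippets in this diff.
cat >> config.yaml <<'EOF'
computing:
  minimum_num_gpus: 1        # minimum number of GPUs to provision
  device_type: GPU           # options: GPU, CPU, hybrid
  resource_type: A100-80G    # check valid values with "fedml show-resource-type"
EOF

# Then create the model card from the updated config (command shown in the multi_service example).
fedml model create --name my_model --config_file config.yaml
```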
4 changes: 2 additions & 2 deletions python/examples/deploy/mnist/README.md
@@ -11,9 +11,9 @@ curl -XPOST localhost:2345/predict -d '{"arr":[$DATA]}'
#For $DATA, please check the request_input_example, it is a 28*28=784 float array
#Output:{"generated_text":"tensor([0.2333, 0.5296, 0.4350, 0.4537, 0.5424, 0.4583, 0.4803, 0.2862, 0.5507,\n 0.8683], grad_fn=<SigmoidBackward0>)"}
```
## Option 2: Deploy to the Cloud (Using fedml® launch platform)
## Option 2: Deploy to the Cloud (Using TensorOpera® launch platform)
Uncomment the following line in mnist.yaml,
for information about the configuration, please refer to fedml® launch.
for information about the configuration, please refer to TensorOpera® launch.
```yaml
# computing:
# minimum_num_gpus: 1
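
Since the request body above expects a 28*28=784-element float array, a quick way to build a smoke-test payload for the locally served model (a sketch; the endpoint and port are the local defaults shown above):

```bash
# Generate a 784-element zero array and POST it to the local MNIST endpoint.
DATA=$(python -c "print(','.join(['0.0'] * 784))")
curl -XPOST localhost:2345/predict -d "{\"arr\":[$DATA]}"
```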
2 changes: 1 addition & 1 deletion python/examples/deploy/mnist/mnist.yaml
@@ -5,7 +5,7 @@ data_cache_dir: ""
bootstrap: ""

# If you do not have any GPU resource but want to serve the model
# Try FedML® Nexus AI Platform, and Uncomment the following lines.
# Try TensorOpera® Nexus AI Platform, and Uncomment the following lines.
# ------------------------------------------------------------
computing:
minimum_num_gpus: 1 # minimum # of GPUs to provision
6 changes: 3 additions & 3 deletions python/examples/deploy/multi_service/README.md
@@ -15,7 +15,7 @@ fedml model create --name $model_name --config_file config.yaml
```

## On-premise Deploy
Register an account on the FedML website: https://fedml.ai
Register an account on the TensorOpera website: https://tensoropera.ai

You will have a user id and api key, which can be found in the profile page.

@@ -44,8 +44,8 @@ You will have a user id and api key, which can be found in the profile page.
```
- Result

See the deployment result in https://fedml.ai
See the deployment result in https://tensoropera.ai

- OPT2: Deploy - UI

Follow the instructions on https://fedml.ai
Follow the instructions on https://tensoropera.ai
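
A hedged sketch of the on-premise flow described above, using only commands that appear elsewhere in this diff (the actual on-premise deploy command is elided in this excerpt, so it is not reproduced here):

```bash
# Log this machine into the platform with the API key from your profile page.
fedml login -c <YOUR_API_KEY>

# Create the model card from the config shown at the top of this README.
fedml model create --name $model_name --config_file config.yaml

# After deploying (OPT1: CLI, OPT2: UI), check the result at https://tensoropera.ai
```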
2 changes: 1 addition & 1 deletion python/examples/deploy/quick_start/README.md
@@ -16,7 +16,7 @@ Use -cf to indicate the configuration file.
curl -XPOST localhost:2345/predict -d '{"text": "Hello"}'
```

## Option 2: Deploy to the Cloud (Using fedml®launch platform)
## Option 2: Deploy to the Cloud (Using TensorOpera®launch platform)
- Uncomment the following line in config.yaml

For information about the configuration, please refer to fedml ® launch.
2 changes: 1 addition & 1 deletion python/examples/deploy/scalellm-multi-engine/README.md
@@ -40,7 +40,7 @@ computing:
#device_type: CPU # options: GPU, CPU, hybrid
resource_type: A100-80G # e.g., A100-80G,
# please check the resource type list by "fedml show-resource-type"
# or visiting URL: https://fedml.ai/accelerator_resource_type
# or visiting URL: https://tensoropera.ai/accelerator_resource_type
```

```bash
2 changes: 1 addition & 1 deletion python/examples/deploy/scalellm/README.md
@@ -40,7 +40,7 @@ computing:
#device_type: CPU # options: GPU, CPU, hybrid
resource_type: A100-80G # e.g., A100-80G,
# please check the resource type list by "fedml show-resource-type"
# or visiting URL: https://fedml.ai/accelerator_resource_type
# or visiting URL: https://tensoropera.ai/accelerator_resource_type
```

```bash
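
Before filling in `resource_type`, the comment above suggests listing the valid values; a minimal check might look like this:

```bash
# List the available accelerator resource types, as suggested by the YAML comment above.
fedml show-resource-type
# The same list is published at https://tensoropera.ai/accelerator_resource_type
```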
2 changes: 1 addition & 1 deletion python/examples/deploy/streaming_response/README.md
@@ -16,7 +16,7 @@ Use -cf to indicate the configuration file.
curl -XPOST localhost:2345/predict -d '{"text": "Hello"}'
```

## Option 2: Deploy to the Cloud (Using fedml®launch platform)
## Option 2: Deploy to the Cloud (Using TensorOpera®launch platform)
- Uncomment the following line in config.yaml

For information about the configuration, please refer to fedml ® launch.
2 changes: 1 addition & 1 deletion python/examples/deploy/streaming_response/config.yaml
@@ -8,7 +8,7 @@ bootstrap: |
echo "Bootstrap finished"
# If you do not have any GPU resource but want to serve the model
# Try FedML® Nexus AI Platform, and Uncomment the following lines.
# Try TensorOpera® Nexus AI Platform, and Uncomment the following lines.
# ------------------------------------------------------------
computing:
minimum_num_gpus: 1 # minimum # of GPUs to provision
6 changes: 3 additions & 3 deletions python/examples/deploy/triton/README.md
@@ -39,7 +39,7 @@ fedml model create --name $model_name --config_file config.yaml
```

## On-premise Deploy
Register an account on the FedML website: https://fedml.ai
Register an account on the TensorOpera website: https://tensoropera.ai

You will have a user id and api key, which can be found in the profile page.

@@ -68,8 +68,8 @@ You will have a user id and api key, which can be found in the profile page.
```
- Result

See the deployment result in https://fedml.ai
See the deployment result in https://tensoropera.ai

- OPT2: Deploy - UI

Follow the instructions on https://fedml.ai
Follow the instructions on https://tensoropera.ai
4 changes: 2 additions & 2 deletions python/examples/deploy/your_own_llm/README.md
@@ -9,9 +9,9 @@ fedml model deploy --name llm --local
#INFO: Uvicorn running on http://0.0.0.0:2345 (Press CTRL+C to quit)
curl -XPOST localhost:2345/predict -d '{"text": "Hello"}'
```
## Option 2: Deploy to the Cloud (Using fedml®launch platform)
## Option 2: Deploy to the Cloud (Using TensorOpera®launch platform)
Uncomment the following line in llm.yaml,
for information about the configuration, please refer to fedml®launch.
for information about the configuration, please refer to TensorOpera®launch.
```yaml
# computing:
# minimum_num_gpus: 1
2 changes: 1 addition & 1 deletion python/examples/deploy/your_own_llm/llm.yaml
@@ -11,7 +11,7 @@ bootstrap: |
echo "Bootstrap finished"
# If you do not have any GPU resource but want to serve the model
# Try FedML® Nexus AI Platform, and Uncomment the following lines.
# Try TensorOpera® Nexus AI Platform, and Uncomment the following lines.
# ------------------------------------------------------------
# computing:
# minimum_num_gpus: 1 # minimum # of GPUs to provision
@@ -1,6 +1,6 @@

# Introduction
In this working example, we will run 1 aggregation server and 2 clients on the same machine using Docker + gRPC and we will use the FEDML.ai platform to run the FL job.
In this working example, we will run 1 aggregation server and 2 clients on the same machine using Docker + gRPC and we will use the TensorOpera.ai platform to run the FL job.

# gRPC Configuration File
The content of the gRPC configuration file is as follows:
@@ -47,5 +47,5 @@ source /fedml/bin/activate
fedml login -c <FEDML_API_KEY>
```

Then we only need to compile our job and submit it to our docker-based cluster, as is also discussed in detail in the official FEDML documentation: https://fedml.ai/octopus/userGuides
Then we only need to compile our job and submit it to our docker-based cluster, as is also discussed in detail in the official TensorOpera documentation: https://tensoropera.ai/octopus/userGuides
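
A hedged sketch of that compile-and-submit step, assuming the federate build and launch commands shown elsewhere in this diff apply to this job (the YAML file name is illustrative):

```bash
# After logging in inside the container (see above), build and submit the job.
# Both commands appear elsewhere in this diff; the YAML file name is illustrative.
fedml federate build grpc_docker_job.yaml
fedml launch grpc_docker_job.yaml
```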

4 changes: 2 additions & 2 deletions python/examples/launch/README.md
@@ -132,7 +132,7 @@ You just need to customize the following config items.

3. `bootstrap`: the bootstrap shell command that will be executed before running the entry commands.

Then you can use the following example CLI to launch the job at FedML® Nexus AI Platform
Then you can use the following example CLI to launch the job at TensorOpera® Nexus AI Platform
(Replace $YourApiKey with your own account API key from open.fedml.ai)

Example:
@@ -142,7 +142,7 @@ fedml launch hello_job.yaml

After the launch CLI is executed, the output is as follows. Here you may open the job url to confirm and actually start the job.
```
Submitting your job to FedML® Nexus AI Platform: 100%|████████████████████████████████████████████████████████████████████████████████████████| 6.07k/6.07k [00:01<00:00, 4.94kB/s]
Submitting your job to TensorOpera® Nexus AI Platform: 100%|████████████████████████████████████████████████████████████████████████████████████████| 6.07k/6.07k [00:01<00:00, 4.94kB/s]
Searched and matched the following GPU resource for your job:
+-----------+-------------------+---------+------------+-------------------------+---------+-------+----------+
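
For orientation, a hedged sketch of a minimal `hello_job.yaml` covering the customizable items listed above; the `job` key for the entry commands is an assumption and should be checked against the launch documentation:

```bash
# Sketch of a minimal hello_job.yaml; the "job" key name is an assumption,
# the other keys appear in YAML snippets elsewhere in this diff.
cat > hello_job.yaml <<'EOF'
job: |                       # entry commands (key name assumed)
  echo "Hello, TensorOpera Launch!"
bootstrap: |                 # runs before the entry commands
  echo "Bootstrap finished"
computing:
  minimum_num_gpus: 1
  resource_type: A100-80G    # list valid values with "fedml show-resource-type"
EOF

# Launch it on the TensorOpera Nexus AI Platform (API key must already be configured).
fedml launch hello_job.yaml
```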
2 changes: 1 addition & 1 deletion python/examples/launch/federate_build_package/README.md
@@ -3,7 +3,7 @@
```
Usage: fedml federate build [OPTIONS] [YAML_FILE]
Build federate packages for the FedML® Nexus AI Platform.
Build federate packages for the TensorOpera® Nexus AI Platform.
Options:
-h, --help Show this message and exit.
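
A usage sketch for the build command described above (the YAML file name is illustrative):

```bash
# Build federate packages from a job YAML; the result can be uploaded to the platform.
fedml federate build my_federate_job.yaml

# Show the full option list.
fedml federate build -h
```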
2 changes: 1 addition & 1 deletion python/examples/launch/train_build_package/README.md
@@ -3,7 +3,7 @@
```
Usage: fedml train build [OPTIONS] [YAML_FILE]
Build training packages for the FedML® Nexus AI Platform.
Build training packages for the TensorOpera® Nexus AI Platform.
Options:
-h, --help Show this message and exit.
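
And the analogous sketch for training packages (the file name is again illustrative):

```bash
# Build training packages from a job YAML.
fedml train build my_train_job.yaml
fedml train build -h   # full option list
```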
2 changes: 1 addition & 1 deletion python/examples/train/README.md
@@ -1 +1 @@
# Examples (Prebuilt Jobs) for FEDML®Train
# Examples (Prebuilt Jobs) for TensorOpera®Train
2 changes: 1 addition & 1 deletion python/examples/train/llm_train/job.yaml
@@ -44,4 +44,4 @@ computing:

allow_cross_cloud_resources: false # true, false
device_type: GPU # options: GPU, CPU, hybrid
resource_type: A100-80G # e.g., A100-80G, please check the resource type list by "fedml show-resource-type" or visiting URL: https://fedml.ai/accelerator_resource_type
resource_type: A100-80G # e.g., A100-80G, please check the resource type list by "fedml show-resource-type" or visiting URL: https://tensoropera.ai/accelerator_resource_type
2 changes: 1 addition & 1 deletion python/fedml/api/constants.py
@@ -18,7 +18,7 @@ class ApiConstants:

RESOURCE_MATCHED_STATUS_BIND_CREDIT_CARD_FIRST = \
"""
Before we can start a job, please add a credit card to your FEDML account at https://fedml.ai/billing/home.
Before we can start a job, please add a credit card to your FEDML account at https://tensoropera.ai/billing.
Once it's added, please try to run the launch command again
"""

4 changes: 2 additions & 2 deletions python/fedml/api/modules/build.py
@@ -22,7 +22,7 @@ def build(platform, type, source_folder, entry_point, config_folder, dest_folder

if type == "client" or type == "server":
click.echo(
"Now, you are building the fedml packages which will be used in the FedML® Nexus AI Platform "
"Now, you are building the fedml packages which will be used in the TensorOpera® Nexus AI Platform "
"platform."
)
click.echo(
@@ -34,7 +34,7 @@ def build(platform, type, source_folder, entry_point, config_folder, dest_folder
+ "."
)
click.echo(
"Then you may upload the packages on the configuration page in the FedML® Nexus AI Platform to "
"Then you may upload the packages on the configuration page in the TensorOpera® Nexus AI Platform to "
"start your training flow."
)
click.echo("Building...")
6 changes: 3 additions & 3 deletions python/fedml/api/modules/device.py
@@ -78,7 +78,7 @@ def _bind(
else:
docker_install_url = "https://docs.docker.com/engine/install/"
docker_config_text = " Moreover, you need to config the docker engine to run as a non-root user. Here is the docs. https://docs.docker.com/engine/install/linux-postinstall/"
print("\n Welcome to FedML.ai! \n Start to login the current device to the FedML® Nexus AI Platform\n")
print("\n Welcome to TensorOpera.ai! \n Start to login the current device to the TensorOpera® Nexus AI Platform\n")
print(" If you want to deploy models into this computer, you need to install the docker engine to serve your models.")
print(f" Here is the docs for installation docker engine. {docker_install_url}")
if docker_config_text is not None:
@@ -137,7 +137,7 @@ def _bind(
client_daemon_cmd = "client_daemon.py"
client_daemon_pids = RunProcessUtils.get_pid_from_cmd_line(client_daemon_cmd)
if client_daemon_pids is not None and len(client_daemon_pids) > 0:
print("Your computer has been logged into the FedML® Nexus AI Platform. "
print("Your computer has been logged into the TensorOpera® Nexus AI Platform. "
"Before logging in again, please log out of the previous login using the command "
"'fedml logout -c'. If it still doesn't work, run the command 'fedml logout -c' "
"using your computer's administrator account.")
@@ -193,7 +193,7 @@ def _bind(
server_daemon_cmd = "server_daemon.py"
server_daemon_pids = RunProcessUtils.get_pid_from_cmd_line(server_daemon_cmd)
if server_daemon_pids is not None and len(server_daemon_pids) > 0:
print("Your computer has been logged into the FedML® Nexus AI Platform. "
print("Your computer has been logged into the TensorOpera® Nexus AI Platform. "
"Before logging in again, please log out of the previous login using the command "
"'fedml logout -s'. If it still doesn't work, run the command 'fedml logout -s' "
"using your computer's administrator account.")
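
From the shell side, the already-logged-in check above corresponds to something like this hedged sketch (the daemon script name is taken from the code above; `pgrep` availability is assumed):

```bash
# If a client daemon from a previous login is still running, log out first,
# as the message above advises, then log in again.
if pgrep -f client_daemon.py > /dev/null; then
    fedml logout -c
fi
fedml login -c <YOUR_API_KEY>
```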
4 changes: 2 additions & 2 deletions python/fedml/api/modules/model.py
@@ -252,9 +252,9 @@ def deploy(name: str, endpoint_name: str = "", endpoint_id: str = None, local: b
return FedMLModelCards.get_instance().serve_model_on_premise(
name, endpoint_name, master_ids, worker_ids, use_remote, endpoint_id)
else:
# FedML® Launch deploy mode
# TensorOpera® Launch deploy mode
click.echo("Warning: You did not indicate the master device id and worker device id\n\
Do you want to use FedML® Nexus AI Platform to find GPU Resources to deploy your model?")
Do you want to use TensorOpera® Nexus AI Platform to find GPU Resources to deploy your model?")
answer = click.prompt("Please input your answer: (y/n)")
if answer == "y" or answer == "Y":
api_key = get_api_key()
2 changes: 1 addition & 1 deletion python/fedml/api/modules/utils.py
@@ -21,7 +21,7 @@ def _check_api_key(api_key=None):
if api_key is None or api_key == "":
saved_api_key = get_api_key()
if saved_api_key is None or saved_api_key == "":
api_key = click.prompt("FedML® Launch API Key is not set yet, please input your API key")
api_key = click.prompt("TensorOpera® Launch API Key is not set yet, please input your API key")
else:
api_key = saved_api_key
