Refactor: legacy accelerators and plugins (#5645)
* tests: legacy

* legacy: accel

* legacy: plug

* fix imports

* mypy

* flake8
Borda authored Jan 27, 2021
1 parent 9d165f6 commit 7e2e874
Showing 64 changed files with 141 additions and 111 deletions.
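
For downstream users, the upshot of this refactor is an import-path change: the accelerator and plugin modules now live under `legacy` subpackages. A minimal compatibility sketch (the class chosen is just one example drawn from this diff):

```python
try:
    # path as of this commit: plugins moved into the `legacy` subpackage
    from pytorch_lightning.plugins.legacy.ddp_plugin import DDPPlugin
except ImportError:
    # path before this commit
    from pytorch_lightning.plugins.ddp_plugin import DDPPlugin
```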
3 changes: 3 additions & 0 deletions CHANGELOG.md
@@ -102,6 +102,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- `stat_scores_multiple_classes` is deprecated in favor of `stat_scores` ([#4839](https://github.com/PyTorchLightning/pytorch-lightning/pull/4839))


+- Moved accelerators and plugins to its `legacy` pkg ([#5645](https://github.com/PyTorchLightning/pytorch-lightning/pull/5645))


### Removed

- Removed deprecated checkpoint argument `filepath` ([#5321](https://github.com/PyTorchLightning/pytorch-lightning/pull/5321))
6 changes: 3 additions & 3 deletions benchmarks/test_sharded_parity.py
@@ -21,10 +21,10 @@
import torch

from pytorch_lightning import seed_everything, Trainer
-from pytorch_lightning.plugins.ddp_plugin import DDPPlugin
-from pytorch_lightning.plugins.sharded_plugin import DDPShardedPlugin
+from pytorch_lightning.plugins.legacy.ddp_plugin import DDPPlugin
+from pytorch_lightning.plugins.legacy.sharded_plugin import DDPShardedPlugin
from pytorch_lightning.utilities import _FAIRSCALE_AVAILABLE, _NATIVE_AMP_AVAILABLE
-from tests.backends import DDPLauncher
+from tests.accelerators.legacy import DDPLauncher
from tests.base.boring_model import BoringModel, RandomDataset


5 changes: 4 additions & 1 deletion dockers/tpu-tests/tpu_test_cases.jsonnet
@@ -21,7 +21,10 @@ local tputests = base.BaseTest {
command: utils.scriptCommand(
|||
cd pytorch-lightning
-coverage run --source=pytorch_lightning -m pytest tests/models/test_tpu.py tests/backends/test_tpu_backend.py pytorch_lightning/utilities/xla_device_utils.py -v
+coverage run --source=pytorch_lightning -m pytest -v \
+  pytorch_lightning/utilities/xla_device_utils.py \
+  tests/accelerators/legacy/test_tpu_backend.py \
+  tests/models/test_tpu.py
test_exit_code=$?
echo "\n||| END PYTEST LOGS |||\n"
coverage xml
6 changes: 3 additions & 3 deletions docs/source/advanced/multi_gpu.rst
@@ -580,9 +580,9 @@ Below are the possible configurations we support.

Implement Your Own Distributed (DDP) training
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-If you need your own way to init PyTorch DDP you can override :meth:`pytorch_lightning.plugins.ddp_plugin.DDPPlugin.init_ddp_connection`.
+If you need your own way to init PyTorch DDP you can override :meth:`pytorch_lightning.plugins.legacy.ddp_plugin.DDPPlugin.init_ddp_connection`.

-If you also need to use your own DDP implementation, override :meth:`pytorch_lightning.plugins.ddp_plugin.DDPPlugin.configure_ddp`.
+If you also need to use your own DDP implementation, override :meth:`pytorch_lightning.plugins.legacy.ddp_plugin.DDPPlugin.configure_ddp`.
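
A minimal sketch of the override point described above, assuming the legacy `DDPPlugin` API at this commit (the exact `init_ddp_connection` signature is not shown in this diff, so the sketch forwards arguments unchanged):

```python
from pytorch_lightning.plugins.legacy.ddp_plugin import DDPPlugin


class MyDDPPlugin(DDPPlugin):
    def init_ddp_connection(self, *args, **kwargs):
        # Custom rendezvous / process-group setup would go here;
        # delegating to the parent keeps the default behavior.
        super().init_ddp_connection(*args, **kwargs)
```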


----------
@@ -692,7 +692,7 @@ This should be kept within the ``sequential_module`` variable within your ``LightningModule``

.. code-block:: python
-from pytorch_lightning.plugins.ddp_sequential_plugin import DDPSequentialPlugin
+from pytorch_lightning.plugins.legacy.ddp_sequential_plugin import DDPSequentialPlugin
from pytorch_lightning import LightningModule
class MyModel(LightningModule):
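
To make the ``sequential_module`` convention concrete, a sketch of how the plugin might be wired up (the `balance` argument, listing how many layers go on each GPU, is an assumption from the plugin's era and is not shown in this diff):

```python
import torch.nn as nn

from pytorch_lightning import LightningModule, Trainer
from pytorch_lightning.plugins.legacy.ddp_sequential_plugin import DDPSequentialPlugin


class MyModel(LightningModule):
    def __init__(self):
        super().__init__()
        # DDPSequentialPlugin expects the pipelined stages to live here
        self.sequential_module = nn.Sequential(
            nn.Linear(32, 32),
            nn.ReLU(),
            nn.Linear(32, 2),
        )


# hypothetical usage: split the three stages across two GPUs
trainer = Trainer(gpus=2, accelerator="ddp", plugins=[DDPSequentialPlugin(balance=[2, 1])])
```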
22 changes: 11 additions & 11 deletions docs/source/extensions/accelerators.rst
@@ -16,7 +16,7 @@ To link up arbitrary hardware, implement your own Accelerator subclass

.. code-block:: python
-from pytorch_lightning.accelerators.accelerator import Accelerator
+from pytorch_lightning.accelerators.legacy.accelerator import Accelerator
class MyAccelerator(Accelerator):
def __init__(self, trainer, cluster_environment=None):
@@ -124,59 +124,59 @@ Available Accelerators
CPU Accelerator
===============

-.. autoclass:: pytorch_lightning.accelerators.cpu_accelerator.CPUAccelerator
+.. autoclass:: pytorch_lightning.accelerators.legacy.cpu_accelerator.CPUAccelerator
:noindex:

DDP Accelerator
===============

-.. autoclass:: pytorch_lightning.accelerators.ddp_accelerator.DDPAccelerator
+.. autoclass:: pytorch_lightning.accelerators.legacy.ddp_accelerator.DDPAccelerator
:noindex:

DDP2 Accelerator
================

-.. autoclass:: pytorch_lightning.accelerators.ddp2_accelerator.DDP2Accelerator
+.. autoclass:: pytorch_lightning.accelerators.legacy.ddp2_accelerator.DDP2Accelerator
:noindex:

DDP CPU HPC Accelerator
=======================

-.. autoclass:: pytorch_lightning.accelerators.ddp_cpu_hpc_accelerator.DDPCPUHPCAccelerator
+.. autoclass:: pytorch_lightning.accelerators.legacy.ddp_cpu_hpc_accelerator.DDPCPUHPCAccelerator
:noindex:

DDP CPU Spawn Accelerator
=========================

-.. autoclass:: pytorch_lightning.accelerators.ddp_cpu_spawn_accelerator.DDPCPUSpawnAccelerator
+.. autoclass:: pytorch_lightning.accelerators.legacy.ddp_cpu_spawn_accelerator.DDPCPUSpawnAccelerator
:noindex:

DDP HPC Accelerator
===================

-.. autoclass:: pytorch_lightning.accelerators.ddp_hpc_accelerator.DDPHPCAccelerator
+.. autoclass:: pytorch_lightning.accelerators.legacy.ddp_hpc_accelerator.DDPHPCAccelerator
:noindex:

DDP Spawn Accelerator
=====================

-.. autoclass:: pytorch_lightning.accelerators.ddp_spawn_accelerator.DDPSpawnAccelerator
+.. autoclass:: pytorch_lightning.accelerators.legacy.ddp_spawn_accelerator.DDPSpawnAccelerator
:noindex:

GPU Accelerator
===============

-.. autoclass:: pytorch_lightning.accelerators.gpu_accelerator.GPUAccelerator
+.. autoclass:: pytorch_lightning.accelerators.legacy.gpu_accelerator.GPUAccelerator
:noindex:

Horovod Accelerator
===================

-.. autoclass:: pytorch_lightning.accelerators.horovod_accelerator.HorovodAccelerator
+.. autoclass:: pytorch_lightning.accelerators.legacy.horovod_accelerator.HorovodAccelerator
:noindex:

TPU Accelerator
===============

-.. autoclass:: pytorch_lightning.accelerators.tpu_accelerator.TPUAccelerator
+.. autoclass:: pytorch_lightning.accelerators.legacy.tpu_accelerator.TPUAccelerator
:noindex:
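
The custom-accelerator snippet at the top of this file is truncated in this view; a fuller sketch under the legacy API (constructor arguments as shown in the docs above, the body otherwise assumed):

```python
from pytorch_lightning.accelerators.legacy.accelerator import Accelerator


class MyAccelerator(Accelerator):
    def __init__(self, trainer, cluster_environment=None):
        # forward the trainer and (optional) cluster environment to the base class
        super().__init__(trainer, cluster_environment)
        self.nickname = "my_accelerator"
```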
6 changes: 3 additions & 3 deletions docs/source/extensions/plugins.rst
@@ -19,16 +19,16 @@ For example, to customize your own DistributedDataParallel you could do something
ApexPlugin
**********

-.. autoclass:: pytorch_lightning.plugins.apex.ApexPlugin
+.. autoclass:: pytorch_lightning.plugins.legacy.apex.ApexPlugin

***************
NativeAMPPlugin
***************

-.. autoclass:: pytorch_lightning.plugins.native_amp.NativeAMPPlugin
+.. autoclass:: pytorch_lightning.plugins.legacy.native_amp.NativeAMPPlugin

*********
DDPPlugin
*********

-.. autoclass:: pytorch_lightning.plugins.ddp_plugin.DDPPlugin
+.. autoclass:: pytorch_lightning.plugins.legacy.ddp_plugin.DDPPlugin
2 changes: 1 addition & 1 deletion pl_examples/basic_examples/conv_sequential_example.py
@@ -32,7 +32,7 @@
from pl_examples import cli_lightning_logo
from pytorch_lightning import Trainer
from pytorch_lightning.metrics.functional import accuracy
-from pytorch_lightning.plugins.ddp_sequential_plugin import DDPSequentialPlugin
+from pytorch_lightning.plugins.legacy.ddp_sequential_plugin import DDPSequentialPlugin
from pytorch_lightning.utilities import _BOLTS_AVAILABLE, _FAIRSCALE_PIPE_AVAILABLE

if _BOLTS_AVAILABLE:
24 changes: 12 additions & 12 deletions pytorch_lightning/accelerators/__init__.py
@@ -11,15 +11,15 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
-from pytorch_lightning.accelerators.accelerator import Accelerator # noqa: F401
-from pytorch_lightning.accelerators.cpu_accelerator import CPUAccelerator # noqa: F401
-from pytorch_lightning.accelerators.ddp2_accelerator import DDP2Accelerator # noqa: F401
-from pytorch_lightning.accelerators.ddp_accelerator import DDPAccelerator # noqa: F401
-from pytorch_lightning.accelerators.ddp_cpu_hpc_accelerator import DDPCPUHPCAccelerator # noqa: F401
-from pytorch_lightning.accelerators.ddp_cpu_spawn_accelerator import DDPCPUSpawnAccelerator # noqa: F401
-from pytorch_lightning.accelerators.ddp_hpc_accelerator import DDPHPCAccelerator # noqa: F401
-from pytorch_lightning.accelerators.ddp_spawn_accelerator import DDPSpawnAccelerator # noqa: F401
-from pytorch_lightning.accelerators.dp_accelerator import DataParallelAccelerator # noqa: F401
-from pytorch_lightning.accelerators.gpu_accelerator import GPUAccelerator # noqa: F401
-from pytorch_lightning.accelerators.horovod_accelerator import HorovodAccelerator # noqa: F401
-from pytorch_lightning.accelerators.tpu_accelerator import TPUAccelerator # noqa: F401
+from pytorch_lightning.accelerators.legacy.accelerator import Accelerator # noqa: F401
+from pytorch_lightning.accelerators.legacy.cpu_accelerator import CPUAccelerator # noqa: F401
+from pytorch_lightning.accelerators.legacy.ddp2_accelerator import DDP2Accelerator # noqa: F401
+from pytorch_lightning.accelerators.legacy.ddp_accelerator import DDPAccelerator # noqa: F401
+from pytorch_lightning.accelerators.legacy.ddp_cpu_hpc_accelerator import DDPCPUHPCAccelerator # noqa: F401
+from pytorch_lightning.accelerators.legacy.ddp_cpu_spawn_accelerator import DDPCPUSpawnAccelerator # noqa: F401
+from pytorch_lightning.accelerators.legacy.ddp_hpc_accelerator import DDPHPCAccelerator # noqa: F401
+from pytorch_lightning.accelerators.legacy.ddp_spawn_accelerator import DDPSpawnAccelerator # noqa: F401
+from pytorch_lightning.accelerators.legacy.dp_accelerator import DataParallelAccelerator # noqa: F401
+from pytorch_lightning.accelerators.legacy.gpu_accelerator import GPUAccelerator # noqa: F401
+from pytorch_lightning.accelerators.legacy.horovod_accelerator import HorovodAccelerator # noqa: F401
+from pytorch_lightning.accelerators.legacy.tpu_accelerator import TPUAccelerator # noqa: F401
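
Because the top-level `pytorch_lightning.accelerators` package now simply re-exports the classes from its `legacy` subpackage, both import paths resolve to the same objects. A quick sanity check (assuming a build of this commit is installed):

```python
from pytorch_lightning.accelerators import CPUAccelerator as top_level
from pytorch_lightning.accelerators.legacy.cpu_accelerator import CPUAccelerator as legacy

# the top-level name is a re-export of the legacy class
assert top_level is legacy
```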
25 changes: 25 additions & 0 deletions pytorch_lightning/accelerators/legacy/__init__.py
@@ -0,0 +1,25 @@
+# Copyright The PyTorch Lightning team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from pytorch_lightning.accelerators.legacy.accelerator import Accelerator # noqa: F401
+from pytorch_lightning.accelerators.legacy.cpu_accelerator import CPUAccelerator # noqa: F401
+from pytorch_lightning.accelerators.legacy.ddp2_accelerator import DDP2Accelerator # noqa: F401
+from pytorch_lightning.accelerators.legacy.ddp_accelerator import DDPAccelerator # noqa: F401
+from pytorch_lightning.accelerators.legacy.ddp_cpu_hpc_accelerator import DDPCPUHPCAccelerator # noqa: F401
+from pytorch_lightning.accelerators.legacy.ddp_cpu_spawn_accelerator import DDPCPUSpawnAccelerator # noqa: F401
+from pytorch_lightning.accelerators.legacy.ddp_hpc_accelerator import DDPHPCAccelerator # noqa: F401
+from pytorch_lightning.accelerators.legacy.ddp_spawn_accelerator import DDPSpawnAccelerator # noqa: F401
+from pytorch_lightning.accelerators.legacy.dp_accelerator import DataParallelAccelerator # noqa: F401
+from pytorch_lightning.accelerators.legacy.gpu_accelerator import GPUAccelerator # noqa: F401
+from pytorch_lightning.accelerators.legacy.horovod_accelerator import HorovodAccelerator # noqa: F401
+from pytorch_lightning.accelerators.legacy.tpu_accelerator import TPUAccelerator # noqa: F401
@@ -19,8 +19,8 @@

from pytorch_lightning.cluster_environments import ClusterEnvironment
from pytorch_lightning.core.lightning import LightningModule
-from pytorch_lightning.plugins.ddp_plugin import DDPPlugin
-from pytorch_lightning.plugins.rpc_plugin import RPCPlugin
+from pytorch_lightning.plugins.legacy.ddp_plugin import DDPPlugin
+from pytorch_lightning.plugins.legacy.rpc_plugin import RPCPlugin
from pytorch_lightning.utilities.apply_func import move_data_to_device
from pytorch_lightning.utilities.parsing import AttributeDict

@@ -17,7 +17,7 @@

from pytorch_lightning import _logger as log
from pytorch_lightning import accelerators
-from pytorch_lightning.accelerators.accelerator import Accelerator
+from pytorch_lightning.accelerators.legacy.accelerator import Accelerator
from pytorch_lightning.cluster_environments.slurm_environment import SLURMEnvironment
from pytorch_lightning.cluster_environments.torchelastic_environment import TorchElasticEnvironment
from pytorch_lightning.utilities import (
@@ -15,7 +15,7 @@

import torch

-from pytorch_lightning.accelerators.accelerator import Accelerator, ReduceOp
+from pytorch_lightning.accelerators.legacy.accelerator import Accelerator, ReduceOp
from pytorch_lightning.cluster_environments import ClusterEnvironment
from pytorch_lightning.utilities import AMPType
from pytorch_lightning.utilities.exceptions import MisconfigurationException
@@ -18,13 +18,13 @@
from torch.nn.parallel import DistributedDataParallel

from pytorch_lightning import _logger as log
-from pytorch_lightning.accelerators.accelerator import Accelerator, ReduceOp
+from pytorch_lightning.accelerators.legacy.accelerator import Accelerator, ReduceOp
from pytorch_lightning.cluster_environments import ClusterEnvironment
from pytorch_lightning.core.lightning import LightningModule
from pytorch_lightning.core.step_result import Result
from pytorch_lightning.distributed.dist import LightningDistributed
-from pytorch_lightning.plugins.ddp_plugin import DDPPlugin
-from pytorch_lightning.plugins.rpc_plugin import RPCPlugin
+from pytorch_lightning.plugins.legacy.ddp_plugin import DDPPlugin
+from pytorch_lightning.plugins.legacy.rpc_plugin import RPCPlugin
from pytorch_lightning.utilities import AMPType
from pytorch_lightning.utilities.distributed import all_gather_ddp_if_available, rank_zero_only, sync_ddp_if_available

@@ -24,12 +24,12 @@
from torch.nn.parallel import DistributedDataParallel

from pytorch_lightning import _logger as log
-from pytorch_lightning.accelerators.accelerator import Accelerator, ReduceOp
+from pytorch_lightning.accelerators.legacy.accelerator import Accelerator, ReduceOp
from pytorch_lightning.cluster_environments import ClusterEnvironment
from pytorch_lightning.core.lightning import LightningModule
from pytorch_lightning.distributed.dist import LightningDistributed
-from pytorch_lightning.plugins.ddp_plugin import DDPPlugin
-from pytorch_lightning.plugins.rpc_plugin import RPCPlugin
+from pytorch_lightning.plugins.legacy.ddp_plugin import DDPPlugin
+from pytorch_lightning.plugins.legacy.rpc_plugin import RPCPlugin
from pytorch_lightning.utilities import _HYDRA_AVAILABLE, AMPType
from pytorch_lightning.utilities.distributed import (
all_gather_ddp_if_available,
@@ -13,9 +13,9 @@
# limitations under the License
from typing import Optional

-from pytorch_lightning.accelerators.ddp_hpc_accelerator import DDPHPCAccelerator
+from pytorch_lightning.accelerators.legacy.ddp_hpc_accelerator import DDPHPCAccelerator
from pytorch_lightning.cluster_environments import ClusterEnvironment
-from pytorch_lightning.plugins.ddp_plugin import DDPPlugin
+from pytorch_lightning.plugins.legacy.ddp_plugin import DDPPlugin


class DDPCPUHPCAccelerator(DDPHPCAccelerator):
@@ -20,12 +20,12 @@
from torch.nn.parallel import DistributedDataParallel

from pytorch_lightning import _logger as log
-from pytorch_lightning.accelerators.accelerator import Accelerator, ReduceOp
+from pytorch_lightning.accelerators.legacy.accelerator import Accelerator, ReduceOp
from pytorch_lightning.cluster_environments import ClusterEnvironment
from pytorch_lightning.core.lightning import LightningModule
from pytorch_lightning.distributed.dist import LightningDistributed
-from pytorch_lightning.plugins.ddp_plugin import DDPPlugin
-from pytorch_lightning.plugins.rpc_plugin import RPCPlugin
+from pytorch_lightning.plugins.legacy.ddp_plugin import DDPPlugin
+from pytorch_lightning.plugins.legacy.rpc_plugin import RPCPlugin
from pytorch_lightning.utilities import AMPType
from pytorch_lightning.utilities.distributed import (
all_gather_ddp_if_available,
@@ -19,12 +19,12 @@
from torch.nn.parallel import DistributedDataParallel

from pytorch_lightning import _logger as log
-from pytorch_lightning.accelerators.accelerator import Accelerator, ReduceOp
+from pytorch_lightning.accelerators.legacy.accelerator import Accelerator, ReduceOp
from pytorch_lightning.cluster_environments import ClusterEnvironment
from pytorch_lightning.core.lightning import LightningModule
from pytorch_lightning.distributed.dist import LightningDistributed
-from pytorch_lightning.plugins.ddp_plugin import DDPPlugin
-from pytorch_lightning.plugins.rpc_plugin import RPCPlugin
+from pytorch_lightning.plugins.legacy.ddp_plugin import DDPPlugin
+from pytorch_lightning.plugins.legacy.rpc_plugin import RPCPlugin
from pytorch_lightning.utilities import AMPType
from pytorch_lightning.utilities.distributed import all_gather_ddp_if_available, rank_zero_only, sync_ddp_if_available

@@ -21,12 +21,12 @@
from torch.nn.parallel import DistributedDataParallel

from pytorch_lightning import _logger as log
-from pytorch_lightning.accelerators.accelerator import Accelerator, ReduceOp
+from pytorch_lightning.accelerators.legacy.accelerator import Accelerator, ReduceOp
from pytorch_lightning.cluster_environments import ClusterEnvironment
from pytorch_lightning.core.lightning import LightningModule
from pytorch_lightning.distributed import LightningDistributed
-from pytorch_lightning.plugins.ddp_plugin import DDPPlugin
-from pytorch_lightning.plugins.rpc_plugin import RPCPlugin
+from pytorch_lightning.plugins.legacy.ddp_plugin import DDPPlugin
+from pytorch_lightning.plugins.legacy.rpc_plugin import RPCPlugin
from pytorch_lightning.utilities import AMPType
from pytorch_lightning.utilities.cloud_io import atomic_save
from pytorch_lightning.utilities.cloud_io import load as pl_load
@@ -16,7 +16,7 @@
import torch
from torch import optim

-from pytorch_lightning.accelerators.accelerator import Accelerator
+from pytorch_lightning.accelerators.legacy.accelerator import Accelerator
from pytorch_lightning.cluster_environments import ClusterEnvironment
from pytorch_lightning.core.lightning import LightningModule
from pytorch_lightning.core.step_result import Result
@@ -15,7 +15,7 @@

import torch

-from pytorch_lightning.accelerators.accelerator import Accelerator, ReduceOp
+from pytorch_lightning.accelerators.legacy.accelerator import Accelerator, ReduceOp
from pytorch_lightning.cluster_environments import ClusterEnvironment
from pytorch_lightning.distributed.dist import LightningDistributed
from pytorch_lightning.utilities import AMPType
@@ -17,7 +17,7 @@
import torch
from torch.optim.lr_scheduler import _LRScheduler

-from pytorch_lightning.accelerators.accelerator import Accelerator, ReduceOp
+from pytorch_lightning.accelerators.legacy.accelerator import Accelerator, ReduceOp
from pytorch_lightning.cluster_environments import ClusterEnvironment
from pytorch_lightning.utilities import _HOROVOD_AVAILABLE, AMPType, DeviceType
from pytorch_lightning.utilities.distributed import rank_zero_only
@@ -21,7 +21,7 @@
from torch.optim import Optimizer

from pytorch_lightning import _logger as log
-from pytorch_lightning.accelerators.accelerator import Accelerator, ReduceOp
+from pytorch_lightning.accelerators.legacy.accelerator import Accelerator, ReduceOp
from pytorch_lightning.cluster_environments import ClusterEnvironment
from pytorch_lightning.core import LightningModule
from pytorch_lightning.utilities import (
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.

-from pytorch_lightning.plugins.plugin import LightningPlugin
+from pytorch_lightning.plugins.legacy.plugin import LightningPlugin


class ClusterEnvironment(LightningPlugin):
Empty file.
@@ -17,7 +17,7 @@
from torch.optim.optimizer import Optimizer

from pytorch_lightning.core.lightning import LightningModule
-from pytorch_lightning.plugins.precision_plugin import PrecisionPlugin
+from pytorch_lightning.plugins.legacy.precision_plugin import PrecisionPlugin
from pytorch_lightning.utilities import _APEX_AVAILABLE, AMPType
from pytorch_lightning.utilities.distributed import rank_zero_warn
