[SDK] Get rid of py-substrate-interface (DO NOT MERGE) #2565

Merged
Changes from 1 commit
Commits
127 commits
7c38f63
add utils.execute_coroutine
roman-opentensor Dec 27, 2024
83a7c3b
rename subtensor.py to classic_subtensor.py
roman-opentensor Dec 27, 2024
339586a
wrap `self.initialize()` with `execute_coroutine`
roman-opentensor Dec 27, 2024
6848b42
add `publish_metadata_async` to serving.py
roman-opentensor Dec 27, 2024
5955d7a
remove async_subtensor from __init__.py
roman-opentensor Dec 27, 2024
09cc4dd
add `get_metadata_async` to serving.py
roman-opentensor Dec 27, 2024
5d8a6b8
fix `supports_rpc_method`
roman-opentensor Dec 28, 2024
048d3d4
fix type for `weights` field in neuron_info.py
roman-opentensor Dec 28, 2024
3f1dfe2
add TODO
roman-opentensor Dec 28, 2024
184831e
fix test
roman-opentensor Dec 28, 2024
3d754d9
add TODO
roman-opentensor Dec 28, 2024
b066771
ruff
roman-opentensor Dec 28, 2024
f39633b
add async staking extrinsic call, forward sync staking to async ones
roman-opentensor Dec 28, 2024
e7fa857
add async transfer extrinsic call, forward sync transfer to async one…
roman-opentensor Dec 30, 2024
0984e4f
fix transfer
roman-opentensor Dec 30, 2024
b9a4519
fix registration and burn registration
roman-opentensor Dec 30, 2024
dcd5f29
add async serving_axon_extrinsic, publish_metadata, get_metadata, do_…
roman-opentensor Dec 31, 2024
227d383
add async root extrinsic things
roman-opentensor Dec 31, 2024
8da5308
fix registration
roman-opentensor Dec 31, 2024
c26dafe
rename for consistency
roman-opentensor Dec 31, 2024
05b456d
add sync call for async `set_weights_extrinsic`
roman-opentensor Dec 31, 2024
c5f83e5
add async_unstaking.py
roman-opentensor Jan 2, 2025
a24f4d0
rename async_weights.py
roman-opentensor Jan 2, 2025
d40b6f2
fix annotation in serving.py
roman-opentensor Jan 2, 2025
110eae1
fix test
roman-opentensor Jan 2, 2025
ae1727f
update `bittensor.core.extrinsics.async_serving.get_metadata`
roman-opentensor Jan 2, 2025
a452058
add async `bittensor.core.extrinsics.async_weights.reveal_weights_ext…
roman-opentensor Jan 2, 2025
c9a4e5d
update `bittensor.utils.execute_coroutine`
roman-opentensor Jan 2, 2025
a556cb6
add period argument
roman-opentensor Jan 2, 2025
559d20b
fix tests
roman-opentensor Jan 2, 2025
2e14c6f
fix sync setting, committing, reveal weights extrinsics + async relate…
roman-opentensor Jan 2, 2025
59fb63f
add `bittensor/core/extrinsics/asyncex` sub-package
roman-opentensor Jan 2, 2025
32423f7
ruff for `execute_coroutine`
roman-opentensor Jan 2, 2025
a0b1334
add nonce to commit_reveal v3
roman-opentensor Jan 2, 2025
3c1df1d
update unstaking extrinsics
roman-opentensor Jan 3, 2025
39107d3
remove substrate from call args
roman-opentensor Jan 3, 2025
cb7b657
fix await, remove next_nonce from commit reveal v3
roman-opentensor Jan 3, 2025
8edbdce
update registration extrinsics
roman-opentensor Jan 3, 2025
574709d
update root extrinsics
roman-opentensor Jan 3, 2025
dbf3c99
update set_weights extrinsics
roman-opentensor Jan 3, 2025
5d0e450
update transfer extrinsics
roman-opentensor Jan 3, 2025
75e9a48
update staking extrinsics
roman-opentensor Jan 3, 2025
c58a08d
update staking extrinsics
roman-opentensor Jan 3, 2025
53f25f2
update serving extrinsics
roman-opentensor Jan 3, 2025
7262891
add event loop to async subtensor and async substrate interface
roman-opentensor Jan 3, 2025
085089c
add WeightCommitInfo chain data class
roman-opentensor Jan 3, 2025
16eb574
add new Subtensor class
roman-opentensor Jan 3, 2025
bd418ee
update AsyncSubtensor class
roman-opentensor Jan 3, 2025
81879ba
fix transfer
roman-opentensor Jan 3, 2025
8274400
fix `subtensor.substrate.submit_extrinsic` call args
roman-opentensor Jan 3, 2025
4139f37
update tests
roman-opentensor Jan 3, 2025
a380adb
ruff
roman-opentensor Jan 3, 2025
84ae79c
add sync Metagraph
roman-opentensor Jan 3, 2025
e55673e
metagraph related changes in subtensors
roman-opentensor Jan 3, 2025
7ee9c65
update metagraph.Metagraph
roman-opentensor Jan 3, 2025
1ea40e1
update `AsyncSubstrateInterface`
roman-opentensor Jan 4, 2025
5b0e9db
move async extrinsics tests to proper directory
roman-opentensor Jan 4, 2025
86a46d1
update `commit_reveal` unit test
roman-opentensor Jan 4, 2025
f940e22
update `commit_weights.py` unit test
roman-opentensor Jan 4, 2025
7ab4112
update `registration.py` unit test
roman-opentensor Jan 4, 2025
50e383e
ruff
roman-opentensor Jan 4, 2025
09751b1
typo
roman-opentensor Jan 4, 2025
785f508
update `bittensor/core/extrinsics/root.py` unit tests
roman-opentensor Jan 4, 2025
87e53c1
update `bittensor/core/extrinsics/serving.py` unit tests
roman-opentensor Jan 4, 2025
41cfd99
update `bittensor/core/extrinsics/set_weights.py` unit tests
roman-opentensor Jan 4, 2025
efc47f4
update `bittensor/core/extrinsics/staking.py` unit tests
roman-opentensor Jan 4, 2025
d10b688
update `bittensor/core/extrinsics/transfer.py` unit tests
roman-opentensor Jan 4, 2025
035148c
update `bittensor/core/extrinsics/unstaking.py` unit tests
roman-opentensor Jan 4, 2025
e25f242
improve `bittensor/core/extrinsics/utils.py`
roman-opentensor Jan 4, 2025
aa8ede9
add TODO, add close method with sync call via wrapper
roman-opentensor Jan 4, 2025
041c83d
Update subtensor.py unit tests
roman-opentensor Jan 4, 2025
600955e
Update async_subtensor.py unit tests
roman-opentensor Jan 4, 2025
48388e9
update TODOs
roman-opentensor Jan 4, 2025
943dd98
update metagraph
roman-opentensor Jan 4, 2025
7bfea83
update `close` method for Subtensor class
roman-opentensor Jan 4, 2025
0051521
update `test_subtensor.py`
roman-opentensor Jan 4, 2025
60bb8aa
Update metagraph determining logic for AI and IDE. Finally, the same …
roman-opentensor Jan 4, 2025
870d0b5
Update sync `Metagraph.__getattr__`
roman-opentensor Jan 4, 2025
ebb61d1
fix metagraph tests
roman-opentensor Jan 4, 2025
36b4cbb
update test for async_substrate_interface
roman-opentensor Jan 4, 2025
eaeff47
add `reuse_block` where it should be in async_subtensor.py
roman-opentensor Jan 4, 2025
b3e256d
fix test for async_subtensor.py
roman-opentensor Jan 4, 2025
131d49a
ruff
roman-opentensor Jan 4, 2025
c217967
update for `get_current_weight_commit_info`
roman-opentensor Jan 6, 2025
1caf777
Merge branch 'feat/thewhaleking/new-sync-substrate' into feat/roman/a…
roman-opentensor Jan 6, 2025
25b9485
merge fixes + ruff
roman-opentensor Jan 6, 2025
02e63b0
small fixes for `async_subtensor.py`
roman-opentensor Jan 7, 2025
8fcde8a
Merge branch 'feat/thewhaleking/new-sync-substrate' into feat/roman/a…
roman-opentensor Jan 7, 2025
fe89cfc
metagraph fix
roman-opentensor Jan 7, 2025
9c4ca15
add `ensure_connected` temporarily
roman-opentensor Jan 7, 2025
71ad7fa
add argument in test
roman-opentensor Jan 7, 2025
a587611
ruff
roman-opentensor Jan 7, 2025
8f9435e
remove `EXTRINSIC_SUBMISSION_TIMEOUT` from test
roman-opentensor Jan 7, 2025
38ed94f
remove unused import
roman-opentensor Jan 7, 2025
ffb32db
make async subtensor query_* return values compatible with sync ones
roman-opentensor Jan 7, 2025
d695b09
convert `subtensor.substrate` to sync version with wrapper
roman-opentensor Jan 7, 2025
d718ddf
improve `ScaleObj` class and `SubstrateInterface` class-wrapper
roman-opentensor Jan 7, 2025
408daa9
fix `test_async_subtensor.py`
roman-opentensor Jan 7, 2025
c38e969
fix `test_commit_reveal_v3.py`
roman-opentensor Jan 7, 2025
ddfd323
remove unused import
roman-opentensor Jan 7, 2025
4835cc9
Merge remote-tracking branch 'origin/feat/thewhaleking/new-sync-subst…
thewhaleking Jan 7, 2025
9009282
Merge
thewhaleking Jan 7, 2025
3e0b1bd
Optimisations.
thewhaleking Jan 7, 2025
6b1604a
Improved the `execute_coroutine` function. Moved `event_loop_is_runni…
thewhaleking Jan 7, 2025
44e7f58
Created an `execute_coroutine` method to cut down on redundant code. …
thewhaleking Jan 7, 2025
4872a79
Better-handle mocked substrate and getting event loop.
thewhaleking Jan 7, 2025
bd2f3c7
Better mock handling.
thewhaleking Jan 7, 2025
a3fc4f6
`is_hotkey_registered` fixed in Subtensor
thewhaleking Jan 7, 2025
c4cb262
Unit tests fixed.
thewhaleking Jan 7, 2025
f839796
E2E test optimisation.
thewhaleking Jan 7, 2025
1732297
E2E fix
thewhaleking Jan 7, 2025
8fdfadd
Optimisations.
thewhaleking Jan 7, 2025
a9bead2
Fixed sync metagraph + added back in save dir and load dir functional…
thewhaleking Jan 7, 2025
abbddea
Trigger no-op
thewhaleking Jan 7, 2025
b03a76d
Two more unit tests passing.
thewhaleking Jan 8, 2025
c4d6d3c
All unit tests passing.
thewhaleking Jan 8, 2025
c699ba2
Most Metagraph integration tests working.
thewhaleking Jan 8, 2025
5fbe1dc
Final metagraph integration tests working.
thewhaleking Jan 8, 2025
b329bab
Lint
thewhaleking Jan 8, 2025
fbc6a83
Merge pull request #2569 from opentensor/feat/thewhaleking/improve-su…
thewhaleking Jan 8, 2025
2ddb224
Added skeleton methods for all used Substrate methods for ease of use…
thewhaleking Jan 8, 2025
522479a
Trigger no-op
thewhaleking Jan 8, 2025
7f302e2
Backwards compatibility, docstring cleanup.
thewhaleking Jan 8, 2025
f6df25d
Removed `ensure_connected` fn and `classic_subtensor.py`
thewhaleking Jan 8, 2025
0f5def9
Imports cleanup.
thewhaleking Jan 8, 2025
1603ef5
Type fix
thewhaleking Jan 8, 2025
59739f2
Reverted change.
thewhaleking Jan 8, 2025
add sync call for async set_weights_extrinsic
roman-opentensor committed Dec 31, 2024
commit 05b456d9e044c9308f8630f97a6c445356137d69
2 changes: 1 addition & 1 deletion bittensor/core/extrinsics/async_set_weights.py
@@ -19,9 +19,9 @@
async def _do_set_weights(
subtensor: "AsyncSubtensor",
wallet: "Wallet",
netuid: int,
uids: list[int],
vals: list[int],
netuid: int,
version_key: int = version_as_int,
wait_for_inclusion: bool = False,
wait_for_finalization: bool = False,
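Note: this hunk only repositions `netuid` within the `_do_set_weights` signature. Any positional call site would silently shift its arguments after such a reorder, while keyword call sites are unaffected. A tiny self-contained illustration of that hazard (toy functions, not the SDK's code):

```python
# Toy example (not SDK code): reordering parameters breaks positional calls
# but leaves keyword calls intact.
def before_reorder(netuid: int, uids: list[int], vals: list[int]) -> tuple:
    return netuid, uids, vals


def after_reorder(uids: list[int], vals: list[int], netuid: int) -> tuple:
    return netuid, uids, vals


# Positional call: the same arguments land in different parameters.
assert before_reorder(1, [0], [65535]) == (1, [0], [65535])
assert after_reorder(1, [0], [65535]) != (1, [0], [65535])  # 1 is now `uids`

# Keyword call: unaffected by the parameter order.
assert after_reorder(netuid=1, uids=[0], vals=[65535]) == (1, [0], [65535])
```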
162 changes: 10 additions & 152 deletions bittensor/core/extrinsics/set_weights.py
@@ -1,107 +1,16 @@
# The MIT License (MIT)
# Copyright © 2024 Opentensor Foundation
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
# documentation files (the “Software”), to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all copies or substantial portions of
# the Software.
#
# THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
# THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.

from typing import Union, Optional, TYPE_CHECKING
from typing import Union, TYPE_CHECKING

import numpy as np
from numpy.typing import NDArray

from bittensor.core.extrinsics.utils import submit_extrinsic
from bittensor.core.settings import version_as_int
from bittensor.utils import format_error_message, weight_utils
from bittensor.utils.btlogging import logging
from bittensor.utils.networking import ensure_connected
from bittensor.utils.registration import torch, use_torch
from bittensor.utils import execute_coroutine
from bittensor.utils.registration import torch

# For annotation purposes
if TYPE_CHECKING:
from bittensor.core.subtensor import Subtensor
from bittensor_wallet import Wallet


# Chain call for `do_set_weights`
@ensure_connected
def do_set_weights(
self: "Subtensor",
wallet: "Wallet",
uids: list[int],
vals: list[int],
netuid: int,
version_key: int = version_as_int,
wait_for_inclusion: bool = False,
wait_for_finalization: bool = False,
period: int = 5,
) -> tuple[bool, Optional[str]]: # (success, error_message)
"""
Internal method to send a transaction to the Bittensor blockchain, setting weights for specified neurons. This method constructs and submits the transaction, handling retries and blockchain communication.

Args:
self (bittensor.core.subtensor.Subtensor): Subtensor interface
wallet (bittensor_wallet.Wallet): The wallet associated with the neuron setting the weights.
uids (list[int]): List of neuron UIDs for which weights are being set.
vals (list[int]): List of weight values corresponding to each UID.
netuid (int): Unique identifier for the network.
version_key (int): Version key for compatibility with the network.
wait_for_inclusion (bool): Waits for the transaction to be included in a block.
wait_for_finalization (bool): Waits for the transaction to be finalized on the blockchain.
period (int): Period dictates how long the extrinsic will stay as part of waiting pool.

Returns:
tuple[bool, Optional[str]]: A tuple containing a success flag and an optional response message.

This method is vital for the dynamic weighting mechanism in Bittensor, where neurons adjust their trust in other neurons based on observed performance and contributions.
"""

call = self.substrate.compose_call(
call_module="SubtensorModule",
call_function="set_weights",
call_params={
"dests": uids,
"weights": vals,
"netuid": netuid,
"version_key": version_key,
},
)
next_nonce = self.get_account_next_index(wallet.hotkey.ss58_address)
# Period dictates how long the extrinsic will stay as part of waiting pool
extrinsic = self.substrate.create_signed_extrinsic(
call=call,
keypair=wallet.hotkey,
era={"period": period},
nonce=next_nonce,
)
response = submit_extrinsic(
self,
extrinsic=extrinsic,
wait_for_inclusion=wait_for_inclusion,
wait_for_finalization=wait_for_finalization,
)
# We only wait here if we expect finalization.
if not wait_for_finalization and not wait_for_inclusion:
return True, "Not waiting for finalization or inclusion."

response.process_events()
if response.is_success:
return True, "Successfully set weights."
else:
return False, format_error_message(response.error_message)


# Community uses this extrinsic directly and via `subtensor.set_weights`
def set_weights_extrinsic(
subtensor: "Subtensor",
wallet: "Wallet",
@@ -112,66 +21,15 @@ def set_weights_extrinsic(
wait_for_inclusion: bool = False,
wait_for_finalization: bool = False,
) -> tuple[bool, str]:
"""Sets the given weights and values on chain for wallet hotkey account.

Args:
subtensor (bittensor.core.subtensor.Subtensor): Subtensor endpoint to use.
wallet (bittensor_wallet.Wallet): Bittensor wallet object.
netuid (int): The ``netuid`` of the subnet to set weights for.
uids (Union[NDArray[np.int64], torch.LongTensor, list]): The ``uint64`` uids of destination neurons.
weights (Union[NDArray[np.float32], torch.FloatTensor, list]): The weights to set. These must be ``float`` s and correspond to the passed ``uid`` s.
version_key (int): The version key of the validator.
wait_for_inclusion (bool): If set, waits for the extrinsic to enter a block before returning ``true``, or returns ``false`` if the extrinsic fails to enter the block within the timeout.
wait_for_finalization (bool): If set, waits for the extrinsic to be finalized on the chain before returning ``true``, or returns ``false`` if the extrinsic fails to be finalized within the timeout.

Returns:
tuple[bool, str]: A tuple containing a success flag and an optional response message.
"""
# First convert types.
if use_torch():
if isinstance(uids, list):
uids = torch.tensor(uids, dtype=torch.int64)
if isinstance(weights, list):
weights = torch.tensor(weights, dtype=torch.float32)
else:
if isinstance(uids, list):
uids = np.array(uids, dtype=np.int64)
if isinstance(weights, list):
weights = np.array(weights, dtype=np.float32)

# Reformat and normalize.
weight_uids, weight_vals = weight_utils.convert_weights_and_uids_for_emit(
uids, weights
)

logging.info(
f":satellite: [magenta]Setting weights on [/magenta][blue]{subtensor.network}[blue] [magenta]...[/magenta]"
)
logging.debug(f"Weights: {[float(v / 65535) for v in weight_vals]}")

try:
success, message = do_set_weights(
self=subtensor,
return execute_coroutine(
set_weights_extrinsic(
subtensor=subtensor.async_subtensor,
wallet=wallet,
netuid=netuid,
uids=weight_uids,
vals=weight_vals,
uids=uids,
weights=weights,
version_key=version_key,
wait_for_finalization=wait_for_finalization,
wait_for_inclusion=wait_for_inclusion,
wait_for_finalization=wait_for_finalization,
)

if not wait_for_finalization and not wait_for_inclusion:
return True, "Not waiting for finalization or inclusion."

if success is True:
logging.success(f"[green]Finalized![/green] Set weights: {str(success)}")
return True, "Successfully set weights and Finalized."
else:
logging.error(message)
return False, message

except Exception as e:
logging.error(f":cross_mark: [red]Failed.[/red]: Error: {e}")
logging.debug(str(e))
return False, str(e)
)
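With this change, the synchronous `set_weights_extrinsic` becomes a thin wrapper that forwards to its async counterpart through the `utils.execute_coroutine` helper introduced earlier in this PR. The helper's actual implementation lives in `bittensor/utils`; as a rough orientation, a minimal self-contained sketch of the idea (run a coroutine to completion from synchronous code, reusing a caller-supplied event loop when one exists) could look like the following — names and details here are illustrative, not the SDK's code:

```python
import asyncio
import concurrent.futures
from typing import Any, Coroutine, Optional


def execute_coroutine_sketch(
    coroutine: Coroutine, event_loop: Optional[asyncio.AbstractEventLoop] = None
) -> Any:
    """Illustrative sketch only: run an async coroutine from sync code."""
    if event_loop is not None:
        # Reuse the loop owned by the caller (e.g. a subtensor instance).
        return event_loop.run_until_complete(coroutine)
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No loop is running in this thread: create one and run to completion.
        return asyncio.run(coroutine)
    # A loop is already running (e.g. in a notebook); run the coroutine on a
    # fresh loop in a worker thread instead of blocking the running loop.
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(asyncio.run, coroutine).result()


async def _double(x: int) -> int:
    await asyncio.sleep(0)
    return x * 2


if __name__ == "__main__":
    print(execute_coroutine_sketch(_double(21)))  # 42
```

The payoff is visible in the diff above: the sync module keeps only argument plumbing, while the type conversion, logging, and chain call now live once in the async extrinsic it delegates to.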