Sourcery refactored master branch #3

Open · wants to merge 1 commit into master
8 changes: 4 additions & 4 deletions network.py
@@ -106,10 +106,10 @@ def variables(self):
                                self.scope + '/')

    def assign(self, other):
-        copy_ops = []
-        for self_var, other_var in zip(self.variables, other.variables):
-            copy_ops.append(tf.assign(other_var, self_var))
-        return copy_ops
+        return [
+            tf.assign(other_var, self_var)
+            for self_var, other_var in zip(self.variables, other.variables)
+        ]
Comment on lines -109 to +112

Function BaseNetwork.assign refactored with the following changes:

  • Convert for loop into list comprehension (list-comprehension)
  • Inline variable that is immediately returned (inline-immediately-returned-variable)
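As a standalone illustration of these two refactorings combined (a minimal sketch with made-up names, not code from this repository): an accumulate-and-return loop becomes a single returned list comprehension.

    # Before: build a list in a loop, then return it through a temporary.
    def pair_up(xs, ys):
        pairs = []
        for x, y in zip(xs, ys):
            pairs.append((x, y))
        return pairs

    # After: the loop collapses into a list comprehension, and the
    # temporary that was returned immediately is inlined into the return.
    def pair_up(xs, ys):
        return [(x, y) for x, y in zip(xs, ys)]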



class PolicyNetwork(BaseNetwork):
8 changes: 2 additions & 6 deletions policy_training.py
@@ -141,8 +141,7 @@ def train_games(self, opponent, games):

    def process_results(self, opponent, games, step, summary):
        win_rate = np.mean([game.policy_player_score for game in games])
-        average_moves = sum([len(game.moves)
-                             for game in games]) / self.config.batch_size
+        average_moves = sum(len(game.moves) for game in games) / self.config.batch_size

Function PolicyTraining.process_results refactored with the following changes:

  • Replace unneeded comprehension with generator (comprehension-to-generator)
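In isolation the change looks like this (illustrative sketch with hypothetical values): sum consumes a generator expression lazily instead of first materializing a throwaway list.

    scores = [3, 1, 4, 1, 5]

    # Before: a full list is built only to be consumed once by sum().
    total = sum([s * 2 for s in scores])

    # After: the generator feeds sum() item by item, with no intermediate list.
    total = sum(s * 2 for s in scores)

For a handful of games per batch the difference is negligible; the benefit is avoiding an O(n) intermediate list on large iterables.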


        opponent_summary = tf.Summary()
        opponent_summary.value.add(
@@ -263,10 +262,7 @@ def move(self, move, policy_player_turn=False):
        self.positions.append(self.position)
        if self.position.gameover():
            self.result = self.position.result
-            if self.result:
-                self.policy_player_score = float(policy_player_turn)
-            else:
-                self.policy_player_score = 0.5
+            self.policy_player_score = float(policy_player_turn) if self.result else 0.5
Comment on lines -266 to +265

Function Game.move refactored with the following changes:

  • Replace if statement with if expression (assign-if-exp)
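The pattern in generic form (standalone sketch, variable names invented for illustration): a four-line branch whose only job is to pick a value collapses into one conditional expression.

    won = True

    # Before: an if/else statement that only selects between two values.
    if won:
        score = 1.0
    else:
        score = 0.5

    # After: a conditional expression assigns the value in a single statement.
    score = 1.0 if won else 0.5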



def main(_):
2 changes: 1 addition & 1 deletion util.py
@@ -10,7 +10,7 @@ def find_previous_run(dir):
    if os.path.isdir(dir):
        runs = [child[4:] for child in os.listdir(dir) if child[:4] == 'run_']
        if runs:
-            return max([int(run) for run in runs])
+            return max(int(run) for run in runs)

Function run_directory.find_previous_run refactored with the following changes:

  • Replace unneeded comprehension with generator (comprehension-to-generator)
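One caveat worth noting with this instance (sketched below with a hypothetical runs value): unlike sum(), max() raises ValueError on an empty iterable, so dropping the list is only safe because the preceding if runs: guard rules that case out. An explicit default would express the same fallback the function implements with its final return 0:

    runs = []
    # max(int(run) for run in runs)  # would raise ValueError on an empty input
    latest = max((int(run) for run in runs), default=0)  # yields 0 instead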


    return 0