Risky food simulation #4
Conversation
Codecov Report
```
@@           Coverage Diff           @@
##             main       #4   +/-   ##
=======================================
  Coverage        ?   17.20%
=======================================
  Files           ?        3
  Lines           ?       93
  Branches        ?        0
=======================================
  Hits            ?       16
  Misses          ?       77
  Partials        ?        0
```
Really exciting work. Runs perfectly and code reads well!
```python
# initialize model with one agent
model = RiskyFoodModel(1)
model.prob_notcontaminated = prob_notcontaminated

results = []
total_runs = 100
for i in range(total_runs):
    results.append(model.get_risky_food_status())

# use counter to tally the results
result_count = Counter(results)

# the expected value is the probability times the number of times we ran it
expected = total_runs * model.prob_notcontaminated
assert math.isclose(
    result_count[FoodStatus.NOTCONTAMINATED], expected, abs_tol=total_runs * 0.1
)
```
This is nice. 'Testing' takes on an interesting meaning here. One knows (from the scholarship) what expected outcomes of particular models ought to be. These can be written into our implementations directly as tests.
I wouldn't have thought of using unit tests to check reproducibility / replicability for known models, but it would be interesting to automate that (maybe more like integration testing?) when we're working on a known model with expected outputs. I don't know if we'll be working much with simulations where we know the expected outputs, though; they seem less interesting for investigating risk attitudes.
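A side note on the tolerance in the test above: since each run is an independent Bernoulli trial, the count of NOTCONTAMINATED results is binomially distributed, so the flat `abs_tol=total_runs * 0.1` could be replaced with a bound derived from the binomial standard deviation. A minimal sketch, using an illustrative `p` (the three-sigma bound is a suggestion, not something in this PR):

```python
import math

total_runs = 100
p = 0.7  # illustrative value for prob_notcontaminated

expected = total_runs * p
# standard deviation of a Binomial(n, p) count
sigma = math.sqrt(total_runs * p * (1 - p))
# a three-sigma tolerance is exceeded by chance only ~0.3% of the time
abs_tol = 3 * sigma
```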
Every agent gets a parameter `r` between 0 and 1. [or DISCRETE: 8 buckets etc.]

EACH ROUND:
- Nature selects a probability `p` for **N**
- For each agent: if `r` > `p`, then they choose RISKY; else SAFE
- Nature flips a coin with bias `p` for **N**, and announces **N** or **C**
- If **N**: everyone who chose RISKY gets 3, everyone who chose SAFE gets 2
- If **C**: everyone who chose RISKY gets 1, everyone SAFE 2
- Reproduce in proportion to payoff
  - Either each agent gets # of offspring = payoff [offspring replace the original, which "dies off"]
  - OR: take the total payoff for RISKYs over the total for everyone; that proportion of the new population is RISKY

END ROUND
I like this semi-formal way of describing a 'round'! Very clear articulation of the steps/rules of the game.
this is all from Lara! I just added some formatting. very clear, right?
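To make the round description above concrete, here is a minimal Python sketch of one round under the proportional-reproduction variant. The names (`run_round`, the payoff constants) are illustrative, not from this PR; the payoffs follow the table in the description:

```python
import random

RISKY_N, RISKY_C, SAFE = 3, 1, 2  # payoffs from the round description

def run_round(risk_levels):
    """Run one round; risk_levels is a list of r values in [0, 1]."""
    p = random.random()  # Nature selects a probability p for N
    # each agent chooses RISKY if its r exceeds p, else SAFE
    choices = ["RISKY" if r > p else "SAFE" for r in risk_levels]
    # Nature flips a coin with bias p, announcing N (True) or C (False)
    announced_n = random.random() < p
    payoffs = [
        (RISKY_N if announced_n else RISKY_C) if c == "RISKY" else SAFE
        for c in choices
    ]
    # reproduce in proportion to payoff: resample the next generation's
    # risk levels weighted by this round's payoffs, population size fixed
    return random.choices(risk_levels, weights=payoffs, k=len(risk_levels))
```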
## Running the simulation

- Install python dependencies as described in the main project readme (requires mesa)
- To run from the main `simulating-risk` project directory:
  - Configure python to include the current directory in the import path;
    for C-based shells, run `setenv PYTHONPATH .`; for bash, run `export PYTHONPATH=.`
  - To run interactively with mesa runserver: `mesa runserver simulatingrisk/risky_food/`
Works for me!
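For reference, the whole sequence on a bash-style shell would look something like this (assuming dependencies are already installed per the main readme):

```bash
cd simulating-risk        # the main project directory
export PYTHONPATH=.       # put the current directory on the import path
mesa runserver simulatingrisk/risky_food/
```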
```python
def step(self):
    # choose food based on the probability not contaminated and risk tolerance
    if self.risk_level > self.model.prob_notcontaminated:
        choice = FoodChoice.RISKY
    else:
        choice = FoodChoice.SAFE
    self.payoff = self.model.payoff(choice)
```
Again I feel like a 1st-person POV might be more legible as a way of describing what's happening
Can you say more? I don't understand.
oh, read in the wrong order - I see your comments on the other PR now
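For context on the `self.model.payoff(choice)` call above: a hypothetical sketch of what the model-side method could look like, inferred from the payoff table in the round description (the actual implementation in this PR may differ):

```python
def payoff(self, choice):
    # payoffs per the round description: SAFE always pays 2;
    # RISKY pays 3 when food is not contaminated, 1 when it is
    if choice == FoodChoice.SAFE:
        return 2
    return 1 if self.contaminated else 3
```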
```python
self.datacollector = mesa.DataCollector(
    model_reporters={
        "prob_notcontaminated": "prob_notcontaminated",
        "contaminated": "contaminated",
        "average_risk_level": "avg_risk_level",
        "min_risk_level": "min_risk_level",
        "max_risk_level": "max_risk_level",
        "num_agents": "total_agents",
    },
    agent_reporters={"risk_level": "risk_level", "payoff": "payoff"},
)
```
Interesting setup they provide
seems very verbose / redundant to me, but at least workable enough for now
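On the verbosity point: one payoff of wiring up all these reporters is that mesa's DataCollector exposes the collected values as pandas DataFrames after a run. A minimal usage sketch (the agent count and step count are illustrative, and it assumes the model's `step()` calls `datacollector.collect(self)` as mesa models conventionally do):

```python
model = RiskyFoodModel(100)   # illustrative number of agents
for _ in range(50):           # illustrative number of steps
    model.step()

# one row per step, one column per model reporter
model_df = model.datacollector.get_model_vars_dataframe()
# indexed by (step, agent id), one column per agent reporter
agent_df = model.datacollector.get_agent_vars_dataframe()
```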
implements the "risky food" game as described in #3