Let’s make a docker container out of this thing. The key win from docker is that it gives you a headache-free way to “just run” something without worrying about dependencies, install/uninstall, binary incompatibility, and all that great stuff. So in our case it’s a bit of overkill; I’ve taken pains to keep the external library dependencies down to almost nothing and use “vanilla” python3.
But there are still hurdles. I find myself going back and double-checking which pip command I ran, and as we saw in the datasets work, those pip dependencies can start stacking up. Using a virtual environment there is like a mini developer-workstation container model that keeps all the libs stowed where they can’t cause trouble. That’s still a little forbidding for the casual user who just wants to run it, though. And for disorganized devs like me, part of the beauty of docker is that the Dockerfile itself gives you a codified statement of your environment and requirements that you can freeze or feed into further regression testing.
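For context, the kind of venv ritual docker is saving the casual user from looks roughly like this (the requirements file name is just illustrative):

$ python3 -m venv .venv
$ source .venv/bin/activate
$ pip install cfbd
$ pip freeze > requirements.txt
$ deactivate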
Since we are currently so simple, this should be a piece of cake.
FROM python:3

WORKDIR /usr/src/app

RUN pip install cfbd
RUN git clone https://github.com/mcccfb/cfb.git

WORKDIR cfb/vconf

CMD [ "python3", "./mcc_schedule.py", "-v" ]
This seems like a minimal working Dockerfile.
We can build it with:
$ docker build -t mcc-app .
And run it with:
$ docker run --env CFBD_API_KEY=your_secret_key_here \
--rm --name running-mcc mcc-app
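The --env flag is how the API key gets into the container’s environment; inside the script the other half of that handshake is presumably nothing more exotic than a standard environment lookup, something along these lines (a sketch, not the repo’s actual code):

import os

# Assumption: the script reads the key from the environment rather than a config file.
api_key = os.environ.get("CFBD_API_KEY")
if not api_key:
    raise SystemExit("CFBD_API_KEY is not set")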
So far so good. If someone else did want to run this code, having a Dockerfile makes it a tiny bit more portable. After all, not everyone is going to be working on a system that has python3 installed out of the box. So as long as they have to install something, it might as well be a complete binary package that runs the full program rather than a stack of language, libs, and source.
This also gives us a launching point if we ever fully realize the 538-style standings and simulation web interface. Interested parties should be able to host their own. That’s where docker can really shine, putting together a complex “machine” with a webserver that needs a few libraries.
Note that we’re still totally stateless so we can avoid any docker volume hassles for this round. Simple.
The CMD line makes a verbose run of the current year the container’s default:
There are no standings, possibly because no games were completed.
USC at Stanford on Sep 09, 2022
Fresno State at USC on Sep 16, 2022
San José State at Fresno State on Oct 14, 2022
San Diego State at Fresno State on Oct 28, 2022
Stanford at UCLA on Oct 28, 2022
California at USC on Nov 04, 2022
San José State at San Diego State on Nov 11, 2022
Stanford at California on Nov 18, 2022
USC at UCLA on Nov 18, 2022
UCLA at California on Nov 24, 2022
Full Enumeration Simulation:
USC 152 [14%]
San Diego State 144 [14%]
San José State 144 [14%]
California 128 [12%]
UCLA 124 [12%]
Stanford 124 [12%]
Fresno State 116 [11%]
No Winner 92 [8%]
Monte Carlo [Sampled Home Margin Predictor] Simulation:
USC 1609 [16%]
Fresno State 1579 [15%]
UCLA 1506 [15%]
California 1490 [14%]
San Diego State 1305 [13%]
Stanford 1251 [12%]
San José State 1163 [11%]
No Winner 97 [0%]
At least one missing element error prevents Elo Predictor from finishing: no Elo for Stanford 2022, 10, ,
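Anything you put after the image name on docker run replaces the CMD, so overriding the default is easy. For example, dropping the -v for a quieter run (assuming the script behaves itself without the flag):

$ docker run --env CFBD_API_KEY=your_secret_key_here \
    --rm --name running-mcc mcc-app python3 ./mcc_schedule.py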
After the last post I added a userland Exception that gets raised when the Elo data is missing. No more garbage in / garbage out there.
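The real class lives in the repo, but the shape of that guard is roughly the following (the exception name and table layout here are illustrative, not the repo’s actual identifiers):

class MissingEloError(Exception):
    """Raised when a team has no Elo rating for a given season and week."""


def elo_for(team, year, week, elo_table):
    # elo_table is assumed to map (team, year, week) -> rating built from the cfbd data
    try:
        return elo_table[(team, year, week)]
    except KeyError:
        raise MissingEloError(f"no Elo for {team} {year}, {week}") from None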
As I was testing this I ran the full historical run and saw huge deviations from my published results. Uh-oh. It looks like the cfbd endpoint I depend on to distinguish which teams are FBS (or the historical equivalent) is broken right now: it’s reporting far fewer FBS teams than should exist for many years prior to 2000.
Hopefully that’s something they can fix, but it does point out another real-world reason I need to get the testing situation fixed up. An automated regression against the known resultset would have caught this earlier.
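That regression could be as simple as freezing the published resultset and diffing every full historical run against it. A rough sketch of what I have in mind, where the frozen file path and the --historical/--json flags are placeholders for whatever the real harness ends up exposing:

import json
import subprocess
import unittest


class HistoricalRegressionTest(unittest.TestCase):
    def test_full_historical_run_matches_frozen_results(self):
        # Frozen copy of the published standings, checked into the repo.
        with open("tests/expected_historical_results.json") as f:
            expected = json.load(f)

        # Run the same entry point the container uses, over the full history.
        # Both flags are placeholders for however mcc_schedule.py exposes that mode.
        result = subprocess.run(
            ["python3", "./mcc_schedule.py", "--historical", "--json"],
            capture_output=True, text=True, check=True,
        )

        self.assertEqual(json.loads(result.stdout), expected)


if __name__ == "__main__":
    unittest.main()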