This post will walk you through an example of logging experiments to Weights & Biases with AllenNLP via the `WandBCallback`. It assumes you're already familiar with AllenNLP (if not, see Getting Started Using the Library) and have both `allennlp` and `allennlp-models` version 2.2.0 or greater installed. You'll also need a W&B account. If you don't have one yet, it's free to make one, and you can sign up through GitHub or Google.
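If you still need to install or upgrade the two packages, something like the following should do it (the version pins here just mirror the requirement above):

```bash
pip install 'allennlp>=2.2.0' 'allennlp-models>=2.2.0'
```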
## Setup
Let's get set up with a small experiment first. Run the following to create a new directory and download the data you'll need:
```bash
mkdir wandb_test_run
cd wandb_test_run
wget https://raw.githubusercontent.com/epwalsh/nlp-models/master/data/greetings.tar.gz
tar -xzvf greetings.tar.gz
```
Then create an AllenNLP experiment configuration file called `config.jsonnet` and copy over the contents (I've posted the full file as a Gist).
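The full file lives in the Gist; the fragment below is only a minimal sketch of the `trainer` section, since that's where W&B logging gets wired up. The epoch count and optimizer are placeholder values of mine, not necessarily what the Gist uses:

```jsonnet
// Sketch of the trainer section only -- the dataset reader and model
// definitions come from the Gist. The "wandb" callback is AllenNLP's
// WandBCallback, and it's what enables logging to Weights & Biases.
{
  trainer: {
    num_epochs: 5,                    // placeholder value
    optimizer: { type: 'adam' },      // placeholder value
    callbacks: [
      {
        type: 'wandb',                // registered name of WandBCallback
        project: 'allennlp-testing',  // the W&B project we create below
      },
    ],
  },
}
```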
Lastly, head over to the W&B dashboard and create a new project called "allennlp-testing". Then create an API key from your settings page and set it to the environment variable `WANDB_API_KEY`.
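For example, in your shell (the key value below is just a placeholder):

```bash
# Paste the real API key from your W&B settings page here.
export WANDB_API_KEY="your-api-key-here"
```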
## Running the experiment
After you've completed all the setup steps, just run the `allennlp train` command shown below.
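This is the standard AllenNLP training invocation; the serialization directory name (`output`) is an arbitrary choice of mine:

```bash
# Train using config.jsonnet, writing checkpoints, metrics, and logs
# to the serialization directory given by -s.
allennlp train config.jsonnet -s output
```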
Once the first epoch of training starts, you'll see a new run appear in your W&B project with a funky name like "fiery-butterfly-12" 🔥🦋. The experiment should only take a minute or two to complete, after which the metrics in W&B will be fully populated. Click on the run in W&B to explore.
## Exploring what's logged to W&B
AllenNLP logs a lot of information to W&B, some of which (like gradient and parameter statistics) you can disable to save on storage.
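If you do want to trim what gets uploaded, the tweak below is a sketch based on my reading of the callback's parameters; the option names (`should_log_parameter_statistics`, `watch_model`) are assumptions to verify against the `WandBCallback` docs for your version:

```jsonnet
// Hypothetical variant of the "wandb" callback entry in config.jsonnet.
{
  type: 'wandb',
  project: 'allennlp-testing',
  should_log_parameter_statistics: false,  // assumed option: skip parameter stats
  watch_model: false,                      // assumed option: skip gradient tracking
}
```

Let's walk through everything that's logged step-by-step.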
### Overview
First, if you click on the "Overview" tab on the left side panel, you'll see a lot of useful information about the operating system, Python environment, hardware, etc.
Keep scrolling down and you'll also see a "Config" section, which actually mirrors the AllenNLP configuration file we used to define this experiment.
Cool! Let's keep moving.
### Charts
My favorite tab is probably "Charts". This tab is pretty self-explanatory. It contains charts for all of the batch and epoch metrics that AllenNLP tracks during training and validation.
You'll notice that the default x-axis variable is just "Step", which corresponds to the training batch number. But when viewing epoch metrics it usually makes more sense to change this to "epoch".
Another neat thing about the Charts tab is that you can reorganize it however you'd like! For example, you could drag train and validation epoch metrics together into the same section, and leave batch metrics on their own.
### Files
Lastly, I'll talk about the "Files" tab. W&B uploads a few files by default, including:

- `output.log`: the logs captured by W&B,
- `config.yaml`: the W&B config (pretty much the same as the AllenNLP config, but in YAML format),
- `requirements.txt`: a snapshot of the Python environment.

AllenNLP also adds:

- `config.json`: the AllenNLP experiment configuration file in JSON form,
- `out.log`: all of the logs produced by AllenNLP.

It's worth noting that `out.log` will usually contain more lines than `output.log`, since W&B can only capture logs from when it's initialized at the start of training, whereas `out.log` will contain ALL of the logs going back to when the `allennlp` command is invoked.

That's it! This is a brand new feature in AllenNLP, so please let us know if you find any bugs or have suggestions for improvements!
## Replies

> This is great! It'll really pay off when you run that experiment many times and you don't have to do the setup every time.

> Just merged some additional improvements! ➡️ #5114 These will go out with the 2.3 release coming soon!