We train a language model conditioned on different input sources. We call these sources communities, but they could be anything really -- different individuals, time periods, etc.
The language models (transformer, LSTM) are based on those defined in the PyTorch Sequence Modeling tutorial.
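As a rough illustration of what "conditioning on a source" can look like, here is a minimal sketch of an LSTM language model that adds a learned community embedding to every token embedding. All class names, dimensions, and the choice of additive conditioning are illustrative assumptions, not the repository's actual implementation.

```python
import torch
import torch.nn as nn

class CommunityConditionedLM(nn.Module):
    """Toy LSTM language model conditioned on a community ID.

    Sketch only: sizes, names, and the additive conditioning scheme
    are assumptions for illustration, not the repo's implementation.
    """

    def __init__(self, vocab_size, n_communities, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, emb_dim)
        self.comm_emb = nn.Embedding(n_communities, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, community):
        # tokens: (batch, seq_len) token IDs; community: (batch,) community IDs
        x = self.token_emb(tokens)
        # Broadcast the per-sequence community embedding over every position.
        c = self.comm_emb(community).unsqueeze(1)
        h, _ = self.lstm(x + c)
        return self.out(h)  # logits over the vocabulary at each position

model = CommunityConditionedLM(vocab_size=100, n_communities=5)
tokens = torch.randint(0, 100, (2, 10))
community = torch.tensor([0, 3])
logits = model(tokens, community)
```

Other conditioning schemes (a special prefix token per community, or concatenating the community embedding instead of adding it) drop in with small changes to `forward`.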