
Training Speed #5


Description

@qfzhu

Hi, may I ask how long training takes on the Reddit and Holmes datasets?
I ran the code on a single GPU, and here is the logging output:
Epoch 1/1
2560/2560 [==============================] - 1058s 413ms/step - loss: 10.8904 - decoder_softmax_loss: 3.5243 - concat_1_loss: 0.0277
n_batch: 20, prev 0
spent: 1067 sec
train: 10.8904
It seems that training one epoch (10M instances) will take over a month, which is rather slow.
Is this a normal speed, or did I miss something? Thank you.
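
For reference, here is a rough back-of-envelope estimate of the epoch time implied by the log above. It assumes that the "2560/2560" counter counts individual training instances and that each such segment takes roughly the reported 1067 seconds; both are assumptions on my part, not something the log confirms.

# Rough epoch-time estimate from the numbers in the log above.
# Assumptions (not confirmed): "2560/2560" counts individual instances,
# and each segment takes ~1067 s of wall-clock time ("spent: 1067 sec").

SEGMENT_SIZE = 2560            # instances per logged segment (assumption)
SEGMENT_SECONDS = 1067         # wall-clock seconds per segment, from the log
EPOCH_INSTANCES = 10_000_000   # stated size of one epoch

segments_per_epoch = EPOCH_INSTANCES / SEGMENT_SIZE      # ~3906 segments
epoch_seconds = segments_per_epoch * SEGMENT_SECONDS     # ~4.17e6 s
epoch_days = epoch_seconds / 86_400                      # ~48 days

print(f"Estimated epoch time: {epoch_days:.1f} days")

Under these assumptions the estimate comes out to roughly 48 days per epoch, which matches the "over a month" figure mentioned above.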
