s46144603 VQVAE OASIS #478
Conversation
…alidate - each preprocessed by normalising by a factor of 255.
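A minimal sketch of this preprocessing step, assuming the OASIS slices are greyscale PNGs loaded with Pillow; the directory names here are illustrative, the real paths live in dataset.py:

```python
import glob

import numpy as np
from PIL import Image

def load_images(directory):
    """Load greyscale OASIS slices and normalise pixel values to [0, 1] by dividing by 255."""
    paths = sorted(glob.glob(f"{directory}/*.png"))
    images = [np.asarray(Image.open(p), dtype=np.float32) / 255.0 for p in paths]
    return np.expand_dims(np.stack(images), axis=-1)  # add a channel dimension

# Illustrative directory names for the train/test/validate splits.
x_train = load_images("keras_png_slices_train")
x_test = load_images("keras_png_slices_test")
x_validate = load_images("keras_png_slices_validate")
```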
…ers in the model.
…r through the vq_layer to obtain the quantised output, which is finally fed through the decoder to complete the model.
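The encoder → vq_layer → decoder wiring might look roughly like the Keras sketch below; the layer sizes, codebook size and 256x256 input shape are assumptions rather than what modules.py actually uses:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

class VectorQuantizer(layers.Layer):
    """Snap each encoded vector to its nearest codebook entry (straight-through gradients)."""
    def __init__(self, num_embeddings=128, embedding_dim=16, beta=0.25, **kwargs):
        super().__init__(**kwargs)
        self.num_embeddings = num_embeddings
        self.embedding_dim = embedding_dim
        self.beta = beta
        self.embeddings = self.add_weight(
            shape=(embedding_dim, num_embeddings), initializer="uniform",
            trainable=True, name="codebook")

    def call(self, x):
        flat = tf.reshape(x, [-1, self.embedding_dim])
        # Squared distance from every encoded vector to every codebook entry.
        distances = (tf.reduce_sum(flat ** 2, axis=1, keepdims=True)
                     - 2 * tf.matmul(flat, self.embeddings)
                     + tf.reduce_sum(self.embeddings ** 2, axis=0, keepdims=True))
        indices = tf.argmin(distances, axis=1)
        quantized = tf.reshape(
            tf.matmul(tf.one_hot(indices, self.num_embeddings), self.embeddings,
                      transpose_b=True),
            tf.shape(x))
        # Commitment and codebook losses are folded into the model's total loss.
        self.add_loss(self.beta * tf.reduce_mean((tf.stop_gradient(quantized) - x) ** 2)
                      + tf.reduce_mean((quantized - tf.stop_gradient(x)) ** 2))
        # Straight-through estimator so gradients flow back into the encoder.
        return x + tf.stop_gradient(quantized - x)

# Illustrative encoder/decoder; the real architectures live in modules.py.
encoder = keras.Sequential([
    keras.Input((256, 256, 1)),
    layers.Conv2D(32, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2D(64, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2D(16, 1, padding="same"),  # project down to the embedding dimension
])
decoder = keras.Sequential([
    keras.Input(encoder.output_shape[1:]),
    layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(1, 3, padding="same"),
])

inputs = keras.Input((256, 256, 1))
quantized = VectorQuantizer()(encoder(inputs))  # encoder output -> vq_layer
outputs = decoder(quantized)                    # quantised latents -> decoder
vqvae = keras.Model(inputs, outputs, name="vqvae")
```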
… to the train.py file and also added plots of the losses and metrics during training.
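Assuming the `vqvae` model and the data arrays from the sketches above, the loss plots added to train.py could be produced along these lines (the optimizer, batch size and epoch count are placeholders):

```python
import matplotlib.pyplot as plt

# Reconstruction loss via compile(); the VQ commitment/codebook losses are
# added automatically through the layer's add_loss call.
vqvae.compile(optimizer="adam", loss="mse")
history = vqvae.fit(x_train, x_train,
                    validation_data=(x_validate, x_validate),
                    batch_size=32, epochs=20)

# Plot the training curves, mirroring the loss/metric plots in train.py.
plt.plot(history.history["loss"], label="training loss")
plt.plot(history.history["val_loss"], label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.savefig("training_loss.png")
```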
…compare original vs reconstructed images as well as calculate the structural similarity (SSIM) between the two.
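A sketch of the SSIM comparison, assuming TensorFlow's `tf.image.ssim` and the [0, 1]-normalised images and `vqvae` model from the sketches above:

```python
import tensorflow as tf

def ssim(original, reconstructed):
    """Structural similarity between one original slice and its reconstruction."""
    # max_val=1.0 because the images were normalised to [0, 1].
    return float(tf.image.ssim(tf.convert_to_tensor(original),
                               tf.convert_to_tensor(reconstructed), max_val=1.0))

# Compare a handful of test images against their reconstructions.
reconstructions = vqvae.predict(x_test[:8])
for original, reconstruction in zip(x_test[:8], reconstructions):
    print(f"SSIM: {ssim(original, reconstruction):.3f}")
```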
… main title and description of the problem. Some headings were also added.
…e dependencies and references.
…set, so this was fixed to sample from the test set.
…the loss as a result.
…al test images vs reconstructed images. Not certain yet whether they will display on GitHub, however.
…ll as a loss rate graph.
…ach reconstructed image shown. All are above 0.6!
…es to obtain OASIS data.
…s well as the version of Python used.
This is an initial inspection; no action is required at this point.
| Criterion | Assessment |
| --- | --- |
| Good Practice (Design/Commenting, TF/Torch Usage) | Adequate use and implementation |
| Recognition Problem | Solves problem (missing generations) -2 |
| Commit Log | Meaningful commit messages |
| Documentation | ReadMe OK, broken links -2 |
| Pull Request | Successful Pull Request (Working Algorithm Delivered on Time in Correct Branch) |
This VQVAE analyses the OASIS brain scan dataset. The path variable in dataset.py will need to be altered before running. Also, dataset.py should be run first, followed by modules.py, train.py and finally predict.py.