🔥 Training neural networks 𝘄𝗶𝘁𝗵𝗼𝘂𝘁 𝗯𝗮𝗰𝗸𝗽𝗿𝗼𝗽?

Geoffrey Hinton proposed the forward-forward (FF) algorithm at this year's NeurIPS. The idea is motivated by the observation that there is little to no evidence that mammal brains perform backprop-like operations when learning. Are mammal brains strictly feed-forward? Maybe.

Paper link: https://www.cs.toronto.edu/~hinton/FFA13.pdf

Want to try FF yourself? Now you can, in PyTorch! Check it out, it's now available on #github: https://github.com/mohammadpz/pytorch_forward_forward
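The core of FF, as described in the paper, is that each layer is trained with a purely local objective: push the "goodness" (sum of squared activations) of positive (real) data above a threshold, and the goodness of negative (fake) data below it, with no gradient flowing between layers. Here is a minimal sketch of one such layer; the class name `FFLayer`, the learning rate, and the threshold value are illustrative choices, not taken from the linked repo:

```python
import torch

def goodness(h):
    # "goodness" of a layer's output: sum of squared activations per sample
    return h.pow(2).sum(dim=1)

class FFLayer(torch.nn.Module):
    # hypothetical minimal layer; hyperparameters are illustrative
    def __init__(self, d_in, d_out, lr=0.03, threshold=2.0):
        super().__init__()
        self.linear = torch.nn.Linear(d_in, d_out)
        self.threshold = threshold
        self.opt = torch.optim.SGD(self.parameters(), lr=lr)

    def forward(self, x):
        # length-normalize the input so only its direction is passed on,
        # forcing the next layer to compute its own goodness
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return torch.relu(self.linear(x))

    def train_step(self, x_pos, x_neg):
        # local objective: goodness of positive data above the threshold,
        # goodness of negative data below it. Gradients stay inside this
        # layer; no error signal is propagated to earlier layers.
        g_pos = goodness(self.forward(x_pos))
        g_neg = goodness(self.forward(x_neg))
        loss = torch.nn.functional.softplus(torch.cat([
            self.threshold - g_pos,   # want g_pos > threshold
            g_neg - self.threshold,   # want g_neg < threshold
        ])).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        return loss.item()
```

A full network would just stack such layers and call each layer's `train_step` in turn on the (detached, normalized) outputs of the previous one, which is roughly how the linked repo structures its training loop.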

Published by Artificialcybernet
