Using Weight Sharing to Improve Machine Learning (image)
University of Texas at Austin, Texas Advanced Computing Center

Caption: An example of grouped partial weight sharing, here with two groups. Lease's team stochastically selects embedding weights to be shared between words belonging to the same group. Weight sharing constrains the number of free parameters that a system must learn, increases the efficiency and accuracy of the neural model, and serves as a flexible way to incorporate prior knowledge, combining the best of human knowledge with machine learning.

Credit: Ye Zhang, Matthew Lease, UT Austin; Byron C. Wallace, Northeastern University
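As a rough illustration of the idea described in the caption, the sketch below ties a stochastically chosen subset of embedding dimensions for words in the same group to a shared group vector. The vocabulary, group assignments, embedding size, and sharing rate are hypothetical placeholders, and the code is an illustrative sketch of grouped partial weight sharing rather than the authors' actual implementation.

```python
import numpy as np

# Hedged sketch of grouped partial weight sharing for word embeddings.
# All values below (vocab, groups, dimensions, share_rate) are made up
# for illustration and do not come from the published work.

rng = np.random.default_rng(0)

vocab = ["good", "great", "bad", "awful", "table"]
groups = {"good": 0, "great": 0, "bad": 1, "awful": 1}  # "table" is ungrouped
embed_dim = 8
share_rate = 0.5  # probability that a given dimension is tied to the group

# Independent (free) embedding weights, one row per word.
word_emb = rng.normal(size=(len(vocab), embed_dim))

# One shared embedding vector per group.
group_emb = rng.normal(size=(len(set(groups.values())), embed_dim))

def embedding(word):
    """Return a word's embedding with a stochastically selected subset of
    dimensions replaced by its group's shared weights."""
    vec = word_emb[vocab.index(word)].copy()
    if word in groups:
        g = groups[word]
        # Stochastically pick which dimensions are shared for this lookup.
        shared_mask = rng.random(embed_dim) < share_rate
        vec[shared_mask] = group_emb[g, shared_mask]
    return vec

# Words in the same group ("good", "great") overlap on the shared dimensions,
# which reduces the number of free parameters the model must learn.
print(embedding("good"))
print(embedding("great"))
```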