Ask HN: What's the best technical talk you've heard? | Hacker News - https://news.ycombinator.com/item...
Advances in Neural Information Processing Systems (NIPS) - http://books.nips.cc/
How LeBron James transformed his game to become a highly efficient scoring machine - Grantland - http://www.grantland.com/story...
LeBron James's ethic of improvement. - Michael Nielsen
You aren't getting any better | Robert Heaton - http://robertheaton.com/2013...
On being realistic about how much better you're getting at what you do - Michael Nielsen
Is Giving the Secret to Getting Ahead? - NYTimes.com - http://www.nytimes.com/2013...
The Four Habits that Form Habits : zenhabits - http://zenhabits.net/habitses/
Stochastic Gradient Tricks - http://leon.bottou.org/papers...
Paper by Bottou on how to do stochastic gradient descent - Michael Nielsen
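As an illustration of the kind of method Bottou's paper covers, here is a minimal sketch of stochastic gradient descent for linear regression. The decaying learning-rate schedule and all names here are illustrative assumptions, not taken from the paper itself.

```python
# Minimal stochastic gradient descent (SGD) sketch: fit y ~ w*x + b by
# updating on one example at a time. Illustrative only; the learning-rate
# schedule below is one common choice, not a prescription from the paper.
import random

def sgd_linear_regression(data, eta0=0.02, lam=0.01, epochs=200):
    """Fit y ~ w*x + b with per-example gradient steps on squared error."""
    w, b, t = 0.0, 0.0, 0
    for _ in range(epochs):
        random.shuffle(data)                     # visit examples in random order
        for x, y in data:
            eta = eta0 / (1 + eta0 * lam * t)    # slowly decaying step size
            err = (w * x + b) - y                # prediction error on this example
            w -= eta * err * x                   # gradient step for the weight
            b -= eta * err                       # gradient step for the bias
            t += 1
    return w, b

random.seed(0)  # for reproducibility of the shuffles
data = [(float(x), 2.0 * x + 1.0) for x in range(-5, 6)]
w, b = sgd_linear_regression(data)
# w should end up near 2.0 and b near 1.0
```

Because each update touches a single example, the cost per step is independent of the dataset size, which is the main reason SGD scales to large training sets.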
NIST Special Database 19 - http://www.nist.gov/srd...
The successor to NIST Special Databases 3 and 7, earlier databases for doing character recognition. MNIST was based in part on Special Database 3. - Michael Nielsen
label:social_computing - Google Scholar Citations - http://scholar.google.ca/citatio...
Algorithmic Rape Jokes in the Library of Babel | Quiet Babylon - http://quietbabylon.com/2013...
On the problems caused when you create t-shirts using a data-mining algorithm. As Kevin Slavin has noted, this is increasingly how our culture is being created. - Michael Nielsen
Why does regularization work? | Justin Domke's Weblog - http://justindomke.wordpress.com/2008...
I like the idea about regularizing the regularization parameters. - Michael Nielsen
Research at Google - Google+ - *Building High-level Features Using Large Scale… - https://plus.google.com/u...
ImageNet Classification with Deep Convolutional Neural Networks - http://www.cs.toronto.edu/~hinton...
Conversational Speech Transcription Using Context-Dependent Deep Neural Networks - Microsoft Research - http://research.microsoft.com/apps...
Someone got the natural gas report 400 ms early | Hacker News - https://news.ycombinator.com/item...
Regularization and model selection (Andrew Ng) - http://cs229.stanford.edu/notes...
Visual Storytelling: The Digital Video Documentary - http://documentarystudies.duke.edu/uploads...
A brief skim suggests that this is a very good basic guide to making documentaries. - Michael Nielsen
Training products of experts by minimizing contrastive divergence (pdf) - http://www.cs.utoronto.ca/~hinton...
Theoretical and Practical Questions about Regularization - MetaOptimize Q+A - http://metaoptimize.com/qa...
A pretty good summary of how people think about regularization. - Michael Nielsen
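As a toy illustration of the most common scheme such summaries discuss, here is L2 regularization (weight decay) in a single gradient step; the penalty strength `lam` and function names are my own assumptions for the sketch.

```python
# Toy L2 regularization (weight decay) sketch. Adding lam * ||w||^2 to the
# loss turns each gradient step into the plain step plus a shrinkage of
# every weight toward zero. Illustrative only.
def ridge_gradient_step(w, grad_loss, eta=0.1, lam=0.5):
    """One gradient step on loss(w) + lam * ||w||^2."""
    return [wj - eta * (gj + 2 * lam * wj) for wj, gj in zip(w, grad_loss)]

# With a zero data gradient, the update only shrinks the weights:
w = [1.0, -2.0]
w = ridge_gradient_step(w, [0.0, 0.0])
# each weight is scaled by (1 - 2 * eta * lam), i.e. toward zero
```

The shrinkage factor makes explicit why large weights are discouraged: the penalty pulls them toward zero at every step, independent of the data.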
Dan Pallotta: The way we think about charity is dead wrong | Video on TED.com - http://www.ted.com/talks...
An extremely insightful talk about the way our society thinks about not-for-profit organizations. He highlights many of the systematic ways we hobble the not-for-profit sector. - Michael Nielsen
[bib2web] Yann LeCun's Publications - http://yann.lecun.com/exdb...
LeCun et al's well-known 1998 paper on document recognition. - Michael Nielsen
Vi Hart's Guide to Comments - YouTube - http://www.youtube.com/watch...
[bib2web] Yann LeCun: Efficient BackProp - http://yann.lecun.com/exdb...
Eric Battenberg - Google+ - I know there are plenty of books out there on neural… - https://plus.google.com/1175844...
[1206.5533] Practical recommendations for gradient-based training of deep architectures - http://arxiv.org/abs/1206.5533
"Learning algorithms related to artificial neural networks and in particular for Deep Learning may seem to involve many bells and whistles, called hyper-parameters. This chapter is meant as a practical guide with recommendations for some of the most commonly used hyper-parameters, in particular in the context of learning algorithms based on back-propagated gradient and gradient-based optimization. It also discusses how to deal with the fact that more interesting results can be obtained when allowing one to adjust many hyper-parameters. Overall, it describes elements of the practice used to successfully and efficiently train and debug large-scale and often deep multi-layer neural networks. It closes with open questions about the training difficulties observed with deeper architectures." - Michael Nielsen
Pay by the Bit: An Information-Theoretic Metric for Collective Human Judgment - http://research.google.com/pubs...
Proposes a measure for the utility of an individual's contribution to a task. - Michael Nielsen
The Fundamental Theorems of Welfare Economics - http://www.econ.umn.edu/~jchipm...
Phil Agre on issue entrepreneurship - http://groups.yahoo.com/group...
For 40 Years, This Russian Family Was Cut Off From All Human Contact, Unaware of World War II | History & Archaeology | Smithsonian Magazine - http://www.smithsonianmag.com/history...
Astonishing story of how a family built a place for themselves, alone in the world. - Michael Nielsen