There has been a trend for publications to report better and better numbers, but less and less insight. The literature is turning into a giant leaderboard, where publication depends on numbers and little else (such as insight and explanation). It is considered a feature that machine learning has become so powerful (and so opaque) that it is no longer necessary (or even relevant) to talk about how it works. Insight is not only no longer required; perhaps it is no longer even considered desirable. Transparency is good and opacity is bad. A recent best seller, Weapons of Math Destruction, is concerned that big data (and WMDs) increase inequality and threaten democracy, largely because of opacity. Algorithms are being used to make lots of important decisions, like who gets a loan and who goes to jail. If we tell the machine to maximize an objective function like making money, it will do exactly that, for better and for worse. Who is responsible for the consequences? Does it make it ok for machines to do bad things if no one knows what's happening and why, including those of us who created the machines?

1 Papers are reporting better numbers, but...

There has been a trend for publications to report better and better numbers, but less and less insight. Years ago, someone from an industrial lab presented a talk at a conference saying, basically, "I did it, I did it, I did it, but I'll be damned if I'll tell you how!" (his employer wouldn't allow him to tell us what we really wanted to hear). Since I also worked for an industrial lab at the time, I had a strong allergic reaction to this talk. I was worried that my employer might ask me to publish similar papers so they could take credit for my results while protecting their intellectual property as a trade secret. Since then, I have often argued that we need to reject papers that try to pull this kind of stunt. The better the numbers, the more important it is to reject the paper. We can't afford papers that report results without insight.

With the rise of neural nets, we are now seeing a new variation on this theme. We are seeing lots of papers, these days, that say, "I did it, I did it, I did it, but I don't know how!" Now even the author of the paper doesn't know how the machine does what it does. It is considered a feature that machine learning has become so powerful (and so opaque) that it is no longer necessary (or even relevant) to talk about how it works.