Would You Trust an Algorithm to Choose Your Next Vacation Destination?

We sometimes wonder: if we let an AI curate this newsletter, would it land on the same picks as TDS’s 100% human team? Would it rely on views, claps, and social shares, or could it somehow detect an article’s harder-to-quantify qualities: a writer’s voice, originality, or clarity? Carolina Bento asks a similar question in her superb explanation of decision tree algorithms. Using holiday-destination selection as an example, she demonstrates how such a system would work, as well as the limitations it would face.
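If you haven’t worked with decision trees before, the core idea is simply a cascade of yes/no splits on features. Here is a minimal sketch, with invented features, thresholds, and destinations (a real tree would learn its splits from data, e.g. with scikit-learn’s `DecisionTreeClassifier`):

```python
# Toy, hand-built decision tree for picking a vacation destination.
# Every feature name, threshold, and destination below is made up
# for illustration; a trained model would derive these from data.

def choose_destination(budget_usd: int, likes_beaches: bool,
                       max_flight_hours: int) -> str:
    """Walk the tree: each if/else is one split node, each return a leaf."""
    if budget_usd < 1000:            # root split: budget
        if max_flight_hours <= 3:    # second split: travel tolerance
            return "nearby city break"
        return "camping trip"
    if likes_beaches:                # second split: stated preference
        return "tropical island"
    return "mountain resort"

print(choose_destination(800, True, 2))     # → nearby city break
print(choose_destination(2500, False, 10))  # → mountain resort
```

The appeal Carolina highlights is that every prediction can be explained by reading off the path from root to leaf; the limitation is that the tree only knows the features you give it.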

Many experts argue that it’s more important to understand why a model produced a certain result than to be satisfied with its output. In some ways, a human-made decision is the same; luckily, with our weekly selection of highlights it’s quite easy to explain our choices. Robert Lange’s monthly roundup of deep learning research papers is a perennial favorite on TDS, and readers keep flocking to it because it not only lists and summarizes important developments in the field, but also adds context and analysis around them. On the other end of the article spectrum, Elena Etter dives deep into a less-discussed but crucial topic: the layers of subjectivity that are baked into data visualization, and how they affect the medium’s supposed neutrality and transparency.

As you probably know by now, we have a soft spot for well-executed tutorials and explainers. It’s always a treat when authors successfully distill a complex topic into an engaging post that inspires others to learn and take action. This week, we particularly enjoyed CJ Sullivan’s hands-on demonstration of creating graph embeddings in Neo4j and visualizing them in a Streamlit dashboard. Pierre Blanchart turned to model interpretability, showing how we can use the counterfactual-explanation approach with tree-ensemble models like XGBoost. Going the full distance from theory to practice, Borja Velasco (and coauthors) introduce us to the emerging method of double machine learning, and explain its applications in the context of causal inference. For anyone with a growing curiosity about computational intelligence, Brandon Morgan just launched an exciting new project: a full course on evolutionary computation. (If you’ve already read Brandon’s introduction, units one and two are now available!)

Our appreciation for solid, practical guides can only be matched by the joy we feel when we learn something new about issues and conversations that aren’t often on our radar. The TDS podcast is a venue for precisely these kinds of discussions, and Jeremie Harris’s recent episode with Jeffrey Ding, on China’s booming AI ecosystem, was no exception. Daniel Angelov posed a thought-provoking question for ML practitioners working in industry: “How do you know that the system you’ve been developing is reliable enough to be deployed in the real world?” He goes on to explore the testing practices of software development and to examine whether they can be just as useful in machine learning. Finally, we hosted a lively debate on TDS this past week, with posts weighing the importance of math skills for data scientists. We leave you with Sarem Seitz’s impassioned case for what they call the “most unappreciated skill in ML,” and why learning a profession’s theoretical foundations is just as important as one’s ability to ship good, clean code.

Thank you for taking a chance on our reading recommendations—we hope you enjoyed them as much as we did. And, as always, thank you for making our work possible.

Until the next Variable,
TDS Editors
Original post: https://towardsdatascience.com/would-you-trust-an-algorithm-to-choose-your-next-vacation-destination-447ae3877730
