Applying Machine Learning - favorite lessons I've learned (Interview)

“Applying machine learning is hard. Many organizations have yet to benefit from ML, and most teams still find it tricky to apply it effectively.

Though there are many ML courses, most focus on theory and students finish without knowing how to apply ML. Practical know-how is gained via hands-on experience and seldom documented—it’s hard to find it in a textbook, class, or tutorial. There’s a gap between knowing ML vs. applying it at work.” - ApplyingML

Eugene Yan’s (eugeneyan.com, Applied Scientist at Amazon) ApplyingML initiative aims to collect and share the tacit knowledge that is usually only learned after entering the field. As part of the initiative, I’ve written a detailed post about many of the lessons I’ve learned so far in machine learning, especially on topics that are almost impossible to learn in school, such as ML in production and ML in real products.

The full interview is cross-posted at applyingml.com. Check out the other mentor interviews as well - I greatly enjoyed reading through them!

How do you work with business to identify and define problems suited for machine learning? How do you align ML projects with business objectives?

This is a loose framing, but in my experience it is important to communicate through the lens of the following: how does it lead to more customers, better retention, or a better user experience? Pick whichever of these the specific business cares about.

Recently, on the systems side, I needed to make the case that a new framework or piece of infrastructure would help us serve more customers. In the end, this still leads to a better user experience.

Some initiatives fall under “improving developer experience” or “reducing technical debt”. Both of these basically lead to faster iteration time, meaning the company and our product can react faster to the market and to user needs.

Even if these initiatives sound abstract to folks outside of the engineering team, especially without further explanation, they still contribute to the user experience, which in turn helps users stay with us, and so on. In the end, I think there is no secret here apart from practicing communication skills and being able to find a middle ground. I personally use what I call “data science storytelling”, which I’ve written about in this article.

Machine learning systems can be several steps removed from users, relative to product and UI. How do you maintain empathy with your end-users?

One awesome thing I experienced at a previous employer, Canada’s largest telecommunications firm, was a three-week training period at a call centre.

We listened in on phone calls, and also had to take support calls ourselves. It was there that I learned so much about the end user, and about the empathy I could bring back to my org, which was quite far removed from that end user.

After that, I had another call centre shadowing stint, this time for only one day, where we shadowed the top performers and saw how they interacted with the internal tools related to the ML project we were building.

If answering support calls or tickets personally isn’t possible in your org, speak to those who are more customer-facing, such as customer success. Or join their Slack channels and see what issues are being reported (my approach at Clearco). Clearco also shares customer stories on the weekly company-wide calls, which helps those in non-customer-facing roles keep those real users in mind.

Imagine you’re given a new, unfamiliar problem to solve with machine learning. How would you approach it?

Due to the nature of my role, I am at the front line of dealing with the unknowns in a DS project. To use the analogy of a coloring book: I am responsible for leading and drawing the black outlines so they resemble a distinct object, letting others fill in the colors and add shading, and helping others improve the outlines based on what they find in the trenches.

Faced with problems that haven’t been solved before by the tech team at Clearco, these are my loose steps:

First of all, I need to see what the ML approach should achieve in the context of the business. This includes how and why it’s integrated into a product, and how it fits into the broader tech team’s production processes.

This step establishes a loose list of approaches that are feasible, and others that will be more difficult to implement due to, for example, how hard they would be to integrate with the web stack. Here, my (random) experience as a video game developer and as a contract full-stack web developer helps a lot. Perhaps counterintuitively, my strength here is providing perspectives that aren’t confined to the cozy corner of data science.

Secondly, I spend time comparing what has been done in industry, for similar use cases. This is where my experience hosting ML livestreams with leading researchers at Aggregate Intellect (14k+ YouTube subscribers) comes in handy - I have a habit of keeping up with engineering blogs and research papers from large tech companies.

Eugene’s curated list of applied ML papers is also a resource I’ve recommended to my team, as well as to my LinkedIn connections.

Finally, I put my hands to the keyboard and prototype, sometimes trying out two or three (or more) frameworks. There is a lot of iteration between researching and prototyping - more often than not, tons of mistakes get made at this step, and that’s the intent.

One cannot be afraid of mistakes - which I think is good advice for beginners and experienced folks alike. No amount of online courses or sanitized datasets can really compare to the leveling up we gain from simply doing things, even if there is no playbook at all. (And it’s my current job to figure out the playbook.)

Think of people who are able to apply ML effectively - what skills or traits do you think contributed to that?

For folks who build production ML that fits into a product, it’s important to have a “minimum viable product (MVP)” mindset - envisioning how things work end to end.

It’s not just me having fun tinkering away on my little keyboard, doing research for ages (if I were in a research branch of an org, like DeepMind, then of course that would be fine) - there is an actual product and an actual customer.

In terms of the dev work, I like to do an end-to-end implementation first, e.g. if the model is meant to be served from an API endpoint in production, wire the endpoint up with a .pkl first, then sub it out as soon as feasible so I can show the stakeholder. This exercise means I unearth a lot of potential problems faster, and don’t go too far down a dead end.
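As a rough illustration (a sketch under assumptions, not Clearco’s actual stack), the “.pkl first” stand-in could look like a tiny Flask app that loads the pickled model at startup and serves it from a placeholder /predict route; the file name model.pkl and the route here are made up for this example.

```python
# A minimal sketch of the ".pkl first" stand-in, assuming Flask for serving and
# a scikit-learn-style model saved as model.pkl (both placeholders, not the
# actual production stack).
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

# Placeholder artifact: anything with a .predict() method works here, and it
# gets swapped for the real serving path once that is ready.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)


@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]  # e.g. [[5.1, 3.5, 1.4, 0.2]]
    prediction = model.predict(features).tolist()
    return jsonify({"prediction": prediction})


if __name__ == "__main__":
    app.run(port=5000)
```

The point of a stub like this is that only the artifact loading changes later - the callers, and the demo you show stakeholders, stay the same.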

I think this is what separates data scientists who gain experience with production work from the rest - I’ve seen many data scientists whose work ends up rotting away in some Jupyter notebook, or as unused modules and scripts in the team repo.

I think it is a meta-skill to be able to

After shipping your ML project, how do you operate and maintain it sustainably?

In a previous role we had a person dedicated to helping with the ongoing reporting of ML model performance by setting up an automated dashboard.

The model performance was then presented to executives on a regular cadence as part of a larger initiative update. So there was transparency on how the models were actually performing as real users interacted with them - as the person who built the model, it was exciting but also nerve-wracking, in the sense that there was nowhere to hide once we were outside of the sanitized training data. For that project in particular, the ML model achieved a ~2x lift over non-ML approaches on the metrics the business had previously decided on, so that was a massive win for me as a DS, as well as for the product.

On the maintenance side, there were business logic changes we had to deal with, for example updating the labels of the recommended items. Thanks to the way I designed and implemented the training pipeline, the new labels in the new dataset could simply be added and the pipeline re-run, which would repopulate the recommendations. The training was automated to re-run nightly.

The manual effort, for the most part, was the data input of new labels in the format the model intakes, plus analyzing whether the new labels would break anything (not likely, but possible). Thankfully, after doing this manual label addition once, the rest of the model pipeline was automated.

We had considered automating this manual portion many times, but due to the format we received it in from the business (Excel, sometimes with column names and order changed randomly), it stayed at a very low priority.
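For illustration only (hypothetical - the column aliases and the item_id/category schema below are invented, not the actual business sheets), automating that step would roughly mean a small pandas pass that normalizes whatever column names and ordering arrive in the Excel sheet onto the schema the training pipeline intakes, and fails loudly on anything unexpected:

```python
# Hypothetical sketch of automating the label ingest; the aliases and the
# item_id/category schema are invented for illustration and will not match the
# real business sheets.
import pandas as pd

# Column-name variants seen in incoming sheets, mapped to canonical names.
COLUMN_ALIASES = {
    "item id": "item_id",
    "product": "item_id",
    "label": "category",
    "category name": "category",
}


def normalize_label_sheet(path: str) -> pd.DataFrame:
    """Read a business-provided Excel sheet and coerce it into the pipeline's intake format."""
    raw = pd.read_excel(path)
    # Lower-case and strip column names so renaming or reordering doesn't break the merge.
    raw.columns = [str(c).strip().lower() for c in raw.columns]
    cleaned = raw.rename(columns=COLUMN_ALIASES)

    missing = {"item_id", "category"} - set(cleaned.columns)
    if missing:
        # Fail loudly on schema drift instead of silently training on bad labels.
        raise ValueError(f"Label sheet is missing expected columns: {missing}")

    return cleaned[["item_id", "category"]]
```

Whether a script like this is worth writing comes back to the same weighing of effort, frequency of use, and benefit described next.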

In cases like these, there is always going to be a backlog of “good to have” features. Can’t have them all! The main decision-making rationale is to weigh the effort, the frequency of use, and the benefit.

What processes, tools, or artifacts have you found helpful in the machine learning lifecycle?

Data scientists need to know git, and know git well - something I’ve tried to bring to my teams. It is easy, especially for new data scientists or those who have never had a reason to really understand the software development process, to neglect this part (I’m super serious about this), but I’d argue it improves their work, and the reproducibility of that work, many times over.

Conclusion

To keep to my typical blog length, I’ve selected the parts of this interview that I’ve covered less in my existing blog posts, leaving out topics such as my data science career origin story. You can find the full interview, which covers a few more questions, over at applyingml.com.

Thank you, Eugene, for this awesome initiative!

