

Attending MLconf SF 2018 #mlconf18

I attended MLconf SF 2018 in San Francisco. Since the speakers came from highly regarded companies, I can confidently say that MLconf is a great place to catch up on industry trends and real-world "successful" use cases.

Surprisingly (and unsurprisingly), all of the following topics were covered in this one-day conference:

  • Interpretability
    • Saliency map for images vs. TCAV
  • Large-scale satellite image data collection
  • Scalable ML with Amazon SageMaker
    • Providing a cheaper, scalable solution in the cloud
    • Training local states (i.e., partial models) on multiple GPU-enabled instances in parallel in a streaming, incremental fashion, and finally merging them into a single shared state
    • Reinventing k-means to make it scalable
  • Uber's NLP efforts on building AI for riders and drivers
    • TF-IDF + LSA vs. CNN
  • ML and deep learning applications
  • Practical lessons on differential privacy
  • and more!
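The SageMaker talk's idea of training local states on data shards and merging them into a single shared state can be illustrated with a k-means-style update. The following is a minimal sketch under my own assumptions (the function names and the sum/count merging scheme are mine, not SageMaker's actual implementation): each worker accumulates per-centroid sums and counts over its shard, and the partial states are then merged by weighted averaging.

```python
import numpy as np

def local_state(shard, centroids):
    """Accumulate a partial model over one data shard:
    per-centroid coordinate sums and assignment counts."""
    k, d = centroids.shape
    sums = np.zeros((k, d))
    counts = np.zeros(k)
    for x in shard:
        # Assign the point to its nearest centroid (streaming pass).
        j = np.argmin(np.linalg.norm(centroids - x, axis=1))
        sums[j] += x
        counts[j] += 1
    return sums, counts

def merge_states(states, centroids):
    """Merge the workers' partial states into a single shared model:
    new centroid = total sum / total count, per cluster."""
    total_sums = sum(s for s, _ in states)
    total_counts = sum(c for _, c in states)
    merged = centroids.copy()
    nonzero = total_counts > 0
    merged[nonzero] = total_sums[nonzero] / total_counts[nonzero, None]
    return merged
```

Because each partial state is just sums and counts, the merge is associative and order-independent, which is what makes this pattern friendly to parallel, incremental training across many instances.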

My favorite session was Edo Liberty's talk about Amazon SageMaker.

I have a special feeling for Edo because my master's research was strongly motivated by his paper; the paper eventually guided me to the world of scalable ML.

This session confirmed that I made the right decision in attending this year's MLconf. The inside of SageMaker is still a black box to me, but I can easily imagine that this out-of-the-box service builds on many advanced studies, as Edo mentioned when discussing approximation techniques for streaming data.

At the end of the event, my tweet luckily won me a free book. Thanks to the organizers, I got "The Deep Learning Revolution"!

Overall, I really enjoyed this single-track, single-day conference, thanks to the high-quality talks, the well-organized program, and the many networking opportunities. I personally believe ML, DL, and data science conferences should be more compact in duration, number of sessions, and number of attendees, just like MLconf. The recent explosive growth of these fields easily clutters conference programs, and as an attendee, too much input can be harmful to learning something truly valuable and important.

  Author: Takuya Kitazawa

Takuya Kitazawa is working on machine learning, data science, and product development at Treasure Data.

Opinions are my own.
