ODSC Europe 2018
Apache Hivemall: Query-Based Handy, Scalable Machine Learning on Hive @ ODSC Europe 2018
Abstract
This talk introduces Apache Hivemall, a scalable machine learning library for Apache Hive, Spark and Pig, in the context of real-world large-scale data science.
Most importantly, Hivemall significantly simplifies machine learning workflows such as feature engineering, algorithm implementation, and evaluation, because Hive lets us access distributed storage through handy SQL-like queries (HiveQL). Today, data scientists and machine learning engineers commonly suffer from numerous tiny code fragments and poorly scalable pipelines due to the difficulty of implementation. By contrast, once Hivemall is installed, we can run a wealth of machine learning algorithms in a scalable manner by writing just a few dozen lines of queries.
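To give a flavor of what query-based machine learning looks like, here is a minimal sketch of training a logistic-regression-style classifier with Hivemall. The `train` table layout (a `features` array column and a numeric `label` column) and the `train_logregr` UDTF name are assumptions for illustration; exact function names vary across Hivemall releases.

```sql
-- Train a logistic regression model: the UDTF emits per-feature weights from each
-- mapper, and averaging them across mappers yields the final model table.
CREATE TABLE lr_model AS
SELECT
  feature,
  avg(weight) AS weight
FROM (
  SELECT train_logregr(features, label) AS (feature, weight)
  FROM train
) t
GROUP BY feature;
```

The learned model is just another Hive table of (feature, weight) pairs, so prediction and evaluation can likewise stay in plain queries.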
To this end, the speaker talks about:
- Which parts of modern, real-world machine learning and data science are painful
- When Hivemall is notably preferable to other implementations of machine learning algorithms, and why
- Who can benefit from the scalability and simplicity of Hivemall
- What kinds of machine learning techniques are implemented in Hivemall, including classification, regression, anomaly detection, natural language processing, and recommendation
- How to install and use Hivemall, and how Hivemall implements a wide variety of machine learning algorithms in a scalable manner (see the sketch after this list)
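Continuing the training sketch above, prediction and evaluation also stay inside Hive. The following assumes a `test` table with a `rowid` column, plus Hivemall's `extract_feature`, `extract_weight`, `sigmoid`, and `logloss` functions; availability and signatures depend on the Hivemall version.

```sql
-- Predict: decompose each test row into (feature, value) pairs, join them with the
-- learned per-feature weights, and squash the weighted sum with a sigmoid.
CREATE TABLE prediction AS
SELECT
  t.rowid,
  sigmoid(sum(m.weight * t.value)) AS prob
FROM (
  SELECT
    rowid,
    extract_feature(fv) AS feature,
    extract_weight(fv) AS value
  FROM test LATERAL VIEW explode(features) ex AS fv
) t
LEFT OUTER JOIN lr_model m ON (t.feature = m.feature)
GROUP BY t.rowid;

-- Evaluate: aggregate a single scalar metric over all predictions.
SELECT logloss(p.prob, t.label) AS logloss
FROM prediction p
JOIN test t ON (p.rowid = t.rowid);
```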
Note that, since Hivemall officially provides a Dockerfile, attendees can immediately try its functionality on their laptops.
Additionally, this talk provides some tips for utilizing Hivemall more effectively by showing an example with a workflow engine. For instance, Digdag, a distributed workflow engine, provides a simple way to run, organize, and/or schedule complex, highly dependent tasks either sequentially or in parallel; that is, the workflow engine makes real-world machine learning pipelines nicely manageable. Since the workflow definition itself is written in the easy-to-use YAML format, engineers can handle pipelines in much the same way they handle their own source code in terms of deployment, version control, and modularity.
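As a concrete illustration, a Digdag workflow chaining the queries above might look like the sketch below. The file names, schedule, and the use of the generic `sh>` operator to invoke the Hive CLI are assumptions for illustration rather than a prescribed setup.

```yaml
# ml_pipeline.dig -- illustrative sketch; paths, schedule, and query names are hypothetical
timezone: UTC

schedule:
  daily>: 03:00:00

+prepare:
  # Independent feature-engineering queries can run in parallel.
  _parallel: true
  +train_features:
    sh>: hive -f queries/train_features.sql
  +test_features:
    sh>: hive -f queries/test_features.sql

+train:
  sh>: hive -f queries/train_logregr.sql

+evaluate:
  sh>: hive -f queries/evaluate.sql
```

Because such a definition is a plain text file, it can be committed, reviewed, and versioned right next to the queries it orchestrates.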
Ultimately, the speaker expects the audience to come away with ideas for making machine learning more handy and manageable, and to be able to discuss what real-world machine learning should look like in the next couple of years.
Slides
Video
Author: Takuya Kitazawa
Takuya Kitazawa is a freelance software developer, minimalistic traveler, ultralight hiker & runner, and craft beer enthusiast. With a decade of experience at start-up companies and Big Tech, ranging from full-stack/machine-learning engineering to data science to product management, he currently works at the intersection of the technological and social aspects of data-driven applications.
Disclaimer
- Opinions are my own and do not represent the views of organizations I belong to or have belonged to.
- I am doing my best to ensure the accuracy and fair use of the information. However, there might be some errors, outdated information, or biased subjective statements because the main purpose of this blog is to jot down my personal thoughts as soon as possible before conducting an extensive investigation. Visitors should understand these limitations and rely on any information at their own risk.
- That said, if there is any issue with the content, please contact me so I can take the necessary action.