Authors: The CASL Team

CASL Forte

Welcome back to our blog post series on building your own Question Answering system! In the last post (Building a Question Answering System Part 1: Query Understanding in 18 lines of Code), we showed how to implement Question Understanding, the first step of a Q&A system, in just 18 lines of code using Forte. In this post, you will learn how to build the Document Retrieval step and connect it with the Question Understanding step from Part 1.
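To make the shape of the end result concrete, here is a minimal sketch of what such a two-step pipeline can look like. The Pipeline, StringReader, and PackProcessor usage follows Forte's general API, but the two processor classes below are simplified stand-ins for the components built in Part 1 and in this post, not the actual implementations.

```python
# Minimal sketch: a Forte-style pipeline chaining question understanding with
# document retrieval. The two processors are illustrative stand-ins only.
from forte.data.data_pack import DataPack
from forte.data.readers import StringReader
from forte.pipeline import Pipeline
from forte.processors.base import PackProcessor


class QuestionUnderstandingProcessor(PackProcessor):
    """Stand-in for Part 1: turn the raw question into a search query."""
    def _process(self, input_pack: DataPack):
        input_pack.pack_name = "query:" + input_pack.text  # toy 'query' for illustration


class DocumentRetrievalProcessor(PackProcessor):
    """Stand-in for Part 2: look up candidate documents for the query."""
    def _process(self, input_pack: DataPack):
        print("would retrieve documents for", input_pack.pack_name)


pipeline = Pipeline[DataPack]()
pipeline.set_reader(StringReader())              # feed the raw question text in
pipeline.add(QuestionUnderstandingProcessor())   # Part 1: question understanding
pipeline.add(DocumentRetrievalProcessor())       # Part 2: document retrieval
pipeline.initialize()

pack = pipeline.process("What do we know about COVID-19 risk factors?")
```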


Authors: Petuum CASL Team

We are excited to launch our newest open source offering, Tuun!

What is Tuun?

Tuun lets you build AutoML and meta-learning techniques into your Machine Learning pipelines to boost task performance and accuracy. Our initial release supports black-box tuning, hyperparameter optimization, data augmentation, and zeroth-order optimization, with Neural Architecture Search coming in our next major update. Tuun scales to computationally intensive ML pipelines through integration with CASL's AdaptDL for cluster scheduling, and we've made Tuun easy to use by integrating with Microsoft NNI as a front-end.

Tuun’s Bayesian optimization-based search algorithms perform especially well with complex and high dimensional…
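As a concrete illustration of the workflow this enables, here is a minimal sketch of a trial script written against the Microsoft NNI front-end mentioned above. The objective function and hyper-parameter names are made-up assumptions for the sketch; which tuner (for example Tuun) drives the search is decided in the NNI experiment configuration, not in this script.

```python
# Minimal sketch of an NNI trial script. The tuner behind it is selected in the
# experiment configuration; the search-space keys and objective are illustrative.
import nni

def train_and_evaluate(params):
    # Stand-in for training a model with the suggested hyper-parameters
    # and returning a validation metric.
    lr = params.get("learning_rate", 0.01)
    hidden_size = params.get("hidden_size", 128)
    return 1.0 / (1.0 + abs(lr - 0.01) * hidden_size)  # dummy score

if __name__ == "__main__":
    params = nni.get_next_parameter()   # ask the tuner for the next trial's hyper-parameters
    score = train_and_evaluate(params)
    nni.report_final_result(score)      # report the trial's outcome back to the tuner
```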


Authors: Aurick Qiao, Henry Guo, Qirong Ho — Petuum CASL Team

Reduce cost by 3x in the cloud and improve GPU usage in shared clusters for deep learning training (GitHub).

Deep learning models can be expensive and time-consuming to train. A language model such as BERT needs over 2,000 GPU-hours to train, while computer vision models such as ResNet and VGG require hundreds of GPU-hours. With today's cloud compute costs, the training bill can run to four or five figures in US dollars (at roughly US$3 per GPU-hour on-demand, 2,000 GPU-hours alone is about $6,000)!

Given the time and dollar costs, organizations pool computing resources into shared clusters, where multiple users can each submit multiple training jobs. …


Authors: The CASL Team

Welcome to the first part of our blog series, where we’ll show how you can build end-to-end AI applications quickly and with less fuss using Forte.

Part 2 in the series — Building a Question Answering System Part 2: Document Retrieval — is now available.

In this post, we will start to build a Q&A system that produces interesting answers by utilizing the CORD-19 dataset, which contains over 190,000 scientific and medical papers from the National Institutes of Health (NIH). To give you a sense of what the completed Q&A system can do, here are some sample questions and responses:


Authors: Petuum CASL Team

Machine Learning (ML) and Deep Learning models improve in accuracy and generalizability when trained with more data. However, finding sufficient data for your ML task can be challenging — data may be restricted because of security and privacy concerns, or it may be expensive and time-consuming to acquire and label at scale.

Data augmentation addresses these challenges by making better use of the data you already have: it synthesizes new training examples from existing ones. At Petuum, we use data augmentation to improve our AI applications, such as multi-lingual chatbots, industrial process optimization systems, and visual defect detection tools. Data augmentation helps…
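As one small, generic example of what "synthesizing new training examples from existing ones" can mean for text data, the sketch below creates a new sentence by randomly swapping two words in an existing one. This is a stand-alone illustration of the idea, not the augmentation tooling used in the applications above.

```python
# Generic illustration of text augmentation by random word swap:
# a new training example is synthesized from an existing sentence.
import random

def random_swap(sentence: str, n_swaps: int = 1, seed: int = 0) -> str:
    rng = random.Random(seed)
    words = sentence.split()
    for _ in range(n_swaps):
        if len(words) < 2:
            break
        i, j = rng.sample(range(len(words)), 2)   # pick two positions to exchange
        words[i], words[j] = words[j], words[i]
    return " ".join(words)

original = "the valve pressure exceeded the safety threshold during startup"
print(random_swap(original, n_swaps=1, seed=42))
```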


NVIDIA GTC and PyTorch Ecosystem Day are just around the corner, and we'll be presenting our latest open-source developments and real-world use cases! We have 6 events lined up across 3 venues, covering our latest tools for building Natural Language Processing pipelines, ways to run Deep Learning training faster and cheaper, and how we're applying Time-Series modeling to advanced manufacturing and Automatic Process Control. Join us and strike up some conversations with our speakers!

Recap

CASL Forte presentation at Nvidia GTC 2020

Modularizing Natural Language Processing

Presented by Zhengzhong (Hector) Liu and Zecong Hu

Recent success and growth in natural language processing and artificial intelligence have given the world many new applications, techniques, models…


Natural Language Processing (NLP) is the science and engineering behind AI applications that interpret and respond to human language. Such applications can help with day-to-day problems: supporting medical practitioners by highlighting key information in clinical notes, providing interactive medical advice about COVID and future pandemics through web and mobile applications, pre-filling clinical reports to improve operational consistency in healthcare processes, and building searchable "webs" of information from companies' annual financial reports with knowledge-graph bots.

To apply NLP technologies to real-world applications, like a clinical report management system, one often needs to “stitch together” NLP tools, such as a…


Authors: Petuum CASL Team | Acknowledgment: Microsoft NNI Team

Hyper-parameter tuning (HPT) is an essential step in deep learning workflows, allowing teams to push a model’s quality higher by systematically evaluating many different sets of hyper-parameters (each called a “trial”) and picking the best outcome. HPT is appealing because it is easy to automate and requires little engineering or coding. At Petuum, we use HPT to tune our models for Healthcare Report Writing, Industrial Process Optimization and Image Annotation, running dozens of trials per deployed model.

However, HPT requires large amounts of computing — proportional to the number of trials you run — and quickly becomes expensive in time…
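To make the "trial" terminology concrete, here is a toy random-search loop in the spirit of what an HPT system automates: sample a hyper-parameter set, evaluate it, and keep the best outcome. The search space and objective are made up for this sketch.

```python
# Toy illustration of hyper-parameter tuning: each loop iteration is one
# "trial". The objective is a stand-in for training and validating a model.
import random

def objective(learning_rate, batch_size):
    # Pretend validation accuracy that peaks near lr=0.01, batch_size=64.
    return 1.0 - abs(learning_rate - 0.01) - abs(batch_size - 64) / 1000.0

best_score, best_params = float("-inf"), None
for trial in range(20):
    params = {
        "learning_rate": 10 ** random.uniform(-4, -1),
        "batch_size": random.choice([16, 32, 64, 128]),
    }
    score = objective(**params)
    if score > best_score:
        best_score, best_params = score, params

print(f"best score {best_score:.4f} with {best_params}")
```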


Author: Aurick Qiao

What is AdaptDL?

Petuum is very excited to announce the launch of our newest open source offering, AdaptDL, a resource-adaptive deep learning (DL) training and scheduling framework. The goal of AdaptDL is to make distributed DL easy and efficient in dynamic-resource environments such as shared clusters and the cloud. In benchmark studies on Amazon Web Services (AWS), we recorded cost reductions of up to 80% when AdaptDL was configured to automatically provision spot instances whenever they were available.

AdaptDL can automatically determine the optimal amount of resources for a job's needs. It will efficiently…
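For PyTorch users, the adaptive-training pattern looks roughly like the sketch below: wrap the model, optimizer, and data loader in AdaptDL's adaptive classes so the job can be rescaled by the scheduler. Class and function names follow the AdaptDL documentation as we recall it; treat the exact signatures as assumptions and consult the docs before use.

```python
# Sketch of AdaptDL's adaptive-training pattern for a PyTorch job.
# Names follow the AdaptDL docs as we recall them; details are illustrative.
import torch
import adaptdl.torch as adl

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
dataset = torch.utils.data.TensorDataset(
    torch.randn(1024, 10), torch.randint(0, 2, (1024,))
)

adl.init_process_group("nccl" if torch.cuda.is_available() else "gloo")
model = adl.AdaptiveDataParallel(model, optimizer)      # lets AdaptDL rescale the job
dataloader = adl.AdaptiveDataLoader(dataset, batch_size=32)

for epoch in adl.remaining_epochs_until(10):            # resumes correctly after rescaling
    for inputs, labels in dataloader:
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(inputs), labels)
        loss.backward()
        optimizer.step()
```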



Introducing AutoDist

Petuum is excited to announce our latest open source project, AutoDist, a distributed deep learning training engine. It provides an easy-to-use interface for automatically distributing the training of a wide variety of deep learning models across many CPUs and GPUs at scale, with minimal code changes.

Why use AutoDist?

Are you searching for an intuitive library for distributed training on TensorFlow? As an alternative to Horovod with better performance, AutoDist lets you scale a model from a single GPU to many without changing your model-building scripts. …
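For a sense of how little the training script changes, here is a sketch of the AutoDist usage pattern: build the TensorFlow graph as usual inside an AutoDist scope backed by a resource specification, then create a distributed session. The API names follow the AutoDist documentation as we recall it, and the model and file path are illustrative; treat the details as assumptions and check the project's examples.

```python
# Sketch of the AutoDist usage pattern on TensorFlow. AutoDist distributes the
# training according to a resource specification file; the toy model here is
# only for illustration.
import tensorflow as tf
from autodist import AutoDist

autodist = AutoDist(resource_spec_file="resource_spec.yml")  # lists the CPUs/GPUs to use

with tf.Graph().as_default(), autodist.scope():
    x = tf.random.normal((64, 10))
    y = tf.random.normal((64, 1))
    w = tf.Variable(tf.zeros((10, 1)))
    loss = tf.reduce_mean(tf.square(tf.matmul(x, w) - y))
    train_op = tf.compat.v1.train.GradientDescentOptimizer(0.1).minimize(loss)

    sess = autodist.create_distributed_session()
    for _ in range(100):
        sess.run(train_op)
```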

