Microsoft Open Source Blog

Posts

Accelerate PyTorch training with torch-ort 

With a simple change to your PyTorch training script, you can now speed up training large language models with torch_ort.ORTModule, running on the target hardware of your choice. Training deep learning models requires ever-increasing compute and memory resources. Today we release torch_ort.ORTModule to accelerate distributed training of PyTorch models, reducing the time and resources needed...
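As a rough illustration of the "simple change" this excerpt refers to, the sketch below wraps an ordinary PyTorch model with torch_ort.ORTModule so forward and backward passes run through ONNX Runtime. The toy model, data, and hyperparameters are placeholders for illustration, not the post's actual setup; it assumes the torch-ort package is installed.

```python
# Minimal sketch: wrapping an existing PyTorch model with torch_ort.ORTModule.
# The model, data, and hyperparameters below are illustrative placeholders.
import torch
from torch_ort import ORTModule

# Any existing torch.nn.Module; a tiny feed-forward net stands in for a large model here.
model = torch.nn.Sequential(
    torch.nn.Linear(784, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 10),
)

# The one-line change: wrap the model so training runs through ONNX Runtime.
model = ORTModule(model)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

# The training loop itself is unchanged.
inputs = torch.randn(64, 784)
labels = torch.randint(0, 10, (64,))
optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()
optimizer.step()
```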

ONNX Runtime release 1.8.1 previews support for accelerated training on AMD GPUs with the AMD ROCm™ Open Software Platform 

This post was co-authored by Jeff Daily, a Principal Member of Technical Staff, Deep Learning Software for AMD. ONNX Runtime is an open-source project that is designed to accelerate machine learning across a wide range of frameworks, operating systems, and hardware platforms. Today, we are excited to announce a preview version of ONNX Runtime in...

Journey to optimize large scale transformer model inference with ONNX Runtime 

“With its resource-efficient and high-performance nature, ONNX Runtime helped us meet the need of deploying a large-scale multi-layer generative transformer model for code, a.k.a., GPT-C, to empower IntelliCode with the whole line of code completion suggestions in Visual Studio and Visual Studio Code.” Large-scale transformer models, such as GPT-2 and GPT-3, are among the most...
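For context on what serving such a model with ONNX Runtime looks like, here is a minimal sketch using the onnxruntime Python API. The model path and input/output names are illustrative assumptions, not the actual GPT-C deployment described in the post.

```python
# Minimal sketch: running an exported transformer model with ONNX Runtime.
# "model.onnx" and the "input_ids" feed name are illustrative placeholders.
import numpy as np
import onnxruntime as ort

# Load the exported ONNX model; ONNX Runtime selects an available execution provider.
session = ort.InferenceSession("model.onnx")

# Token IDs for a short prompt (shape: batch x sequence length).
input_ids = np.array([[50256, 1234, 5678]], dtype=np.int64)

# Run inference; passing None returns all model outputs.
outputs = session.run(None, {"input_ids": input_ids})
print(outputs[0].shape)
```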

Delivering reliable production experiences with PyTorch Enterprise on Microsoft Azure 

At Microsoft, we use PyTorch to power products such as Bing and Azure Cognitive Services, and we actively contribute to several PyTorch open-source projects, including PyTorch Profiler, ONNX Runtime, DeepSpeed, and more. Today, we’re announcing a new initiative in collaboration with Facebook: the PyTorch Enterprise Support Program. This new program enables service providers to develop and...

Introducing ONNX Runtime mobile – a reduced size, high performance package for edge devices 

ONNX Runtime is an open-source project that is designed to accelerate machine learning across a wide range of frameworks, operating systems, and hardware platforms. Today, we are excited to announce ONNX Runtime release v1.5 as part of our AI at Scale initiative. This release includes ONNX Runtime mobile, a new feature targeting smartphones and other...

GPT-2 fine-tuning with ONNX Runtime – a 34% speedup in training time 

Model training is an important step when developing and deploying large-scale Artificial Intelligence (AI) models. Training typically utilizes a large amount of compute resources to tune the model based on the input dataset. Transformer models, with millions to billions of parameters, are especially compute-intensive, and training costs increase with model size and fine-tuning steps...