Microsoft Open Source Blog


Accelerate and simplify Scikit-learn model inference with ONNX Runtime 

Scikit-learn is one of the most useful libraries for general machine learning in Python. To minimize deployment costs and avoid discrepancies, deploying scikit-learn models to production usually leverages Docker containers and pickle, the object serialization module of the Python standard library. Docker is a good way to create consistent environments, and pickle saves...

ONNX Runtime scenario highlight: Vespa.ai integration 

Since its open source debut two years ago, ONNX Runtime has seen strong growth with performance improvements, expanded platform and device compatibility, hardware accelerator support, an extension to training acceleration, and more. We are excited by its broad usage in production, powering more than a hundred models across Microsoft products and services and bringing concrete...

Adding RoBERTa NLP to the ONNX model zoo for natural language predictions 

In summer 2019, I worked as a high school intern for the ONNX AI team at Microsoft and loved working on various projects with the team, including the BERT text classification model. However, due to COVID-19, the Microsoft Internship Program for high school students was canceled in the summer of 2020. This led two other...

Introducing ONNX Runtime mobile – a reduced size, high performance package for edge devices 

ONNX Runtime is an open source project that is designed to accelerate machine learning across a wide range of frameworks, operating systems, and hardware platforms. Today, we are excited to announce ONNX Runtime release v1.5 as part of our AI at Scale initiative. This release includes ONNX Runtime mobile, a new feature targeting smartphones and other...

GPT-2 fine-tuning with ONNX Runtime – a 34% speedup in training time 

Model training is an important step when developing and deploying large scale Artificial Intelligence (AI) models. Training typically utilizes a large amount of compute resources to tune the model based on the input dataset. Transformer models, with millions or billions of parameters, are especially compute-intensive, and training costs increase with model size and fine-tuning steps...

Announcing accelerated training with ONNX Runtime—train models up to 45% faster 

ONNX Runtime is an open source project that is designed to accelerate machine learning across a wide range of frameworks, operating systems, and hardware platforms. It is used extensively in Microsoft products, like Office 365 and Bing, delivering over 20 billion inferences every day and up to 17 times faster inferencing. Today we are introducing...

Microsoft open sources breakthrough optimizations for transformer inference on GPU and CPU 

This post is co-authored by Emma Ning, Azure Machine Learning; Nathan Yan, Azure Machine Learning; Jeffrey Zhu, Bing; and Jason Li, Bing. One of the most popular deep learning models used for natural language processing is BERT (Bidirectional Encoder Representations from Transformers). Due to the significant computation required, inferencing BERT at high scale can be extremely...

ONNX joins Linux Foundation 

Today the Open Neural Network eXchange (ONNX) is joining the LF AI Foundation, an umbrella foundation of the Linux Foundation supporting open source innovation in artificial intelligence, machine learning, and deep learning. ONNX was co-founded by Microsoft in 2017 to make it easier to create and deploy machine learning applications. In the past few years,...

Announcing ONNX Runtime 1.0 

One year after ONNX Runtime’s initial preview release, we’re excited to announce v1.0 of the high-performance machine learning model inferencing engine. This release marks our commitment to API stability for the cross-platform, multi-language APIs, and introduces a breadth of performance optimizations, broad operator coverage, and pluggable accelerators to take advantage of new and exciting hardware...

Now available: ONNX Runtime 0.5 with support for edge hardware acceleration 

ONNX Runtime 0.5, the latest update to the open source high-performance inference engine for ONNX models, is now available. This release improves the customer experience and supports inferencing optimizations across hardware platforms. Since the last release in May, Microsoft teams have deployed an additional 45+ models that leverage ONNX Runtime for inferencing. These models...

ONNX Runtime: a one-stop shop for machine learning inferencing 

Organizations that want to leverage AI at scale must overcome a number of challenges around model training and model inferencing. Today, there are a plethora of tools and frameworks that accelerate model training, but inferencing remains a tough nut to crack due to the variety of environments that models need to run in. For example, the same...

Open Source Weekly #6 

This week’s Microsoft Connect(); event was a demo-packed few days, highlighting Microsoft’s continuing commitment to delivering open technologies and contributing to and partnering with open source communities. From joining the MariaDB Foundation to launching a new Apache Spark-based analytics platform and previewing Visual Studio Code Live Share, there’s a ton of open source goodness...