Improve BERT inference speed by combining the power of Optimum, OpenVINO™, ONNX Runtime, and Azure 

5 min read

Make large models smaller and faster with the OpenVINO Execution Provider, NNCF, and ONNX Runtime, leveraging Azure Machine Learning. Read more

Faster inference for PyTorch models with OpenVINO Integration with Torch-ORT 

4 min read

Many developers opt for popular AI frameworks like PyTorch, which simplify the process of analyzing predictions, training models, leveraging data, and refining future results. Read more

Join Microsoft at KubeCon + CloudNativeCon Europe 2022 

3 min read

Welcome to KubeCon Europe 2022. While I am unfortunately stuck in rainy Seattle (the coldest start to May in 20 years), I’m excited that many of our cloud-native Azure folks will be able to experience sunny Valencia. It’s especially exciting for KubeCon to be the first chance for parts of the Azure Kubernetes Service (AKS) team… Read more