On-Device Training: Training a model in browser
Continuing the ONNX Runtime On-Device Training blog series, we are introducing ONNX Runtime Training for Web, a new feature in ONNX Runtime (ORT) that enables training models in the browser. Read more
In this blog post, we describe our implementation of the DragGAN algorithm, based on StyleGAN, using ONNX Runtime. We give a technical overview of the architectures, describe the motivation, and discuss the challenges and how we resolved them. We have also released the Python code that walks through the resulting implementation. Read more
LF AI & Data Foundation announced Recommenders as its latest Sandbox project. Read more
ONNX models can be accelerated with ONNX Runtime (ORT), which works cross-platform and provides coverage for many cloud models and language models. Updating the HF Model Hub with more accurate information about ONNX coverage will ensure that users can leverage all the benefits of ORT when deploying HF models. Read more
By thinking outside the box, we can envision creating a virtual multiverse. Within this innovative space, one can propose, evaluate, and decide on multiple hypotheses. Real-world examples of this approach include planning new product configurations, operating a plant, designing heating or cooling systems, or responding to catastrophes. Read more
Building upon the foundation we established earlier, this blog will present comprehensive information about the underlying details of training models directly on user devices using ORT. Equipped with these technical details, we encourage you to try out On-Device Training with ONNX Runtime for your custom scenario. Read more
ONNX Runtime is a high-performance, cross-platform inference and training engine that can run a variety of machine learning models. ORT provides an easy-to-use experience for AI developers to run models on multiple hardware and software platforms. Read more
As we come together in Amsterdam, there are significant headwinds and challenges facing us, but I’m confident that open-source and cloud-native computing are critical parts of the solutions. Read more
Today, we are excited to announce the much-anticipated availability of the open-source Feathr 1.0 release. Read more
The team at Pieces shares the problems and solutions evaluated for their on-device model serving stack and how ONNX Runtime enables their success. Read more
Make large models smaller and faster with the OpenVINO Execution Provider, NNCF, and ONNX Runtime, leveraging Azure Machine Learning. Read more
Together with our colleagues at LinkedIn, we are happy to announce that Feathr is joining the LF AI & Data Foundation, an umbrella foundation of the Linux Foundation supporting open-source innovation in AI and data. Read more