Last week Steve Guggenheimer posted something really interesting on his blog about the work Microsoft has been doing to make AI features like vision, speech, and natural language commands accessible to developers.
In his post (which includes a pretty funny video of him demoing voice commands into a Compaq iPAQ in 2001), Guggs explains how anyone can take advantage of the two decades' worth of research Microsoft has done in AI: by using Microsoft's AI APIs via Microsoft Cognitive Services.
Why is this important to you? An article from Monday on CIO.com summarizes it perfectly: “From a market perspective, A.I. is positioned to become a pillar of the next generation of software technologies.”
Here’s how you can use these APIs:
To help developers build apps and services that detect, interpret, and gain insights in human-like ways without requiring a dedicated team of AI experts, we have now put the pieces together in Microsoft Cognitive Services (it is part of the Cortana Intelligence Suite and is in preview).
With just a few lines of code, developers can benefit from over 20 years of intelligence research that has been used in Microsoft products. Since it is a service, you don't have to deploy it, update it, or train the models (although you have that option for some services).
With Cognitive Services you have access to vision, speech, language, knowledge and search APIs.
The post goes on to walk through an example of how to do this with a proof of concept for a vision API. If you're interested in experimenting with some of these capabilities for your organization, I recommend taking a look. You can also check out all the APIs on GitHub.
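To give a feel for what "just a few lines of code" looks like, here is a minimal sketch of calling the Computer Vision analyze endpoint. The region in the URL and the subscription key are placeholders, and the simplified response payload at the bottom is illustrative; check the official API reference for the exact request and response shapes.

```python
import json

# Placeholder endpoint; the region prefix depends on where you
# provisioned the service.
ANALYZE_URL = "https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze"

def build_request(subscription_key, image_url, features=("Description", "Tags")):
    """Assemble the headers, query params, and JSON body for an analyze call."""
    headers = {
        "Ocp-Apim-Subscription-Key": subscription_key,  # your key, placeholder here
        "Content-Type": "application/json",
    }
    params = {"visualFeatures": ",".join(features)}
    body = json.dumps({"url": image_url})
    return headers, params, body

def top_caption(response_json):
    """Pull the highest-confidence caption out of an analyze response."""
    captions = response_json.get("description", {}).get("captions", [])
    if not captions:
        return None
    best = max(captions, key=lambda c: c["confidence"])
    return best["text"]

# Parsing a simplified example payload (not a real service response):
sample = {"description": {"captions": [{"text": "a dog on grass", "confidence": 0.92}]}}
print(top_caption(sample))
```

You would send the assembled request with any HTTP client (e.g. `requests.post(ANALYZE_URL, headers=headers, params=params, data=body)`) and feed the JSON response to `top_caption`.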
Read more here.