How I got into software development is quite a personal story. My first ever blog post tells it in more detail for anyone interested, but essentially, I was in my late teens with no friends, bullied, struggling with my mental health and a bit lost. I had always liked computers and wanted to return to education, so I took an IT course at a local college. Programming was one of the modules I took when I started, and I just fell in love with it.
A classmate was already an experienced programmer and offered to help me learn C#. He would teach me outside of college, so as well as learning to code, I made a friend out of it, then joined his social circle and never looked back! It honestly saved my life!
That led me to take Software Engineering at university. I have been lucky enough to work in the industry ever since graduating in 2014. It’s also one of the reasons I blog and speak publicly: I am eager to share knowledge in the hope it helps someone the way technology and code have helped me.
Body Language Assistant
Body Language Assistant is a mobile app which uses Xamarin and Azure Cognitive Services to help me detect sentiment in text and emotion in faces in photographs.
In April 2018, I had the pleasure of meeting Jim Bennett, Senior Cloud Advocate at Microsoft. We had been in contact a lot as I reviewed his book Xamarin in Action on my blog, but we were both speaking at the same event for the first time. His talk on Cognitive Services in Azure had me hooked. It was so powerful, so easy to get started with AI and the cloud, and the documentation was fantastic!
Seeing how their vision SDK could be used to detect emotion in faces got me buzzing with ideas. As someone neurodiverse who struggles to read people, I thought of an app that I could use to help me understand sentiment better.
Then in May 2018, I had the pleasure of attending Build in Seattle. I attended a talk and demo by Brandon Minnick, a colleague of Jim’s, on the sentiment analysis side of Cognitive Services. It made me realise I am sometimes poor with written text too!
A few months later, I decided to join the world of public speaking and submitted to speak at my first conference. I wanted to do an intro to Xamarin, but also a longer version showing how easily you can combine Xamarin and Cognitive Services to do something beyond just “Hello, World!”. This is what spurred me on to finally write my app, which I named Body Language Assistant. It has two uses: it’s a great app for helping me (and surprising myself), and a fantastic demo for talks. Winner!!
How does it work?
Body Language Assistant is a Xamarin.Forms app at heart, with shared UI and code between Android and iOS. When you first open the app, you are met with a menu page with two buttons, so you can pick which function you wish to use.
On the sentiment analysis page, there is a text box and a button. You enter the text and click the button, and the app sends that text to Cognitive Services over REST. Cognitive Services sends back a JSON response, which the app then processes to show the most likely sentiment on the page.
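To make that concrete, here is a minimal sketch of what such a REST call can look like with HttpClient. This is an illustration, not the app’s actual code: the endpoint, region and key are placeholders, and a real app should JSON-encode the user’s text rather than concatenating it.

```csharp
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

static class SentimentClient
{
    // Hypothetical values -- use your own Cognitive Services region and key.
    const string Endpoint =
        "https://westeurope.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment";
    const string ApiKey = "<your-subscription-key>";

    public static async Task<string> AnalyseAsync(string text)
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", ApiKey);

            // Text Analytics expects a batch of documents, even for a single string.
            // (Sketch only: real code should escape/serialise the text properly.)
            var body = "{\"documents\":[{\"id\":\"1\",\"language\":\"en\",\"text\":\""
                       + text + "\"}]}";
            var content = new StringContent(body, Encoding.UTF8, "application/json");

            var response = await client.PostAsync(Endpoint, content);
            // The JSON response includes a sentiment score from
            // 0 (negative) to 1 (positive), which the app can parse and display.
            return await response.Content.ReadAsStringAsync();
        }
    }
}
```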
The photo analysis page is more complex. It uses a third-party plugin to take a photo, then sends that photo to Cognitive Services using the Vision SDK, which identifies faces in the photo and returns a list of attributes about each face it sees, including the emotion. The app then processes that to find the most likely emotion, which it displays on a results page alongside the original photo.
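The “most likely emotion” step boils down to picking the highest-confidence score from the attributes returned for a face. A small sketch, assuming the emotion scores have already been read out of the response into a dictionary of name to confidence (the names shown are illustrative):

```csharp
using System.Collections.Generic;
using System.Linq;

static class EmotionPicker
{
    // Returns the emotion with the highest confidence score (0..1).
    public static string MostLikely(IDictionary<string, double> scores)
    {
        return scores.OrderByDescending(pair => pair.Value).First().Key;
    }
}

// Example usage:
// var scores = new Dictionary<string, double>
// {
//     { "happiness", 0.92 }, { "neutral", 0.05 }, { "sadness", 0.03 }
// };
// EmotionPicker.MostLikely(scores);  // "happiness"
```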
As I mentioned earlier, I had seen a talk from Jim Bennett and was really excited by how easy it was to get started and all the possibilities of what you could do with it. As a lover of Xamarin, I already had a soft spot for Azure and Microsoft, so the easy documentation, free cost of entry and Jim’s talk convinced me that Azure would be perfect and most likely the easiest to integrate with Xamarin.
No matter how bad my code might be (that one is no doubt subjective), I always set my repos as public on GitHub in case they can help anyone. I never set out to make Body Language Assistant open source; it was just a natural consequence of that. However, I also made sure it was public on GitHub because I mention it in my talks, and I want attendees to have the chance to dissect it and understand how it works at their own pace.
For me, the hardest part was probably learning how to use multiple SDKs: HttpClient (yup, in all my years as a dev, I have written so little network code that I didn’t know how to do it — we all have gaps in our knowledge!), the media plugin for taking photos, and the Azure Cognitive Services SDK. Once I understood those, I then had to work out how to piece them all together and how to lay out the app in a way that made sense from a design perspective.
Overall, it took me a few weeks, but that was with only a few hours put into it every few days. The app itself is actually pretty simple, so someone with more experience may well have done it a lot quicker.
I have big dreams for the app longer term. For starters, I want to make the app look much nicer. It has very basic styling at the moment, and it is quite possible that a desktop-style menu page when you open the app is a terrible design for a mobile app. I also want to add features such as speech-to-text analysis, analysing photos already on the device, and other things that might help those who are neurodiverse.
This leads on to the ultimate goal, which is to release it on app stores so it can help others besides just me. Although AI has limitations such as understanding sarcasm and the range of emotions it can detect, this is improving all the time and I still think the app may be able to help many people.
Embracing open source
For me, open source software is very important. It gives us all access to powerful projects and libraries at no cost, provides code samples we can learn from as we try to achieve something in our own code, and lets a product improve and expand faster thanks to community involvement (Xamarin.Forms itself, for example, is fully open source). Perhaps most importantly, it often gives newer developers a welcoming place to get involved in a bigger project, with many repositories using labels to identify work that might be great for a newer developer.
If you’re thinking about releasing your work as open source, try to make it as welcoming as possible for everyone to get involved, whatever their experience. Create good documentation on how to get started, the architecture, and any known issues or caveats.
Also try to take time beforehand to make sure the code is readable and maintainable, so it’s not daunting to a potential maintainer. Oh, and most of all? Don’t leak your API keys! 😉
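One simple pattern for that (an assumption on my part, not necessarily what any particular repo does) is to keep keys in a file that is listed in .gitignore, and commit a template file such as ApiKeys.cs.example so contributors know what to fill in:

```csharp
// ApiKeys.cs -- listed in .gitignore so it never reaches the public repo.
static class ApiKeys
{
    // Hypothetical name; fill in with your own Cognitive Services key locally.
    public const string CognitiveServicesKey = "<your-key-here>";
}
```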
More from the author
My name is Luce Carter. I am a Software Developer, currently working in QA for a data science company called dunnhumby. I am @LuceCarter1 on Twitter, LuceCarter on GitHub, CodingWithLuce on Twitch, and I write on my blog, https://lucecarter.co.uk. I’ve not been as active with blogging lately, but there are still some interesting posts on there for anyone interested in my story.