April 17, 2017

Episerver Automatic Image Tagging with Microsoft Cognitive Services

I must admit I'm enjoying messing around with Microsoft's Cognitive Services, as is probably evidenced by the SMS app I made for my Dad. I figured I could turn this into a practical application for work as well.

I'm willing to bet that most organizations are still a bit behind on the content strategy curve at this point and aren't adequately tagging their content and images. So I decided to build out a demo-ware / proof of concept for auto-tagging images, which will soon grow into processing content text, possibly as an add-on for the community. We'll see.
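To make the proof of concept concrete, here's a minimal sketch of the request shape for the Computer Vision "tag" operation (the v1.0 REST API, current as of this writing). The region, path, and header names follow Microsoft's documentation; the key and region values are placeholders, and the helper name is my own.

```python
# Sketch: build the pieces of a POST to the Computer Vision v1.0 "tag"
# endpoint. Nothing here actually sends a request; it just shows the
# URL, auth header, and body format the service expects for raw image bytes.

def build_tag_request(image_bytes, key, region="westus"):
    """Return (url, headers, body) for the tag operation."""
    url = "https://{}.api.cognitive.microsoft.com/vision/v1.0/tag".format(region)
    headers = {
        "Ocp-Apim-Subscription-Key": key,          # your API key
        "Content-Type": "application/octet-stream",  # raw image binary
    }
    return url, headers, image_bytes
```

Sending it is then one call with any HTTP client, e.g. `requests.post(url, headers=headers, data=body).json()`, which returns the tag list the rest of this post talks about.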

It's worth noting that the results that come back are suspect at best. Machine-based tagging should be looked at much like we look at machine translation: it'll get close, but if you use it there should be some human moderation put into play. Whether that happens pre-publish or post-publish doesn't matter; just know that you're not going to get 100% accuracy. You should expect some tags you'll want to remove, and you'll want to step in and add your own as well.

Still, I find that the tagger inserts values that are likely helpful, and that some authors likely wouldn't have thought of on their own.
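One way to wire that moderation step in is to pre-sort the machine's suggestions by confidence before an author ever sees them. This is just a sketch of the idea: the response shape mirrors what the tag operation returns (`{"tags": [{"name": ..., "confidence": ...}]}`), the `triage_tags` helper is mine, and the 0.75 threshold is an arbitrary starting point, not a recommendation.

```python
# Sketch: split machine-suggested tags into "auto-suggest" and
# "needs human review" buckets based on the confidence score each
# tag comes back with, so a moderator only scrutinizes the weak ones.

def triage_tags(response, threshold=0.75):
    suggested, review = [], []
    for tag in response.get("tags", []):
        bucket = suggested if tag["confidence"] >= threshold else review
        bucket.append(tag["name"])
    return suggested, review

sample = {"tags": [
    {"name": "outdoor",    "confidence": 0.98},
    {"name": "grass",      "confidence": 0.91},
    {"name": "skateboard", "confidence": 0.42},
]}

suggested, review = triage_tags(sample)
# High-confidence names land in `suggested`; the rest wait for a human.
```

In an Episerver context you'd run something like this in a publish event handler and write the results onto the content, but that plumbing is beside the point here.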

April 13, 2017

200 Lines or Less: Combining Twilio, AWS Lambda, and MS Cognitive Services into an SMS Image Analyzer

(Cool story. Show me the code!)

I've been doing a bit in the way of personal projects, and a major source of inspiration for them is my Dad, who has been slowly but surely losing his eyesight for the past decade or so. I really want to help him navigate an intensely visual world as his affliction progresses. Through hours... and hours... of searching Google, I've found very few resources that are practical enough for everyday use.

That's not to say that there's nothing out there. Apple has been doing a fairly great job making iOS accessible for him - he zooms and has text read to him all the time on his phone. He uses his phone camera to snap pictures of items he wants to zoom in on to read. Not to mention Siri and the help she/it has provided so he can stay in touch with family and friends. Siri also helps him find how to get home, which is pretty critical to a man who walks nearly everywhere and can't read road signs. If he takes a wrong turn, it's easy for him to ask "Where am I?" to regain his bearings.

I've introduced my Dad, an avid reader all his life, to the wonder of audiobooks. He now has multiple Alexa-enabled devices around his condo that he uses to control lights, set timers, manage lists, and get tide schedules (he lives by the beach).

So there's not nothing for him to use, but there are a lot of things that are not as good as they could be for him. And some of what I've found online, such as desktop magnifiers, is simply absurdly priced as medical equipment rather than the convenient household electronics it could or should be. The cheap ones I've seen hover around the $1,500 mark. For a camera and a screen. I'm considering building one out of an old monitor and a Raspberry Pi with a camera. Maybe even a Pi Zero. I figure it'll set me back about $200.

The point is that the resources are few, expensive, or only moderately effective.

What's a developer to do? Why, build something of course!