I recently started in the Fishers Youth Mentoring Initiative, and my mentee is a young man in junior high who really likes lizards. He showed me photos of lizards on his iPad, including his pet lizard, and shared plenty of lizard facts. He’s also a talented sketch artist, showcasing many drawings of Pokémon, lizards, and more. Oh yeah, he’s also into computers and loves that iPad.

Part of the mentoring program is helping with school, being there as they adjust to growing up, and both respecting and encouraging their interests.

It just so happens that he had a science project coming up. He wasn’t sure what to write about. His pet lizard had recently had an attitude shift, and he figured it was because it wasn’t getting as much food from week to week. After he changed that, its attitude improved. So, he wanted to cover that somehow.

Seeing his interest in lizards, drawing, and computers I asked if we could combine them. I suggested we build an app, a “Reptile Tracker,” that would help us track reptiles, teach others about them, and show them drawings he did. He loved the idea.

Planning

We only get to meet for 30 minutes each week. So, I gave him some homework for our next meeting: “show me what the app would look like.” He gleefully agreed.

One week later, he proudly showed me his vision for the app:

Reptile Tracker

I said “Very cool.” I’m now convinced he’s “in” on the project and taking it seriously.

I was also surprised to learn that what I expected from “show me what it would look like” was different from what I received from someone much younger than I am, with a different world view. To him, software may simply be visualized as an icon. In my world, it’s mockups and napkin sketches. It definitely made me think about others’ perceptions!

True to software engineer and sort-of project manager form, I explained our next step was to figure out what the app would do. So, here’s our plan:

  1. Identify whether there are reptiles in the photo.
  2. Tell the user whether it’s safe to pick the animal up, whether it’s venomous, and so forth.
  3. Award one point for every reptile found. We’ll only support Lizards, Snakes, and Turtles in the first version.

Alright, time for the next assignment. My homework was to figure out how to do it. His homework was to draw the Lizard, Snake, and Turtle that would be shown in the app.

Challenge accepted!

I quickly determined a couple key design and development points:

  • The icon he drew is great, but it looks like a hand-drawn sketch on the screen. I think I’ll need to ask him to redraw them digitally so they have the right look – which sounds like an opportunity for him to try Fresh Paint on my Surface Book.
  • Azure Cognitive Services, specifically their Computer Vision solution (API), will work for this task. I found a great article on the Xamarin blog by Mike James. I had to update it a bit for this article, as the calls and packages are a bit different two years later, but it definitely pointed me in the right direction.

Writing the Code

The weekend came, and I finally had time. I had been thinking about the app the remainder of the week. I woke up early Saturday and drew up a sketch of the tracking page, then went back to sleep. Later, when it was time to start the day, I headed over to Starbucks…


I broke out my shiny new MacBook Pro and spun up Visual Studio for Mac. Xamarin.Forms was the perfect candidate for this project – cross platform, baby! I started a new Tabbed Page project, brought over some code for taking and resizing photos with the Xam.Plugin.Media plugin, and added the beta Xamarin.Essentials plugin for eventual geolocation and settings support. Hey, it’s only the first week 🙂

Side Note: Normally I would use my Surface Book. This was a chance for me to seriously play with MFractor for the first time. Yay, even more learning this weekend!

Now that I had the basics in there, I created the interface for the Image Recognition Service. I wanted to be able to swap it out later if Azure didn’t cut it, so Dependency Service to the rescue! Here’s the interface:

using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;

namespace ReptileTracker.Services
{
    public interface IImageRecognitionService
    {
        string ApiKey { get; set; }

        Task<ImageAnalysis> AnalyzeImage(Stream imageStream);
    }
}

Now it was time to check out Mike’s article. It made sense, and was close to what I wanted. However, the packages he referenced were for Microsoft’s Project Oxford; those capabilities have since been rolled into Azure as Azure Cognitive Services. Once I found the updated NuGet package – Microsoft.Azure.CognitiveServices.Vision.ComputerVision – and made some code tweaks, I ended up with working code.

A few developer notes for those playing with Azure Cognitive Services:

  • Hold on to that API key, you’ll need it
  • Pay close attention to the Endpoint on the Overview page – you must provide it, otherwise you’ll get a 403 Forbidden

[Screenshot: the Endpoint value on the Azure portal’s Overview page]

And here’s the implementation. Note the implementation must have a parameter-less constructor, otherwise Dependency Service won’t resolve it.

using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Threading.Tasks;
using ReptileTracker.Services;
using Xamarin.Forms;
 
[assembly: Dependency(typeof(ImageRecognitionService))]
namespace ReptileTracker.Services
{
    public class ImageRecognitionService : IImageRecognitionService
    {
        /// <summary>
        /// The Azure Cognitive Services Computer Vision API key.
        /// </summary>
        public string ApiKey { get; set; }
 
        /// <summary>
        /// Parameterless constructor so Dependency Service can create an instance.
        /// </summary>
        public ImageRecognitionService()
        {
 
        }
 
        /// <summary>
        /// Initializes a new instance of the <see cref="T:ReptileTracker.Services.ImageRecognitionService"/> class.
        /// </summary>
        /// <param name="apiKey">API key.</param>
        public ImageRecognitionService(string apiKey)
        {
 
            ApiKey = apiKey;
        }
 
        /// <summary>
        /// Analyzes the image.
        /// </summary>
        /// <returns>The image.</returns>
        /// <param name="imageStream">Image stream.</param>
        public async Task<ImageAnalysis> AnalyzeImage(Stream imageStream)
        {
            const string funcName = nameof(AnalyzeImage);
 
            if (string.IsNullOrWhiteSpace(ApiKey))
            {
                throw new ArgumentException("API Key must be provided.");
            }
 
            var features = new List<VisualFeatureTypes> {
                VisualFeatureTypes.Categories,
                VisualFeatureTypes.Description,
                VisualFeatureTypes.Faces,
                VisualFeatureTypes.ImageType,
                VisualFeatureTypes.Tags
            };
 
            var credentials = new ApiKeyServiceClientCredentials(ApiKey);
            var handler = new System.Net.Http.DelegatingHandler[] { };
            using (var visionClient = new ComputerVisionClient(credentials, handler))
            {
                try
                {
                    imageStream.Position = 0;
                    visionClient.Endpoint = "https://eastus.api.cognitive.microsoft.com/";
                    var result = await visionClient.AnalyzeImageInStreamAsync(imageStream, features);
                    return result;
                }
                catch (Exception ex)
                {
                    Debug.WriteLine($"{funcName}: {ex.GetBaseException().Message}");
                    return null;
                }
            }
        }
 
    }
}

And here’s how I referenced it from my content page:

pleaseWait.IsVisible = true;
pleaseWait.IsRunning = true;
var imageRecognizer = DependencyService.Get<IImageRecognitionService>();
imageRecognizer.ApiKey = AppSettings.ApiKey_Azure_ImageRecognitionService;
var details = await imageRecognizer.AnalyzeImage(new MemoryStream(ReptilePhotoBytes));
pleaseWait.IsRunning = false;
pleaseWait.IsVisible = false;

var tagsReturned = details?.Tags != null 
                   && details?.Description?.Captions != null 
                   && details.Tags.Any() 
                   && details.Description.Captions.Any();

lblTags.IsVisible = true; 
lblDescription.IsVisible = true; 

// Determine if reptiles were found. 
var reptilesToDetect = AppResources.DetectionTags.Split(','); 
var reptilesFound = details?.Tags?.Any(t => reptilesToDetect.Contains(t.Name.ToLower())) ?? false;

// Show animations and graphics to make things look cool, even though we already have plenty of info. 
await RotateImageAndShowSuccess(reptilesFound, "lizard", details, imgLizard);
await RotateImageAndShowSuccess(reptilesFound, "turtle", details, imgTurtle);
await RotateImageAndShowSuccess(reptilesFound, "snake", details, imgSnake);
await RotateImageAndShowSuccess(reptilesFound, "question", details, imgQuestion);

That worked like a champ, with a few gotchas:

  • I would receive a 400 Bad Request if I sent an image that was too large. 1024 x 768 worked, but 2000 x 2000 didn’t. The documentation says the image must be less than 4MB and at least 50 x 50, so it’s worth resizing photos before sending them (see the sketch after this list).
  • That API endpoint must be initialized. Examples don’t always make this clear. There’s no constructor that takes an endpoint address, so it’s easy to miss.
  • It can take a moment for recognition to occur. Make sure you’re using async/await so you don’t block the UI Thread!
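
Side note on that first gotcha: Xam.Plugin.Media can do the downsizing for you at capture time. Here’s a rough sketch of the approach – the option values are just examples, not necessarily what I shipped:

using Plugin.Media;
using Plugin.Media.Abstractions;

// Inside an async method on the content page. Let the plugin downsample the
// photo so it stays comfortably under the Computer Vision limits.
var photo = await CrossMedia.Current.TakePhotoAsync(new StoreCameraMediaOptions
{
    Directory = "ReptileTracker",          // hypothetical folder name
    Name = "reptile.jpg",
    PhotoSize = PhotoSize.Medium,          // roughly half the original dimensions
    CompressionQuality = 80                // smaller payload, still plenty for tagging
});

if (photo != null)
{
    // GetStream() hands back the resized image, ready for AnalyzeImage.
    var details = await imageRecognizer.AnalyzeImage(photo.GetStream());
}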

Prettying It Up

Before I get into the results, I wanted to point out I spent significant time prettying things up. I added animations, different font sizes, better icons from The Noun Project, and more. While the image recognizer only took about an hour, the UX took a lot more. Funny how that works.
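
For the curious, here’s roughly what one of those animation helpers – the RotateImageAndShowSuccess calls in the snippet above – might look like. This is a sketch of the idea, not the exact code from the app:

// Sketch only: assumes the usual usings (System.Linq, System.Threading.Tasks,
// Xamarin.Forms, and the Computer Vision models namespace) on the content page.
private async Task RotateImageAndShowSuccess(bool reptilesFound, string tagName, ImageAnalysis details, Image image)
{
    // Did the analysis tag this particular reptile? The "question" icon is the
    // fallback shown when nothing matched at all.
    var matched = details?.Tags?.Any(t => t.Name.ToLower().Contains(tagName)) ?? false;
    var show = tagName == "question" ? !reptilesFound : matched;

    image.IsVisible = show;
    if (show)
    {
        // Spin the icon once to celebrate the find.
        await image.RelRotateTo(360, 750, Easing.CubicInOut);
    }
}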

Mixed Results

So I was getting results. I added a few labels to my view to see what was coming back. Some of them were funny, others were accurate. The tags were expected, but the captions were fascinating. The captions describe the scene as the Computer Vision API sees it. I spent most of the day taking photos and seeing what was returned. Some examples:

  • My barista, Matt, was “a smiling woman working in a store”
  • My mom was “a smiling man” – she was not amused

Most of the time, as long as the subjects were clear, the scene recognition was correct:

[Screenshot: a scene the API recognized correctly]

Or close to correct, in this shot with a turtle at Petsmart:

[Screenshot: the turtle at Petsmart]

Sometimes, though, nothing useful would be returned:

[Screenshot: a photo where nothing useful came back]

I would have thought it would have found “White Castle”. I wonder whether it avoids returning brand names for some reason. They do have an OCR endpoint, so maybe that would be useful in another use case.

Sometimes, even though I thought an image would “obviously” be recognized, it wasn’t:

[Screenshot: an image that wasn’t recognized]

I’ll need to read more about how to improve accuracy, if that’s even an option.

Good thing I implemented it with an interface! I could try Google’s computer vision services next.

Next Steps

We’re not done with the app yet – this week, we will discuss how to handle the scoring. I’ll post updates as we work on it. Here’s a link to the iOS beta.

Some things I’d like to try:

  • Highlight the tags in the image, by drawing over the image. I’d make this a toggle.
  • Clean up the UI to toggle “developer details”. It’s cool to show those now, but it doesn’t necessarily help the target user. I’ll ask my mentee what he thinks.

Please let me know if you have any questions by leaving a comment!

Want to learn more about Xamarin? I suggest Microsoft’s totally awesome Xamarin University. All the classes you need to get started are free.

Update 2018-11-06:

  • The tags are returned in two different locations – Tags and Description.Tags – and the two sets aren’t identical, so I’m now combining those lists and getting better results (see the sketch below).
  • I found I could get color details. I’ve updated the accent color surrounding the photo. Just a nice design touch.
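
Here’s the sketch of that tag merge – a quick illustration assuming the SDK shapes, where Tags holds ImageTag objects (each with a Name) and Description.Tags is a plain list of strings:

using System.Linq;

// Merge both tag collections into one lowercase, de-duplicated list before
// matching against the DetectionTags resource.
var allTags = (details?.Tags?.Select(t => t.Name) ?? Enumerable.Empty<string>())
    .Concat(details?.Description?.Tags ?? Enumerable.Empty<string>())
    .Select(t => t.ToLower())
    .Distinct()
    .ToList();

var reptilesToDetect = AppResources.DetectionTags.Split(',');
var reptilesFound = allTags.Any(t => reptilesToDetect.Contains(t));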

I ran into this issue today and couldn’t figure out why my Azure Blob Storage images weren’t loading while running locally.

I first thought it was the ASP.NET Core Image Tag Helper. Nope.

Digging a little deeper, it turns out my Ad Blocker was the culprit. Disabled AdBlock+ and all was again good in the world.

I hope that helps someone else struggling with this annoyance! 🙂

 

I ran into this issue this week. I would define the Source as a URL and then, nothing…

It turns out, with FFImageLoading, an indispensable Xamarin.Forms plugin available via NuGet, you must also set the ErrorPlaceholder property if loading your image from a URL. That did the trick – images started loading perfectly!

I’ve reported what I think is a bug. I haven’t yet looked at their code.

Here’s an example of how I fixed it:

Working Code:

<ff:CachedImage 
    Source="{Binding ModelImageUrl}"
    ErrorPlaceholder="icon_errorloadingimage"
    DownsampleToViewSize="True"
    RetryCount="3"
    RetryDelay="1000"
    WidthRequest="320"
    HeightRequest="240"
    Aspect="AspectFit"
    HorizontalOptions="Center" 
    VerticalOptions="Center" />

Non-Working Code, note the missing ErrorPlaceholder property:

<ff:CachedImage 
    Source="{Binding ModelImageUrl}"
    DownsampleToViewSize="True"
    RetryCount="3"
    RetryDelay="1000"
    WidthRequest="320"
    HeightRequest="240"
    Aspect="AspectFit"
    HorizontalOptions="Center" 
    VerticalOptions="Center" />

I hope that helps others with the same issue. Enjoy!

I had the need today to display strikethrough text in a Xamarin.Forms app. The built-in Label control didn’t support such formatting. So, leaning on Unicode’s combining long stroke overlay character, I wrote a function to convert any string to a strikethrough string. To be fair, this works great for the normal character set, so I feel it’s good for most things. Please let me know if your mileage varies.

Business case: I needed to show a “Was some dollar amount” value. Like “Was $BLAH, and Now BLAH!”

In my class, I simply called into my strikethrough converter, as follows:

The property:

public string StrikeThroughValueText => StrikeThroughValue.HasValue ? $"{ConvertToStrikethrough(StrikeThroughValue.Value.ToString("C"))}" : "???";

The function:

private string ConvertToStrikethrough(string stringToChange)
{
    var newString = "";
    foreach (var character in stringToChange)
    {
        newString += $"{character}\u0336";
    }
 
    return newString;
}

Enjoy! I hope this helps you 🙂

Link: More about why this works: Combining Long Stroke Overlay.

I ran into this issue today when debugging on Android, so I’m posting what took an hour to figure out 🙂 This is for when you’re getting a null reference exception when attempting to scan. I was following the instructions here, and then, well, it wouldn’t work 🙂

Rather than using the Dependency Resolver, you’ll need to pass in the Application Context from Android. So, in the App class, create a static reference to the IQrCodeScanningService, as follows:

	public partial class App : Application
	{
 
	    public static IQrCodeScanningService QrCodeScanningService;

Then, populate that static instance from the Android app, as follows:

App.QrCodeScanningService = new QrCodeScanningService(this);
global::Xamarin.Forms.Forms.Init(this, bundle);
LoadApplication(new App());

Obviously you’ll also need a matching constructor, like so:

public class QrCodeScanningService : IQrCodeScanningService
{
    private readonly Context _context;
 
    public QrCodeScanningService(Context context)
    {
        _context = context;
    }
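
For reference, the interface itself (which comes from the instructions I was following) is minimal – something like this sketch, though your member names may differ:

using System.Threading.Tasks;

// A minimal sketch of the scanning interface the shared project compiles against.
// The method name here is just an assumption; the important part is that the
// Android project supplies the implementation (with its Context) rather than
// relying on the Dependency Resolver.
public interface IQrCodeScanningService
{
    Task<string> ScanAsync();
}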

This solved the problem like magic for me. I hope it helps you, too!

P.S. Make sure you have the CAMERA permission. I’ve also read you may need the FLASHLIGHT permission, although I’m not entirely sure that’s required.

So I had to deal with this recently. There were many examples out there, many of which didn’t work. Sooo, I’m blogging my code example so others don’t remain stuck 🙂

In short:

  1. In the XAML, add a CommandParameter binding, and wire up the Clicked event handler.
  2. In the C# Event Handler: Read the (sender as Button).CommandParameter and it’ll be the bound object. Cast / parse accordingly.

XAML (condensed):

<ListView x:Name="LocationsListView"
          ItemsSource="{Binding Items}"
          VerticalOptions="FillAndExpand"
          HasUnevenRows="true"
          RefreshCommand="{Binding LoadLocationsCommand}"
          IsPullToRefreshEnabled="true"
          IsRefreshing="{Binding IsBusy, Mode=OneWay}"
          Refreshing="LocationsListView_OnRefreshing"
          CachingStrategy="RecycleElement">
    <ListView.ItemTemplate>
        <DataTemplate>
            <ViewCell>
                <StackLayout Orientation="Horizontal" Padding="5">
                    <StackLayout WidthRequest="64">
                        <Button
                            CommandParameter="{Binding Id}"
                            BackgroundColor="#4CAF50"
                            Clicked="MapButtonClicked"
                            Text="Map"
                            HorizontalOptions="FillAndExpand" />
                    </StackLayout>
                </StackLayout>
            </ViewCell>
        </DataTemplate>
    </ListView.ItemTemplate>
</ListView>

C#:

protected void MapButtonClicked(object sender, EventArgs e)
{
    var selectedLocation = _viewModel.Items.First(item =>
        item.Id == int.Parse((sender as Button).CommandParameter.ToString()));

    Utility.LaunchMapApp(selectedLocation.Latitude, selectedLocation.Longitude);
}

I recently read a “Coding Question” thread, and a developer was asking what we all thought about this article. I wanted to hold on to my replies, so I’m posting them here for posterity 🙂

Auri:

Only a Sith deals in absolutes. There are use cases for everything, with exceptions.

Auri:

Seriously, though, I’d write tests to ensure the states that you want work as expected.

Auri:

And now that I’ve had my coffee:

Exceptions are a necessary construct. If something doesn’t go as planned, we need a way to handle it. In his article, he’s against blindly swallowing exceptions. That’s generally sound advice. Ask yourself: “Is this an expected exception? If so, do I have a specific exception handler for it, or am I just using the generic catch-all? Have other exceptions occurred? If so, is this one expected? Didn’t I read about C# support for exception switch statements? Did I just shiver?”

Like I was explaining before, only a Sith deals in absolutes. The way I see it, if an error is unexpected, I should have specific use cases for how to handle that exception. I should, generally, never blindly swallow with no logging, or simply re-throw and assume the code above will address it. At least, not without a custom/known/non-generic exception I can check up the chain, or include in an integration test. Good article about testing [written by a friend] here, btw: https://arktronic.com/weblog/2015-11-01/automated-software-testing-part-1-reasoning/
At the very least, and I try to follow this rule as much as possible, log any exception for tracking and proactive/offensive development. Better that you can read logs, or run scripts that alert you to exceptions, and help things go right, than to be blind with a “well, the code seems to work, so let it be” approach. That’s the key goal, really: help things go right. There are exceptions [heh] to this rule, like simple utility apps whose job is to bulk process data, where exceptions are part of the game. Still, I try to make sure to log, even with those. Unexpected/unintended bugs tend to appear when you’re dealing with massive amounts of data, and logs give a perspective you can’t see from debugging.
Ok, next cup of coffee.
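
Follow-up: the “exception switch statements” I was half-remembering are most likely C# exception filters – a catch clause with a when condition. Here’s a quick, made-up sketch of how they pair with the “at least log it” rule (the method and URL handling are purely illustrative):

using System;
using System.Diagnostics;
using System.Net;

// Exception filters let you handle the cases you expect, while everything
// else still gets logged and keeps bubbling up.
string DownloadPage(string url)
{
    try
    {
        using (var client = new WebClient())
        {
            return client.DownloadString(url);
        }
    }
    catch (WebException ex) when (ex.Status == WebExceptionStatus.Timeout)
    {
        // Expected now and then on flaky connections: log it and fall back.
        Debug.WriteLine($"Timed out fetching {url}: {ex.Message}");
        return null;
    }
    catch (Exception ex)
    {
        // Unexpected: at minimum, log before re-throwing.
        Debug.WriteLine(ex);
        throw;
    }
}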

As part of my .NET 301 Advanced class at the fantastic Eleven Fifty Academy, I teach Xamarin development. It’s sometimes tough, as every student has a different machine. Some have PCs, others have Macs running Parallels or Bootcamp. Some – many – have Intel processors, while others have AMD. I try to recommend students come to the class with Intel processors, due to the accelerated Android emulator benefit Intel’s HAXM – Hardware Accelerated Execution Manager – provides. This blog entry is a running list of how I’ve solved getting the emulator running on so many machines. I hope the list helps you, too.

This list will be updated from time to time, as I find new bypasses. At this time, the list is targeted primarily at machines with an Intel processor. Those with AMD and Windows are likely stuck with the ARM emulators. Umm, sorry. I welcome solutions there, too!

Last updated: December 4, 2017

Make sure you’re building from a path whose ultimate length is less than 248 characters.

That odd Windows problem of long file paths bites us again here. Many new developers tend to build under c:\users\username\documents\Visual Studio 2017\projectname. Add to that the name of the project, and all its subfolders, and the eventual DLLs and executable are out of reach of various processes.

I suggest in this case you have a folder such as c:\dev\ and build your projects under there. That’s solved many launch and compile issues.

Use the x86 emulators.

If you have an Intel processor, then use the x86 and x64 based emulators instead of ARM. They’re considerably faster, as long as you have a) an Intel processor with virtualization abilities, which I believe all or most modern Intel processors do, and b) Intel’s HAXM installed.

Make sure VT-x / Hardware Virtualization is enabled.

Intel’s HAXM – which you can download here – won’t run if the processor’s virtualization support is disabled. You need to tackle this in the BIOS. That varies per machine. Many devices seem to ship with the feature disabled. Enabling it will allow HAXM to work.

Uninstall the Mobile Development with .NET Workload using the Visual Studio Installer, and reinstall.

Yes, I’m suggesting Uninstall + Reinstall. This has worked well in the class. Go to Start, then Visual Studio Installer, and uncheck the box. Restart afterwards. Then reinstall, and restart.

Mobile Development Workload Screenshot

Use the Xamarin Android SDK Manager.

The Xamarin team has built a much better Android SDK Manager than Google’s. It’s easy to install HAXM, update Build Tools and Platforms, and so forth. Use it instead and dealing with tool version conflicts may be a thing of the past.

Make sure you’re using the latest version of Visual Studio.

Bugs are fixed all the time, especially with Xamarin. Make sure you’re running the latest bits and your problems may be solved.

Experiment with Hyper-V Enabled and Disabled.

I’ve generally had issues with virtualization when Hyper-V is enabled. If you’re having trouble with it enabled, try with it disabled.

To enable/disable Hyper-V, go to Start, then type Windows Features. Choose Turn Windows Features On or Off. When the selection list comes up, toggle the Hyper-V feature accordingly.

Note: You may need to disable Windows Device Guard before you can disable Hyper-V. Thanks to Matt Soucoup for this tip.

Use a real device.

As a mobile developer, you should never trust the emulators to reflect the real thing. If you can’t get the emulators to work, and even if you can, you have the option of picking up an Android phone or tablet for cheap. Get one and test with it. If you’re not clear on how to set up Developer Mode on Android devices, it’s pretty simple. Check out Google’s article on the subject.

Try Xamarin’s HAXM and emulator troubleshooting guide.

The Xamarin folks have a guide, too.

If all else fails, use the ARM emulators.

This is your last resort. If you don’t have an Intel processor, or a real device available, use the ARM emulators. They’re insanely slow. I’ve heard there’s an x86 emulator from AMD, yet it’s supposedly only available for Linux. Not sure why that decision was made, but moving on… 🙂

Have another solution?

Have a suggestion, solution, or feature I’ve left out? Let me know and I’ll update!

 

CEATEC, the Combined Exhibition of Advanced Technologies in Makuhari, Japan, is this week. The latest innovations from Japanese companies are showcased here, often many months before Americans get a taste. I’ll be posting a reporter’s notebook in a bit. For now, enjoy clicking through videos and photos of cool things found on the show floor!

Panasonic’s Cocotto Children’s Companion Robot

Bowing Vision Violin Improvement Sensors & App

Hitachi Robot for the Elderly, and those with Dementia

Omron “Ping Pong” Robot, Now with “Smash” Shot Abilities

au’s AR Climbing Wall

Unisys’ Manufacturing Robot That Follows Lines

VR Racer

Takara Tomy Programmable Robot

Dry Ice Locomotion

Airline Customer Service Bot Attendant

Feel the Biker’s Heartbeat

Wind Sensors Paired with Fun Animations

The Trouble with Tribbles – Qoobo Robot

Spider-Like Robot from Bandai

Semi-Transparent Display with Water Effect

Bandai BN Bot

Model Train

Kunshan Plasma

The Many Faces of Robots at CEATEC

There were MANY robots at CEATEC. Many just sit there and answer basic questions. Still, some, like Omron’s Ping Pong robot, can learn and adapt and make a difference.

 

My latest Visual Studio extension is now available! Get it here: 2017, 2015

So what is CodeLink?

Getting two developers on the same page over chat can be time consuming. I work remote, so I can’t just walk to someone’s desk. I often find myself saying “go to this file” and “ok, now find function <name>”. Then I wait. Most of the time it’s only 10-20 seconds lost. If it’s a common filename or function, it takes longer. Even then, mistakes can be made.

So I asked myself: Self, wouldn’t it be great if I could send them a link to the place / cursor location in the solution I’m at? Just like a web link?

CodeLink was born.

So here’s what a CodeLink looks like:

codelink://[visualstudio]/[AurisIdeas.Common.Security\AurisIdeas.Common.Security.csproj]/[ParameterFactory.cs]/[9]

I would simply share that CodeLink with a fellow developer. They’d select “Open CodeLink…” in Visual Studio, paste it in, and be brought to that line of code in that project. No more walking them through it, much less waiting.

Technically, the format is:

codelink://[Platform]/[Project Unique Path]/[File Unique Path]/[LineNumber]
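
To make the format concrete, here’s a little sketch of how a CodeLink could be pulled apart – not the extension’s actual code, just an illustration of the segments:

using System;

// Hypothetical holder for the pieces of a CodeLink URI.
public class ParsedCodeLink
{
    public string Platform { get; set; }
    public string ProjectPath { get; set; }
    public string FilePath { get; set; }
    public int LineNumber { get; set; }
}

public static class CodeLinkParser
{
    // Splits codelink://[Platform]/[Project]/[File]/[Line] into its parts.
    public static ParsedCodeLink Parse(string link)
    {
        const string scheme = "codelink://";
        if (!link.StartsWith(scheme, StringComparison.OrdinalIgnoreCase))
            throw new FormatException("Not a CodeLink.");

        // Trim the scheme and the outer brackets, then split on the "]/[" separators.
        var body = link.Substring(scheme.Length).TrimStart('[').TrimEnd(']');
        var parts = body.Split(new[] { "]/[" }, StringSplitOptions.None);
        if (parts.Length != 4)
            throw new FormatException("Expected four bracketed segments.");

        return new ParsedCodeLink
        {
            Platform = parts[0],
            ProjectPath = parts[1],
            FilePath = parts[2],
            LineNumber = int.Parse(parts[3])
        };
    }
}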

What’s it good for?

Other than what I’ve suggested, and what you come up with, I’m thinking CodeLink will help you, teams, teachers, and students with:

  • Include CodeLinks in bugs and code reviews to highlight what needs to be reviewed
  • Share CodeLinks in Git repos, pointing to specific code examples, points of interest, and so forth
  • Share CodeLinks with students so they can keep referring back to and reviewing useful code

So what’s next?

When I was thinking of the link format, I figured I may end up extending this to VS Code and other editors in the future. After all, not everyone uses VS. Why not Xcode, Visual Studio for Mac, or Atom? So, I added a platform identifier.

As always, I look forward to your feedback. Hit me up on Twitter or LinkedIn.