
Overview

Ever want to run your AWS Lambda functions locally so you can debug them efficiently? Well, the documentation for doing so isn’t gathered in one nice, convenient location. Still, the tools ARE THERE. You just need instructions on what to set up and how. That’s what this article aims to help you accomplish.

Assumptions

I’m assuming a Windows and Visual Studio environment here. If that’s not your go-to, I imagine the adjustments are small. If you’d like to share your adjustments, I’m happy to update this article.

I’m also assuming you started your project with the AWS Lambda Project (.NET Core, C#) template.

Pre-requisites

Before you can debug, the following must be installed:

  • The AWS .NET Mock Lambda Test Tool (see the aws-lambda-dotnet GitHub repo: https://github.com/aws/aws-lambda-dotnet)
  • Optionally, DynamoDB Local, if your function uses DynamoDB: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html

Mock Lambda Test Tool Install Shortcut

You can install the Mock Lambda Test Tool from the command line easily. Just open PowerShell and run the following command:

dotnet tool install -g Amazon.Lambda.TestTool-8.0
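If the install succeeded, the tool will appear in your global tool list, which you can verify with:

dotnet tool list -g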

Note that the -8.0 suffix needs to match your project’s .NET version. Here are some versions to choose from, which will update from time to time. See the GitHub repo in Pre-requisites for the latest table.

.NET Version                 Tool NuGet Package          Tool executable
.NET Core 2.1 (Deprecated)   Amazon.Lambda.TestTool-2.1  dotnet-lambda-test-tool-2.1.exe
.NET Core 3.1 (Deprecated)   Amazon.Lambda.TestTool-3.1  dotnet-lambda-test-tool-3.1.exe
.NET 5.0 (Deprecated)        Amazon.Lambda.TestTool-5.0  dotnet-lambda-test-tool-5.0.exe
.NET 6.0                     Amazon.Lambda.TestTool-6.0  dotnet-lambda-test-tool-6.0.exe
.NET 7.0 (Deprecated)        Amazon.Lambda.TestTool-7.0  dotnet-lambda-test-tool-7.0.exe
.NET 8.0                     Amazon.Lambda.TestTool-8.0  dotnet-lambda-test-tool-8.0.exe
.NET 9.0                     Amazon.Lambda.TestTool-9.0  dotnet-lambda-test-tool-9.0.exe
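Not sure which version applies to you? Check the TargetFramework element in your project’s .csproj file, or list the SDKs installed on your machine:

dotnet --list-sdks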

Configuring Your Project

In your project, you will need to make some adjustments in order to debug.

Update launchSettings.json

In the project’s launchSettings.json file, make sure you’re pointing to the Mock Lambda Test Tool profile and that environmentVariables are specified. Something like this:

{
  "profiles": {
    "Mock Lambda Test Tool": {
      "commandName": "Executable",
      "commandLineArgs": "--port 5050",
      "workingDirectory": ".\\bin\\$(Configuration)\\net8.0",
      "executablePath": "%USERPROFILE%\\.dotnet\\tools\\dotnet-lambda-test-tool-8.0.exe",
      "environmentVariables": {
        "AWS_LAMBDA_RUNTIME_API": "localhost:5050",
        "AWS_PROFILE": "default",
        "AWS_REGION": "us-east-2",
        "DYNAMODB_ENDPOINT": "http://localhost:8000"
      }
    }
  }
}

The DYNAMODB_ENDPOINT entry is optional, and your tastes for naming environment variables may vary.

Make sure the workingDirectory and executablePath match the versions of .NET and the Mock Lambda Test Tool you installed.
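If you do pass DYNAMODB_ENDPOINT, your function can read it when constructing its DynamoDB client, so local runs hit DynamoDB Local instead of AWS. Here’s a minimal sketch of what a helper like the CreateDynamoDbClient() used later in this article might look like (the body is my assumption, not part of the template; it would live inside your function class):

using System;
using Amazon.DynamoDBv2;

private static IAmazonDynamoDB CreateDynamoDbClient()
{
    var config = new AmazonDynamoDBConfig();

    // When DYNAMODB_ENDPOINT is set (e.g., http://localhost:8000 for DynamoDB Local),
    // point the client there; otherwise fall back to normal AWS endpoint resolution.
    var endpoint = Environment.GetEnvironmentVariable("DYNAMODB_ENDPOINT");
    if (!string.IsNullOrWhiteSpace(endpoint))
    {
        config.ServiceURL = endpoint;
    }

    return new AmazonDynamoDBClient(config);
}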

Update aws-lambda-tools-defaults.json

You must also tell the Mock Lambda Test Tool which function should handle your requests. You can only test one function at a time (sorry), but it’s easy to update.

Populate the function-handler setting in the aws-lambda-tools-defaults.json file as follows:

{
  "Information": [
    "This file provides default values for the deployment wizard inside Visual Studio and the AWS Lambda commands added to the .NET Core CLI.",
    "To learn more about the Lambda commands with the .NET Core CLI execute the following command at the command line in the project root directory.",
    "dotnet lambda help",
    "All the command line options for the Lambda command can be specified in this file."
  ],
  "profile": "default",
  "region": "us-east-1",
  "configuration": "Release",
  "function-architecture": "arm64",
  "function-runtime": "dotnet8",
  "function-memory-size": 128,
  "function-timeout": 30,
  "function-handler": "assemblyName::fullClassPath::nameOfFunction",
  "framework": "net8.0",
  "package-type": "Zip"
}

See it there, on line 15? Populate its three parts as follows:

  • assemblyName: The name of your assembly. For example, OhMyLambda.
  • fullClassPath: The full path of the class containing your function. For example, OhMyLambda.Functions.MyFunctionClass.
  • nameOfFunction: The name of your function, such as Handler.

So, if you had a class like this:

using Amazon.DynamoDBv2;
using Amazon.Lambda.APIGatewayEvents;
using Amazon.Lambda.Core;
using System.Threading.Tasks;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace OhMyLambda.Functions;

public class MyFunctionClass(IAmazonDynamoDB dynamoDbClient)
{
    public MyFunctionClass() : this(CreateDynamoDbClient()) { }

    public async Task<APIGatewayProxyResponse> Handler(APIGatewayProxyRequest request, ILambdaContext context)
    {
        ... more code here ...

…then your function-handler line would look like:

  "function-handler": "OhMyLambda::OhMyLambda.Functions.MyFunctionClass::Handler",

All good? Let’s continue.

Before You Debug

Before debugging, make sure Mock Lambda Test Tool is selected as your startup option in Visual Studio. You should also have DynamoDB running if your function needs it.

Starting DynamoDb

If you also need DynamoDB running, start it before debugging. If you have installed DynamoDB Local from the link above, you need to set up AWS credentials and THEN start it.

To set up AWS credentials for the local instance, open PowerShell, run aws configure, and enter the following:

  • AWS Access Key ID [None]: fakeMyKeyId
  • AWS Secret Access Key [None]: fakeSecretAccessKey
  • Default region name [None]: fakeRegion
  • Default output format [None]: (just hit enter)

This takes care of accessing DynamoDB Local with consistent credentials. (DynamoDB Local doesn’t actually validate credentials, but without the -sharedDb flag it keeps separate data per access key and region, so consistent fake values matter.)

Once the credentials have been set, you can launch DynamoDB Local as follows:

java -D"java.library.path=./DynamoDBLocal_lib" -jar DynamoDBLocal.jar -sharedDb

I added this to a batch file to quickly run it from File Explorer.

This will launch DynamoDB Local. You can press Ctrl+C to end its process when you’re done.
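To sanity-check that DynamoDB Local is reachable before you start debugging, a quick throwaway console app works. A minimal sketch, assuming the default port 8000:

using System;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;

class LocalDynamoCheck
{
    static async Task Main()
    {
        // Point the client at DynamoDB Local rather than the real AWS endpoint.
        var client = new AmazonDynamoDBClient(new AmazonDynamoDBConfig
        {
            ServiceURL = "http://localhost:8000"
        });

        // ListTables is a cheap call that proves connectivity is wired up.
        var tables = await client.ListTablesAsync();
        Console.WriteLine($"Tables: {string.Join(", ", tables.TableNames)}");
    }
}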

Debugging

You should be all set now. Simply launch the debugger, and the Mock Lambda Test Tool should open in your default web browser.

If you don’t see your function details, or the top two dropdowns are empty, there’s an error in your configuration. Make sure that function-handler is correct!

Triggering the Lambda

So how do you send the payload and trigger the Lambda? Amazon has you covered: select API Gateway AWS Proxy from the Example Requests dropdown, then fill in the body field with your JSON payload, formatted as an escaped string. Hit Execute Function, and the request will be made and should trigger your debug breakpoint, assuming you’ve set one.
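Note that body holds a JSON string, not a JSON object, so the quotes inside it must be escaped. A trimmed example of what the request might look like (the field values here are hypothetical):

{
  "httpMethod": "POST",
  "path": "/example",
  "body": "{\"name\": \"value\"}"
}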

The End

That’s it! You should be able to debug now! I hope this helped. If you have any updates or questions, feel free to hit me up. You can find me on LinkedIn: https://www.linkedin.com/in/aurirahimzadeh

I recently started in the Fishers Youth Mentoring Initiative, and my mentee is a young man in junior high who really likes lizards. He showed me photos of them on his iPad, photos of his pet lizard, and informed me of many lizard facts. He’s also a talented sketch artist – showcasing many drawings of Pokemon, lizards and more. Oh, yeah, he’s also into computers and loves his iPad.

Part of the mentoring program is helping with school, being there as they adjust to growing up, and both respecting and encouraging their interests.

It just so happens that he had a science project coming up, and he wasn’t sure what to write about. His pet lizard had recently had an attitude shift, and he figured it was because it wasn’t getting as much food week over week. After he changed that, its attitude improved. So, he wanted to cover that somehow.

Seeing his interest in lizards, drawing, and computers, I asked if we could combine them. I suggested we build an app, a “Reptile Tracker,” that would help us track reptiles, teach others about them, and show off his drawings. He loved the idea.

Planning

We only get to meet for 30 minutes each week, so I gave him some homework: “Next time we meet, show me what the app would look like.” He gleefully agreed.

One week later, he proudly showed me his vision for the app:

[Image: his hand-drawn “Reptile Tracker” app icon]

I said “Very cool.” I’m now convinced “he’s in” on the project, and taking it seriously.

I was also surprised to learn that my expectations of “show me what it would look like” differed from what I received from someone much younger than I, with a different worldview. To him, software may simply be visualized as an icon. In my world, it’s mockups and napkin sketches. It definitely made me think about others’ perceptions!

True to software engineer and sort-of project manager form, I explained our next step was to figure out what the app would do. So, here’s our plan:

  1. Identify if there are reptiles in the photo.
  2. Tell the user whether it’s safe to pick up, whether it’s venomous, and so forth.
  3. Get one point for every reptile found. We’ll only support Lizards, Snakes, and Turtles in the first version.

Alright, time for the next assignment. My homework was to figure out how to do it. His homework was to draw up the Lizard, Snake, and Turtle that will be shown in the app.

Challenge accepted!

I quickly determined a couple key design and development points:

  • The icon he drew is great, but looks like a drawing on the screen. I think I’ll ask him to redraw the reptiles on my Surface Book so they have the right look. Sounds like an opportunity for him to try Fresh Paint.
  • Azure Cognitive Services, specifically their Computer Vision solution (API), will work for this task. I found a great article on the Xamarin blog by Mike James. I had to update it a bit for this article, as the calls and packages are a bit different two years later, but it definitely pointed me in the right direction.

Writing the Code

The weekend came, and I finally had time. I had been thinking about the app the remainder of the week. I woke up early Saturday and drew up a sketch of the tracking page, then went back to sleep. Later, when it was time to start the day, I headed over to Starbucks…

[Photo: my early-morning sketch of the tracking page]

I broke out my shiny new MacBook Pro and spun up Visual Studio for Mac. Xamarin Forms was the perfect candidate for this project – cross platform, baby! I started a new Tabbed Page project, brought over some code for taking photos with the Xam.Plugin.Media plugin and resizing them, and the beta Xamarin.Essentials plugin for eventual geolocation and settings support. Hey, it’s only the first week :)

Side Note: Normally I would use my Surface Book. This was a chance for me to seriously play with MFractor for the first time. Yay, even more learning this weekend!

Now that I had the basics in there, I created the interface for the Image Recognition Service. I wanted to be able to swap it out later if Azure didn’t cut it, so Dependency Service to the rescue! Here’s the interface:

using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;

namespace ReptileTracker.Services
{
    public interface IImageRecognitionService
    {
        string ApiKey { get; set; }
        Task<ImageAnalysis> AnalyzeImage(Stream imageStream);
    }
}

Now it was time to check out Mike’s article. It made sense and was close to what I wanted. However, the packages he referenced were for Microsoft’s Project Oxford. By 2018, those capabilities had been rolled into Azure as Azure Cognitive Services. Once I found the updated NuGet package – Microsoft.Azure.CognitiveServices.Vision.ComputerVision – and made some code tweaks, I ended up with working code.

A few developer notes for those playing with Azure Cognitive Services:

  • Hold on to that API key, you’ll need it
  • Pay close attention to the Endpoint on the Overview page – you must provide it, otherwise you’ll get a 403 Forbidden


And here’s the implementation. Note the implementation must have a parameterless constructor; otherwise, DependencyService won’t resolve it.

using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Threading.Tasks;
using ReptileTracker.Services;
using Xamarin.Forms;
 
[assembly: Dependency(typeof(ImageRecognitionService))]
namespace ReptileTracker.Services
{
    public class ImageRecognitionService : IImageRecognitionService
    {
        /// <summary>
        /// The Azure Cognitive Services Computer Vision API key.
        /// </summary>
        public string ApiKey { get; set; }
 
        /// <summary>
        /// Parameterless constructor so Dependency Service can create an instance.
        /// </summary>
        public ImageRecognitionService()
        {
        }
 
        /// <summary>
        /// Initializes a new instance of the <see cref="T:ReptileTracker.Services.ImageRecognitionService"/> class.
        /// </summary>
        /// <param name="apiKey">API key.</param>
        public ImageRecognitionService(string apiKey)
        {
            ApiKey = apiKey;
        }
 
        /// <summary>
        /// Analyzes the image.
        /// </summary>
        /// <returns>The image.</returns>
        /// <param name="imageStream">Image stream.</param>
        public async Task<ImageAnalysis> AnalyzeImage(Stream imageStream)
        {
            const string funcName = nameof(AnalyzeImage);
 
            if (string.IsNullOrWhiteSpace(ApiKey))
            {
                throw new ArgumentException("API Key must be provided.");
            }
 
            var features = new List<VisualFeatureTypes> {
                VisualFeatureTypes.Categories,
                VisualFeatureTypes.Description,
                VisualFeatureTypes.Faces,
                VisualFeatureTypes.ImageType,
                VisualFeatureTypes.Tags
            };
 
            var credentials = new ApiKeyServiceClientCredentials(ApiKey);
            var handler = new System.Net.Http.DelegatingHandler[] { };
            using (var visionClient = new ComputerVisionClient(credentials, handler))
            {
                try
                {
                    imageStream.Position = 0;
                    // The endpoint must match the one shown on your Azure resource's Overview page.
                    visionClient.Endpoint = "https://eastus.api.cognitive.microsoft.com/";
                    var result = await visionClient.AnalyzeImageInStreamAsync(imageStream, features);
                    return result;
                }
                catch (Exception ex)
                {
                    Debug.WriteLine($"{funcName}: {ex.GetBaseException().Message}");
                    return null;
                }
            }
        }
 
    }
}

And here’s how I referenced it from my content page:

pleaseWait.IsVisible = true;
pleaseWait.IsRunning = true;
var imageRecognizer = DependencyService.Get<IImageRecognitionService>();
imageRecognizer.ApiKey = AppSettings.ApiKey_Azure_ImageRecognitionService;
var details = await imageRecognizer.AnalyzeImage(new MemoryStream(ReptilePhotoBytes));
pleaseWait.IsRunning = false;
pleaseWait.IsVisible = false;

var tagsReturned = details?.Tags != null
                   && details?.Description?.Captions != null
                   && details.Tags.Any()
                   && details.Description.Captions.Any();

lblTags.IsVisible = true;
lblDescription.IsVisible = true;

// Determine if reptiles were found. Guard on tagsReturned so a null result
// from the service doesn't throw a NullReferenceException.
var reptilesToDetect = AppResources.DetectionTags.Split(',');
var reptilesFound = tagsReturned
                    && details.Tags.Any(t => reptilesToDetect.Contains(t.Name.ToLower()));

// Show animations and graphics to make things look cool, even though we already have plenty of info. 
await RotateImageAndShowSuccess(reptilesFound, "lizard", details, imgLizard);
await RotateImageAndShowSuccess(reptilesFound, "turtle", details, imgTurtle);
await RotateImageAndShowSuccess(reptilesFound, "snake", details, imgSnake);
await RotateImageAndShowSuccess(reptilesFound, "question", details, imgQuestion);

That worked like a champ, with a few gotchas:

  • I would receive a 400 Bad Request if I sent an image that was too large. 1024 x 768 worked, but 2000 x 2000 didn’t. The documentation says the image must be less than 4MB and at least 50×50. (See the capture-time resize sketch after this list.)
  • That API endpoint must be initialized, and the examples don’t always make this clear. There’s no constructor that takes an endpoint address, so it’s easy to miss.
  • It can take a moment for recognition to occur. Make sure you’re using async/await so you don’t block the UI Thread!
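One way to stay under the size limit is to downscale photos at capture time. Here’s a sketch using the Xam.Plugin.Media plugin mentioned earlier (PhotoSize.Medium is just one reasonable choice):

using Plugin.Media;
using Plugin.Media.Abstractions;

// Ask the media plugin to downscale the photo as it's captured, keeping
// the upload comfortably under the Computer Vision 4 MB limit.
var photo = await CrossMedia.Current.TakePhotoAsync(new StoreCameraMediaOptions
{
    PhotoSize = PhotoSize.Medium
});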

Prettying It Up

Before I get into the results, I wanted to point out I spent significant time prettying things up. I added animations, different font sizes, better icons from The Noun Project, and more. While the image recognizer only took about an hour, the UX took a lot more. Funny how that works.

Mixed Results

So I was getting results. I added a few labels to my view to see what was coming back. Some of them were funny, others were accurate. The tags were expected, but the captions were fascinating. The captions describe the scene as the Computer Vision API sees it. I spent most of the day taking photos and seeing what was returned. Some examples:

  • My barista, Matt, was “a smiling woman working in a store”
  • My mom was “a smiling man” – she was not amused

Most of the time, as long as the subjects were clear, the scene recognition was correct:

[Screenshot: a correctly recognized scene]

Or close to correct, as in this shot with a turtle at PetSmart:

[Screenshot: the turtle at PetSmart, nearly correct]

Sometimes, though, nothing useful would be returned:

[Screenshot: a photo that returned nothing useful]

I would have thought it would have found “White Castle”. I wonder if it avoids brand names for some reason. They do have an OCR endpoint, so maybe that would be useful in another use case.

Sometimes, even though I thought an image would “obviously” be recognized, it wasn’t:

[Screenshot: an “obvious” subject that wasn’t recognized]

I’ll need to read more about how to improve accuracy, if that’s even an option.

Good thing I implemented it with an interface! I could try Google’s computer vision services next.

Next Steps

We’re not done with the app yet – this week, we will discuss how to handle the scoring. I’ll post updates as we work on it. Here’s a link to the iOS beta.

Some things I’d like to try:

  • Highlight the tags in the image, by drawing over the image. I’d make this a toggle.
  • Clean up the UI to toggle “developer details”. It’s cool to show those now, but it doesn’t necessarily help the target user. I’ll ask my mentee what he thinks.

Please let me know if you have any questions by leaving a comment!

Want to learn more about Xamarin? I suggest Microsoft’s totally awesome Xamarin University. All the classes you need to get started are free.

Update 2018-11-06:

  • The tags are in two different locations – Tags and Description.Tags. Two different sets of tags live there, so I’m now combining the lists and getting better results (see the sketch after this list).
  • I found I could get color details. I’ve updated the accent color surrounding the photo. Just a nice design touch.
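A minimal sketch of combining the two tag sources (per the ImageAnalysis model, Tags is a list of ImageTag objects while Description.Tags is a list of strings):

using System.Linq;

// Merge the ImageTag names with the description-level tag strings, normalized
// to lowercase, so matching against the detection list sees both sets.
var allTags = details.Tags.Select(t => t.Name)
    .Concat(details.Description.Tags)
    .Select(t => t.ToLower())
    .Distinct()
    .ToList();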