Posts Tagged ‘iOS’

I recently started in the Fishers Youth Mentoring Initiative, and my mentee is a young man in junior high who really likes lizards. He showed me photos of lizards on his iPad, including his pet lizard, and shared many lizard facts. He's also a talented sketch artist – showcasing many drawings of Pokémon, lizards, and more. Oh, yeah, he's also into computers and loves his iPad.

Part of the mentoring program is helping with school, being there as they adjust to growing up, and both respecting and encouraging their interests.

It just so happens that he had a science project coming up, and he wasn't sure what to write about. His pet lizard had recently had an attitude shift, and he figured it was because it wasn't getting as much food week over week. After he changed that, its attitude improved. So, he wanted to cover that somehow.

Seeing his interest in lizards, drawing, and computers I asked if we could combine them. I suggested we build an app, a “Reptile Tracker,” that would help us track reptiles, teach others about them, and show them drawings he did. He loved the idea.

Planning

We only get to meet for 30 minutes each week, so I gave him some homework: "Next time we meet, show me what the app would look like." He gleefully agreed.

One week later, he proudly showed me his vision for the app:

[Drawing: Reptile Tracker – his vision for the app]

I said, "Very cool." I was now convinced he was "in" on the project and taking it seriously.

I was also surprised to learn that my expectation of "show me what it would look like" differed from what I received from someone much younger than I am, with a different world view. To him, software may simply be visualized as an icon. In my world, it's mockups and napkin sketches. It definitely made me think about others' perceptions!

True to software engineer and sort-of project manager form, I explained our next step was to figure out what the app would do. So, here’s our plan:

  1. Identify if there are reptiles in the photo.
  2. Tell the user whether it's safe to pick up, whether it's venomous, and so forth.
  3. Get one point for every reptile found. We’ll only support Lizards, Snakes, and Turtles in the first version.

Alright, time for the next assignment. My homework was to figure out how to build it. His homework was to draw the Lizard, Snake, and Turtle that would be shown in the app.

Challenge accepted!

I quickly determined a couple key design and development points:

  • The icon he drew is great, but looks like a drawing on the screen. I'll need to ask him to draw the reptiles on my Surface Book so they have the right look – a good opportunity for him to try Fresh Paint.
  • Azure Cognitive Services, specifically their Computer Vision API, will work for this task. I found a great article on the Xamarin blog by Mike James. I had to update it for this article, since the calls and packages are a bit different two years later, but it definitely pointed me in the right direction.

Writing the Code

The weekend came, and I finally had time. I had been thinking about the app for the remainder of the week. I woke up early Saturday, drew up a sketch of the tracking page, then went back to sleep. Later, when it was time to start the day, I headed over to Starbucks…

[Photo]

I broke out my shiny new MacBook Pro and spun up Visual Studio for Mac. Xamarin.Forms was the perfect candidate for this project – cross-platform, baby! I started a new Tabbed Page project, brought over some code for taking photos with the Xam.Plugin.Media plugin and resizing them, and added the beta Xamarin.Essentials plugin for eventual geolocation and settings support. Hey, it's only the first week 🙂

Side Note: Normally I would use my Surface Book. This was a chance for me to seriously play with MFractor for the first time. Yay, even more learning this weekend!

Now that I had the basics in there, I created the interface for the Image Recognition Service. I wanted to be able to swap it out later if Azure didn’t cut it, so Dependency Service to the rescue! Here’s the interface:

using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;
 
namespace ReptileTracker.Services
{
     public interface IImageRecognitionService
     {
         string ApiKey { get; set; }
         Task<ImageAnalysis> AnalyzeImage(Stream imageStream);
     }
}

Now it was time to check out Mike's article. It made sense, and was close to what I wanted. However, the packages he referenced were for Microsoft's Project Oxford. By 2018, those capabilities had been rolled into Azure as Azure Cognitive Services. Once I found the updated NuGet package – Microsoft.Azure.CognitiveServices.Vision.ComputerVision – and made some code tweaks, I ended up with working code.

A few developer notes for those playing with Azure Cognitive Services:

  • Hold on to that API key – you'll need it.
  • Pay close attention to the Endpoint on the Overview page – you must provide it to the client, otherwise you'll get a 403 Forbidden.

[Screenshot: the API key and Endpoint on the Azure portal Overview page]

And here's the implementation. Note the implementation must have a parameterless constructor, otherwise DependencyService won't resolve it.

using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Threading.Tasks;
using ReptileTracker.Services;
using Xamarin.Forms;
 
[assembly: Dependency(typeof(ImageRecognitionService))]
namespace ReptileTracker.Services
{
    public class ImageRecognitionService : IImageRecognitionService
    {
        /// <summary>
        /// The Azure Cognitive Services Computer Vision API key.
        /// </summary>
        public string ApiKey { get; set; }
 
        /// <summary>
        /// Parameterless constructor so Dependency Service can create an instance.
        /// </summary>
        public ImageRecognitionService()
        {
        }
 
        /// <summary>
        /// Initializes a new instance of the <see cref="T:ReptileTracker.Services.ImageRecognitionService"/> class.
        /// </summary>
        /// <param name="apiKey">API key.</param>
        public ImageRecognitionService(string apiKey)
        {
            ApiKey = apiKey;
        }
 
        /// <summary>
        /// Analyzes the image.
        /// </summary>
        /// <returns>The image.</returns>
        /// <param name="imageStream">Image stream.</param>
        public async Task<ImageAnalysis> AnalyzeImage(Stream imageStream)
        {
            const string funcName = nameof(AnalyzeImage);
 
            if (string.IsNullOrWhiteSpace(ApiKey))
            {
                throw new ArgumentException("API Key must be provided.");
            }
 
            var features = new List<VisualFeatureTypes> {
                VisualFeatureTypes.Categories,
                VisualFeatureTypes.Description,
                VisualFeatureTypes.Faces,
                VisualFeatureTypes.ImageType,
                VisualFeatureTypes.Tags
            };
 
            var credentials = new ApiKeyServiceClientCredentials(ApiKey);
            var handler = new System.Net.Http.DelegatingHandler[] { };
            using (var visionClient = new ComputerVisionClient(credentials, handler))
            {
                try
                {
                    imageStream.Position = 0;
                    visionClient.Endpoint = "https://eastus.api.cognitive.microsoft.com/";
                    var result = await visionClient.AnalyzeImageInStreamAsync(imageStream, features);
                    return result;
                }
                catch (Exception ex)
                {
                    Debug.WriteLine($"{funcName}: {ex.GetBaseException().Message}");
                    return null;
                }
            }
        }
 
    }
}

And here’s how I referenced it from my content page:

pleaseWait.IsVisible = true;
pleaseWait.IsRunning = true;
var imageRecognizer = DependencyService.Get<IImageRecognitionService>();
imageRecognizer.ApiKey = AppSettings.ApiKey_Azure_ImageRecognitionService;
var details = await imageRecognizer.AnalyzeImage(new MemoryStream(ReptilePhotoBytes));
pleaseWait.IsRunning = false;
pleaseWait.IsVisible = false;

var tagsReturned = details?.Tags != null 
                   && details?.Description?.Captions != null 
                   && details.Tags.Any() 
                   && details.Description.Captions.Any();

lblTags.IsVisible = true; 
lblDescription.IsVisible = true; 

// Determine if reptiles were found. Guard with tagsReturned, since
// AnalyzeImage returns null if the call failed.
var reptilesToDetect = AppResources.DetectionTags.Split(','); 
var reptilesFound = tagsReturned && details.Tags.Any(t => reptilesToDetect.Contains(t.Name.ToLower()));

// Show animations and graphics to make things look cool, even though we already have plenty of info. 
await RotateImageAndShowSuccess(reptilesFound, "lizard", details, imgLizard);
await RotateImageAndShowSuccess(reptilesFound, "turtle", details, imgTurtle);
await RotateImageAndShowSuccess(reptilesFound, "snake", details, imgSnake);
await RotateImageAndShowSuccess(reptilesFound, "question", details, imgQuestion);

That worked like a champ, with a few gotchas:

  • I would receive a 400 Bad Request if I sent an image that was too large. 1024×768 worked, but 2000×2000 didn't. The documentation says the image must be less than 4MB and at least 50×50 – see the photo-capture sketch after this list.
  • The API endpoint must be initialized, and the examples don't always make this clear. There's no constructor that takes an endpoint address, so it's easy to miss.
  • It can take a moment for recognition to occur. Make sure you're using async/await so you don't block the UI thread!
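
For reference, here's roughly how I'm capturing a reasonably sized photo with the Xam.Plugin.Media plugin before handing it to the recognizer. This is a sketch, not the exact code from the app, and the PhotoSize and CompressionQuality values are simply what worked for me:

using Plugin.Media;
using Plugin.Media.Abstractions;
 
// Ask the plugin to scale the photo down so we stay under the 4MB API limit.
var photo = await CrossMedia.Current.TakePhotoAsync(new StoreCameraMediaOptions
{
    PhotoSize = PhotoSize.Medium,
    CompressionQuality = 80
});
 
if (photo != null)
{
    using (var stream = photo.GetStream())
    {
        var details = await imageRecognizer.AnalyzeImage(stream);
    }
}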

Prettying It Up

Before I get into the results, I wanted to point out I spent significant time prettying things up. I added animations, different font sizes, better icons from The Noun Project, and more. While the image recognizer only took about an hour, the UX took a lot more. Funny how that works.

Mixed Results

So I was getting results. I added a few labels to my view to see what was coming back. Some of them were funny, others were accurate. The tags were expected, but the captions were fascinating. The captions describe the scene as the Computer Vision API sees it. I spent most of the day taking photos and seeing what was returned. Some examples:

  • My barista, Matt, was “a smiling woman working in a store”
  • My mom was “a smiling man” – she was not amused

Most of the time, as long as the subjects were clear, the scene recognition was correct:

[Screenshot: a correctly recognized scene]

Or close to correct, in this shot of a turtle at PetSmart:

[Photo: a turtle at PetSmart]

Sometimes, though, nothing useful would be returned:

[Screenshot: a photo where nothing useful was returned]

I would have thought it would find "White Castle". I wonder if it avoids brand names for some reason? They do have an OCR endpoint, so maybe that would be useful in another use case – there's a sketch of it below.
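
I haven't tried it yet, but a minimal sketch of calling the OCR endpoint – reusing the visionClient and imageStream from the implementation above – might look like this. Consider it untested:

// Untested sketch: call the OCR endpoint, then flatten the
// regions/lines/words into text. Requires using System.Linq.
imageStream.Position = 0;
var ocrResult = await visionClient.RecognizePrintedTextInStreamAsync(true, imageStream);
var words = ocrResult.Regions
    .SelectMany(r => r.Lines)
    .SelectMany(l => l.Words)
    .Select(w => w.Text);
Debug.WriteLine($"OCR text: {string.Join(" ", words)}");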

Sometimes, even though I thought an image would “obviously” be recognized, it wasn’t:

[Screenshot: an "obvious" image that wasn't recognized]

I'll need to read more about how to improve accuracy – if that's even an option.

Good thing I implemented it behind an interface! I could try Google's computer vision services next – a rough sketch of what that swap would look like follows.
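
This is purely hypothetical, since I haven't written it, but a Google-backed implementation would just be another class behind the same interface. Note AnalyzeImage still returns Azure's ImageAnalysis type, so I'd either map Google's response into it or, more likely, introduce my own result model:

public class GoogleImageRecognitionService : IImageRecognitionService
{
    public string ApiKey { get; set; }
 
    public Task<ImageAnalysis> AnalyzeImage(Stream imageStream)
    {
        // Call Google's vision service here and map the result.
        throw new NotImplementedException();
    }
}
 
// Swap the registration, e.g. in App startup code:
// DependencyService.Register<IImageRecognitionService, GoogleImageRecognitionService>();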

Next Steps

We’re not done with the app yet – this week, we will discuss how to handle the scoring. I’ll post updates as we work on it. Here’s a link to the iOS beta.

Some things I’d like to try:

  • Highlight the tagged objects by drawing over the image. I'd make this a toggle.
  • Clean up the UI to toggle “developer details”. It’s cool to show those now, but it doesn’t necessarily help the target user. I’ll ask my mentee what he thinks.

Please let me know if you have any questions by leaving a comment!

Want to learn more about Xamarin? I suggest Microsoft’s totally awesome Xamarin University. All the classes you need to get started are free.

Update 2018-11-06:

  • The tags live in two different locations – Tags and Description.Tags – with two different sets of tags in each, so I'm now combining the lists and getting better results. A sketch of the merge follows this list.
  • I found I could also get color details, so I've updated the accent color surrounding the photo to match. Just a nice design touch.
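
Here's roughly how that merge looks – details.Tags is a list of ImageTag objects while Description.Tags is a plain list of strings, so it's a quick LINQ union. A trimmed sketch of the page code (requires using System.Linq):

// Merge both tag collections into one case-normalized list.
var allTags = (details.Tags ?? Enumerable.Empty<ImageTag>())
    .Select(t => t.Name)
    .Union(details.Description?.Tags ?? Enumerable.Empty<string>())
    .Select(t => t.ToLower())
    .ToList();
 
var reptilesFound = allTags.Any(t => reptilesToDetect.Contains(t));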

Want to learn all about Xamarin and how you can use it, without spending most of your time watching code scroll by in a video? I figured there was room for an explainer that doesn't play closed-captioner for a code tutorial. Enjoy my latest video!

https://www.youtube.com/watch?v=AhvofyQCrhw

From the description, along with links:

Have you been considering Xamarin for your cross-platform mobile app? This presentation will help.

In this non-code-heavy presentation, we’ll discuss:

* What is Xamarin
* Development Environment Gotchas
* Creating a Sample To Do List App without writing any code
* Reviewing a real Xamarin app that’s “in the wild”
* Review native, platform-specific integrations
* Discuss gotchas when using Xamarin, and mobile apps in general
* Answer audience questions

Why not code-heavy? Because there are many examples you can follow online. This presentation will provide valuable information you can consider while reviewing the myriad of tutorials available to you with a simple Bing or Google search, or visiting Pluralsight, Microsoft Virtual Academy, or Xamarin University.

If you have any feedback, please leave in the comments, or ask me on Twitter: @Auri

Here are the links relevant for this presentation:

Slides: https://1drv.ms/p/s!AmKBMqPeeM_1-Zd7Y…

Indy.Code Slides with Cost and Performance Figures: https://1drv.ms/p/s!AmKBMqPeeM_1-JZR4…
(you can find the Indy.Code() presentation on my YouTube channel)

Google Xamarin vs. Native iOS with Swift/Objective C vs. Android with Java Performance Article: https://medium.com/@harrycheung/mobil…

Example code for push notifications, OAuth Twitter/Facebook/Google authentication, and more: https://github.com/codemillmatt/confe…

Link to Microsoft Dev Essentials for $30/month free Azure credit and free Xamarin training: https://aka.ms/devessentials

Microsoft Virtual Academy Multi-Threading Series: https://mva.microsoft.com/en-us/train…


I’m continuing my resolution to record as many of my programming and technical presentations as possible. I recently spoke at the inaugural Indy.Code() conference. It was excellent, with an incredible speaker line-up. I hope they, too, post some of their presentations online!

Watch the Video on YouTube

From the synopsis:

Should you write your app "native" or use a "cross-platform" solution like React Native, Xamarin, or NativeScript? The new wave of native-cross-compiling solutions provides significant cost savings, code reuse opportunities, and lower technical debt. Does wholly native, per-platform development still play a role in future mobile development? Let's discuss together.

In this presentation, we’ll discuss:

  • The growth of native, hybrid, and cross-platform mobile development solutions
  • Cost analysis of multiple native and cross-platform apps
  • Considerations for each native and cross-platform solution
  • Lessons learned

Slides are available here: https://t.co/5iLhEoEfen

If you have any questions, I’m happy to answer them! Please email me or ask on Twitter.


I've been struggling with carrying both a Mac and a PC for Xamarin development for a couple of years now. Wouldn't it be nice to just run OS X in a VM, so I could use my Surface Book and not mess with the Apple ecosystem more than necessary? Well, I finally got it working, thanks in large part to the work done by many people, whom I'll credit with links throughout this article. Thanks, all!

[Screenshot: macOS Sierra running in a VirtualBox VM]

Requirements

  • macOS Sierra installer, or a machine with Sierra installed – extraction details below
  • Intel-powered machine, preferably i5 or higher
  • 16GB or larger thumb drive, preferably USB 3
  • VirtualBox 5.x
  • Visual Studio 2015 or higher
  • If on a laptop, make sure you're plugged in

Create the OS X Installer USB Drive

First things first – you'll need a Mac running Sierra and at least a 16 GB USB thumb drive. We'll be using Unibeast, Multibeast, and the Clover bootloader. I assume you already own both macOS and the drive. I'd go the USB 3.0 or higher route so things run a bit faster. We'll be extracting a Sierra installer in a moment. If you're all set with the above, follow the instructions at the awesome tonymacx86 website. Special thanks to the Hackintosh website as well. When asked whether to use UEFI or Legacy boot mode, choose Legacy. Here is a marked-up PDF of the article in case the link doesn't work.

In a nutshell, here’s what you’ll be doing in this step:

  1. Insert the thumb drive into the Mac. Launch Disk Utility and format the USB drive with the name USB, using the GUID Partition Map scheme.
  2. Download the macOS Sierra installer via the App Store. The installer will be saved in your Applications folder. Make a copy of it somewhere, just in case it gets deleted and you need it again. Don't move the installer itself.
  3. Download and run Unibeast, following the prompts. Use Legacy boot mode. FYI, you'll need a (free) account on the tonymacx86 site to download.
  4. Let Unibeast create the thumb drive. This will take about 10 minutes on a USB 3 drive.
  5. Download and copy Multibeast to the newly created Unibeast drive.
  6. You’re now ready to start configuring VirtualBox.

Create the VirtualBox USB Drive Shim

You have a USB thumb drive, but VirtualBox doesn’t make it easy to boot from such a device. You’ll need to create a fake virtual disk that points to the USB drive. This tutorial walks you through it. Here’s a PDF if that link doesn’t work.

In a nutshell, here’s what you’ll be doing in this step:

  1. Open Disk Management and get the Disk Number of the thumb drive, as shown below
  2. Open command prompt as an administrator
  3. Navigate to %programfiles%\oracle\virtualbox
  4. Run the command
    VBoxManage internalcommands createrawvmdk -filename C:\usb.vmdk -rawdisk \\.\PhysicalDrive#

    to create the virtual drive pointer (a concrete example follows this list)

  5. You’re now ready to create the VirtualBox virtual machine.
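
For example, if Disk Management showed the thumb drive as Disk 2, the command would be:

VBoxManage internalcommands createrawvmdk -filename C:\usb.vmdk -rawdisk \\.\PhysicalDrive2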

[Screenshot: Disk Management showing the thumb drive's disk number]

Create the VM

Windows won’t allow VirtualBox to use the USB shim you just created unless you launch with administrator privileges. Right-click VirtualBox and select Run as Administrator. VirtualBox should open. Then, follow the instructions on this page. Ignore the download portion – you already have an install thumb drive, and you just want the VM configuration steps. If that link doesn’t work, here’s a PDF.

In a nutshell, here’s what you’ll be doing in this step:

  1. Create a new Virtual Machine, name it Sierra – although that’s not a requirement – and choose OS X 64-bit as the guest OS. VirtualBox’s settings aren’t fully correct, but we’ll get there.
  2. Choose 4 GB of RAM. I didn’t test with any other memory configs. So, YMMV if you go another route.
  3. When asked which drive to use, choose an existing drive, and select the USB shim you created in the previous section. The example above saved the file as usb.vmdk at the root of C:
  4. You should now have a VM, like every other time you’ve used VirtualBox 🙂
  5. Add another Virtual Disk to stand in as your Mac's hard drive. I suggest VDI format, dynamically sized, and 60 GB in size. Ignore that my screen shot shows 40 GB <grin> In future steps you'll need to install Xcode and Xamarin Studio. Don't skimp on size here or you'll be reinstalling later. Much sad.

Note: Xcode uses a lot of space when it updates. Don't skimp on virtual disk size. If that's a big deal, save the VM's drive to a location that will have enough space.

Once you’ve added the hard drive, you’ll need to finish configuring the VM. You already have an installer on the thumb drive.

Aside from the defaults, confirm your settings match those below. I've also included some screen shots a little further down.

  1. After performing the steps above, you’ll be using the following settings in your VM:
    • System, Motherboard, Base Memory: 4096 MB
    • System, Motherboard, Boot Order: Only Optical and Hard Disk checked
    • System, Motherboard, Pointing Device: USB Tablet
    • System, Motherboard, Chipset: ICH9
    • System, Motherboard, Extended Features: Enable I/O APIC, Enable EFI, Hardware Clock in UTC Time, all checked
    • System, Processor, Processors: 2 CPUs
    • System, Processor, Execution Cap: 100%
    • System, Processor, Enable PAE/NX: Checked
    • Display, Screen, Video Memory: 128 MB
    • Display, Screen, Monitor Count: 1
    • Display, Screen, Scale Factor: 100% (you can change this later if you’re on a high-res display)
    • Display, Screen, Accelerator: 3D and 2D both unchecked
    • Storage: One controller; the first item is the USB shim, then the hard drive and "Empty" optical drive. The order of those last two doesn't matter.

After configuring the VM in the UI, close VirtualBox and run the following commands; I've created a convenient all-in-one script here. You may need to edit it depending on what you named your VM.

These settings fool OS X into thinking you're running on a real Mac.

cd "C:\Program Files\Oracle\VirtualBox\"

VBoxManage modifyvm "Your VM Name" --cpuidset 00000001 000106e5 00100800 0098e3fd bfebfbff

VBoxManage setextradata "Your VM Name" "VBoxInternal/Devices/efi/0/Config/DmiSystemProduct" "iMac11,3"

VBoxManage setextradata "Your VM Name" "VBoxInternal/Devices/efi/0/Config/DmiSystemVersion" "1.0"

VBoxManage setextradata "Your VM Name" "VBoxInternal/Devices/efi/0/Config/DmiBoardProduct" "Iloveapple"

VBoxManage setextradata "Your VM Name" "VBoxInternal/Devices/smc/0/Config/DeviceKey" "ourhardworkbythesewordsguardedpleasedontsteal(c)AppleComputerInc"

VBoxManage setextradata "Your VM Name" "VBoxInternal/Devices/smc/0/Config/GetKeyFromRealSMC" 1

Boot the VM and Install Sierra

Alright, we're ready to boot! Re-launch VirtualBox as an administrator and start the VM. After a bunch of Unix-style text scrolling, you should see the Apple logo appear and begin to load macOS. If not, something's configured wrong. Read through the above steps and see what you missed. Of course, things may have changed over time, and this tutorial may no longer be valid. Bummer if that's the case! Much sad. I want you to be much happy.

NOTE: If things appear frozen during boot, wait a minute. In sanity checking this on another machine with a friend, his seemed to be frozen, and then resumed. A watched installer never boils… [terrible joke]

The instructions for installing Sierra are pretty straightforward:

  1. When the installer appears, select the Utilities menu on top, then Disk Utility.
  2. Format the virtual hard disk. I named mine VBox, but that doesn't matter. Make sure the scheme is GUID Partition Map and the format is Mac OS Extended (Journaled). Do not select the case-sensitive option.
  3. When formatting is complete, quit Disk Utility and you’ll be back at the installer.
  4. Select the freshly formatted hard drive and start the install process.
  5. Wait. It took about 20 minutes to install on my 6th Gen Core i7 SSD Surface Book. YMMV.
  6. Keep an eye on the installer. When it’s done, remove the thumb drive. Otherwise, it’ll boot back into the installer. If that happens, wait for the installer to boot so you don’t corrupt anything, then remove the thumb drive, and restart the VM.
  7. When the Mac boots back up, follow the prompts. Do *not* use an Apple account – it won't let you. Make note of the username – it will be in lowercase – you'll need it when you enter a username and password for Visual Studio later. Don't worry about the Apple account issue, though – this won't affect your ability to install Xcode or use the App Store.
  8. Once setup is complete, shut down the Mac.
  9. In the VM’s settings, remove the USB shim.
  10. OS X is now installed.

From this point forward, you no longer need to run VirtualBox as an administrator. Yay!

NOTE: After configuring OS X, you may be presented with a dialog stating the keyboard cannot be identified. Don’t worry – just follow the prompts and you’ll be all set.

Install Xcode

For OS X to act as a build server, you must have Xcode and Xamarin Studio installed. Let’s install and configure Xcode first.

To complete this step, do the following:

  1. Open the App Store on the Mac
  2. Search for Xcode
  3. Click Get to install it. You’ll need to enter your Apple account credentials.
  4. Wait a while – it's a big download, and took about 30 minutes to install on my machine.
  5. Once installed, Launch Xcode, agree to any terms, and let it finish installing components.
  6. When Xcode is finished configuring, open the Xcode menu, select Preferences, then Accounts, and click the + symbol. Enter your Apple Developer Account details.
  7. Great! Xcode is configured! Time to get Xamarin set up.

[Screenshot: Xcode installed in the Sierra VM]

Install Xamarin Studio

Xamarin Studio handles installing the OS X build agent so you can debug apps in Visual Studio while the Mac performs the necessary build and simulator tasks. This is required for licensing reasons, and because Apple is a closed system for iOS developers. Boo.

Note: Xamarin Studio may be called Visual Studio for Mac by the time you read this.

  1. First, open Safari – unless you installed something else on the Mac already – and download Xamarin Studio for Mac. This is simple – go to Xamarin.com, and download the installer.
  2. Open the installer on your Mac from the Downloads folder, and click Open when it warns you that it’s an application downloaded from the Internet.
  3. Install everything except Android. You can do Android dev on your PC, so I feel there’s no reason to install it again here. Again, YMMV – do as you wish 🙂 This process can take a while due to downloading and installing many items.
    • Note: I’m not sure if you need Profiler or Workbooks, so I kept them in there. I’m thinking it’s an insignificant difference.
    • Another Note: The installer will say it’s installing Android SDK anyway, not sure why! 🙂 I complained to Microsoft about this – it didn’t make sense to me.
  4. Once Xamarin Studio is installed, start it and make sure it comes up.
  5. Under the Xamarin Studio menu item – which may be Visual Studio by the time you read this – select Check for Updates and make sure everything’s up to date.
  6. Note to Visual Studio 2017 Release Candidate Users: If you’re running Visual Studio 2017 Release Candidate, it [annoyingly] installs and targets the alpha channel of Xamarin. You’ll need to switch to the Alpha channel in Xamarin Studio to match this, otherwise Visual Studio will refuse to compile/build/debug through the Mac instance. A channel switching option is available in the Check for Updates menu to address this issue.

[Screenshot: Xamarin Studio installed in the Sierra VM]

Configure the Mac for Remote Login

In order to connect to the Mac from Visual Studio, we’ll need to open a port on the Mac side. The process is described in this article.

In a nutshell, here’s what you’ll be doing in this step:

  1. Press Windows-Space – which maps to the Mac's Command-Space – and type remote login to open the Sharing control panel.
  2. Check the box for Remote Login, and select All Users, or at least ensure your user account is in there. You’re on a private network only accessible by your machine, so I see few security issues here. Behind the scenes, this is opening Port 22 for SSH access to your Mac.

[Screenshot: the Sharing panel with Remote Login enabled]

Alright, we should be all configured! Let’s switch back to Windows!

Configure VirtualBox Networking and Port Forwarding for Remote Debugging

Now that the Mac is configured, we have to tell VirtualBox how to allow your computer to talk with it. We do this by configuring Port Forwarding in VirtualBox.

  1. Open command prompt and type ipconfig.
  2. Take note of the Ethernet adapter VirtualBox Host-Only Network, which may be #2, #3 etc. You want the one with an IP address. Copy that IP address to the clipboard.
  3. In VirtualBox, open the Settings of your VM, and select Network, then Advanced, then Port Forwarding.
  4. Add a new rule. I named mine Remote Access.
  5. For Protocol, choose TCP.
  6. For Host IP, paste in your host adapter IP.
  7. For Host Port, enter 22.
  8. For Guest Port, enter 22.
  9. For Guest IP, go to your Mac, use Windows-Space to search for Network Utility, and enter the IP address shown there.

[Screenshot: finding the host-only adapter IP with ipconfig]

[Screenshot: the Port Forwarding rule in VirtualBox]

Note: It’s possible the Guest IP will change from time to time. This is especially true if the Mac isn’t the only virtual machine you run. If you can no longer connect, check whether you need to update the Guest IP.

VirtualBox is now configured! If you want to verify this, you can launch a Bash shell on your Windows 10 machine and SSH to the VM – an example follows below – accepting the certificate and entering your password when prompted. If you can type ls and see your Mac's files, all is good in the world.
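
Here's a hypothetical example – substitute your Mac username and the host-only adapter IP from ipconfig (192.168.56.1 is a common VirtualBox default):

ssh mymacuser@192.168.56.1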

Link Visual Studio to macOS

Alright, hard part’s over. Now we need to configure Visual Studio. The steps for accomplishing this can be found at the same link above, or you can just click here if you don’t want to scroll. There’s also a tutorial in Visual Studio.

In a nutshell, here’s what you’ll be doing in this step:

  1. Launch Visual Studio.
  2. Press Ctrl+Q to open the Quick Launch box and type iOS Settings. This will take you to the Xamarin iOS settings pane.
  3. Choose Find Xamarin Mac Agent and follow the prompts to ensure you've configured everything properly.
  4. In Enter Mac name or IP address enter your VirtualBox host adapter’s IP. If everything’s configured properly, you should be prompted to enter your Mac’s username and password.
  5. If all went well, a lock-like icon should appear next to the IP address, as shown below. If not, make sure the version of Xamarin installed in Visual Studio is the same as that on the Mac. See my note above about Visual Studio 2017 and its Xamarin Alpha Channel issue.

And, Go!

If everything went well, you should now be able to do all your Windows and Xamarin / Mac development on one machine!

Please provide feedback in the comments. Enjoy!

Tips / Updates

  • If you'd like to tweak the video resolution, you can follow this article. The command is:
VBoxManage setextradata "VM name" VBoxInternal2/EfiGopMode N

    where N can be 0, 1, 2, 3, 4, or 5, corresponding to screen resolutions of 640x480, 800x600, 1024x768, 1280x1024, 1440x900, and 1920x1200, respectively.
  • My goal with this article was to build a build/test VM. I am not trying to replace a Mac environment for doing lots of work on the Mac side of things.


I struggled with this for a few days while trying to convert a Silverlight video player to HTML5, and finally found an answer. I'm posting it here in case anyone else is having trouble!

You need to specify the format as MPEG-DASH to get Azure Media Services to smooth-stream the MP4 file to the HTML5 video player. This is done by adding a format parameter to the manifest URL, as follows:
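
The URL ends up looking something like this – the account name, locator GUID, and file name here are hypothetical, but the pattern is what matters:

http://myaccount.streaming.mediaservices.windows.net/1234abcd-5678-90ef-1234-567890abcdef/MyVideo.ism/manifest(format=mpd-time-csf)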
Note the (format=mpd-time-csf) at the end of the URL. There are a number of other formats you can stream, including Silverlight Smooth Streaming, Adobe's streaming format, Apple's HTTP Live Streaming for iOS devices, and more. This is all handled for you automatically by Azure Media Services. Pretty darn cool.

I struggled to find this, too, so I'm quite happy I finally got things working. Here's the source URL from Microsoft for more details: