Archive for the ‘Computers and Internet’ Category

I recently started in the Fishers Youth Mentoring Initiative, and my mentee is a young man in junior high who really likes lizards. He showed me photos of lizards on his iPad – including his pet lizard – and informed me of many lizard facts. He’s also a talented sketch artist, showcasing many drawings of Pokémon, lizards, and more. Oh yeah, he’s also into computers and loves that iPad.

Part of the mentoring program is to help with school, to be there as they adjust to growing up, and to both respect and encourage their interests.

It just so happens that he had a science project coming up, and he wasn’t sure what to write about. His pet lizard had recently had an attitude shift, and he figured it was because it wasn’t getting as much food week over week. Once he changed that, the lizard’s attitude improved. So, he wanted to cover that somehow.

Seeing his interest in lizards, drawing, and computers I asked if we could combine them. I suggested we build an app, a “Reptile Tracker,” that would help us track reptiles, teach others about them, and show them drawings he did. He loved the idea.

Planning

We only get to meet for 30 minutes each week. So, I gave him some homework. Next time we meet, “show me what the app would look like.” He gleefully agreed.

One week later, he proudly showed me his vision for the app:

[Drawing: Reptile Tracker]

I said “Very cool.” I’m now convinced “he’s in” on the project, and taking it seriously.

I was also surprised to learn that my expectations of “show me what it would look like” were different from what I received from someone much younger than me, with a different world view. To him, software may simply be visualized as an icon. In my world, it’s mockups and napkin sketches. It definitely made me think about others’ perceptions!

True to software engineer and sort-of project manager form, I explained our next step was to figure out what the app would do. So, here’s our plan:

  1. Identify whether there are reptiles in the photo.
  2. Tell the user whether it’s safe to pick up, whether it’s venomous, and so forth.
  3. Get one point for every reptile found. We’ll only support Lizards, Snakes, and Turtles in the first version.

Alright, time for the next assignment. My homework was to figure out how to do it. His homework was to draw up the Lizard, Snake, and Turtle that will be shown in the app.

Challenge accepted!

I quickly determined a couple of key design and development points:

  • The icon he drew is great, but it looks like a hand drawing on the screen. I’ll ask him to draw the images digitally so they have the right look – a perfect opportunity for him to try Fresh Paint on my Surface Book.
  • Azure Cognitive Services, specifically their Computer Vision solution (API), will work for this task. I found a great article on the Xamarin blog by Mike James. I had to update it a bit for this article, as the calls and packages are a bit different two years later, but it definitely pointed me in the right direction.

Writing the Code

The weekend came, and I finally had time. I had been thinking about the app the remainder of the week. I woke up early Saturday and drew up a sketch of the tracking page, then went back to sleep. Later, when it was time to start the day, I headed over to Starbucks…

[Photo]

I broke out my shiny new MacBook Pro and spun up Visual Studio for Mac. Xamarin.Forms was the perfect candidate for this project – cross-platform, baby! I started a new Tabbed Page project, brought over some code for taking photos with the Xam.Plugin.Media plugin and resizing them, and added the beta Xamarin.Essentials plugin for eventual geolocation and settings support. Hey, it’s only the first week. :)

Side Note: Normally I would use my Surface Book. This was a chance for me to seriously play with MFractor for the first time. Yay, even more learning this weekend!

Now that I had the basics in there, I created the interface for the Image Recognition Service. I wanted to be able to swap it out later if Azure didn’t cut it, so Dependency Service to the rescue! Here’s the interface:

using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;
 
namespace ReptileTracker.Services
{
    public interface IImageRecognitionService
    {
        // The Azure Cognitive Services Computer Vision API key.
        string ApiKey { get; set; }

        // Analyzes the supplied image stream and returns what the service saw.
        Task<ImageAnalysis> AnalyzeImage(Stream imageStream);
    }
}

Now it was time to check out Mike’s article. It made sense, and was close to what I wanted. However, the packages he referenced were for Microsoft’s Project Oxford. In 2018, those capabilities have been rolled into Azure as Azure Cognitive Services. Once I found the updated NuGet package – Microsoft.Azure.CognitiveServices.Vision.ComputerVision – and made some code tweaks, I ended up with working code.

A few developer notes for those playing with Azure Cognitive Services:

  • Hold on to that API key, you’ll need it
  • Pay close attention to the Endpoint on the Overview page – you must provide it, otherwise you’ll get a 403 Forbidden

[Screenshot: the Endpoint on the Azure portal’s Overview page]

And here’s the implementation. Note the implementation must have a parameter-less constructor, otherwise Dependency Service won’t resolve it.

using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Threading.Tasks;
using ReptileTracker.Services;
using Xamarin.Forms;
 
[assembly: Dependency(typeof(ImageRecognitionService))]
namespace ReptileTracker.Services
{
    public class ImageRecognitionService : IImageRecognitionService
    {
        /// <summary>
        /// The Azure Cognitive Services Computer Vision API key.
        /// </summary>
        public string ApiKey { get; set; }
 
        /// <summary>
        /// Parameterless constructor so Dependency Service can create an instance.
        /// </summary>
        public ImageRecognitionService()
        {
        }

        /// <summary>
        /// Initializes a new instance of the <see cref="T:ReptileTracker.Services.ImageRecognitionService"/> class.
        /// </summary>
        /// <param name="apiKey">API key.</param>
        public ImageRecognitionService(string apiKey)
        {
            ApiKey = apiKey;
        }
 
        /// <summary>
        /// Analyzes the image.
        /// </summary>
        /// <returns>The image.</returns>
        /// <param name="imageStream">Image stream.</param>
        public async Task<ImageAnalysis> AnalyzeImage(Stream imageStream)
        {
            const string funcName = nameof(AnalyzeImage);
 
            if (string.IsNullOrWhiteSpace(ApiKey))
            {
                throw new ArgumentException("API Key must be provided.");
            }
 
            var features = new List<VisualFeatureTypes> {
                VisualFeatureTypes.Categories,
                VisualFeatureTypes.Description,
                VisualFeatureTypes.Faces,
                VisualFeatureTypes.ImageType,
                VisualFeatureTypes.Tags
            };
 
            var credentials = new ApiKeyServiceClientCredentials(ApiKey);
            var handler = new System.Net.Http.DelegatingHandler[] { };
            using (var visionClient = new ComputerVisionClient(credentials, handler))
            {
                try
                {
                    imageStream.Position = 0;
                    // The endpoint must be set explicitly – no constructor takes it,
                    // and omitting it yields a 403 Forbidden.
                    visionClient.Endpoint = "https://eastus.api.cognitive.microsoft.com/";
                    var result = await visionClient.AnalyzeImageInStreamAsync(imageStream, features);
                    return result;
                }
                catch (Exception ex)
                {
                    Debug.WriteLine($"{funcName}: {ex.GetBaseException().Message}");
                    return null;
                }
            }
        }
 
    }
}

And here’s how I referenced it from my content page:

pleaseWait.IsVisible = true;
pleaseWait.IsRunning = true;
var imageRecognizer = DependencyService.Get<IImageRecognitionService>();
imageRecognizer.ApiKey = AppSettings.ApiKey_Azure_ImageRecognitionService;
var details = await imageRecognizer.AnalyzeImage(new MemoryStream(ReptilePhotoBytes));
pleaseWait.IsRunning = false;
pleaseWait.IsVisible = false;

var tagsReturned = details?.Tags != null 
                   && details?.Description?.Captions != null 
                   && details.Tags.Any() 
                   && details.Description.Captions.Any();

lblTags.IsVisible = true; 
lblDescription.IsVisible = true; 

// Determine if reptiles were found. Guard with tagsReturned so a null result can't throw. 
var reptilesToDetect = AppResources.DetectionTags.Split(','); 
var reptilesFound = tagsReturned 
                    && details.Tags.Any(t => reptilesToDetect.Contains(t.Name.ToLower()));

// Show animations and graphics to make things look cool, even though we already have plenty of info. 
await RotateImageAndShowSuccess(reptilesFound, "lizard", details, imgLizard);
await RotateImageAndShowSuccess(reptilesFound, "turtle", details, imgTurtle);
await RotateImageAndShowSuccess(reptilesFound, "snake", details, imgSnake);
await RotateImageAndShowSuccess(reptilesFound, "question", details, imgQuestion);

That worked like a champ, with a few gotchas:

  • I would receive a 400 Bad Request if I sent an image that was too large. 1024×768 worked, but 2000×2000 didn’t. The documentation says the image must be less than 4 MB and at least 50×50 pixels. (See the resize sketch after this list.)
  • That API endpoint must be initialized. Examples don’t always make this clear. There’s no constructor that takes an endpoint address, so it’s easy to miss.
  • It can take a moment for recognition to occur. Make sure you’re using async/await so you don’t block the UI Thread!
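
If you’re bumping into that size limit, the easiest fix I found is to let Xam.Plugin.Media downscale the photo at capture time. Here’s a minimal sketch – PhotoSize is the plugin’s own option; wiring the stream into the recognizer is just for illustration:

using Plugin.Media;
using Plugin.Media.Abstractions;

// Capture at a reduced size so the upload stays well under the 4 MB limit.
var photo = await CrossMedia.Current.TakePhotoAsync(new StoreCameraMediaOptions
{
    PhotoSize = PhotoSize.Medium
});

if (photo != null)
{
    using (var stream = photo.GetStream())
    {
        var details = await imageRecognizer.AnalyzeImage(stream);
    }
}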

Prettying It Up

Before I get into the results, I wanted to point out I spent significant time prettying things up. I added animations, different font sizes, better icons from The Noun Project, and more. While the image recognizer only took about an hour, the UX took a lot more. Funny how that works.

Mixed Results

So I was getting results. I added a few labels to my view to see what was coming back. Some of them were funny, others were accurate. The tags were expected, but the captions were fascinating. The captions describe the scene as the Computer Vision API sees it. I spent most of the day taking photos and seeing what was returned. Some examples:

  • My barista, Matt, was “a smiling woman working in a store”
  • My mom was “a smiling man” – she was not amused

Most of the time, as long as the subjects were clear, the scene recognition was correct:

[Screenshot: a correctly recognized scene]

Or close to correct, in this shot with a turtle at Petsmart:

[Photo: a turtle at Petsmart]

Sometimes, though, nothing useful would be returned:

[Screenshot: a photo with no useful tags]

I would have thought it would find “White Castle”. I wonder if it avoids brand names for some reason? They do have an OCR endpoint, so maybe that would be useful in another use case.

Sometimes, even though I thought an image would “obviously” be recognized, it wasn’t:

[Screenshot]

I’ll need to read more about how to improve accuracy, and whether that’s even an option.

Good thing I implemented it with an interface! I could try Google’s computer vision services next.

Next Steps

We’re not done with the app yet – this week, we will discuss how to handle the scoring. I’ll post updates as we work on it. Here’s a link to the iOS beta.

Some things I’d like to try:

  • Highlight the tags in the image by drawing over it. I’d make this a toggle.
  • Clean up the UI to toggle “developer details”. It’s cool to show those now, but it doesn’t necessarily help the target user. I’ll ask my mentee what he thinks.

Please let me know if you have any questions by leaving a comment!

Want to learn more about Xamarin? I suggest Microsoft’s totally awesome Xamarin University. All the classes you need to get started are free.

Update 2018-11-06:

  • The tags actually live in two different locations – Tags and Description.Tags – and the two sets differ, so I’m now combining the lists and getting better results. (See the sketch below.)
  • I found I could get color details. I’ve updated the accent color surrounding the photo. Just a nice design touch.
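
For the curious, the combining is roughly the following – assuming the usual ImageAnalysis shape, where Tags holds ImageTag objects and Description.Tags is a plain list of strings, with System.Linq imported and reptilesToDetect being the same split list from earlier:

// Merge both tag sources into one case-insensitive, de-duplicated list.
var allTags = details.Tags.Select(t => t.Name)
                     .Concat(details.Description.Tags)
                     .Select(t => t.ToLower())
                     .Distinct()
                     .ToList();

var reptilesFound = allTags.Any(t => reptilesToDetect.Contains(t));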

Micro Adventure was a series of books in the 1980s where you had to write computer programs to get from chapter to chapter. It was a great way to learn coding for a geeky kid looking for a good computer-related story. A few months ago, I was granted rights to use the books on a website, and now it’s in beta! Check out the site and let me know what you think!

https://microadventure.net

 

The built-in Facebook OWIN provider in ASP.NET MVC can open your website to the benefits of logging in via the social networking behemoth. Still, it’s limited when it comes to pulling in profile details such as photo, birthdate, gender, and so forth. I recently implemented retrieval of those profile properties, and will explain how you can do it, too! The obvious benefit: if you have similar fields in your system, your users won’t need to type in their profile details manually.

I’m assuming you’ve created and configured a Facebook app via Facebook’s Dev center, and won’t be going into that process in this article.

Determine Which Profile Fields You Need

Before we write any code, you need to know which profile details you want access to. Facebook used to be relatively open. Not anymore! Now you need to ask permission for a ton of items, and many are no longer available at all. Make sure you check permissions at least every 3 months; otherwise you may find your granted permissions are no longer, well, granted, or even accessible.

Here’s a link to everything you can get: https://developers.facebook.com/docs/facebook-login/permissions/

In my case, to access the Profile photo, name information, and some other basic items, I chose:

  • public_profile
  • email
  • user_photos
  • user_about_me

I probably don’t need all these right now, but I may in the future. I figured I’d ask ahead of time.

Once you have your list, continue to the fun coding part…

Enable the Facebook Provider in Startup.Auth.cs

If you haven’t already, you’ll need to enable the Facebook provider via Startup.Auth.cs. Make sure you do this *after* any cookie authentication, so “normal” username/password logins are serviced before Facebook takes over. This should already be the case, as the default ASP.NET MVC template includes the optional providers afterward by default.

I suggest keeping the App ID and Secret in your config file – or at least out of code – so you can swap for differing environments as necessary. The code snippet below enables Facebook authentication, and specifies the profile fields for which we’ll be asking read permission:

You don’t have to use what I chose – it’s just what I needed for my particular case. Facebook *does* change allowed permissions and profile item visibility somewhat often. Stay on top of their developer changes – otherwise your site login may unexpectedly break.

// Enable Facebook authentication with permission grant request.
// Otherwise, you just get the user's name.
var options = new FacebookAuthenticationOptions();
options.AppId = ConfigurationManager.AppSettings["Facebook.AppId"];
options.AppSecret = ConfigurationManager.AppSettings["Facebook.AppSecret"];
options.Scope.Add("public_profile");
options.Scope.Add("email");
options.Scope.Add("user_photos");
options.Scope.Add("user_about_me");
app.UseFacebookAuthentication(options);

Install the Facebook NuGet Package

In order to easily get access to the Facebook data, I used the Facebook.NET library. It’s easy enough to install:

Install-Package Facebook

Note: I used version 7.0.6 in this example. You should be able to find the latest version and changelog at https://www.nuget.org/packages/Facebook/7.0.10-beta

Handle the Facebook External Login Callback in AccountController.cs

Once Facebook has been configured, login requests from your website will redirect to Facebook, which will ask the user for permission and, if granted, redirect back to the ExternalLoginCallback action in the Account controller. It is here that I suggest you retrieve the data you’ve requested from Facebook. You’ll then modify the associated ExternalLoginConfirmation view with fields to correct or remove any information from Facebook, then continue with the account creation process on your website. That’s the part where you’ll populate the ApplicationUser entity, or whatever you’ve decided to call it.

It’s relatively simple, as shown in the code below. The steps are as follows:

  1. Get the Facebook OAuth token with a simple HttpClient call
  2. Make the request for Profile details using the Facebook.NET library
  3. Optionally, download the Profile photo and save it somewhere

Yes, I could split this out – refactor as you see fit, and feel free to share any optimizations.

Below is the change to ExternalLoginCallback to grab the data from Facebook after the redirect:

ExternalLoginCallback Code
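
In sketch form, the callback change looks roughly like the following. Treat it as illustrative rather than drop-in code: the token endpoint’s response format differs across Graph API versions, and your view model fields will vary. It assumes using Facebook;, using System.Net.Http;, and using System.Configuration; at the top of AccountController.cs.

var loginInfo = await AuthenticationManager.GetExternalLoginInfoAsync();
if (loginInfo == null)
{
    return RedirectToAction("Login");
}

// 1. Get a Facebook OAuth app token with a simple HttpClient call.
//    (Older Graph API versions return "access_token=XXX"; newer ones return JSON.)
string accessToken;
using (var http = new HttpClient())
{
    var raw = await http.GetStringAsync(
        "https://graph.facebook.com/oauth/access_token" +
        "?client_id=" + ConfigurationManager.AppSettings["Facebook.AppId"] +
        "&client_secret=" + ConfigurationManager.AppSettings["Facebook.AppSecret"] +
        "&grant_type=client_credentials");
    accessToken = raw.Replace("access_token=", string.Empty);
}

// 2. Request the profile details via the Facebook.NET library. For the
//    Facebook provider, ProviderKey is the user's Facebook ID.
var fb = new FacebookClient(accessToken);
dynamic profile = fb.Get(loginInfo.Login.ProviderKey,
    new { fields = "id,first_name,last_name,email,gender,birthday" });

// 3. Pre-populate the confirmation view so the user can correct or remove
//    anything before the account is actually created.
return View("ExternalLoginConfirmation",
    new ExternalLoginConfirmationViewModel { Email = profile.email });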

If you’d like to get the profile image, below is an example:

GetProfileImage Code
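
Again in sketch form: the Graph API’s picture edge 302-redirects to the actual JPEG, and HttpClient follows redirects by default, so a plain byte download works. Where you save the bytes – disk, blob storage, the user record – is up to you:

private static async Task<byte[]> GetProfileImage(string facebookUserId)
{
    using (var http = new HttpClient())
    {
        // type=large is one of the preset sizes the picture edge supports.
        return await http.GetByteArrayAsync(
            "https://graph.facebook.com/" + facebookUserId + "/picture?type=large");
    }
}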

 

Moving Forward

I hope this article has helped answer your Facebook integration questions. If you would like additional details, please post in the comments, or message me on Twitter: @Auri

Thank you!

I was recently included on a thread with a high school student considering programming as a career. Fellow developers at Eleven Fifty were sharing their insight. I liked my pre-caffeinated contribution. I hope you enjoy as well.

Aaron,
I echo Tiffany’s sentiment. I’d be delighted to be more interactive with you on questions. Funny – I think I went to school with a Rickleff.
Anyway… I *loved* computers growing up. Still, until I was in high school, I didn’t want to be a programmer, which I later learned was really a “software engineer.” I thought they were just unhealthy, unsocial slobs that worked long, grueling hours, with pizza their only food group. Well, that was television and movies, at least. I found programming and problem solving came easily, and I liked making the computer do whatever it was I wanted, if I only spent the time. I didn’t start out with programming as a career – I started with technology, being an analyst and writer at a consumer electronics research firm. It wasn’t until my friend [and employer] challenged me to write a program for the company, and I accomplished it by putting my hobby to good use, that I started thinking programming could be a career. I learned I could make a living with my favorite hobby. That’s fun, and freeing. It’s like not working, even when it feels like work.
So what will your career look like? Software engineering makes you somewhat of a white collar worker – the pay is higher, and you’re always working with intelligent people – not that you’ll always admit that. It’s more of a “white collar t-shirt” job, because you’re required to be both a thinker and a creator at once, which can be messy. Ask yourself if you like to make things better, and if you think about how to actually do it. Even if you don’t have the skill yet – that will grow over time, and you’ll have to fail… a lot – that two-punch thinking combination is what will get things done, and make you enjoy your job. Did I mention failing? It happens all the time. You’re always building things that don’t exist, based on ideas written in a few sentences by people who don’t know how to do what you’ll be able to do. From the beautiful buildings you see while walking, to paintings at shows, to jokes you hear for the first time – all are the final result of the many failed attempts that came before. Building designs start with an idea out of thin air, go through a billion revisions, and finally get built. Jokes usually start as variations that don’t get a blink, before the final one makes an audience laugh. But the comic started the line of thought, from thin air, from inspiration, and from thinking about how people think. The same goes for programming.
The lesson: Fail quickly, then move on to the next approach.
That being said, I’ve found the best parts of programming are the community, and what it leads to.
First, Community. Software engineering is like medicine. You’re not going to know all the practices. You’ll be good at one, or a few, but can never be good at all. Yet, you’ll meet brilliant people that can fill in the gaps in your knowledge, and you feel even better when you do the same. As engineers, we inspire other engineers. Look at Steve Wozniak, Steve Jobs, Nikola Tesla, Sergey Brin, Larry Page – all their bios mention influencers. Nobody did it on their own. They all had help.
Second, What it Leads To. Coming up with ideas all the time has its side effects. The most prevalent? A constant stream of ideas on how to make those cool computers, whether they have a keyboard or not – phones for example – do more stuff. You’ll have ideas. Lots of them. And you have the power to make your ideas real. You’ll fail in bringing them to reality, often. Like medicine, or any career really, you’ll get better over time, tuning your craft. You’ll release your ideas, maybe as apps, maybe as web sites, maybe just making your own projects millions of people use – like Apple, Google, Microsoft, and countless others you think of having the best and brightest. Those companies are full of people who aspired, as you do, to become software engineers at some point in their lives. Those companies were also started by software and hardware engineers. Heck, Apple practically invented the personal computer, and the software engineer that wanted to program it.
Gosh, that’s a lot, and I need another refill of coffee. I hope to discuss further, if you’d like.
Thanks and Best,
-Auri
Appending what a fellow developer and instructor answered to the same student:
What your career looks like in 5 or 10 years is a very personal choice.  If you are a guy looking for a desk job with great benefits in a big company, that’s going to look very different than if you have an entrepreneurial spark that leads you to develop your own products or freelance.  I can tell you that you need to talk to all types of software professionals to get this knowledge and find out what excites you most.  The best way to do so is to attend networking events.  Verge is a fun one for entrepreneurs.  I believe Auri can refer you to a few great .NET networking groups.
After 5 years of MY career, I found myself climbing a technical corporate ladder inside of Motorola and being very content with that.  But after 10 years (still at the same company), I grew restless and started my own freelance firm on the side while also transitioning from test to architecture within the big company.  And after 15 years, I found myself appreciating the big picture of software (sales / pm / business dev) more than I did the nitty gritty code and new technologies.
As far as highs and lows in a coding career… that’s a bit more finite.  There’s a huge high when you can point at something and say, “I did that! And it’s AWESOME!”  And an even bigger high when your peers and mentors do the same.  And for every coder, there’s a dark, dark low when you run into a problem that you just CAN’T figure out.  You feel alone, you feel stupid, and you feel like a failure.  As a coder, you’re going to need to expect those situations, not fear them – just grow and learn from them.
Hope this helps.  Feel free to find me & Auri at Eleven Fifty and chat about this stuff during the time you’re here.
Thanks,
Tiffany Trusty

I recently bought a 3D Systems 3rd Generation Cubify 3D printer. I promised a friend we could print his mom a 3D box with her name embossed across a ribbon for Mother’s Day. That’s when this issue reared its ugly head: the software would simply say “File not found” or “Bad file” on various STL files – each representing a part of the box – and refuse to print. So what was causing this? I had a Mother’s Day present to print!

It turns out the Cubify software doesn’t like special characters in STL filenames. Quotes and dashes appear to be verboten. My guess is they’re running a command line tool in the background in an ill-advised way – ahem, invoking it directly instead of through .NET’s proper process-invocation calls – so the quotes and dashes end up being interpreted as extra paths or command-line arguments, and the call fails.
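
For contrast, here’s the defensive way to hand a filename to an external tool from .NET – quoted, and without shell interpretation. This is a sketch of the pattern, not Cubify’s actual code; slicer.exe is a made-up stand-in:

using System.Diagnostics;

// Hypothetical path – note the apostrophe, spaces, and dash.
var stlPath = @"C:\Prints\mom's box - part1.stl";

var psi = new ProcessStartInfo
{
    FileName = @"C:\Tools\slicer.exe",
    // Quoting the path keeps the spaces, dashes, and apostrophe from being
    // parsed as separate arguments or options. (Embedded quotes would still
    // need escaping, but Windows doesn't allow them in filenames anyway.)
    Arguments = "\"" + stlPath + "\"",
    UseShellExecute = false
};

using (var process = Process.Start(psi))
{
    process.WaitForExit();
}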

The solution? Rename the file to something simpler and with no special characters.

If you’re running into this issue, please let me know in the comments. I’m fully convinced Cubify doesn’t test their software all that well. I wonder if they are the ilk following the “unit tests passed, therefore it works” mantra.

UPDATE (13-Dec-2013): Microsoft has a fix: http://answers.microsoft.com/en-us/windows/forum/windows8_1-networking/dell-venue-pro-loses-wireless-connection-after/bc8a1426-fdb8-466d-b074-c80a06e70d76 and direct link to update http://www.microsoft.com/en-us/download/details.aspx?id=40755

UPDATE (10-Dec-2013): Updated to include fix for WiFi problems caused by latest Patch Tuesday installs.

My WiFi stopped working on my Dell Venue Pro 8. Uninstalling Microsoft Updates KB2887595 and KB2903939 fixed the problem.

TIP: After uninstalling these updates, you can go back to Windows Update via the method below, scan for updates, then right-click each update and select Hide this Update so Windows doesn’t repeatedly try to reinstall them.

To do this:

1. On the Start menu, swipe down to All Applications

2. Scroll all the way right and tap Control Panel

3. When Control Panel appears on the desktop, search for Windows Update by typing in the search box, and tap it

4. On the left pane there will be an option for View Installed Updates. Tap it.

5. Find Update for Microsoft Windows (KB2887595) and tap it, then tap Uninstall. If you also have update KB2903939, don’t restart yet. Otherwise, skip to step 7.

6. Find Update for Microsoft Windows (KB2903939) and tap it, then tap Uninstall.

7. Restart.

8. Your WiFi should be working again.

I picked up a Dell Venue 8 Pro for $99 as part of Microsoft’s 12 Days of Presents spree. Here are some tips & tricks for the more techy folks out there:

How to Access the BIOS

Press the power button once. Then hold down the Volume Down button until the Dell logo disappears. You don’t need a keyboard – it has an on-screen mouse mapped to the touch screen. Cool, eh?

To access the Advanced settings of the BIOS, follow the instructions through Step 7 below:

How to Speed Up SSD Disk Access by Modifying the EFI / BIOS

Thanks to Sasha for the following steps, which can increase speeds by over 50%!

1) From Windows, bring up the charms (swipe in from right)
2) Select Settings -> Change PC Settings, or Start, then All Apps, then PC Settings.
3) Choose Update and Recovery -> Recovery
4) Under Advanced Startup, select Restart Now
5) From this blue menu, select Troubleshoot, then select Advanced Options
6) Select UEFI Firmware Settings, then click Restart
7) When the BIOS shows up, hit the on-screen ESC button ONLY ONCE.
8) You’re now in the Main “tab”, with a vertical list of options. From here, select Advanced – this shows all the BIOS settings, and is different from hitting the Advanced tab across the top.
9) Select LPSS & SCC Configuration
10) Select SCC eMMC 4.5 HS200 Support and select Enabled (Mine was disabled by default)
11) Select DDR50 Support for SDCard and select Enabled (Mine was disabled by default)
12) Press F10 on the on-screen keyboard to save, then Save Settings and Exit and you’re all set.

Getting Back ~5 Gigabytes of Space by Removing Recovery Partition

The Dell Recovery Partition is essential for restoring your machine should something catastrophic happen. Making matters worse, Dell often runs out of stock of recovery media, and won’t send you any after a year or two has passed. That’s hit me before, and it’s not fun. So, make sure you’ve backed it up!

Once you’ve backed up that recovery partition, there’s no point in keeping it. Get those gigs back!

Here’s how:

NOTE: Make sure you have at least 50% of your battery left for this process. I wouldn’t do this when hitting the lower ends of the battery spectrum.

  1. Go to All Applications and scroll all the way right to the Dell group. Tap the My Dell application.
  2. Click Backup, even if it says no backup software is installed.
  3. Click the Download Local Backup button. This will provide a link to download Dell Backup and Recovery, which you should download and install. Basically, once you click the Download button, select Run and wait for Setup to do its job. This process can take a long time. Even the download appears to be huge. It’s probably downloading the latest recovery data, but that’s just a guess.
  4. After the software has installed, it will request a restart. So, restart the tablet.
  5. Go to All Applications and back to the Dell group. Note the new Dell Backup and… option. Tap it.
  6. Wait a few moments for the cool clock animation to complete, then agree to whatever terms are presented, or not.
  7. Tap the Reinstall Disks option. This is the equivalent of a Factory Restore partition backup.
  8. Tap USB Flash Drive, which is probably the only real option you have with this unit. This includes use of the Micro SD card, which is what I used, since I didn’t have a USB adapter handy. If you decide to use an external burner, that’s cool, too. But… why?
  9. Select your USB drive, or the MicroSD card. I backed up to an 8 GB MicroSD. Dell estimates the backup at 4.03 GB, so 8 GB should suit you just fine.
  10. Tap Start, then tap Yes when asked if you’re sure about wiping out the USB or MicroSD drive. Of course you’re sure! (right?)
  11. Wait until it’s done.
  12. When it’s complete, click OK, and put the backup media in a safe place. I put it in my Venue Pro’s box.
  13. Go back to Start, then All Programs, then Desktop.
  14. Hold down on the Start button and select Command Prompt (Admin).
  15. Type diskpart to launch the Disk Partition manager.
  16. Type list disk to see the available disks, then type select disk 0. On the Venue there should only be the one internal drive, but confirm by its size.
  17. Type list partition to see the available partitions.
  18. Type select partition X, where X is the number of the approximately 4 gigabyte recovery partition. On my Venue, it was 6.
  19. Make sure you see “Partition X is now the selected partition”!!!
  20. Type delete partition override and hit enter.
  21. You should be greeted with “DiskPart successfully deleted the selected partition.”
  22. Type exit to quit DiskPart, then exit again to quit Command Prompt.
  23. Now that the partition is gone, we need to expand the size of the main partition.
  24. Open an Explorer window and long press This PC, then select Manage.
  25. When Computer Management appears, select Disk Management under Storage.
  26. You should see the 4.64 gigabytes or so we freed up showing as Unallocated.
  27. Long press your C: drive and select Extend Volume….
  28. The Extend Volume Wizard appears. Click Next.
  29. You’ll be asked where the space to extend the volume should come from. Everything should already be filled out to assign the maximum unallocated space. Simply tap Next, or adjust as desired and then click Next.
  30. The wizard will confirm the extension settings. Click Finish.
  31. There you go! Your C: drive is now almost five gigabytes larger!

UPDATE: You can also back up to a USB drive by acquiring a USB OTG, or “On-The-Go”, adapter. Pick one up from Fry’s, SKU number 7582626. This will also enable you to use thumb drives and such on your Dell Venue 8 Pro.

Disable the Annoying Backlight

Dell’s power management settings for the backlight are wretched, making the display dim almost all the time. Let’s get around that, shall we?

  1. Swipe out the charms menu, then select Settings, then Change PC Settings on the bottom.
  2. Select PC and devices.
  3. Select Power and sleep.
  4. Set Adjust my screen brightness automatically to Off.

Many developers have had that “labor of love” project – the kind that keeps them up nights trying to get everything right, figuring out how to pass that one last hurdle. Woz was no different, and the recently open-sourced code – for non-commercial use, of course – brought back memories of the days he worked on it so long ago, finishing in Vegas no less.

Some of you know I used to work for Steve, so I reached out to him with a link to his code.

Here’s his response:

On Nov 13, 2013, at 8:04 PM, ʞɐıuzoʍ ǝʌǝʇs wrote:

The MOST AMAZING code of my life…I could never do anything close to this much ‘out of any box’ stuff ever again…it was as amazing to come up with it as it seems to be reading my code. In some places I put numbers like (5) meaning that 5 cycles would be taken by that instruction – I had to count them all so the loops always sent a byte to the controller every 32 microseconds exactly. And there is no way to explain the 5-bit and 7-bit stuff but it extended the data from 13 sectors to 16 sectors. The 13-sector version was running in Las Vegas. The improvement to this 16-sector code is the part that I worked on every night for a month, nearly finishing each night around 2 AM (Denny’s milkshake) but repeating the whole process the next day because I had to keep getting the entire huge framework in my head each day. Finally I stayed one night until 6:30 AM and got it totally done. Jobs had been asking me every day when it would be done and that morning I told him that it was! This part of the low-level disk code was not Randy’s but I am so thankful for the parts he did so well too that made higher level sense out of this. I consider this code to be more like hardware than software.

Below are my notes from Day 1 of the CEATEC show in Makuhari, Japan.


Sony Info-Eye + Sony Social Live

Sony showcased two unique social apps, Info-Eye and Social Live, part of their Smart Social Camera initiative.


Info-Eye takes an image and analyzes it for different types of information, surfacing that information in different views. For example, take a photo of the Eiffel Tower and you are presented with different "views" of the data. The first view may be related photos of the French attraction, such as a view from the top, or the Eiffel Tower Restaurant. Change views and you’re presented with a map of Paris. Continue to the next view and see your friends’ comments on social networks about the attraction. It certainly is an innovative approach to automatically getting value from simple photo taking – photos you normally wouldn’t look at again anyway.

A video is worth thousands of pictures, and you already know what those are worth:

[Video]

And in case you simply want a picture:

[Photo]

Social Live is like a live video feed, streamed from your phone to various social services. While the example of a live marriage proposal wasn’t so realistic, Social Live still has great consumer applications. For example, set a social live link on Facebook and your friends could view your video feed while you tour the Champs-Élysées in Paris, without your needing to initiate a Skype call. It’s similar to having a live broadcast stream ready to go at any time.

3D 4K Everywhere!

3D didn’t entice the world – again – so why not re-use all that marketing material, swapping in 4K for 3D? No, it’s not that bad, and 4K is beautiful, but it’s just too early and too expensive, as is almost every evolutionary technology like this. Just for fun I made a collage of the various offerings. Component innovation is once again creating products at a pace greater than consumers’ willingness to adopt.

[Photo: collage of 4K offerings]

Tizen IVI Solutions at Intel

Intel had a sizeable display of Tizen OS-based In-Vehicle Infotainment (IVI) solutions at its booth. Apparently Intel had 800 developers working on Tizen while partnered with Nokia on the OS-formerly-known-as-MeeGo. The most interesting Tizen demonstration was Obigo’s HTML5-based IVI solution. On a related note, Samsung is apparently folding their Bada OS into Tizen. It will be interesting to see whether it makes any difference in the global mobile OS movement, still dominated by Android, then iOS, then Windows Phone.


Obigo’s HTML5-based In-Vehicle-Infotainment Solution

Obigo’s solution is to automotive application development what PhoneGap is to standard mobile application development: developers build apps – “widgets,” in Obigo’s parlance – using HTML5 + JavaScript, accessing vehicle data and services via an abstraction layer provided by the Obigo engine. Nothing appears to prevent Obigo from bringing this solution to Android, so look for that possibility on the various Android vehicle head units coming to market. Hyundai and Toyota will be the first integrators of the system.


Apparently Japanese Car Insurance is Very Expensive

Another solution shown at the Intel Tizen display was a driving habits monitor capable of sending an email to your insurance company with said information. The goal would be to lower insurance rates. The solution was a hokey implementation at best, but at least I’ve learned insurance is expensive here as well.

Fujitsu Elderly Care Cloud

In an effort to keep Japan’s increasingly elderly population in touch with their families, Fujitsu has created a "Senior Cloud." The benefit to seniors will apparently be video and photo communication and sharing services with their family, alongside healthcare detail sharing services. I couldn’t get a demo, but it sounds like a good idea. For the next 10-20 years, anyway – by then, the "elderly" will have become the people who know how to do these things.


ModCrew iPhone Waterproofing Coat

ModCrew displayed a nano-coating solution for iPhones (only), rendering your fruit phone washable.


Omron Basil Thermometer with DoCoMo Health Smartphone App

Omron has a unique line of basil thermometers, with pleasant shapes and colors, targeted (obviously) towards women. The devices, among other Omron health device solutions, can all transmit their data via NFC to phones and tablets. Using an app from NTT DoCoMo, health data can be consolidated and analyzed, and health advice can be provided.


All health components gather data to recommend healthy choices.


Huawei Phone with Panic Alarm

Chinese consumer and mobile electronics provider Huawei showcased their HW-01D feature phone with a built-in panic alarm. Targeted towards women, children, and the elderly, the device has a pull tab that sets off a loud, yet oddly pleasant, siren to scare away would-be perpetrators.


Fujitsu Finger Link

Fujitsu’s Finger Link solution uses a top-mounted camera to convert physical objects to virtual objects, enabling you to organize and relate such items for later manipulation. For example, put 3 Post-it notes down and they are converted to digital representations, automatically recognized as separate objects. Tap each related item and drag a line between the others similar to the first. Tap a button on the projected interface and now they’re related, moveable, sharable, and more.


Fujitsu Sleepiness Detection Sensor

A hot item in vehicles displayed at CEATEC this year was detecting distracted driving. Fujitsu’s component detects eyes moving away from the road, or downward or upward head motion possibly signifying the driver is drowsy. The component is for use by automotive integrators.


Fujitsu Big Data + Open Data LOD Utilization Platform

Fujitsu showcased an open LOD (Linked Open Data) utilization platform for quickly and easily mining and analyzing data from many Open Data sources at once, visually. The back end uses the SPARQL query language.


Mitsubishi 4K LaserVue

Mitsubishi showcased a prototype 4K Red Laser + LED backlit display, enabling a beautiful, beyond photorealistic video display. Standing in front of the reference unit, I actually felt like I was looking through a window – the colors were amazingly vivid and lifelike.


Mitsubishi elevator skyscraper sway detection system

Mitsubishi also showcased a solution for preventing elevator stalls in swaying skyscrapers. Their sensor moves the elevator car to a non-swaying or less-swaying floor to prevent service outages, keeping the elevators running as efficiently as possible, and giving you one less excuse to miss that meeting.


Mitsubishi 100Gbps optical transmission technology

Mitsubishi showcased a 100 gigabit/second inter-city optical interconnect solution, with a range up to 9000 kilometers.


Mitsubishi Vector Graphics Accelerating GPU

Who says you need multi-core ARM processors running over 1 GHz + powerful GPUs for beautiful embedded device interfaces? Mitsubishi sure doesn’t. They showcased a GPU running at a scant 96 MHz, accelerating vector graphics display at up to 60 frames per second. Incredibly responsive interfaces for elevators and boat tachometers were displayed. The target is rich user interfaces with incredibly low power consumption.


Mitsubishi Rear Projection Display for Automotive

It’s no surprise Mitsubishi is proposing rear projection solutions for automotive – RP is one of the company’s strengths. What they propose is curved surfaces to provide an interface that matches the interior of the vehicle. Also possible is 3D-like interfaces, as shown below.

[Photo]

Sharp Frameless TV Concept

A display with no bezel? Sharp’s frameless concept showcases how beautiful such a solution would be. That’s it in the center.

[Photo]

Sharp Mirror Type Display

Also on display (ahem) was the Mirror Type Display, with a display built into a mirror. Have I said display enough times?

Pioneer Wireless Blu-ray Drive

That shiny new ultrabook is pretty svelte, isn’t it? What’s that? You want to watch a Blu-ray? That’s fine – just use Pioneer’s BDR-WFS05J solution to wirelessly connect to the Blu-ray drive across the room – as long as it’s in its dock – and stream the data over 802.11n. The unit also supports USB 2 and 3. Ships at the end of September.


Toyota Smart Home HEMS Using Kinect

Toyota showcased a smart home energy management system (HEMS) using Kinect to interact with various residents.

Toyota Concept Vehicles

I don’t know much about the following one-person electric riders, but they looked cool, so enjoy the photos.

[Photos]

Clarion Smart Access + EcoAccel

Determining whether you’re driving Green, or "Eco" as they say in Japan, can be difficult. Clarion’s EcoAccel app, which runs on their Android-powered head unit, reads OBD2 sensor data to rate your Eco driving habits. It’s an entertaining way to enhance the eco-friendliness of your driving routine. The representative said there are no current plans to bring this product Stateside, but I’m hoping they change their mind. After all, OBD2 data is pretty easy to read, even if it’s not entirely standardized.


Mazda Heads Up Cockpit

While the HUD component itself is nothing to write home about, Mazda’s approach – keeping everything at eye level while reorganizing the shift knob to be easily manipulated – is a welcome blend of safe driving and ergonomics. Better yet, it will ship in their Axela vehicles, meaning less expensive cars may readily receive technology that deters distracted driving. They call this the Heads Up Cockpit with a Concentration Center Display.


Mazda Connect System

Mazda also showcased the Mazda Connect system, enabling car communication and software components to be "easily" upgraded as new features are available. Whether this will be an insanely expensive solution, akin to Samsung’s upgradeable TV approach, remains to be seen.

It’s fascinating to see how some of the most innovative products are coming from what used to be one of the least innovative industries: automotive.

Murata – Sonic Gesture Control

Murata’s components for sonic transmission and reception are being used to create a gesture recognition interface, ideal for hands-free control of tablets and other devices. This technology could be used for games, such as Fruit Ninja, providing a 3D space in which to work. The gesture’s X, Y, and Z coordinates can be determined. An SDK is available, provided by Elliptic Labs. Only single-point recognition is supported at this time, but Elliptic claims multi-gesture support is in the works.


Other notes:

  • Single point.
  • Working on multipoint. 2014 target.
  • 180 degree range.
  • Emitters and microphones.
  • 2 Transmitters, 4 microphones.
  • Accurate to about half an inch, but fine movement is supported.
  • Elliptic Labs makes software, Murata the transducer.
  • SDK for Android releasing at CEATEC; Windows SDK already available.

Mitsumi laser heads up display for automotive

Mitsumi demoed a heads-up display for automotive use, aimed at deterring distracted driving. The reference exhibit utilizes a laser pico projector and piezoelectric actuation of the mirror, rather than the electromagnetic approach their competitor MicroVision (?) uses.

The projected resolution is claimed to be 1024×640, although I’m unsure if that was a mis-translation – they’re only using a qHD (quarter-HD) panel.


The device is expected to be shipped to integrators by 2017-2018. End user access could take longer, as integrators decide how to best implement the technology.

Alps Epistemic Cockpit

ALPS showed what happens when you buck the trends of the traditional car cockpit.

Utilizing cameras, biometric sensors, and wireless charging and transmission, the cockpit can ensure the driver is authenticated and alert, and give them access to all their phone’s media.

Other notes:

  • User authentication.
  • Face recognition.
  • Checks physical condition, such as heart rate, gaze direction for drowsiness, whether the driver is looking away.
  • Gaze detection occurs continuously.
  • Vitals dictate whether driver has entered, exited vehicle.


The system uses a camera and laser to point the user to places in the vehicle, such as where to place their phone. It’s encouraging to see more manufacturers thinking outside the traditional configuration – a lack of such thinking is how we ended up retaining QWERTY as the default keyboard layout. :P

ALPS + MyWay + ROHM Efficient DC-DC Converter

Modern portable DC-DC converters are still quite inefficient, but a recent collaboration between ALPS + MyWay + ROHM may change that forever. The trio has created a DC-DC converter that’s 1/10 the size, 1/5 the weight, and many times more efficient than traditional systems.


The unit is smaller due to its higher switching frequency – 100 kHz, versus 15 kHz in current solutions – while still providing effectively the same amount of power.

The module will be sold by MyWay by the end of October 2013.

Possible applications of the module include significantly smaller and more efficient charging stations and electric vehicle power systems. This could further increase the range of EV systems while using less space.

Photo from the Intel Booth

While I haven’t yet visited the Intel booth, it sure looks cool.

[Photo]

NEC DNA Analyzer

NEC has created a portable DNA analyzer capable of analyzing DNA indicators at crime scenes and determining any possible suspect matches through integrated database searching. The company has combined the functions of three DNA machines used on crime scenes into a single, smaller unit. Rather than taking two days to process the samples, it can return results in about an hour, with 30 minutes as the next goal. The database searching is optional and does not significantly affect the unit’s processing time either way.


Other Notes:

  • In 2014 they will make units available to research and law enforcement. 2015 product launch.
  • Also has disaster site and medical applications – anywhere DNA analysis is necessary.
  • Price range expected to be 20-50M Yen. Possibly 10M Yen when it goes mainstream.
  • In a conventional system, each of those components costs 10-50M Yen, so this is a considerable savings. However, those systems can do 40-80 samples at a time vs. only 1 here.

NTT Docomo Intelligent Glasses

NTT DoCoMo showed their take on the software solutions possible when a camera and OS are attached to a glasses interface. They called these scenarios and software solutions “intelligent glasses,” even though no product is shipping as of yet.


The units had a QHD panel for the interface, with full movie playback capability.

[Photos]

In the example pictured above, the glasses are generating an overlay touch interface on the book she’s holding.

[Photos]

Above is their concept for an augmented reality application. Hands can be tracked in 3D space for manipulating an object projected in the lens display.


One incredible application utilized text and face recognition. Look at a menu in Japanese, for example, and the English translation is overlaid on the text. Users could also find and recognize faces in a crowd, making it easier, say, to find your children at a parade, or to spot social media contacts based on their online photos.


NTT already has a similar translation feature on their smartphone products.


Other Notes:

  • The solution for text translation and face recognition was running on an Android 4.0 – Ice Cream Sandwich – platform.

NTT DoCoMo 5G Demonstration

NTT demonstrated a 5G solution utilizing arrays of 100 micro-antennae to boost per-user signal strength and data transmission. Their goal is to provide 1 Gbps rates to all users, with up to 10 Gbps under “ideal” circumstances.


Below are photographs from their 5G simulation:

[Photos]

Other Notes:

  • 1000x system capacity, 100x speed increase
  • 1 Gbps typical data rate goal, sometimes 10 Gbps under perfect conditions
  • Question: What processor could handle that on a phone anyway? Makes sense that it’s still in the future.
  • Multi-cell provides a direct path to more users under load – great for the coming traffic explosion and for congested environments, with the 100 micro cells per antenna.