An Einstein Vision Use Case to Ponder – Face Identification


And it happened – my visa didn’t come through and I won’t be speaking at Dreamforce ’17. Like everyone else, attending has been a long-standing dream of mine, but fate took a different turn. Well, NO. I am not giving up. I will keep my fingers crossed and hope to make it to Dreamforce in the coming year. That being said, I didn’t want my session to go in vain, so I have decided to write a blog on what I was hoping to present at Dreamforce ’17.

Now to make things easier, the whole concept is an extension of the idea presented here – https://shrutisridharan.wordpress.com/2017/04/07/build-an-employee-check-in-system-using-tessel-and-salesforce/. In simple words, it is an authentication system that uses your face as the key to unlock access to, say, your office! The face identification will be facilitated using Einstein Vision.

Here is a graph depicting how authentication systems have evolved over time –

Auth Sys Timeline.png

So that shows how access control/authentication systems have progressed. It all started with keypad-based access. Then we had RFIDs – I guess we still use them today. Then we saw the birth of “Biometric Authentication”, and that one was a revolution! We saw authentication using fingerprints, voice and faces. That’s where we are now.

Part 1: Setting up Einstein Vision


So before I go into the depths, I would highly recommend reading one of my previous blog posts that talks about setting up Einstein Vision in your DE – https://shrutisridharan.wordpress.com/2018/01/04/a-primer-to-einstein-vision/. Don’t worry! It shouldn’t take that long to complete 🙂 And if you are still reluctant, all you have to do is take a look at this 10-minute video –

Alright! Now that you have set up Einstein Vision and played around with Mountains and Beaches, let’s explore what else Einstein Vision can do.

Taking Einstein Vision to the Next Level

Now what if we could take this a step further and use Faces instead of Mountains and Beaches?!? So I decided to create a Dataset with pictures of me and my friend (Parvathy). Guess what happened?!? Take a look.

Einstein Vision isn’t developed to detect faces or facial features but the power of Pattern Matching/AI was good enough to identify faces. Isn’t that awesome!

The Fun Begins – DIY Authentication System!

Now that we have Einstein Vision trained with the faces that need to be identified, the next step is to set up a device at the door of your office that can capture your face and send it to this Developer Edition Org (where you have set up Einstein Vision) for identification.

Let’s look at a Flow Diagram that explains the whole setup.

Flow Diagram.png

How are we going to do this? You won’t believe how simple Salesforce Einstein makes it. First, the Tessel captures your face. It then sends the image via APIs to the Salesforce Org where you have Einstein Vision set up. The Apex REST service runs the prediction and returns the result back to the Tessel.

So in essence there are 3 things to note:

  1. Tessel – An IoT-powered device that captures your face and sends it to Salesforce
  2. A Developer Edition Org – A DE where you have set up Einstein Vision (trained to detect faces)
  3. A Force.com Site – What?!? Yes, this Force.com Site will be set up in the same Org as above (number 2) and will expose Apex REST services written to accept the images from the Tessel and perform the identification using Einstein Vision

Part 2: Creating Public APIs for Tessel


Well, no rocket science. All we have to do is create an Apex class that exposes Apex REST APIs capable of accepting images, and then add it to a Force.com Site in order to make it public. Yes, I understand – PUBLIC = DANGER! But let’s keep it simple 🙂

@RestResource( urlMapping = '/tesselservices/*' )
global class TesselServices {
    // Replace this with the Model Id obtained after training (see below)
    public static final String MODEL_ID = 'INSERT MODEL ID HERE';
    
    /**
     * Accepts the raw image bytes from the Tessel and returns
     * a publicly accessible download URL for the stored image.
     */
    @HttpPost
    global static String upload() {
        RestRequest req     = RestContext.request;
        
        Blob picture        = req.requestBody;
        
        // Store the captured image as a File
        ContentVersion conVersion   = new ContentVersion();
        conVersion.Title            = 'Content Version ' + System.now().getTime();
        conVersion.PathOnClient     = 'file_' + Datetime.now().getTime() + '.jpg';
        conVersion.VersionData      = picture;
        conVersion.Origin           = 'H';
        insert conVersion;
        
        // Create a Content Delivery to obtain a public URL for the image
        ContentDistribution cd              = new ContentDistribution();
        cd.Name                             = 'Content Dist ' + System.now().getTime();
        cd.ContentVersionId                 = conVersion.Id;
        cd.PreferencesAllowOriginalDownload = true;
        cd.PreferencesAllowPDFDownload      = true;
        cd.PreferencesAllowViewInBrowser    = true;
        insert cd;
        
        // ContentDownloadUrl is populated only after insert, so re-query it
        List<ContentDistribution> conDist = [
            SELECT  ContentDownloadUrl
            FROM    ContentDistribution
            WHERE   Id = :cd.Id
        ];
        
        return conDist[0].ContentDownloadUrl;
    }    
    
    /**
     * Runs the Einstein Vision prediction against the public image URL
     * passed in the "img" query parameter.
     */
    @HttpGet
    global static PredictionResponse authenticate() {        
        EinsteinAPI api = new EinsteinAPI();
        
        PredictionResponse resp = api.predictImage(
            MODEL_ID,
            EncodingUtil.urlDecode( RestContext.request.params.get( 'img' ), 'UTF-8' )
        );
        
        return resp;
    }
}

In the above code snippet, ensure you set the Model Id (at the very beginning) to the one obtained as a result of the training. You can always use the Save Model button to save the trained model with a friendly name from the Einstein Vision tab –

Screenshot_2

After a successful save, you can navigate to the Custom Settings – Trained Models and capture the Model Id as shown below.

Screenshot_1

Now, all that Apex code does is this (a picture is worth a thousand words) –

Public APIs for Tessel.png

Now don’t get overwhelmed seeing that code! It’s ain’t difficult at all. So we have two REST methods – one POST and another GET. The POST receives the image from the Tessel while the GET does the prediction.

In essence the idea is to create a Public URL to the image (via Content Version & Content Distribution) and then send this Public Image URL along with the Model Id to Einstein Vision APIs for prediction.
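To make that two-step flow concrete, here is a minimal Node.js sketch (the base URL below is hypothetical – use your own Force.com Site URL) of how a caller would compose the GET call. The download URL returned by the POST contains characters like `:` and `/`, so it must be URL-encoded before it can travel as the `img` query parameter:

```javascript
// Sketch: compose the authenticate URL from the public image URL.
// The site base URL below is a hypothetical example.
function buildAuthenticateUrl( siteBaseUrl, imageDownloadUrl ) {
    // Escape reserved characters so the download URL survives
    // the trip as a query-string parameter
    return siteBaseUrl +
        "/services/apexrest/tesselservices/authenticate?img=" +
        encodeURIComponent( imageDownloadUrl );
}

const url = buildAuthenticateUrl(
    "https://example-developer-edition.ap5.force.com",
    "https://example.file.force.com/sfc/dist/version/download?oid=00D"
);
console.log( url );
```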

Part 3: Configuring the Tessel


I know you all have been hearing about Tessel a few times now. Tessel is a microcontroller that runs Node.js. It supports a loooot of plug-and-play modules. And you know what – it connects to the Internet using WiFi. Pretty cool! We will be using a Tessel (along with a webcam) to capture the user’s face, and this little thingie will be affixed at the door of your office.

Tessel.png

And don’t worry – Node.js will not be all “greek and latin”. If you know a little JavaScript, then Node.js can be easy to learn too 🙂 Trust me, setting up the Tessel is super easy. Check this out – http://tessel.github.io/t2-start/. Unlike other IoT-enabled devices, Tessel is really easy to work with. You just have to plug it into your PC via the USB port and you will see it come to life! The next bit is to install all the required 3rd-party Node.js libraries for our project and finally burn our source code right into it. That is it!

The libraries that we will be using include –

  1. Requests
  2. Camera
  3. Keypad
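For reference, the npm package names differ slightly from the friendly names above – in the snippet later in this post, the HTTP client is the request package and the camera lives in tessel-av. A minimal package.json for the project might look like the sketch below (the project name and version numbers are indicative, not prescriptive, and the keypad dependency is omitted since it depends on which hardware module you use):

```json
{
  "name": "tessel-face-auth",
  "version": "1.0.0",
  "main": "index.js",
  "dependencies": {
    "request": "^2.83.0",
    "tessel-av": "^0.4.0"
  }
}
```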

Now what happens within the Tessel can be summarized as below –

Setting Up Tessel.png

Here is the snippet that needs to be burned onto the Tessel –

"use strict";

const request = require( "request" );
const av = require( "tessel-av" );
const camera = new av.Camera();
const capture = camera.capture();

capture.on(
    "data",
    function( data ) {
        console.log( "\nStarted capturing process..." );

        request(
            {
                method  : "POST",
                url     : "https://authorize-tessel-developer-edition.ap5.force.com/services/apexrest/tesselservices/upload",
                body    : data
            },
            function( error, response, body ) {
                console.log( "\nFace captured successfully." );

                body = body.substring( 1, body.length - 1 );
                body = encodeURIComponent( body );

                request(
                    {
                        method  : "GET",
                        url     : "https://authorize-tessel-developer-edition.ap5.force.com/services/apexrest/tesselservices/authenticate?img=" + body,
                    },
                    function( error, response, body ) {
                        console.log( "\nPrediction completed." );
                        console.log( "\n" + body );
                    }
                );
            }
        );
    }
);
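Once the GET returns, the Tessel still has to decide whether to open the door. The Einstein Vision prediction response carries a probabilities array of label/probability pairs, sorted highest first. A small helper like the sketch below could make the call – note that the 0.95 threshold is an assumed value you would tune for your own model:

```javascript
// Sketch: decide whether to grant access from an Einstein Vision
// prediction payload. The 0.95 threshold is an assumption – tune it.
function isAuthorized( predictionJson, allowedLabels, threshold ) {
    const parsed = JSON.parse( predictionJson );

    // probabilities arrive sorted by confidence, highest first
    const top = parsed.probabilities && parsed.probabilities[ 0 ];

    if ( !top ) {
        return false;
    }

    return allowedLabels.indexOf( top.label ) !== -1 &&
        top.probability >= threshold;
}

// Example payload shaped like a prediction response
const sample = JSON.stringify( {
    probabilities : [
        { label : "Shruti",   probability : 0.97 },
        { label : "Parvathy", probability : 0.03 }
    ]
} );

console.log( isAuthorized( sample, [ "Shruti", "Parvathy" ], 0.95 ) ); // true
```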

The Wrap!

That is it! Did you even realize it was so simple that it could hurt 😉 Don’t wait! Go give it a shot. You are sure to be amazed at the potential of both of these magnificent platforms.

Here are the slides that were to be presented @ #DF17 –
