ML Image Capture Inference with Greengrass V2 (Raspberry Pi)
Overview
Machine Learning (ML) and Internet of Things (IoT). Two key terms, for those in tech or curious about tech, that keep coming up when you think about where technology is going. For myself at least, my exposure to these during my years of study was limited. When topics of ML came up, they were introduced with more mathematical depth than I could grasp, and IoT was simply a buzzword for making things move over the internet. In my later years of university and my first tech job in the cloud, I came to understand that the barrier to these technologies is not so high after all.
You see, with the power of cloud computing giving any individual access to enterprise technologies, doing ‘The cool stuff’ just became a little bit easier. In today’s article we’ll talk about how we can use the AWS cloud to:
- Connect an edge device, Raspberry Pi, to the cloud.
- Deploy via Greengrass V2 to the device.
- And pull local ML inferences from the device into the cloud.
We will be using the publicly offered DLR object detection component that Greengrass provides, but note you can introduce your own ML models.
Workshop
Installing and setting up Greengrass
Before we can connect and deploy, we first need to install the relevant greengrass software on our pi.
You can follow the official AWS Getting Started instructions for installing here. However, only follow the instructions up until the Create your first component section.
Do not complete that section, as we will be creating our components from the console.
Alternatively you can follow the below steps to install.
1. In order to install Greengrass, the Pi needs a Java runtime installed on it. The following commands should install Java 11 and print the version if it’s worked:
sudo apt install default-jdk
java -version
2. From your Raspberry Pi’s home directory, retrieve the installation package.
cd ~
curl -s https://d2s8p88vqu9w66.cloudfront.net/releases/greengrass-nucleus-latest.zip > greengrass-nucleus-latest.zip
3. Unzip and run the package.
unzip greengrass-nucleus-latest.zip -d GreengrassCore && rm greengrass-nucleus-latest.zip
Note: Replace GreengrassCore in the above command with the folder where you’d like to store your core software.
4. Before continuing you’ll need to provide AWS credentials on your Pi so that the installer can provision resources in your account. You can do this by exporting either the access key and secret for a user you’ve created in your AWS account, or temporary credentials from a role for added security.
If you created and want to use a user, do the following:
export AWS_ACCESS_KEY_ID=<User Access Key here>
export AWS_SECRET_ACCESS_KEY=<Secret Key here>
Or if you have chosen to use a role:
export AWS_ACCESS_KEY_ID=<Role Access Key here>
export AWS_SECRET_ACCESS_KEY=<Secret Key here>
export AWS_SESSION_TOKEN=<Role session token here>
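Whichever option you choose, it’s worth confirming the credentials are picked up before running the installer. A quick sanity check, assuming the AWS CLI is installed on the Pi:

```shell
# Ask AWS "who am I?" using the exported credentials.
# On success this prints the account ID and the ARN of your user or role;
# on failure it reports invalid or missing credentials.
aws sts get-caller-identity
```

If this command errors, re-check your exports before moving on, as the next step will fail without working credentials.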
5. The next command runs the installer, which connects to AWS and registers our device as a Greengrass core. Replace region with your AWS Region (for example, us-east-1).
sudo -E java -Droot="/greengrass/v2" -Dlog.store=FILE \
-jar ./GreengrassCore/lib/Greengrass.jar \
--aws-region region \
--thing-name MyGreengrassCore \
--thing-group-name MyGreengrassCoreGroup \
--tes-role-name GreengrassV2TokenExchangeRole \
--tes-role-alias-name GreengrassCoreTokenExchangeRoleAlias \
--component-default-user ggc_user:ggc_group \
--provision true \
--setup-system-service true \
--deploy-dev-tools true
Congrats! You should now have Greengrass running on your Raspberry Pi, and if you have a look under Greengrass V2 in your console, you’ll see your core is registered.
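You can also check from the Pi itself. The installer registered the nucleus as a systemd service, and because we passed --deploy-dev-tools true it also shipped the local Greengrass CLI. The paths below assume the default /greengrass/v2 root used in the install command:

```shell
# Confirm the Greengrass nucleus service is active and running.
sudo systemctl status greengrass.service

# List the components the nucleus currently knows about locally
# (available thanks to --deploy-dev-tools true).
sudo /greengrass/v2/bin/greengrass-cli component list
```

If the service shows as active and the CLI responds, the core is healthy and ready to receive deployments.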
Components and Artefacts
Now comes the fun part: working with components and artefacts in Greengrass V2.
Components and artefacts work together in allowing us to deploy and run code on our device. The easiest way for me to remember what each does is:
- Artefacts: These are just what they sound like: assets from the cloud that we wish to place on the device.
- Components: These can be thought of as containers that encapsulate what a module will do once deployed to the device. Within our component we reference the artefacts to deploy through a Recipe.
A recipe specifies all the instructions and deployment steps to execute when deploying the component.
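To make that concrete, here is a minimal sketch of what a recipe looks like in YAML form. The component name, artifact URI, and script below are hypothetical placeholders for illustration, not part of the DLR component we deploy next:

```yaml
RecipeFormatVersion: '2020-01-25'
ComponentName: com.example.HelloWorld        # hypothetical component name
ComponentVersion: '1.0.0'
ComponentDescription: Minimal example component.
ComponentPublisher: Me
Manifests:
  - Platform:
      os: linux
    Artifacts:
      - URI: s3://my-bucket/artifacts/hello.py   # hypothetical artifact location
    Lifecycle:
      # {artifacts:path} is a Greengrass recipe variable that resolves to
      # where the artefacts land on the device.
      Run: python3 -u {artifacts:path}/hello.py
```

The Artifacts list is the “what to put on the device”, and the Lifecycle section is the “what to do with it once it’s there”.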
Now that we understand components and artefacts, we can deploy a publicly available one.
Deploy DLR Object Detection
1. First we will need to navigate to the Greengrass V2 service in our AWS account, selecting Components from the menu, as per below.
2. Once inside the components section you’ll have two types of components to select from.
- My components
- Public components (Select this)
3. The component we will be deploying is the publicly supplied DLR Object Detection. Search for aws.greengrass.DLRObjectDetection to find it.
You can view the recipe for this component by clicking into aws.greengrass.DLRObjectDetection > View recipe
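If you prefer the terminal, you can fetch the same recipe with the AWS CLI. The Region and version below are placeholders; substitute your own Region and whichever version the console shows. Note that, to my understanding, the CLI returns the recipe base64-encoded, hence the decode step:

```shell
# Retrieve the public component's recipe in YAML form and decode it.
# AWS-published components live under the "aws" account segment of the ARN.
aws greengrassv2 get-component \
  --recipe-output-format YAML \
  --arn arn:aws:greengrass:us-east-1:aws:components:aws.greengrass.DLRObjectDetection:versions:<version> \
  --query recipe --output text | base64 -d
```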
4. We will now deploy this component to our device, but not without a few modifications, as we are using a camera. Select Deploy from the DLRObjectDetection component.
5. On the next page you will be prompted to select a deployment target. Select the group containing the new core device you registered earlier, then continue through the screens until you reach the option to configure the component.
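For reference, the camera modification is a configuration update merged on top of the component’s defaults at deployment time. The sketch below reflects my understanding of the public DLR components’ parameters; double-check the names and defaults against the recipe you viewed in step 3:

```json
{
  "UseCamera": "true",
  "InferenceInterval": "3600",
  "PublishResultsOnTopic": "ml/dlr/object-detection"
}
```

With UseCamera enabled, the component samples from the Pi’s camera instead of the bundled sample images, and publishes each inference result to the configured topic, which you can watch from the MQTT test client in the AWS IoT console.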
Conclusion
And that’s it! You’ve now successfully deployed the publicly available DLRObjectDetection component. Hopefully by now you understand the component/artefact concept of Greengrass V2. Take this idea and go further by deploying your own custom components, with not just ML inference but other, more creative deployments.
Good Luck!!!