1 change: 1 addition & 0 deletions .gitignore
@@ -88,3 +88,4 @@ lint/tmp/
*.swp

Makefile
tmp_*
58 changes: 46 additions & 12 deletions README.md
@@ -5,6 +5,8 @@
| Face tracking | Pose estimation | YOLO |
|:-------------:|:---------------:|:----:|
| ![Face tracking Demo](docs/Demo-UFO.gif) | ![Pose estimation demo](docs/Demo-Pose.gif) | ![YOLO demo](docs/Demo-YOLO.gif) |
| **Mnistwild** | **Rubik's Cube** | **Whack-a-Mole** |
| ![Mnistwild demo](docs/mnistwild.gif) | ![Rubik's Cube demo](docs/rubics_cube.gif) | |

## Models Used in Samples

@@ -13,6 +15,8 @@
| UFO (Face Detection) | MediaPipe Face Detection | [Qualcomm AI Hub - MediaPipe Face Detection](https://aihub.qualcomm.com/models/mediapipe_face?searchTerm=face) |
| Pose (Pose Estimation) | MediaPipe Pose Landmarker | [Google AI Edge - Pose Landmark Detection](https://ai.google.dev/edge/mediapipe/solutions/vision/pose_landmarker#models) |
| YOLO (Object Detection) | YOLOv11 Detection | [Qualcomm AI Hub - YOLOv11 Detection](https://aihub.qualcomm.com/models/yolov11_det?searchTerm=yolo) |
| MNIST (Digit Recognition) | Classic MNIST | [MNIST Database](https://en.wikipedia.org/wiki/MNIST_database) |
| Rubik's Cube (Solver) | No model used | N/A (Internal) |

## Project aims

@@ -23,10 +27,15 @@ customized MR-based effects with deployment of open-sourced
machine learning algorithms.

Additionally, the project provides a set of utility classes,
located under [`./base/securemr_utils`](base/securemr_utils/README.md)
located under [`base/securemr_utils`](base/securemr_utils/README.md)
to simplify your
development of SecureMR-enabled applications.

The [`samples/mnistwild`](samples/mnistwild) sample also demonstrates how to define SecureMR
pipelines entirely in JSON, which allows your application
to dynamically load and update pipelines at run time without
hard-coding the algorithms into the app.
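
Since the pipelines can be loaded dynamically, a typical flow is to ship the
pipeline JSON as an APK asset and read it into memory at run time. The sketch
below is illustrative only: it uses the standard NDK asset APIs, the asset
path is hypothetical, and the step that actually passes the JSON to SecureMR
is omitted because it depends on the SecureMR API in use.

```cpp
// Sketch only: read a pipeline description shipped in the APK's assets/
// directory into a string at run time. The asset path used by the caller is
// hypothetical, and the call that hands the JSON to SecureMR is left out.
#include <android/asset_manager.h>

#include <cstddef>
#include <string>

std::string LoadPipelineJson(AAssetManager* assets, const char* assetName) {
  AAsset* asset = AAssetManager_open(assets, assetName, AASSET_MODE_BUFFER);
  if (asset == nullptr) {
    return {};  // Asset not found; the caller decides how to report this.
  }
  const std::size_t length = static_cast<std::size_t>(AAsset_getLength(asset));
  std::string json(length, '\0');
  AAsset_read(asset, &json[0], length);  // copy the whole asset into the string
  AAsset_close(asset);
  return json;
}

// Usage, once an AAssetManager is available (the path is an example only):
//   std::string pipeline = LoadPipelineJson(assetMgr, "mnistwild/pipeline.json");
```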

A Docker file, together with the necessary resources, is also
included under the `Docker/` directory, in case you would like
to deploy your own algorithm packages.
@@ -73,10 +82,31 @@ deploy your own algorithm packages.
├── samples
│   │ Directory for all sample projects.
│   │
│   └── ufo
│   This is a sample showing a UFO "chasing" the human being
│   whoever it sees. The sample app uses an open-sourced
│   face detection model from MediaPipe.
│   ├── ufo
│   │ This is a sample showing a UFO "chasing" whichever
│   │ person it sees. The sample app uses an open-sourced
│   │ face detection model from MediaPipe.
│   │
│   ├── mnistwild
│   │ Hand-written digit recognition using a self-trained MNIST-based
│   │ inference pipeline.
│   │
│   ├── readback
│   │ A minimal demo that shows the usage of the readback APIs, which
│   │ allow an app, provided the proper camera or spatial-data
│   │ permission(s) are granted, to read tensor content back from the
│   │ SecureMR server. This demo deploys no algorithms and presents no
│   │ rendering effects: it simply calls the readback methods to obtain
│   │ the camera image and saves it to local storage (a sketch of the
│   │ saving step follows this listing).
│   │
│   ├── rubics_cube
│   │ A Rubik's Cube scanner and solver with real-time
│   │ color classification, presenting the step-by-step instructions on
│   │ the screen.
│   │
│   └── model_inspect
│   Diagnostic tool for validating serialized models
│   on device.
│
└── ...
```
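
As noted in the `readback` entry above, the last step of that demo is simply
writing the returned image to local storage. Below is a minimal sketch of that
step under stated assumptions: the readback has already produced a tightly
packed RGB8 buffer, PPM is chosen purely to avoid extra dependencies, and the
readback call itself is omitted because it is specific to the SecureMR API.

```cpp
// Sketch of the "save it to local storage" step described for the readback
// sample. Assumes a tightly packed RGB8 buffer of known width and height;
// PPM is used only because it needs no image library.
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

bool SaveRgbAsPpm(const char* path, const std::vector<std::uint8_t>& rgb,
                  int width, int height) {
  const std::size_t expected = static_cast<std::size_t>(width) * height * 3;
  if (rgb.size() < expected) return false;  // buffer too small for the claimed size
  std::FILE* f = std::fopen(path, "wb");
  if (f == nullptr) return false;           // e.g. missing storage permission
  std::fprintf(f, "P6\n%d %d\n255\n", width, height);  // binary PPM header
  std::fwrite(rgb.data(), 1, expected, f);
  std::fclose(f);
  return true;
}

// Usage (path and dimensions are examples only):
//   SaveRgbAsPpm("/sdcard/Android/data/<your.package>/files/frame.ppm",
//                rgbBuffer, 1024, 1024);
```
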
@@ -85,7 +115,7 @@ deploy your own algorithm packages.

#### (A) To run the demo, you will need

1. A PICO 4 Ultra device with the latest system update (OS version >= 5.14.0U)
1. A PICO 4 Ultra device with the latest system update (OS version >= 5.15.0)
1. Android Studio, with Android NDK installed, suggested NDK version = 25
1. Gradle and Android Gradle plugin (usually bundled with Android Studio install),
suggested Gradle version = 8.7, Android Gradle Plugin version = 8.3.2
@@ -99,12 +129,16 @@ deploy your own algorithm packages.

1. Install and configure according to the [prerequisite](#prerequisite).
1. Open the repository root in Android Studio, as an Android project
1. After project sync, you will find there are four modules detected by the Android Studio, all under the `samples` folder:
1. `pose` which contains a pose detection demo
1. `ufo` which contains a face detection demo
1. `yolo` which contains an object detection demo
1. `ufo-origin`, the same demo as `ufo`, but written using direct calls to the OpenXR C-API, with no
simplification using SecureMR Utils classes.
1. After the project sync, you will find several modules detected by Android Studio, all under the `samples` folder:
1. `pose` which contains a pose detection demo
1. `ufo` which contains a face detection demo
1. `yolo` which contains an object detection demo
1. `mnistwild` which contains a hand-written digit recognition demo
1. `readback` which contains a minimal demo showing the usage of the readback APIs
1. `rubics_cube` which contains a Rubik's Cube solver demo
1. `model_inspect` which is a utility for model validation
1. `ufo-origin`, the same demo as `ufo`, but written using direct calls to the OpenXR C-API, with no
simplification using SecureMR Utils classes.
1. Connect to a PICO 4 Ultra device with the latest OS update installed
1. Select the module you want to run, and click the launch button.

116 changes: 116 additions & 0 deletions assets/UFO/facedetector_fp16_qnn229.json
@@ -0,0 +1,116 @@
{
"version": "QNN_SYSTEM_CONTEXT_BINARY_INFO_VERSION_3",
"info": {
"backendId": 6,
"buildId": "v2.29.0.241129103708_105762",
"coreApiVersion": "2.22.0",
"backendApiVersion": "5.29.0",
"socVersion": "",
"contextBlobVersion": "3.2.0",
"contextBlobSize": 1362248,
"numContextTensors": 0,
"contextTensors": [],
"numGraphs": 1,
"graphs": [
{
"version": "QNN_SYSTEM_CONTEXT_GRAPH_INFO_VERSION_3",
"info": {
"graphName": "mediapipe_face_mediapipefacedetector_tflite",
"numGraphInputs": 1,
"graphInputs": [
{
"version": "QNN_TENSOR_VERSION_1",
"info": {
"id": 1,
"name": "image",
"type": "QNN_TENSOR_TYPE_APP_WRITE",
"dataFormat": "QNN_TENSOR_DATA_FORMAT_FLAT_BUFFER",
"dataType": "QNN_DATATYPE_FLOAT_16",
"rank": 4,
"dimensions": [
1,
256,
256,
3
],
"memType": "QNN_TENSORMEMTYPE_RAW",
"quantizeParams": {
"definition": "QNN_DEFINITION_UNDEFINED",
"quantizationEncoding": "QNN_QUANTIZATION_ENCODING_UNDEFINED"
}
}
}
],
"numGraphOutputs": 2,
"graphOutputs": [
{
"version": "QNN_TENSOR_VERSION_1",
"info": {
"id": 486,
"name": "box_coords",
"type": "QNN_TENSOR_TYPE_APP_READ",
"dataFormat": "QNN_TENSOR_DATA_FORMAT_FLAT_BUFFER",
"dataType": "QNN_DATATYPE_FLOAT_16",
"rank": 3,
"dimensions": [
1,
896,
16
],
"memType": "QNN_TENSORMEMTYPE_RAW",
"quantizeParams": {
"definition": "QNN_DEFINITION_UNDEFINED",
"quantizationEncoding": "QNN_QUANTIZATION_ENCODING_UNDEFINED"
}
}
},
{
"version": "QNN_TENSOR_VERSION_1",
"info": {
"id": 501,
"name": "box_scores",
"type": "QNN_TENSOR_TYPE_APP_READ",
"dataFormat": "QNN_TENSOR_DATA_FORMAT_FLAT_BUFFER",
"dataType": "QNN_DATATYPE_FLOAT_16",
"rank": 3,
"dimensions": [
1,
896,
1
],
"memType": "QNN_TENSORMEMTYPE_RAW",
"quantizeParams": {
"definition": "QNN_DEFINITION_UNDEFINED",
"quantizationEncoding": "QNN_QUANTIZATION_ENCODING_UNDEFINED"
}
}
}
],
"numUpdateableTensors": 0,
"updateableTensors": [],
"graphBlobInfoSize": 40,
"graphBlobInfo": [
{
"version": "QNN_SYSTEM_CONTEXT_HTP_GRAPH_INFO_BLOB_VERSION_V1",
"info": {
"spillFillBufferSize": 0,
"optimizationLevel": 0,
"vtcmSize": 4,
"htpDlbc": 0,
"numHvxThreads": 0
}
}
]
}
}
],
"contextMetadataSize": 8,
"contextMetadata": {
"version": "QNN_SYSTEM_CONTEXT_HTP_CONTEXT_INFO_BLOB_VERSION_V1",
"info": {
"dsp arch": 68
}
},
"soc model": 0
}
}
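
For orientation, the metadata above declares one 1x256x256x3 fp16 input image
and two outputs: 896 candidate boxes (`box_coords`, 16 values each) and 896
scores (`box_scores`). The sketch below only picks the strongest candidate; it
assumes the fp16 outputs were converted to float on readback and that the
scores are raw logits needing a sigmoid, which is the usual convention for
this MediaPipe detector. Decoding the 16 box/keypoint values against the
anchor grid is omitted.

```cpp
// Sketch: pick the highest-scoring anchor from the 896 raw scores declared
// above. Assumes float buffers after readback and logit-style scores.
#include <cmath>
#include <cstddef>

struct BestFace {
  std::size_t index;  // which of the 896 anchors won
  float score;        // sigmoid(logit) for that anchor
};

BestFace PickBestFace(const float* boxScores, std::size_t count /* 896 */) {
  BestFace best{0, 0.0f};
  for (std::size_t i = 0; i < count; ++i) {
    const float s = 1.0f / (1.0f + std::exp(-boxScores[i]));  // sigmoid
    if (s > best.score) best = {i, s};
  }
  return best;
}
// Row `best.index` of box_coords (16 floats) then holds the box and facial
// keypoints for the winning anchor, in the detector's own encoding.
```
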
Binary file added assets/mnistwild/mnist.serialized.bin
Binary file not shown.
112 changes: 112 additions & 0 deletions assets/mnistwild/mnist.serialized.json
@@ -0,0 +1,112 @@
{
"version": "QNN_SYSTEM_CONTEXT_BINARY_INFO_VERSION_3",
"info": {
"backendId": 6,
"buildId": "v2.29.0.241129103708_105762",
"coreApiVersion": "2.22.0",
"backendApiVersion": "5.29.0",
"socVersion": "",
"contextBlobVersion": "3.2.0",
"contextBlobSize": 8650744,
"numContextTensors": 0,
"contextTensors": [],
"numGraphs": 1,
"graphs": [
{
"version": "QNN_SYSTEM_CONTEXT_GRAPH_INFO_VERSION_3",
"info": {
"graphName": "mnist",
"numGraphInputs": 1,
"graphInputs": [
{
"version": "QNN_TENSOR_VERSION_1",
"info": {
"id": 1,
"name": "input_1",
"type": "QNN_TENSOR_TYPE_APP_WRITE",
"dataFormat": "QNN_TENSOR_DATA_FORMAT_FLAT_BUFFER",
"dataType": "QNN_DATATYPE_FLOAT_16",
"rank": 4,
"dimensions": [
1,
224,
224,
1
],
"memType": "QNN_TENSORMEMTYPE_RAW",
"quantizeParams": {
"definition": "QNN_DEFINITION_UNDEFINED",
"quantizationEncoding": "QNN_QUANTIZATION_ENCODING_UNDEFINED"
}
}
}
],
"numGraphOutputs": 2,
"graphOutputs": [
{
"version": "QNN_TENSOR_VERSION_1",
"info": {
"id": 370,
"name": "_538",
"type": "QNN_TENSOR_TYPE_APP_READ",
"dataFormat": "QNN_TENSOR_DATA_FORMAT_FLAT_BUFFER",
"dataType": "QNN_DATATYPE_FLOAT_16",
"rank": 1,
"dimensions": [
1
],
"memType": "QNN_TENSORMEMTYPE_RAW",
"quantizeParams": {
"definition": "QNN_DEFINITION_UNDEFINED",
"quantizationEncoding": "QNN_QUANTIZATION_ENCODING_UNDEFINED"
}
}
},
{
"version": "QNN_TENSOR_VERSION_1",
"info": {
"id": 371,
"name": "_539",
"type": "QNN_TENSOR_TYPE_APP_READ",
"dataFormat": "QNN_TENSOR_DATA_FORMAT_FLAT_BUFFER",
"dataType": "QNN_DATATYPE_INT_32",
"rank": 1,
"dimensions": [
1
],
"memType": "QNN_TENSORMEMTYPE_RAW",
"quantizeParams": {
"definition": "QNN_DEFINITION_UNDEFINED",
"quantizationEncoding": "QNN_QUANTIZATION_ENCODING_UNDEFINED"
}
}
}
],
"numUpdateableTensors": 0,
"updateableTensors": [],
"graphBlobInfoSize": 40,
"graphBlobInfo": [
{
"version": "QNN_SYSTEM_CONTEXT_HTP_GRAPH_INFO_BLOB_VERSION_V1",
"info": {
"spillFillBufferSize": 0,
"optimizationLevel": 0,
"vtcmSize": 4,
"htpDlbc": 0,
"numHvxThreads": 0
}
}
]
}
}
],
"contextMetadataSize": 8,
"contextMetadata": {
"version": "QNN_SYSTEM_CONTEXT_HTP_CONTEXT_INFO_BLOB_VERSION_V1",
"info": {
"dsp arch": 68
}
},
"soc model": 0
}
}
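
The MNIST graph above declares a single 1x224x224x1 fp16 input and two scalar
outputs (`_538`, `_539`), whose semantics are not interpreted here. The sketch
below only shows shaping the input; the nearest-neighbour resampling and the
[0, 1] scaling are assumptions for illustration rather than the pipeline's
actual preprocessing.

```cpp
// Sketch of shaping input for the graph above, which declares one
// 1x224x224x1 fp16 tensor. A grayscale crop is nearest-neighbour resampled to
// 224x224 and scaled to [0, 1]; both choices are assumptions for illustration,
// so match whatever preprocessing the actual mnistwild pipeline specifies.
#include <cstdint>
#include <vector>

std::vector<float> PrepareMnistInput(const std::uint8_t* gray, int srcW, int srcH) {
  constexpr int kDst = 224;
  std::vector<float> out(kDst * kDst);  // NHWC with N = 1, C = 1 collapses to H x W
  for (int y = 0; y < kDst; ++y) {
    for (int x = 0; x < kDst; ++x) {
      const int sx = x * srcW / kDst;   // nearest-neighbour source column
      const int sy = y * srcH / kDst;   // nearest-neighbour source row
      out[y * kDst + x] = gray[sy * srcW + sx] / 255.0f;
    }
  }
  return out;  // convert to fp16 at upload time, as the tensor declares FLOAT_16
}
```
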
Binary file added assets/mnistwild/mnist_app.png
Binary file not shown.