We are Digital Roll
Project Info

Our Project

Our client, Jason Robinson, is the founder of the website design company Beautymark Design Studio. Mr. Robinson enjoys playing tabletop role-playing games (TTRPGs) and wants to use technology to improve how these games are played. TTRPGs use various polyhedral dice, like those in Figure 1, to determine the outcome of a given event. Mr. Robinson has envisioned an application that reads and reports the result of every player's roll, for both in-person and online players. To do this, Mr. Robinson wants the application to use a smartphone's camera to detect every player's roll. This could solve several problems, such as maintaining the pace of the game and reducing the likelihood of dishonest rolls.

Figure 1: Polyhedral dice used in TTRPGs

To use this application, the phone will need to be propped against a common household object so the camera is angled toward a flat surface. While playing the game, players will roll their dice in the area the camera is observing. The application will use machine learning to detect each die and output how many dice were rolled, which dice were rolled, and the total sum of all the top faces.
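As a rough illustration of that output, the sketch below assumes the detector returns a list of (die type, top face) pairs; the function and field names are hypothetical and only show how the count, dice types, and total could be reported.

from collections import Counter

def summarize_roll(detections):
    """Summarize one roll from hypothetical detector output.

    detections is a list of (die_type, top_face) tuples,
    e.g. [("d20", 17), ("d6", 4), ("d6", 2)].
    """
    counts = Counter(die_type for die_type, _ in detections)
    total = sum(top_face for _, top_face in detections)
    return {
        "dice_rolled": len(detections),  # how many dice were rolled
        "dice_types": dict(counts),      # which dice were rolled
        "total": total,                   # sum of all top faces
    }

print(summarize_roll([("d20", 17), ("d6", 4), ("d6", 2)]))
# {'dice_rolled': 3, 'dice_types': {'d20': 1, 'd6': 2}, 'total': 23}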

This is an ambitious application and will require object detection with machine learning. Digital Roll's goal is to assist Mr. Robinson in creating this application by building a pipeline. This pipeline will let the people at Beautymark Design Studio use their own dataset to create ML kernels and validate them. We will use Python as our wrapper language and TensorFlow as our machine learning framework. Because of its portability and compatibility with Apple products, Mr. Robinson needs the kernels to be in Apple's Core ML format. Therefore, we will convert our model from TensorFlow to Core ML. We researched the feasibility of our project and found many sources that confirm it is achievable. Core ML has already been shown to detect pip dice, so detecting polyhedral dice should be possible given a large and diverse dataset.
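The conversion step itself is well supported; a minimal sketch, assuming coremltools 4 or later and a trained Keras model saved at a hypothetical path, looks like this:

import coremltools as ct
import tensorflow as tf

# Load a trained TensorFlow/Keras model (the path is hypothetical).
model = tf.keras.models.load_model("dice_detector.h5")

# Convert the TensorFlow model to Core ML with coremltools.
mlmodel = ct.convert(model)

# Save the Core ML kernel so it can be bundled into an iOS app.
mlmodel.save("DiceDetector.mlmodel")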


The pipeline that we create can be broken into two parts: an API (application programming interface) and a workbench. The API will take in an image and metadata from a smartphone and produce a consistent data format. The data produced by the API will be the input for the workbench. The workbench will take in that data, train a TensorFlow model, and convert it to Core ML format (Figure 2).

Figure 2: Flowchart for the pipeline
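To give a sense of the consistent format the API could produce, the sketch below bundles one labeled photo and its metadata into a single JSON record; every field name here is an assumption, not the final schema.

import json

def package_sample(image_path, die_type, top_face, device_model):
    """Bundle one labeled photo and its metadata into one record
    for the workbench. All field names are hypothetical."""
    record = {
        "image": image_path,         # path or upload reference for the photo
        "label": {
            "die_type": die_type,    # e.g. "d20"
            "top_face": top_face,    # value showing on the top face
        },
        "metadata": {
            "device": device_model,  # smartphone that captured the image
        },
    }
    return json.dumps(record)

print(package_sample("rolls/img_0042.jpg", "d20", 17, "iPhone 11"))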

For more information on our project specifications, see our Requirements Document.