Closing thoughts

Overall, I think I delivered an MVP that could later be expanded upon.

A cube spawns when the app detects the target, and the user is able to move the cube around and scale it to their needs.

The project still has a lot of room for improvement. I would have liked to explore dedicated facial recognition services such as AWS Rekognition, and to find a way to use the data returned from those services to create an AR target that could be tracked.

 

Development issues and overcoming them.

One major issue that arose in development was that building the app to test on the glasses was often a time-consuming process. As such, I decided to work with a webcam instead, so that I could simply run the Unity editor to get an idea of how the project was coming along.

This also meant that I would develop things for use with a mouse and keyboard and then find ways to translate them over to a touchscreen for use with the glasses.

 

Cube movement

With the cube now able to spawn in, I had to make it so that the cube covered someone’s face. This meant moving the cube away from the target.

 

I briefly experimented with changing the colour of the cube to determine which colour blocked someone out best, especially as colours appear semi-transparent through the glasses' display. I found that a white cube worked best.

 

For my first implementation I simply moved the cube based on key presses; later, I found a script that handles Android swipe gestures[1]. I could then use this script to detect swipes in a direction and adjust the cube's position accordingly. The only real problem I faced with this script was the interaction model: moving the cube a given distance required multiple swipes, compared to simply holding down a key.
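As a rough sketch of how the two input paths fit together (assuming the swipe script exposes static per-direction flags — the flag names here are my own illustration and may not match the gist's exact API):

```csharp
using UnityEngine;

// Sketch only: moves the spawned cube one step per input event.
// "SwipeInput" stands in for the swipe-detection script from [1]; the
// static flag names shown here are assumptions, not its exact API.
public class CubeMover : MonoBehaviour
{
    public float step = 0.05f; // distance per swipe/keypress, tuned by eye

    void Update()
    {
        Vector3 move = Vector3.zero;

        // Keyboard path, used when testing with a webcam in the editor.
        if (Input.GetKey(KeyCode.RightArrow)) move = Vector3.right;
        if (Input.GetKey(KeyCode.LeftArrow))  move = Vector3.left;
        if (Input.GetKey(KeyCode.UpArrow))    move = Vector3.up;
        if (Input.GetKey(KeyCode.DownArrow))  move = Vector3.down;

        // Touch path, used on the glasses: one swipe = one step, which is
        // why repositioning takes several swipes compared to holding a key.
        if (SwipeInput.swipedRight) move = Vector3.right;
        if (SwipeInput.swipedLeft)  move = Vector3.left;
        if (SwipeInput.swipedUp)    move = Vector3.up;
        if (SwipeInput.swipedDown)  move = Vector3.down;

        transform.localPosition += move * step;
    }
}
```

The held-key path fires every frame while a swipe fires once per gesture, which is exactly the mismatch described above.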

 

I also made it possible to scale the cube. The cube's initial size is based on the size of the target, and the target is not the same size as someone's face, so being able to resize the cube is an important quality-of-life feature if the cube is actually going to block someone's face out.
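A minimal sketch of the scaling behaviour, using illustrative key bindings (on the glasses the same calls would be driven by touch input instead):

```csharp
using UnityEngine;

// Sketch only: uniformly rescales the cube so it can be sized to cover a
// face, since the spawned size follows the target rather than the face.
public class CubeScaler : MonoBehaviour
{
    public float scaleStep = 1.1f; // multiplier applied per input event

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Equals)) // '=' / '+' key: grow the cube
            transform.localScale *= scaleStep;
        if (Input.GetKeyDown(KeyCode.Minus))  // '-' key: shrink the cube
            transform.localScale /= scaleStep;
    }
}
```

A multiplicative step was chosen over an additive one so the resize feels consistent whether the cube is currently large or small.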

 

 

[1] Ferran Bertomeu, 2019, SwipeInput, https://gist.github.com/Fonserbc/ca6bf80b69914740b12da41c14023574

 

 

What makes a good target?

Using Vuforia, I had to come up with a target to feed into the database for Vuforia to detect. This meant coming up with a satisfactory target image. On the Vuforia website there is a guide on what makes a good or bad target image[1].

 

The factors listed for a good target are: high contrast, rich detail, and no repetitive patterns.

Vuforia also provides a rating system for every target given, and shows the points of interest that the library looks at.

For example, a straight line does not make for a good target: there is very little contrast or detail for Vuforia to pick up on. As such, Vuforia gives the target a zero-star rating, with only a single point of interest.

Adding more lines does improve the rating, as they add more points of interest for the library, but this caps out at about two stars.

As such, I designed a more detailed image of my own. The picture I ended up using was:

[Image: the target I ended up using]

As can be seen, there are multiple points of interest around the text, mouth, and glasses areas; there are high levels of contrast with the white background (as recommended by the Vuforia page[1]); and there are no repeating patterns, making this a good image target to use.

 

[1]Vuforia Developer Library, 2019, Optimizing Target Detection and Tracking Stability, https://library.vuforia.com/articles/Solution/Optimizing-Target-Detection-and-Tracking-Stability#attributes

Which library to use?

In my research for an AR engine, I had a set of criteria to help assess how useful an engine would be: compatibility with Unity (since I planned on using Unity), how much support there was around the engine, and which of its features would be relevant to me.

Firstly, the main feature I was looking for was object detection, as mentioned in a previous blog post, so finding a library with that ability was the most important factor.

 

Overall, it came down to a choice between two libraries.

 

The first option was Vuforia, which is built into Unity[1]. There is also a lot of support around Vuforia in places like YouTube, and I was able to find high-quality videos[2] that guided me through implementing a target database and getting it set up and running inside Unity very quickly.

 

The other option was Wikitude, which was also compatible with Unity and also had support on websites such as YouTube.

As there was no clear winner straight away, I decided to familiarise myself with both over the course of a week. In that time I found Vuforia easier to use, and so decided to use it for my project.

 

[1] Unity/Vuforia, 2019, https://unity3d.com/partners/vuforia

[2] Playful Technology, 2019, How to create an Augmented Reality App, https://youtu.be/MtiUx_szKbI

Different methods of detecting someone.

In researching different methods of AR detection, I came across two different ways I could approach the program.

 

The first method would make use of libraries such as OpenCV, which can be used for face detection in pictures. I could take a picture using the smart glasses' inbuilt camera, use OpenCV to detect faces in the picture and cover them up, then project the edited picture to the user. This would have the drawback of producing a static image, which wouldn't be very useful for blocking someone out.

The other method involves libraries like Vuforia, which offer target tracking. I could make a target for someone to wear, and when Vuforia detects the target it spawns a cube, which the user can then move to cover that person's face. This would work in real time, with the main problems being how to get the target onto someone, how to handle moving the cube away from the target, and the fact that the user must manually move the cube to cover someone rather than the cube following the person as they move.

Ideally, the best method would be to take a picture of someone with the inbuilt camera, use OpenCV to detect faces in the photo, and have those faces uploaded to a Vuforia database as targets. This would combine both libraries and provide the best of both methods. However, this approach raises ethical concerns around uploading pictures of someone's face, which makes it a very bad idea. There is also the issue that people move their heads and bodies: face tracking would become unreliable if someone turned their head, whereas a badge on their chest would still be visible to the camera.

 

As such, I decided to go for the simpler option: find a library that supports target detection, and create a badge it can recognise, which then spawns a cube the user can move to cover up someone's face.

I’m limited by the technology of my time.

With AR being a newly emerging technology, many implementations are very basic and have drawbacks. I will be using MOVERIO glasses. These glasses run Android, and work by projecting a 0.43″ 720p display at the user's eyes to give the illusion of a screen floating in the air in front of them. With such a small display, only a small portion of the space in front of the user can be covered by the projected image.

Another limitation is that I will be reliant on the inbuilt camera, which has a very narrow field of view. This means the headset can only "see" a fraction of the space the user can see. That risks breaking the illusion: since the user can see past the projected screen, they may be able to see someone they wish to block out while the headset's camera cannot see that person at all.

 

While these limitations aren't the end of the world for the technology, and can be improved upon in future iterations of this kind of product, in its current state the user experience with AR will be very temperamental.

 

Science fiction becomes real

One feature shared among many, if not all (decent), social media platforms is the ability for individual users to block other individual users. This prevents a person from seeing or interacting with whoever blocked them. While this is currently confined to the digital world, science fiction writers like to envision this technology existing in the real world.

One way that this could be implemented is using AR, and over the course of this module I will try to create such a feature.