Hello everyone, I hope you are having a great Friday so far.
We did our 9th session today and had our usual check-in before we screen-shared our image-recognition testing projects on Snap!. I shared my program that recognises images of a daffodil or a sunflower. It was fun to do and I was happy that it worked. I also worked on my other program, which recognises images of The Simpsons characters, but it isn't completely finished yet.
In today’s session, we focused on pose detection in machine learning.
Google Creative Lab released a browser-based tool called PoseNet for real-time human pose estimation. It can recognise the locations of the following facial features and body parts: eyes, ears, nose, shoulders, elbows, wrists, hips, knees and ankles. It was created using deep learning. The best thing about PoseNet is that it works in a browser without any special software or hardware (apart from a webcam).
Afterwards, we each had a turn experimenting with a pose program on Snap! that detected the locations of 17 different keypoints on the body. I had a go when the green arrow pointed to where my left ear was. I also tried it with my mouth, but that didn't work because the mouth isn't one of the listed body parts. I enjoyed playing around with it.
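For anyone curious, the 17 keypoints PoseNet tracks are a fixed list, which also explains why the mouth didn't work. Here's a little Python sketch of that idea (the Snap! blocks themselves work differently; this is just to show the list and the lookup):

```python
# The 17 keypoints that PoseNet reports for each pose.
POSENET_KEYPOINTS = [
    "nose",
    "left eye", "right eye",
    "left ear", "right ear",
    "left shoulder", "right shoulder",
    "left elbow", "right elbow",
    "left wrist", "right wrist",
    "left hip", "right hip",
    "left knee", "right knee",
    "left ankle", "right ankle",
]

def is_detectable(part: str) -> bool:
    """Return True if PoseNet can track this body part."""
    return part.lower() in POSENET_KEYPOINTS

print(len(POSENET_KEYPOINTS))      # 17
print(is_detectable("left ear"))   # True
print(is_detectable("mouth"))      # False -- the mouth isn't a keypoint
```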
Next, Ken showed us a sample Snap! program that warns you not to touch your face while sitting at a computer, and explained how it works.
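My guess at the logic behind that program is a simple distance check between a hand keypoint and a face keypoint. This Python sketch is just my own illustration of the idea, not Ken's actual Snap! blocks, and the threshold value is made up:

```python
import math

def distance(p1, p2):
    """Straight-line distance between two (x, y) keypoints."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def touching_face(wrist, nose, threshold=80):
    """Warn when the wrist keypoint gets close to the nose keypoint.
    The threshold (in pixels) is an invented value for illustration."""
    return distance(wrist, nose) < threshold

# Made-up keypoint positions (x, y) on the stage:
print(touching_face((200, 300), (210, 120)))  # False -- hand well below face
print(touching_face((215, 130), (210, 120)))  # True  -- hand near the nose
```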
Our homework is to make our own ‘filters’ using sprites with the pose program on Snap!, e.g. drawing glasses or a nose that follow your face. I’m looking forward to trying it out.
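A glasses filter mostly comes down to placing a sprite between the two eye keypoints and scaling it to match. Here's a rough Python sketch of the maths I'm planning to use (the eye positions and the padding factor are made-up example values):

```python
def glasses_position(left_eye, right_eye):
    """Place the glasses sprite at the midpoint between the eye keypoints."""
    x = (left_eye[0] + right_eye[0]) / 2
    y = (left_eye[1] + right_eye[1]) / 2
    return (x, y)

def glasses_width(left_eye, right_eye, padding=1.5):
    """Scale the sprite a bit wider than the distance between the eyes."""
    return abs(right_eye[0] - left_eye[0]) * padding

# Made-up eye positions (x, y) on the stage:
print(glasses_position((180, 140), (220, 140)))  # (200.0, 140.0)
print(glasses_width((180, 140), (220, 140)))     # 60.0
```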
I hope you all have a lovely weekend and I will see you in the next blog post!