Challenges of the Journey and How We Overcome Them

The Critical Infrastructure team worked through a number of challenges in building SMARC. We somehow managed to overcome these challenges, as explained below.

Working across three different cities as a result of COVID lockdowns
As the smallest group in the cohort, our individual workloads were already slightly higher. This was exacerbated when COVID led to our group being the only one also scattered across three different cities.
To tackle this challenge, we communicated extensively across Teams and used shared Google Documents to bring our work together. 
We also realised the need to divide and conquer the tasks. A natural division of labour developed: Myrna headed up the physical build, design components and risk analysis; Adrian took the lead on the system life-cycle, design strategy, coding and final editing; and Julian focussed on written and analytical tasks, including the Newcast video.
As we all operate on slightly different schedules, this also created opportunities for work to be refined over the course of a day. Julian and Myrna tend to start work slightly earlier, while Adrian prefers to work late into the night. This led to a very effective system whereby any work Adrian had reviewed or started overnight could then be continued by Julian and/or Myrna in the morning.
We worked collaboratively across three different cities/states over various mediums such as:
Microsoft Teams for our main communication
Google Docs to write our notes and Google Drive to store all the files/photos/video.
GitHub for storing the code (still learning how to use this).
Miro board to build the user persona, storyboard, timeline, brainstormed ideas, initial prototype design, user flow map, etc.
Canva to design the logo and the website/user interface.
Marvel App to design the prototype for the SMARC website interface.
Arriving at an idea
At the early stages of the semester, we spent a lot of time trying to arrive at an idea for our project. The problem was that CI is such a broad field and there were so many interesting ideas out there.
We looked at water resources, water meters and even the management of old industrial infrastructure and decommissioning.
Extensive brainstorming and long conversations, including with support from Build teaching staff and the cohort, are what ultimately helped us arrive at an idea. We focussed on our values, particularly our interest in exploring the potential of a scalable CPS. This led us to consider the problem of e-waste. Our interest led us here because we were all motivated to look at how CI can improve environmental outcomes, and our research had highlighted the unsolved issues of the e-waste recycling system. The more we researched this area, the more we realised it also satisfied our intent to develop a technological intervention at small scale that could be scaled to create an entire system of positive change. SMARC achieves this as an intervention at the household scale, which ultimately creates a broader network for supporting the entire e-waste system.
Scoping the prototype
Even once we decided to focus on e-waste, we needed to narrow down the scope of the prototype to a particular aspect of the e-waste problem (our research helped us realise how complex and multi-faceted it was).
To assist in this process, we reached out to leaders across various industries to understand the infrastructure problems they faced and to identify opportunities for cybernetic interventions. Ultimately, it was our conversations with experts in the e-waste industry that helped us identify an opportunity for an elegant and scalable cyber-physical system to facilitate e-waste collection specifically. We were given great advice and insights by Warren Overton (CEO of ANZRP), Mark Fowler (CTO of CleanEarth Technologies Singapore), and Lucas Way and Anirban Ghose (Microfactory engineers from SMART, UNSW). The insights we gained into the existing e-waste recycling system helped us identify the need for an intervention targeted specifically at collection. Collection was effectively a pinch-point in the entire system which, if improved, could have significant benefits for downstream e-waste recycling processes. This led us to develop SMARC: a simple cyber-physical system which, if implemented in a supportive regulatory environment, is capable of substantially improving e-waste management in Australia.
Sourcing old telephones
For the prototype, we wanted to demonstrate SMARC's ability to recognise and process phones of all types and ages. Amongst the group we were able to source a number of more recent models, but we did not have access to many older mobile phones. To address this, we reached out to the cohort and our networks to ask people to send us old phones and help us train the model.
Julian's family sent a number of old Nokia and Samsung mobile phones, which we used. Myrna used her stash of various old iPhone models and her brother's old Sony Ericsson mobile phone. Chloe, Xuanying and Matthew Phillips also sent photos of their old mobile phones (more Samsung and Ewtto models).
Having a baby + PhD application
We were already the smallest of the teams, but the availability of labour was stretched further when my (Julian's) first baby, Enzo, was born. I was largely offline for nearly a fortnight and then slowly eased back into university work. Thankfully, the team is very supportive, and the others managed by picking up the slack and taking on more work. Needless to say, I am very grateful!
Home-schooling kids
With the lockdown happening across NSW, Myrna was stuck in Sydney with her family and had to manage online remote study as well as home-schooling her two children, plus a WFH husband. Juggling all of that was a challenge in itself on top of learning and building the SMARC cyber-physical system. To overcome this, she's been eating more chocolate to keep her going 😅
Touch screen capability, availability and compatibility
To help Myrna start the build process, the Build course teaching staff (Mina) kindly sent her a box of electronic components to use to build SMARC. One of the components was a touch screen to serve as the interface between the user and SMARC. Unfortunately, this touch screen (a 10.1-inch Elecrow) was not working and displayed rainbow barcode lines. To overcome this, she bought a new but smaller touch screen, and it works with the Raspberry Pi 3 Model B+ that Mina sent.
Coral Dev Board vs Coral USB Accelerator
Both provide an Edge TPU as a coprocessor to support the fast on-device machine-learning inferencing that prototyping SMARC demands. Myrna had never used either Coral device before, and started with the Coral Dev Board because that is what she received from Mina, following the set-up instructions here. However, she couldn't get this to work either, and was stuck at the "Flash the board" step. She consulted a few people about this issue (Matthew, Kathy and Mina) and suspected that the Coral Dev Board was also faulty/broken, just like the touch screen from the studio, as explained in the previous challenge.
So she asked for the alternative, the Coral USB Accelerator. Unfortunately, no Coral USB Accelerators were available from the 3Ai studio. However, two of our cohort members, Adrian and Chloe, had Coral USB Accelerators from their Maker Projects last semester. Myrna couldn't get one earlier because Chloe's team needed theirs for their CPS project and Adrian also needed his to test the model in Adelaide. In the end, Chloe's team decided not to use their Coral USB Accelerator, and Myrna was able to borrow it a couple of days before Demo Day, when she finally made it back to Canberra.

Myrna followed the instructions for the Coral USB Accelerator here, followed by the instructions on GitHub for the Edge TPU simple camera examples and the OpenCV camera examples with Coral. After managing to get the Coral USB Accelerator to work on the Raspberry Pi, she ran the code that Adrian sent a couple of days before Demo Day (a sketch of what that camera code looks like in spirit is below).
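For readers following along, this is a rough Python sketch of what such an Edge TPU camera loop might look like, in the spirit of the google-coral examples-camera repository mentioned above. It is not our exact code, and the model and label file names are placeholders rather than our actual files.

```python
import cv2
from pycoral.adapters import classify, common
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter

# Placeholder file names -- substitute the model exported from Teachable Machine.
interpreter = make_interpreter("model_edgetpu.tflite")
interpreter.allocate_tensors()
labels = read_label_file("labels.txt")
size = common.input_size(interpreter)  # e.g. (224, 224)

cap = cv2.VideoCapture(0)  # the Pi Camera exposed as /dev/video0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # The Edge TPU model expects RGB input at its native resolution.
    rgb = cv2.cvtColor(cv2.resize(frame, size), cv2.COLOR_BGR2RGB)
    common.set_input(interpreter, rgb)
    interpreter.invoke()
    classes = classify.get_classes(interpreter, top_k=1)
    if classes:
        label = labels.get(classes[0].id, classes[0].id)
        cv2.putText(frame, f"{label}: {classes[0].score:.2f}", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("SMARC", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```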
(Almost) Broken new touch screen and Pi Camera
When we were asked to make the video for Demo Day, Myrna was trying to film SMARC using a small cheese-board lazy Susan, borrowed from her sister-in-law, as the base. When she spun the lazy Susan, SMARC fell over and hit the wall at the front where the camera was located, and also hit the touch screen. The acrylic camera holder broke into pieces and the touch screen showed another rainbow pattern, as shown in the pictures below.

To fix this problem, Myrna was going to buy another touch screen and use Julian's Pi Camera once she was back in Canberra. But she tried the famous IT Crowd advice (turn everything OFF and turn it back ON again) and it surprisingly fixed the rainbow colours; everything was back to normal, phew 😅 As for the camera holder, Myrna couldn't put it back together even using power glue, plastic glue, super glue, etc. So she built a new camera holder using the timber she had left over from building the frame for the touch screen.
Materials choice and availability to hold the touch screen and Raspberry Pi
Originally, we were hoping to use a 3D printer or laser cutter from the Maker Space, but lockdown changed everything. Myrna was going to use an aluminium sheet to hold the frame and Raspberry Pi onto the bin, but she didn't have the right tools and equipment to cut and shape it. So she asked her neighbour, who has a workshop like a mini Bunnings under his house with many handy tools, to cut and shape the recycled timber she had around the house from a garden bed. Through this building process, Myrna developed her creative "bush" carpentry and hand-tool skills as well.
Built the Model Remotely
As described above, Myrna followed the instructions for the Coral USB Accelerator that Adrian sent, then the GitHub instructions for the Edge TPU simple camera examples and the OpenCV camera examples with Coral, and ran the code that Adrian sent a couple of days before Demo Day. Unfortunately, the code only partially worked on her Raspberry Pi: the Pi Camera ran but recognised all of the mobile phones she had as "Other".
We soon realised that we all had different mobile phones to train and test the model with. Adrian didn't have the same phone models that Myrna and Julian used to train the model, so when Adrian tested the model and built the code for the Coral USB Accelerator in Adelaide, it didn't work as expected when Myrna tested it on the Raspberry Pi in Canberra. To overcome this, Myrna modified Adrian's code based on the phone models she had, and it finally worked on the Raspberry Pi. Kathy also helped Myrna to access the Raspberry Pi remotely from her laptop over SSH through the PuTTY interface, so she could copy and paste code from the laptop without having to type it directly on the Raspberry Pi. Myrna learned to make sure the laptop and the Raspberry Pi were on the same network for this connection to succeed. Adrian also recommended and guided Myrna to use VNC to access the Raspberry Pi remotely. For Demo Day, Myrna used VNC to demonstrate how SMARC works to the audience, because she found the VNC interface much better than PuTTY.
Preparing the Video for Newcast
Preparing the video for Newcast was a real challenge.
One of the biggest challenges was striking the right balance between demonstrating what the product was and capturing the larger objectives of the system at scale. The latter was particularly hard. In the short time we had, we found it really challenging to think of creative ways to quickly and effectively demonstrate how SMARC creates broader system change; this was very hard to represent visually using the stock footage available on Envato.
We're not sure we actually overcame this challenge to the extent we would have liked. While we are happy with the video, we are not as happy as we could have been, and we think it's possible to improve on it. However, this was our first time doing something like this, and it was a valuable lesson in the difficulty of communicating conceptual ideas visually, particularly under time and resource constraints.
As we mentioned in 'things we do differently', the act of creating the video and trying to tell the story actually helped shine a light on the key attributes of the SMARC system we wanted to demonstrate. In turn, this brought new focus to our build objectives, towards the more important and persuasive elements. This was a valuable lesson in the benefits of thinking about the story from an early point in the development process.
Experience Level/Finding a way to collaborate
As is to be expected from such a diverse cohort, we each brought a different skill level to the project. As a team, we wanted to embrace this and find a way to collaborate on the machine learning part of the project. We each had assigned roles: Julian was in charge of sourcing the data, Myrna trained the model, and Adrian was responsible for writing the code to run the model.

Specific issues we found with this approach were:
Sourcing data (i.e. phone images) for the model. Our initial models had low accuracy rates due to limited data, which was mainly sourced from available online databases. We solved this (after advice from Mina and Matthew) by switching to devices we could source ourselves and taking images directly with the webcam via the Teachable Machine web interface. This change greatly increased the number of images, our flexibility in changing backgrounds to suit the SMARC unit, and ultimately the accuracy rate of the model.
End-to-end testing was complicated by this approach. We encountered situations where we were testing code against earlier versions of the model, which resulted in complications; as the story evolved, there were also occasions where the code was being tested against earlier versions of the model trained on different devices. Even when the model was updated, it could not be fully tested, as Adrian did not have all the phones (e.g. the iPhone 5c).
Developing/testing remotely from the device. The code was largely being tested away from the physical unit. We overcame this in part by having a replica setup (e.g. a Raspberry Pi and camera), but there were delays and additional errors caused by not having access to the device early in the dev cycle.
Website Development
Adrian tried many frameworks for easy web development. None proved user-friendly or provided the rapid-prototyping environment he was looking for. He did get a rudimentary website working using Django, but the time required for development would have detracted from the other tasks, especially the story and video production. Also, his previous experience was a hindrance rather than a help, in that it was difficult to unlearn how he used to do things. In the end, we decided to use the Marvel App to simulate the web experience rather than the full website he would have preferred. This will be replaced when we go to scale.
Development Environments 
Adrian struggled to get a development environment to operate consistently on his Mac. There were many issues with 'path' errors because his installation had not been methodical enough: he had installed the various parts over time, and without documenting the process he followed, it was difficult to undo those actions and revert to previous working versions. What he really needed was a 'clean' development machine, or virtual environments to isolate the development (a minimal sketch of the latter is below).
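As an illustration only (not what we actually did), Python's standard library can create such an isolated environment; the environment name below is arbitrary.

```python
# Hypothetical sketch: create an isolated virtual environment using only
# the standard library, so packages installed for a project don't touch
# the system Python. "smarc-env" is an arbitrary name.
import venv

venv.create("smarc-env", with_pip=True)

# Afterwards, activate it from a shell (e.g. `source smarc-env/bin/activate`
# on macOS) and `pip install` dependencies inside it.
```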
Performance of the Teachable Machine Algorithms
We initially had a model that included a large number of classes (10+), covering 10 different mobile phone brands and models that Julian and Myrna had collected from family and friends. Because we took 1,000+ images for each class, training the model took a very long time (more than 15 minutes). We then refined the model by reducing the number of classes down to 5 mobile phones (3 phones in Sydney and 2 phones in Canberra) so training wouldn't take too long. But because Myrna had the Raspberry Pi and all the other components in Sydney, she could only test the model using the phones she had in Sydney. So we agreed to reduce the model again and train it with only the 3 mobile phones Myrna had, to simplify both the model and the testing for Adrian.
There are 3 ways of exporting the model from Teachable Machine:
TensorFlow.js
Once you've trained the model, you download it onto your computer, and the code is available to use in a JavaScript or p5.js version. Myrna attempted this because she didn't have the Coral USB Accelerator at the beginning. She used the p5.js Web Editor to test the model, and it worked on her laptop. She then tried to apply the model on the RPi, but unfortunately the Pi Camera wouldn't work as a webcam there, only to take still photos or videos.
TensorFlow
Myrna didn't try this one because she didn't know how to use the model with Keras. Perhaps in the future she might give it a try; a rough sketch of what it might look like is below.
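For anyone curious, the Keras route would look roughly like the following. This is an assumption based on Teachable Machine's standard export, not something we ran; the file names keras_model.h5 and labels.txt come from that export, and phone.jpg is a placeholder.

```python
import numpy as np
from PIL import Image
from tensorflow.keras.models import load_model

# Assumed file names from the Teachable Machine "TensorFlow" export.
model = load_model("keras_model.h5", compile=False)
labels = [line.strip() for line in open("labels.txt")]

# Teachable Machine image models expect 224x224 RGB, scaled to [-1, 1].
image = Image.open("phone.jpg").convert("RGB").resize((224, 224))
data = np.asarray(image, dtype=np.float32) / 127.5 - 1.0

prediction = model.predict(data[np.newaxis, ...])  # shape (1, num_classes)
print(labels[int(np.argmax(prediction))])
```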
TensorFlow Lite
The trained model is converted into an Edge TPU format and then downloaded and saved on your laptop, or straight onto the Raspberry Pi if possible. The code snippets are available in Android and Coral versions. Myrna hadn't tried the Android version because she doesn't have an Android device, and the Coral version didn't work when she first tried to run it. Adrian then recommended she follow the instructions from the following, in order (a minimal sketch of the end result appears after the links):
https://coral.ai/docs/accelerator/get-started/
https://github.com/google-coral/examples-camera
https://github.com/google-coral/examples-camera/tree/master/opencv
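Put together, those instructions lead to something like the following minimal sketch for classifying a single still image on the Edge TPU with the pycoral library; again, the file names are placeholders rather than our actual files.

```python
from PIL import Image
from pycoral.adapters import classify, common
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter

# Placeholder names for the Edge TPU model and labels from Teachable Machine.
interpreter = make_interpreter("model_edgetpu.tflite")
interpreter.allocate_tensors()
labels = read_label_file("labels.txt")

# Resize the test image to the model's expected input size and run inference.
image = Image.open("phone.jpg").convert("RGB").resize(common.input_size(interpreter))
common.set_input(interpreter, image)
interpreter.invoke()

for c in classify.get_classes(interpreter, top_k=1):
    print(labels.get(c.id, c.id), f"{c.score:.3f}")
```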

Then we had another challenge with this new model. Adrian was working on the code for the Coral using the model that Myrna trained, but unfortunately he didn't have the same mobile phones to test whether the model could identify/classify the phones correctly. With time pressing, Myrna modified the code Adrian sent on the morning of Demo Day, saved it, and ran it on the Raspberry Pi, and it amazingly worked! We were so happy and couldn't be more satisfied or proud of what we achieved together, collaborating across three different cities/states.