Final Reflection

Welcome to the final reflection of my project! This has been quite a journey, and I have learned a lot. I faced and overcame many challenges that I did not think I would encounter. As a computer science major who had only ever dealt with software, I remember wondering how well I would handle the hardware side of my project. There are many physical mechanisms whose difficulty we never consider until we have to design them ourselves. One design I remember vividly from my project, one that made me realize how challenging hardware design is, is the hopper component. I had initially started with a water bottle for my hopper, but then designed a cleaner version in Tinkercad. That design was difficult to perfect, and it made me appreciate how much time engineers spend getting even a single water bottle right. As you can see, there were many small bumps in the road that showed me a different perspective on the mechanisms in my project. I remember when I first said I wanted to create a voice-controlled dispenser, I had no clue what materials or details would be entailed. The first week we were told to make a list of materials and a budget, and this did not come easily to me because I couldn't think of what I wanted to use; the opportunities were endless with all the resources in the Innovation and Creativity Lab (ICL). I was glad that we had many chances to practice our elevator pitch, because it pushed me to think of possible obstacles and to approach my problem from different perspectives. I initially had not even thought about using a voice-controlled method; I had been considering a button approach.

This project was very important to me and close to home because I was inspired by my father and uncle, who are both visually impaired. Living with my father, I have seen the struggles of doing basic everyday things like cooking; it can get quite messy when measuring everything out. That is why I decided I wanted to work on an impactful project that could serve as an everyday application. My project is a kitchen tool that can be vocally prompted to dispense a desired measurement.

During this project, there were many highs and lows. The first obstacle I encountered was choosing an existing voice assistant and programming a skill for it. There are many voice assistants out there; I ended up using an Amazon Alexa. I was worried about how much I would be able to implement, this being my first time, but programming my skill turned out to be quite easy because there were so many fantastic tutorials available. I will say there were small technical details to watch out for when configuring the skill, because those were what kept me from testing locally or publishing the skill to use on my actual device. Even so, it didn't take long before I scrapped this implementation. I ran into the issue of connecting the Alexa to an Arduino, which I needed in order to drive a stepper motor through a motor driver, the part that actually makes things move. There was an existing skill that connects the two, but it was taking me too long to figure out how to integrate that Alexa-to-Arduino skill with my own Voice Measure skill. I was fortunate that a peer of mine was also working with a voice assistant, so I was able to switch smoothly to the Google Speech-to-Text API on a Raspberry Pi. The Raspberry Pi was more convenient overall because I could program the GPIO pins directly to control the motor, and it offered faster processing. After this challenge, I really struggled with sizing my 3D prints to fit together nicely. I also had problems with the motor acting up over time: it would fall out of sync with the commands coming from the Raspberry Pi, and I had to reset the Pi multiple times for this reason. I would have liked to attach a HAT to the Raspberry Pi, but it was coming down to the last few days and I was unsure how many changes it would require and how long the integration would take.
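To give a flavor of the motor side, here is a minimal sketch of stepping a motor through a STEP/DIR-style driver from the Pi's GPIO pins. The pin numbers, timing, and step count are illustrative assumptions, not the exact values from my build.

```python
# Minimal sketch: pulsing a STEP/DIR stepper driver from Raspberry Pi GPIO.
# Pin numbers, timing, and steps-per-revolution are illustrative assumptions.
import time
import RPi.GPIO as GPIO

STEP_PIN = 20   # assumed wiring to the driver's STEP input
DIR_PIN = 21    # assumed wiring to the driver's DIR input

GPIO.setmode(GPIO.BCM)
GPIO.setup(STEP_PIN, GPIO.OUT)
GPIO.setup(DIR_PIN, GPIO.OUT)

def rotate(steps, clockwise=True, delay=0.001):
    """Send `steps` pulses to the driver; each pulse advances the motor one step."""
    GPIO.output(DIR_PIN, GPIO.HIGH if clockwise else GPIO.LOW)
    for _ in range(steps):
        GPIO.output(STEP_PIN, GPIO.HIGH)
        time.sleep(delay)
        GPIO.output(STEP_PIN, GPIO.LOW)
        time.sleep(delay)

try:
    rotate(200)  # e.g., one full revolution on a typical 200-step motor
finally:
    GPIO.cleanup()
```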

Overall, I am very proud of my project and happy that we had the liberty to choose our own projects. I learned not only about technologies but about project management as well, a much-needed skill, especially since I will have my senior CS capstone in two years. I will definitely carry my managing and budgeting skills into the future. I also built my presentation skills, which is something we all need to practice. During the weekly presentation updates and elevator pitches, I grew much more comfortable owning what I am working on and learning. I also learned a lot of hardware troubleshooting and used some technologies that I had not been exposed to before. I will take everything I learned this summer and stretch my abilities to help in the ICL as the assistant once again.

My future plans for this project are to continue it while I work at the ICL this coming year. I hope to implement a text-to-speech feature so that my dispenser assistant can speak back to the user, and I also hope to support liquids. I was successful in calibrating some solids and adding conversion tables to go between units, but there is still more I would like to do, and I hope I have the time to continue and to talk to more people about other useful features I could implement.

Lastly, I just want to say that I feel blessed to have had the opportunity to be a part of DTSF this summer. I want to thank Eric and Josh, our supervisors, for challenging and helping us along the way. A big thank-you as well to my DTSF peers, who made this summer one of the best yet and were always so willing to help at any given time. That being said, thank you for following along with the progress of my summer project.

The eighth week: Reflection and goodbyes

Welcome back to my last project update. I still can't believe the fellowship is over; honestly, it feels unreal thinking back to when we started and how much we've achieved by the end. I became aware of DTSF my freshman year through my international mentor and never thought I was capable of carrying out an independent project with so little experience using these technologies. But now that I have completed my project, I feel proud and grateful for how far I've come. From dancing with a VR robot during lunch breaks to debugging the same code continuously for an entire week, I can safely say it has been a tough but fun ride.

The project kicked off with brainstorming ideas. I knew from the start that I wanted to work with drones; I've always wanted to fly one and thought it would be great if I could program it and make it more useful. I came across an article that mentioned mind-controlled drones, and the idea of thinking about moving a drone in a certain direction and having it do exactly that seemed pretty incredible. However, it was too complex and time-consuming for me to execute, so I settled on creating a voice-controlled drone instead. Now, let's talk about how I turned this project idea into reality!

DJI Tello EDU

Firstly, I used the DJI Tello EDU drone for my project because it is easily programmable in Python. It has pre-built APIs that are flexible to use, and the drone itself is small enough to be tested indoors. I then wrote a program to convert speech commands into text commands so that the drone could receive and execute them. To do so, I used Google's Speech-to-Text API. While it is a decent API, it has some disadvantages too: sometimes it gives wildly inaccurate predictions, and the drone receives the wrong command. Below is an example where I spoke the voice command 'take-off' and it thought I said "steve-o", "Eva", or "takeoff".

To solve this issue, I used an external microphone. I also added a noise-cancellation step so that the input is cleaner and the API can predict more accurately.
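For anyone curious, the listen-and-transcribe loop can look like the sketch below, using the SpeechRecognition package, where `adjust_for_ambient_noise` plays the role of the noise-handling step; my actual script's details differed.

```python
# Sketch of a listen-and-transcribe loop with the SpeechRecognition package.
# The ambient-noise adjustment calibrates for background noise before listening.
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.Microphone() as source:                       # external USB mic, if default
    recognizer.adjust_for_ambient_noise(source, duration=1)
    print("Listening...")
    audio = recognizer.listen(source)

try:
    command = recognizer.recognize_google(audio)      # Google's free recognizer
    print("Heard:", command)                          # e.g. "takeoff" ... or "steve-o"
except sr.UnknownValueError:
    print("Could not understand the audio.")
```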

Next step: operating the drone via laptop. I used the official DJI Tello API to control the drone from my laptop. This part didn't take much time, and I thought it was the easiest one; as you keep reading, you will realize how wrong I was. At this point, I realized that my laptop needed an internet connection for the speech-to-text API to work, but its Wi-Fi also needed to connect to the drone, and both could not happen at the same time. So I used a USB Wi-Fi adapter to create a dual Wi-Fi interface, and the issue was resolved.

Finally came the moment of truth: feeding voice commands to operate the drone. I tried passing a few commands, but sadly the drone refused to take any command other than "takeoff". It was as if the drone was confused about what to do after the first command finished executing. My voice command took a while to be processed into a text command, so the drone couldn't stabilize until the next command came through. I had to completely rewrite my code and import a different API than the official one to solve this issue, which took quite a bit of time.

At this point, my drone could follow the voice commands sent to it, but I had to wait for one command to finish executing before I could pass another. I wanted my drone to take continuous voice input, build a queue of commands, and execute them one by one. I did this using a programming concept called threading, which lets two or more blocks of code run at the same time. This took a huge chunk of the project time because I had limited experience with threading, especially in Python. Once I completed this part, my drone had a background program continuously taking voice input while the main program executed the commands.
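In outline, the design looked something like the sketch below; `listen_once()` and `execute()` stand in for the actual speech and drone-control code.

```python
# Sketch of the two-thread design: a background listener pushes recognized
# commands onto a queue while the main loop pops and executes them.
import queue
import threading

commands = queue.Queue()

def listener():
    """Background thread: keep listening and queueing commands."""
    while True:
        text = listen_once()     # placeholder: blocks until a phrase is recognized
        commands.put(text)       # queue it without interrupting the flight

threading.Thread(target=listener, daemon=True).start()

# Main thread: execute queued commands one by one.
while True:
    cmd = commands.get()         # waits for the next queued command
    execute(cmd)                 # placeholder, e.g. a call like tello.move_forward(50)
```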

I was almost done with my project, but I wanted to add more functionality to make the product more user-friendly and interactive, so I decided to add text-to-speech as well. This allows the user to ask questions such as "What's the battery life?" or "What's the internal temperature?" and have the drone speak back through an external speaker. In addition, the user can provide their own value for commands, such as "go forward by 60" or "go up by 100", instead of the drone executing every command with the default value of 50. Putting everything together, the drone could now process the commands fed to it by the user and even talk back in certain situations.
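A rough sketch of the talk-back idea follows, shown with the pyttsx3 text-to-speech package and the djitellopy Tello API for illustration; my project used a different Tello API, so treat the method names as stand-ins.

```python
# Sketch: answer spoken questions and accept user-supplied distances.
# djitellopy and pyttsx3 are used for illustration only.
import pyttsx3
from djitellopy import Tello

engine = pyttsx3.init()
tello = Tello()
tello.connect()

def handle(command):
    """Dispatch a recognized text command."""
    if "battery" in command:
        engine.say(f"Battery is at {tello.get_battery()} percent")
        engine.runAndWait()
    elif command.startswith("go forward"):
        words = command.split()
        # use the user's value if one was spoken, otherwise the default of 50
        distance = int(words[-1]) if words[-1].isdigit() else 50
        tello.move_forward(distance)
```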

This was the end of my original project idea. However, I had about a week and a half left to add more functionality and make the project even better. I wanted my drone to be able to scan QR codes and process the commands encoded in them. I was able to create two programs, one to encode and one to decode QR codes. The problem arose when I tried taking a video stream from the drone: the thread handling the video stream did not communicate well with the previous threads, and the program crashed entirely. I spent a week trying to figure this out but couldn't, and I was running out of time, so I started preparing for my final presentation instead.

The final “take-off”

The presentation went great, and we all got really good responses from the audience. As for the future, I would love to keep working on this idea and take it further by completing the QR detection part, adding object-recognition functionality, and eventually creating a prototype of a device that would help physically challenged people move objects around their apartment.

At last, I would like to thank all my fellow friends, along with Josh and Eric, for their amazing support throughout the project. I learned so much this summer and hope to continue using the lab to expand my knowledge of different technologies.

Eighth Week – Final Reflection

This week I did a final reflection on my eight weeks of summer research work: rebuilding a pulse oximeter. A pulse oximeter is a non-invasive device, used mostly in the medical field, that checks the oxygen level in a person's blood. A reading of 95 to 100 percent usually means the person is healthy; for anything lower, it is advisable to consult a medical practitioner.
Picture of a Pulse Oximeter

Recently, during the COVID-19 pandemic, a disproportionately high number of Black patients died from the virus compared to White patients. Apparently, when Black patients visited the hospital and had their oxygen level checked with a pulse oximeter, the device often reported that they were healthy when in fact they were not. The Food and Drug Administration (FDA) took note of this high death toll among Black patients and investigated, concluding that 1 in 10 pulse oximeters gives wrong readings for people of color.

Picture of a Pulse Oximeter reading.

Researchers at the University of Michigan Hospital studied acute hypoxemia in a large cohort of 10,000 patients, both Black and White. They tested for occult hypoxemia: an arterial oxygen saturation below 88% despite a reading of 92 to 96% on pulse oximetry. The study compared pulse oximeter readings against oxygen saturation measured directly in arterial blood; this is essentially the definitive test, in which a blood sample is drawn from the patient and analyzed in the lab for its oxygen level. Black patients had nearly three times the frequency of occult hypoxemia, undetected by pulse oximetry, as White patients.

Rate of Acute Hypoxemia

HOW DOES THIS DEVICE WORK?

A commercial pulse oximeter is made up of a light sensor and two lights: a red LED at 660 nanometers, in the visible spectrum, and an infrared LED at 940 nanometers, just beyond it.

Depth of Light Penetration.

The picture above shows why only red and infrared light are used: those wavelengths can penetrate past the tissues and reach the arteries, where blood flows from the heart.

Deoxyhemoglobin is the form of hemoglobin without bound oxygen and is written Hb. Oxyhemoglobin is the form of hemoglobin carrying oxygen and is written HbO2. The graph below shows the absorption of each as a function of wavelength for red and infrared light: deoxyhemoglobin absorbs more red light, while oxyhemoglobin absorbs more infrared light.

The Absorbance of Hb and HbO2 by Red and Infrared Light.

A group of doctors from Jerusalem College of Technology (JCT) ran a project to build a pulse oximeter and came up with the following data and formulas. The graph below is a photoplethysmography (PPG) trace, which uses light to measure blood flow; the patient's heartbeat is read from it, measured from trough to peak.

A Photoplethysmography graph

AC is the trough-to-peak amplitude, and DC is the mean value of the pulse, basically the average level (not to be mistaken for distance). The bigger the AC, the more accurate your result tends to be.
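For reference, the standard way pulse oximetry combines these two components is the "ratio of ratios"; this is my reconstruction of the idea, not the exact figure from the JCT paper.

```latex
% Ratio of ratios: the pulsatile (AC) component normalized by the steady (DC)
% component at each wavelength; SpO2 calibration curves are functions of R.
R = \frac{AC_{\mathrm{red}}/DC_{\mathrm{red}}}{AC_{\mathrm{IR}}/DC_{\mathrm{IR}}}
```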

The formula for Extinction Coefficient of Hb and HbO2 at various wavelengths.

The extinction coefficient is the absorbance divided by the concentration and the path length; roughly, it measures how strongly light is absorbed as it passes through. This formula has been labeled questionable because these values cannot be determined from the data collected.
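As a reconstruction from the Beer-Lambert law, which matches the description above though it may not be the paper's exact notation:

```latex
% Beer-Lambert law: absorbance A equals the extinction coefficient epsilon
% times concentration c times path length l, so epsilon = A / (c * l).
A = \varepsilon \, c \, \ell
\quad\Longrightarrow\quad
\varepsilon = \frac{A}{c \, \ell}
```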

Path Length formula

This explains the theory behind scattered and direct light when it passes through something. When light passes through a person's finger, the measured output depends on how thick that finger is: the thicker the finger, the less light makes it through, because more of it scatters, and vice versa.

This is the overall formula with path length correction

This is essentially the combination of the extinction-coefficient and path-length formulas. It is assumed to be a more accurate formula for calculating peripheral capillary oxygen saturation (SpO2), an estimate of the amount of oxygen in the blood.

The Empirical Formula

This is an empirical formula. K1 through K4 are not derived numbers, and they are not looked up in the literature: the authors literally collected people's pulse ratios and SpO2 values until they had enough data to fit the constants. There is no fundamental science behind this formula; it is just a set of numbers that works.
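Empirical SpO2 calibrations in the literature often take a rational form like the following, with R the ratio of ratios defined earlier; I am reconstructing the general shape here, since the paper's actual constants were simply fitted to volunteer data.

```latex
% General shape of an empirical calibration: K1..K4 are fitted constants,
% not derived from physics; R is the ratio of ratios.
SpO_2 = \frac{K_1 + K_2 \, R}{K_3 + K_4 \, R}
```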

Apparently, every formula derived before this empirical one was thrown away, and they ended up with the empirical formula alone. This raises a lot of questions: we do not know the extinction coefficient or the path length, both of which seem important in building a pulse oximeter.

Melanin Light Absorption Graph

Hold on to the graph above for the explanation to come: it shows that melanin absorbs red light while barely absorbing infrared. In other words, the red-light signal is affected by how much melanin a person has, and the infrared signal is not.

DESIGN OF PULSE OXIMETER

My intention was to build a pulse oximeter similar to the ones used commercially, using a 3D printer. This turned out to be a failure, specifically because of the hinges that connect the LED holder and the sensor holder. I spent a lot of time trying to perfect this design but couldn't. As Prof. Eric Remy would say, "We are too focused sometimes on one particular goal; we forget that there is more to do and achieve."

My failures using a 3D printer to build the commonly used commercial pulse oximeter.

So I moved on from this design and made an All-in-One pulse oximeter that contains the light sensor and the LEDs. Below are my designs in the Tinkercad software and the 3D prints.

Tinkercad design of All-in-One pulse oximeter

Tinkercad design of All-in-One pulse oximeter

3D printer and setup of pulse oximeter design
Readings taken from the All-in-One pulse oximeter.

DATA COLLECTION

Two types of data were collected: visible (red) light data and infrared light data. As I stated earlier, the bigger the trough-to-peak distance, the more accurate the reading tends to be.

The data shows that the Black male's visible (red) light graph is difficult to read; you can barely get any information from it because melanin absorbs red light, just as the melanin absorption graph pointed out. That is why the trace looks unstable and the AC (trough-to-peak) amplitude is really low, which means a low signal-to-noise ratio. In the infrared light, the Black male's graph suddenly has a much better signal-to-noise ratio, which supports the point from the melanin absorption graph that melanin barely affects infrared light.

CONCLUSION

Commercial pulse oximeters do not measure two major factors, which can make their readings misleading, especially for people of color. These factors are:

  • The thickness of one's finger, and
  • The percentage of melanin present in a person's skin.

Therefore, I do not believe these devices can be trusted, given their inaccuracy. I would advise people checking their oxygen level, especially now that the device is so heavily used because of COVID-19, to visit the hospital and opt for the definitive test, in which the patient's blood is drawn and taken to a lab for analysis and accurate results.

APPRECIATION

I thank God for the completion of this research project, and I appreciate my DTSF supervisors, fellow interns, and family. I wouldn't have been able to accomplish this much without the love and care I got from everyone. Thank you! Thank you, Gettysburg College; it has been an honor to be a part of this family.

WHAT A TEAM!!!

REFERENCES

Sjoding, Michael W., et al. “Racial Bias in Pulse Oximetry Measurement.” New England Journal of Medicine, vol. 383, no. 25, Dec. 2020, pp. 2477–78. Taylor and Francis+NEJM, doi:10.1056/NEJMc2029240.

Yossef Hay, Ohad, et al. “Pulse Oximetry with Two Infrared Wavelengths without Calibration in Extracted Arterial Blood.” Sensors (Basel, Switzerland), vol. 18, no. 10, Oct. 2018, p. 3457. PubMed Central, doi:10.3390/s18103457.

Zonios, George, et al. “Melanin Absorption Spectroscopy: New Method for Noninvasive Skin Investigation and Melanoma Detection.” Journal of Biomedical Optics, vol. 13, no. 1, International Society for Optics and Photonics, Jan. 2008, p. 014017. www.spiedigitallibrary.org, doi:10.1117/1.2844710.

Week the Last: A Reflection

When I accepted the DTSF project, I expected a fun and enlightening project, probably challenging, but nothing too crazy. I was wrong, VERY wrong. This project has been an interesting one, to say the least. I had considered two ideas and went with the one that I thought would take eight weeks instead of four; it turns out the one I chose would have fit better into ten weeks, and the other probably would have taken the whole eight. Looking over all my blog posts, it suddenly hits me how little time I actually had for this project: seven weeks, not counting time lost to workshops, presentations, and other interruptions. It's frankly insane how simultaneously long and short these eight weeks have felt, and I know I'm not alone in these feelings.

This summer has been all-around crazy, to be honest. Not just the project, but everything I did outside of it while on campus, and what I did and will do between DTSF and the semesters. I've learned at least a comparable amount to what I would have learned in a class during a semester: a lot about how 3D printing works, how to program an Arduino, about prototyping, and about how finicky electronics can be.

When I first started the project, I wasn't really sure about anything. I knew what I was going to do, but had only a vague idea of how. I spent a lot of the first few days just figuring out what to do. I kind of wish we had had meetings of some kind between the end of the last semester and DTSF, BUT that would have cut into the little break time we had between the two. That said, I figured out enough that I was ordering the base components by the middle of the week, and we had enough of the parts I would use on hand that I could quickly start on Arduino programming. It was great; I enjoyed programming the Arduino and made a lot of progress. The next few weeks went something like this: planning some things out while making progress on other parts, mostly whatever could be 3D printed or was already around.

Then, around week five, I started to reach some of the big humps. I'd already had some bumps in the road with the project, but they soon started getting bigger and bigger, some because of how far along I was, others out of pure coincidence. Week six's switch failure was enough to leave a bad taste in my mouth for a day or two, and I think it sums up the challenges of working with hardware: debugging is slower, parts you can discount in programming can break everything just by existing, and there is no compiler telling me exactly what I did wrong.

So, things I could have done better and what I've learned. Chief among them would have been not delaying things as much as I did, mostly on the hardware side. I've learned that it's fine to throw something out, whether a junk part or a whole attempt, and start over. I've learned some of the tricks you can use to MacGyver together a solution to a project problem. I am now convinced that tutorials are the greatest thing we as a species have created; I now prefer reading a tutorial before diving in, instead of just going in head first. For example, following a tutorial and then doing the same thing again with a small new complication really helped me get comfortable with Arduino programming. And most of all: if you have a problem, feel free to try multiple solutions, maybe even at the same time, if you can keep them from getting mixed up with each other.

Close to the end, I started to come up with a lot of alternatives for where I could have taken the project. I realized that a virtual reality implementation would have been quite something, and fairly doable compared to the physical idea I'd gone with. The sheer flexibility it would have given me would have been worth it: I could have changed the scale of everything on a dime, had the cubes be anything I could imagine, and given the cubes millions of colors. Just some thoughts for future work.

The only part of the project that was not enjoyable was the last week. I was rushing to get as much done as I could, mostly because it took me forever to realize that the goal for the summer was not necessarily to finish, given how little time we had. I'm proud of what I managed to create this summer; the individual pieces all work quite well, even if they don't necessarily work well together yet.

We'll all keep working on these projects to some extent, though probably not until the semester ends, for obvious reasons. I just need to figure out what to work on next, but I've got time. Anyway, thank you for reading, and see you later.

7th week:

Welcome to almost the end of the project! This has been a bittersweet journey with some failures and some successes, but I am proud to say that I have definitely learned a lot. Especially patience.


I spent this entire week working on making my drone follow commands encoded in a QR code. I started by writing a program that could create a QR code out of text. I had tons of installation issues and errors in the terminal; after scouring the internet and following about 100 Stack Overflow posts, I was finally able to solve them by installing the Visual C++ Redistributable package. Then, Josh helped me create some QR code stickers using a vinyl cutter. After this, I moved on to writing a program to decode the QR code. It took me a while to figure out how to get a video stream from the drone, since I was using EasyTello as my API instead of the official one, so there wasn't a lot of documentation out there that I could follow. I tried copying functions from the official API, but that didn't work; after trying different approaches, I finally figured out how to get a stream. Even then, the video was laggy and not very clear. I decided to try running the QR decoding alongside everything else to see how it would go, but the program collapsed and did not respond well to the other threads that were running.
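For a sense of how small the QR pieces themselves are, here is a sketch of the encode and decode steps using the qrcode package and OpenCV's built-in detector; the file name and the command string are made up, and my actual decoder ran on drone video frames rather than a saved image.

```python
# Sketch: encode a command into a QR image with `qrcode`, then decode it
# from an image with OpenCV's QRCodeDetector. File name and command string
# are illustrative.
import cv2
import qrcode

# --- encode: turn a text command into a QR image ---
img = qrcode.make("go forward 60")
img.save("command.png")

# --- decode: read the command back out of a frame ---
detector = cv2.QRCodeDetector()
frame = cv2.imread("command.png")   # in flight, this would be a video frame
text, points, _ = detector.detectAndDecode(frame)
if text:
    print("Decoded command:", text)
```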


Debugging the QR code scanning took a lot of time, so I decided to let it go for a while and start focusing on preparing for the presentation. I took a lot of videos and edited them, and we had some practice sessions to improve our presentations.

Seventh Week – Analyzing Data

This week centered on analyzing the data I collected and referring to the papers I found in week six. I analyzed different races and genders. The papers had some interesting and scientifically questionable findings. First, after analyzing some data, I noticed a big discrepancy between the Black male and both the White male and the Hispanic female. It was hard to get reasonable data from the Black male because there was no significant distance from trough to peak. According to one of the papers (Yossef Hay, Ohad, et al.), an accurate result requires a significant trough-to-peak distance, represented by AC in the ratio-of-blood-flow formula. This raises questions about what factors could make the readings less accurate for a person with dark skin. My hypothesis was that melanin plays a major role.

I researched my project further and acquired a lot of data to compare and draw conclusions from. In particular, I studied research in which a group of doctors from Jerusalem College of Technology (JCT) built a pulse oximeter for babies, deriving several formulas:

  • Extinction Coefficient Formula
  • Path Length Formula
  • Correctional Formula
  • Empirical Formula

First was the ratio-of-blood-flow formula, which is basically the trough-to-peak distance divided by the average PPG (lux) value. The extinction coefficient formula involved the extinction coefficients of deoxyhemoglobin and oxyhemoglobin, which I would say raised some doubts; the path length formula explained scattered versus direct light; and the correctional formula merged the extinction coefficient and path length together. The empirical formula used constants fixed from their samples. These formulas were used to design a pulse oximeter, and they are questionable because commercial pulse oximeters do not account for two very major factors that could affect the reading of how much oxygen is in the hemoglobin. These factors include:

  • Melanin affects how much light is absorbed, based on melanin absorption spectroscopy.
Melanin Absorption Spectrum
  • Finger thickness matters because everyone's fingers are different sizes, and when light passes through a finger it scatters differently depending on how thick the finger is, which affects the light output.

Now, with these factors in mind, has any pulse oximeter been invented that measures them before giving data to its users? I don't think so! This medical device is scientifically questionable.

Sixth Week – Collection and Analysis of Data

I collected up to 20 samples from people of different races and genders. Reading through the data, there were some unpleasant results, due to a few factors.

Firstly, we noticed that the red LED overheats over time and tends to give inaccurate readings, while the infrared LED does not, so we added a breadboard with a 440-ohm resistor to avoid overheating and stabilize the readings. Secondly, after some research on commercial pulse oximeters, we noticed that they go through a cycle of red on/infrared off, red off/infrared on, and both off, all in under one second. This was a challenge for us because our sensor (TSL2591) could not read that fast. We considered increasing the integration time and gain, but the consequence would have been reduced reading accuracy. So we wrote code that reads 5 times per second, though what we really wanted was 10 times per second; a sketch of that loop follows the photo below.

Photo including a breadboard with a 440-ohm resistor
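Here is roughly what that sampling loop looked like, assuming a Raspberry Pi with Adafruit's CircuitPython TSL2591 library and the two LEDs on GPIO pins; the pin numbers and timing are examples, not our exact values.

```python
# Sketch: alternate the red and infrared LEDs and read the TSL2591 between
# toggles. Pin numbers and timing are illustrative assumptions.
import time
import board
import adafruit_tsl2591
import RPi.GPIO as GPIO

RED_PIN, IR_PIN = 17, 27                  # assumed wiring
GPIO.setmode(GPIO.BCM)
GPIO.setup([RED_PIN, IR_PIN], GPIO.OUT)

i2c = board.I2C()
sensor = adafruit_tsl2591.TSL2591(i2c)
# shorter integration time trades accuracy for speed
sensor.integration_time = adafruit_tsl2591.INTEGRATIONTIME_100MS

def sample(led_on, led_off):
    """Light one LED, darken the other, and take a reading."""
    GPIO.output(led_on, GPIO.HIGH)
    GPIO.output(led_off, GPIO.LOW)
    time.sleep(0.1)                       # let the sensor integrate (~5 reads/sec)
    return sensor.lux

while True:
    red_reading = sample(RED_PIN, IR_PIN)
    ir_reading = sample(IR_PIN, RED_PIN)
    print(red_reading, ir_reading)
```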

Towards the middle of the week, I turned my attention to researching published articles and journals on pulse oximeters. Below are some of the cited articles and journals I studied.

Sjoding, Michael W., et al. “Racial Bias in Pulse Oximetry Measurement.” New England Journal of Medicine, vol. 383, no. 25, Dec. 2020, pp. 2477–78. Taylor and Francis+NEJM, doi:10.1056/NEJMc2029240.

Yossef Hay, Ohad, et al. “Pulse Oximetry with Two Infrared Wavelengths without Calibration in Extracted Arterial Blood.” Sensors (Basel, Switzerland), vol. 18, no. 10, Oct. 2018, p. 3457. PubMed Central, doi:10.3390/s18103457.

Zonios, George, et al. “Melanin Absorption Spectroscopy: New Method for Noninvasive Skin Investigation and Melanoma Detection.” Journal of Biomedical Optics, vol. 13, no. 1, International Society for Optics and Photonics, Jan. 2008, p. 014017. www.spiedigitallibrary.org, doi:10.1117/1.2844710.

Week 7: The last mile

Hello and welcome to the last actual week of my DTSF project. This week was the most intense, between the presentation and the simple fact that working on these projects during the semester is going to be impossible.

I learned this week that electromagnets can get hot enough to melt plastic, specifically hot enough to melt PLA, the plastic I've been printing all my parts out of. This only happened now because, until all of this week's testing, the magnet had only ever been on in short bursts. I reprinted the piece holding the magnet in a different plastic, PETG, which should be comfortable at the magnet's temperatures. After that, it turned out that, because the two plastics behave differently at these temperatures, my PLA parts no longer moved smoothly against the PETG parts. After reprinting some more of the PLA parts, plus some sanding and lubrication of a sliding part, everything was working again.

The plastic melted by the magnet

I spent most of the week calibrating the movement of the assembly. We took the base apart several times this week in an attempt to add some balance to it. This whole part has been quite annoying because I keep running into lots of little bugs; a lot of the time, the code just doesn't behave the same way from test to test. The magnet was also quite selfish, only working part of the time, mostly because of a grounding issue, on top of melting the PLA as described above.

This part of the project has been exhausting. I barely have any time left, and I'm scrambling to get done a lot less than what I had planned. Every day something breaks. Seriously, I am not exaggerating: something breaks on a daily basis. Sometimes the Arduino or my PC just needs to be restarted. Other times, I or someone else notices something that is breaking down, or soon will, and I have to take an hour or two to fix it. And sometimes it's something really bizarre that takes the whole day to sort out. A good example is when my relay for the electromagnet stopped working: it would deliver power to the magnet, but the magnet stopped taking it, or kept taking it, since sometimes the magnet would keep sticking long after the power had been cut off. After testing different relays and breadboards and checking everything with a multimeter, we found that the magnet worked when touching something conductive. It turned out the wire completing the circuit had died.

I don’t have any major regrets about the project and I feel like I’ve done a lot and learned a lot, but it’s just tiring at this point. See you in the reflection post.

Week 7: Calibrating

Week 7, counting down the days to our final presentation. This week I had more prints to make. I made copies of some parts I already had, to have backups, and printed more spirals because I had to experiment with ways to get the spiral and the metal rod to fit tightly. At first, I put in a nail to fasten the 3D-printed resin spiral to the metal rod, but it kept slipping, since the metal rod has ridges (it is itself a spiral). I ended up putting glue on the metal rod and sliding the resin spiral on so it would stick tightly. This fixed the problem I was facing, which was that the resin spiral was not spinning at all; now the metal rod and spiral spin smoothly together.

Apart from 3D printing, we also had practice presentations at the end of the week to prepare for the final presentation. I received a lot of good feedback on polishing my presentation format and making sure to explain more of my process.

I also spent most of my week calibrating, as I said I would at the end of last week's blog post. Luckily, my peer Angel was nice enough to lend me the scale he is using for his DTSF project, so I was able to weigh the average amount each section of the spiral could hold. Doing this meant filling and refilling the dispenser multiple times with each substance: rice, beans, sugar, and flour. I went back and forth to also weigh the average weight of a cup and of a milliliter of each substance. This was helpful because it let me create a mini conversion table in my code: I implemented several dictionaries in Python to associate each substance name with its weight in a given unit (ex. "beans" = 4.2 decagrams). I also had to do some conversions after weighing everything, because Angel and I found out that the scale weighs in decagrams, which seems oddly specific. In finding the averages for cups and mL, I created some conversion calculations to reach other measuring units (ex. grams, oz, liters, pounds, etc.).
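A simplified sketch of that lookup-and-convert logic is below; the numbers are placeholders rather than my measured averages.

```python
# Sketch of the calibration lookup: grams dispensed per spiral section for
# each substance, plus a conversion from the unit the user asked for.
# All numbers are placeholder values, not the measured averages.
GRAMS_PER_SECTION = {"rice": 42.0, "beans": 42.0, "sugar": 38.0, "flour": 30.0}

TO_GRAMS = {"grams": 1.0, "decagrams": 10.0, "oz": 28.3495, "pounds": 453.592}

def sections_needed(substance, amount, unit):
    """How many spiral sections to dispense `amount` of `substance` in `unit`."""
    grams = amount * TO_GRAMS[unit]
    return grams / GRAMS_PER_SECTION[substance]

# e.g. 4.2 decagrams of beans = 42 g = 1.0 spiral section with these numbers
print(sections_needed("beans", 4.2, "decagrams"))
```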

Week 6: Progress taking off…

What a busy week! It took a couple of days to solve the issue with Python concurrency. I was finally able to do it by having code run in the background that continuously takes voice commands while the main program runs the commands already given. After solving the issue, I could give voice commands even while the drone was still executing prior ones. I thought my work was done at that point; however, Google's Speech-to-Text API has not been very nice to me these days. The following pictures are screenshots of Google's API trying to predict my 'take-off' command. Some of these predictions aren't even close to sounding similar, so you can see how inaccurate it can be at times.

I could maybe have made it perform better by training it, if it were a model I had created on my own; but since it is a pre-built API, I cannot make it do any better than this. So I decided to work with what I had, and perhaps use it in a silent area with a better microphone for better predictions. Furthermore, I added text-to-speech functionality to the drone to improve the user experience: the drone talks back to the user when a command is being processed and when they ask about things like remaining battery or flight time.

Moving on, I wanted to add gesture-control functionality to the drone, so I started looking for datasets online to create and train a model to recognize different gestures. After working on it for a little while, I realized that the drone's API already had a gesture-recognition feature, so I had to change my plan. I moved on to the idea of creating a prototype for a drone assistant that could help physically disabled people move things around in their apartment. First, I would use QR codes for the drone to scan in order to navigate to the destination room; after that, I plan to use object recognition to have the drone recognize objects. I started this by modifying my program so that it could take video streams from the drone. We also gave a presentation this week to a larger audience, which helped us gain confidence and collect feedback.
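For illustration, grabbing frames from the Tello's stream looks roughly like this with the djitellopy package; my project used EasyTello, so the calls here are stand-ins.

```python
# Sketch: reading video frames from the Tello, shown with djitellopy for
# illustration; my project used EasyTello, whose stream setup differed.
import cv2
from djitellopy import Tello

tello = Tello()
tello.connect()
tello.streamon()                           # start the UDP video stream
frame_read = tello.get_frame_read()        # background thread grabs frames

while True:
    frame = frame_read.frame               # latest frame as a numpy array
    cv2.imshow("Tello", frame)             # this is where QR decoding would run
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

tello.streamoff()
cv2.destroyAllWindows()
```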