For the first three days of week 3, my focus was on configuring the NightScout website that I am using to continually track the values from my CGM (continuous glucose monitor) and store them in the cloud. The deployment that I used can be found here; it is the configuration for the CGM that I am using. To use NightScout, you must first set up a website using the instructions on GitHub, which you can find through the link above. At first it seemed as though I was going to use an Azure deployment (Azure is just Microsoft’s method of deploying apps to the internet), but I found Azure quite complicated to set up, and the Heroku deployment method was much easier. All of the information about deploying the app can be found on the NightScout website. Your NightScout app also has to be connected to a database in the cloud; the Heroku deployment method automatically sets one up for you, which I found much easier. Heroku deploys the app and gives you a website URL that you can use to see it. Here is a screenshot of what mine looks like:
The next obstacle I had to overcome was figuring out exactly how I was going to upload the data from my CGM to my app.
Day 4 and 5
For this project to work, I have to be able to access the data from my CGM via the cloud, and getting the data into the cloud was a daunting task in itself. NightScout requires that you upload using an Android device. You can download the NightScout Android app from their website; just make sure that it is the app configured for the CGM that you are using (yes, I downloaded the wrong one, and it would not work for nearly a whole day). Once the app was downloaded, I had to go into the settings and connect it to the cloud database that I had set up. The nice thing about their Android app is that you have the option to auto-configure using a QR scanner. All I had to do was enter the URL and API secret that I had set up during the Heroku deployment to generate a QR code, and then scan it with my Android phone. Once that was set up, the data from my CGM was being sent to the database in the cloud. The next step for my project will be figuring out how I can access that data in the cloud and use it to run a program that I will create on the Raspberry Pi.
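As a first experiment toward that next step, this is roughly the request the Raspberry Pi program would need to make. Below is a minimal sketch in Python, assuming NightScout's standard /api/v1/entries endpoint and a hypothetical site URL; on many deployments reads are public, but authenticated requests send the SHA-1 hash of the API secret in an api-secret header:

```python
# Minimal sketch (hypothetical URL and secret): read recent CGM entries
# from a NightScout site's REST API on a Raspberry Pi.
import hashlib
import requests

SITE = "https://my-nightscout.herokuapp.com"  # hypothetical app URL
API_SECRET = "my-api-secret"                  # the secret set during deployment

# NightScout accepts the SHA-1 hex digest of the API secret as a header.
headers = {"api-secret": hashlib.sha1(API_SECRET.encode()).hexdigest()}

resp = requests.get(f"{SITE}/api/v1/entries.json",
                    params={"count": 5}, headers=headers, timeout=10)
resp.raise_for_status()

for entry in resp.json():
    # "sgv" is the sensor glucose value (mg/dL); "dateString" is the reading time.
    print(entry.get("dateString"), entry.get("sgv"))
```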
This week presented a great deal of frustrations and successes. I decided to work on taking high-quality aerial photos of Stine Lake and the West Quad of the college campus during the weekend because I did not want to risk losing a single day of clear weather! Sunday was especially hot – I learned that high temperatures are detrimental to the performance of both the drone and the phone I use as the interface for the Pix4Dmapper Pro program, because I had to let the devices cool off for a significant amount of time before I could run new missions, which was a bit frustrating. On Monday I focused on taking better-quality photos of my designated areas from a closer point of view so the image quality would be better for the photogrammetry models I was creating in Pix4D. I then combined my two West Quad projects into the best and most accurate model that could be created from two data sets. I did this by creating tie points in identical locations on each model and reprocessing the models as one. I quickly discovered how long it actually takes to merge projects together and to create new ones from large data sets. I learned that when merging projects, I had to reload the point cloud, mesh, DSM (digital surface model), orthomosaic, and index, which was very frustrating. However, I knew that the loading process was important, since I needed to improve the geolocation and clarity of my models in order to avoid empty voids in each structure. The more overlap I had between my photos, the better the models I would create in Pix4D. After several runs of similar missions over Stine Lake and the West Quad, I managed to get overlap values of 5+ images, the highest overlap category in the Pix4D program, which allows for the best-resolution models.
It rained heavily from Tuesday through Friday, which made it difficult to find opportunities to take more photos to improve my models. During this time, I worked with the photos I had already taken while waiting for the rain to clear. On Wednesday and Thursday I used the low, circular flight feature of the Pix4D app to take highly detailed photos of the library as well as the West Quad first-year halls. By the end of the week, I had created a merged project displaying both Musselman Library and the West Quad, and the map also showed the DTM (digital terrain model): the elevation levels of the campus terrain! The Pix4D program was set to DSM, however, so I had to change the settings to display the DTM, since I only wanted to show the elevation of the terrain without any additional buildings or structures.
This week’s library workshop was on Scalar, a website used for interactive storytelling. We learned how to navigate the user interface as well as the basic functions of the program. Although I am not quite sure whether I will use this program for the project I am currently working on, I believe it may be very useful when I create a website of my digital projects and work after this summer is complete.
Next week I plan to finish all four photogrammetry models and calculate their DTMs in order to begin evaluating the slope of the terrain. On Monday, I will also be meeting with Professor Principato from the environmental science department to learn how to use a total station to manually take measurements of the slope of the terrain!
Model of Stine Lake and the West Quad before the triangle meshes were loaded in.
Photos taken from different angles and areas of the West Quad and Stine Lake being processed by Pix4D to fill in the mesh of the model.
Photogrammetry model of the West Quad and Musselman Library after merging several projects together over the course of many days.
Week 3 was not as fortunate as I had hoped. Poland was going to beat the world at the World Cup this year, but unfortunately Poland was beaten by Senegal.
Day 1
Also unfortunately, my initial plans have to change a bit because of a problem that I should have predicted. The Gettysburg Battlefield belongs to Gettysburg National Military Park, so I am not allowed to fly a drone above the park and film it. Thus, I have a new topic, which is equally important, or even more so: the Lincoln Cemetery in Gettysburg (please do not confuse it with the Gettysburg National Cemetery). Lincoln Cemetery is a cemetery for African American citizens and Civil War soldiers, as even after the Civil War, United States Colored Troops were not allowed to be buried alongside white soldiers.
Day 2
On the second day, I really wanted to build a VR bowling game, because this game would show me how difficult it is to build a simple scene and how objects interact with each other (the ball with the bowling pins). It was also a good place to play with sound (the ball hitting the pins, the ball rolling down the lane) and background music. I found a very good tutorial which showed me step by step how to build a bowling game, here (shout out to FuzedVR). After building the bowling game, I realized that 3D modelling will be very important for me, so I started to work on 3D modelling in Blender, an open-source 3D modelling software.
Day 3
We got new, powerful computers, on which Orrin and I will be working. It is vital to have powerful computers for processing and rendering the heavy graphics of virtual reality content. Later that day, I was trying to show more of my VR content to Sharon, but unfortunately everything stopped working and Unity crashed. I was so sad that I could not show it; I hope that next time I will be able to fix it and make it even better, especially for Sharon. In the afternoon, Eric Remy taught us techniques for public speaking and presentation. It was a very valuable lesson, as public speaking is an integral part of our lives and it is important to develop it. I will try to build up my confidence and improve my speeches. Thank you, Eric!
Day 4
Today, I felt like building more games to get used to building environments in Unity. Tyler also encouraged me to build a baseball game. The game was quite basic: the user could grab a bat and hit the incoming balls. Because the game was so realistic, it was quite difficult to hit the balls. I also tried to build a tennis game, and I used my 3D model of a tennis racket. The tennis game was not as successful as the baseball one. In both of these games, I played with colliders, which determine where a collision happens between objects that also have colliders.
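To illustrate the idea, here is a minimal sketch (with hypothetical names) of the kind of collider script these games rely on: a component that reacts when another collider hits the object it is attached to, in this case by playing an impact sound.

```csharp
// Minimal sketch: play a sound when another collider hits this object.
// Assumes this GameObject has a Collider and an AudioSource, and that at
// least one of the colliding objects has a Rigidbody so collision events fire.
using UnityEngine;

public class ImpactSound : MonoBehaviour
{
    public AudioClip impactClip;   // assigned in the Inspector
    private AudioSource source;

    void Start()
    {
        source = GetComponent<AudioSource>();
    }

    void OnCollisionEnter(Collision collision)
    {
        // React only to objects tagged "Ball" (hypothetical tag),
        // e.g. the baseball hitting the bat.
        if (collision.gameObject.CompareTag("Ball"))
        {
            source.PlayOneShot(impactClip);
        }
    }
}
```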
Day 5 – testing new things, such as portals, and a new project
After building these small games, I started to test new things that I can use in Unity. I tried to use portals, which can teleport the user to a different place. This can be very helpful for my new project, especially if I want to combine 3D modelling, photogrammetry, and 360 videos. Portals can be an interesting way of teleporting to a different, parallel “world.”
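A portal can be surprisingly little code. Here is a minimal sketch (the tag and field names are hypothetical), assuming the portal object has a collider marked as a trigger and a destination Transform at the exit:

```csharp
// Minimal sketch: a trigger collider that teleports the player to a
// destination Transform, e.g. into a parallel "world".
using UnityEngine;

public class Portal : MonoBehaviour
{
    public Transform destination;  // the paired portal's exit point

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player"))  // hypothetical tag on the VR rig
        {
            other.transform.position = destination.position;
        }
    }
}
```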
Finally, Poland is playing against Colombia this Sunday. Hope they will do better than last time.
For the second week I set out two goals: I wanted to find a way to begin modeling buildings for Unity, and I wanted them to be interactive in my project. I decided to use Blender as my modeling platform. While I am still unsure about the best way to approach this project, I decided to begin with a modeling program. As mentioned in the previous post, the other option would have been to generate a 3D model using photogrammetry software, which would be done by taking many pictures of the area and auto-generating a model from that data. There is a good chance that the photogrammetry option would be more accurate than hand-making a 3D model, but even with lots of data and strong photo coverage, models made from photogrammetry usually have many distortions when viewed from a close distance. Considering that my project will put the user at the height of an average human, the distortions would be very apparent from such a close distance, which would take away from the experience in my opinion. Secondly, one of the goals of this internship is to develop skills for myself as well as skills that the college can use to assist students or faculty wishing to learn them. Photogrammetry with drones and 360 cameras has already been used a lot in past projects, and I also feel it is easier to learn than hand-making 3D models from scratch. It is likely that I will still attempt photogrammetry in the future, because I am curious to see how the two versions of Penn Hall compare with each other and which is better suited for VR.
Days 1-2
The first thing to do was to learn how other people created their objects for Unity. While it is possible to create things in Unity itself, many of the people I watched online were developing their objects in a modeling platform. I chose Blender because I noticed it was very commonly used with Unity, and a lot of the tutorials I watched when learning the basics of Unity imported Blender models into their projects. I started a Blender tutorial by Blender Guru. The tutorial was very long: eight segments of 40-minute videos. The goal was to create a mug on a table with a donut. I didn’t finish the entire tutorial, but I completed all the parts that would be useful to my project. I learned how to work the Blender interface and many of the shortcuts that make the work smoother.
Days 3-5
After this I completed a second tutorial on modeling an anvil so I could refine my modeling skills before taking on my own project. While I feel I am now proficient at modeling, I am still learning how to apply textures to models and properly unwrap them using the UV unwrap tool. For now I have enough skill to start the cornerstone of my project: Penn Hall.
I started with a reference image to lay out all of the details. I used a mirror modifier that automatically applies edits across the x and y axes. This allowed me to do only a fraction of the work, because Penn Hall is symmetrical on two axes. It still took roughly two days to get to the image below. It is not complete yet, and I will continue to edit it throughout the week, but for now it is good enough for me to begin interacting with it in Unity.
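For anyone curious, the same setup can also be scripted through Blender's Python API. A minimal sketch (the object name is hypothetical), using the Mirror modifier's axis flags as exposed in Blender 2.8+:

```python
# Minimal sketch: add a Mirror modifier so edits on one side of the model
# are applied across the X and Y axes (Blender 2.8+ API; object name is
# hypothetical).
import bpy

obj = bpy.data.objects["PennHall"]               # hypothetical object name
mod = obj.modifiers.new(name="Mirror", type='MIRROR')
mod.use_axis[0] = True   # mirror across X
mod.use_axis[1] = True   # mirror across Y
```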
Going into week 3, I plan to go back to learning Unity so I can properly implement this model. I would like to add a door that takes the user to the inside once it opens, and rooms that the user can explore. The second Vive was set up at my workstation, so I am finally able to use it for my project. A lot of this week will be spent taking what I have learned over the first two weeks in Unity and Blender and implementing it in VR. Wish me luck.
While week 1 was focused primarily on learning the basics behind all of the technology that I will be using, it is now time to determine how all of this technology will interface with the insulin pump that I am trying to create. I realize now that some of the concepts I talked about in my previous week’s post can be confusing and hard to understand, so I will include links explaining 3D printing here and microcontrollers here, and go into detail on how these technologies work.
3D Printing
3D printing technology has been around for nearly 30 years. There are two main types of 3D printing, stereolithography and fused deposition modeling, and I will be using both in my project. Stereolithography was the first of the two to be created. It works through photopolymerization of a resin held in a small tank: light is projected into the resin, causing it to link together on a molecular scale, layer by layer. The other type of 3D printing, which has gained a lot of attention lately, is fused deposition modeling. This form of 3D printing works by heating a filament (most consumer printers use some form of plastic) and pushing it through a nozzle that motors move to the desired position. This technology also works layer by layer and builds the end product from the “ground up.”
Microcontrollers
Microcontrollers allow the user endless opportunities in the world of computing. A microcontroller basically works as a mini computer that does whatever job the user programs it to do. Today there are two main hobbyist boards, the Arduino and the Raspberry Pi. Last week I discussed how I was getting the Arduino to run a few basic commands; however, I have found that for my project the Raspberry Pi will work better than the Arduino. My knowledge of microcontrollers is still limited, so for now I will leave it to the links above to explain in better detail.
Week 2 – Day 1
On the first day of week 2, I decided that I wanted to hook an LCD display up to the Arduino and get it to show a message that the user inputs. Using a few online tutorials, I was able to get the LCD to display two lines of a message that the user types, and I was even able to get the message to scroll across the screen. This experiment was primarily to work out how I want the pump’s display to read.
Day 2 and 3
Over these two days I mainly researched how I can get a glucometer (the device that measures one’s blood sugar) to interface with a microcontroller, since I will be using the microcontroller as the brains of the pump. I came across a webpage describing a cloud-based program that allows users to store their glucose numbers in the cloud. The project is called NightScout, and it may be the answer to the problem. NightScout allows users to send the information from their glucose sensors to a variety of devices and then store the numbers in the cloud. The important part, however, is that people have been able to take the numbers from the cloud and display them using a Raspberry Pi. If I am able to set up NightScout with the Raspberry Pi, I could make insulin calculations based upon those numbers and also have the pump interface with other technology (phones, smart watches, etc.). The NightScout platform is a promising way to solve a major issue in the project.
Day 4 and 5
Thursday and Friday of this week were spent experimenting with the Raspberry Pi and trying to determine its capabilities. My main goal for these two days was to program the Raspberry Pi to cycle a stepper motor at a specific speed. Using Python, I was able to program the Raspberry Pi to control a stepper motor’s speed based on an integer input. The stepper motor did cycle; however, I believe there are some bugs I need to sort out in order to have the motor run with more precision.
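My actual script still needs debugging, but the core idea looks roughly like this. A minimal sketch (the pin numbers are hypothetical), assuming a four-coil stepper driven through a driver board with the RPi.GPIO library, where the integer input sets the delay between steps and therefore the speed:

```python
# Minimal sketch: drive a stepper motor from a Raspberry Pi with RPi.GPIO.
# Pin numbers are hypothetical; the wiring depends on the driver board.
import time
import RPi.GPIO as GPIO

PINS = [17, 18, 27, 22]   # hypothetical GPIO pins wired to the driver
SEQUENCE = [              # full-step sequence: energize one coil at a time
    (1, 0, 0, 0),
    (0, 1, 0, 0),
    (0, 0, 1, 0),
    (0, 0, 0, 1),
]

GPIO.setmode(GPIO.BCM)
for pin in PINS:
    GPIO.setup(pin, GPIO.OUT)

def cycle(steps, speed):
    """Step the motor `steps` times; a higher `speed` means a shorter delay."""
    delay = 1.0 / (speed * len(SEQUENCE))
    for i in range(steps):
        for pin, value in zip(PINS, SEQUENCE[i % len(SEQUENCE)]):
            GPIO.output(pin, value)
        time.sleep(delay)

try:
    cycle(steps=200, speed=int(input("Speed: ")))
finally:
    GPIO.cleanup()
```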
Finally, the World Cup has started. I am so happy, but also a bit sad. Because of the time difference, it is difficult for me to watch the matches, especially during my working hours, but that’s fine, I can follow the score from time to time and watch highlights.
Apart from the World Cup, I have worked on my project and discovered a great deal about the capabilities of Unity, which lets me build VR projects.
Day 1
I attended the DSSF workshop in the library about building webpages with HTML & CSS, GitHub, and WordPress. It was a very interesting course, and I learned the important basics of HTML and CSS. I was also able to create my first public website under the school’s domain. It is a very simple website, though; you can find it here.
Days 1-3
Before Day 3, I was studying for the FAA Remote Pilot Test with Alyssa, and on Day 3 we went with Dr. Remy to take it. Fortunately, we passed the exams and received our remote pilot certificates for Small Unmanned Aircraft Systems, usually called drones. Now I can legally and responsibly fly the drone and use it in my own project, which can be very helpful for creating 3D models using photogrammetry.
Day 4
On Day 4, I focused on embedding 360 videos into a VR environment in Unity. It is actually a bit funny how a 360 video is projected in Unity: we put the video on the surface of a sphere, but then we have to “reverse” it so the film plays not on the outer surface of the sphere but on the inner surface. Then we can watch the film from inside the sphere and look around, just like a 360 video. I also used teleporting, so there is a way for the user to teleport into the film whenever he or she wants, and also a way out after getting bored with the film.
Math fact: in order to make the film play on the inside of the sphere rather than the outside, we have to flip the normals of the sphere’s mesh (and the winding order of its triangles) so that the faces point inward, as in the sketch below.
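Here is a minimal sketch of that trick as a Unity C# script, assuming the 360 video is already mapped onto a sphere that has a MeshFilter:

```csharp
// Minimal sketch: flip a sphere's normals and triangle winding so the mesh
// renders on the inside, letting a 360 video be watched from within.
using UnityEngine;

[RequireComponent(typeof(MeshFilter))]
public class InvertSphere : MonoBehaviour
{
    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;

        // Point every normal inward instead of outward.
        Vector3[] normals = mesh.normals;
        for (int i = 0; i < normals.Length; i++)
            normals[i] = -normals[i];
        mesh.normals = normals;

        // Reverse each triangle's winding order so the inward-facing
        // sides are the ones that get rendered.
        int[] triangles = mesh.triangles;
        for (int i = 0; i < triangles.Length; i += 3)
        {
            int tmp = triangles[i];
            triangles[i] = triangles[i + 2];
            triangles[i + 2] = tmp;
        }
        mesh.triangles = triangles;
    }
}
```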
Day 5
Apart from video, what makes a VR experience more “realistic” is audio (more human senses are “fooled”). I played with audio by adding sound effects to objects once they are triggered, and to object collisions, when at least two objects interact with each other. Moreover, Unity makes 3D audio possible, including the Doppler effect, which raises the pitch of a sound as its source moves toward the user and lowers it as the source moves away. I even tried adding footstep sounds when the user is walking, to make it feel even more real.
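As a small example of how this is switched on, here is a minimal sketch (the values are just reasonable starting points) that configures a Unity AudioSource so its sound is fully spatialized in 3D and Doppler-shifted:

```csharp
// Minimal sketch: configure an AudioSource for 3D sound in Unity.
using UnityEngine;

public class Spatialize : MonoBehaviour
{
    void Start()
    {
        AudioSource src = GetComponent<AudioSource>();
        src.spatialBlend = 1.0f;   // 0 = flat 2D sound, 1 = fully 3D
        src.dopplerLevel = 1.0f;   // how strongly motion shifts the pitch
        src.loop = true;
        src.Play();
    }
}
```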
Next week I will try to build the scene and start filming some videos to add to my project.
What an eventful week! I began my week by successfully teaching myself how to calibrate the Mavic Pro drone and how to connect it to the DJI GO 4 and Pix4Dcapture apps on my iPhone. I first had to update my phone to iOS 11, which consumed a lengthy amount of time, but by the end of the process my phone was at last compatible with the software. I learned the basic functions of flying the drone and familiarized myself with the interface of the apps – I discovered that there is even a beginner mode for new remote aviation pilots. I made sure to take my time learning the proper rules and functions of the drone so that I was as knowledgeable as I could be before I even attempted to fly the Mavic Pro off the ground for the first time on Monday evening. I began to test out the basic controls of the drone and its intelligent modes, which include “Follow Me,” a mode that has the drone fly after its subject at an altitude of 10 feet! I practiced flying in unoccupied, open areas such as the baseball field and the Painted Turtle Farm.
I also learned the importance of the home and pause buttons on the controller – on rare occasions the drone may lose connection and keep flying a mission that you no longer want it to fly. Always remember your Part 107 piloting rules: it is crucial to keep the drone in your field of vision at all times to avoid losses or accidents!
Even with the excitement of finally operating the drone for the first time, I made sure to continue rigorously studying for the FAA Remote Pilot Test for at least two hours a night. This was my third week of intensive studying for the test. On Wednesday, Dr. Remy, Hoang, and I drove to Frederick, Maryland, where Hoang and I both passed the test! I am overjoyed to announce that as of June 13th I am an FAA licensed and certified remote pilot in command! After registering my piloting license on https://iacra.faa.gov, I also requested my physical remote piloting license card, which should arrive in the mail shortly.
I am now a certified Remote Pilot in Command!
On Thursday and Friday I was extremely frustrated with both the drone and the Pix4Dcapture app: the apps consistently disconnected from the drone, and the images from my flight missions were not taken properly, nor did they even save to the SD memory card. After an unbelievably long time searching the internet, and with the help of Ryan Gonzales (a former DTSF fellow from the summer of 2017), I learned the following:
Fly in “Safe Mode” in the Pix4Dcapture app so the drone stops at each waypoint to capture an image.
“Safe Mode” does not need a constant WiFi connection between the drone and the app on the phone, so the drone will keep flying the mission and taking photos no matter what – best for me, since my connection falters often.
I had been flying in “Fast Mode,” which takes photos while the drone flies through the mission and needs a constant connection.
Use the black buttons on the back of the controller to focus the camera – use the pause button to stop, then continue the mission as usual for the best quality.
When the controller loses its connection to the app, it is because the side wire that connects to the phone is a bit rusty and loose – readjusting the wire and blowing into the connection port helps. It does not matter too much once the flight has taken off, because in Safe Mode the drone will take photos regardless.
A higher overlap percentage in the Pix4D app means more pictures taken at closer distances.
Advice from Ryan: open both the Pix4Dcapture and DJI GO 4 apps; use Pix4Dcapture to plan and fly the flight patterns, then switch over to DJI GO 4 to view photos, monitor the flight, change focus, and pause as needed – now the photos are on the SD card!
This week was an unbelievable journey, and although I had many setbacks, I also had a great deal of accomplishments! I cannot believe the sheer amount of information I have learned in a single week through self-taught lessons, experience, and the kind advice of others. I even attended a library workshop and learned how to use HTML and CSS to create a website, upload it to the Gettysburg public drive, and legally incorporate images and videos with the proper copyright licenses in my work.
My next steps are to take aerial photos of the following locations and to transfer the photos to Pix4D to create photogrammetric models of the campus and terrain:
1) Stine Lake
2) East/West first-year quads
3) Gettysburg orchard (?)
4) Quarry Pond
Aerial view of the Jaeger Fitness Center using 2D images taken with the Mavic Pro drone
Pix4Dcapture app used to plan flight missions; can control area, direction, altitude, speed, and more!
The first week of DTSF has concluded, and I now have a better understanding of how I want to do my project. I learned of a couple of ways it could be done. The more photorealistic way would be to create a 3D model using a 360 camera and a drone. This would make the model more accurate, but from up close it would appear jagged, depending on the photographs taken. The other way would be to create a model by hand, based on maps and pictures. This would have less detail and more of an animated look to it, and it would not be photorealistic. I still have to decide which method is best for my project.
Day 1
The first day started at the library, where we learned about projects made through WordPress. Afterwards, we were free to begin our own projects. I needed to start with the basics of Unity 3D, so I began by watching basic tutorials to learn the interface of the platform. There are a lot of different features in the Unity interface, and at first it was overwhelming. I learned how to implement terrain and use assets to add external features to a project.
Day 2
On the second day my goal was to learn how to script in Unity using C#. I spent most of the day watching tutorials on the basics of the language. I learned that it is very similar to Java, though there are a lot more basics of the language to memorize. I also learned that there is a difference between C# for Unity and C# for generic coding. I learned basic commands such as making objects move and printing statements to the console. I spent the end of the day learning how to add a 3D model from Sketchfab into a Unity scene. I wanted to learn how to do this in case I end up having to make a photogrammetric model for my project.
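To give a flavor of those basic commands, here is a minimal sketch of the kind of beginner script I mean (the speed value is arbitrary): it moves the object it is attached to with the input axes and prints a message to the console.

```csharp
// Minimal sketch: move a GameObject with the arrow keys/WASD and log to
// the console.
using UnityEngine;

public class Mover : MonoBehaviour
{
    public float speed = 5f;

    void Start()
    {
        Debug.Log("Mover attached to " + gameObject.name);
    }

    void Update()
    {
        // GetAxis reads the built-in horizontal/vertical input axes.
        float h = Input.GetAxis("Horizontal");
        float v = Input.GetAxis("Vertical");

        // Time.deltaTime keeps the movement frame-rate independent.
        transform.Translate(new Vector3(h, 0f, v) * speed * Time.deltaTime);
    }
}
```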
Day 3
Today was mostly spent learning the basics of VR. We played around with the VR console and tried the Vive and Steam tutorials. Before this I had very little experience with VR, and it was an amazing experience for me. After messing around with the tutorials, I learned how to use Unity with VR. Later I started an online Unity tutorial that helped teach me the basics of the platform.
Days 4-5
I spent the rest of the week on the tutorial I started the day before. The end result was a cube game developed in Unity. The goal of the game is to steer a cube along a narrow path and dodge all of the obstacles to get to the end. It was a very long tutorial, and I plan to review all I learned so I can replicate it in projects of my own.
What I learned this week:
Interface implementation, terrain development, game objects, C#, VR implementation, model insertion, object movement, collision handling, restart features, in game physics, scene compilation, in-game text, score handling, exporting a project from Unity.
Next week I plan to take the skills I learned this week to start my project. I also intend to take an online course for Unity modeling so I am able to represent buildings as accurately as possible.
With the beginning stages of my project underway, I have found that Unity3D, as well as using the HTC Vive with SteamVR, is much more difficult than it seems at first glance 😉 Nevertheless, I am very excited to work with virtual reality for the first time in my life and to build my own projects for it.
Day 1
On my first day I met my DTSF team and went to the library to meet the other fellows from DSSF. It was very nice to get to know other people and their projects. Later, I planned the timeline for my whole project, eight weeks ahead. I found out that it is not easy to plan so far ahead. However, it is important to plan so that you are more confident about what you are doing and reduce the risk of running out of time (or, on the other hand, if you finish early, you know what more you can do). I also downloaded the Unity program to get to know its interface. It is possible to download the program for free for personal use here.
Day 2
I started the tutorial for making a simple 3D game in Unity, Roll-a-ball, which showed me many of the possibilities of Unity and that there is still so much more to discover. The tutorial was quite straightforward, and anyone who is interested can do it. I learned to build a simple plane with some cubes and spheres in Unity and to add physics to the objects, so that they behave more realistically and the game feels authentic. Even though the game was simple, I was so excited that I had built it, and I even tried to test it with the HTC Vive headset.
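The heart of Roll-a-ball is only a few lines of physics code. Here is a minimal sketch along the same lines, assuming the player sphere has a Rigidbody component; unlike moving an object directly, applying forces lets the physics engine make the ball roll realistically:

```csharp
// Minimal sketch: push a sphere around with physics forces so it rolls.
using UnityEngine;

public class BallController : MonoBehaviour
{
    public float speed = 10f;
    private Rigidbody rb;

    void Start()
    {
        rb = GetComponent<Rigidbody>();
    }

    void FixedUpdate()
    {
        // Physics code belongs in FixedUpdate, which runs on the
        // physics engine's fixed timestep.
        float moveH = Input.GetAxis("Horizontal");
        float moveV = Input.GetAxis("Vertical");
        rb.AddForce(new Vector3(moveH, 0f, moveV) * speed);
    }
}
```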
The rest of the week
For the rest of the week, I tried to get to know Unity more from the VR perspective. I read many articles and watched YouTube tutorials to learn more about VR and Unity. I learned how to connect the camera in the headset with the camera in Unity. In fact, the SteamVR plugin, which you can get for free in the Unity Asset Store, is very helpful for that mission: it lets me connect to the Vive camera rig and the controllers.
I learned a lot about movement and interaction in VR. I made two gun-simulation models, where you can shoot a gun using the trigger button; a bullet shoots out of the gun, complete with a sound effect. The simulation was simple but looked quite real to me. Moreover, I tried some interactions with objects, e.g. grabbing a cube, throwing it, catching it, opening a door, etc. I also focused on movement in VR: you can walk by simulating hand movements, use the buttons to move, or teleport, which is very helpful for getting around bigger areas. It is also important to learn the C# language, which is the main language for writing Unity scripts. Fortunately, there are many tutorials teaching C# scripting in Unity. There is also a very useful toolkit called VRTK, which provides many of the necessary scripts for SteamVR, so I do not have to write them all over again.
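For the shooting interaction, the core logic is quite small. A minimal sketch (all names are hypothetical), where a SteamVR trigger press would call Fire() to spawn a bullet at the muzzle, give it forward velocity, and play the shot sound:

```csharp
// Minimal sketch: spawn a bullet at the muzzle with forward velocity and
// play a sound. In a SteamVR project, the controller's trigger would call Fire().
using UnityEngine;

public class Gun : MonoBehaviour
{
    public Rigidbody bulletPrefab;   // prefab with a Rigidbody, set in the Inspector
    public Transform muzzle;         // empty child object at the barrel tip
    public AudioSource shotSound;
    public float bulletSpeed = 20f;

    public void Fire()
    {
        Rigidbody bullet = Instantiate(bulletPrefab, muzzle.position, muzzle.rotation);
        bullet.velocity = muzzle.forward * bulletSpeed;
        shotSound.Play();
    }
}
```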
As you can see above, Unity looks different on the two screens. I used different versions of Unity on different computers and ran into problems when I tried to import a game from one Unity version into another, so now I know that it is better to work on one computer only, or to use the same version on both computers.
I am looking forward to my project. I will start building a more developed environment in Unity, and I hope this will bring me closer to my final goal.
Below is my timeline with end goals for each week; I hope I can at least fulfill them.
Timeline development with milestones:
Week 1:
Goals:
I want to start creating an environment with assets and materials in Unity, e.g. creating a room.
I want to implement it in VR and use it with VR headsets.
End Goal for week 1:
I will be able to build simple scenes in Unity.
Week 2:
Goals:
I will start building one of the general green areas of Gettysburg and adding hotspots where I can later attach 360 videos.
I want to pass the drone exam, which might be helpful later for photogrammetry.
I will find the places where I have to take pictures and videos next week.
End Goal for week 2:
I will start building a more developed environment and prepare for next week’s shooting.
Week 3:
Goals:
I will drive out to shoot 360 videos of the Battlefield. I will take photos.
If I have my drone license, I will use drone photos to create 3D models of statues and monuments on the Battlefield.
I will use photogrammetry to create the 3D models.
I will try to add videos to my VR environment.
End Goal for week 3:
I will have the initial environment in Unity with videos (and, if I have the drone license, 3D models).
Week 4:
Goals:
I will improve the VR scene and the user’s interaction with the 360 videos.
I will add user movement to the scene, plus boundaries.
I will add some mountainous area.
End Goal for week 4:
The VR environment will be more interactive for the user.
Week 5:
Goals:
I will add more Battlefield areas to the VR scene.
I will make more 360 videos of the new areas.
End Goal for week 5:
I will add more Gettysburg Battlefield places to the environment.
Week 6:
Goals:
I will add some physics to the scene by adding pickable leaves.
I will make movement more interactive (teleporting) and add hints (a path) showing users where they can go.
I will add specific spots where the user can read information about specific things.
End Goal for week 6:
I will make the VR environment more interactive for the user, to boost the VR experience.
Week 7:
Goals:
Hamilton Project
Week 8:
Goals:
I will add some final embellishments, such as sound and shadows.
I will find any errors and fix them.
I will finalize the project.
I will improve my speaking skills in order to present my project to the public.
My first week of the Digital Technology Summer Fellowship passed by very quickly! I spent the first few days of the internship continuing to study for the FAA Remote Pilot Aviation test so that I can be licensed and certified to commercially operate a drone for the purposes of my project, as well as to represent Gettysburg College to the public. I am finding the Part 107 Rupprecht Law study guide and the Part 107 FAA certification videos extremely helpful for testing my knowledge and learning new material! I am reviewing aeronautical maps, aviation symbols, weather patterns, and the legal conditions for where and under what circumstances a drone is allowed to operate. This past week I also contacted professors from my past classes to ask for their advice on, for example, how I should create my own layers in ArcGIS using field data I collected myself. I plan not only to use the advice given by these professors but also to teach myself how to carry out these operations by searching for tutorial videos and articles online.
On the very first day, I spent a significant amount of time planning milestones, budgets, and goals. I also came up with an official list of the tools and materials I will need to conduct my project, such as a drone, a total station, the Carvey by Inventables (a 3D carving machine), and the Pix4D program (photogrammetry software). I have arranged to meet Professor Principato from the environmental science department next week, when she will briefly show me how to set up the total station to measure land elevation patterns. However, the set-up and operation of the total station will mainly be self-taught, much like everything else in this internship, because I will need to fill in the gaps in my knowledge and learn to use the machine’s functions to the advantage of my project goals.
I have also familiarized myself with how to operate the Mavic drone by searching for tutorials and articles online as well as reading the manual. Although I am not yet able to operate the drone, I find it useful to become comfortable with the controls and operations in advance. I have also completed tutorials for Pix4D in order to have a better understanding of how to use the program for my project. So far I have learned about the georeferencing, coordinate system, and spatial data features of Pix4D. I am currently considering using only Pix4D to create the model of the campus, instead of also using ArcGIS, because I believe the data may be redundant if I were to use both programs. However, my project is still going through adaptations, and the final product may be different from what I currently have in mind.
Tomorrow I plan on completing more Pix4D tutorials as well as discovering how I can import my own data and images taken by a drone to the program. I am very excited to see what happens!
Pix4DMapper Pro’s rayCloud video animation trajectory