3D Digital Modeling – Week 3 of 3

Introduction

During week two, I realized I needed to find a program created with small-object 3D model generation in mind. Pix4D requires a lot of manual intervention: adding and identifying tie points, manually sculpting models, and calibrating uncalibrated cameras was time consuming and tedious. Additionally, because the program generates the model in two steps (tie points, then point cloud and mesh), whenever additional cameras are calibrated, new tie points are added, or anything else is done to the basic skeletal structure of the model, the tie point identification and rendering process needs to be reoptimized. This discards any work that may have been done on the point cloud and mesh, which make up the texture and observable surface of the 3D digital model that can be manipulated in Sketchfab. This would not be a problem if the program did not confuse the background with the object’s edges. Because Pix4D identifies and adds points to the point cloud from the white background along some edges of the object being rendered, the triangular mesh looks like it has white or off-white protrusions or growths sprouting from its edges or sides. When this occurred, I needed to manually delete each offending point in the point cloud to stop it from contributing texture to the final manipulable 3D mesh. This was extremely time consuming, and the deleted points could also be discarded if any changes were made to the tie points later on. For these reasons, Eric suggested I switch to Autodesk Remake for week 3 of my internship and explore whether that program would be any more efficient.

Goals of Week 3

Because I changed my digital model generation software from Pix4D to Autodesk Remake, there was some learning about the program that needed to be done. But the main goal stayed the same: to produce high-quality 3D digital models of the artifacts provided by Gettysburg College Special Collections. Fortunately, Autodesk is free to students, and has a whole page of its website dedicated to 3D model construction, complete with workflow videos. This was extremely helpful during my learning process, and expedited my mastery of the program tremendously. Additionally, I could immediately tell that Autodesk Remake was more of a consumer software than Pix4D: it was more streamlined, required less directory information and tagging, and was more flexible with image identification. This will prove very helpful in my future tutoring role, as the overarching reason for my internship this summer in IT is to learn these programs so that I may teach students their uses during the year, and act as a liaison between the Art and IT departments. Additionally, Remake made editing the generated 3D model much quicker and easier, with tools like ‘slice and fill’, which creates new, solid bases for the models; ‘sculpting’, which allows the user to manipulate the object’s surfaces if the program has made mistakes; and ‘fill holes’, which allows the user to identify errors in the surface of the model and fill the hole with a smooth or flat surface that replicates the color around it. Using Autodesk Remake, one can also use the ‘bridge gaps’ function, which allows a misconstrued edge to be bridged and filled similarly to ‘fill holes’.

After mastering Autodesk Remake, I conducted several trial renders at low quality, and was pleased to find that the models were all coherent and adhered to their actual forms. After seeing this, I was able to successfully generate and edit the models I had been assigned from week two, as well as two new items from Special Collections: a Confucius statuette and an Iron Dragon. Below I have attached the models and their corresponding pictures to give an idea of the accuracy achieved in the 3D model generation process (double-click to enlarge the images).

  • The Terracotta Warrior

The terracotta warrior sculpture generated well. Because of its earthen surface, it did not have a shiny, polished sheen that would confuse the software. Additionally, the bust did not have detailed, layered engravings or carvings that could produce varied shadows. This render was very successful, and was achieved at high quality.

  • The Cricket Cage

The outside embossings of the cricket cage rendered well; however, the intricate engravings on the lid made the program do some guesswork, and produced shadows in the holes of the carved flower bed atop the lid that would shift and change when the object was rotated on the turntable. Additionally, because the ivory is polished to a sheen, one can see that the program was confused by the flowers on the lid. This produced a lid that gets the idea across, but fails to convey the true craftsmanship and beauty of the ornamental lid.

  • The Rhinoceros Horn

This ornamental carved rhino horn rendered beautifully, and the details of the engravings on the sides were captured well. While the true detail of the rhino horn can only be fully appreciated in person with the actual artifact, this 3D digital model would serve well for studying the fragile item in an academic setting. All of its engravings are distinguishable and easily identified.

  • The Confucius Statuette

The Confucius statuette was another very well produced model. The detail in the folds of his cloak, the embroidery on his clothing, and his facial expression are all preserved in the digital 3D model. The digital statuette looks very close to the original physical copy, and its replication is really only given away by some editing done on the head and staff.

  • The Iron Dragon

I really wanted the dragon to come out as well as the other models; it was my favorite artifact to work with. However, because it is made of iron, it gives off a sheen when photographed under bright lighting conditions. This caused some error when rendering with the Autodesk Remake software, which can be seen in the white areas that seem to attach themselves to some of the edges of the dragon, particularly in places with fine detail, such as the horns, feet and teeth.

Conclusion

I have definitely accomplished my goals of learning and mastering photogrammetry and 3D model generation. Additionally, I’ve learned a great deal about photography, photo editing and Photoshop during my time working with Eric Remy and the IT department. I’ve mastered the skills required to manipulate and render a digital 3D model through both Pix4D and Autodesk Remake. My time has been filled with problem solving and hands-on, autonomous learning. I feel I have acquired the skills to become an effective liaison between the Art and IT departments here at Gettysburg College, and I look forward to continuing my work during the school year through tutoring or teaching assistant positions. Through my own trial and error, and innumerable renders of models, I’ve found a good start-to-finish system for creating or replicating a digital 3D model. A fair amount of know-how is involved, as the type of object being photographed or created plays an important role in the lighting setup and photography process, as well as in Photoshop and in Pix4D or Autodesk Remake. I look forward to continuing to learn and hone my proficiency with these tools and programs, and to passing on what I’ve learned to other students and professors.

I would like to thank Eric Remy and the IT department for making my internship this summer possible; without their resources and his knowledge, I would not have come this far. I would also like to thank the DTSF group, especially Ryan Gonzalez ’19, for his help in getting me up to speed with photogrammetry. They did a wonderful job of ensuring I hit the ground running during my first week. Finally, I would like to thank Professor Tina Gebhardt for nominating me for this position; without her belief in me, I would still be unversed in this new link between technology and the art world, and the applications of each to the other. I cannot wait to see what applications my work will have, and I look forward to being able to continue learning in this field.

St. John Smith ’20

3D Digital Modeling – Week 2 of 3

Introduction

During week one, I learned the basics of 3D digital modeling and 3D printing, with an overview of how the coding and settings of the programs should be set up. Viewing the final project presentations of the DTSF group was a good way for me to end my first week and answer any technical questions I still had; their projects solidified what I had learned. However, week two brought another challenge: autonomy. The Fellows were no longer in IT to answer questions or help problem solve when errors arose. Figuring out my way around Pix4D, Adobe Photoshop, and a new camera, the Nikon D810, was a great way to master the new tools I needed to accomplish my tasks. During week two I became quite proficient with these technologies and equipment, and improved my photography setup in Special Collections with a new LED ring light and fully shadowless image capture, which creates pictures that give the Pix4D modeling software limited room for error, and thus expedites the rendering process.

Goals

On Monday, Eric Remy, Director of Instructional and Educational Technology at Gettysburg College, and I met with Carolyn Sautter, Director of Special Collections, and Catherine Perry, Digital Projects Coordinator & Collections Manager. We reviewed the Chinese art pieces, many of which are from the 19th century, that were being considered for 3D digital modeling, and troubleshot potential problems their shapes and physical surfaces might present to the image capture and rendering process. The goal for the second week was to render three digital models: an ivory-lidded ornamental cricket cage, a carved rhino horn, and a terracotta sculpture of an ancient Chinese warrior’s head.

Ornamental Cricket Cage

Carved Rhino Horn

Terracotta Warrior Head

Accomplishments of Week 2

Because the bottoms of the rhino horn and terracotta warrior sculpture were unadorned and partially hollow, they were suited to a turntable with a digitally tagged surface, which produces better renders even though the bottoms of the items are not captured. For the cricket cage, however, I tried to get all sides using a blank turntable and background, as I did with the trilobite during week one. But when I rendered those untagged images through Pix4D, the different angles were unable to mesh, and produced a digital model covering only about 70% of the object’s circumference rather than a full 360° model. After retaking images of the cricket cage, the rendering process was very long, and had some errors because of the cage’s symmetrical design and engravings. The program had trouble recognizing that it was a cylindrical object, and would fail to produce a fully rendered, meshed image of the cricket cage. Therefore, this week was mostly comprised of trial-and-error practice renders. I experimented with lighting conditions, picture angles, manual tie points, and project merging.

Using large batch automation and droplets in Photoshop, I created directives by which Photoshop would automatically edit the contrast, color and brightness of the pictures, and save the hundreds of photos to a USB drive. In doing this, I ensured that the lighting of the pictures would be consistent across all of the cameras generated by Pix4D. Editing the contrast or color of the pictures also allowed certain features to stand out, sharpening the quality and accentuating edges and standout features that the program could easily identify during rendering.
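For illustration, the same kind of batch pass can be sketched in Python with the Pillow library (my actual directives were Photoshop droplets; the folder names and adjustment factors below are hypothetical):

    # batch-edit photos: a Python/Pillow analog of the Photoshop droplet pass
    from pathlib import Path
    from PIL import Image, ImageEnhance

    SRC = Path("raw_photos")       # hypothetical input folder
    DST = Path("edited_photos")    # hypothetical output folder (e.g. the USB drive)
    DST.mkdir(exist_ok=True)

    for photo in sorted(SRC.glob("*.jpg")):
        img = Image.open(photo)
        img = ImageEnhance.Brightness(img).enhance(1.05)  # consistent lighting
        img = ImageEnhance.Color(img).enhance(1.10)       # consistent color
        img = ImageEnhance.Contrast(img).enhance(1.20)    # accentuate edges
        img.save(DST / photo.name, quality=95)

Applying the same adjustments in the same order to every photo is what keeps the lighting consistent across all of the cameras Pix4D generates.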

Week 2 was important for my learning curve, as I became really familiar with the work I was doing and how to operate the software efficiently. I determined that while Pix4D is a great orthographic mapper, it struggles with shiny objects, as well as items that have black components. Additionally, Pix4D has a lot of trouble merging two angles of an object together, even when manual tie points are identified in every picture. The program also uncalibrates certain cameras – or pictures – automatically. For some reason, the program will favor one side or angle of an object over another, and produce a 3D model that does not have all sides, or has holes. This was the problem I kept having over and over with the terracotta warrior and cricket cage.

Conclusions from Week 2

Week two really solidified my expertise with Pix4D, and was a week of simplifying the image capture and model generation process. My attempts during most of this week created disjointed objects, or objects with holes. The main issue was that the Pix4D program was really designed for orthographic models, not small objects. The examples hosted on Pix4D’s customer support site were good, and the workflows made sense, but all of the examples were renders built from drone flights mapping buildings or landscapes. Because the program is not designed to deal with the intricacies of small-object rendering, it struggled with depth perception, and with recognizing that the shadows of fine details changed as the object turned.

Eric and I concluded that it was best to search for another program to model the items Special Collections had given us. At the end of the week we found Autodesk Remake, which is specially designed to make 3D digital models of small objects.

Week 10 and Conclusion

Before I begin, I’d like to give a basic rundown of how to use both PLA and SLA printers.

You must have a 3D model to print. You can obtain one from websites such as thingiverse.com and sketchfab.com, create your own using photogrammetry or laser triangulation, or make one using CAD software.

You must slice your file. There are a number of different slicing programs. The most important thing to remember is that PLA printers require the slicer to produce G-code, while the SLA printer requires images. If you’re using the PLA printer with the Repetier interface, you can configure the slicing settings within Repetier. For the SLA printer I used Creation Workshop. After entering the printer’s information, you import the model and save it. You then click the icon that looks like a slice of cheese.

If you’re using the PLA printer with the Repetier interface, you press print and it should begin on its own. If using the SLA printer, you must open Kudo3D’s printing software, which is the interface for the printer. You enter the configuration settings for the resin you’re using, and then you upload one of the pictures from the folder that contains the sliced model. You press print and the software should display each layer according to your settings.

Week 10

Week 10 ended with a bit of frustration. Eric presented me with an upgrade kit for the Titan. Its biggest feature was the Raspberry Pi that controlled the print. Because of the school’s firewall, and the inability to log into the Pi directly, it was a bit of a struggle actually setting up the upgrade. The Pi couldn’t be used without connecting it to my laptop to share internet. From there I would assign it an IP address and use that to enter Kudo3D’s version of OctoPrint. I was able to print the calibration sample they had uploaded using the printing parameters they provided, but there were no directions on how to actually upload sliced models into the OctoPrint interface. I sliced my models using Slic3r, which produced SVGs. I converted these into .png files and zipped them (the same way the calibration sample seemed to be uploaded). However, the software would not recognize the files. I tried a number of different things, but none seemed to work.
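For reference, the conversion step can be sketched as follows (a Python sketch assuming the cairosvg library for rasterizing; the folder and file names are hypothetical):

    # convert Slic3r's SVG slices to numbered PNGs, then zip them the same
    # way the Kudo3D calibration sample appeared to be packaged
    import zipfile
    from pathlib import Path
    import cairosvg  # pip install cairosvg

    slices = sorted(Path("sliced_model").glob("*.svg"))
    out = Path("pngs")
    out.mkdir(exist_ok=True)

    for i, svg in enumerate(slices, start=1):
        cairosvg.svg2png(url=str(svg), write_to=str(out / f"{i:04d}.png"))

    with zipfile.ZipFile("model.zip", "w") as zf:
        for png in sorted(out.glob("*.png")):
            zf.write(png, arcname=png.name)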

Attempts:

  • Deleting the first image in the folder so the sliced images start at 1
  • Moving the calibration PNGs out of the unzipped folder, putting my model’s PNGs into it, and then zipping that folder
  • Renaming my model’s folder to match Kudo’s calibration sample’s name

As the week came to an end I had to focus on my presentation. After not being able to successfully upgrade the printer I had to disconnect it from the pi and connect back to the laptop I had been using for the past 10 weeks. I hope to be able to solve this problem at some point in the future. I emailed Kudo, but they still have not replied.

Conclusion

The 10 weeks had their ups and downs. Some weeks flew by and others went really slowly. I was really happy with the amount of information I learned about 3D printing; however, there is still a lot to learn. 3D printing is a technology that is just becoming accessible. I hope that I can put my experiences and knowledge to use in the future, whether that’s expanding on what I’ve learned or delving into another branch of 3D printing. I’ve never really had the opportunity to venture into a project of my own before, and I think I learned a lot about myself and my own workflow. I think one of the most important things when it comes to these types of projects is being passionate about what you’re doing. It’s very easy to lose motivation when things don’t work smoothly, but as long as you know what you want and what you hope to gain, it’s that much easier to stay focused. I’d like to thank Istvan Urcuyo, one of the directors of the STEMscholar program, for suggesting that I apply to the program in the first place, and Eric for making it possible. Next summer is still a while away, but I hope that I can return with new knowledge acquired throughout the year to apply to, and further, the work I did this summer. What I enjoyed most about the summer was how I was able to work in conjunction with my peers throughout the week, and how our projects were in a way interconnected. I look forward to applying my knowledge throughout the year, helping professors who might find my work useful in the classroom.

Week 10 Post & Conclusion

This write-up marks the tenth week of my DTSF internship. We’ve all tied up our research and finished strong with our presentations; all that’s left is to look back at what we’ve accomplished and plan where we go from here.

Before I get into the meat of this post, I want to start by thanking Eric, the IT department, my coworkers, and the program as a whole, for giving me the opportunity to experience such a unique internship. Of all the ways I’d have expected to spend my summer, the last would be 3D modeling with a drone. I had no idea the program even existed until a few months before research began, and when I first started I never thought I’d be able to accomplish my end goals; in fact, if you had asked me during the first few weeks, I’d have said I was in way over my head. Most of my initial schedule was liberal guesswork; I had no idea how much or how little effort I’d need to properly achieve my milestone goals. Yet, at the end of it all, I can confidently claim proficiency in drone photogrammetry. I had no idea what my research topic was until Eric came forward and suggested it to me, other than what little tangential knowledge of 3D modeling I already had. A few hours of research and a couple of meetings later, I was setting out to learn something I’d never even heard of before. Despite this, I have no regrets. I fell in love with my topic and found the balance between hands-on work and computer editing to be a refreshing change of pace. This was the best motivation, second to watching the work Jon and James were doing alongside me. Though we all had different topics, every one of us found ourselves consequently linked by them. My work with small objects tied to Jon’s printing, and his printing tied to James’s automation. Given the time, we could potentially generate cohesive workflows that from start to finish would print any real-world object. Working alongside them was a pleasure, and I’d happily do so again for that sort of goal.

The experience was about as much learning about our research as it was learning to function independently. Everything Jon, James, and I learned was learned through trial by fire. If we had an idea we tried it; if it failed we fixed it; if we couldn’t, we’d scrap it. There was no hand-holding, and other than the occasional word of guidance we were truly left to fend for ourselves. However, I think all three of us thrived in such an environment. At the end of our work, all of us had plenty to show for our efforts and could confidently research as independents.

As I previously stated, all of our topics were related to each other. My work in drone photogrammetry is meant to create copies of real-world objects in a computer environment, Jon’s work in 3D printing is meant to remake those models at much smaller scales, and James’s interface is meant to employ Jon’s work in a streamlined, remote method. Despite being interconnected, the nature of our research differed between the three of us. From what I understood, Jon’s goals were relatively focused: generate prints in high resolutions. James’s work took the form of milestone goals, tackled as he climbed toward remote 3D printing, striking down tangential goals along the way. My research was to model anything presented to me in a 3D environment, more of an all-encompassing statement than a single goal or series of projects. No matter the scale, I needed to know how to make renders of any object that is useful to both students and faculty. This meant my research could cover anything from shiny rocks to dorms and campus quads. Instead of tackling goals progressively like a ladder, I needed to develop methods that expanded the umbrella I worked under. This proved both a blessing and a curse; though most knowledge didn’t compound upon itself, general techniques I learned or developed could carry between projects. Improving upon one type of model would often by nature improve the rest as well. In this way, gray areas were slowly fleshed out, and over time all the difficult in-betweens were filled in.

Progress Reports

So what exactly did I cover under the umbrella of models I addressed? When I began planning out how I’d devote my ten weeks, I broke photogrammetry modeling down into four archetypes: turntable, statue, building, and grid-scale modeling. These four groups were meant to cover any and every subject presented to me, so that I’d be fully prepared for anything thrown at me. This determined what models I’d be developing, but the how and why were left open. To address this, I planned to familiarize myself with, and eventually master, any software I could find that expedited my workflow. This proved easier said than done for two principal reasons. First, there’s a lot more software out there to test and learn than I initially thought. Second, almost none of it turns out to be straightforward. The sheer number of options and approaches I found was absurd, and I learned the hard way that the more expensive any software is, the more complicated it usually proves. What follows is a list of the goals I initially set out to finish, and my progress on each.

  • Model Objects of Any Scale: This is a goal too complicated to answer in one small paragraph. As mentioned, there are four archetypes I broke this goal into, each really serving as its own small project. I’ll exclude the gritty detail, but my progress as a whole was enough to claim success. My progress on each is detailed below.
    • Turntable-scale Modeling: the process of generating small models in full 360° renders. This subject is tricky because, though the goal is achieved, I’m admittedly not alone in doing so. I managed to finish models from a single perspective using the Pix4D official workflow; however, that was as far as it carried me. Using their method for fully rotatable models was not only unnecessarily complicated but failed to ever yield sufficient results. It wasn’t until I began working with St. John later that we spitballed a new way to generate turntable models using a tweaked method, with which he made the first complete render. There are still issues to be addressed, but overall we’ve proven it’s at the very least fully possible.
    • Statue-scale Modeling: the process of generating models the size of statues. In no way does this mean these models have to be statues; it just turns out that most models of this size will be one. If you need proof of how successful our process was, look simply to our Lincoln Statue models on Sketchfab. There are parts that can be fleshed out, but adding detail is little more than that. The general workflow is done; all that’s left is to build where the process is lacking.
    • Building-scale Modeling: the process of generating models of large structures. Buildings, monuments, construction sites, anything where a simple camera isn’t enough. This is where I actually began to break out the drone, and it proved to be one of the faster workflows. Most of the high-end models I have on Sketchfab are of this scale. There’s little else to be done; anything that could be fixed would be addressed in statue-scale modeling.
    • Grid-scale Modeling: the process of generating models of large open areas using a drone, flying in a grid-like pattern to get many images and angles. These are the models with the most room for improvement, but I found that their purpose is not in the model itself, but the data it offers. Most often the models look nowhere near as attractive as their counterpart archetypes; however, they effectively provide volume measurements and altitude maps the others cannot. What’s left is to improve results to ensure final products are truly accurate measurements, if not already, and to begin making hi-res grids (more for show than anything else).
  • Master Pix4D: this is the photogrammetry software we used to make each of our models. I fell too far short to call myself a ‘master’ of the program, but I’ve learned more than enough to claim proficiency at the very least. The sheer number of settings and quirks associated with this program is dwarfed by the time spent waiting for it to power through every project as it renders, making it difficult to try and test them all even with full weeks to dedicate to the process. There is a slew of guides online provided by the company that developed the software; however, a great deal of the information they provide I had already learned myself, or it proved unnecessary as I worked along. There’s always room to learn, though, so I may find myself becoming reacquainted in the near future.
  • Employ Smartphone Software: in short, yes, this goal was completed. The DJI Go 4 app is required to do anything complicated with our drone, but what the goal really referred to was the Pix4Dcapture app employed during missions. In short, the app lets us pre-plan flights to make drone work quicker and more efficient. The program itself is awful; it does very little of what it claims it can do. The core of its purpose, however, and what we really needed it for, is functional, and after learning to work through its kinks and growing pains, the app shined when it was needed most. The DJI Go 4 app seems to be capable of doing what Pix4Dcapture does, but to what extent and with what flexibility are unknown to me. I was short on time when I realized this, and because I spent so much time on Pix4D, I had little time to spend on the drone itself and its associated software.
  • Online Sharing: I found a website called Sketchfab and began uploading all of my files to it. This served not only as an effective and streamlined way to share models for embedding and downloading; it also was completely free. I have no complaints regarding this goal and can say I fully achieved it; if anything, my gripe is that Sketchfab could improve upon its VR support.
  • Virtual Reality (VR): This goal is really meant to show off more than anything else. We simply didn’t have the time or hardware to toy around with this enough to claim effective implementation. However, Sketchfab, as mentioned, has support for the platform, and since the models are already up on the site, there’s no reason not to experiment with it in the future.

That’s about it. When broken down my research is more or less mastering photogrammetry and drone imaging, thus gauging progress is simply asking how close I was to doing so. If you were to take any of the above information and really distill it down as best you could, you’d essentially have my presentation from Friday minus any conjecture or live demos I had. What’s left, really, is where to go from here.

Future Research

At the core, my work is done. However, that’s about as good as claiming an unfurnished house is ready to be lived in. I’ve done what I needed, learned what I could, and became proficient enough to teach. Assuming we even continue my research, the question is what’s left to do, for me or for those picking up in my place. Going ahead, there’s not a lot left to learn. All that truly remains is refining my models to the point of being professional grade and scrubbing workflows until they are as concise and reliable as possible. However, they’ve already reached the point where they’re applicable to research and work throughout other departments of the college, so that may not even be necessary. If I am able to work again within the DTSF program, there are three objectives that should be addressed before the project can truly be considered finished.

First, 3D printing. Digital models are useful, but it’s nothing like holding the subject in hand. Making matching scale models of a particularly fragile artifact can turn a fossil on display into show-and-tell, and printed copies of objects in Special Collections could allow students to study subjects in at least moderate detail from the comfort of their dorms. Miniatures of Glatfelter or on-campus statues, at the very least, could sell for decent prices in the bookstore.

Second, project merging. It’s something I’ve played with in my head but never had the chance to experiment with. Essentially, the project would be to make as many models of landmarks and buildings on campus as possible, make low-detail maps of everything between, and combine them all to generate a complete and cohesive 3D map of campus. The sheer processing power and time required would be immense, though, and would require crisp models with similar conditions and details to create results worth the effort spent. This would be the ultimate end-game project.

Third is indoor modeling. I experimented plenty with generating models outdoors, but I never once tried anything indoors. I have no clue how the program would handle hallways and rooms, but being able to generate full 3D models of indoor settings would be an incredible asset to the college. Recreating galleries or buildings could be used in 3D tours, and data sets sent to power users by professors on sabbatical could return full models that researchers could revisit at any time.

There’s any number of projects that could branch directly off of my research, and even more that could go individually. At the very least, I can act as a resource for students and professors interested in the subject.

Conclusions

Again, I’d like to thank Eric, the IT department as a whole, the DTSF program, and my coworkers for the experience I was given this summer. I never imagined I’d have spent the past ten weeks the way I did, nor would I have hoped to learn what I came out with by the end. Jon, James, and I have all made incredible projects within the emergent fields of our research worthy of our pride and efforts. I hope to see more from their work and my own in the future to come.

Ryan

Week 10 & Conclusion

With the research completed, the final presentations over, and the start of a new school year fast approaching, it’s time to reflect on the ups and downs of this summer and set new goals moving forward.

First, I would just like to say how happy I was with the program as a whole. I was not expecting to make the sort of progress that I did, and was able to come away not just with a good research experience, but also with a solid deliverable — and more than enough substantive material to fill a two-minute elevator speech. If you had told me in April that I would be able to answer the “What did you do this summer?” question with “I built a completely autonomous process for a 3-D printer to accept new jobs via email,” I wouldn’t have believed it. Ten weeks ago, the complexity of automation was barely on my radar (I wrote it off with one sentence in my first blog post — “Week 6: Add wireless capability to FL-Sun7 3D. This can easily be done using a Raspberry Pi and the open source software Octoprint”), and I had been more focused on improving the temperature sensors on my 3D printer. While this could still be in the cards for the future, Jonathan’s work this summer is what really allows high quality prints, and it seems to me that my process will serve best for rapid, remote prototyping of ideas. In fact, Jonathan’s, Ryan’s, and my work all proved to be much more interconnected than we had initially thought. Our research this summer laid the groundwork for a future in which someone could, with minimal effort, use a drone to capture and create a 3D model of any subject (Ryan’s work), rapidly prototype a print of that model remotely (my work), and then print it later as a high quality resin model (Jonathan’s work). The exact workflow for this is still a little ways off, but it was awesome to see the flashes of brilliance from everyone in the group that signified how close we are to something like that.

We also learned the importance of remaining flexible, yet motivated. The focus of my project might have changed three or four times throughout these ten weeks, but I never at any point felt that I wasn’t making positive progress. Proper documentation, as practiced in things such as these blog posts, serves to prove to oneself just how much has been tried, even if not that much has been accomplished. We were able to easily hold each other accountable, and exercises such as the biweekly lightning rounds with the DSSF scholars helped to serve as benchmarks for our progress. The occasional joint workshop, while not always catering specifically to our needs, also helped to mix things up and prevent monotony.

Most importantly, we gained experience conducting independent research. I never felt as though I was being watched during this program, but instead I was decidedly aware of the wealth of resources available to me should I need them. Upon encountering a problem, the process usually went like this: First, Google (duh). Then, ask coworkers or our friends in IT. If that didn’t work, we always knew we could go to Eric or Sharon and receive further guidance. It is impossible to overstate the importance of being able to solve your own problems.

On that note, I should probably take note of what I accomplished in my final week. I spent a long time trying to help Jonathan set up his new kit (which I’m sure he will go over himself), and we also set up a dedicated Raspberry Pi for him to test configurations on PLA prints. In my research, I did manage to implement the solution to the problem I concluded with in my last blog post — allowing the user to cancel a job mid-print — although not in the way I had originally hoped. Because I only had one touchscreen device, I decided as the program came to a close to change my application to work primarily with a keyboard, for ease of use across devices. The user-program interaction now goes as follows (a sketch of the node’s side of the loop appears after the list):

  1. The user sends an email, containing a .stl file, to the email address I have set up.
  2. The handler program notices a new email, and checks the sender’s email address and their secret code against the database, also verifying that there is an attachment present of filetype .stl.
  3. After passing the required checks, the handler creates a new job in the database, and assigns the job to a vacant printer.
  4. The node program notices that a print job has been assigned to its own printer id, downloads the correct attachment from the email, slices the print, and starts the job, updating its progress online as it goes.
    1. Should the user wish to cancel the print midway through, they press Control + C on the keyboard connected to the RPi that is processing their print. In the future, this could be done remotely.
    2. If the print is cancelled, the program asks the user if they would like to re-attempt the print. If the answer is no (given through keyboard input), both the .stl file and the job in the database are disposed of. Otherwise, the print is re-attempted as normal.
  5. If the print continues to completion, the node waits for the user to retrieve the print off the bed. Once it has been retrieved, the user lets the node know by inputting the correct command on the keyboard.
  6. Once the node registers that the completed print has been removed, it disposes of the .stl file and removes the job from the database, signaling to the handler that it is available to accept a new job.
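Below is a condensed Python sketch of the node’s side of this loop (the table, column, and file names are hypothetical, and sqlite3 stands in for the real database; Control + C surfaces in Python as KeyboardInterrupt):

    # node loop sketch for steps 4-6: poll for an assigned job, print it,
    # allow mid-print cancellation, and free the printer slot when done
    import sqlite3, time

    PRINTER_ID = 1  # this node's printer

    def run_print(filename):
        # placeholder for downloading, slicing, printing, and progress updates
        print(f"printing {filename}...")
        time.sleep(5)

    db = sqlite3.connect("jobs.db")
    while True:
        row = db.execute("SELECT UID, FILENAME FROM JOBS WHERE PRINTER_ID=?",
                         (PRINTER_ID,)).fetchone()
        if row is None:
            time.sleep(10)              # nothing assigned yet; keep polling
            continue
        uid, filename = row
        try:
            run_print(filename)         # step 4
        except KeyboardInterrupt:       # step 4.1: Control + C cancels the print
            if input("\nRe-attempt print? (y/n) ") == "y":
                continue                # step 4.2: re-attempt as normal
            db.execute("DELETE FROM JOBS WHERE UID=?", (uid,))  # dispose of the job
            db.commit()
            continue
        input("Press Enter once the print is removed from the bed: ")  # step 5
        db.execute("DELETE FROM JOBS WHERE UID=?", (uid,))  # step 6: free the slot
        db.commit()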

This process generally works flawlessly, and I was able to demonstrate it fully during my final presentation on Friday! However, there are currently a few drawbacks. Occasionally, the handler or the node briefly loses connection, which causes connections to the mail server or database to fail. I have implemented try/catch blocks to attempt to retry the connection when this happens, but for some reason it doesn’t correctly re-establish.
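One pattern that may fix this (an assumption on my part; I haven’t verified it against my setup yet) is to rebuild the connection object from scratch on every retry instead of retrying operations on the stale one inside the try/catch:

    # retry helper: create a brand-new connection per attempt rather than
    # reusing a connection that has already gone stale
    import time

    def with_retries(make_connection, action, attempts=5, delay=10):
        for attempt in range(1, attempts + 1):
            try:
                conn = make_connection()   # fresh socket each time
                return action(conn)
            except OSError as exc:         # dropped-connection style errors
                print(f"attempt {attempt} failed: {exc}")
                time.sleep(delay)
        raise RuntimeError("could not re-establish the connection")

    # usage would look like, e.g.:
    #   with_retries(lambda: imaplib.IMAP4_SSL(MAIL_HOST), check_inbox)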

Continuing Research

I would certainly like to continue working on this project, and I have been presented with the opportunity to do so by using the project as my senior capstone project for Computer Science. Were I to accept this opportunity, I would likely take steps to implement a number of new features into the program, such as:

  • Working with IT to allow a real login process, using a Gettysburg ID
  • Allowing remote upload of files via the website
  • Implementing webcam support to watch prints as they complete
  • Alternative status alerts on prints, such as email or SMS
  • A robust scheduling system to assist with the timing of print jobs
  • Training system to show how to build and maintain the printing setup (already underway)

To conclude, I would just like to repeat how lucky I feel for being selected to this program and how proud I am of my own results and my peers’. This is the beginning, not the end, of my work in this area.

James

3D Digital Modelling – Week 1 of 3

My name is St. John Smith. I’m entering my sophomore year at Gettysburg College, and I intend to declare an Organizational Management Studies major. The purpose of my research and work is to learn the intricacies of photogrammetry, to adapt the 3D models I create using the Pix4D photogrammetry software, and to render the final results in Sketchfab for public reference. My work will primarily focus on capturing photographs of items in Gettysburg College’s Special Collections Department that are either too fragile or too expensive to handle in an academic setting, rendering the images through Pix4D, making appropriate edits, and uploading the finished digital 3D models to Sketchfab, an easily accessible platform where students can manipulate and study the virtual models while the originals stay safely intact.

Goals and Introductions 

The first week of my internship was an introductory phase. I shadowed Gettysburg’s Digital Technology Scholarship Fellows (DTSF), and learned the ins and outs of each of their specialized interests. From Ryan Gonzalez ’19, I learned how to use the Pix4D program and created my first digital 3D models. I spent the majority of my time focusing on mastering Pix4D while Ryan was still at Gettysburg, as it would be the software I needed most in the coming weeks to capture artifacts and models from Special Collections. Ryan also walked me through several test flights using the Information Technology Department’s DJI Mavic Pro drone. The drone pairs with the user’s smartphone, allowing one to program flights for it to follow using the Pix4Dcapture mobile companion app and take hundreds of pictures of a subject. The drone can fly grid, double grid, circular, or manually flown flights, all while pointed at a subject, capturing 4K-resolution images to later be transferred and rendered in the PC Pix4D software. After learning to fly the drone, Ryan supervised my own flights, during which I captured Gettysburg’s Jaeger Center for Athletics and produced a digital 3D model, shown below.

Jonathan Trilleras ’20 showed me the basics of 3D printing, and illustrated through test prints and demos the differences between the PLA printers and the high-end resin printer. James Arps ’18 helped me grasp the usefulness and workings of the OctoPrint program, which allows the user to 3D print wirelessly via an internet connection.

Accomplishments of Week 1

As the week progressed, I began to work more autonomously, occasionally seeking out my peers and my mentor, Eric Remy, for advice. Using what I learned from Ryan, I began to address a problem he had been having with Pix4D: it was very hard to render all sides of an object, and the program would generate an image that was disjointed, separated, or unaligned. This issue was of less importance to Ryan and his research than to me, as my primary responsibility would be to render artifacts and decorated objects of significant worth, for which a 360° rendering would be very important. Unfortunately, the official Pix4D small object rendering how-to guide was proving useless. The directions instructed us to place our small object on a tagged turntable and rotate it, then reposition it on a different side and rotate it again, so as to capture images of all of its sides (seen below).

However, after much trial and error, I discovered that the software was using the tags on the turntable as tether points more frequently than the object itself. It therefore did not recognize that the object had been repositioned onto its side, and fused the two orientations together, producing a convoluted 3D rendering.

To remedy this issue, I replaced the turntable covering recommended by Pix4D with a blank white turntable when attempting to render a complete 360° model of a fossilized trilobite.

After retaking pictures of all of the previous angles and rotations, the trilobite render worked flawlessly, and produced a completed digital 3D model, which I then uploaded onto Sketchfab (seen below).

Looking Forward

This week was very helpful, and I especially appreciate the help I received from my peers in the DTSF program; without them I would not be as knowledgeable, nor as proficient, as I have become with the software and processes of producing digital 3D models. I look forward to working with the Special Collections Department in the coming weeks and helping them digitize their fragile and valuable items. I’m still attempting to find a solution to a problem I’ve found with black and shiny objects: Pix4D misinterprets them due to their reflective sheen, and most of the time mistakes the black for negative space. This is a problem I will continue to attempt to solve next week by using more models and trying different settings in Pix4D’s imaging and processing options.

Week 9

Week 9 has been centered around developing workflows and ironing out the final kinks before the big presentations next week. So far I’ve finished three workflows, one for each classification of 3D model; I plan to do Geological Surface Mapping and Drone Flight on Monday, and add or iron out any remaining workflow-related material on Tuesday.

Other than toying with a heavy data set project and a couple mockup projects I ran to show off, I really don’t have much field work left to do. A fellow is supposed to be coming in this week to see my work, and I will likely spend a decent portion of Week 10 with him and Eric passing on what I know. Otherwise, I’m fairly certain this past week was the last I had to spend on drone and data work. I still have a model of Glatfelter in the mix, but I imagine teaching will take precedence over finishing projects.

Though they’re nothing incredible or groundbreaking, the models I made show how final results have leveled out, along with a number of new features I never could put to use before, so I’ve included them below.

Models

https://skfb.ly/6swLO

The Observatory was a simple manual flight project I made to demonstrate that large-scale models can be produced manually as well, though I find there isn’t much use in doing so without a specific reason. As long as a sufficient number of photos are taken and the flight path keeps a reasonable distance from the subject, there shouldn’t be many issues. However, knowing it’s possible to create high-end models manually is useful in scenarios where the subject is obstructed by trees or other objects, or where a strict circular pattern is impossible. More importantly, it helped establish a baseline data volume for individual scenarios. It’s important to note that the Observatory isn’t a very large structure, and settles somewhere between building and statue scale.

This model of the Chapel on campus is similar to the Observatory in that it was taken using a manual flight, but the dataset was many times larger. As a result, the process was far too time and resource intensive with little increase in results, proving that too much of a good thing is indeed a bad thing. I never had the opportunity to see the model through to its end, but I learned more than enough from the experience to say it was worthwhile.

Third is the Glatfelter model, taken using two rings instead of one. I didn’t have the time to go in and edit the model, but the results so far have been very promising. It’ll take more time to clean, but I think two complete circles with minimal angle between cameras is the ideal setup using Pix4Dcapture.

https://skfb.ly/6swpA

The Turtle Farm is a simple test run used to demonstrate a successful geographic model in Pix4D. Unlike the other 3D models, large maps are meant to capture volumes and data rather than visual appeal; as a result, the output 3D model is nowhere near as attractive as the other final projects posted. However, the program more than makes up for these shortcomings with DSM and orthomosaic captures and volume readings useful in agriculture and scientific research.

The volume tool allows users to pick out areas and calculate encompassed volumes, useful in architecture, construction, and agriculture. This model was made relatively quickly, and could easily be incorporated into a daily routine as part of a checkup on crops or work progress.

Equally impressive are the orthomosaic and DSM renders, shown above respectively. The first is a 2D flattened map generated from the images taken; the second is an elevation map, built from it and other data, that allows users to see at a glance how the geography of an area changes.

These models were taken using a double flight grid instead of a circular pattern, similar to the style of the baseball field from previous work.

Looking Ahead

Honestly, there isn’t much time left; the DSSF group presents Thursday and DTSF presents Friday. I will likely spend any free time I have Wednesday and Thursday preparing and refining my presentation to make sure I miss nothing and know it well enough to stand in front of a crowd. Thankfully, my Lightning Round PowerPoint already stands as an effective bare-bones structure I’ll be able to build upon. It’s weird to think that work here is almost done; it’s been a long ten weeks, but somehow it seems too early to be finished.

Week 9

Approaching the end of the project, and everything is finally coming together.

The first thing I did this week was to redistribute my workflow across multiple Pis in order to allow for better scalability. I now have Pis serving two different kinds of functions: one called a handler, and one called a node.

Handler

While this program will eventually be run off of its own dedicated Pi, it can be run from anything, including my own personal computer. What the handler does is somewhat self-explanatory: it handles the assignment of jobs to each printer on the network. Upon starting, the handler searches the email inbox for new mail that has not been added to the database (a process described earlier — I essentially just copied the code from my previous week) and adds it. If all the necessary security checks pass, it adds a line to the JOBS table in my database, assigning the job to an (imaginary) printer with PRINTER_ID = 0.
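A compressed sketch of that intake pass (imaplib and sqlite3 as stand-ins for my actual mail and database libraries; the server, table, and column names are hypothetical):

    # handler intake sketch: new mail that passes the checks becomes a JOBS
    # row assigned to the imaginary printer with PRINTER_ID = 0
    import email, imaplib, sqlite3

    def passes_checks(msg):
        # stand-in for the real checks (trusted sender, secret code in the
        # database); here we only verify an .stl attachment is present
        return any((p.get_filename() or "").endswith(".stl") for p in msg.walk())

    db = sqlite3.connect("jobs.db")
    db.execute("CREATE TABLE IF NOT EXISTS JOBS (UID TEXT, PRINTER_ID INTEGER)")

    mail = imaplib.IMAP4_SSL("imap.example.edu")        # hypothetical server
    mail.login("printbot@example.edu", "app-password")  # hypothetical account
    mail.select("INBOX")

    _, data = mail.uid("SEARCH", None, "ALL")
    for uid in data[0].split():
        uid = uid.decode()
        if db.execute("SELECT 1 FROM JOBS WHERE UID=?", (uid,)).fetchone():
            continue                                    # already in the database
        _, msg_data = mail.uid("FETCH", uid, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        if passes_checks(msg):
            db.execute("INSERT INTO JOBS VALUES (?, 0)", (uid,))
    db.commit()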

Once that is done, the handler retrieves all jobs from the JOBS table with a printer ID of 0 (there may be many, as there could be multiple prints waiting for a free printer). With a list of all available jobs, it then compares the jobs which are currently in progress (those with a nonzero printer ID assigned to them) against the master list of printer ID numbers. So, for example, if the network has four printers with IDs 1, 2, 3, and 4, and the JOBS table shows jobs already running on printers 1 and 3 plus a file unassigned.stl with a printer ID of 0, then the handler will reassign the PRINTER_ID value for the row containing unassigned.stl to be equal to either 2 or 4, since those are the printers which currently have no jobs assigned to them. Once this is done, the handler continues scanning the mail inbox for new messages, and leaves the rest of the work to the nodes.
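The reassignment itself then reduces to a couple of queries (continuing the same hypothetical schema as the sketch above):

    # handler assignment sketch: hand waiting jobs to whichever printers are idle
    import sqlite3

    PRINTERS = [1, 2, 3, 4]                        # master list of printer IDs

    db = sqlite3.connect("jobs.db")
    waiting = [r[0] for r in db.execute(
        "SELECT UID FROM JOBS WHERE PRINTER_ID = 0")]
    busy = {r[0] for r in db.execute(
        "SELECT PRINTER_ID FROM JOBS WHERE PRINTER_ID != 0")}
    idle = [p for p in PRINTERS if p not in busy]  # e.g. printers 2 and 4

    for uid, printer in zip(waiting, idle):        # one waiting job per idle printer
        db.execute("UPDATE JOBS SET PRINTER_ID=? WHERE UID=?", (printer, uid))
    db.commit()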

Nodes

Instead of scanning the email inbox for new messages, the nodes simply check the JOBS database for jobs which have been assigned to their own PRINTER_ID. Once it finds one, the node fetches the email with the given UID, downloads its attachment (which has already been verified to be of the correct filetype and from a trusted sender by the handler, though the node rechecks this information), and uploads it to the OctoPrint instance it is running locally. Due to the school’s firewall restrictions preventing the generation of temporary API keys (meaning the only way to guarantee a print will go through is to actually log into OctoPrint from the Pi’s local network, which is impossible on a headless setup not connected to another computer), one of the changes I had to make when implementing the nodes was to switch them over to OctoPi’s GUI setup (the desktop version). While this is less than ideal, because it would be nice to have every Pi running completely on its own without a monitor, there are workable solutions, and in the long run it will help with debugging. I got a (very small) touchscreen monitor for my Pi, and used it to log in to OctoPrint and run my node program.
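The upload itself goes through OctoPrint’s REST API. A minimal sketch using the requests library (the host, API key, and file name are hypothetical, and the file must already be sliced to G-code):

    # upload a sliced file to the local OctoPrint instance and start the print
    import requests

    OCTOPRINT = "http://octopi.local"     # hypothetical local instance
    API_KEY = "REPLACE_WITH_API_KEY"      # generated in OctoPrint's settings

    with open("job.gcode", "rb") as f:    # hypothetical sliced job file
        resp = requests.post(
            f"{OCTOPRINT}/api/files/local",
            headers={"X-Api-Key": API_KEY},
            files={"file": f},
            data={"select": "true", "print": "true"},  # queue and start at once
        )
    resp.raise_for_status()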

After the node starts the print, all it has to do is wait until it is finished, collecting updates on the print progress and pushing them to the database, from which they are displayed on a website I made.

Once the job reaches 100% completion, it waits for the button to be pressed before removing the job from the database, freeing up its slot as an available printer, and it waits to accept any new job the handler assigns it.

The node and the handler work great together, and the system seems to be just as scalable as I had hoped. One remaining challenge is that I currently have not figured out how to handle a print that is going wrong, i.e. the user wants to cancel it in the middle of the job. This could be remedied by installing a larger monitor, as the user could then use OctoPrint’s interface to cancel the job and the node would simply restart the print. However, this might not be the optimal solution and is something I am going to be exploring in the coming week.

In the future, the webcam stream of the prints would also be shown on the website, but I am currently unsure about the viability of the webcam idea given the time constraints I am working under. I think it may be better to spend the last week fixing the problem of a failed print and working out any other bugs that I may find, as well as writing detailed documentation for the various parts of my project which may need to be serviced or altered in the future by persons other than myself. My final blog post will contain links to this documentation, as well as a summary of my last progress points and where the project will go from here.