
SYNERGY – NH Science Podcast (Weeks 3 & 4)

UPDATE: Future videos will continue to be placed on our YouTube account in our playlist, “SYNERGY – The NH Science Podcast Episodes.”

The New Holstein Science Podcast is here to share interesting news in science! Tune in to learn about current scientific events that have profound effects on society!

Week 4 Topics Include:
-Atlas Robot by Boston Dynamics
-Virtual Reality and Augmented Reality
-Nanoscience and Nanotechnology
-DNA “Trojan Horse”
-DNA Origami

Week 3 Topics Include:
-Black Holes
-Déjà vu
-3D Printed Body Parts
-Bionic Spinal Cord
-Cyborgs
-Artificial Intelligence

Cast: Matt Rupp, Max Knauf, Isaac Weber, Ethan Weber (myself), and guest Doug Kestell.

Thanks for watching!

SYNERGY – New Holstein Science Podcast

Recently, I started a science podcast at my school with my AP Biology teacher Mr. Rupp, my friend Max Knauf, and my brother Isaac Weber. We are a student-run science podcast at New Holstein High School in Wisconsin. Most Thursdays at 4:00 PM CST, we broadcast live on YouTube, discussing scientific events from around the world. Our YouTube channel can be found at https://www.youtube.com/channel/UC2B79GS42LNYAW4N_KQdvqA.

This is our first science podcast, “SYNERGY – NH Science Podcast (Week 1).” Please ignore the first minute; we didn’t know we were live.

Haar Feature-based Cascade Classification using Java OpenCV 3.0 for Object Recognition

After participating in a hackathon (MHacks 6) at the University of Michigan–Ann Arbor, I have continued my work to help the visually impaired. During the hackathon, our team became particularly focused on the object-detection technique known as Haar feature-based cascade classification. (More information can be found at http://docs.opencv.org/2.4/modules/objdetect/doc/cascade_classification.html.) This method for object detection involves taking many photos and cropping each one separately to “train” the classifier on what types of objects to recognize. Unfortunately, this process is very tedious and time-consuming. After much research and practice, I have developed an elegant solution to aid in efficiently developing these classification templates for object detection.

My program design differs from other techniques in two significant ways: 1) it speeds up the process of generating the files needed and referenced by the OpenCV Haar classifier trainer (information about this process can be found at http://docs.opencv.org/2.4/doc/user_guide/ug_traincascade.html), and 2) it is written in Java, which is rather unusual for OpenCV (most development is done in C++ and C#, but I have chosen Java to practice for my AP Computer Science A course).

In order to “train” a classifier with the OpenCV trainer, one must collect hundreds of “positive images” (images containing the desired object to be detected) and hundreds of “negative images” (images NOT containing the desired object). However, the positive images must not only contain the object; they must also be paired with the bounding-box coordinates (also known as the cropped region) of the object. This is where it becomes time-consuming. There are many programs that allow one to cycle through each photo and crop it individually, but multiply the time spent on one photo by a few hundred and it quickly becomes tiresome! What I did instead was use “color detection” to aid with “object/feature recognition”.
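
For reference, the annotation (“info”) file that the trainer’s sample-creation tool expects lists, on each line, an image path, the number of objects in that image, and one x y width height bounding box per object. The file names and coordinates below are just placeholders:

    positives/img0001.jpg 1 140 100 45 45
    positives/img0002.jpg 1 152 97 46 44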

By using OpenCV in Java to draw boxes around “blobs” of a specific HSV color value, I can isolate, for example, a blue object from its non-blue surroundings. The program then draws a single rectangle around this blue “blob”, which specifies the coordinates of the desired object to be detected. As I move the object around the screen, the webcam feed continues to draw the rectangle in the correct place for each frame. With this information, the program performs the two steps needed for the OpenCV classifier trainer: as it cycles through the frames, it saves each image into a folder (the file path is specified in the code), and it appends the coordinates of the rectangle to a text document. This is essentially the template for “cropping” the images without the tedium of doing it manually.
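
A minimal sketch of that loop, using the OpenCV 3.0 Java bindings, might look like the following. The class name, HSV range, file paths, and frame count here are placeholder assumptions, and my actual ClassifierHelper program (linked below) differs in its details:

    import org.opencv.core.*;
    import org.opencv.imgcodecs.Imgcodecs;
    import org.opencv.imgproc.Imgproc;
    import org.opencv.videoio.VideoCapture;

    import java.io.FileWriter;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    public class PositiveSampleSketch {
        public static void main(String[] args) throws IOException {
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME); // opencv_java300

            VideoCapture camera = new VideoCapture(0);
            // Assumes the positives/ folder already exists.
            FileWriter info = new FileWriter("positives/info.txt", true);
            Mat frame = new Mat(), hsv = new Mat(), mask = new Mat();

            // Placeholder HSV range for a blue object; tune for your lighting.
            Scalar lower = new Scalar(100, 150, 50);
            Scalar upper = new Scalar(130, 255, 255);

            int saved = 0;
            while (saved < 500 && camera.read(frame)) {
                // Threshold the frame to a binary mask of the chosen color.
                Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_BGR2HSV);
                Core.inRange(hsv, lower, upper, mask);

                // Find the largest colored "blob" and take its bounding box.
                List<MatOfPoint> contours = new ArrayList<>();
                Imgproc.findContours(mask, contours, new Mat(),
                        Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
                Rect best = null;
                for (MatOfPoint c : contours) {
                    Rect r = Imgproc.boundingRect(c);
                    if (best == null || r.area() > best.area()) best = r;
                }
                if (best == null) continue; // no blob in this frame

                // Save the frame and append its annotation line:
                // <path> <count> <x> <y> <w> <h>
                // (My actual program also displays the rectangle on screen;
                // that is omitted here.)
                String path = String.format("positives/img%04d.jpg", saved);
                Imgcodecs.imwrite(path, frame);
                info.write(String.format("%s 1 %d %d %d %d%n",
                        path, best.x, best.y, best.width, best.height));
                saved++;
            }
            info.close();
            camera.release();
        }
    }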

At this point, the “positive” image datasets are complete. In order to generate the “negative” image datasets, a similar process is conducted, but this time without needing to “crop” the images. This negative-image program saves the frames individually to a “negative” folder (specified in the program), and it also writes a text document listing where each photo is placed.
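
For reference, the background (“negative”) description file that the trainer expects is even simpler: just one image path per line, for example:

    negatives/img0001.jpg
    negatives/img0002.jpg
    negatives/img0003.jpg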

Finally, after both the “positive” and “negative” datasets are created, they can be used with the OpenCV classifier trainer (once again, found at http://docs.opencv.org/2.4/doc/user_guide/ug_traincascade.html). By following those steps with the files generated by my program, an .xml file will be created that can be used with sample Java programs such as the one found at http://opencvlover.blogspot.com/2012/11/face-detection-in-javacv-using-haar.html.
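
For example, with the generated files in hand, the two OpenCV command-line tools can be invoked roughly as follows. The sample counts, window size, and stage count are illustrative values, not the exact settings from my project:

    opencv_createsamples -info positives/info.txt -num 500 -vec samples.vec -w 24 -h 24
    opencv_traincascade -data classifier -vec samples.vec -bg negatives/bg.txt -numPos 400 -numNeg 500 -numStages 15 -featureType HAAR -w 24 -h 24

Once training finishes, the resulting cascade .xml file can be loaded in Java with org.opencv.objdetect.CascadeClassifier and applied to images via its detectMultiScale method, as in the sample program linked above.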

Although this process may sound complicated, I will continue to work on the design of the program to make it simpler and more elegant. I am attempting to produce an optimal form of object detection in Java because I wish to help the visually impaired. With the working programs I have created, I have the means to “train” a cascade/object classifier on a colored object, and from then on have it recognize the object’s shape and texture regardless of color. It is a very exciting concept, and I look forward to continuing its development for my work for the visually impaired, which started with my hackathon project, Innovation InSight (a few posts below).

Files:
https://drive.google.com/folderview?id=0B6AC0-HiND_6RWpyUkVUT1NzWjg&usp=sharing
This is the link to the two Java programs mentioned above. ClassifierHelper is the “positive” files creator; PhotoTakerHelper is the “negative” files creator. To run the programs correctly, OpenCV 3.0 will have to be configured with Eclipse, the HSV values will have to be changed, and the file paths will likely differ.

It may take some trial and error to get the code to work. It is not fully commented because it is a work in progress, so please feel free to email me at eweb0124@gmail.com with questions about the source code linked directly above. I can provide pictures and detailed explanations of how the code works, as well as how to use the generated files with the OpenCV classifier trainer to produce .xml files. I look forward to helping others in any way that I can. Thanks!

Best Overall Use of Microsoft Technology Award


I was on a team of four that created a project (called Innovation InSight) based on my idea of helping the visually impaired using object recognition and vibration feedback. I was the only high school student on the team; my teammates came from the University of Michigan, the University of Southern California, and the University of Toronto in Canada. The hackathon, a 36-hour event called MHacks 6, took place at the University of Michigan. My team won the “Best Overall Use of Microsoft Technology Award” with our idea and submission. We used a Kinect v2, OpenCV, Haar classifiers, voice recognition, object recognition, an Arduino, a Myo armband, and vibration motors, among a variety of other things. For more information, check out our submission at http://devpost.com/software/innovation-insight (I wrote it at 5 AM with no sleep, so please ignore the typos :D). We are currently continuing our work on the project. The goal is to have the user perceive objects around him/her, use a voice command to select an object, and then use vibration feedback to guide the hand to the object. We proved our concepts at the hackathon, and now we wish to improve on them to help the blind and visually impaired. We also have another website with more information about this at msight.co.