All Photos Tagged openframeworks
Skeleton Tests
First skeleton tests using OpenCV and code from this article: www.eml.ele.cst.nihon-u.ac.jp/~momma/wiki/wiki.cgi/OpenCV...
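The article link is truncated above, so as a stand-in, here is a minimal sketch of one common way to compute an image skeleton with OpenCV (morphological thinning via repeated erosion). This is an assumed technique for illustration, not the code from the linked article.

```cpp
// Morphological-skeleton sketch (assumed technique, not the linked article's code).
// Expects a single-channel grayscale image.
#include <opencv2/imgproc.hpp>

cv::Mat skeletonize(const cv::Mat& input) {
    cv::Mat img;
    cv::threshold(input, img, 127, 255, cv::THRESH_BINARY);           // binarize
    cv::Mat skel = cv::Mat::zeros(img.size(), CV_8UC1);
    cv::Mat eroded, opened;
    cv::Mat element = cv::getStructuringElement(cv::MORPH_CROSS, cv::Size(3, 3));
    while (cv::countNonZero(img) > 0) {
        cv::erode(img, eroded, element);
        cv::dilate(eroded, opened, element);                           // morphological opening
        cv::subtract(img, opened, opened);                             // what this erosion step removed
        cv::bitwise_or(skel, opened, skel);                            // accumulate into the skeleton
        eroded.copyTo(img);
    }
    return skel;
}
```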
openFrameworks
Color pixels from a Van Gogh self-portrait
www.nortonsimon.org/van-gogh-s-self-portrait-1889-on-loan...
Looking for places where the Gray code scanning has aliasing artifacts that make the thresholding unreliable.
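One way to find those unreliable spots (a minimal sketch, assuming each Gray code pattern is captured together with its inverse) is to flag pixels where the two exposures barely differ:

```cpp
// Sketch: mark pixels where a Gray code pattern and its inverse are nearly
// identical -- there the binary threshold is ambiguous (aliasing artifacts).
#include <opencv2/imgproc.hpp>

cv::Mat findUnreliablePixels(const cv::Mat& pattern, const cv::Mat& inverse,
                             int minContrast = 10) {
    cv::Mat diff, unreliable;
    cv::absdiff(pattern, inverse, diff);                               // per-pixel contrast
    cv::threshold(diff, unreliable, minContrast, 255,
                  cv::THRESH_BINARY_INV);                              // low contrast -> 255
    return unreliable;                                                 // white = unreliable pixel
}
```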
FreeImage is waaaay too slow! Almost 20 times slower than OpenCV!
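For context, a crude way to make that kind of comparison is to time each library decoding the same file. A sketch only, with a placeholder filename and no warm-up or averaging:

```cpp
// Rough decode-time comparison between OpenCV and FreeImage (sketch only).
#include <chrono>
#include <cstdio>
#include <opencv2/imgcodecs.hpp>
#include <FreeImage.h>

int main() {
    using clock = std::chrono::steady_clock;
    using ms = std::chrono::milliseconds;

    auto t0 = clock::now();
    cv::Mat cvImg = cv::imread("test.jpg");                              // OpenCV decode
    auto t1 = clock::now();
    FIBITMAP* fiImg = FreeImage_Load(FIF_JPEG, "test.jpg", JPEG_DEFAULT); // FreeImage decode
    auto t2 = clock::now();

    std::printf("OpenCV:    %lld ms\n", (long long)std::chrono::duration_cast<ms>(t1 - t0).count());
    std::printf("FreeImage: %lld ms\n", (long long)std::chrono::duration_cast<ms>(t2 - t1).count());

    FreeImage_Unload(fiImg);
    return 0;
}
```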
Another fun experiment made by Oriol.
Uploaded With FlickrDrop
Working with metaballs: modifying the implicit surface function, exploring different rendering techniques and other parameters.
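The project code isn't shown here, but the usual starting point for this kind of experiment is the classic metaball field function, where the implicit surface sits wherever the summed falloff crosses an iso level. A minimal sketch with illustrative ball data, not the implementation behind this image:

```cpp
// Classic metaball field: sum of inverse-square falloffs; the implicit surface
// is where fieldAt(...) crosses an iso level (e.g. 1.0). Ball positions and
// radii are illustrative, not from the project.
#include <vector>

struct Ball { float x, y, z, r; };

float fieldAt(float px, float py, float pz, const std::vector<Ball>& balls) {
    float sum = 0.0f;
    for (const Ball& b : balls) {
        float dx = px - b.x, dy = py - b.y, dz = pz - b.z;
        float d2 = dx * dx + dy * dy + dz * dz + 1e-6f;   // avoid divide-by-zero
        sum += (b.r * b.r) / d2;                          // change this falloff to change the surface
    }
    return sum;
}
```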
At the first international OpenFrameworks DevCon. January 10-17, 2011 at the STUDIO for Creative Inquiry, CMU.
In attendance: Zachary Lieberman, Theodore Watson, Arturo Castro, Anton Marini, Memo Akten, Damian Stewart, Zach Gage, Jonathan Brodsky, Kyle McDonald, Daito Manabe, Todd Vanderlin, Keith Pasko, Diederick Huijbers, Dan Wilcox, Golan Levin.
This activity was part of V&A half term activities celebrating the theatricality of the exhibition Diaghilev and the Golden Age of the Ballets Russes. Visitors were invited to experience a magic world of digital animal masks using the computers in our Digital Studio.
This installation by Hellicar&Lewis uses openFrameworks to create a system that appears to act as an augmented mask-making mirror.
The code is written to be both cross platform (PC, Mac, Linux, iPhone) and cross compiler.
The piece uses an open-source library called OpenCV (Open Source Computer Vision) to track viewers' faces and augment the reflection with masks. In addition, the piece is audio reactive: making a noise triggers an animation effect. What kind of noise should your animal mask make?
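A minimal sketch of that kind of face tracking, using OpenCV's Haar cascade detector rather than the installation's actual openFrameworks code (mask drawing and audio reactivity omitted):

```cpp
// Haar-cascade face tracking sketch (standalone OpenCV, not the installation's
// code). Each detected rectangle is where a mask would be drawn.
#include <vector>
#include <opencv2/objdetect.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>

int main() {
    cv::CascadeClassifier detector;
    detector.load("haarcascade_frontalface_default.xml"); // cascade file ships with OpenCV
    cv::VideoCapture cam(0);
    cv::Mat frame, gray;
    while (cam.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        std::vector<cv::Rect> faces;
        detector.detectMultiScale(gray, faces, 1.1, 3);
        for (const cv::Rect& f : faces) {
            cv::rectangle(frame, f, cv::Scalar(0, 255, 0), 2); // stand-in for the animal mask
        }
        cv::imshow("mask mirror", frame);
        if (cv::waitKey(1) == 27) break; // Esc quits
    }
    return 0;
}
```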
For more information, and other projects, see: hellicarandlewis.com
openFrameworks
Photos of a screen I made for Fever Creative (http://www.fevercreative.com/), taken by Jacob Milam. A video of a runway show floats around the screen, following the user's face, while the liquid simulation (thanks Memo! www.memo.tv/ofxmsafluid) in the background reacts to the user's silhouette.
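As a rough sketch of how a silhouette can drive a fluid simulation (an assumed approach, not necessarily what this piece does): frame-difference the camera image and inject forces wherever pixels changed. The addForce() helper below is a hypothetical placeholder for the real solver call, e.g. into ofxMSAFluid.

```cpp
// Hypothetical sketch: turn camera motion (a moving silhouette) into fluid forces.
#include "ofMain.h"
#include <cmath>

// Hypothetical placeholder, NOT a real library function: replace the body with
// a call into your fluid solver (e.g. ofxMSAFluid).
static void addForce(float normX, float normY, float strength) { /* ... */ }

void addSilhouetteForces(const ofPixels& curr, const ofPixels& prev) {
    int w = curr.getWidth();
    int h = curr.getHeight();
    for (int y = 0; y < h; y += 4) {                  // sample a coarse grid
        for (int x = 0; x < w; x += 4) {
            float diff = std::fabs(curr.getColor(x, y).getBrightness()
                                 - prev.getColor(x, y).getBrightness());
            if (diff > 30) {                          // pixel changed: part of the moving silhouette
                addForce(x / float(w), y / float(h), diff / 255.0f);
            }
        }
    }
}
```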