All Photos Tagged ComputerVision

Data labeling companies like Learning Spiral help improve data accuracy: labeled data raises the quality of training data, and data labeling services undoubtedly deliver better results for AI projects. learningspiral.ai/

Photo taken on the rally stage of the 2012 Oulton Park Gold Cup

Recently, Chief Minister K. Chandrasekhar Rao directed field-level engineers to gather data on damaged roads in different parts of the state and instructed them to repair the roads on top priority. He emphasized the use of software for continuous monitoring of road conditions across the state so that immediate action can be taken.

 


Oh, they're all out to get you

Once again, they're all out to get you

Once again...

en.wikipedia.org/wiki/Laid_(album)

 

A young man looks into the camera. A list of words and numbers describes the man’s expression:

 

Happiness 4.185

Neutral 0.901

Surprise 89.864

Sadness 0.01

Disgust 0.01

Anger 5.021

Fear 0.01
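Per-class confidences like these are typically resolved by taking the highest-scoring class as the dominant expression. A minimal sketch in Python, using just the values listed above:

```python
# Expression confidences as reported alongside the image (0-100 scale).
scores = {
    "Happiness": 4.185, "Neutral": 0.901, "Surprise": 89.864,
    "Sadness": 0.01, "Disgust": 0.01, "Anger": 5.021, "Fear": 0.01,
}

# The dominant expression is simply the key with the largest score.
dominant = max(scores, key=scores.get)
```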

 

A facial recognition system is a technology capable of matching a human face from a digital image or a video frame against a database of faces. Such a system is typically employed to authenticate users through ID verification services, and works by pinpointing and measuring facial features from a given image. en.wikipedia.org/wiki/Facial_recognition_system
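A minimal sketch of the matching step, assuming faces have already been reduced to fixed-length embedding vectors by some encoder; the encoder itself, the database contents, and the similarity threshold are all hypothetical:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def match_face(probe, database, threshold=0.8):
    """Return the best-matching identity, or None if no score reaches the threshold."""
    best_id, best_score = None, threshold
    for identity, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id
```

Real systems add a detection and alignment stage before encoding, but the database comparison reduces to a nearest-neighbour search like this one.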

  

Image by www.comuzi.xyz / © BBC / www.betterimagesofai.org / Mirror D / Licensed under CC-BY 4.0

Longevity of light bulbs (and how to make them last longer)

Electroacoustic interactive performance

Stefano D'Alessio

Vienna 2016

  

Longevity of light bulbs fuses natural and processed sound with physical movement and dynamic light.

 

The performance proposes a DIY approach to music making, focusing on hacking low-budget items and on the creative use of new technologies to design alternative interfaces for sound creation.

 

Resembling a solo concert piece, L.O.L.B. has at its core an amplified IKEA desk lamp, explored in all its sonorities by a single performer playing it with bare hands.

 

An important characteristic of the lamp is that it can be easily moved; this mobility amplifies the physical performance and modifies the original sound, thanks to a custom-made interface and digital audio processing.

 

The interface between the lamp’s movement and the sound manipulation was conceived by hypothesising how a “natural” sound would behave: taking advantage of the possibilities of digital audio, but bringing them out of the computer and interfacing them with something more physical and “primitive” than a controller fader or a mouse.

 

The focus is to magnify how the sound processing is controlled, transforming digits imperceptible to the human eye into visible performative actions.

 

The spatial position of the light source is tracked by a camera and translated into parameters usable by the audio engine, naturally connecting the lamp movement to the behaviour of its sound transformations.
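A minimal sketch of that tracking-to-parameter step, assuming the camera frame arrives as a 2-D list of brightness values and that the light’s position maps to a delay time and a feedback amount; the parameter names and ranges are illustrative, not the piece’s actual mapping:

```python
def brightest_point(frame):
    # frame: 2-D list of brightness values (rows of pixels).
    # Returns (x, y) of the brightest pixel -- a crude stand-in for
    # tracking the lamp's light bulb in the camera image.
    best = (0, 0)
    for y, row in enumerate(frame):
        for x, v in enumerate(row):
            if v > frame[best[1]][best[0]]:
                best = (x, y)
    return best

def to_audio_params(frame, max_delay_ms=500.0):
    # Normalise the light's position into audio-parameter ranges:
    # horizontal position -> delay time, vertical position -> feedback.
    x, y = brightest_point(frame)
    h, w = len(frame), len(frame[0])
    delay_ms = (x / (w - 1)) * max_delay_ms
    feedback = y / (h - 1)
    return delay_ms, feedback
```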

 

The final audio, even when processed, has tones close to the natural ones, as the processing consists essentially of different applications of audio delay, meaning that no synthesised sound or recorded sample is added to the final output.
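A feedback delay line of the kind described can be sketched in a few lines; the buffer length, feedback and mix values here are illustrative, not the piece’s actual settings:

```python
def delay_effect(samples, delay_samples, feedback=0.5, mix=0.5):
    """Feedback delay: each output mixes the dry input with a delayed copy.

    No new material is synthesised -- the output is built entirely from
    (delayed, attenuated) copies of the input signal.
    """
    buffer = [0.0] * delay_samples  # circular buffer of past samples
    out = []
    for i, x in enumerate(samples):
        delayed = buffer[i % delay_samples]
        y = (1 - mix) * x + mix * delayed       # dry/wet mix
        buffer[i % delay_samples] = x + feedback * delayed  # feed back
        out.append(y)
    return out
```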

 

The only light source present in the performance is the lamp’s own bulb, meaning that the moving lamp, besides changing its own sound, also acts as a dynamic lighting device. It points and looks in different directions, behaving like a curious creature, illuminating different parts of the environment, scanning the audience and sometimes blinding them.

 

The result is an electroacoustic performance piece in which natural and artificial tones fuse seamlessly, symbiotically combined with light, developing in time with the movements of the performer and their instrument.

 

Camera manufacturers are meeting the ever-increasing demands of the visual world with the help of computer vision models. Data Labeler is the best place for high-quality, personalized labeled data sets. Read more - www.datalabeler.com/camera-manufacturers-are-making-the-b...

Red TV is a video analysis project.

A collaboration with Brad Todd.

Built with openFrameworks.

Photo taken at 2013 Donington Historic Festival

George Bernet, John Maske, Ina Schlechte

Interface Cultures Lab 2012

for more - www.kitdastudio.com/?p=90

 

I joined the openKinect project, initiated by Theo and Memo (openFrameworks forum), in November 2010.

This device (the Xbox Kinect sensor) is superb: it has an IR beam projector that lets the depth camera read the IR reflection

(so it works perfectly indoors, in daytime or in complete darkness),

and it has a normal webcam for further video-programming manipulation (like face detection, etc.).

 

This project is exciting; it is another level of computer vision to me.

It opens up unlimited possibilities for interactive installations!

 

What to do next? To make it applicable to interactive installations, it has to handle some data-processing tasks:

- limb motion detection

- incorporate OpenCV face detection

- dynamic-range human-shape recognition

- hand / finger motion recognition

- synchronise the two cameras as closely as possible
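As a first step toward the list above, the depth image already makes foreground segmentation easy: keep only the pixels within a depth band around the nearest object. A sketch assuming the depth frame is a 2-D list of millimetre readings, where 0 means no IR return (the band width is an illustrative value):

```python
def segment_nearest(depth, band_mm=300):
    # depth: 2-D list of millimetre readings from the Kinect depth camera.
    # Returns a binary mask of pixels within band_mm of the nearest object,
    # ignoring zeros (pixels with no IR reflection).
    valid = [v for row in depth for v in row if v > 0]
    nearest = min(valid)
    return [[1 if 0 < v <= nearest + band_mm else 0 for v in row]
            for row in depth]
```

This is only depth thresholding; recognising limbs or hands would need shape analysis on top of such a mask.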

