All Photos Tagged "interface"
This is a short demo of some user interface concept work I've been developing recently. The interface is entirely built with HTML, and then progressively enhanced using jQuery. The slider controls use jQuery UI's Slider package, and Filament Group's enhanced Accessible Slider extension.
Command Line Interface - CLI
Type: Text
Static, Disconnected, High-Low, Directed, Recall
Graphical User Interface - GUI
Type: Graphics
Responsive, Indirect, DBL Medium, Exploratory, Recognition
Natural User Interface - NUI
Type: Objects
Evocative, Unmediated, Fast Few, Contextual, Intuition
So here we are, on the road again, and I hadn't sorted out the Sony A6000 to Snapseed interfaces.
I shoot RAW, and my image transfers from this trip looked meh. After four weeks it finally occurred to me to look at the file size. Lo and behold, only thumbnail JPGs had been transferred. Ugh.
This is why my cellphone images look sharp on Flickr and the A6000 images do not.
I tested shooting RAW + JPG, and the good, full-resolution JPG does transfer. Lesson learned.
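In the spirit of that file-size check, here is a minimal Python sketch for flagging suspiciously small transfers before editing. The 200 KB threshold and the function name are my own assumptions (full-resolution A6000 JPGs typically run several megabytes), not anything from the camera's transfer software:

```python
def looks_like_thumbnail(size_bytes, threshold=200_000):
    """Flag files that are suspiciously small for a full-resolution JPG.

    The 200 KB threshold is a guess; tune it for your camera's output.
    """
    return size_bytes < threshold

# A 45 KB transfer is almost certainly a thumbnail,
# while a 6 MB file is plausibly the full-resolution image.
print(looks_like_thumbnail(45_000))     # True  -> likely a thumbnail
print(looks_like_thumbnail(6_000_000))  # False -> likely full-res
```

Running this over a transfer folder with `os.path.getsize` would have caught the problem on day one instead of week four.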
Next thing is image processing.
I read about how Norman Seeff used to print high-contrast work with a twist: he used a black stocking between the enlarger lens and the paper to give an interesting softness to some of his images.
He wasn't by any means the only one to do this.
When I worked at the Samy's Camera photo lab on Sunset Blvd in Hollyweird, we used to do this at client request. It was really no big deal.
What was a bigger deal was our use of Agfa Portriga Rapid 111 glossy paper. It gave a gorgeous deep walnut-brown tone. We used it for many of the gallery shows we printed for various then-famous photographers.
Taking the black stocking idea and borrowing tones from Portriga Rapid, it turns out, expresses pretty well how I feel about Rome.
So, here is a series of images done in an old, outdated, likely not very hip manner.
Hardest thing, always, is to make a housing... This is an interface that switches the keyer and PTT with RTS/DTR on the serial port and couples the audio in and out. All fully insulated with relays or transformers. Cost: $0.00, as I had all this stuff in the junk box.
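For anyone wiring up something similar in software, pyserial exposes the RTS and DTR lines as simple boolean attributes on an open port. The wrapper below is a hypothetical sketch (the class, the pin assignments, and the port name in the comment are my assumptions); it is exercised with a dummy port so it runs without hardware:

```python
class RigInterface:
    """Key the transmitter (PTT) and CW keyer by toggling RTS/DTR.

    `port` is any object exposing boolean `rts` and `dtr` attributes,
    such as a pyserial `serial.Serial` instance -- e.g.
    serial.Serial('/dev/ttyUSB0'), an assumed port name.
    """

    def __init__(self, port):
        self.port = port

    def ptt(self, on):
        self.port.rts = on  # assume RTS drives the push-to-talk relay

    def key(self, on):
        self.port.dtr = on  # assume DTR drives the CW keyer


# A stand-in port so the sketch runs without real hardware.
class DummyPort:
    def __init__(self):
        self.rts = False
        self.dtr = False

port = DummyPort()
rig = RigInterface(port)
rig.ptt(True)
rig.key(True)
print(port.rts, port.dtr)  # True True
```

With a real `serial.Serial` object in place of `DummyPort`, the relays on the homebrew interface would follow these two lines directly.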
Google Video is increasingly cluttered. Check out the viral links in the blue box ('Email - Blog - Post to MySpace') - do Google not test their interfaces at a range of font sizes?
The interface does complement the video though:
The Interface Region Imaging Spectrograph (IRIS) satellite at Lockheed Martin Space Systems Company in Sunnyvale, Calif., with the solar telescope and bus structure fully integrated. Scientists will use IRIS to study energy and plasma movement near the sun’s surface. Learn more: www.lockheedmartin.com/iris
The media consumption experience is poised to transform, and fast. Technologies that have been tinkered with for years, ranging from virtual and augmented reality to sensors and robotics, are finally on the tipping point of mass commercialization. As the physical and digital worlds converge, how will these technologies shape how people interact with digital media?
On November 18, 2014, NYC Media Lab and Razorfish hosted the second edition of Future Interfaces, an evening "science fair" on the future of human-computer interaction and digital media. More than 300 guests went hands-on with 30 demos from startups and universities to see what's on the verge of commercialization, what’s still in the lab, and what advances will change the nature of media and communications in the future.
To learn more about the event and to see a full list of participating demos, visit www.nycmedialab.org/events/future-interfaces/
Ashikin Ahmad, Chief Financial Officer, International Centre for Industrial
Transformation, Singapore; Frank L. Blaimberger, Global Head Advanced Manufacturing (VP), TÜV SÜD, Germany; Michelangelo Canzoneri, Global Head, Group Smart Manufacturing, Merck, Germany; Efe Erdem, Director, MESS Technology Platform, Turkish Employers Association of Metal Industries (MESS), Türkiye; Daniel Kuepper, Managing Director and Partner, Boston Consulting Group, Germany; Natan Linder, Chief Executive Officer, Tulip
Interfaces, USA; Cyril Perducat, Senior Vice-President and Chief Technology Officer, Rockwell Automation, USA, speaking in the AI for Industry Transformation session at the Industry Strategy Meeting 2023 in Geneva, Switzerland, 16 March. Copyright: World Economic Forum/ Marc Bader
One of my bizarre photo interests is the variety of user interfaces presented in hotel showers. Here at the Inn at Saratoga (near San Jose), just in case people are not intuitively familiar with the worldwide cultural convention of "hot water on the left", they provide strong clues using the kind of stickers used to put address numbers on your house.
Worse, the knobs rotate in different directions; it might be clever for symmetry, but to get more hot you have to rotate the knob left, and to get more cold you rotate it right.
It took a good 8 minutes of wasting water to figure out this interface.
This is what it looks like right now. Lots of problems with it, not least that the size of the machine is directly coupled to the size of the screen window: small machines make half the messages fall off, and big machines mean you can't fit it all on your monitor. I know, I know.
Researchers at the German Primate Center (DPZ) in Goettingen analyse the brain activity of non-human primates during the execution of hand-grasping movements. This neuronal information, recorded with fine electrodes in the cortex, could one day be used to control a robotic hand as a prosthesis for paralyzed people.
*view fullscreen to see the electrode details*
Strobist info:
one flash with a softbox from the top left, one constant warm light source directed at the fingertip, and brain activity on a 17-inch screen in the background
Mature trees next to a road can cause problems. The nearside mirror is next to the bus on the verge.
A snapshot of the two main views we currently use to monitor flagging on Metafilter. 99% of the time we're interested in where the flags are piling up, not who is doing the flagging.
The top bit is what we see in the upper right corner of the main admin page on mefi; it lists flags sorted by volume and then by date for equally-flagged items. One recent change pb has made for us is the addition of subsite filters (see "all | ask | mefi | other") to make it simpler to keep an eye on major subsites independently if there's a lot of flagging activity on one that's obscuring lower-volume but still important flags on another.
We also use that "good spots" bit to keep an eye on "flagged as fantastic" stuff, since that flag carries a very different payload than most of the "there's a problem" choices. We will often notice sidebar-worthy comments because they show up here.
Down below is the inline flag info, something we've had for maybe a year now; it just lists flag count on individual items, which can be helpful for us when we're trying to figure out what's going on within a given thread. It used to be that we'd have to navigate comment-by-comment from the admin flag queue above, which worked but was tedious. This way, if a dozen things ended up flagged in a thread, we can tell what they are at a glance.
The blue pop-up is what we get if we hover over the "x times" link on the inline flag message. We don't need to use this much, but it's handy for the now-and-then occasion where we're not sure *why* a comment was flagged, as well as for the blue-moon situation where *who* flagged might help explain what's going on in a specific circumstance.
Other details not pictured here:
- Hovering over an item in the admin flag queue provides an abbreviated tooltip of the start of the comment or post flagged, which can help with quickly orienting or re-orienting us to what still needs attention before we even click through.
- There's a summary of flagging behavior in and on a thread at the top of each thread, to go with the per-comment inline flagging info. It tells us how many flags a post has gotten, as well as how many distinct comments have been flagged and how many total comment flags have accrued. We mirror the same information on the front page below every post. It's useful for telling at a glance if something looks like trouble without having to go to the admin interface first, which can help us notice things quicker if we're just casually browsing the site at the time.
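As a rough illustration of the bookkeeping that per-thread summary implies, here is a hypothetical Python sketch. The `(item_type, item_id)` flag shape and the function name are my own assumptions for illustration, not Metafilter's actual schema:

```python
from collections import Counter

def thread_flag_summary(flags):
    """Summarize flags in a thread, in the spirit of the per-thread
    header described above.

    `flags` is a list of (item_type, item_id) tuples, where item_type
    is 'post' or 'comment' (a hypothetical shape). Returns a tuple of
    (post_flags, distinct_flagged_comments, total_comment_flags).
    """
    post_flags = sum(1 for kind, _ in flags if kind == 'post')
    comment_counts = Counter(i for kind, i in flags if kind == 'comment')
    return post_flags, len(comment_counts), sum(comment_counts.values())

flags = [('post', 1), ('comment', 10), ('comment', 10), ('comment', 12)]
print(thread_flag_summary(flags))  # (1, 2, 3)
```

One flag on the post itself, two distinct flagged comments, three comment flags in total: exactly the three numbers the summary header surfaces at a glance.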
A little urban detail which probably goes unnoticed every day.
Canon 5D Mk III with Canon EF 24mm F1.4L Mk II lens. 1/250th sec at F1.4, ISO 100.
I used heavy weight sew-in, backed with black batting.
I was concerned that the white interfacing would show through, and I also wanted a bit of extra padding.
I machine stitched them together just inside the edges, then trimmed along the cutting line.
The Glocal Project is a massive contributive artwork. Two months before the launch of the project, we already have upwards of 8,000 submissions from more than 2,000 participants around the world.
One of the most challenging questions has been: how can we make sense of such a large collection of images?
These 'phylogenies' imagine how an anthropologist might attempt to build relationships between images in the Glocal Pool.
Through image-analysis technology, each image in the pool is assigned a 'signature', which can be thought of as the image's genome - the colours, composition, symmetry, etc. that define the image.
In these phylogenetic trees, pairs of images are 'bred' to produce offspring which could conceivably have been born from these parent images.
The result is a 'family tree' of images which attempts to invent a history inside of the large project pool.
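As a toy illustration of the signature-and-breeding idea (not the actual Glocal pipeline, whose features include composition and symmetry), one could treat a signature as a mean colour and an offspring as the average of its two parents:

```python
def signature(pixels):
    """A toy image 'signature': the mean RGB colour of the image.

    `pixels` is a list of (r, g, b) tuples -- an illustrative
    stand-in for the richer features the real analysis would use.
    """
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def breed(sig_a, sig_b):
    """'Breed' two signatures by averaging them, giving an offspring
    signature halfway between its parents."""
    return tuple((a + b) / 2 for a, b in zip(sig_a, sig_b))

parent_a = signature([(255, 0, 0), (255, 0, 0)])  # solid red
parent_b = signature([(0, 0, 255), (0, 0, 255)])  # solid blue
print(breed(parent_a, parent_b))  # (127.5, 0.0, 127.5)
```

The offspring of a red parent and a blue parent lands at purple, which is the intuition behind the phylogenetic trees: children that "could conceivably have been born" from their parent images.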
For more information, check out my blog - http://blog.blprnt.com - or the Glocal website at www.glocal.ca.
Also, please consider joining the Glocal Project Pool - www.flickr.com/groups/glocal.
#Interactions at the #interface of #systems
#Abstract #Art #abstractpaintings
Acrylic on Canvas
80 x 100
The image is a high-resolution TEM micrograph of the gate of a silicon semiconductor device; it shows the interface between the crystalline silicon substrate and the amorphous silicon oxide.
Courtesy of Marco De Biase
Image Details
Instrument used: TEMLink
Magnification: 6,000,000
Horizontal Field Width: 40 nm
Voltage: 300 kV