The diagonally aligned pattern is a result of a Fast Fourier Transform that cut some low frequencies. The JPEG algorithm does something similar, with a variant of the FFT, the Discrete Cosine Transform. JPEG tries to remove frequencies the human eye is unlikely to detect. Here, on the contrary, I try to emphasize bands the human will definitely detect. Other processes in the image include channel-swapping, pixel-sorting, and using JPEG artifacts as a compositing mask.
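For anyone curious how a low-frequency cut like this works, here is a minimal sketch in Python with NumPy and Pillow. It is not the tool used for this image; the input file name and the cutoff radius are placeholders.

    import numpy as np
    from PIL import Image

    # Work on one grayscale channel for simplicity.
    img = np.asarray(Image.open("input.jpg").convert("L"), dtype=float)

    # Forward FFT, with the zero-frequency (DC) term shifted to the centre.
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    cy, cx = img.shape[0] // 2, img.shape[1] // 2

    # Zero out a small square of low frequencies around the centre.
    cut = 8  # assumed cutoff, in frequency bins
    spectrum[cy - cut:cy + cut, cx - cut:cx + cut] = 0

    # Inverse FFT back to the spatial domain; the banding appears here.
    result = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))
    Image.fromarray(np.clip(result, 0, 255).astype(np.uint8)).save("fft_cut.png")

With the low frequencies gone, the remaining periodic components dominate the reconstruction, which is where the strong banding comes from.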
Autumn Colors 2011 in Tucson:
Tucson is definitely not known for its autumn colors! Here is a series I did on Fort Lowell Park's Autumn Colors of 2011.
The following day I was "hiking" (really walking as part of my Cardiac Rehab Phase 3 Program) in Sabino Canyon. I did not see any colors there as good as these in Fort Lowell Park. That was a surprise.
I did some image post-processing for fidelity to the "Image in my Mind."
I love how the curve of the lower right leaf echoes or evokes the curve of this lovely thumb. That is total serendipity... I didn't see all that at the time... I did severely crop the original photograph to bring all this together, and it seemed to hit me between the eyes...
I see more things as I look at it. The thumb and forefinger together remind me of the two rightmost leaves' seeming pincer movement... I better stop now...;)
101_0534 - Version 2
Pink apartment building near Foster Beach, on the north side of Chicago: four different runs of glitching/visualization software assembled into one image. These are somewhere between visualizations and glitches. The image is degraded (glitch), but it provides information about its structure (visualization). In this case, the visualization uses pixel-sorting to reveal the distribution of color in local areas, but it also reveals the mark of the sorting tool.
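As a rough illustration of the pixel-sorting-as-visualization idea (not the actual software used here), the sketch below sorts the pixels inside small tiles by hue, so each tile becomes a swatch of its own local colour distribution. The tile size and file names are assumptions.

    import colorsys
    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("input.jpg").convert("RGB"))
    out = img.copy()
    tile = 32  # assumed tile size in pixels

    h, w, _ = img.shape
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            block = img[y:y + tile, x:x + tile].reshape(-1, 3)
            # Order the tile's pixels by hue, then write them back row by row.
            hues = [colorsys.rgb_to_hsv(*(p / 255.0))[0] for p in block]
            out[y:y + tile, x:x + tile] = block[np.argsort(hues)].reshape(tile, tile, 3)

    Image.fromarray(out).save("blocksort.png")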
Glitch composite of Yucca Flats nuclear test with anatomical imagery. Slightly enlarged from original with "nearest neighbor" algorithm, appropriate for glitch art.
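For reference, nearest-neighbour enlargement keeps each pixel as a hard-edged block instead of smoothing it away, which is why it suits glitch work. A minimal Pillow sketch; the scale factor and file names are placeholders.

    from PIL import Image

    img = Image.open("glitch_composite.png")
    scale = 1.25  # assumed enlargement factor
    size = (int(img.width * scale), int(img.height * scale))
    img.resize(size, resample=Image.NEAREST).save("glitch_composite_enlarged.png")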
This pre-dawn Moon shot was taken during a night sky test of a clearance demo camera from Target:
Kodak EasyShare Z915 with 10x zoom lens.
It's a fun camera to use... easy to master... I'm still learning after three sessions. Interesting test shots.
100_0104
Popsicolor app for iPad, iPhone, and iPod touch
'minimal' focus with drips (tangerine & blue agave?)
“Look at your feet. You are standing in the sky. When we think of the sky, we tend to look up, but the sky actually begins at the earth. We walk through it, yell into it, rake leaves, wash the dog, and drive cars in it. We breathe it deep within us. With every breath, we inhale millions of molecules of sky, heat them briefly, and then exhale them back into the world.”
― Diane Ackerman, A Natural History of the Senses
A friend suggested turning this image into an Impressionist painting. I rather like it.
The original is here: https://flic.kr/p/2jEp9mZ
Color quantization, JPEG degradation, and channel-swapping in an interrupted pixel-sorting run on blocks of zigzag-scanned arrays produced this result. It amuses me that the quantization algorithms tend to emphasize edges, including facial features and other details, and so most of the glitching happens in faces and other transitions. Clothing, especially uniform gray suits, presents little detail, and so gets little glitching.
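A hedged sketch of the zigzag-scan part of that pipeline: JPEG walks each 8x8 block along anti-diagonals, and sorting only part of that scan ("interrupting" it) is one way to get block-shaped glitches. The block contents, the 8x8 size, and the halfway interrupt point below are placeholders, not the settings used for this image.

    import numpy as np

    def zigzag_order(n=8):
        """(row, col) pairs of an n x n block in JPEG's zigzag order."""
        return sorted(
            ((r, c) for r in range(n) for c in range(n)),
            key=lambda rc: (rc[0] + rc[1], rc[0] if (rc[0] + rc[1]) % 2 else rc[1]),
        )

    block = np.arange(64, dtype=np.uint8).reshape(8, 8)  # stand-in 8x8 block
    order = zigzag_order(8)
    flat = np.array([block[r, c] for r, c in order])

    # "Interrupted" sort: only the first half of the scan gets sorted.
    cut = len(flat) // 2
    flat[:cut] = np.sort(flat[:cut])

    # Write the values back along the same zigzag path.
    for value, (r, c) in zip(flat, order):
        block[r, c] = value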
Detail of State of the Union 003. Visualization of color distribution in image composited with original image using JPEG compression artifacts. Original is a work of the U.S. Federal Government and in the public domain. URL: commons.wikimedia.org/wiki/File:Obama_entrance_State_of_t....
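One plausible way to build a compositing mask from JPEG compression artifacts (a sketch under assumptions, not necessarily how this piece was made): round-trip the original through a very low-quality JPEG encode and use the difference as the mask, so the composite shows the visualization mainly where the codec struggled. The quality setting, the gain factor, and the file names are placeholders, and the two inputs are assumed to be the same size.

    import io
    import numpy as np
    from PIL import Image

    original = Image.open("original.png").convert("RGB")
    visualization = Image.open("visualization.png").convert("RGB")

    # Recompress the original aggressively and decode it again.
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=5)
    degraded = Image.open(io.BytesIO(buf.getvalue())).convert("RGB")

    # Per-pixel artifact strength, boosted so it reads as a mask.
    diff = np.abs(np.asarray(original, dtype=np.int16) - np.asarray(degraded, dtype=np.int16))
    mask = Image.fromarray(np.clip(diff.mean(axis=2) * 8, 0, 255).astype(np.uint8), mode="L")

    # Where the mask is bright, take the visualization; elsewhere, the original.
    Image.composite(visualization, original, mask).save("composited.png")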
August 28, 1963, Civil Rights March on Washington, from U.S. Information Agency. Press and Publications Service. I like the way the diagonal artifacts of the FFT operation play off the spatial rhythms of the crowd.
Technical note: Median cut of a stack of 6 images, including the original. Mostly glitched with an FFT over zigzag-scanned blocks, plus some channel-swapping and compositing with masks created from JPEG artifacts.
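Reading "median cut of a stack" as a per-pixel median across the six frames (if it instead means the median-cut colour-quantization algorithm, that is a different operation), the blend step might look roughly like this; the file names are placeholders and the frames are assumed to share one size.

    import numpy as np
    from PIL import Image

    files = ["original.png", "glitch1.png", "glitch2.png",
             "glitch3.png", "glitch4.png", "glitch5.png"]  # assumed names
    stack = np.stack([np.asarray(Image.open(f).convert("RGB")) for f in files])

    # Per-pixel, per-channel median across the stack of six images.
    median = np.median(stack, axis=0).astype(np.uint8)
    Image.fromarray(median).save("median_stack.png")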
photoshop; photo retouching; retouch; photoshop editing; photography; background removal; adobe photoshop; photoshop work; amazon listing; color correction; photo manipulation; amazon product photo; photoediting; background remove; image editing; amazon; remove background; retouching; image resizing; photo edit; amazon photo editing; image resize; ebay; cut out; online shop; photoshop image editing; resizing images; fashion; amazon store; resizing; jewelry edit; crop; resize image; transparent; white background; retouch photo; changing background; product retouch; ecommerce product; photoshopping; clipping; product editing; photo editing; amazon infographic; amazon editing; cut out images; photoshop retouch; cut out background; photo fixing; e commerce; photo resize; resize; image processing; nick add; image masking
Pixel-sorted vertically then horizontally, comparing brightness, saturation and hue (in that order). Effectively a visualization of Monet's palette as seen by the digitization workflow, which ended with a JPEG image in Wikimedia's Creative Commons.
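A rough sketch of that sort order (a column pass, then a row pass, keyed on brightness, then saturation, then hue); this is an illustration, not the exact tool used, and the file names are placeholders.

    import colorsys
    import numpy as np
    from PIL import Image

    def sort_key(pixel):
        h, s, v = colorsys.rgb_to_hsv(*(pixel / 255.0))
        return (v, s, h)  # brightness first, then saturation, then hue

    def sort_lines(arr, axis):
        """Sort every column (axis=0) or row (axis=1) of an RGB array."""
        out = arr.copy()
        if axis == 0:
            for x in range(arr.shape[1]):
                out[:, x] = sorted(arr[:, x], key=sort_key)
        else:
            for y in range(arr.shape[0]):
                out[y, :] = sorted(arr[y, :], key=sort_key)
        return out

    img = np.asarray(Image.open("monet.jpg").convert("RGB"))
    result = sort_lines(sort_lines(img, axis=0), axis=1)  # vertical, then horizontal
    Image.fromarray(result).save("pixel_sorted.png")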
Turn Turn Turn...
This is the best I can do without cartographic software tools... Being retired has its perks. However, it also imposes some limitations. I am used to having great tools. Oh well, to every season unto heaven...
An example of my image processing from straight out of the camera to finished file. Shot with the Sony a7iii and Canon 24-105/4 L. All work and perspective correction done in Luminar 2018.
August 28, 1963, Civil Rights March on Washington, from U.S. Information Agency Press and Publications Service.
Baaaaa
Today's sheep. Mucking around with GIMP, sheep & neek. Today's lesson is to save the transparent images so you can use them again (saves time - doh!). Added colour this time.
Neek! flickrhack #1
This particular shot was taken with a camera. If you look carefully you can see the angular distortion to the right of the screen. Now, I got to thinking about this on Saturday and asked myself a question: how does Flickr know this particular image is a photo and not a screencap, as the previous sheep shot is?
This is important because Flickr caps the number of screencaps, for example for those who upload SecondLife images. But how do you do this in code?
Well, looking through the Flickr APIs I found one (flickr.photos.getExif - gets a "list of EXIF/TIFF/GPS tags for a given photo") that could allow you to distinguish between an image taken by a camera and an image taken as a screen cap.
So I imagine that if you supplied EXIF information from known camera types to a screencap image, the code restricting screencaps to 200 would be skipped over, as the code would think the image was taken with a camera and not as a screenshot.
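For what it's worth, here is a hedged sketch of the check itself, using the public flickr.photos.getExif method over the REST endpoint; the API key and photo ID are placeholders, and the response-parsing details are my best guess at the JSON layout, so treat it as illustrative. The absence of a camera Model tag doesn't prove a screencap either, since EXIF can be stripped - or, as above, spoofed.

    import requests

    API_KEY = "your-api-key"  # placeholder
    PHOTO_ID = "123456789"    # placeholder

    resp = requests.get(
        "https://api.flickr.com/services/rest/",
        params={
            "method": "flickr.photos.getExif",
            "api_key": API_KEY,
            "photo_id": PHOTO_ID,
            "format": "json",
            "nojsoncallback": 1,
        },
    )
    exif = resp.json().get("photo", {}).get("exif", [])

    # Look for a camera model tag among the returned EXIF entries.
    model = next((t["raw"]["_content"] for t in exif if t.get("tag") == "Model"), None)
    if model:
        print("EXIF reports a camera model:", model)
    else:
        print("No camera model found - maybe a screencap, maybe stripped EXIF.")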