All Photos Tagged "techniques"

I am so excited about this discovery and think you will be too.

I love this crackle because it is easy to do with easy-to-find materials that don't cost an arm and a leg.

The tutorial is available here

beadcomber.artfire.com

 

or you can read about it on my blog

beadcomber.blogspot.com

My new favorite modified plate.

ST Suspensions aka Suspension Techniques is the go-to stop for all your suspension-related needs. They’ve got everything from coilovers and wheel spacers to sport springs, sway sets and shocks. Starting now, ST Suspensions is having their annual rebate program where you can save up to $150 on sel...

 

www.vividracing.com/blog/announcing-new-products-specials...

This image by Duncan Rawlinson explores the intersection of photography and artificial intelligence, portraying a modern subject with an AI chip embedded in her head. Through innovative techniques, Rawlinson delves into the latent space, creating a compelling narrative of human augmentation and technological integration.

 

Duncan.co/augmented-human

This is too awesome and very exciting news! Photo Technique Magazine dedicated part of the Jan/Feb issue to "MASTERS of LIGHT PAINTING"!! There is a long story behind how this article came to be but I owe it all to a great guy in Norway by the name of Ole Utne! Ole contacted me several months ago about doing an article for his project in college and I was happy to work with him on it. He then submitted that article to Photo Technique and they decided to feature this article along with another highlighting the drastic difference between two different light painting styles. The other featured artist is well known New York based photographer Stan Patz! What an honor!

 

This is awesome news for me and with Light Painting being mentioned on the cover, it is great for the community as well! I can't wait to get a copy of the magazine to see what is inside!! :-)

 

THANKS A TON OLE!! :-)

Here is a new set of LEGO ideas and techniques, made with LDD

I'm sure you'll find a use for this idea

I tried to make the explanation readable by using colors, as if it were a tutorial

 

Don't forget to check out the album with all the related techniques on your right =>

 

Find all my creations in the Flickr group « News LEGO Techniques ».

This Flickr group includes:

 

- Ideas for new LEGO pieces

- Techniques for assembling bricks

- Tutorials for making accessories, objects, etc.

This is the basic technique used to make a row of hearts. Here the first and the last hearts are joined to form a ring of endless hearts. It is really very difficult to find the joint once the two ends are merged.

 

There are also many variations of making the Endless Love model. This model shows the hearts overlapping from the left to the right. Besides reversing the overlaps, the hearts can be made non-overlapping, inverted, rotated left or right, and with or without colour changes. Furthermore, instead of folding from a single strip, the hearts can be folded from modules. This will give a different colour for each heart. The possibilities are really endless.

 

The model here is folded from a single strip of 48 cm x 4 cm embossed paper.

   

Operating Room Techniques Class (practice) led by Mr. Randell Docks, #SurgicalTech Program Director, and his students, Karin Schmidt and Kirk Mitchell. Here we see them practicing gowning and gloving for surgery. #Surgery #Training #OperatingRoom #Miami #CityCollege

Source: en.wikipedia.org/wiki/Hoover_Dam

 

Hoover Dam is a concrete arch-gravity dam in the Black Canyon of the Colorado River, on the border between the U.S. states of Nevada and Arizona. Constructed between 1931 and 1936, during the Great Depression, it was dedicated on September 30, 1935, by President Franklin D. Roosevelt. Its construction was the result of a massive effort involving thousands of workers, and cost over 100 lives. In bills passed by Congress during its construction, it was referred to as the Hoover Dam, after President Herbert Hoover, but was named Boulder Dam by the Roosevelt administration. In 1947, the name Hoover Dam was restored by Congress.

 

Since about 1900, the Black Canyon and nearby Boulder Canyon had been investigated for their potential to support a dam that would control floods, provide irrigation water, and produce hydroelectric power. In 1928, Congress authorized the project. The winning bid to build the dam was submitted by a consortium named Six Companies, Inc., which began construction in early 1931. Such a large concrete structure had never been built before, and some of the techniques used were unproven. The torrid summer weather and lack of facilities near the site also presented difficulties. Nevertheless, Six Companies turned the dam over to the federal government on March 1, 1936, more than two years ahead of schedule.

 

Hoover Dam impounds Lake Mead and is located near Boulder City, Nevada, a municipality originally constructed for workers on the construction project, about 30 mi (48 km) southeast of Las Vegas, Nevada. The dam's generators provide power for public and private utilities in Nevada, Arizona, and California. Hoover Dam is a major tourist attraction, with 7 million tourists a year. The heavily traveled U.S. Route 93 (US 93) ran along the dam's crest until October 2010, when the Hoover Dam Bypass opened.

 

Source: hoover.archives.gov/hoovers/hoover-dam

 

85 years after its completion, Hoover Dam is still considered an engineering marvel. It is named in honor of President Herbert Hoover, who played a crucial role in its creation.

 

For many years, residents of the American southwest sought to tame the unpredictable Colorado River. Disastrous floods during the early 1900s led residents of the area to look to the federal government for aid, and experiments with irrigation on a limited scale had shown that this arid region could be transformed into fertile cropland, if only the river could be controlled. The greatest obstacle to the construction of such a dam was the allocation of water rights among the seven states comprising the Colorado River drainage basin. Meetings were held in 1918, 1919 and 1920, but the states could not reach a consensus.

 

Herbert Hoover had visited the Lower Colorado region in the years before World War I and was familiar with its problems and the potential for development. Upon becoming Secretary of Commerce in 1921, Hoover proposed the construction of a dam on the Colorado River. In addition to flood control and irrigation, it would provide a dependable supply of water for Los Angeles and Southern California. The project would be self-supporting, recovering its cost through the sale of hydroelectric power generated by the dam.

 

In 1921, the state legislatures of the Colorado River basin authorized commissioners to negotiate an interstate agreement. Congress authorized President Harding to appoint a representative for the federal government to serve as chair of the Colorado River Commission and on December 17, 1921, Harding appointed Hoover to that role.

 

When the commission assembled in Santa Fe in November 1922, the seven states still disagreed over the fair distribution of water. The upstream states feared that the downstream states, with their rapidly developing agricultural and power demands, would quickly preempt rights to the water by the “first in time, first in right” doctrine. Hoover suggested a compromise that the water be divided between the upper and lower basins without individual state quotas. The resulting Colorado River Compact was signed on November 24, 1922. It split the river basin into upper and lower halves with the states within each region deciding amongst themselves how the water would be allocated.

 

A series of bills calling for Federal funding to build the dam were introduced by Congressman Phil D. Swing and Senator Hiram W. Johnson between 1922 and 1928, all of which were rejected. The last Swing-Johnson bill, titled the Boulder Canyon Project Act, was largely written by Hoover and Secretary of the Interior Hubert Work. Congress finally agreed, and the bill was signed into law on December 21, 1928 by President Coolidge. The dream was about to become reality.

 

On June 25, 1929, less than four months after his inauguration, President Herbert Hoover signed a proclamation declaring the Colorado River Compact effective at last. Appropriations were approved and construction began in 1930. The dam was dedicated in 1935 and the hydroelectric generators went online in 1937. In 1947, Congress officially "restored" Hoover's name to the dam, after FDR's Secretary of the Interior tried to remove it. Hoover Dam was built for a cost of $49 million (approximately $1 billion adjusted for inflation). The power plant and generators cost an additional $71 million, more than the cost of the dam itself. The sale of electrical power generated by the dam paid back its construction cost, with interest, by 1987.

 

Today the Hoover Dam controls the flooding of the Colorado River, irrigates more than 1.5 million acres of land, and provides water to more than 16 million people. Lake Mead supports recreational activities and provides habitats to fish and wildlife. Power generated by the dam provides energy to power over 500,000 homes. The Hoover Compromise still governs how the water is shared.

 

Additional Foreign Language Tags:

 

(United States) "الولايات المتحدة" "Vereinigte Staaten" "アメリカ" "美国" "미국" "Estados Unidos" "États-Unis"

 

(Nevada) "نيفادا" "内华达州" "नेवादा" "ネバダ" "네바다" "Невада"

 

(Arizona) "أريزونا" "亚利桑那州" "एरिजोना" "アリゾナ州" "애리조나" "Аризона"

 

(Hoover Dam) "سد هوفر" "胡佛水坝" "हूवर बांध" "フーバーダム" "후버 댐" "Гувера" "Presa Hoover"

Have you pictured how to achieve peak sexual pleasure? Well, let us get started with some of the most popular women's masturbation techniques. They are erotic, rigid, pleasurable, realistic and solid. If you have been wondering how to attain an orgasm, well, here is the blueprint.

This is now a female-to-female gender changer (three plates high) between the brown tile and the red 1x1 plate.

This is an example of a digital scrapbook technique using The Big Picture Action from PanosFX

What makes a good photo for you? Is it the technique, the story or the subject?

I decided to try a new river technique and this is the result! :D

Technique: coffee grounds on a wooden chopping board & photo post-processing

Card created for Hero Arts Technique challenge. I used the clear emboss/resist technique over patterned paper and distressed & inked the edges. TFL!

 

Stamps: HA CL058 ClearDesign Month November Set

Ink: ColorBox Ink Queue & VersaFine Vintage Sepia

Patterned Paper: Basic Grey

Cardstock: Bazzill

Ribbon: Making Memories

Jewels: Hero Arts

PrismaColor Colored Pencils & Gamsol

Ranger Crystal Stickles used over pumpkin

I did this experiment with my dad on Christmas. He is interested in learning more mixed media techniques to go with his watercolors. We are going to work through the Compendium of Curiosities books together.

Title: Fantasy Art Techniques.

Author: Boris Vallejo.

Publisher: Paper Tiger Books.

Date: 1985.

Artist: Boris Vallejo.

Here is a new set of LEGO ideas and techniques, made with LDD

I'm sure you'll find a use for this idea

I tried to make the explanation readable by using colors, as if it were a tutorial

 

Don't forget to check out the album with all the related techniques on your right =>

 

Find all my creations in the Flickr group « News LEGO Techniques ».

This Flickr group includes:

 

- Ideas for new LEGO pieces

- Techniques for assembling bricks

- Tutorials for making accessories, objects, etc.

Similar to the old "Pony Ear" (displayed here too). See my write-up at www.dagsbricks.com/2014/02/lego-techniques-pony-leg.html

The early stages after laying out the torn green, blue and yellow watercolor strips onto an Ecru layer of clay

As a schoolboy in the 1970s I experimented with various photographic techniques. This was made by rotating bent wire in a darkened room in front of the camera with the shutter locked open. I used coloured light and sometimes added electronic flash to freeze the movement.

The film was Ferrania Colour CR50, home processed in E6 chemistry.

Korak is an ancient patchwork technique from Central Asia. This is my first attempt at making a piece of Korak. You can’t actually call them quilts, because they have no batting.

The pneumatic T-piece is the hidden part I want to showcase.

Here is a new set of LEGO ideas and techniques, made with LDD

I'm sure you'll find a use for this idea

I tried to make the explanation readable by using colors, as if it were a tutorial

 

Don't forget to check out the album with all the related techniques on your right =>

 

Find all my creations in the Flickr group « News LEGO Techniques ».

This Flickr group includes:

 

- Ideas for new LEGO pieces

- Techniques for assembling bricks

- Tutorials for making accessories, objects, etc.

My technique for building medieval houses in its simplest form. More advanced versions will be tackled in my upcoming guide to building a medieval village.

Ways to make love

A right triangle drawn on a 1x2 brick with the hypotenuse running from one corner to the center of the far stud. The short leg is 10 LDU and the long leg 30 LDU.

 

In this technique with the 1x4 hinge brick, the corner on which the brick hinges maintains its position of 30 LDU from a perpendicular line through the center of the outer stud. That means we can construct a congruent right triangle on the hypotenuse of the first triangle by drawing a horizontal line from the center to the line through the outer stud. We know this triangle is congruent because of the hypotenuse-leg postulate (the triangles have a hypotenuse and one leg of equal lengths).

 

We know the lengths of two sides of these right triangles so we can determine the angles using trigonometry.

 

a=10

b=30

tan(A) = a/b = .3333

A = 18.43°

 

Multiply that by 2, since we know the angles are equal in the congruent triangles, and we have 36.87° for the angle between the horizontal line through the center of the hinge and the long edge of the brick.
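To double-check the arithmetic, here is a small worked example (a Python sketch; the 10 LDU and 30 LDU legs come from the description above):

```python
import math

# Legs of the right triangle on the 1x2 brick, in LDU (from the text above)
a = 10   # short leg
b = 30   # long leg

# Angle A in one right triangle
A = math.degrees(math.atan(a / b))   # ~18.43 degrees

# The congruent triangle contributes an equal angle, so double it
hinge_angle = 2 * A                  # ~36.87 degrees

print(f"A = {A:.2f} deg, hinge angle = {hinge_angle:.2f} deg")
```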

  

Today's technique tip is foreground.

 

I am using trees as an example, and here I am using two images from Kensington Palace and one from Hampton Court Palace both in London and both courtesy of HM Queen Elizabeth.

 

If you are taking photos of distant objects, such as trees, landscapes, oceans or horizons, you need to add something in the foreground; otherwise there is a risk that they will be a bit boring.

 

In these two gardens, the trees were really impressive, as you can see. But in person your eyes put them in context: you can see the rest of the garden or building around them, which adds to the interest level.

 

When you take a photo, and depending on the focal length, the interesting bits can look a little distant, and well . . . . . a little uninspiring.

 

So, where possible, add some eye candy in the foreground. It can be flowers, trees, a branch or a rock.

There are exceptions, though, as in this shot here, where the symmetry and perspective distortion lead your eye to the teahouse at the end, so it is a sugar-free option. This assumes that the thumbnail caught your eye and interest in the first place, enough for you to look at it here.

 

If you look at the first shot below, though, there is eye candy by the truckload with the flower beds; compare that to the lower shot of the trees, where the large expanse of lawn and the bird are, well . . . . . . boring.

 

As usual, I have had to plug in a blue sky as it was in London, and the washed out cloudy sky was well . . . . . you guessed it - boring.

 

Knock knock - who's THERE

 

Trees, Shrubs and Bushes Theme

A close up of the Desert Watercolor clay design

Well, I am building something that requires a 2x2 tile to line up with plates and bricks. I gave myself a headache trying to figure out how to get the correct connection. Here is what I came up with. Depending on how you look at it, the 2x2 tile is pushed back a bit, but it is very hard to notice.

10 Sept 2008 IR HDR Images.

 

High Dynamic Range (HDR)

 

High-dynamic-range imaging (HDRI) is a high dynamic range (HDR) technique used in imaging and photography to reproduce a greater dynamic range of luminosity than is possible with standard digital imaging or photographic techniques. The aim is to present a similar range of luminance to that experienced through the human visual system. The human eye, through adaptation of the iris and other methods, adjusts constantly to adapt to a broad range of luminance present in the environment. The brain continuously interprets this information so that a viewer can see in a wide range of light conditions.

 

HDR images can represent a greater range of luminance levels than can be achieved using more 'traditional' methods, such as many real-world scenes containing very bright, direct sunlight to extreme shade, or very faint nebulae. This is often achieved by capturing and then combining several different, narrower range, exposures of the same subject matter. Non-HDR cameras take photographs with a limited exposure range, referred to as LDR, resulting in the loss of detail in highlights or shadows.

 

The two primary types of HDR images are computer renderings and images resulting from merging multiple low-dynamic-range (LDR) or standard-dynamic-range (SDR) photographs. HDR images can also be acquired using special image sensors, such as an oversampled binary image sensor.

 

Due to the limitations of printing and display contrast, the extended luminosity range of an HDR image has to be compressed to be made visible. The method of rendering an HDR image to a standard monitor or printing device is called tone mapping. This method reduces the overall contrast of an HDR image to facilitate display on devices or printouts with lower dynamic range, and can be applied to produce images with preserved local contrast (or exaggerated for artistic effect).

 

In photography, dynamic range is measured in exposure value (EV) differences (known as stops). An increase of one EV, or 'one stop', represents a doubling of the amount of light. Conversely, a decrease of one EV represents a halving of the amount of light. Therefore, revealing detail in the darkest of shadows requires high exposures, while preserving detail in very bright situations requires very low exposures. Most cameras cannot provide this range of exposure values within a single exposure, due to their low dynamic range. High-dynamic-range photographs are generally achieved by capturing multiple standard-exposure images, often using exposure bracketing, and then later merging them into a single HDR image, usually within a photo manipulation program. Digital images are often encoded in a camera's raw image format, because 8-bit JPEG encoding does not offer a wide enough range of values to allow fine transitions (and regarding HDR, later introduces undesirable effects due to lossy compression).
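As a rough sketch of the capture-then-merge workflow described above (this is only an illustration using OpenCV's HDR classes, not any particular program's method; file names and exposure times are placeholders):

```python
import cv2
import numpy as np

# Three bracketed exposures of the same scene (placeholder file names)
files = ["under.jpg", "normal.jpg", "over.jpg"]
images = [cv2.imread(f) for f in files]
exposure_times = np.array([1/60, 1/15, 1/4], dtype=np.float32)  # seconds

# Merge the bracketed shots into one HDR radiance map (Debevec method)
merge = cv2.createMergeDebevec()
hdr = merge.process(images, exposure_times)

# Tone-map the HDR result back into a displayable 8-bit image
tonemap = cv2.createTonemap(gamma=2.2)
ldr = tonemap.process(hdr)
cv2.imwrite("result.jpg", np.clip(ldr * 255, 0, 255).astype("uint8"))
```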

 

Any camera that allows manual exposure control can make images for HDR work, although one equipped with auto exposure bracketing (AEB) is far better suited. Images from film cameras are less suitable as they often must first be digitized, so that they can later be processed using software HDR methods.

 

In most imaging devices, the degree of exposure to light applied to the active element (be it film or CCD) can be altered in one of two ways: by either increasing/decreasing the size of the aperture or by increasing/decreasing the time of each exposure. Exposure variation in an HDR set is only done by altering the exposure time and not the aperture size; this is because altering the aperture size also affects the depth of field and so the resultant multiple images would be quite different, preventing their final combination into a single HDR image.

 

An important limitation for HDR photography is that any movement between successive images will impede or prevent success in combining them afterwards. Also, as one must create several images (often three or five and sometimes more) to obtain the desired luminance range, such a full 'set' of images takes extra time. HDR photographers have developed calculation methods and techniques to partially overcome these problems, but the use of a sturdy tripod is, at least, advised.

 

Some cameras have an auto exposure bracketing (AEB) feature with a far greater dynamic range than others, from the 3 EV of the Canon EOS 40D, to the 18 EV of the Canon EOS-1D Mark II. As the popularity of this imaging method grows, several camera manufacturers are now offering built-in HDR features. For example, the Pentax K-7 DSLR has an HDR mode that captures an HDR image and outputs (only) a tone mapped JPEG file. The Canon PowerShot G12, Canon PowerShot S95 and Canon PowerShot S100 offer similar features in a smaller format. Nikon's approach is called 'Active D-Lighting', which applies exposure compensation and tone mapping to the image as it comes from the sensor, with the accent being on retaining a realistic effect. Some smartphones provide HDR modes, and most mobile platforms have apps that provide HDR picture taking.

 

Camera characteristics such as gamma curves, sensor resolution, noise, photometric calibration and color calibration affect resulting high-dynamic-range images.

 

Color film negatives and slides consist of multiple film layers that respond to light differently. As a consequence, transparent originals (especially positive slides) feature a very high dynamic range.

 

Tone mapping

Tone mapping reduces the dynamic range, or contrast ratio, of an entire image while retaining localized contrast. Although it is a distinct operation, tone mapping is often applied to HDRI files by the same software package.

 

Several software applications are available on the PC, Mac and Linux platforms for producing HDR files and tone mapped images. Notable titles include:

 

Adobe Photoshop

Aurora HDR

Dynamic Photo HDR

HDR Efex Pro

HDR PhotoStudio

Luminance HDR

MagicRaw

Oloneo PhotoEngine

Photomatix Pro

PTGui

 

Information stored in high-dynamic-range images typically corresponds to the physical values of luminance or radiance that can be observed in the real world. This is different from traditional digital images, which represent colors as they should appear on a monitor or a paper print. Therefore, HDR image formats are often called scene-referred, in contrast to traditional digital images, which are device-referred or output-referred. Furthermore, traditional images are usually encoded for the human visual system (maximizing the visual information stored in the fixed number of bits), which is usually called gamma encoding or gamma correction. The values stored for HDR images are often gamma compressed (power law) or logarithmically encoded, or floating-point linear values, since fixed-point linear encodings are increasingly inefficient over higher dynamic ranges.

 

HDR images often don't use fixed ranges per color channel—unlike traditional images—to represent many more colors over a much wider dynamic range. For that purpose, they don't use integer values to represent the single color channels (e.g., 0-255 in an 8 bit per pixel interval for red, green and blue) but instead use a floating point representation. Common are 16-bit (half precision) or 32-bit floating point numbers to represent HDR pixels. However, when the appropriate transfer function is used, HDR pixels for some applications can be represented with a color depth that has as few as 10–12 bits for luminance and 8 bits for chrominance without introducing any visible quantization artifacts.
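To make the storage difference concrete, here is a tiny NumPy sketch contrasting an 8-bit integer channel with a half-precision floating-point channel (the example values are made up):

```python
import numpy as np

# An 8-bit integer channel has only 256 fixed steps between 0 and 255
ldr_pixel = np.uint8(200)

# A half-precision float channel can hold scene-referred values spanning
# many orders of magnitude, e.g. deep shadow versus direct sunlight
shadow   = np.float16(0.05)
sunlight = np.float16(20000.0)

print(ldr_pixel, shadow, sunlight)
print("contrast ratio held in float16:", float(sunlight) / float(shadow))
```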

 

History of HDR photography

The idea of using several exposures to adequately reproduce a too-extreme range of luminance was pioneered as early as the 1850s by Gustave Le Gray to render seascapes showing both the sky and the sea. Such rendering was impossible at the time using standard methods, as the luminosity range was too extreme. Le Gray used one negative for the sky, and another one with a longer exposure for the sea, and combined the two into one picture in positive.

 

Mid 20th century

Manual tone mapping was accomplished by dodging and burning – selectively increasing or decreasing the exposure of regions of the photograph to yield better tonality reproduction. This was effective because the dynamic range of the negative is significantly higher than would be available on the finished positive paper print when that is exposed via the negative in a uniform manner. An excellent example is the photograph Schweitzer at the Lamp by W. Eugene Smith, from his 1954 photo essay A Man of Mercy on Dr. Albert Schweitzer and his humanitarian work in French Equatorial Africa. The image took 5 days to reproduce the tonal range of the scene, which ranges from a bright lamp (relative to the scene) to a dark shadow.

 

Ansel Adams elevated dodging and burning to an art form. Many of his famous prints were manipulated in the darkroom with these two methods. Adams wrote a comprehensive book on producing prints called The Print, which prominently features dodging and burning, in the context of his Zone System.

 

With the advent of color photography, tone mapping in the darkroom was no longer possible due to the specific timing needed during the developing process of color film. Photographers looked to film manufacturers to design new film stocks with improved response, or continued to shoot in black and white to use tone mapping methods.

 

Color film capable of directly recording high-dynamic-range images was developed by Charles Wyckoff and EG&G "in the course of a contract with the Department of the Air Force". This XR film had three emulsion layers, an upper layer having an ASA speed rating of 400, a middle layer with an intermediate rating, and a lower layer with an ASA rating of 0.004. The film was processed in a manner similar to color films, and each layer produced a different color. The dynamic range of this extended range film has been estimated as 1:10^8. It has been used to photograph nuclear explosions, for astronomical photography, for spectrographic research, and for medical imaging. Wyckoff's detailed pictures of nuclear explosions appeared on the cover of Life magazine in the mid-1950s.

 

Late 20th century

Georges Cornuéjols and licensees of his patents (Brdi, Hymatom) introduced the principle of the HDR video image in 1986, by interposing a matricial LCD screen in front of the camera's image sensor, increasing the sensor's dynamic range by five stops. The concept of neighborhood tone mapping was applied to video cameras by a group from the Technion in Israel led by Dr. Oliver Hilsenrath and Prof. Y. Y. Zeevi, who filed for a patent on this concept in 1988.

 

In February and April 1990, Georges Cornuéjols introduced the first real-time HDR camera, which combined two images captured by a sensor, or simultaneously by two sensors of the camera. This process, known as bracketing, was used for a video stream.

 

In 1991, the first commercial video camera that performed real-time capture of multiple images with different exposures and produced an HDR video image was introduced by Hymatom, licensee of Georges Cornuéjols.

 

Also in 1991, Georges Cornuéjols introduced the HDR+ image principle by non-linear accumulation of images to increase the sensitivity of the camera: for low-light environments, several successive images are accumulated, thus increasing the signal to noise ratio.

 

In 1993, another commercial medical camera producing an HDR video image was introduced by the Technion.

 

Modern HDR imaging uses a completely different approach, based on making a high-dynamic-range luminance or light map using only global image operations (across the entire image), and then tone mapping the result. Global HDR was first introduced in 1993, resulting in a mathematical theory of differently exposed pictures of the same subject matter that was published in 1995 by Steve Mann and Rosalind Picard.

 

On October 28, 1998, Ben Sarao created one of the first nighttime HDR+G (High Dynamic Range + Graphic image) of STS-95 on the launch pad at NASA's Kennedy Space Center. It consisted of four film images of the shuttle at night that were digitally composited with additional digital graphic elements. The image was first exhibited at NASA Headquarters Great Hall, Washington DC in 1999 and then published in Hasselblad Forum, Issue 3 1993, Volume 35 ISSN 0282-5449.

 

The advent of consumer digital cameras produced a new demand for HDR imaging to improve the light response of digital camera sensors, which had a much smaller dynamic range than film. Steve Mann developed and patented the global-HDR method for producing digital images having extended dynamic range at the MIT Media Laboratory. Mann's method involved a two-step procedure: (1) generate one floating point image array by global-only image operations (operations that affect all pixels identically, without regard to their local neighborhoods); and then (2) convert this image array, using local neighborhood processing (tone-remapping, etc.), into an HDR image. The image array generated by the first step of Mann's process is called a lightspace image, lightspace picture, or radiance map. Another benefit of global-HDR imaging is that it provides access to the intermediate light or radiance map, which has been used for computer vision, and other image processing operations.

 

21st century

In 2005, Adobe Systems introduced several new features in Photoshop CS2 including Merge to HDR, 32 bit floating point image support, and HDR tone mapping.

 

On June 30, 2016, Microsoft added support for the digital compositing of HDR images to Windows 10 using the Universal Windows Platform.

 

HDR sensors

Modern CMOS image sensors can often capture a high dynamic range from a single exposure. The wide dynamic range of the captured image is non-linearly compressed into a smaller dynamic range electronic representation. However, with proper processing, the information from a single exposure can be used to create an HDR image.

 

Such HDR imaging is used in extreme dynamic range applications like welding or automotive work. Some other cameras designed for use in security applications can automatically provide two or more images for each frame, with changing exposure. For example, a sensor for 30 fps video will give out 60 fps, with the odd frames at a short exposure time and the even frames at a longer exposure time. Some of these sensors may even combine the two images on-chip so that a wider dynamic range without in-pixel compression is directly available to the user for display or processing.
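As an illustration only (the odd/even frame order and the fusion method are assumptions, not a description of any specific sensor), pairing the short- and long-exposure frames of such a stream could look like this:

```python
import cv2

merge = cv2.createMergeMertens()  # exposure fusion; no exposure times required

def fuse_alternating_stream(frames):
    """Pair short-exposure (odd) and long-exposure (even) frames from a
    60 fps alternating-exposure stream into 30 fps of fused frames."""
    fused = []
    for short_frame, long_frame in zip(frames[0::2], frames[1::2]):
        result = merge.process([short_frame, long_frame])  # float image in [0, 1]
        fused.append((result * 255).clip(0, 255).astype("uint8"))
    return fused
```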

 

en.wikipedia.org/wiki/High-dynamic-range_imaging

 

Infrared Photography

 

In infrared photography, the film or image sensor used is sensitive to infrared light. The part of the spectrum used is referred to as near-infrared to distinguish it from far-infrared, which is the domain of thermal imaging. Wavelengths used for photography range from about 700 nm to about 900 nm. Film is usually sensitive to visible light too, so an infrared-passing filter is used; this lets infrared (IR) light pass through to the camera, but blocks all or most of the visible light spectrum (the filter thus looks black or deep red). ("Infrared filter" may refer either to this type of filter or to one that blocks infrared but passes other wavelengths.)

 

When these filters are used together with infrared-sensitive film or sensors, "in-camera effects" can be obtained; false-color or black-and-white images with a dreamlike or sometimes lurid appearance known as the "Wood Effect," an effect mainly caused by foliage (such as tree leaves and grass) strongly reflecting in the same way visible light is reflected from snow. There is a small contribution from chlorophyll fluorescence, but this is marginal and is not the real cause of the brightness seen in infrared photographs. The effect is named after the infrared photography pioneer Robert W. Wood, and not after the material wood, which does not strongly reflect infrared.

 

The other attributes of infrared photographs include very dark skies and penetration of atmospheric haze, caused by reduced Rayleigh scattering and Mie scattering, respectively, compared to visible light. The dark skies, in turn, result in less infrared light in shadows and dark reflections of those skies from water, and clouds will stand out strongly. These wavelengths also penetrate a few millimeters into skin and give a milky look to portraits, although eyes often look black.

 

Until the early 20th century, infrared photography was not possible because silver halide emulsions are not sensitive to longer wavelengths than that of blue light (and to a lesser extent, green light) without the addition of a dye to act as a color sensitizer. The first infrared photographs (as distinct from spectrographs) to be published appeared in the February 1910 edition of The Century Magazine and in the October 1910 edition of the Royal Photographic Society Journal to illustrate papers by Robert W. Wood, who discovered the unusual effects that now bear his name. The RPS co-ordinated events to celebrate the centenary of this event in 2010. Wood's photographs were taken on experimental film that required very long exposures; thus, most of his work focused on landscapes. A further set of infrared landscapes taken by Wood in Italy in 1911 used plates provided for him by CEK Mees at Wratten & Wainwright. Mees also took a few infrared photographs in Portugal in 1910, which are now in the Kodak archives.

 

Infrared-sensitive photographic plates were developed in the United States during World War I for spectroscopic analysis, and infrared sensitizing dyes were investigated for improved haze penetration in aerial photography. After 1930, new emulsions from Kodak and other manufacturers became useful to infrared astronomy.

 

Infrared photography became popular with photography enthusiasts in the 1930s when suitable film was introduced commercially. The Times regularly published landscape and aerial photographs taken by their staff photographers using Ilford infrared film. By 1937, 33 kinds of infrared film were available from five manufacturers, including Agfa, Kodak and Ilford. Infrared movie film was also available and was used to create day-for-night effects in motion pictures, a notable example being the pseudo-night aerial sequences in the James Cagney/Bette Davis movie The Bride Came C.O.D.

 

False-color infrared photography became widely practiced with the introduction of Kodak Ektachrome Infrared Aero Film and Ektachrome Infrared EIR. The first version of this, known as Kodacolor Aero-Reversal-Film, was developed by Clark and others at Kodak for camouflage detection in the 1940s. The film became more widely available in 35 mm form in the 1960s, but KODAK AEROCHROME III Infrared Film 1443 has since been discontinued.

 

Infrared photography became popular with a number of 1960s recording artists because of the unusual results; Jimi Hendrix, Donovan and Frank Zappa, among others, used infrared photographs on album covers. Infrared film can be shot with a small aperture and a slow shutter speed without focus compensation; however, wider apertures like f/2.0 can produce sharp photos only if the lens is meticulously refocused to the infrared index mark, and only if this index mark is the correct one for the filter and film in use. However, it should be noted that diffraction effects inside a camera are greater at infrared wavelengths, so stopping down the lens too far may actually reduce sharpness.

 

Most apochromatic ('APO') lenses do not have an Infrared index mark and do not need to be refocused for the infrared spectrum because they are already optically corrected into the near-infrared spectrum. Catadioptric lenses do not often require this adjustment because their mirror containing elements do not suffer from chromatic aberration and so the overall aberration is comparably less. Catadioptric lenses do, of course, still contain lenses, and these lenses do still have a dispersive property.

 

Infrared black-and-white films require special development times but development is usually achieved with standard black-and-white film developers and chemicals (like D-76). Kodak HIE film has a polyester film base that is very stable but extremely easy to scratch, therefore special care must be used in the handling of Kodak HIE throughout the development and printing/scanning process to avoid damage to the film. The Kodak HIE film was sensitive to 900 nm.

 

As of November 2, 2007, "KODAK is preannouncing the discontinuance" of HIE Infrared 35 mm film stating the reasons that, "Demand for these products has been declining significantly in recent years, and it is no longer practical to continue to manufacture given the low volume, the age of the product formulations and the complexity of the processes involved." At the time of this notice, HIE Infrared 135-36 was available at a street price of around $12.00 a roll at US mail order outlets.

 

Arguably the greatest obstacle to infrared film photography has been the increasing difficulty of obtaining infrared-sensitive film. However, despite the discontinuance of HIE, other newer infrared sensitive emulsions from EFKE, ROLLEI, and ILFORD are still available, but these formulations have differing sensitivity and specifications from the venerable KODAK HIE that has been around for at least two decades. Some of these infrared films are available in 120 and larger formats as well as 35 mm, which adds flexibility to their application. With the discontinuance of Kodak HIE, Efke's IR820 film has become the only IR film on the market with good sensitivity beyond 750 nm; the Rollei film does extend beyond 750 nm, but its IR sensitivity falls off very rapidly.

  

Color infrared transparency films have three sensitized layers that, because of the way the dyes are coupled to these layers, reproduce infrared as red, red as green, and green as blue. All three layers are sensitive to blue so the film must be used with a yellow filter, since this will block blue light but allow the remaining colors to reach the film. The health of foliage can be determined from the relative strengths of green and infrared light reflected; this shows in color infrared as a shift from red (healthy) towards magenta (unhealthy). Early color infrared films were developed in the older E-4 process, but Kodak later manufactured a color transparency film that could be developed in standard E-6 chemistry, although more accurate results were obtained by developing using the AR-5 process. In general, color infrared does not need to be refocused to the infrared index mark on the lens.

 

In 2007 Kodak announced that production of the 35 mm version of their color infrared film (Ektachrome Professional Infrared/EIR) would cease as there was insufficient demand. Since 2011, all formats of color infrared film have been discontinued. Specifically, Aerochrome 1443 and SO-734.

 

There is no currently available digital camera that will produce the same results as Kodak color infrared film although the equivalent images can be produced by taking two exposures, one infrared and the other full-color, and combining in post-production. The color images produced by digital still cameras using infrared-pass filters are not equivalent to those produced on color infrared film. The colors result from varying amounts of infrared passing through the color filters on the photo sites, further amended by the Bayer filtering. While this makes such images unsuitable for the kind of applications for which the film was used, such as remote sensing of plant health, the resulting color tonality has proved popular artistically.

 

Color digital infrared, as part of full spectrum photography is gaining popularity. The ease of creating a softly colored photo with infrared characteristics has found interest among hobbyists and professionals.

 

In 2008, Los Angeles photographer, Dean Bennici started cutting and hand rolling Aerochrome color Infrared film. All Aerochrome medium and large format which exists today came directly from his lab. The trend in infrared photography continues to gain momentum with the success of photographer Richard Mosse and multiple users all around the world.

 

Digital camera sensors are inherently sensitive to infrared light, which would interfere with the normal photography by confusing the autofocus calculations or softening the image (because infrared light is focused differently from visible light), or oversaturating the red channel. Also, some clothing is transparent in the infrared, leading to unintended (at least to the manufacturer) uses of video cameras. Thus, to improve image quality and protect privacy, many digital cameras employ infrared blockers. Depending on the subject matter, infrared photography may not be practical with these cameras because the exposure times become overly long, often in the range of 30 seconds, creating noise and motion blur in the final image. However, for some subject matter the long exposure does not matter or the motion blur effects actually add to the image. Some lenses will also show a 'hot spot' in the centre of the image as their coatings are optimised for visible light and not for IR.

 

An alternative method of DSLR infrared photography is to remove the infrared blocker in front of the sensor and replace it with a filter that removes visible light. This filter is behind the mirror, so the camera can be used normally - handheld, normal shutter speeds, normal composition through the viewfinder, and focus, all work like a normal camera. Metering works but is not always accurate because of the difference between visible and infrared refraction. When the IR blocker is removed, many lenses which did display a hotspot cease to do so, and become perfectly usable for infrared photography. Additionally, because the red, green and blue micro-filters remain and have transmissions not only in their respective color but also in the infrared, enhanced infrared color may be recorded.

 

Since the Bayer filters in most digital cameras absorb a significant fraction of the infrared light, these cameras are sometimes not very sensitive as infrared cameras and can sometimes produce false colors in the images. An alternative approach is to use a Foveon X3 sensor, which does not have absorptive filters on it; the Sigma SD10 DSLR has a removable IR blocking filter and dust protector, which can be simply omitted or replaced by a deep red or complete visible light blocking filter. The Sigma SD14 has an IR/UV blocking filter that can be removed/installed without tools. The result is a very sensitive digital IR camera.

 

While it is common to use a filter that blocks almost all visible light, the wavelength sensitivity of a digital camera without internal infrared blocking is such that a variety of artistic results can be obtained with more conventional filtration. For example, a very dark neutral density filter can be used (such as the Hoya ND400) which passes a very small amount of visible light compared to the near-infrared it allows through. Wider filtration permits an SLR viewfinder to be used and also passes more varied color information to the sensor without necessarily reducing the Wood effect. Wider filtration is however likely to reduce other infrared artefacts such as haze penetration and darkened skies. This technique mirrors the methods used by infrared film photographers where black-and-white infrared film was often used with a deep red filter rather than a visually opaque one.

 

Another common technique with near-infrared filters is to swap the blue and red channels in software (e.g. Photoshop), which retains much of the characteristic 'white foliage' while rendering skies a glorious blue.
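The same red/blue channel swap is easy to do outside Photoshop as well; here is a minimal sketch using NumPy and Pillow (the file names are placeholders):

```python
import numpy as np
from PIL import Image

# Load a near-infrared capture (placeholder file name)
img = np.array(Image.open("infrared_capture.jpg"))

# Swap the red and blue channels: foliage stays white, skies turn blue
swapped = img[..., [2, 1, 0]]

Image.fromarray(swapped).save("infrared_blue_sky.jpg")
```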

 

Several Sony cameras had the so-called Night Shot facility, which physically moves the blocking filter away from the light path, which makes the cameras very sensitive to infrared light. Soon after its development, this facility was 'restricted' by Sony to make it difficult for people to take photos that saw through clothing. To do this the iris is opened fully and exposure duration is limited to long times of more than 1/30 second or so. It is possible to shoot infrared but neutral density filters must be used to reduce the camera's sensitivity and the long exposure times mean that care must be taken to avoid camera-shake artifacts.

 

Fuji have produced digital cameras for use in forensic criminology and medicine which have no infrared blocking filter. The first camera, designated the S3 PRO UVIR, also had extended ultraviolet sensitivity (digital sensors are usually less sensitive to UV than to IR). Optimum UV sensitivity requires special lenses, but ordinary lenses usually work well for IR. In 2007, FujiFilm introduced a new version of this camera, based on the Nikon D200/ FujiFilm S5 called the IS Pro, also able to take Nikon lenses. Fuji had earlier introduced a non-SLR infrared camera, the IS-1, a modified version of the FujiFilm FinePix S9100. Unlike the S3 PRO UVIR, the IS-1 does not offer UV sensitivity. FujiFilm restricts the sale of these cameras to professional users with their EULA specifically prohibiting "unethical photographic conduct".

 

Phase One digital camera backs can be ordered in an infrared modified form.

 

Remote sensing and thermographic cameras are sensitive to longer wavelengths of infrared (see Infrared spectrum#Commonly used sub-division scheme). They may be multispectral and use a variety of technologies which may not resemble common camera or filter designs. Cameras sensitive to longer infrared wavelengths including those used in infrared astronomy often require cooling to reduce thermally induced dark currents in the sensor (see Dark current (physics)). Lower cost uncooled thermographic digital cameras operate in the Long Wave infrared band (see Thermographic camera#Uncooled infrared detectors). These cameras are generally used for building inspection or preventative maintenance but can be used for artistic pursuits as well.

 

en.wikipedia.org/wiki/Infrared_photography

 

Technique mixte acrylique

Sérigraphie, pochoir et bombe

Toile de coton sur châssis

60 x 60 cm

 

chris@tian.fr

 

mixed media : acrylic painting

screenprinting, stencil & spray

cotton canvas on stretcher

23.6 x 23.6 inches

We've all seen or done the 1-plate offset for colored slopes... but what about the half plate?

pretty tight

Mosque Hassan II

In Explore 30th May, 2008

Infrared-converted Sony A6000 with Sony E 16mm F2.8 mounted with the Sony Ultra Wide Converter. HDR AEB +/-2, a total of 3 exposures at F8, 16mm, autofocus, processed with Photomatix HDR software.

 

High Dynamic Range (HDR)

 

High-dynamic-range imaging (HDRI) is a high dynamic range (HDR) technique used in imaging and photography to reproduce a greater dynamic range of luminosity than is possible with standard digital imaging or photographic techniques. The aim is to present a similar range of luminance to that experienced through the human visual system. The human eye, through adaptation of the iris and other methods, adjusts constantly to adapt to a broad range of luminance present in the environment. The brain continuously interprets this information so that a viewer can see in a wide range of light conditions.

 

HDR images can represent a greater range of luminance levels than can be achieved using more 'traditional' methods, such as many real-world scenes containing very bright, direct sunlight to extreme shade, or very faint nebulae. This is often achieved by capturing and then combining several different, narrower range, exposures of the same subject matter. Non-HDR cameras take photographs with a limited exposure range, referred to as LDR, resulting in the loss of detail in highlights or shadows.

 

The two primary types of HDR images are computer renderings and images resulting from merging multiple low-dynamic-range (LDR) or standard-dynamic-range (SDR) photographs. HDR images can also be acquired using special image sensors, such as an oversampled binary image sensor.

 

Due to the limitations of printing and display contrast, the extended luminosity range of an HDR image has to be compressed to be made visible. The method of rendering an HDR image to a standard monitor or printing device is called tone mapping. This method reduces the overall contrast of an HDR image to facilitate display on devices or printouts with lower dynamic range, and can be applied to produce images with preserved local contrast (or exaggerated for artistic effect).

 

In photography, dynamic range is measured in exposure value (EV) differences (known as stops). An increase of one EV, or 'one stop', represents a doubling of the amount of light. Conversely, a decrease of one EV represents a halving of the amount of light. Therefore, revealing detail in the darkest of shadows requires high exposures, while preserving detail in very bright situations requires very low exposures. Most cameras cannot provide this range of exposure values within a single exposure, due to their low dynamic range. High-dynamic-range photographs are generally achieved by capturing multiple standard-exposure images, often using exposure bracketing, and then later merging them into a single HDR image, usually within a photo manipulation program). Digital images are often encoded in a camera's raw image format, because 8-bit JPEG encoding does not offer a wide enough range of values to allow fine transitions (and regarding HDR, later introduces undesirable effects due to lossy compression).

 

Any camera that allows manual exposure control can make images for HDR work, although one equipped with auto exposure bracketing (AEB) is far better suited. Images from film cameras are less suitable as they often must first be digitized, so that they can later be processed using software HDR methods.

 

In most imaging devices, the degree of exposure to light applied to the active element (be it film or CCD) can be altered in one of two ways: by either increasing/decreasing the size of the aperture or by increasing/decreasing the time of each exposure. Exposure variation in an HDR set is only done by altering the exposure time and not the aperture size; this is because altering the aperture size also affects the depth of field and so the resultant multiple images would be quite different, preventing their final combination into a single HDR image.

 

An important limitation for HDR photography is that any movement between successive images will impede or prevent success in combining them afterwards. Also, as one must create several images (often three or five and sometimes more) to obtain the desired luminance range, such a full 'set' of images takes extra time. HDR photographers have developed calculation methods and techniques to partially overcome these problems, but the use of a sturdy tripod is, at least, advised.

 

Some cameras have an auto exposure bracketing (AEB) feature with a far greater dynamic range than others, from the 3 EV of the Canon EOS 40D, to the 18 EV of the Canon EOS-1D Mark II. As the popularity of this imaging method grows, several camera manufactures are now offering built-in HDR features. For example, the Pentax K-7 DSLR has an HDR mode that captures an HDR image and outputs (only) a tone mapped JPEG file. The Canon PowerShot G12, Canon PowerShot S95 and Canon PowerShot S100 offer similar features in a smaller format.. Nikon's approach is called 'Active D-Lighting' which applies exposure compensation and tone mapping to the image as it comes from the sensor, with the accent being on retaing a realistic effect . Some smartphones provide HDR modes, and most mobile platforms have apps that provide HDR picture taking.

 

Camera characteristics such as gamma curves, sensor resolution, noise, photometric calibration and color calibration affect resulting high-dynamic-range images.

 

Color film negatives and slides consist of multiple film layers that respond to light differently. As a consequence, transparent originals (especially positive slides) feature a very high dynamic range

 

Tone mapping

Tone mapping reduces the dynamic range, or contrast ratio, of an entire image while retaining localized contrast. Although it is a distinct operation, tone mapping is often applied to HDRI files by the same software package.

 

Several software applications are available on the PC, Mac and Linux platforms for producing HDR files and tone mapped images. Notable titles include

 

Adobe Photoshop

Aurora HDR

Dynamic Photo HDR

HDR Efex Pro

HDR PhotoStudio

Luminance HDR

MagicRaw

Oloneo PhotoEngine

Photomatix Pro

PTGui

 

Information stored in high-dynamic-range images typically corresponds to the physical values of luminance or radiance that can be observed in the real world. This is different from traditional digital images, which represent colors as they should appear on a monitor or a paper print. Therefore, HDR image formats are often called scene-referred, in contrast to traditional digital images, which are device-referred or output-referred. Furthermore, traditional images are usually encoded for the human visual system (maximizing the visual information stored in the fixed number of bits), which is usually called gamma encoding or gamma correction. The values stored for HDR images are often gamma compressed (power law) or logarithmically encoded, or floating-point linear values, since fixed-point linear encodings are increasingly inefficient over higher dynamic ranges.

 

HDR images often don't use fixed ranges per color channel—other than traditional images—to represent many more colors over a much wider dynamic range. For that purpose, they don't use integer values to represent the single color channels (e.g., 0-255 in an 8 bit per pixel interval for red, green and blue) but instead use a floating point representation. Common are 16-bit (half precision) or 32-bit floating point numbers to represent HDR pixels. However, when the appropriate transfer function is used, HDR pixels for some applications can be represented with a color depth that has as few as 10–12 bits for luminance and 8 bits for chrominance without introducing any visible quantization artifacts.

 

History of HDR photography

The idea of using several exposures to adequately reproduce a too-extreme range of luminance was pioneered as early as the 1850s by Gustave Le Gray to render seascapes showing both the sky and the sea. Such rendering was impossible at the time using standard methods, as the luminosity range was too extreme. Le Gray used one negative for the sky, and another one with a longer exposure for the sea, and combined the two into one picture in positive.

 

Mid 20th century

Manual tone mapping was accomplished by dodging and burning – selectively increasing or decreasing the exposure of regions of the photograph to yield better tonality reproduction. This was effective because the dynamic range of the negative is significantly higher than would be available on the finished positive paper print when that is exposed via the negative in a uniform manner. An excellent example is the photograph Schweitzer at the Lamp by W. Eugene Smith, from his 1954 photo essay A Man of Mercy on Dr. Albert Schweitzer and his humanitarian work in French Equatorial Africa. The image took 5 days to reproduce the tonal range of the scene, which ranges from a bright lamp (relative to the scene) to a dark shadow.

 

Ansel Adams elevated dodging and burning to an art form. Many of his famous prints were manipulated in the darkroom with these two methods. Adams wrote a comprehensive book on producing prints called The Print, which prominently features dodging and burning, in the context of his Zone System.

 

With the advent of color photography, tone mapping in the darkroom was no longer possible due to the specific timing needed during the developing process of color film. Photographers looked to film manufacturers to design new film stocks with improved response, or continued to shoot in black and white to use tone mapping methods.

 

Color film capable of directly recording high-dynamic-range images was developed by Charles Wyckoff and EG&G "in the course of a contract with the Department of the Air Force". This XR film had three emulsion layers, an upper layer having an ASA speed rating of 400, a middle layer with an intermediate rating, and a lower layer with an ASA rating of 0.004. The film was processed in a manner similar to color films, and each layer produced a different color. The dynamic range of this extended range film has been estimated as 1:108. It has been used to photograph nuclear explosions, for astronomical photography, for spectrographic research, and for medical imaging. Wyckoff's detailed pictures of nuclear explosions appeared on the cover of Life magazine in the mid-1950s.

 

Late 20th century

Georges Cornuéjols and licensees of his patents (Brdi, Hymatom) introduced the principle of the HDR video image in 1986, by interposing a matricial LCD screen in front of the camera's image sensor, increasing the sensor's dynamic range by five stops. The concept of neighborhood tone mapping was applied to video cameras by a group from the Technion in Israel, led by Dr. Oliver Hilsenrath and Prof. Y. Y. Zeevi, who filed for a patent on this concept in 1988.

 

In February and April 1990, Georges Cornuéjols introduced the first real-time HDR camera, which combined two images captured by a sensor, or simultaneously by two sensors of the camera. This process, known as bracketing, was applied to a video stream.

 

In 1991, Hymatom, a licensee of Georges Cornuéjols, introduced the first commercial video camera that captured multiple images with different exposures in real time and produced an HDR video image.

 

Also in 1991, Georges Cornuéjols introduced the HDR+ image principle of non-linear accumulation of images to increase the sensitivity of the camera: in low-light environments, several successive images are accumulated, increasing the signal-to-noise ratio.
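A minimal Python sketch of the underlying idea, assuming already-aligned frames and plain linear averaging (the actual HDR+ accumulation is described only as non-linear, so this is a simplified stand-in): averaging N noisy frames of a static scene reduces the noise by roughly the square root of N.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a dim, static scene and several noisy low-light captures of it.
scene = np.full((4, 4), 10.0)                                # true signal level
frames = [scene + rng.normal(0.0, 5.0, scene.shape) for _ in range(16)]

# Accumulating (here: averaging) aligned frames lets the noise cancel out,
# improving the signal-to-noise ratio by roughly sqrt(number of frames).
accumulated = np.mean(frames, axis=0)

print("single-frame noise std:", np.std(frames[0] - scene))
print("accumulated noise std :", np.std(accumulated - scene))
```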

 

In 1993, the Technion introduced another commercial camera, for medical imaging, that produced an HDR video image.

 

Modern HDR imaging uses a completely different approach, based on making a high-dynamic-range luminance or light map using only global image operations (across the entire image), and then tone mapping the result. Global HDR was first introduced in 1993, resulting in a mathematical theory of differently exposed pictures of the same subject matter that was published in 1995 by Steve Mann and Rosalind Picard.

 

On October 28, 1998, Ben Sarao created one of the first nighttime HDR+G (high dynamic range + graphic) images, of STS-95 on the launch pad at NASA's Kennedy Space Center. It consisted of four film images of the shuttle at night that were digitally composited with additional digital graphic elements. The image was first exhibited at the NASA Headquarters Great Hall, Washington DC, in 1999 and then published in Hasselblad Forum, Issue 3 1999, Volume 35, ISSN 0282-5449.

 

The advent of consumer digital cameras produced a new demand for HDR imaging to improve the light response of digital camera sensors, which had a much smaller dynamic range than film. Steve Mann developed and patented the global-HDR method for producing digital images having extended dynamic range at the MIT Media Laboratory. Mann's method involved a two-step procedure: (1) generate one floating-point image array by global-only image operations (operations that affect all pixels identically, without regard to their local neighborhoods); then (2) convert this image array, using local neighborhood processing (tone-remapping, etc.), into an HDR image. The image array generated by the first step of Mann's process is called a lightspace image, lightspace picture, or radiance map. Another benefit of global-HDR imaging is that it provides access to the intermediate light or radiance map, which has been used for computer vision and other image-processing operations.
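The following Python sketch illustrates the two-step structure in a simplified form: bracketed exposures are globally merged into a floating-point radiance map, which is then tone mapped for display. The linear sensor response, the hat weighting and the Reinhard-style global operator are illustrative assumptions, not Mann's exact formulation.

```python
import numpy as np

def merge_to_radiance(images, exposure_times):
    """Globally merge bracketed exposures into a floating-point radiance map.

    Assumes a linear sensor response and pixel values normalized to [0, 1].
    Each pixel's radiance estimate is a weighted average of value / exposure,
    with mid-tone pixels trusted most (a simple hat weighting).
    """
    num = np.zeros(images[0].shape, dtype=np.float64)
    den = np.zeros_like(num)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # near-clipped pixels get ~zero weight
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-6)

def tone_map(radiance):
    """Simple global Reinhard-style operator: L / (1 + L), then gamma for display."""
    l = radiance / radiance.mean()
    return (l / (1.0 + l)) ** (1.0 / 2.2)

# Tiny demo: the same scene captured at three exposure times.
scene = np.array([[0.02, 0.5], [4.0, 60.0]])
times = [1 / 15, 1 / 250, 1 / 4000]
shots = [np.clip(scene * t, 0.0, 1.0) for t in times]

radiance_map = merge_to_radiance(shots, times)   # step 1: global-only operations
display = tone_map(radiance_map)                 # step 2: remap for output
print(radiance_map)
print(display)
```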

 

21st century

In 2005, Adobe Systems introduced several new features in Photoshop CS2, including Merge to HDR, 32-bit floating-point image support, and HDR tone mapping.

 

On June 30, 2016, Microsoft added support for the digital compositing of HDR images to Windows 10 using the Universal Windows Platform.

 

HDR sensors

Modern CMOS image sensors can often capture a high dynamic range from a single exposure. The wide dynamic range of the captured image is non-linearly compressed into a smaller dynamic range electronic representation. However, with proper processing, the information from a single exposure can be used to create an HDR image.
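A minimal sketch of the idea, using a hypothetical three-segment piecewise-linear companding curve (real sensors use vendor-specific curves and bit depths): the chip compresses a wide linear range into a small code range, and later processing inverts the curve to recover linear values.

```python
import numpy as np

# Hypothetical piecewise-linear companding curve: a wide scene-linear range is
# mapped onto a small code range with progressively gentler slopes.
KNEES_LINEAR = np.array([0.0, 0.25, 4.0, 64.0])   # scene-linear knee points
KNEES_CODE   = np.array([0.0, 0.5, 0.8, 1.0])     # corresponding compressed codes

def compand(linear):
    """Compress linear sensor values into the small coded range (what the chip outputs)."""
    return np.interp(linear, KNEES_LINEAR, KNEES_CODE)

def decompand(code):
    """Invert the companding curve to recover linear HDR values for further processing."""
    return np.interp(code, KNEES_CODE, KNEES_LINEAR)

linear_scene = np.array([0.1, 1.0, 10.0, 50.0])
codes = compand(linear_scene)
print(codes)              # small, non-linearly spaced code values
print(decompand(codes))   # linear values recovered from the codes
```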

 

Such HDR imaging is used in extreme dynamic range applications like welding or automotive work. Some other cameras designed for use in security applications can automatically provide two or more images for each frame, with changing exposure. For example, a sensor intended for 30 fps video will output 60 fps, with the odd frames at a short exposure time and the even frames at a longer exposure time. Some sensors may even combine the two images on-chip so that a wider dynamic range without in-pixel compression is directly available to the user for display or processing.
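A simplified Python sketch of such odd/even exposure fusion, assuming linear, registered frames and a known exposure ratio; the clipping threshold and the fall-back rule are illustrative, not any particular vendor's on-chip method.

```python
import numpy as np

def fuse_pair(short_frame, long_frame, ratio, clip=0.95):
    """Fuse one short/long exposure pair into a single linear HDR frame.

    Both frames are assumed linear and normalized to [0, 1], with the long
    exposure `ratio` times longer than the short one. Where the long frame is
    clipped, fall back to the short frame (already in the output scale).
    """
    long_linear = long_frame / ratio              # bring both frames to a common scale
    saturated = long_frame >= clip
    return np.where(saturated, short_frame, long_linear)

def fuse_stream(frames, ratio):
    """Turn an alternating short/long 60 fps stream into 30 fps HDR frames."""
    shorts, longs = frames[0::2], frames[1::2]
    return [fuse_pair(s, l, ratio) for s, l in zip(shorts, longs)]

# Tiny demo: one short/long pair of a scene with bright highlights.
scene = np.array([[0.01, 0.05], [0.3, 0.9]])
short = np.clip(scene, 0.0, 1.0)           # short exposure: nothing clips
long_ = np.clip(scene * 8.0, 0.0, 1.0)     # 8x longer exposure: shadows lifted, highlights clip
print(fuse_stream([short, long_], ratio=8.0))
```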

 

en.wikipedia.org/wiki/High-dynamic-range_imaging

 

Infrared Photography

 

In infrared photography, the film or image sensor used is sensitive to infrared light. The part of the spectrum used is referred to as near-infrared to distinguish it from far-infrared, which is the domain of thermal imaging. Wavelengths used for photography range from about 700 nm to about 900 nm. Film is usually sensitive to visible light too, so an infrared-passing filter is used; this lets infrared (IR) light pass through to the camera, but blocks all or most of the visible light spectrum (the filter thus looks black or deep red). ("Infrared filter" may refer either to this type of filter or to one that blocks infrared but passes other wavelengths.)

 

When these filters are used together with infrared-sensitive film or sensors, "in-camera effects" can be obtained: false-color or black-and-white images with a dreamlike or sometimes lurid appearance known as the "Wood Effect", an effect mainly caused by foliage (such as tree leaves and grass) strongly reflecting infrared in the same way visible light is reflected from snow. There is a small contribution from chlorophyll fluorescence, but this is marginal and is not the real cause of the brightness seen in infrared photographs. The effect is named after the infrared photography pioneer Robert W. Wood, and not after the material wood, which does not strongly reflect infrared.

 

The other attributes of infrared photographs include very dark skies and penetration of atmospheric haze, caused by reduced Rayleigh scattering and Mie scattering, respectively, compared to visible light. The dark skies, in turn, result in less infrared light in shadows and dark reflections of those skies from water, and clouds will stand out strongly. These wavelengths also penetrate a few millimeters into skin and give a milky look to portraits, although eyes often look black.

 

Until the early 20th century, infrared photography was not possible because silver halide emulsions are not sensitive to wavelengths longer than that of blue light (and, to a lesser extent, green light) without the addition of a dye to act as a color sensitizer. The first infrared photographs (as distinct from spectrographs) to be published appeared in the February 1910 edition of The Century Magazine and in the October 1910 edition of the Royal Photographic Society Journal to illustrate papers by Robert W. Wood, who discovered the unusual effects that now bear his name. The RPS co-ordinated events to celebrate the centenary of this work in 2010. Wood's photographs were taken on experimental film that required very long exposures; thus, most of his work focused on landscapes. A further set of infrared landscapes taken by Wood in Italy in 1911 used plates provided for him by C. E. K. Mees at Wratten & Wainwright. Mees also took a few infrared photographs in Portugal in 1910, which are now in the Kodak archives.

 

Infrared-sensitive photographic plates were developed in the United States during World War I for spectroscopic analysis, and infrared sensitizing dyes were investigated for improved haze penetration in aerial photography. After 1930, new emulsions from Kodak and other manufacturers became useful to infrared astronomy.

 

Infrared photography became popular with photography enthusiasts in the 1930s when suitable film was introduced commercially. The Times regularly published landscape and aerial photographs taken by their staff photographers using Ilford infrared film. By 1937, 33 kinds of infrared film were available from five manufacturers, including Agfa, Kodak and Ilford. Infrared movie film was also available and was used to create day-for-night effects in motion pictures, a notable example being the pseudo-night aerial sequences in the James Cagney/Bette Davis movie The Bride Came C.O.D.

 

False-color infrared photography became widely practiced with the introduction of Kodak Ektachrome Infrared Aero Film and Ektachrome Infrared EIR. The first version of this, known as Kodacolor Aero-Reversal-Film, was developed by Clark and others at Kodak for camouflage detection in the 1940s. The film became more widely available in 35mm form in the 1960s, but KODAK AEROCHROME III Infrared Film 1443 has since been discontinued.

 

Infrared photography became popular with a number of 1960s recording artists because of the unusual results; Jimi Hendrix, Donovan and Frank Zappa were among those who issued albums with infrared cover photographs. With infrared film, a small aperture and a slow shutter speed can be used without focus compensation; wider apertures like f/2.0 can produce sharp photos only if the lens is meticulously refocused to the infrared index mark, and only if this index mark is the correct one for the filter and film in use. Diffraction effects inside a camera are, however, greater at infrared wavelengths, so stopping down the lens too far may actually reduce sharpness.

 

Most apochromatic ('APO') lenses do not have an infrared index mark and do not need to be refocused for the infrared spectrum because they are already optically corrected into the near-infrared spectrum. Catadioptric lenses often do not require this adjustment because their mirror elements do not suffer from chromatic aberration, so the overall aberration is comparatively small. Catadioptric lenses do, of course, still contain glass lens elements, and these do still have a dispersive property.

 

Infrared black-and-white films require special development times, but development is usually done with standard black-and-white film developers and chemicals (such as D-76). Kodak HIE film has a polyester film base that is very stable but extremely easy to scratch, so special care must be taken when handling Kodak HIE throughout the development and printing/scanning process to avoid damaging the film. Kodak HIE film was sensitive out to about 900 nm.

 

As of November 2, 2007, "KODAK is preannouncing the discontinuance" of HIE Infrared 35 mm film, stating: "Demand for these products has been declining significantly in recent years, and it is no longer practical to continue to manufacture given the low volume, the age of the product formulations and the complexity of the processes involved." At the time of this notice, HIE Infrared 135-36 was available at a street price of around $12.00 a roll at US mail-order outlets.

 

Arguably the greatest obstacle to infrared film photography has been the increasing difficulty of obtaining infrared-sensitive film. Despite the discontinuance of HIE, other infrared-sensitive emulsions from EFKE, ROLLEI and ILFORD remain available, although these formulations differ in sensitivity and specifications from the venerable KODAK HIE, which had been on the market for at least two decades. Some of these infrared films are available in 120 and larger formats as well as 35 mm, which adds flexibility to their application. With the discontinuance of Kodak HIE, Efke's IR820 became the only IR film on the market with good sensitivity beyond 750 nm; the Rollei film does extend beyond 750 nm, but its IR sensitivity falls off very rapidly.

  

Color infrared transparency films have three sensitized layers that, because of the way the dyes are coupled to these layers, reproduce infrared as red, red as green, and green as blue. All three layers are sensitive to blue so the film must be used with a yellow filter, since this will block blue light but allow the remaining colors to reach the film. The health of foliage can be determined from the relative strengths of green and infrared light reflected; this shows in color infrared as a shift from red (healthy) towards magenta (unhealthy). Early color infrared films were developed in the older E-4 process, but Kodak later manufactured a color transparency film that could be developed in standard E-6 chemistry, although more accurate results were obtained by developing using the AR-5 process. In general, color infrared does not need to be refocused to the infrared index mark on the lens.

 

In 2007, Kodak announced that production of the 35 mm version of their color infrared film (Ektachrome Professional Infrared/EIR) would cease as there was insufficient demand. Since 2011, all formats of color infrared film, specifically Aerochrome 1443 and SO-734, have been discontinued.

 

There is currently no digital camera that will produce the same results as Kodak color infrared film, although equivalent images can be produced by taking two exposures, one infrared and the other full-color, and combining them in post-production. The color images produced by digital still cameras using infrared-pass filters are not equivalent to those produced on color infrared film. The colors result from varying amounts of infrared passing through the color filters on the photo sites, further modified by the Bayer filtering. While this makes such images unsuitable for the kind of applications for which the film was used, such as remote sensing of plant health, the resulting color tonality has proved popular artistically.
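A rough Python sketch of that two-exposure approach, following the layer mapping of the film described above (infrared rendered as red, red as green, green as blue). The function name and the assumption of perfectly registered frames, one through an infrared-pass filter and one normal RGB exposure, are illustrative.

```python
import numpy as np

def false_color_eir(ir, rgb):
    """Build an EIR-style false-color image from registered IR and visible frames.

    Follows the film's layer mapping: infrared -> red, red -> green, green -> blue.
    `ir` is a single-channel H x W array, `rgb` a matching H x W x 3 array, both in [0, 1].
    """
    red_vis, green_vis = rgb[..., 0], rgb[..., 1]
    return np.dstack([ir, red_vis, green_vis])

# Tiny demo: healthy foliage reflects infrared strongly, so it renders red/magenta,
# while an asphalt-like surface stays comparatively neutral.
ir  = np.array([[0.9, 0.1]])                          # foliage vs. pavement
rgb = np.array([[[0.2, 0.4, 0.1], [0.5, 0.5, 0.5]]])
print(false_color_eir(ir, rgb))
```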

 

Color digital infrared, as part of full-spectrum photography, is gaining popularity. The ease of creating a softly colored photo with infrared characteristics has found interest among hobbyists and professionals.

 

In 2008, Los Angeles photographer Dean Bennici started cutting and hand-rolling Aerochrome color infrared film. All Aerochrome medium- and large-format film that exists today came directly from his lab. Interest in infrared photography continues to grow with the success of photographer Richard Mosse and practitioners around the world.

 

Digital camera sensors are inherently sensitive to infrared light, which would interfere with the normal photography by confusing the autofocus calculations or softening the image (because infrared light is focused differently from visible light), or oversaturating the red channel. Also, some clothing is transparent in the infrared, leading to unintended (at least to the manufacturer) uses of video cameras. Thus, to improve image quality and protect privacy, many digital cameras employ infrared blockers. Depending on the subject matter, infrared photography may not be practical with these cameras because the exposure times become overly long, often in the range of 30 seconds, creating noise and motion blur in the final image. However, for some subject matter the long exposure does not matter or the motion blur effects actually add to the image. Some lenses will also show a 'hot spot' in the centre of the image as their coatings are optimised for visible light and not for IR.

 

An alternative method of DSLR infrared photography is to remove the infrared blocker in front of the sensor and replace it with a filter that removes visible light. This filter sits behind the mirror, so the camera can be used normally: handheld shooting, normal shutter speeds, composition through the viewfinder, and focusing all work as on an unmodified camera. Metering works but is not always accurate because of the difference between visible and infrared refraction. When the IR blocker is removed, many lenses that did display a hotspot cease to do so and become perfectly usable for infrared photography. Additionally, because the red, green and blue micro-filters remain and transmit not only their respective colors but also infrared, enhanced infrared color may be recorded.

 

Since the Bayer filters in most digital cameras absorb a significant fraction of the infrared light, these cameras are sometimes not very sensitive as infrared cameras and can sometimes produce false colors in the images. An alternative approach is to use a Foveon X3 sensor, which does not have absorptive filters on it; the Sigma SD10 DSLR has a removable IR blocking filter and dust protector, which can be simply omitted or replaced by a deep red or complete visible light blocking filter. The Sigma SD14 has an IR/UV blocking filter that can be removed/installed without tools. The result is a very sensitive digital IR camera.

 

While it is common to use a filter that blocks almost all visible light, the wavelength sensitivity of a digital camera without internal infrared blocking is such that a variety of artistic results can be obtained with more conventional filtration. For example, a very dark neutral density filter can be used (such as the Hoya ND400) which passes a very small amount of visible light compared to the near-infrared it allows through. Wider filtration permits an SLR viewfinder to be used and also passes more varied color information to the sensor without necessarily reducing the Wood effect. Wider filtration is however likely to reduce other infrared artefacts such as haze penetration and darkened skies. This technique mirrors the methods used by infrared film photographers where black-and-white infrared film was often used with a deep red filter rather than a visually opaque one.

 

Another common technique with near-infrared filters is to swap the blue and red channels in software (e.g., Photoshop), which retains much of the characteristic 'white foliage' while rendering skies a glorious blue.
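A minimal Python sketch of that channel swap; the synthetic input array stands in for a white-balanced near-infrared capture that would normally be loaded from a file.

```python
import numpy as np

def swap_red_blue(img):
    """Swap the red and blue channels of an H x W x 3 array (the blue-sky IR look)."""
    return img[..., [2, 1, 0]]

# Tiny demo on a synthetic two-pixel image; in practice `img` would be a
# white-balanced near-infrared capture loaded with, e.g., Pillow or rawpy.
ir_capture = np.array([[[0.9, 0.5, 0.2], [0.3, 0.4, 0.8]]])
print(swap_red_blue(ir_capture))
```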

 

Several Sony cameras had the so-called Night Shot facility, which physically moves the blocking filter away from the light path, making the cameras very sensitive to infrared light. Soon after its introduction, this facility was 'restricted' by Sony to make it difficult for people to take photos that saw through clothing: the iris is forced fully open and exposure duration is limited to 1/30 second or longer. It is still possible to shoot infrared, but neutral density filters must be used to reduce the camera's sensitivity, and the long exposure times mean that care must be taken to avoid camera-shake artifacts.

 

Fuji have produced digital cameras for use in forensic criminology and medicine which have no infrared blocking filter. The first camera, designated the S3 PRO UVIR, also had extended ultraviolet sensitivity (digital sensors are usually less sensitive to UV than to IR). Optimum UV sensitivity requires special lenses, but ordinary lenses usually work well for IR. In 2007, FujiFilm introduced a new version of this camera, based on the Nikon D200/FujiFilm S5 and called the IS Pro, which can also take Nikon lenses. Fuji had earlier introduced a non-SLR infrared camera, the IS-1, a modified version of the FujiFilm FinePix S9100. Unlike the S3 PRO UVIR, the IS-1 does not offer UV sensitivity. FujiFilm restricts the sale of these cameras to professional users, with their EULA specifically prohibiting "unethical photographic conduct".

 

Phase One digital camera backs can be ordered in an infrared modified form.

 

Remote sensing and thermographic cameras are sensitive to longer wavelengths of infrared (see Infrared spectrum#Commonly used sub-division scheme). They may be multispectral and use a variety of technologies which may not resemble common camera or filter designs. Cameras sensitive to longer infrared wavelengths including those used in infrared astronomy often require cooling to reduce thermally induced dark currents in the sensor (see Dark current (physics)). Lower cost uncooled thermographic digital cameras operate in the Long Wave infrared band (see Thermographic camera#Uncooled infrared detectors). These cameras are generally used for building inspection or preventative maintenance but can be used for artistic pursuits as well.

 

en.wikipedia.org/wiki/Infrared_photography

65-shot bokeh panorama (Brenizer technique) taken with the Canon 50mm f/1.4 on a 40D. All 65 shots were taken at a shutter speed of 1/8000 with an aperture of f/1.6 at ISO 100.

The 2025 Cowal Gathering, Dunoon
