All Photos Tagged Device

Lynx Optare Tempo FD54JYF, snapped in Hunstanton bus station and framed appropriately by trees in the park opposite

For truing up certain orifices.

I have a couple of these Realistic Quatravox units from the 1970s. They convert two-channel stereo into four channels for a surround-sound effect. When these units came out, quadraphonic sound was the latest fad, and Radio Shack marketed them as a low-cost way to get into four-channel sound. They require no external power, just a simple circuit with switches and resistors.

The side of the barrel of this peculiar device seems to be marked "Jr.Fl.M.W.Na.22.N". I was unable to find any info on this weapon until I found something very similar in "German Assault Troops of WW1" by T. Wictor.

 

There he shows a heavy 240 mm trench mortar "Iko" (schwerer Flügelminenwerfer "Iko"), with Iko standing for Ingenieur-Komitee (Engineering Committee). The weapon is based on French mortars. The device weighed over a ton and required 42 men to reposition it. Twenty shells could be fired per hour and the range was slightly over 1 km. The device was mounted on wooden railroad ties (crossties, railway sleepers) and was positioned in a pit of approximately 2 x 1.5 m and 0.5 m deep. Fixing it in place required more railway ties and wooden wedges.

Clearly, the device seen here fits this description, although it is not identical. I presume the example I have is an improved version, designed by the engineering committee at a later stage of the war. The barrel here is longer and the mechanism to adjust the elevation is more elaborate.

To get back to the markings on the side, and to relate them to the description found in the literature, the "Fl.M.W." no doubt stands for Flügelminenwerfer. "Na" might suggest neuer Art (new pattern), but I think "nA" would make more sense.

 

Recently, I've seen pictures of French 240 mm trench mortars and the mechanism looks very similar, so it might just be a captured example after all.

'Orrible little things the 12s. Their only redeeming feature being that you could prefix the fleet number with a buffer grease 3 and giggle childishly. (3)1216 arrives at some shack I didn't write down on some train I didn't board. Portugal May 1993ish.


 

For maximum effect, click the image to go into the Lightbox and view it at the largest size; or, perhaps, click the expansion arrows at the top right of the page for a full-screen view.

Don't use or reproduce this image on websites, blogs, or any other media without my explicit permission.

© All Rights Reserved - Jim Goodyear 2017.

petitions.moveon.org/sign/change-flickr-back

 

For my coming Jabba's Palace I've built a technical device. I've made instructions to show how I used some SNOT techniques.

Spaceflight (or space flight) is ballistic flight into or through outer space. Spaceflight can occur with spacecraft with or without humans on board. Yuri Gagarin of the Soviet Union was the first human to conduct a spaceflight. Examples of human spaceflight include the U.S. Apollo Moon landing and Space Shuttle programs and the Russian Soyuz program, as well as the ongoing International Space Station. Examples of unmanned spaceflight include space probes that leave Earth orbit, as well as satellites in orbit around Earth, such as communications satellites. These operate either by telerobotic control or are fully autonomous.

 

Spaceflight is used in space exploration, and also in commercial activities like space tourism and satellite telecommunications. Additional non-commercial uses of spaceflight include space observatories, reconnaissance satellites and other Earth observation satellites.

 

A spaceflight typically begins with a rocket launch, which provides the initial thrust to overcome the force of gravity and propels the spacecraft from the surface of the Earth. Once in space, the motion of a spacecraft – both when unpropelled and when under propulsion – is covered by the area of study called astrodynamics. Some spacecraft remain in space indefinitely, some disintegrate during atmospheric reentry, and others reach a planetary or lunar surface for landing or impact.

  

History

Main articles: History of spaceflight and Timeline of spaceflight

Tsiolkovsky, early space theorist

 

The first theoretical proposal of space travel using rockets was published by Scottish astronomer and mathematician William Leitch in an 1861 essay, "A Journey Through Space".[1] Better known (though not widely outside Russia) is Konstantin Tsiolkovsky's work, "Исследование мировых пространств реактивными приборами" (The Exploration of Cosmic Space by Means of Reaction Devices), published in 1903.

 

Spaceflight became an engineering possibility with Robert H. Goddard's publication in 1919 of his paper A Method of Reaching Extreme Altitudes. His application of the de Laval nozzle to liquid-fuel rockets improved efficiency enough for interplanetary travel to become possible. He also proved in the laboratory that rockets would work in the vacuum of space; nonetheless, his work was not taken seriously by the public. His attempt to secure an Army contract for a rocket-propelled weapon in the First World War was defeated by the November 11, 1918 armistice with Germany. Working with private financial support, he was the first to launch a liquid-fueled rocket, in 1926. Goddard's paper was highly influential on Hermann Oberth, who in turn influenced Wernher von Braun. Von Braun became the first to produce modern rockets as guided weapons, employed by Adolf Hitler. Von Braun's V-2 was the first rocket to reach space, at an altitude of 189 kilometers (102 nautical miles), on a June 1944 test flight.[2]

 

Tsiolkovsky's rocketry work was not fully appreciated in his lifetime, but he influenced Sergey Korolev, who became the Soviet Union's chief rocket designer under Joseph Stalin, to develop intercontinental ballistic missiles to carry nuclear weapons as a countermeasure to United States bomber planes. Derivatives of Korolev's R-7 Semyorka missile were used to launch the world's first artificial Earth satellite, Sputnik 1, on October 4, 1957, and later the first human to orbit the Earth, Yuri Gagarin in Vostok 1, on April 12, 1961.[3]

 

At the end of World War II, von Braun and most of his rocket team surrendered to the United States, and were expatriated to work on American missiles at what became the Army Ballistic Missile Agency. This work on missiles such as Juno I and Atlas enabled launch of the first US satellite, Explorer 1, on February 1, 1958, and the first American in orbit, John Glenn in Friendship 7, on February 20, 1962. As director of the Marshall Space Flight Center, von Braun oversaw development of a larger class of rocket called Saturn, which allowed the US to send the first two humans, Neil Armstrong and Buzz Aldrin, to the Moon and back on Apollo 11 in July 1969. Over the same period, the Soviet Union secretly tried but failed to develop the N1 rocket to give them the capability to land one person on the Moon.

Phases

Launch

Main article: Rocket launch

See also: List of space launch system designs

 

Rockets are the only means currently capable of reaching orbit or beyond. Other non-rocket spacelaunch technologies have yet to be built, or remain short of orbital speeds. A rocket launch for a spaceflight usually starts from a spaceport (cosmodrome), which may be equipped with launch complexes and launch pads for vertical rocket launches, and runways for takeoff and landing of carrier airplanes and winged spacecraft. Spaceports are situated well away from human habitation for noise and safety reasons. ICBMs have various special launching facilities.

 

A launch is often restricted to certain launch windows. These windows depend upon the position of celestial bodies and orbits relative to the launch site. The biggest influence is often the rotation of the Earth itself. Once launched, an orbit normally lies within a relatively constant flat plane at a fixed angle to the axis of the Earth, and the Earth rotates within this orbital plane.

 

A launch pad is a fixed structure designed to dispatch airborne vehicles. It generally consists of a launch tower and flame trench. It is surrounded by equipment used to erect, fuel, and maintain launch vehicles. Before launch, the rocket can weigh many hundreds of tonnes. The Space Shuttle Columbia, on STS-1, weighed 2,030 tonnes (4,480,000 lb) at takeoff.

Reaching space

 

The most commonly used definition of outer space is everything beyond the Kármán line, which is 100 kilometers (62 mi) above the Earth's surface. The United States sometimes defines outer space as everything beyond 50 miles (80 km) in altitude.

 

Rockets are the only currently practical means of reaching space. Conventional airplane engines cannot reach space because they require atmospheric oxygen. Rocket engines expel propellant to provide forward thrust, generating enough delta-v (change in velocity) to reach orbit.
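As a rough sketch of how expelling propellant translates into delta-v, the ideal (Tsiolkovsky) rocket equation can be evaluated in a few lines of Python; the specific impulse and stage masses below are illustrative assumptions, not figures from any real vehicle:

    import math

    def delta_v(isp_s, m_initial_kg, m_final_kg):
        # Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0 / mf)
        g0 = 9.80665  # standard gravity, m/s^2
        return isp_s * g0 * math.log(m_initial_kg / m_final_kg)

    # Hypothetical stage: 300 s specific impulse, 90% of liftoff mass is propellant.
    print(delta_v(300, 100_000, 10_000))  # ~6,800 m/s, still short of the roughly
                                          # 9,400 m/s needed (with losses) for low Earth orbit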

 

For manned launch systems, launch escape systems are frequently fitted to allow astronauts to escape in the event of an emergency.

Alternatives

Main article: Non-rocket spacelaunch

 

Many ways to reach space other than rockets have been proposed. Ideas such as the space elevator, and momentum exchange tethers like rotovators or skyhooks require new materials much stronger than any currently known. Electromagnetic launchers such as launch loops might be feasible with current technology. Other ideas include rocket assisted aircraft/spaceplanes such as Reaction Engines Skylon (currently in early stage development), scramjet powered spaceplanes, and RBCC powered spaceplanes. Gun launch has been proposed for cargo.

Leaving orbit

 


Main articles: Escape velocity and Parking orbit

Launched in 1959, Luna 1 was the first known man-made object to achieve escape velocity from the Earth.[4] (replica pictured)

 

Achieving a closed orbit is not essential to lunar and interplanetary voyages. Early Russian space vehicles successfully achieved very high altitudes without going into orbit. NASA considered launching Apollo missions directly into lunar trajectories but adopted the strategy of first entering a temporary parking orbit and then performing a separate burn several orbits later onto a lunar trajectory. This costs additional propellant because the parking orbit perigee must be high enough to prevent reentry while direct injection can have an arbitrarily low perigee because it will never be reached.

 

However, the parking orbit approach greatly simplified Apollo mission planning in several important ways. It substantially widened the allowable launch windows, increasing the chance of a successful launch despite minor technical problems during the countdown. The parking orbit was a stable "mission plateau" that gave the crew and controllers several hours to thoroughly check out the spacecraft after the stresses of launch before committing it to a long lunar flight; the crew could quickly return to Earth, if necessary, or an alternate Earth-orbital mission could be conducted. The parking orbit also enabled translunar trajectories that avoided the densest parts of the Van Allen radiation belts.

 

Apollo missions minimized the performance penalty of the parking orbit by keeping its altitude as low as possible. For example, Apollo 15 used an unusually low parking orbit (even for Apollo) of 92.5 nmi by 91.5 nmi (171 km by 169 km), where there was significant atmospheric drag. This was partially overcome by continuous venting of hydrogen from the third stage of the Saturn V, and was in any event tolerable for the short stay.

 

Robotic missions do not require an abort capability or radiation minimization, and because modern launchers routinely meet "instantaneous" launch windows, space probes to the Moon and other planets generally use direct injection to maximize performance. Although some might coast briefly during the launch sequence, they do not complete one or more full parking orbits before the burn that injects them onto an Earth escape trajectory.

 

Note that the escape velocity from a celestial body decreases with altitude above that body. However, it is more fuel-efficient for a craft to burn its fuel as close to the ground as possible; see the Oberth effect.[5] This is another way to explain the performance penalty associated with establishing the safe perigee of a parking orbit.
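A quick way to see that escape velocity falls off with altitude is to evaluate v_esc = sqrt(2GM/r) at two different radii; this is a minimal sketch using standard values for Earth:

    import math

    GM_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
    R_EARTH = 6_371_000.0      # mean Earth radius, m

    def escape_velocity(altitude_m):
        # v_esc = sqrt(2 * GM / r), which decreases as r grows
        return math.sqrt(2 * GM_EARTH / (R_EARTH + altitude_m))

    print(escape_velocity(0))        # ~11,200 m/s at the surface
    print(escape_velocity(200_000))  # ~11,000 m/s from 200 km up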

 

Plans for future crewed interplanetary spaceflight missions often include final vehicle assembly in Earth orbit, such as NASA's Project Orion and Russia's Kliper/Parom tandem.

Astrodynamics

Main article: Orbital mechanics

 

Astrodynamics is the study of spacecraft trajectories, particularly as they relate to gravitational and propulsion effects. Astrodynamics allows for a spacecraft to arrive at its destination at the correct time without excessive propellant use. An orbital maneuvering system may be needed to maintain or change orbits.

 

Non-rocket orbital propulsion methods include solar sails, magnetic sails, plasma-bubble magnetic systems, and using gravitational slingshot effects.

Ionized gas trail from Shuttle reentry

Recovery of Discoverer 14 return capsule by a C-119 airplane

Transfer energy

 

The term "transfer energy" means the total amount of energy imparted by a rocket stage to its payload. This can be the energy imparted by a first stage of a launch vehicle to an upper stage plus payload, or by an upper stage or spacecraft kick motor to a spacecraft.[6][7]

Reentry

Main article: Atmospheric reentry

 

Vehicles in orbit have large amounts of kinetic energy. This energy must be discarded if the vehicle is to land safely without vaporizing in the atmosphere. Typically this process requires special methods to protect against aerodynamic heating. The theory behind reentry was developed by Harry Julian Allen. Based on this theory, reentry vehicles present blunt shapes to the atmosphere for reentry. Blunt shapes mean that less than 1% of the kinetic energy ends up as heat that reaches the vehicle, and the remainder heats up the atmosphere.
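The scale of the problem is easy to put a number on: the kinetic energy per kilogram at a typical low-orbit speed is enormous. A two-line check in Python, using a representative 7.8 km/s orbital velocity:

    v = 7800.0                # typical low-Earth-orbit speed, m/s
    ke_per_kg = 0.5 * v ** 2  # kinetic energy per kilogram, J/kg
    print(ke_per_kg / 1e6)    # ~30 MJ/kg, all of which must be shed during reentry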

Landing

 

The Mercury, Gemini, and Apollo capsules all splashed down in the sea. These capsules were designed to land at relatively low speeds with the help of a parachute. Russian Soyuz capsules use a large parachute and braking rockets to touch down on land. The Space Shuttle glided to a touchdown like a plane.

Recovery

 

After a successful landing the spacecraft, its occupants and cargo can be recovered. In some cases, recovery has occurred before landing: while a spacecraft is still descending on its parachute, it can be snagged by a specially designed aircraft. This mid-air retrieval technique was used to recover the film canisters from the Corona spy satellites.

Types

Uncrewed

See also: Uncrewed spacecraft and robotic spacecraft

Sojourner takes its Alpha particle X-ray spectrometer measurement of Yogi Rock on Mars

The MESSENGER spacecraft at Mercury (artist's interpretation)

 

Uncrewed spaceflight (or unmanned) is all spaceflight activity without a necessary human presence in space. This includes all space probes, satellites and robotic spacecraft and missions. Uncrewed spaceflight is the opposite of manned spaceflight, which is usually called human spaceflight. Subcategories of uncrewed spaceflight are "robotic spacecraft" (objects) and "robotic space missions" (activities). A robotic spacecraft is an uncrewed spacecraft that is usually under telerobotic control. A robotic spacecraft designed to make scientific research measurements is often called a space probe.

 

Uncrewed space missions use remote-controlled spacecraft. The first uncrewed space mission was Sputnik 1, launched October 4, 1957 to orbit the Earth. Space missions carrying animals but no humans on board are considered uncrewed missions.

Benefits

 

Many space missions are more suited to telerobotic rather than crewed operation, due to lower cost and lower risk factors. In addition, some planetary destinations such as Venus or the vicinity of Jupiter are too hostile for human survival, given current technology. Outer planets such as Saturn, Uranus, and Neptune are too distant to reach with current crewed spaceflight technology, so telerobotic probes are the only way to explore them. Telerobotics also allows exploration of regions that are vulnerable to contamination by Earth micro-organisms, since spacecraft can be sterilized. Humans cannot be sterilized in the same way as a spaceship, as they coexist with numerous micro-organisms, and these micro-organisms are also hard to contain within a spaceship or spacesuit.

Telepresence

 

Telerobotics becomes telepresence when the time delay is short enough to permit control of the spacecraft in close to real time by humans. Even the two-second light-speed delay to the Moon means it is too far away for telepresence exploration from Earth. The L1 and L2 positions permit 400-millisecond round-trip delays, which is just close enough for telepresence operation. Telepresence has also been suggested as a way to repair satellites in Earth orbit from Earth. The Exploration Telerobotics Symposium in 2012 explored this and other topics.[8]

Human

Main article: Human spaceflight

ISS crew member stores samples

 

The first human spaceflight was Vostok 1 on April 12, 1961, on which cosmonaut Yuri Gagarin of the USSR made one orbit around the Earth. In official Soviet documents, there is no mention of the fact that Gagarin parachuted the final seven miles.[9] Currently, the only spacecraft regularly used for human spaceflight are the Russian Soyuz spacecraft and the Chinese Shenzhou spacecraft. The U.S. Space Shuttle fleet operated from April 1981 until July 2011. SpaceShipOne has conducted two human suborbital spaceflights.

Sub-orbital

Main article: Sub-orbital spaceflight

The International Space Station in Earth orbit after a visit from the crew of STS-119

 

On a sub-orbital spaceflight the spacecraft reaches space and then returns to the atmosphere after following a (primarily) ballistic trajectory. This is usually because of insufficient specific orbital energy, in which case a suborbital flight will last only a few minutes, but it is also possible for an object with enough energy for an orbit to have a trajectory that intersects the Earth's atmosphere, sometimes after many hours. Pioneer 1 was NASA's first space probe intended to reach the Moon. A partial failure caused it to instead follow a suborbital trajectory to an altitude of 113,854 kilometers (70,746 mi) before reentering the Earth's atmosphere 43 hours after launch.

 

The most generally recognized boundary of space is the Kármán line, 100 km above sea level. (NASA alternatively defines an astronaut as someone who has flown more than 50 miles (80 km) above sea level.) It is not generally recognized by the public that the increase in potential energy required to pass the Kármán line is only about 3% of the orbital energy (potential plus kinetic energy) required by the lowest possible Earth orbit (a circular orbit just above the Kármán line). In other words, it is far easier to reach space than to stay there. On May 17, 2004, the Civilian Space eXploration Team launched the GoFast Rocket on a suborbital flight, the first amateur spaceflight. On June 21, 2004, SpaceShipOne was used for the first privately funded human spaceflight.
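The roughly 3% figure quoted above can be checked with a short calculation: compare the potential-energy gain per kilogram of climbing to the Kármán line with the total (potential plus kinetic) energy of a circular orbit just above it. A sketch using standard Earth constants:

    GM = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
    R = 6_371_000.0      # mean Earth radius, m
    h = 100_000.0        # Kármán line altitude, m

    pe = GM * (1 / R - 1 / (R + h))  # potential energy gained per kg, ~0.97 MJ
    ke = 0.5 * GM / (R + h)          # kinetic energy per kg of a circular orbit there, ~31 MJ
    print(pe / (pe + ke))            # ~0.03, i.e. about 3% of the orbital energy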

Point-to-point

 

Point-to-point is a category of sub-orbital spaceflight in which a spacecraft provides rapid transport between two terrestrial locations. Consider a conventional airline route between London and Sydney, a flight that normally lasts over twenty hours. With point-to-point suborbital travel the same route could be traversed in less than one hour.[10] While no company offers this type of transportation today, SpaceX has revealed plans to do so as early as the 2020s using its BFR vehicle.[11] Suborbital spaceflight over an intercontinental distance requires a vehicle velocity that is only a little lower than the velocity required to reach low Earth orbit.[12] If rockets are used, the size of the rocket relative to the payload is similar to that of an intercontinental ballistic missile (ICBM). Any intercontinental spaceflight has to surmount problems of heating during atmospheric re-entry that are nearly as large as those faced by orbital spaceflight.

Orbital

Main article: Orbital spaceflight

Apollo 6 heads into orbit

 

A minimal orbital spaceflight requires much higher velocities than a minimal sub-orbital flight, and so it is technologically much more challenging to achieve. To achieve orbital spaceflight, the tangential velocity around the Earth is as important as altitude. In order to perform a stable and lasting flight in space, the spacecraft must reach the minimal orbital speed required for a closed orbit.

Interplanetary

Main article: Interplanetary spaceflight

 

Interplanetary travel is travel between planets within a single planetary system. In practice, the use of the term is confined to travel between the planets of our Solar System.

Interstellar

Main article: Interstellar travel

 

Five spacecraft are currently leaving the Solar System on escape trajectories: Voyager 1, Voyager 2, Pioneer 10, Pioneer 11, and New Horizons. The one farthest from the Sun is Voyager 1, which is more than 100 AU distant and is moving at 3.6 AU per year.[13] In comparison, Proxima Centauri, the closest star other than the Sun, is 267,000 AU distant. It will take Voyager 1 over 74,000 years to reach this distance. Vehicle designs using other techniques, such as nuclear pulse propulsion, are likely to be able to reach the nearest star significantly faster. Another possibility that could allow for human interstellar spaceflight is to make use of time dilation, as this would make it possible for passengers in a fast-moving vehicle to travel further into the future while aging very little, because their great speed slows the rate of passage of on-board time. However, attaining such high speeds would still require the use of some new, advanced method of propulsion.
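The 74,000-year figure above follows from simple arithmetic using the distances and speed given in the paragraph:

    distance_au = 267_000 - 100             # from Voyager 1's ~100 AU out to Proxima's distance
    speed_au_per_year = 3.6
    print(distance_au / speed_au_per_year)  # ~74,000 years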

Intergalactic

Main article: Intergalactic travel

 

Intergalactic travel involves spaceflight between galaxies, and is considered much more technologically demanding than even interstellar travel and, by current engineering terms, is considered science fiction.

Spacecraft

Main article: Spacecraft

An Apollo Lunar Module on the lunar surface

 

Spacecraft are vehicles capable of controlling their trajectory through space.

 

The first 'true spacecraft' is sometimes said to be the Apollo Lunar Module,[14] since this was the only manned vehicle to have been designed for, and operated only in, space; it is notable for its non-aerodynamic shape.

Propulsion

Main article: Spacecraft propulsion

 

Spacecraft today predominantly use rockets for propulsion, but other propulsion techniques such as ion drives are becoming more common, particularly for unmanned vehicles; these can significantly reduce the vehicle's mass and increase its delta-v.

Launch systems

Main article: Launch vehicle

 

Launch systems are used to carry a payload from Earth's surface into outer space.

Expendable

Main article: Expendable launch system

 

Most current spaceflight uses multi-stage expendable launch systems to reach space.

 

Reusable

Main article: Reusable launch system


 


 

The first reusable spacecraft, the X-15, was air-launched on a suborbital trajectory on July 19, 1963. The first partially reusable orbital spacecraft, the Space Shuttle, was launched by the USA on the 20th anniversary of Yuri Gagarin's flight, on April 12, 1981. During the Shuttle era, six orbiters were built, all of which have flown in the atmosphere and five of which have flown in space. The Enterprise was used only for approach and landing tests, launching from the back of a Boeing 747 and gliding to deadstick landings at Edwards AFB, California. The first Space Shuttle to fly into space was the Columbia, followed by the Challenger, Discovery, Atlantis, and Endeavour. The Endeavour was built to replace the Challenger, which was lost in January 1986. The Columbia broke up during reentry in February 2003.

 

The Space Shuttle Columbia seconds after engine ignition on mission STS-1

 

Columbia landing, concluding the STS-1 mission

 

Columbia launches again on STS-2

 

The first automatic partially reusable spacecraft was the Buran (Snowstorm), launched by the USSR on November 15, 1988, although it made only one flight. This spaceplane was designed for a crew and strongly resembled the US Space Shuttle, although its drop-off boosters used liquid propellants and its main engines were located at the base of what would be the external tank in the American Shuttle. Lack of funding, complicated by the dissolution of the USSR, prevented any further flights of Buran.

 

Per the Vision for Space Exploration, the Space Shuttle was retired in 2011, due mainly to its old age and the high cost of the program, which reached over a billion dollars per flight. The Shuttle's human transport role is to be replaced by the partially reusable Crew Exploration Vehicle (CEV) no later than 2021. The Shuttle's heavy cargo transport role is to be replaced by expendable rockets such as the Evolved Expendable Launch Vehicle (EELV) or a Shuttle Derived Launch Vehicle.

 

Scaled Composites SpaceShipOne was a reusable suborbital spaceplane that carried pilots Mike Melvill and Brian Binnie on consecutive flights in 2004 to win the Ansari X Prize. The Spaceship Company has built its successor SpaceShipTwo. A fleet of SpaceShipTwos operated by Virgin Galactic planned to begin reusable private spaceflight carrying paying passengers (space tourists) in 2008, but this was delayed due to an accident in the propulsion development.[15]

 

Challenges

Main article: Effect of spaceflight on the human body

Space disasters

Main article: Space accidents and incidents

 

All launch vehicles contain a huge amount of energy that is needed for some part of them to reach orbit. There is therefore some risk that this energy can be released prematurely and suddenly, with significant effects. When a Delta II rocket exploded 13 seconds after launch on January 17, 1997, there were reports of store windows 10 miles (16 km) away being broken by the blast.[16]

 

Space is a fairly predictable environment, but there are still risks of accidental depressurization and the potential failure of equipment, some of which may be very newly developed.

 

In 2004 the International Association for the Advancement of Space Safety was established in the Netherlands to further international cooperation and scientific advancement in space systems safety.[17]

Weightlessness

Main article: Weightlessness

Astronauts on the ISS in weightless conditions. Michael Foale can be seen exercising in the foreground.

 

In a microgravity environment such as that provided by a spacecraft in orbit around the Earth, humans experience a sense of "weightlessness." Short-term exposure to microgravity causes space adaptation syndrome, a self-limiting nausea caused by derangement of the vestibular system. Long-term exposure causes multiple health issues. The most significant is bone loss, some of which is permanent, but microgravity also leads to significant deconditioning of muscular and cardiovascular tissues.

Radiation

 

Once above the atmosphere, radiation from the Van Allen belts, solar radiation and cosmic radiation becomes an issue and increases in intensity. Further away from the Earth, solar flares can give a fatal radiation dose in minutes, and the health threat from cosmic radiation significantly increases the chances of cancer over a decade of exposure or more.[18]

Life support

Main article: Life support system

 

In human spaceflight, the life support system is a group of devices that allow a human being to survive in outer space. NASA often uses the phrase Environmental Control and Life Support System, or the acronym ECLSS, when describing these systems for its human spaceflight missions.[19] The life support system may supply air, water and food. It must also maintain the correct body temperature and an acceptable pressure on the body, and deal with the body's waste products. Shielding against harmful external influences such as radiation and micro-meteorites may also be necessary. Components of the life support system are life-critical, and are designed and constructed using safety engineering techniques.

Space weather

Main article: Space weather

Aurora australis and Discovery, May 1991.

 

Space weather is the concept of changing environmental conditions in outer space. It is distinct from the concept of weather within a planetary atmosphere, and deals with phenomena involving ambient plasma, magnetic fields, radiation and other matter in space (generally close to Earth but also in interplanetary, and occasionally interstellar medium). "Space weather describes the conditions in space that affect Earth and its technological systems. Our space weather is a consequence of the behavior of the Sun, the nature of Earth's magnetic field, and our location in the Solar System."[20]

 

Space weather exerts a profound influence in several areas related to space exploration and development. Changing geomagnetic conditions can induce changes in atmospheric density causing the rapid degradation of spacecraft altitude in Low Earth orbit. Geomagnetic storms due to increased solar activity can potentially blind sensors aboard spacecraft, or interfere with on-board electronics. An understanding of space environmental conditions is also important in designing shielding and life support systems for manned spacecraft.

Environmental considerations

 

Rockets as a class are not inherently grossly polluting. However, some rockets use toxic propellants, and most vehicles use propellants that are not carbon neutral. Many solid rockets have chlorine in the form of perchlorate or other chemicals, and this can cause temporary local holes in the ozone layer. Re-entering spacecraft generate nitrates which also can temporarily impact the ozone layer. Most rockets are made of metals that can have an environmental impact during their construction.

 

In addition to the atmospheric effects there are effects on the near-Earth space environment. There is the possibility that orbit could become inaccessible for generations due to exponentially increasing space debris caused by spalling of satellites and vehicles (Kessler syndrome). Many launched vehicles today are therefore designed to be re-entered after use.

Two-car Class 156 'Super Sprinter' DMU 156 480 rumbles through Kirkby-in-Furness station, a deserted request stop, whilst forming Northern Rail's 07.41 (SaO) Barrow-Sellafield service. @07.52

chambeili® Magazine is the world’s first and only digital-edition magazine on Google Play Newsstand that offers the latest news (trends, catwalk events, designer focus) on the Pakistan fashion industry from Pakistan’s top fashion bloggers, as well as health & beauty features and all chambeili® news (new arrivals, special offers, bridal and formal wear made-to-measure catalog) in one place on any Android or iOS device, and best of all it’s completely free! We have already reached 11,900 registered subscribers globally and growing.

 

Google Play Newsstand is pre-installed on many Android devices; with over 1 billion devices and 1.5 million daily activations, Android is a growing ecosystem and a powerful content platform.

 

To subscribe visit magazine.chambeili.com or search chambeili on Google Play Newsstand.

A rotating lazy susan device that turns the cannabis plants to optimize exposure to lights.

 

I had to wear sunglasses in this room.

 

photo by Rusty Blazenhoff

 

laughingsquid.com/a-rare-look-inside-a-commercial-medical...

Sent from my T-Mobile 4G LTE Device

Some of these serious gentlemen hold metal tools, and on the desk I can see metal spikes of different sizes and shapes. In the open boxes I can make out various accessories and electrical switches.

Judging by the clamp the man on the left is holding, I assume that the devices are probably lightning arresters. But any other idea is welcome!

[Germany, unidentified photographer, 1910-1920?]

DSC07480 - Let's just say I am happy with my current camera and lens. Had a split second to make this image, in between people passing by. Really enjoy the bright, soft light on the left and the dark, hard light on the right.

 

Done from the hip, looking down at the tilt screen. No time to bring it up to eye level. I also think the slightly lower point of view works well.

For a wireless device, much of the volume is dedicated to wire jacks. And about half the space is taken by the AC/DC power supply. The three circuit boards stack tightly together.

 

Richard takes a photograph and uploads it to Flickr with his SPV

So I got my hands on the prototype Portal gun at the New York Toy Fair this morning. I was told it would be closer to $150, not $120, and it’ll be ready mid to late this year. They’re not sure yet where it will be sold, but Toys R Us was mentioned as a possibility. It’s satisfyingly large, the handle feels good and the thumb toggle to change from orange to blue and back is intuitive, the trigger is a trigger… nothing to get excited or upset over. It feels cheap. It looks cheap. But it also looks easy to dismantle for bulking up and repainting to improve. Honestly, I sort of feel that for the price that shouldn’t be needed, but I can deal with it. This one is not the finished product though, they will at least paint that inner core bit that should be black. Also the orange will be as bright as the blue and the sound effects which were weak should be louder and clearer. I wouldn’t say I’m disappointed, as I expected such flaws, so I’ll still be picking one up. But I can see a lot of unhappy fans that want perfection. Then again, considering the other game replica weapons I’ve seen, and for something this size with lights and sounds, the price is not bad and the cheapness is tolerable to make the price slightly more reasonable.

algerian polymath ∋vitruc, though born into poverty, rose in influence and power through his revisioning of janissarian tactics, demonstrating uncommon military brilliance and innovation. initially forced because of his age (estimated to be 13 at his first foray) to filter his instructions through a "ghost", an older, mildly disabled war veteran, his identity was discovered upon investigation of the death of said ghost. still a very young man (14 or 15; sources differ), he was challenged by pasha to prove his competence in developing strategy for a pending battle. ∋vitruc agreed, making a request that no opposing soldiers be killed unnecessarily, and that all armaments and gear obtained from the defeated be given him to study. upon the rout of the enemy battalion (again, sources differ alarmingly here, showing much personal bias among historians), ∋vitruc was awarded his prize, along with two captured soldiers, now his servants.

 

these servants, whom he'd personally selected, were reputed to become his advisors and reporters of the mysterious scientific innovations of foreign lands. though the empire was powerful, suspicion of great weaponry possessed by the enemy haunted the upper classes, and ∋vitruc's youthful intelligence was given unprecedented freedom to spend and explore. retiring with his servants to a remote valley some distance from oran, he spent some time refining (and re-refining) his own astonishingly accurate (and lethal) modifications to the arquebus, eventually earning the undying gratitude of the pasha for more than tripling the range of the firearm. as a result, the empire went unchallenged with any seriousness for many years.

 

his true reasons for retiring to the privacy of the countryside, however, were only revealed upon the eve of what has been recorded as his death (in or around 1623), but was really what more modern biographers now call his grand escape. algeria began to suffer from the effects of plague in 1620, and though he felt safe in his sheltered valley, ∋vitruc realised that terrible disease could strike at any time (his theories on epidemiology, though noted here, will have to wait to be discussed). none of his drawings survive (disputed; no paper record exists), but he was rumoured to have been fascinated by the constellations and from a very early age built odd devices (described as witches' clouds) that clearly must have been balloon prototypes. cave drawings estimated to be from his era (and in a valley not far from oran) show odd craft in the sky, in both day and night - historians again squabble here, as the drawings are crude and ∋vitruc was widely known to be a meticulous and exemplary artist. some agreement can be established that it was his servants who did the scribbling while he worked, and he possibly took his paperwork with him.

 

unsatisfied with paper aircraft, ∋vitruc began working with metal constructs he believed would fly through the air and carry weaponry, people and all manner of goods. documents survive in algiers, written by his detractors (and those who politically opposed his funders) that mock his impossible dream of levitating rocks, metals and minerals. ∋vitruc's legend and value as a military innovator protected him, though, and only the most polite needling of his dreams seems to have been allowed. some more serious criticism came in the form of questioning his use of valuable materials (notably silver and gold), which he was reputed to be experimenting with and destroying in vast amounts. there is evidence that at least two attempts were made by brigands to steal from him, but his weaponry was very greatly feared and respected (and his location secret and remote), so it's doubtful any dent in his resources was made.

 

the golden orb, shown above, is one of the few remaining devices he developed. with plague threatening his land (one of the servants is said to have become quite ill or died in 1622), ∋vitruc boarded his experimental metal craft and is said to have floated or flown away over the mediterranean sea. his surviving servant, when questioned, was barely believed, and he indicated that ∋vitruc had packed all of his remaining machines, along with some food, before departing. envoys of the pasha delivered the news, and in a fury, believing ∋vitruc had simply stolen all the wealth allowed him (not more than a few ounces of gold and silver remained), the story of his death was summarily spread.

 

the orb, once in the possession of the musée des arts et métiers (museum of arts and crafts) in paris, france, was lost and presumed stolen in 1804. a daguerreotype (dated 1850) of an unnamed man standing beside it surfaced in 1948, but no location could be determined. its existence on the grey market is, however, an open secret, and though algerian nationalists have made strong claims that the orb be repatriated, other pressing matters have consistently stifled the issue.

 

shown here is the orb attached sideways to a support structure, for no reason other than the whim of the current owner (and perhaps a slight attempt to disguise it, as it is on somewhat open display). the mechanical works are unfortunately not shown and may be missing entirely, though i was not allowed to touch, approach or examine the orb. photographing it was forbidden for the few years i knew of its location, until just recently, and i was required to both obscure all background details and surrender the memory card of my camera after downloading and editing this one shot. for obvious reasons, i cannot geolocate the orb on any map.

  

Laser Devices DBAL-A2 on a Samson Evolution with Troy Folding Sight, Surefire MB556K, Sabre 14.5'' 5.56mm Barrel and Surefire Scoutlight. LaRue QD Mount.

The Naked 3D Fitness Tracker has gone up for pre-order and appears ready to change your entire body-measurement game. It is a beautiful, cutting-edge, flawlessly designed system saddled with a sketchy name. The apparatus is largely made of a special mirror glass equipped with depth...

 

wow-gift.com/3d-fitness-tracker/

Not recommended as a health device.

For my coming Jabba's Palace I've built a technical device. I've made instructions to show how I used some SNOT techniques.

A number of years ago, I was playing tennis with my mate and we got talking about how my wife was complaining about my snoring. For more information please visit: www.resmed.com/in/en/consumer/snoring/how-to-stop-snoring...

www.youtube.com/watch?v=knS9Cc81-mI

 

Study simulates how COVID can be transmitted among airplane passengers

 

nypost.com/2022/01/21/yale-research-team-develops-wearabl...

 

Yale research team develops wearable clip to detect COVID

 

Researchers at the Yale School of Public Health have developed a wearable clip that can detect if a person may have been exposed to COVID-19.

 

The device captures virus-laden aerosols that deposit on a polydimethylsiloxane (PDMS) surface, according to a study published earlier this month in the peer-reviewed journal Environmental Science & Technology Letters.

 

Krystal Godri Pollitt, who led the team of researchers who developed the clip, told Fox News it came about through her research measuring a person’s exposure to environmental factors.

 

“Through that work, I developed wearable tools that we can measure our exposure to lots of different chemicals within the air and other airborne factors,” Godri Pollitt said.

 

Her team pivoted to respiratory viruses in March 2020 when the COVID-19 pandemic hit.

 

The wearable clip is designed to be reusable with the polymer films being changed. It is intended as a complementary device to at-home testing kits.

 

“We want to go a step before that and be able to start thinking about, do we need more infectious control measures in place, do we need less people in this space? Do we need more ventilation?” Godri Pollitt said. “And also thinking about if people are at a potential risk for being infected? If we detect it within the air, there’s a good chance that maybe those people are at risk and should be quarantining.”

 

Godri Pollitt told Fox News there is a lot of potential in expanding the clip to other respiratory viruses. The clip is not yet publicly available, but Godri Pollitt hopes it will be in the near future.

 

www.cnbc.com/2022/01/21/omicron-two-years-since-covid-was...

 

Two years since Covid was first confirmed in U.S., the pandemic is worse than anyone imagined

 

Key Points

 

■ Two years ago, the CDC confirmed the first known case of coronavirus in the U.S.

■ A 35-year-old traveler had returned from Wuhan, China to Washington state and tested positive.

■ The virus has killed more than 860,000 people in the U.S. and infected more than 69 million.

■ With the emergence of omicron, the future course of the pandemic is unclear as experts struggle to understand how new variants emerge.

 

A 35-year-old man returned to the U.S. from Wuhan, China on Jan. 15, 2020 and fell ill with a cough and fever.

 

He had read an alert from the Centers for Disease Control and Prevention about an outbreak of a novel coronavirus in Wuhan and sought treatment at an urgent care clinic in Snohomish County, Washington four days later.

 

On Jan. 21, the CDC publicly confirmed he had the first known case of coronavirus in the U.S., although the agency would later find the virus had arrived on the West Coast as early as December after testing blood samples for antibodies.

 

The man said he had not spent time at the Huanan seafood market in Wuhan, where a cluster of early cases was identified in December. He was admitted to an isolation unit at Providence Regional Medical Center in Everett, Wash., for observation.

 

After confirming the Washington state case, the CDC told the public it believed the risk “remains low at this time.” There was growing evidence of person-to-person transmission of the virus, the CDC said, but “it’s unclear how easily this virus is spreading between people.”

 

Then President Donald Trump told CNBC the U.S. had it “totally under control.”

 

“It’s one person coming in from China. We have it under control. It’s going to be just fine,” Trump told “Squawk Box” co-host Joe Kernen in an interview from the World Economic Forum in Davos, Switzerland.

 

However, Dr. Anthony Fauci would confirm the public’s worst fears on Jan. 31: People could carry and spread the virus without showing any symptoms. Dr. Helen Chu’s research team at the Seattle Flu Study started examining genomic data from Wuhan. It became clear early on that person-to-person transmission was happening, Chu said. By using the flu study’s databank of nasal swab samples, the team was able to identify another Covid case in a 15-year-old who hadn’t recently traveled, indicating it was spreading throughout the community.

 

In late February, a senior CDC official, Dr. Nancy Messonnier, warned that containing the virus at the nation’s borders was no longer feasible. Community spread would happen in the U.S., she said, and the central question was “how many people in this country will have severe illness.”

 

In the two years since that first confirmed case, the virus has torn through the U.S. with a ferocity and duration few anticipated. The human toll is staggering, with more than 860,000 people dead and more than 69 million total infections. Hospitals around the nation have been pushed to the breaking point with more than 4 million admissions of confirmed Covid patients since August 2020, when the CDC started tracking hospitalizations. The hospital admissions are an undercount because they do not include the wave of cases that first hit the U.S. in the spring of 2020, when hospitals were caught flat-footed and testing was inadequate.

 

Though the U.S. now has effective vaccines and therapeutics to fight Covid, the future course of the pandemic remains uncertain as the virus mutates into new variants that are more transmissible and can evade vaccine protection. The highly contagious omicron variant has pushed infections and hospitalizations to record highs across the globe this month, a shock to a weary public that wants a return to normal life after two years of lockdowns, event cancellations, working from home and mask and vaccine mandates.

 

The rapid evolution of the virus and the dramatic waves of infection that would follow, from alpha to delta and omicron, came as a surprise to many elected leaders, public health officials and scientists. Dr. Michael Osterholm, a top epidemiologist, said the Covid mutations are the big unknown that will determine the future course of the pandemic.

 

“We don’t yet understand how these variants emerge and what they are capable of doing,” Osterholm, director of the Center for Infectious Disease Research and Policy in Minnesota, told CNBC. “Look at how omicron caught us as a global community surprised by the rapid transmission, the immune evasion. Look at delta and all the impact it had on disease severity,” he said.

 

As new infections started to decline in the spring of 2021 and the vaccines became widely available, the U.S. began to let its guard down. The CDC said the fully vaccinated no longer need to wear masks indoors. President Joe Biden proclaimed on July 4th the U.S. was closer than ever to declaring independence from the virus.

 

However, the delta variant was taking hold in the U.S. at the time and would soon cause a new wave of infection, hospitalization and death as vaccination rates slowed. Public health leaders have struggled for months to convince skeptics to get the shots.

 

More than a year after the first vaccine was administered in the U.S., about 67% of Americans older than 5 are fully vaccinated, according to CDC data. Tens of millions of Americans still have not gotten their shots, despite the fact that data has proven them to be safe and effective at preventing severe illness and death.

 

“We had no sense in January of 2020, the divisive politics and community reaction to this that were going to occur,” Osterholm said. “Who would have imagined the kind of vaccine hesitancy and hostility that’s occurred.”

 

Delta was more than twice as transmissible as previous variants and research indicated it caused more severe disease in unvaccinated people. The CDC would reverse its loosened mask guidance and encourage everyone, regardless of vaccination status, to wear masks indoors in public in areas of substantial transmission as delta spread.

 

The vaccines took a hit when omicron emerged in November. Though they still protect against severe illness and death, they are less effective at preventing infection from omicron. Chu said the U.S. relied primarily on vaccines to prevent transmission of the virus without equally emphasizing widespread masking and testing, which are crucial to controlling a variant like omicron that can evade immunity.

 

“We now know that, proportionately, you can be repeatedly infected, you can have vaccine breakthroughs, and that this virus will just continue to mutate and continue to evade us for a long time,” Chu said.

 

Katriona Shea co-leads a team of researchers who bring together models to forecast the trajectory of the pandemic. According to their latest update, the omicron wave of cases and hospitalizations will likely peak before the end of the month. However, even their most optimistic projection shows anywhere from 16,000 to 98,000 additional deaths from the omicron wave by April 2.

 

Currently, the U.S. is reporting an average of more than 736,000 new infections per day, according to a seven-day average of Johns Hopkins data analyzed by CNBC. While that is still far higher than previous waves, average daily infections are down 8% from the previous week. The U.S. is reporting more than 1,800 deaths per day as a seven-day average.

 

“It’s really, really frustrating and tragic to see people dying from a vaccine preventable disease,” Chu said.

 

The implications of omicron for the future course of the pandemic are unclear. In the classic view, viruses evolve to become more transmissible and less severe, making it easier to find new hosts.

 

“There are lots of reasons to believe that might not be true because the jump to omicron was so massive, it suggests that there’s lots of space for it to change quite dramatically,” said Shea, a professor of biology at Pennsylvania State University. Omicron has more than 30 mutations on the spike protein that binds to human cells. The shots target the spike protein, and the mutations make it more difficult for vaccine-induced antibodies to block infection.

 

Doctors and infectious disease experts in South Africa, where omicron was first identified, said the variant peaked and started to decline rapidly, demonstrating a significantly different trajectory than past strains. The researchers also said ICU admissions and deaths were lower at Steve Biko Academic Hospital, indicating decreased severity.

 

“If this pattern continues and is repeated globally, we are likely to see a complete decoupling of case and death rates, suggesting that Omicron may be a harbinger of the end of the epidemic phase of the Covid pandemic, ushering in its endemic phase,” the researchers wrote.

 

Over time, the virus could become less disruptive to society as mutations slow and it becomes mild as greater immunity in the population limits severe disease, according to Jennie Lavine, a computational investigational biologist at the biotech company Karius.

 

However, the head of the World Health Organization, Dr. Tedros Adhanom Ghebreyesus, cautioned earlier this week that the pandemic is “nowhere near over,” warning that new variants are likely to emerge as omicron rapidly spread across the world.

 

“Everybody wants to get to this thing called endemic. I still don’t know what the hell that means,” Osterholm said, noting that he has 46 years of experience as an epidemiologist. “With variants, we can go for a period of time with relatively low activity, like we’ve seen in many places in the world, and then a new variant could change all that overnight. We don’t really understand our future yet.”

This is the Delkin Devices DC550D-P on my Canon T2i. Works great and really helps to make the LCD more visible in bright ambient light.

Mexico has emerged as one of the most important medical equipment and device markets in the Americas.

 

www.americanindustriesgroup.com/medical/

For my coming Jabba's Palace I've built a technical device. I've made instructions to show how I used some SNOT techniques.

IR HDR. IR converted Canon Rebel XTi. AEB +/-2 total of 3 exposures processed with Photomatix. Levels adjusted in PSE.

 

High Dynamic Range (HDR)

 

High-dynamic-range imaging (HDRI) is a high dynamic range (HDR) technique used in imaging and photography to reproduce a greater dynamic range of luminosity than is possible with standard digital imaging or photographic techniques. The aim is to present a similar range of luminance to that experienced through the human visual system. The human eye, through adaptation of the iris and other methods, adjusts constantly to the broad range of luminance present in the environment. The brain continuously interprets this information so that a viewer can see in a wide range of light conditions.

 

HDR images can represent a greater range of luminance levels than can be achieved using more 'traditional' methods, capturing real-world scenes that range from very bright, direct sunlight to extreme shade, or very faint nebulae. This is often achieved by capturing and then combining several different, narrower-range exposures of the same subject matter. Non-HDR cameras take photographs with a limited exposure range, referred to as LDR, resulting in the loss of detail in highlights or shadows.

 

The two primary types of HDR images are computer renderings and images resulting from merging multiple low-dynamic-range (LDR) or standard-dynamic-range (SDR) photographs. HDR images can also be acquired using special image sensors, such as an oversampled binary image sensor.

 

Due to the limitations of printing and display contrast, the extended luminosity range of an HDR image has to be compressed to be made visible. The method of rendering an HDR image to a standard monitor or printing device is called tone mapping. This method reduces the overall contrast of an HDR image to facilitate display on devices or printouts with lower dynamic range, and can be applied to produce images with preserved local contrast (or exaggerated for artistic effect).

 

In photography, dynamic range is measured in exposure value (EV) differences (known as stops). An increase of one EV, or 'one stop', represents a doubling of the amount of light. Conversely, a decrease of one EV represents a halving of the amount of light. Therefore, revealing detail in the darkest of shadows requires high exposures, while preserving detail in very bright situations requires very low exposures. Most cameras cannot provide this range of exposure values within a single exposure, due to their low dynamic range. High-dynamic-range photographs are generally achieved by capturing multiple standard-exposure images, often using exposure bracketing, and then later merging them into a single HDR image, usually within a photo manipulation program. Digital images are often encoded in a camera's raw image format, because 8-bit JPEG encoding does not offer a wide enough range of values to allow fine transitions (and, regarding HDR, later introduces undesirable effects due to lossy compression).
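For readers who want to try this, bracketed exposures can be merged and tone mapped with OpenCV's HDR module. This is a minimal sketch; the filenames and exposure times are placeholder assumptions:

    import cv2
    import numpy as np

    # Bracketed exposures of the same scene (placeholder filenames).
    files = ["under.jpg", "normal.jpg", "over.jpg"]
    times = np.array([1 / 500, 1 / 60, 1 / 8], dtype=np.float32)  # shutter times, s
    images = [cv2.imread(f) for f in files]

    # Merge the LDR frames into a single floating-point HDR radiance map.
    hdr = cv2.createMergeDebevec().process(images, times)

    # Tone map the result so it can be viewed on a standard display.
    ldr = cv2.createTonemapReinhard(gamma=2.2).process(hdr)
    cv2.imwrite("result.jpg", np.clip(ldr * 255, 0, 255).astype(np.uint8))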

 

Any camera that allows manual exposure control can make images for HDR work, although one equipped with auto exposure bracketing (AEB) is far better suited. Images from film cameras are less suitable as they often must first be digitized, so that they can later be processed using software HDR methods.

 

In most imaging devices, the degree of exposure to light applied to the active element (be it film or CCD) can be altered in one of two ways: by either increasing/decreasing the size of the aperture or by increasing/decreasing the time of each exposure. Exposure variation in an HDR set is only done by altering the exposure time and not the aperture size; this is because altering the aperture size also affects the depth of field and so the resultant multiple images would be quite different, preventing their final combination into a single HDR image.

 

An important limitation for HDR photography is that any movement between successive images will impede or prevent success in combining them afterwards. Also, as one must create several images (often three or five and sometimes more) to obtain the desired luminance range, such a full 'set' of images takes extra time. HDR photographers have developed calculation methods and techniques to partially overcome these problems, but the use of a sturdy tripod is, at least, advised.
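
One of the calculation methods used to cope with slight camera movement between frames is median-threshold-bitmap alignment, which OpenCV exposes directly; a short sketch, with the same hypothetical file names as before:

# Align bracketed frames before merging, to compensate for small camera
# movement (median threshold bitmap method); aligns the list in place.
import cv2

paths = ["dark.jpg", "mid.jpg", "bright.jpg"]   # assumed bracketed set
images = [cv2.imread(p) for p in paths]

align = cv2.createAlignMTB()
align.process(images, images)                   # ready for merging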

 

The range of a camera's auto exposure bracketing (AEB) feature varies widely, from the 3 EV of the Canon EOS 40D to the 18 EV of the Canon EOS-1D Mark II. As the popularity of this imaging method has grown, several camera manufacturers now offer built-in HDR features. For example, the Pentax K-7 DSLR has an HDR mode that captures an HDR image and outputs (only) a tone-mapped JPEG file. The Canon PowerShot G12, PowerShot S95 and PowerShot S100 offer similar features in a smaller format. Nikon's approach, called Active D-Lighting, applies exposure compensation and tone mapping to the image as it comes from the sensor, with the emphasis on retaining a realistic effect. Some smartphones provide HDR modes, and most mobile platforms have apps that provide HDR picture taking.

 

Camera characteristics such as gamma curves, sensor resolution, noise, photometric calibration and color calibration affect resulting high-dynamic-range images.

 

Color film negatives and slides consist of multiple film layers that respond to light differently. As a consequence, transparent originals (especially positive slides) feature a very high dynamic range.

 

Tone mapping

Tone mapping reduces the dynamic range, or contrast ratio, of an entire image while retaining localized contrast. Although it is a distinct operation, tone mapping is often applied to HDRI files by the same software package.
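
As an illustration, the sketch below applies the classic Reinhard global operator L/(1+L) to a linear radiance image; this is one simple operator among many, and the input array is assumed:

# A minimal global tone-mapping sketch using the Reinhard operator.
# 'hdr' is assumed to be a float32 array of linear RGB radiance values.
import numpy as np

def tonemap_reinhard(hdr, gamma=2.2):
    # Per-pixel luminance (Rec. 709 weights)
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    lum_d = lum / (1.0 + lum)                  # compress the dynamic range
    scale = lum_d / np.maximum(lum, 1e-8)      # preserve color ratios
    ldr = hdr * scale[..., None]
    return (np.clip(ldr, 0.0, 1.0) ** (1.0 / gamma) * 255).astype("uint8")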

 

Several software applications are available on the PC, Mac and Linux platforms for producing HDR files and tone-mapped images. Notable titles include:

 

Adobe Photoshop

Aurora HDR

Dynamic Photo HDR

HDR Efex Pro

HDR PhotoStudio

Luminance HDR

MagicRaw

Oloneo PhotoEngine

Photomatix Pro

PTGui

 

Information stored in high-dynamic-range images typically corresponds to the physical values of luminance or radiance that can be observed in the real world. This is different from traditional digital images, which represent colors as they should appear on a monitor or a paper print. Therefore, HDR image formats are often called scene-referred, in contrast to traditional digital images, which are device-referred or output-referred. Furthermore, traditional images are usually encoded for the human visual system (maximizing the visual information stored in the fixed number of bits), which is usually called gamma encoding or gamma correction. The values stored for HDR images are often gamma compressed (power law) or logarithmically encoded, or stored as floating-point linear values, since fixed-point linear encodings become increasingly inefficient over higher dynamic ranges.
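
The contrast between linear and gamma-encoded storage can be sketched in a few lines; the 1/2.2 exponent below is a common approximation rather than any particular standard:

# Gamma (power-law) encoding of linear light, as used by output-referred
# images; a plain 1/2.2 power curve is assumed for simplicity.
import numpy as np

linear = np.linspace(0.0, 1.0, 5)     # normalized linear radiance
encoded = linear ** (1 / 2.2)         # gamma-compressed code values
decoded = encoded ** 2.2              # back to linear for processing

# Gamma encoding spends more code values on dark tones, where vision is
# most sensitive, which is why 8-bit output images use it.
print(np.round(encoded, 3))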

 

Unlike traditional images, HDR images often do not use fixed ranges per color channel, so that they can represent many more colors over a much wider dynamic range. For that purpose, they do not use integer values for the individual color channels (e.g., 0–255 in an 8-bit-per-channel encoding for red, green and blue) but instead use a floating-point representation. Common are 16-bit (half precision) or 32-bit floating-point numbers per HDR pixel. However, when the appropriate transfer function is used, HDR pixels for some applications can be represented with a color depth that has as few as 10–12 bits for luminance and 8 bits for chrominance without introducing any visible quantization artifacts.
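
The floating-point trade-off can itself be sketched: the same radiance values held at half and full precision, with the relative quantization error that the half-precision copy introduces (the sample values are arbitrary):

# HDR pixels as floating point: float32 versus half precision (float16).
import numpy as np

radiance = np.array([0.001, 0.5, 12.0, 4500.0], dtype=np.float32)
half = radiance.astype(np.float16)    # compact per-channel HDR storage

rel_err = np.abs(half.astype(np.float32) - radiance) / radiance
print(rel_err)  # stays small across orders of magnitude, unlike a
                # fixed-point integer encoding of the same range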

 

History of HDR photography

The idea of using several exposures to adequately reproduce a too-extreme range of luminance was pioneered as early as the 1850s by Gustave Le Gray to render seascapes showing both the sky and the sea. Such rendering was impossible at the time using standard methods, as the luminosity range was too extreme. Le Gray used one negative for the sky, and another one with a longer exposure for the sea, and combined the two into a single positive print.

 

Mid 20th century

Manual tone mapping was accomplished by dodging and burning – selectively increasing or decreasing the exposure of regions of the photograph to yield better tonality reproduction. This was effective because the dynamic range of the negative is significantly higher than would be available on the finished positive paper print when that is exposed via the negative in a uniform manner. An excellent example is the photograph Schweitzer at the Lamp by W. Eugene Smith, from his 1954 photo essay A Man of Mercy on Dr. Albert Schweitzer and his humanitarian work in French Equatorial Africa. Producing the print took five days of work to reproduce the tonal range of the scene, which extends from a bright lamp (relative to the scene) to a dark shadow.

 

Ansel Adams elevated dodging and burning to an art form. Many of his famous prints were manipulated in the darkroom with these two methods. Adams wrote a comprehensive book on producing prints called The Print, which prominently features dodging and burning, in the context of his Zone System.

 

With the advent of color photography, tone mapping in the darkroom was no longer possible due to the specific timing needed during the developing process of color film. Photographers looked to film manufacturers to design new film stocks with improved response, or continued to shoot in black and white to use tone mapping methods.

 

Color film capable of directly recording high-dynamic-range images was developed by Charles Wyckoff and EG&G "in the course of a contract with the Department of the Air Force". This XR film had three emulsion layers, an upper layer having an ASA speed rating of 400, a middle layer with an intermediate rating, and a lower layer with an ASA rating of 0.004. The film was processed in a manner similar to color films, and each layer produced a different color. The dynamic range of this extended range film has been estimated as 1:10^8. It has been used to photograph nuclear explosions, for astronomical photography, for spectrographic research, and for medical imaging. Wyckoff's detailed pictures of nuclear explosions appeared on the cover of Life magazine in the mid-1950s.

 

Late 20th century

Georges Cornuéjols and licensees of his patents (Brdi, Hymatom) introduced the principle of the HDR video image in 1986, by interposing a matricial LCD screen in front of the camera's image sensor, increasing the sensor's dynamic range by five stops. The concept of neighborhood tone mapping was applied to video cameras in 1988 by a group from the Technion in Israel, led by Dr. Oliver Hilsenrath and Prof. Y. Y. Zeevi, who filed a patent on the concept.

 

In February and April 1990, Georges Cornuéjols introduced the first real-time HDR camera, which combined two images captured by a single sensor, or simultaneously by two sensors of the camera. This process is a form of bracketing applied to a video stream.

 

In 1991, Hymatom, a licensee of Georges Cornuéjols, introduced the first commercial video camera that captured multiple images with different exposures in real time and produced an HDR video image.

 

Also in 1991, Georges Cornuéjols introduced the HDR+ image principle of non-linear accumulation of images to increase the sensitivity of the camera: in low-light environments, several successive images are accumulated, increasing the signal-to-noise ratio.

 

In 1993, the Technion introduced another commercial camera, for medical use, that produced an HDR video image.

 

Modern HDR imaging uses a completely different approach, based on making a high-dynamic-range luminance or light map using only global image operations (across the entire image), and then tone mapping the result. Global HDR was first introduced in 1993, resulting in a mathematical theory of differently exposed pictures of the same subject matter, published in 1995 by Steve Mann and Rosalind Picard.

 

On October 28, 1998, Ben Sarao created one of the first nighttime HDR+G (high dynamic range + graphic) images, of STS-95 on the launch pad at NASA's Kennedy Space Center. It consisted of four film images of the shuttle at night that were digitally composited with additional digital graphic elements. The image was first exhibited at the NASA Headquarters Great Hall, Washington DC, in 1999 and then published in Hasselblad Forum, Issue 3, 1999, Volume 35, ISSN 0282-5449.

 

The advent of consumer digital cameras produced a new demand for HDR imaging to improve the light response of digital camera sensors, which had a much smaller dynamic range than film. Steve Mann developed and patented the global-HDR method for producing digital images having extended dynamic range at the MIT Media Laboratory. Mann's method involved a two-step procedure: (1) generate one floating-point image array by global-only image operations (operations that affect all pixels identically, without regard to their local neighborhoods); and then (2) convert this image array, using local neighborhood processing (tone remapping, etc.), into an HDR image. The image array generated by the first step of Mann's process is called a lightspace image, lightspace picture, or radiance map. Another benefit of global-HDR imaging is that it provides access to the intermediate light or radiance map, which has been used for computer vision and other image processing operations.
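
A schematic sketch of that two-step structure follows; it is a toy illustration of the idea (global radiance estimation, then tone mapping), not Mann's published algorithm, and it assumes the input frames are already linearized:

# A toy sketch of two-step global HDR: (1) build a floating-point
# radiance map using only global, pixel-wise operations, then (2) tone-map
# it. Not Mann's published algorithm; frames are assumed linearized.
import numpy as np

def radiance_map(frames, times):
    # frames: list of float arrays in [0, 1]; times: exposure in seconds
    acc = np.zeros_like(frames[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(frames, times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # trust mid-tones the most
        acc += w * (img / t)                # divide out the exposure time
        wsum += w
    return acc / np.maximum(wsum, 1e-8)     # the "lightspace" image

def to_display(radiance):
    return radiance / (1.0 + radiance)      # step 2: simple tone mapping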

 

21st century

In 2005, Adobe Systems introduced several new features in Photoshop CS2, including Merge to HDR, 32-bit floating-point image support, and HDR tone mapping.

 

On June 30, 2016, Microsoft added support for the digital compositing of HDR images to Windows 10 using the Universal Windows Platform.

 

HDR sensors

Modern CMOS image sensors can often capture a high dynamic range from a single exposure. The wide dynamic range of the captured image is non-linearly compressed into a smaller dynamic range electronic representation. However, with proper processing, the information from a single exposure can be used to create an HDR image.
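
The on-sensor compression can be pictured as a companding curve that later software inverts to recover linear values; the knee points in this sketch are purely hypothetical and do not describe any real sensor:

# A toy model of on-sensor dynamic-range compression (companding) and
# the decompanding step that recovers linear values. The knee points are
# hypothetical, not any real sensor's response.
import numpy as np

def compand(x):
    # Full slope in the shadows, progressively gentler above each knee
    return np.where(x < 1, x,
           np.where(x < 16, 1 + (x - 1) / 4, 4.75 + (x - 16) / 64))

def decompand(y):
    return np.where(y < 1, y,
           np.where(y < 4.75, 1 + (y - 1) * 4, 16 + (y - 4.75) * 64))

x = np.array([0.5, 8.0, 1000.0])
assert np.allclose(decompand(compand(x)), x)   # processing recovers HDR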

 

Such HDR imaging is used in extreme dynamic range applications like welding or automotive work. Other cameras, designed for security applications, can automatically provide two or more images for each frame with different exposures. For example, a sensor intended for 30 fps video will output 60 fps, with the odd frames at a short exposure time and the even frames at a longer exposure time. Some sensors may even combine the two images on-chip, so that a wider dynamic range without in-pixel compression is directly available to the user for display or processing.
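
Fusing such an interleaved short/long pair can be sketched simply: keep the long exposure where it is not clipped, and scale the short exposure into the same radiance units elsewhere; the 4x exposure ratio below is an assumed sensor setting:

# Fusing an interleaved exposure pair from a dual-exposure video sensor.
# 'long_f' and 'short_f' are consecutive frames as float arrays in [0, 1];
# the 4x exposure ratio is an assumption for the example.
import numpy as np

def fuse_pair(long_f, short_f, ratio=4.0, clip=0.95):
    saturated = long_f >= clip            # long exposure has blown out
    return np.where(saturated, short_f * ratio, long_f)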

 

en.wikipedia.org/wiki/High-dynamic-range_imaging

 

Infrared Photography

 

In infrared photography, the film or image sensor used is sensitive to infrared light. The part of the spectrum used is referred to as near-infrared to distinguish it from far-infrared, which is the domain of thermal imaging. Wavelengths used for photography range from about 700 nm to about 900 nm. Film is usually sensitive to visible light too, so an infrared-passing filter is used; this lets infrared (IR) light pass through to the camera, but blocks all or most of the visible light spectrum (the filter thus looks black or deep red). ("Infrared filter" may refer either to this type of filter or to one that blocks infrared but passes other wavelengths.)

 

When these filters are used together with infrared-sensitive film or sensors, "in-camera effects" can be obtained; false-color or black-and-white images with a dreamlike or sometimes lurid appearance known as the "Wood Effect," an effect mainly caused by foliage (such as tree leaves and grass) strongly reflecting in the same way visible light is reflected from snow. There is a small contribution from chlorophyll fluorescence, but this is marginal and is not the real cause of the brightness seen in infrared photographs. The effect is named after the infrared photography pioneer Robert W. Wood, and not after the material wood, which does not strongly reflect infrared.

 

The other attributes of infrared photographs include very dark skies and penetration of atmospheric haze, caused by reduced Rayleigh scattering and Mie scattering, respectively, compared to visible light. The dark skies, in turn, result in less infrared light in shadows and dark reflections of those skies from water, and clouds will stand out strongly. These wavelengths also penetrate a few millimeters into skin and give a milky look to portraits, although eyes often look black.

 

Until the early 20th century, infrared photography was not possible because silver halide emulsions are not sensitive to wavelengths longer than that of blue light (and, to a lesser extent, green light) without the addition of a dye to act as a color sensitizer. The first infrared photographs (as distinct from spectrographs) to be published appeared in the February 1910 edition of The Century Magazine and in the October 1910 edition of the Royal Photographic Society Journal to illustrate papers by Robert W. Wood, who discovered the unusual effects that now bear his name. The RPS co-ordinated events to celebrate the centenary in 2010. Wood's photographs were taken on experimental film that required very long exposures; thus, most of his work focused on landscapes. A further set of infrared landscapes taken by Wood in Italy in 1911 used plates provided for him by C. E. K. Mees at Wratten & Wainwright. Mees also took a few infrared photographs in Portugal in 1910, which are now in the Kodak archives.

 

Infrared-sensitive photographic plates were developed in the United States during World War I for spectroscopic analysis, and infrared sensitizing dyes were investigated for improved haze penetration in aerial photography. After 1930, new emulsions from Kodak and other manufacturers became useful to infrared astronomy.

 

Infrared photography became popular with photography enthusiasts in the 1930s when suitable film was introduced commercially. The Times regularly published landscape and aerial photographs taken by their staff photographers using Ilford infrared film. By 1937, 33 kinds of infrared film were available from five manufacturers, including Agfa, Kodak and Ilford. Infrared movie film was also available and was used to create day-for-night effects in motion pictures, a notable example being the pseudo-night aerial sequences in the James Cagney/Bette Davis movie The Bride Came C.O.D.

 

False-color infrared photography became widely practiced with the introduction of Kodak Ektachrome Infrared Aero Film and Ektachrome Infrared EIR. The first version of this, known as Kodacolor Aero-Reversal-Film, was developed by Clark and others at Kodak for camouflage detection in the 1940s. The film became more widely available in 35 mm form in the 1960s, but KODAK AEROCHROME III Infrared Film 1443 has since been discontinued.

 

Infrared photography became popular with a number of 1960s recording artists because of the unusual results; Jimi Hendrix, Donovan and Frank Zappa among them released albums with infrared cover photos. Because infrared light focuses at a slightly different point than visible light, a small aperture and a slow shutter speed without focus compensation can still yield acceptably sharp pictures; wider apertures like f/2.0 can produce sharp photos only if the lens is meticulously refocused to the infrared index mark, and only if this index mark is the correct one for the filter and film in use. Diffraction effects inside a camera are also greater at infrared wavelengths, so stopping down the lens too far may actually reduce sharpness.

 

Most apochromatic ('APO') lenses do not have an infrared index mark and do not need to be refocused for the infrared spectrum because they are already optically corrected into the near-infrared spectrum. Catadioptric lenses often do not require this adjustment because their mirror-containing elements do not suffer from chromatic aberration, so the overall aberration is comparatively less. Catadioptric lenses do, of course, still contain lenses, and these lenses do still have a dispersive property.

 

Infrared black-and-white films require special development times, but development is usually achieved with standard black-and-white film developers and chemicals (like D-76). Kodak HIE film has a polyester film base that is very stable but extremely easy to scratch, so special care must be taken in handling it throughout the development and printing/scanning process to avoid damage to the film. Kodak HIE film was sensitive to 900 nm.

 

As of November 2, 2007, "KODAK is preannouncing the discontinuance" of HIE Infrared 35 mm film, stating: "Demand for these products has been declining significantly in recent years, and it is no longer practical to continue to manufacture given the low volume, the age of the product formulations and the complexity of the processes involved." At the time of this notice, HIE Infrared 135-36 was available at a street price of around $12.00 a roll at US mail order outlets.

 

Arguably the greatest obstacle to infrared film photography has been the increasing difficulty of obtaining infrared-sensitive film. Despite the discontinuance of HIE, newer infrared-sensitive emulsions from EFKE, ROLLEI and ILFORD are still available, although these formulations have different sensitivities and specifications from the venerable Kodak HIE, which had been around for at least two decades. Some of these infrared films are available in 120 and larger formats as well as 35 mm, which adds flexibility to their application. With the discontinuance of Kodak HIE, Efke's IR820 film became the only IR film on the market with good sensitivity beyond 750 nm; the Rollei film does extend beyond 750 nm, but its IR sensitivity falls off very rapidly.

  

Color infrared transparency films have three sensitized layers that, because of the way the dyes are coupled to these layers, reproduce infrared as red, red as green, and green as blue. All three layers are sensitive to blue so the film must be used with a yellow filter, since this will block blue light but allow the remaining colors to reach the film. The health of foliage can be determined from the relative strengths of green and infrared light reflected; this shows in color infrared as a shift from red (healthy) towards magenta (unhealthy). Early color infrared films were developed in the older E-4 process, but Kodak later manufactured a color transparency film that could be developed in standard E-6 chemistry, although more accurate results were obtained by developing using the AR-5 process. In general, color infrared does not need to be refocused to the infrared index mark on the lens.

 

In 2007, Kodak announced that production of the 35 mm version of their color infrared film (Ektachrome Professional Infrared/EIR) would cease, as there was insufficient demand. Since 2011, all formats of color infrared film, specifically Aerochrome 1443 and SO-734, have been discontinued.

 

There is no currently available digital camera that will produce the same results as Kodak color infrared film, although equivalent images can be produced by taking two exposures, one infrared and the other full-color, and combining them in post-production. The color images produced by digital still cameras using infrared-pass filters are not equivalent to those produced on color infrared film. The colors result from varying amounts of infrared passing through the color filters on the photo sites, further amended by the Bayer filtering. While this makes such images unsuitable for the kind of applications for which the film was used, such as remote sensing of plant health, the resulting color tonality has proved popular artistically.
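
That two-exposure combination can be sketched as the film's layer mapping (infrared rendered as red, red as green, green as blue); the inputs are assumed to be registered, normalized arrays, and the mapping only mimics the film's dye couplings, not their exact spectral response:

# Simulating color-infrared film from two registered exposures: one shot
# through an IR-pass filter (grayscale) and one normal color frame. Both
# are assumed to be aligned float arrays in [0, 1].
import numpy as np

def false_color_ir(ir_gray, color_rgb):
    out = np.empty_like(color_rgb)
    out[..., 0] = ir_gray              # infrared rendered as red
    out[..., 1] = color_rgb[..., 0]    # red rendered as green
    out[..., 2] = color_rgb[..., 1]    # green rendered as blue
    return out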

 

Color digital infrared, as part of full-spectrum photography, is gaining popularity. The ease of creating a softly colored photo with infrared characteristics has found interest among hobbyists and professionals.

 

In 2008, Los Angeles photographer Dean Bennici started cutting and hand-rolling Aerochrome color infrared film. All Aerochrome medium- and large-format film that exists today came directly from his lab. The trend in infrared photography continues to gain momentum with the success of photographer Richard Mosse and many other users around the world.

 

Digital camera sensors are inherently sensitive to infrared light, which would interfere with normal photography by confusing the autofocus calculations, softening the image (because infrared light is focused differently from visible light), or oversaturating the red channel. Also, some clothing is transparent in the infrared, leading to unintended (at least on the manufacturer's part) uses of video cameras. Thus, to improve image quality and protect privacy, many digital cameras employ infrared blockers. Depending on the subject matter, infrared photography may not be practical with these cameras because the exposure times become overly long, often in the range of 30 seconds, creating noise and motion blur in the final image. However, for some subject matter the long exposure does not matter, or the motion blur effects actually add to the image. Some lenses will also show a 'hot spot' in the centre of the image, as their coatings are optimised for visible light and not for IR.

 

An alternative method of DSLR infrared photography is to remove the infrared blocker in front of the sensor and replace it with a filter that removes visible light. This filter is behind the mirror, so the camera can be used normally: handheld shooting, normal shutter speeds, normal composition through the viewfinder and focusing all work as usual. Metering works but is not always accurate because of the difference between visible and infrared refraction. When the IR blocker is removed, many lenses that displayed a hotspot cease to do so and become perfectly usable for infrared photography. Additionally, because the red, green and blue micro-filters remain and transmit not only their respective colors but also infrared, enhanced infrared color may be recorded.

 

Since the Bayer filters in most digital cameras absorb a significant fraction of the infrared light, these cameras are often not very sensitive as infrared cameras and can sometimes produce false colors in the images. An alternative approach is to use a Foveon X3 sensor, which does not have absorptive filters on it; the Sigma SD10 DSLR has a removable IR blocking filter and dust protector, which can simply be omitted or replaced by a deep red or complete visible-light-blocking filter. The Sigma SD14 has an IR/UV blocking filter that can be removed or installed without tools. The result is a very sensitive digital IR camera.

 

While it is common to use a filter that blocks almost all visible light, the wavelength sensitivity of a digital camera without internal infrared blocking is such that a variety of artistic results can be obtained with more conventional filtration. For example, a very dark neutral density filter can be used (such as the Hoya ND400) which passes a very small amount of visible light compared to the near-infrared it allows through. Wider filtration permits an SLR viewfinder to be used and also passes more varied color information to the sensor without necessarily reducing the Wood effect. Wider filtration is however likely to reduce other infrared artefacts such as haze penetration and darkened skies. This technique mirrors the methods used by infrared film photographers where black-and-white infrared film was often used with a deep red filter rather than a visually opaque one.

 

Another common technique with near-infrared filters is to swap the blue and red channels in software (e.g. Photoshop), which retains much of the characteristic 'white foliage' while rendering skies a glorious blue.
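
In NumPy the swap is a one-line index operation; the input is assumed to be an RGB array from an IR-converted camera:

# Red/blue channel swap for near-infrared captures, the software step
# described above: foliage stays bright while the red-tinted sky turns blue.
import numpy as np

def swap_red_blue(img):
    # img: H x W x 3 RGB array from an IR-converted camera (assumed)
    return img[..., [2, 1, 0]]         # exchange the red and blue channels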

 

Several Sony cameras had the so-called Night Shot facility, which physically moves the blocking filter away from the light path, making the cameras very sensitive to infrared light. Soon after its introduction, this facility was 'restricted' by Sony to make it difficult for people to take photos that saw through clothing. To do this, the iris is opened fully and exposure duration is limited to long times of more than 1/30 second or so. It is possible to shoot infrared, but neutral density filters must be used to reduce the camera's sensitivity, and the long exposure times mean that care must be taken to avoid camera-shake artifacts.

 

Fuji have produced digital cameras for use in forensic criminology and medicine which have no infrared blocking filter. The first camera, designated the S3 PRO UVIR, also had extended ultraviolet sensitivity (digital sensors are usually less sensitive to UV than to IR). Optimum UV sensitivity requires special lenses, but ordinary lenses usually work well for IR. In 2007, FujiFilm introduced a new version of this camera, based on the Nikon D200/FujiFilm S5, called the IS Pro, which can also take Nikon lenses. Fuji had earlier introduced a non-SLR infrared camera, the IS-1, a modified version of the FujiFilm FinePix S9100. Unlike the S3 PRO UVIR, the IS-1 does not offer UV sensitivity. FujiFilm restricts the sale of these cameras to professional users, with their EULA specifically prohibiting "unethical photographic conduct".

 

Phase One digital camera backs can be ordered in an infrared modified form.

 

Remote sensing and thermographic cameras are sensitive to longer wavelengths of infrared (see Infrared spectrum#Commonly used sub-division scheme). They may be multispectral and use a variety of technologies which may not resemble common camera or filter designs. Cameras sensitive to longer infrared wavelengths including those used in infrared astronomy often require cooling to reduce thermally induced dark currents in the sensor (see Dark current (physics)). Lower cost uncooled thermographic digital cameras operate in the Long Wave infrared band (see Thermographic camera#Uncooled infrared detectors). These cameras are generally used for building inspection or preventative maintenance but can be used for artistic pursuits as well.

 

en.wikipedia.org/wiki/Infrared_photography

 

A picture showing part of the devices I have bought so far

iPhone 5 box

OnePlus One 64 GB box

iPod touch 1st gen. 8 GB box with John Lennon

iPhone 4s white

iPad 4 white 32 GB box
