
Imperative once

Exactness of detail

Time may blur

 

With rotating spheres nested within spheres, this mechanical device afforded motion sensing on two axes, much less effectively than the 6-axis accelerometers now standard in all of our cell phones (and built on-chip with silicon MEMS). But she is a beauty inside, and the subject of a Pilat painting (details in comment below).

 

UPDATE: new research shows that this unit, S/N 39, was in 1973 an LM spare located at AC/Delco in Milwaukee. Two of its three Inertial Reference Integrating Gyroscopes (IRIGs) were prime spares for Apollo 17.

 

This Block II Apollo Inertial Measurement Unit (IMU) was the heart of the spacecraft’s primary guidance, navigation, and control system (PGNCS). Here is a cool 1-minute video on this "ball of wizardry and magic" and a short I posted of it spinning.

 

The IMU provided inertial reference inputs to the onboard Apollo Guidance Computer and the Flight Director Attitude Indicators, and served as a fixed reference in space against which to measure vehicle displacement. Encased within the housing are three inertial rate integrating gyros and three pulsed integrating pendulous accelerometers, mounted on a gimbaled platform to allow three degrees of freedom. Any displacement of the platform, resulting from a change in spacecraft attitude or velocity, would be sensed, and signals representing the magnitude and direction of the displacement would be communicated. The IMU was developed by Dr. Charles Draper and the MIT Instrumentation Laboratory and manufactured by General Motors (A.C. Spark Plug Division); the technology is a derivative of the Polaris ballistic missile submarine guidance system. RRAuction described it as "an extreme rarity, auction records indicate this is likely the first of its kind offered for sale."

 

The device is spherical and approximately 12″ in diameter, and bears a metal NASA tag reading: “Apollo G. &. N. System. Name: Inertial Measuring Unit, Part No. 2018601-241, Serial No. AC 39, Cont. No. NAS 9-497.” Above this is another tag, labeled “PIP,” reading: “X: 2AP-293R, Y: 3AP-313, Z: 2AP-241.” An artifact in the Future Ventures’ 🚀 Space Collection.

I named this one 'The Device' because that's what they call the mysterious Egyptian contraption in the movie 'Stargate' before they find out what it actually is.

 

I kind of feel the same about this weird gizmo that lives outside Glasgow Science Centre. If anyone has any ideas about what this is, what it does or who designed it, I would love to hear more.

 

It may well be a covert transmitter to Science Centre security who appeared on the scene right after I took this shot and said I couldn't take photos. When I questioned this he made some lame excuse about the management of the Science Centre chasing me because they don't like people taking photos so close to the BBC TV studios.....eh?

 

Maybe 'The Device' was actually his spaceship.....

“Women are strange and incomprehensible, a device invented by Providence to keep the wit of man well sharpened by constant employment.”

 

(Arnold Bennett)

 

Model: Cátia Mayettes

Happy Valentina's Day! 🚀 👩‍🚀 💫

 

Today is the 57th anniversary of the first woman in space — Russia's Valentina Tereshkova on Vostok 6, a scary bullet of a capsule that could not make a soft landing, necessitating a skydive leap during descent (just as Yuri Gagarin had to do).

 

She was 26 at the time and an amateur skydiver. She is the youngest female astronaut and the only woman to fly solo. To date, only 12% of astronauts have been women, despite being arguably better suited for the job. That should change.

 

After the flight of Yuri Gagarin in 1961, Nikolai Kamanin, director of cosmonaut training, read in American media that female pilots were training to be astronauts. In his diary, he wrote, "We cannot allow that the first woman in space will be American. This would be an insult to the patriotic feelings of Soviet women." Approval was granted for five female cosmonauts in the next group, which would begin training in 1963. To increase the odds of sending a Soviet woman into space first, the female cosmonauts began their training before the males.

 

With a single flight, she logged more flight time than the combined times of all American astronauts who had flown before that date.

 

Her call sign in this flight was Chaika (Russian: Ча́йка, or 'Seagull'), later commemorated as the name of an asteroid, 1671 Chaika. After her launch, she radioed down:

 

"It is I, Seagull! Everything is fine. I see the horizon; it's a sky blue with a dark strip. How beautiful the Earth is!"

Wikipedia

 

As planned in all Vostok missions, Tereshkova ejected from the capsule during its descent at about four miles above Kazakhstan and made a parachute landing, quite a thrill ride for this skydiver!

 

She had dinner with some local villagers in the Altai Krai who helped her to get out of her spacesuit.

 

Her flight became Cold War propaganda to demonstrate the superiority of communism. At the 1963 World Congress of Women, Soviet leader Nikita Khrushchev used Tereshkova's voyage to declare the USSR had achieved equality for women.

 

I bought this in France from Tereshkova’s instructor at the Zhukovsky Air Force Engineering Academy. His translated description: "This is an angle measuring device of the Russian aerospace for astronomic navigation, especially for calculation of flight angles. It was personally used by the Soviet cosmonaut Valentina Tereshkova. It is the model CMK 3 numbered 2416305 (badge on the device). The device still possesses the original leather strap. It consists of a turning wheel that is marked with degree values. Once the zero point on the scale is adjusted to a certain height, the orbs’ and targets’ location can be measured. In 1963, Tereshkova was the first woman to fly into space and remained the only woman in space until Svetlana Savitskaya’s flight in 1982. The device is in good condition with traces of use. The dimensions are 12 x 19.5 x 6 cm (height x width x depth)."

 

Part of the space collection at work. More angles below.

Translucent cobalt blue; handles in same color.

 

On shoulder, six palmettes with alternating inward and outward facing leaves at angles, and six recessed semicircular pediments with thick raised rib-like edges on panels, decorated alternately with circular bosses comprising two small concentric circles and a central dot and a plain four-armed cross; on body, six panels, each surrounded by raised lines and each containing a different device: 1) Greek inscription in three lines; 2) palmette with inward facing leaves above suspended tendrils at either side tied into a loop below to support a bunch of grapes; 3) ivy tendrils hanging from top corners supporting a cantharus by one of its handles; 4) palmette with outward facing leaves above suspended tendrils at either side tied into a loop below to support double flutes; 5) ivy tendrils hanging from top corners supporting a fluted oinochoe by its handle; 6) palmette with inward facing leaves above suspended tendrils at either side tied into a loop below to support a set of pipes; on bottom, four concentric raised circles. Broken on body and bottom, with one hole on bottom edge of shoulder, lower part of three panels, and slightly over half of bottom missing; few bubbles and black inclusions; some dulling and faint pitting, patches of creamy brown weathering with faint iridescence.

 

Shoulder and body blown in a three-part mold; separate mold for bottom. Inscribed in Greek: "Ennion made [me/it]".

 

Early Imperial, Julio-Claudian, 1st half of 1st century. Said to be from Potamia, near Golgoi, Cyprus.

 

Met Museum (81.10.224)

I've been whale watching a number of times and this is the first time I've had to wear a suit! The suits were heavy, but they sure were warm, and in the end I'm glad I wore mine, as it also acted as a flotation device.

My friend is visiting from Toronto and is just loving it here on the West Coast of British Columbia.

A corridor with a lot of body fridges and a working fly-killer device on the wall, in a former morgue building used until 2009. Strange place... the power is still on, the tools work, and there is water in the taps. And a very large CCTV camera outside points straight at the entry point... Good times. My first morgue.

 

Some more pictures on my blog.

 

From the "1000 miles and running" tour. 10 urbex locations all around UK in 4 days.

 

On tour with Andre Govia, Rusty Photography, Martin Widlund and Haribohoe.

 

My blog || twitter || youtube || vimeo || tumblr || 500px

Alpaca World HD+ is a simulation and Tamagotchi in one. Gamers get their hands on a small farm located in the highlands of South America, where they breed and raise alpacas. These cute animals become real pets living in your mobile device. No smell, and no need to clean up after them...

 

apkplay.org/alpaca-world-hd-v-3-2-2-mod-apk-money/

Cpl. Gurdeep Mann, left, a squad leader, and Lance Cpl. James Caulk, a rifleman, both with Kilo Company, 3rd Battalion, 4th Marine Regiment, climb a wall during Counter Improvised Explosive Device training at Camp Leatherneck, Helmand province, Afghanistan, April 3, 2013. The training, taught by 2nd Combat Engineer Battalion instructors, covered CIED tactics and techniques in a Military Operations in Urban Terrain (MOUT) environment.

 

(U.S. Marine Corps photo by Sgt. Tammy K. Hineline)

NASAViz Story Antarctica Exposed on the iPhone

 

Download the video here:

svs.gsfc.nasa.gov/vis/a010000/a011200/a011274/index.html

 

NASA Visualization Explorer Now Available For All iOS Devices

 

The popular NASA Visualization Explorer app, first launched for the iPad in July 2011, is now available for the iPhone and all devices running iOS 5.1 or later.

 

A new universal version of the app is now available for download in the iTunes app store. Click here: svs.gsfc.nasa.gov/nasaviz/ to download the app

 

The app, which features the data visualization work of NASA's Scientific Visualization Studio, Earth Observatory and others, publishes two stories per week about the full range of NASA's astrophysics, planetary, heliophysics and Earth science missions.

 

Read more:

1.usa.gov/1h9Bkf0

 

Join the NASAViz Community on Facebook: www.facebook.com/NasaViz

 

Follow us @NASAViz: twitter.com/#!/nasaviz

 

NASA's Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission.

 

Credit: NASA/Goddard

 

NASA image use policy.

 


 

Follow us on Twitter

 

Like us on Facebook

 

Find us on Instagram

Pilatus Bahnen AG: the Pilatus rack railway connects Alpnachstad (440 meters above sea level) to the summit of Mount Pilatus (2,073 meters above sea level) over a single 800 mm gauge track 4.8 km long. It is the steepest rack railway in the world, with a maximum gradient of 48%, and has some features that make it unique. It uses the Locher rack system with horizontal side pinions, since conventional rack-and-pinion systems with vertical pinions could not guarantee that the pinions would stay engaged with the rack on the steepest ramps. Because of the Locher system, the wheels of the original steam railcars had no flanges at first; later, all the wheels were fitted with external flanges rather than the inside flanges usual on other railways. The switches are therefore also unconventional: instead of moving parts, entire track sections slide laterally or are turned over.

 

The line was put into service on June 4, 1889, and electrification at 1550 V DC was put into service on May 15, 1937.

 

The service is currently provided by twelve railcars. In 1937 SLM and MFO built eight electric passenger railcars (numbers Beh 1/2 21-28). In 1954 SLM built an electric freight railcar (Ohe 1/2 31), which has normally been used as a passenger car, numbered 29, since receiving a passenger body in 1962. In 1968 SLM built the last electric passenger railcar, Beh 1/2 30. Finally, in 1981 Stadler built the diesel departmental railcar Xhm 1/2 32.

 

The railway is operated on the "train package" system: two convoys, each formed of several railcars, run one behind the other in driving-on-sight mode. The ascending and descending convoys cross at the intermediate station of Ämsigen.

____________________________________________________

 

In this picture, crossing at the intermediate station of Ämsigen with an upbound convoy.

"A Buoy": a floating device that can serve many purposes.

The main environmental issues associated with the implementation of the 5G network come with the manufacturing of the many component parts of the 5G infrastructure. In addition, the proliferation of new 5G-dependent devices, driven by accelerating consumer demand, will have serious environmental consequences. The 5G network will inevitably cause a large increase in energy usage among consumers, and energy usage is already one of the main contributors to climate change. Additionally, the manufacturing and maintenance of the new technologies associated with 5G create waste and consume important resources, with detrimental consequences for the environment. 5G networks use technology that has harmful effects on birds, which in turn has cascading effects through entire ecosystems. And while 5G developers are seeking to create a network that has fewer environmental impacts than past networks, there is still room for improvement, and the consequences of 5G should be considered before it is widely rolled out.

5G stands for the fifth generation of wireless technology: the wave of wireless technology surpassing the 4G network in use now. Previous generations brought the first cell phones (1G), text messaging (2G), online capabilities (3G), and faster speeds (4G). The fifth generation aims to increase the speed of data movement, be more responsive, and allow more devices to connect simultaneously.[2] This means that 5G will allow nearly instantaneous downloading of data that, on the current network, would take hours; downloading a movie over 5G would take mere seconds. These improvements will enable self-driving cars, massive expansion of Internet of Things (IoT) device use, and acceleration of new technological advancements used in everyday activities by a much wider range of people.
While 5G is not fully developed, it is expected to rely on at least five new technologies that allow it to perform much more complicated tasks at faster speeds: hardware that works at much higher frequencies (millimeter wavelengths), small cells, massive MIMO (multiple input, multiple output), beamforming, and full duplex.[3] Working together, these technologies will expand the potential of many of the devices used today and of devices being developed for the future.

Millimeter waves occupy a higher-frequency portion of the spectrum than the radio wavelengths generally used in wireless transmission today.[4] Higher frequency corresponds to shorter wavelength, in this case in the millimeter range (versus the lower radio frequencies, whose wavelengths can run from meters to hundreds of kilometers). Higher-frequency waves allow more devices to connect to the same network at the same time, because there is more space available than on the crowded radio bands used today. The waves in use now have much longer wavelengths than those anticipated for parts of the 5G implementation: current waves can measure up to tens of centimeters, while the new 5G waves would be no greater than ten millimeters.[5] The millimeter waves will thus create more transmission space for the ever-expanding number of people and devices crowding the current networks, which in turn will increase energy usage and, subsequently, global warming. Millimeter waves are very weak at connecting two devices over distance, which is why 5G needs something called "small cells" to give full, uninterrupted coverage.
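The wavelength figures above follow from the basic relation λ = c/f. A minimal sketch of the arithmetic (the example frequencies are illustrative choices, not taken from the article):

```python
# Free-space wavelength from frequency: lambda = c / f
C = 299_792_458  # speed of light, m/s

def wavelength_mm(freq_hz):
    """Return the free-space wavelength in millimeters."""
    return C / freq_hz * 1000

# A typical 4G-era carrier near 2 GHz has ~15 cm waves ("tens of centimeters").
print(f"2 GHz  -> {wavelength_mm(2e9):.1f} mm")
# The millimeter-wave band conventionally starts at 30 GHz: a ~10 mm wavelength.
print(f"30 GHz -> {wavelength_mm(30e9):.1f} mm")
# At 60 GHz the wavelength halves again, to ~5 mm.
print(f"60 GHz -> {wavelength_mm(60e9):.1f} mm")
```

The shorter wavelength is also why these signals struggle with walls and rain, motivating the small cells discussed next.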
Small cells are essentially miniature cell towers that would be placed about 250 meters apart throughout cities and other areas needing coverage.[6] They are necessary because signals at this higher frequency and shorter wavelength have more difficulty passing through solid objects and are even easily intercepted by rain.[7] The small cells could be placed on anything from trees to street lights to the sides of businesses and homes to maximize connection and limit "dead zones" (areas where connections are lost).

The next new piece of technology necessary for 5G is massive MIMO, which stands for multiple input, multiple output. Massive MIMO describes the capacity of 5G's base stations, which would be able to handle a much larger amount of data at any one moment. Current 4G base stations have around eight transmitters and four receivers directing the flow of data between devices;[9] 5G will exceed this capacity with massive MIMO arrays that can handle 22 times more ports. Figure 1 shows how a massive MIMO tower would be able to direct a higher number of connections at once. However, massive MIMO causes signals to be crossed more easily: crossed signals interrupt the transmission of data from one device to the next because the wavelengths clash as they travel to their respective destinations.

To overcome the crossed-signals problem and maximize the efficiency of sending data, 5G will use another new technology called beamforming. Beamforming directs exactly where data are being sent, using an array of antennas to organize signals based on characteristics such as their magnitude. By sending signals directly where they need to go, beamforming decreases the chance that a signal is dropped due to interference from a physical object.

One way that 5G will follow through on its promise of faster data transmission is by sending and receiving data simultaneously, a method called full duplexing. Full duplexing will cut transmission times in half, because it allows a response to occur as soon as an input is delivered, eliminating the turnaround time seen in transmission today. While full duplex capabilities allow faster transmission of data, they introduce signal interference in the form of echoes.

Because these technologies are new and untested, it is hard to say how they will impact our environment. This raises another issue: some impacts can be anticipated and predicted, but there are also unanticipated impacts, because much of the new technology is untested. Nevertheless, it is possible to anticipate some of the detrimental environmental consequences of the new technologies and the 5G network, because we know they will increase exposure to harmful radiation, increase mining of rare minerals, increase waste, and increase energy usage. The main 5G environmental concerns have to do with two of the five new components: the millimeter waves and the small cells.

The whole aim of the new 5G network is to allow more devices to be used by the consumer at faster rates than ever before; because of this goal, there will certainly be an increase in energy usage globally. Energy usage is one of the main contributors to climate change today, and an increase in energy usage would cause climate change to accelerate drastically as well. 5G will operate on a higher-frequency portion of the spectrum to open new space for more devices.
The smaller size of the millimeter waves compared to radio-frequency waves allows more data to be shared more quickly and creates a wide bandwidth that can support much larger tasks.[15] While more space for devices is great for consumers, it will lead to a spike in energy usage for two reasons: the technology itself is energy-demanding, and it will increase demand for more electronic devices. The ability to use more devices on the same network gives consumers more incentive to buy electronics and use them more often, which will harm the environment through increased energy use.

Climate change has several underlying contributors; however, energy usage is gaining attention for its severity in perpetuating climate change. Before 5G has even been released, about 2% of the world's greenhouse gas emissions can be attributed to the ICT industry.[16] While 2% may not seem like a very large portion, it translates to around 860 million tons of greenhouse gas emissions.[17] Greenhouse gas emissions are the main driver of natural disasters, such as flooding and drought, which are increasing in severity and occurrence every year. Currently, roughly 85% of the energy used in the United States can be attributed to fossil fuel consumption.[18] The dwindling availability of fossil fuels and the environmental burden of releasing their emissions into our atmosphere signal an immediate need to shift to other energy sources. Without a shift to other forms of energy production, the additional technology enabled by the implementation of 5G will raise the strain on our environment, and the damage may never be repaired. With an increase in energy usage through technology and the implementation of 5G, the climate change issues faced today will only worsen.
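The 2% and 860-million-ton figures imply a global emissions baseline that is easy to back out, a quick consistency check on the numbers quoted in the text rather than an independent estimate:

```python
# Sanity-check the ICT emissions figures quoted in the text.
ict_share = 0.02          # ICT's stated share of global GHG emissions
ict_emissions_mt = 860.0  # that share expressed in million tons (Mt)

# Implied global baseline: 860 Mt / 0.02 = 43,000 Mt = 43 Gt,
# consistent with commonly cited global greenhouse gas totals.
implied_global_gt = ict_emissions_mt / ict_share / 1000
print(f"implied global emissions: {implied_global_gt:.0f} Gt")
```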
The overall contribution of carbon dioxide emissions from the ICT industry has a huge impact on climate change and will have even larger impacts without proper action. In a European Union report, researchers estimated that keeping the increase in global temperature below 2° Celsius would require a decrease in carbon emissions of around 15-30% by 2020.

Engineers claim that the small cells used to provide the 5G connection will be energy-efficient and powered in a sustainable way; however, the maintenance and production of these cells is more of an issue. Supporters of the 5G network advocate that the small cells will use solar or wind energy to stay sustainable and green.[20] These devices, labeled "fuel-cell energy servers," will work as clean-energy generators for the small cells.[21] While implementing base stations that run on sustainable energy would be a step in the right direction for environmental conservation, it does not address the main issue caused by 5G: the impact that the massive number of new devices in consumers' hands will have on the amount of energy required to power them. The wasteful manufacturing and maintenance of both individual devices and the devices that deliver the 5G connection could become a major contributor to climate change. 5G's promise to expand the number of devices in operation might be the most troubling aspect of the new technology. Cell phones, computers, and other everyday devices are manufactured in a way that puts stress on the environment.
A report by the EPA estimated that in 2010, 25% of the world's greenhouse gas emissions came from electricity and heat production, making it the largest single source of emissions.[22] The main gas emitted by this sector is carbon dioxide, due to the burning of fossil fuels such as coal to generate electricity.[23] Carbon dioxide is one of the most common greenhouse gases in our atmosphere: it traps heat that would otherwise escape into space, warming the atmosphere and driving climate change.

Increased consumption of devices is taking a toll on the environment. As consumers gain access to more technologies, the cycle of consumption only expands: as new devices are developed, older devices are thrown out even if they are still functional. Often, big companies will purposefully change their products in ways that make certain partner devices (such as chargers or earphones) unusable, creating demand for new products. Economic incentives mean that companies will continue these practices in spite of the environmental impacts.

One of the main issues with the 5G network and the resulting increase in consumption of technological devices is that the production these devices require is not sustainable. Making new devices, whether new smartphones or the small cells needed for 5G, requires nonrenewable metals. It is extremely difficult to use metals sustainably in manufacturing, because metals are not a renewable resource, and the metals used in today's smart devices often cannot be recycled the way many household items can. Because these technologies cannot be recycled, they create tons of waste both when they are made and when they are thrown away. There are around six billion mobile devices in use today, a number expected to increase drastically as the global population grows and new devices enter the market.
One estimate of the lifetime carbon emissions of a single device, not including related accessories and network connection, is that a device produces a total of 45 kg of carbon dioxide at a medium level of usage over three years, comparable to driving the average European car for 300 km. The most environmentally taxing stage of a mobile device's life cycle is production, where around 68% of total carbon emissions is generated, equating to 30 kg of carbon dioxide. To put this into perspective, an iPhone X weighs approximately 0.174 kg, so producing the actual device also creates 172 iPhone X's worth of carbon dioxide. These emissions vary from person to person and between devices, but it is possible to estimate the impact one device has on the environment. 5G grants the capacity for more devices to be used, significantly increasing the existing carbon footprint of today's smart devices.

Energy usage by the ever-growing number of devices on the market and in homes is another environmental threat that would be greatly increased by the new capabilities the 5G network brings. Energy forecasts often overlook the amount of energy that new technologies will consume, which leads to a skewed understanding of the amount of energy actually expected to be used.[30] One example is IoT devices.[31] IoT is one of the aspects of 5G that people in the technology field are most excited about, and 5G will allow a larger expansion of IoT into the everyday household.[32] While some IoT devices promise lower energy usage, the 50 billion new IoT devices expected to be produced and used by consumers will surpass the energy used by today's electronics.
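The per-device carbon figures above can be checked with back-of-the-envelope arithmetic. All inputs come from the text; the iPhone X mass is an approximation:

```python
# Back-of-the-envelope check of the per-device carbon figures in the text.
lifetime_kg_co2 = 45.0    # total emissions, medium usage over three years
production_share = 0.68   # fraction attributed to the production stage
iphone_x_mass_kg = 0.174  # approximate mass of an iPhone X

production_kg_co2 = lifetime_kg_co2 * production_share
# 45 * 0.68 = 30.6 kg; the text rounds this to 30 kg.
print(f"production stage: ~{production_kg_co2:.1f} kg CO2")

# How many device-masses of CO2 does the 30 kg production figure represent?
ratio = 30.0 / iphone_x_mass_kg
print(f"~{ratio:.0f} iPhone X masses of CO2")  # matches the text's 172
```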

The small cells required for the 5G network to function properly raise another waste issue. Because of the weakness of the millimeter waves used in 5G, small cells will need to be placed around 250 meters apart to ensure continuous connection. The main issue is that manufacturing and maintaining these cells will create a great deal of waste: producing the technology consumes nonrenewable resources, and discarded hardware ends up in landfills. Implementing small cells in large cities, where they must be placed at such high density, will have a drastic impact on technology waste. Technology is constantly changing and improving, which is one of the main reasons it has such high economic value; but when a technological advancement in small cells happens, the current small cells would have to be replaced. The short lifespan of devices created today makes waste predictable and inevitable. In New York City, where there would have to be at least 3,135,200 small cells, the waste created in just one city whenever a new advancement in small cells is implemented would have overwhelming consequences for the environment. 5G is just one of many examples of how important it is to examine the consequences of new advancements before their implementation. While it is exciting to see new technology that promises to improve everyday life, the consequences of additional waste and energy usage must be considered to preserve a sustainable environment for the future.

There is some evidence that the new devices and technologies associated with 5G will be harmful to delicate ecosystems. The main component of the 5G network that will affect the earth's ecosystems is the millimeter waves, which have never been used at such scale before.
This makes it especially difficult to know how they will impact the environment and certain ecosystems. However, studies have found some harms caused by these new technologies. The millimeter waves, specifically, have been linked to disturbances in the ecosystems of birds. In a study by the Centre for Environment and Vocational Studies of Punjab University, researchers observed that after exposure to radiation from a cell tower for just 5-30 minutes, the eggs of sparrows were disfigured.[34] Disfiguration after such a short exposure is significant considering that the new 5G network will have a much higher density of base stations (small cells) throughout areas needing connection. With so many small cells placed throughout areas where birds live, whole populations could develop mutations that threaten their survival. Additionally, a study done in Spain showed that breeding, nesting, and roosting were negatively affected by microwave radiation emitted by a cell tower. Again, the increase in the number of connection points, in the form of small cells providing 5G coverage, appears harmful to species that live around humans.

Additionally, Warnke found that cellular devices had a detrimental impact on bees.[36] In this study, beehives exposed for just ten minutes to 900 MHz waves fell victim to colony collapse disorder, in which most of the bees abandon the hive, leaving behind the queen, the eggs, and a few worker bees. The worker bees exposed to this radiation also had worsened navigational skills, causing them to stop returning to their original hive after about ten days. Bees are an incredibly important part of the earth's ecosystem: around one-third of the food produced today depends on bees for pollination, making bees a vital part of the agricultural system.
Bees not only pollinate the plant-based food we eat; they are also important to maintaining the food livestock eat. Without bees, a vast majority of the food eaten today would be lost, or at the very least highly limited, and climate change has already caused a large decline in the world's bee population. The impact that cell towers have on birds and bees is important to understand because all of the earth's ecosystems are interconnected: if one component of an ecosystem is disrupted, the whole system is affected. The disturbance of birds by today's cell towers would only increase, because 5G requires a larger number of small-cell, radio-tower-like devices to ensure high-quality connection for users. A larger number of high concentrations of these millimeter waves, in the form of small cells, would expose bees and birds more widely, and possibly other species that are equally important to our environment.

As innovation continues, it is important that big mobile companies around the world consider the impact 5G will have on the environment before pushing to have it widely implemented. The companies pushing for the expansion of 5G may stand to make short-term economic gains, and while the new network will undoubtedly benefit consumers greatly, looking at 5G's long-term environmental impacts is also very important so that the risks are clearly understood and articulated. The technology needed to power the new 5G network will inevitably change how mobile devices are used, as well as their capabilities, and it will also change the way technology and the environment interact. The change from radio waves to millimeter waves and the new use of small cells will allow more devices to be used and manufactured, cause more energy to be used, and have detrimental consequences for important ecosystems.
While it is unrealistic to call for 5G not to become the new network norm, companies, governments, and consumers should be proactive in understanding the impact this new technology will have on the environment. 5G developers should carry out Environmental Impact Assessments that fully estimate the technology's effects before rushing to implement it widely. Environmental Impact Assessments identify, prevent, and mitigate environmental harm while maximizing potential benefits to the environment, which is imperative to ensuring that the environment remains sustainable and sound in the future. Additionally, Life Cycle Assessments (LCAs) of devices would be extremely beneficial for understanding the impact that 5G will inevitably have. An LCA assesses a device's carbon emissions across its entire life span, from the manufacturing of the device, through the energy required to power it, to the waste created when it is discarded into a landfill or other disposal system. With full awareness of the impact new technology will have on the environment, ways to combat the negative effects can be developed and implemented effectively.

 

jsis.washington.edu/news/what-will-5g-mean-for-the-enviro...

  

A signal is a mechanical or electrical device erected beside a railway line to pass information relating to the state of the line ahead to train/engine drivers. The driver interprets the signal's indication and acts accordingly. Typically, a signal might inform the driver of the speed at which the train may safely proceed or it may instruct the driver to stop.

 

One of the earliest forms of fixed railway signal is the semaphore, like the ones seen here. These signals display their different indications to train drivers by changing the angle of inclination of a pivoted 'arm'. Semaphore signals were patented in the early 1840s by Joseph James Stevens, and soon became the most widely used form of mechanical signal. Designs have altered over the intervening years, and colour light signals have replaced semaphore signals in some countries, but in others they remain in use.

 

Engine sheds could be found in many towns and cities as well as in rural locations. They were built by the railway companies to house the locomotives that worked their local train services. Each engine shed had an allocation of locomotives reflecting the duties carried out by that depot. Most depots had a mixture of passenger, freight and shunting locomotives, but some, such as Mexborough, had predominantly freight locomotives, reflecting the industrial nature of that area of South Yorkshire. Others, such as Kings Cross engine shed in London, predominantly provided locomotives for passenger workings.

 

This view is on the Romney, Hythe & Dymchurch Railway (RH&DR) which is a 15 in (381 mm) gauge light railway in Kent, England, operating steam and internal combustion locomotives. The 13 3⁄4-mile (22.1 km) line runs from the Cinque Port of Hythe via Dymchurch, St. Mary's Bay, New Romney and Romney Sands to Dungeness, close to Dungeness nuclear power station and Dungeness Lighthouse.

 

This is at New Romney railway station which has always been the headquarters location of the railway.

 

There is a signal box for local train control, and also the main Control Centre for train operation across the whole railway. The latter is staffed by a Control Officer, who is in constant radio contact with all signal boxes, locomotives, and (where appropriate) station staff, travelling guards, and engineering teams.

 

This original engine shed is still in use, but was designed to accommodate only nine locomotives. In recent years it has been considerably extended, more than doubling the original size. This shed is now capable of housing all the railway's locomotives, as well as an engineering centre capable of work from minor running repairs to full locomotive overhauls, together with the necessary mess facilities for engineering staff. Also on the New Romney site are a separate locomotive erecting shop, and a paint shop where locomotives and other rolling stock can be re-liveried. Although there is a secondary engine shed at Hythe station, all locomotives are now based at New Romney locomotive shed.

 

en.wikipedia.org/wiki/Railway_signal

 

en.wikipedia.org/wiki/Railway_semaphore_signal

 

en.wikipedia.org/wiki/Motive_power_depot

 

en.wikipedia.org/wiki/Romney,_Hythe_and_Dymchurch_Railway

 

en.wikipedia.org/wiki/New_Romney_railway_station

 

From the mean streets of New York, my friend Wendy brought me back a battery-operated bubble maker. I had no idea that bubble-making technology had advanced so far, and I had no idea that what my life, up til now, had been missing was one of these clever devices. You can keep your iPods and -Pads -- just give me my bubble maker.

 

In honor of the bubble maker, I am declaring this week Bubble Week. Prepare to be awash in a sea of bubbles (while I get this out of my system). Anyone care to join me? Let's all bubble!

Rope channels in a fallen monolithic shaft from the Temple of Apollo at Syracuse. Early 6th century BC.

On our return trip on the Yellow Line BART, these two were traveling together. They were absorbed in their individual devices for the whole ride. February 16, 2024

IR converted Canon Rebel XTi. AEB +/-2 total of 3 exposures processed with Photomatix.

 

High Dynamic Range (HDR)

 

High-dynamic-range imaging (HDRI) is a high dynamic range (HDR) technique used in imaging and photography to reproduce a greater dynamic range of luminosity than is possible with standard digital imaging or photographic techniques. The aim is to present a similar range of luminance to that experienced through the human visual system. The human eye, through adaptation of the iris and other methods, adjusts constantly to adapt to a broad range of luminance present in the environment. The brain continuously interprets this information so that a viewer can see in a wide range of light conditions.

 

HDR images can represent a greater range of luminance levels than can be achieved using more 'traditional' methods; this is useful for real-world scenes that range from very bright, direct sunlight to extreme shade, and for very faint nebulae. It is often achieved by capturing and then combining several different, narrower-range exposures of the same subject matter. Non-HDR cameras take photographs with a limited exposure range, referred to as LDR, resulting in the loss of detail in highlights or shadows.

 

The two primary types of HDR images are computer renderings and images resulting from merging multiple low-dynamic-range (LDR) or standard-dynamic-range (SDR) photographs. HDR images can also be acquired using special image sensors, such as an oversampled binary image sensor.

 

Due to the limitations of printing and display contrast, the extended luminosity range of an HDR image has to be compressed to be made visible. The method of rendering an HDR image to a standard monitor or printing device is called tone mapping. This method reduces the overall contrast of an HDR image to facilitate display on devices or printouts with lower dynamic range, and can be applied to produce images with preserved local contrast (or exaggerated for artistic effect).

 

In photography, dynamic range is measured in exposure value (EV) differences, known as stops. An increase of one EV, or 'one stop', represents a doubling of the amount of light; conversely, a decrease of one EV represents a halving of the amount of light. Therefore, revealing detail in the darkest shadows requires high exposures, while preserving detail in very bright areas requires very low exposures. Most cameras cannot provide this range of exposure values within a single exposure, due to their limited dynamic range. High-dynamic-range photographs are generally achieved by capturing multiple standard-exposure images, often using exposure bracketing, and then merging them into a single HDR image, usually within a photo manipulation program. Digital images are often encoded in a camera's raw image format, because 8-bit JPEG encoding does not offer a wide enough range of values to allow fine transitions (and, for HDR work, lossy compression later introduces undesirable artifacts).
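
The EV arithmetic above can be sketched in a couple of lines of Python (an illustrative calculation only; the function name is mine, not a standard API):

```python
# Each EV step doubles (or halves) the light reaching the sensor,
# so the ratio between two exposures is 2 raised to the EV difference.
def light_ratio(delta_ev):
    """Factor by which the amount of light changes for a given EV difference."""
    return 2.0 ** delta_ev

print(light_ratio(3))   # a +3 EV exposure admits 8.0x the light
print(light_ratio(-1))  # a -1 EV exposure admits 0.5x the light
```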

 

Any camera that allows manual exposure control can make images for HDR work, although one equipped with auto exposure bracketing (AEB) is far better suited. Images from film cameras are less suitable as they often must first be digitized, so that they can later be processed using software HDR methods.

 

In most imaging devices, the degree of exposure to light applied to the active element (be it film or CCD) can be altered in one of two ways: by either increasing/decreasing the size of the aperture or by increasing/decreasing the time of each exposure. Exposure variation in an HDR set is only done by altering the exposure time and not the aperture size; this is because altering the aperture size also affects the depth of field and so the resultant multiple images would be quite different, preventing their final combination into a single HDR image.
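
As a sketch of this rule, the hypothetical helper below builds a bracketed set of shutter times around a base exposure while holding aperture fixed; the name and defaults are assumptions for illustration, not taken from any camera's firmware:

```python
def bracket_times(base_time, step_ev=2.0, frames=3):
    """Shutter times for an HDR bracket centred on base_time.
    Only exposure time varies; aperture is left alone so that
    depth of field stays constant across the set."""
    half = frames // 2
    return [base_time * 2.0 ** (step_ev * i) for i in range(-half, half + 1)]

# A 3-frame, +/-2 EV bracket around 1/60 s:
times = bracket_times(1 / 60)
print(times)  # roughly 1/240 s, 1/60 s, and 1/15 s
```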

 

An important limitation for HDR photography is that any movement between successive images will impede or prevent success in combining them afterwards. Also, as one must create several images (often three or five and sometimes more) to obtain the desired luminance range, such a full 'set' of images takes extra time. HDR photographers have developed calculation methods and techniques to partially overcome these problems, but the use of a sturdy tripod is, at least, advised.

 

Some cameras have an auto exposure bracketing (AEB) feature with a far greater dynamic range than others, from the 3 EV of the Canon EOS 40D to the 18 EV of the Canon EOS-1D Mark II. As the popularity of this imaging method grows, several camera manufacturers now offer built-in HDR features. For example, the Pentax K-7 DSLR has an HDR mode that captures an HDR image and outputs (only) a tone-mapped JPEG file. The Canon PowerShot G12, Canon PowerShot S95 and Canon PowerShot S100 offer similar features in a smaller format. Nikon's approach, called 'Active D-Lighting', applies exposure compensation and tone mapping to the image as it comes from the sensor, with the accent on retaining a realistic effect. Some smartphones provide HDR modes, and most mobile platforms have apps that provide HDR picture taking.

 

Camera characteristics such as gamma curves, sensor resolution, noise, photometric calibration and color calibration affect resulting high-dynamic-range images.

 

Color film negatives and slides consist of multiple film layers that respond to light differently. As a consequence, transparent originals (especially positive slides) feature a very high dynamic range.

 

Tone mapping

Tone mapping reduces the dynamic range, or contrast ratio, of an entire image while retaining localized contrast. Although it is a distinct operation, tone mapping is often applied to HDRI files by the same software package.
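
A minimal sketch of a global tone-mapping operator, assuming linear luminance values in a NumPy array; this is the simple Reinhard curve L/(1+L), whereas the packages listed below layer local-contrast handling on top of such global curves:

```python
import numpy as np

def reinhard_tonemap(luminance):
    """Compress unbounded linear luminance into [0, 1) via L / (1 + L)."""
    return luminance / (1.0 + luminance)

hdr = np.array([0.01, 1.0, 100.0, 10000.0])  # ~6 decades of luminance
print(reinhard_tonemap(hdr))  # every value now fits below 1.0
```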

 

Several software applications are available on the PC, Mac and Linux platforms for producing HDR files and tone mapped images. Notable titles include:

 

Adobe Photoshop

Aurora HDR

Dynamic Photo HDR

HDR Efex Pro

HDR PhotoStudio

Luminance HDR

MagicRaw

Oloneo PhotoEngine

Photomatix Pro

PTGui

 

Information stored in high-dynamic-range images typically corresponds to the physical values of luminance or radiance that can be observed in the real world. This is different from traditional digital images, which represent colors as they should appear on a monitor or a paper print. Therefore, HDR image formats are often called scene-referred, in contrast to traditional digital images, which are device-referred or output-referred. Furthermore, traditional images are usually encoded for the human visual system (maximizing the visual information stored in the fixed number of bits), which is usually called gamma encoding or gamma correction. The values stored for HDR images are often gamma compressed (power law) or logarithmically encoded, or floating-point linear values, since fixed-point linear encodings are increasingly inefficient over higher dynamic ranges.

 

Unlike traditional images, HDR images often do not use fixed ranges per color channel, which lets them represent many more colors over a much wider dynamic range. Instead of integer values for each color channel (e.g., 0-255 per channel in an 8-bit-per-channel image for red, green and blue), they use a floating-point representation, commonly 16-bit (half precision) or 32-bit floating-point numbers per HDR pixel. However, when an appropriate transfer function is used, HDR pixels for some applications can be represented with a color depth of as few as 10-12 bits for luminance and 8 bits for chrominance without introducing any visible quantization artifacts.
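
The contrast between fixed-range integer channels and floating-point channels can be seen in a short NumPy sketch (half precision stands in here for a typical HDR pixel format):

```python
import numpy as np

linear = np.array([0.5, 1.0, 100.0])  # scene values; 100.0 is far "over range"

# 8-bit integer channels clip anything beyond their fixed 0-255 range.
eight_bit = np.clip(linear * 255, 0, 255).astype(np.uint8)
print(eight_bit)  # the bright value has been clipped to 255

# Half-precision floats keep the relative magnitudes intact.
half = linear.astype(np.float16)
print(half)  # 0.5, 1.0 and 100.0 all survive
```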

 

History of HDR photography

The idea of using several exposures to adequately reproduce a too-extreme range of luminance was pioneered as early as the 1850s by Gustave Le Gray to render seascapes showing both the sky and the sea. Such rendering was impossible at the time using standard methods, as the luminosity range was too extreme. Le Gray used one negative for the sky, and another one with a longer exposure for the sea, and combined the two into one picture in positive.

 

Mid 20th century

Manual tone mapping was accomplished by dodging and burning – selectively increasing or decreasing the exposure of regions of the photograph to yield better tonality reproduction. This was effective because the dynamic range of the negative is significantly higher than would be available on the finished positive paper print when that is exposed via the negative in a uniform manner. An excellent example is the photograph Schweitzer at the Lamp by W. Eugene Smith, from his 1954 photo essay A Man of Mercy on Dr. Albert Schweitzer and his humanitarian work in French Equatorial Africa. The image took 5 days to reproduce the tonal range of the scene, which ranges from a bright lamp (relative to the scene) to a dark shadow.

 

Ansel Adams elevated dodging and burning to an art form. Many of his famous prints were manipulated in the darkroom with these two methods. Adams wrote a comprehensive book on producing prints called The Print, which prominently features dodging and burning, in the context of his Zone System.

 

With the advent of color photography, tone mapping in the darkroom was no longer possible due to the specific timing needed during the developing process of color film. Photographers looked to film manufacturers to design new film stocks with improved response, or continued to shoot in black and white to use tone mapping methods.

 

Color film capable of directly recording high-dynamic-range images was developed by Charles Wyckoff and EG&G "in the course of a contract with the Department of the Air Force". This XR film had three emulsion layers, an upper layer having an ASA speed rating of 400, a middle layer with an intermediate rating, and a lower layer with an ASA rating of 0.004. The film was processed in a manner similar to color films, and each layer produced a different color. The dynamic range of this extended range film has been estimated as 1:10⁸. It has been used to photograph nuclear explosions, for astronomical photography, for spectrographic research, and for medical imaging. Wyckoff's detailed pictures of nuclear explosions appeared on the cover of Life magazine in the mid-1950s.

 

Late 20th century

Georges Cornuéjols and licensees of his patents (Brdi, Hymatom) introduced the principle of the HDR video image in 1986, by interposing a matricial LCD screen in front of the camera's image sensor, increasing the sensor's dynamic range by five stops. The concept of neighborhood tone mapping was applied to video cameras by a group from the Technion in Israel led by Dr. Oliver Hilsenrath and Prof. Y. Y. Zeevi, who filed for a patent on this concept in 1988.

 

In February and April 1990, Georges Cornuéjols introduced the first real-time HDR camera, which combined two images captured by a sensor, or simultaneously by two sensors of the camera. This process is a form of bracketing applied to a video stream.

 

In 1991, Hymatom, a licensee of Georges Cornuéjols, introduced the first commercial video camera to capture multiple differently exposed images in real time and produce an HDR video image.

 

Also in 1991, Georges Cornuéjols introduced the HDR+ image principle by non-linear accumulation of images to increase the sensitivity of the camera: for low-light environments, several successive images are accumulated, thus increasing the signal to noise ratio.

 

In 1993, the Technion introduced another commercial medical camera producing an HDR video image.

 

Modern HDR imaging uses a completely different approach, based on making a high-dynamic-range luminance or light map using only global image operations (across the entire image), and then tone mapping the result. Global HDR was first introduced in 1993, resulting in a mathematical theory of differently exposed pictures of the same subject matter that was published in 1995 by Steve Mann and Rosalind Picard.

 

On October 28, 1998, Ben Sarao created one of the first nighttime HDR+G (High Dynamic Range + Graphic) images, of STS-95 on the launch pad at NASA's Kennedy Space Center. It consisted of four film images of the shuttle at night that were digitally composited with additional digital graphic elements. The image was first exhibited at NASA Headquarters' Great Hall, Washington DC, in 1999 and then published in Hasselblad Forum, Issue 3 1999, Volume 35, ISSN 0282-5449.

 

The advent of consumer digital cameras produced a new demand for HDR imaging to improve the light response of digital camera sensors, which had a much smaller dynamic range than film. Steve Mann developed and patented the global-HDR method for producing digital images having extended dynamic range at the MIT Media Laboratory. Mann's method involved a two-step procedure: (1) generate one floating point image array by global-only image operations (operations that affect all pixels identically, without regard to their local neighborhoods); and then (2) convert this image array, using local neighborhood processing (tone-remapping, etc.), into an HDR image. The image array generated by the first step of Mann's process is called a lightspace image, lightspace picture, or radiance map. Another benefit of global-HDR imaging is that it provides access to the intermediate light or radiance map, which has been used for computer vision, and other image processing operations.
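
A heavily simplified sketch of the first, global step is shown below, assuming a linear sensor response and known exposure times; real implementations (Mann and Picard's, and later Debevec's) first recover the camera's response curve rather than assuming linearity:

```python
import numpy as np

def radiance_map(images, times):
    """Estimate a relative radiance map from differently exposed,
    linear images in [0, 1] by averaging value / exposure_time,
    skipping saturated pixels (a global-only operation: every
    pixel is treated identically, regardless of its neighbours)."""
    acc = np.zeros(images[0].shape, dtype=np.float64)
    weight = np.zeros_like(acc)
    for img, t in zip(images, times):
        valid = img < 0.99          # ignore (nearly) saturated pixels
        acc[valid] += img[valid] / t
        weight[valid] += 1.0
    return acc / np.maximum(weight, 1.0)

# Two exposures of a scene whose true radiances are 0.1 and 2.0:
scene = np.array([0.1, 2.0])
exposures = [np.clip(scene * t, 0.0, 1.0) for t in (1.0, 0.25)]
print(radiance_map(exposures, [1.0, 0.25]))  # recovers [0.1, 2.0]
```

The result corresponds to the lightspace image or radiance map described above; the second, local step (tone mapping) would then convert it for display.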

 

21st century

In 2005, Adobe Systems introduced several new features in Photoshop CS2 including Merge to HDR, 32 bit floating point image support, and HDR tone mapping.

 

On June 30, 2016, Microsoft added support for the digital compositing of HDR images to Windows 10 using the Universal Windows Platform.

 

HDR sensors

Modern CMOS image sensors can often capture a high dynamic range from a single exposure. The wide dynamic range of the captured image is non-linearly compressed into a smaller dynamic range electronic representation. However, with proper processing, the information from a single exposure can be used to create an HDR image.

 

Such HDR imaging is used in extreme dynamic range applications like welding or automotive work. Some other cameras, designed for use in security applications, can automatically provide two or more images for each frame with changing exposure. For example, a sensor for 30 fps video will give out 60 fps, with the odd frames at a short exposure time and the even frames at a longer exposure time. Some sensors may even combine the two images on-chip, so that a wider dynamic range without in-pixel compression is directly available to the user for display or processing.

 

en.wikipedia.org/wiki/High-dynamic-range_imaging

 

Infrared Photography

 

In infrared photography, the film or image sensor used is sensitive to infrared light. The part of the spectrum used is referred to as near-infrared to distinguish it from far-infrared, which is the domain of thermal imaging. Wavelengths used for photography range from about 700 nm to about 900 nm. Film is usually sensitive to visible light too, so an infrared-passing filter is used; this lets infrared (IR) light pass through to the camera, but blocks all or most of the visible light spectrum (the filter thus looks black or deep red). ("Infrared filter" may refer either to this type of filter or to one that blocks infrared but passes other wavelengths.)

 

When these filters are used together with infrared-sensitive film or sensors, "in-camera effects" can be obtained; false-color or black-and-white images with a dreamlike or sometimes lurid appearance known as the "Wood Effect," an effect mainly caused by foliage (such as tree leaves and grass) strongly reflecting in the same way visible light is reflected from snow. There is a small contribution from chlorophyll fluorescence, but this is marginal and is not the real cause of the brightness seen in infrared photographs. The effect is named after the infrared photography pioneer Robert W. Wood, and not after the material wood, which does not strongly reflect infrared.

 

The other attributes of infrared photographs include very dark skies and penetration of atmospheric haze, caused by reduced Rayleigh scattering and Mie scattering, respectively, compared to visible light. The dark skies, in turn, result in less infrared light in shadows and dark reflections of those skies from water, and clouds will stand out strongly. These wavelengths also penetrate a few millimeters into skin and give a milky look to portraits, although eyes often look black.

 

Until the early 20th century, infrared photography was not possible because silver halide emulsions are not sensitive to longer wavelengths than that of blue light (and to a lesser extent, green light) without the addition of a dye to act as a color sensitizer. The first infrared photographs (as distinct from spectrographs) to be published appeared in the February 1910 edition of The Century Magazine and in the October 1910 edition of the Royal Photographic Society Journal to illustrate papers by Robert W. Wood, who discovered the unusual effects that now bear his name. The RPS co-ordinated events to celebrate the centenary of this event in 2010. Wood's photographs were taken on experimental film that required very long exposures; thus, most of his work focused on landscapes. A further set of infrared landscapes taken by Wood in Italy in 1911 used plates provided for him by CEK Mees at Wratten & Wainwright. Mees also took a few infrared photographs in Portugal in 1910, which are now in the Kodak archives.

 

Infrared-sensitive photographic plates were developed in the United States during World War I for spectroscopic analysis, and infrared sensitizing dyes were investigated for improved haze penetration in aerial photography. After 1930, new emulsions from Kodak and other manufacturers became useful to infrared astronomy.

 

Infrared photography became popular with photography enthusiasts in the 1930s when suitable film was introduced commercially. The Times regularly published landscape and aerial photographs taken by its staff photographers using Ilford infrared film. By 1937, 33 kinds of infrared film were available from five manufacturers, including Agfa, Kodak and Ilford. Infrared movie film was also available and was used to create day-for-night effects in motion pictures, a notable example being the pseudo-night aerial sequences in the James Cagney/Bette Davis movie The Bride Came C.O.D.

 

False-color infrared photography became widely practiced with the introduction of Kodak Ektachrome Infrared Aero Film and Ektachrome Infrared EIR. The first version, known as Kodacolor Aero-Reversal-Film, was developed by Clark and others at Kodak for camouflage detection in the 1940s. The film became more widely available in 35 mm form in the 1960s, but KODAK AEROCHROME III Infrared Film 1443 has since been discontinued.

 

Infrared photography became popular with a number of 1960s recording artists because of the unusual results; Jimi Hendrix, Donovan, and Frank Zappa were among those who used infrared photographs on album covers. Because infrared light focuses at a slightly different point from visible light, acceptably sharp photos can often be made with a small aperture and a slow shutter speed without focus compensation; however, wider apertures like f/2.0 can produce sharp photos only if the lens is meticulously refocused to the infrared index mark, and only if this index mark is the correct one for the filter and film in use. Diffraction effects inside a camera are also greater at infrared wavelengths, so stopping down the lens too far may actually reduce sharpness.

 

Most apochromatic ('APO') lenses do not have an Infrared index mark and do not need to be refocused for the infrared spectrum because they are already optically corrected into the near-infrared spectrum. Catadioptric lenses do not often require this adjustment because their mirror containing elements do not suffer from chromatic aberration and so the overall aberration is comparably less. Catadioptric lenses do, of course, still contain lenses, and these lenses do still have a dispersive property.

 

Infrared black-and-white films require special development times but development is usually achieved with standard black-and-white film developers and chemicals (like D-76). Kodak HIE film has a polyester film base that is very stable but extremely easy to scratch, therefore special care must be used in the handling of Kodak HIE throughout the development and printing/scanning process to avoid damage to the film. The Kodak HIE film was sensitive to 900 nm.

 

As of November 2, 2007, "KODAK is preannouncing the discontinuance" of HIE Infrared 35 mm film stating the reasons that, "Demand for these products has been declining significantly in recent years, and it is no longer practical to continue to manufacture given the low volume, the age of the product formulations and the complexity of the processes involved." At the time of this notice, HIE Infrared 135-36 was available at a street price of around $12.00 a roll at US mail order outlets.

 

Arguably the greatest obstacle to infrared film photography has been the increasing difficulty of obtaining infrared-sensitive film. Despite the discontinuance of HIE, newer infrared-sensitive emulsions from EFKE, ROLLEI, and ILFORD are still available, though these formulations differ in sensitivity and specifications from the venerable KODAK HIE, which had been around for at least two decades. Some of these infrared films are available in 120 and larger formats as well as 35 mm, which adds flexibility to their application. With the discontinuance of Kodak HIE, Efke's IR820 became the only IR film on the market with good sensitivity beyond 750 nm; the Rollei film does extend beyond 750 nm, but its IR sensitivity falls off very rapidly.

  

Color infrared transparency films have three sensitized layers that, because of the way the dyes are coupled to these layers, reproduce infrared as red, red as green, and green as blue. All three layers are sensitive to blue so the film must be used with a yellow filter, since this will block blue light but allow the remaining colors to reach the film. The health of foliage can be determined from the relative strengths of green and infrared light reflected; this shows in color infrared as a shift from red (healthy) towards magenta (unhealthy). Early color infrared films were developed in the older E-4 process, but Kodak later manufactured a color transparency film that could be developed in standard E-6 chemistry, although more accurate results were obtained by developing using the AR-5 process. In general, color infrared does not need to be refocused to the infrared index mark on the lens.

 

In 2007, Kodak announced that production of the 35 mm version of their color infrared film (Ektachrome Professional Infrared/EIR) would cease, as there was insufficient demand. Since 2011, all formats of color infrared film, specifically Aerochrome 1443 and SO-734, have been discontinued.

 

There is no currently available digital camera that will produce the same results as Kodak color infrared film although the equivalent images can be produced by taking two exposures, one infrared and the other full-color, and combining in post-production. The color images produced by digital still cameras using infrared-pass filters are not equivalent to those produced on color infrared film. The colors result from varying amounts of infrared passing through the color filters on the photo sites, further amended by the Bayer filtering. While this makes such images unsuitable for the kind of applications for which the film was used, such as remote sensing of plant health, the resulting color tonality has proved popular artistically.

 

Color digital infrared, as part of full spectrum photography is gaining popularity. The ease of creating a softly colored photo with infrared characteristics has found interest among hobbyists and professionals.

 

In 2008, Los Angeles photographer Dean Bennici started cutting and hand-rolling Aerochrome color infrared film. All Aerochrome medium and large format film that exists today came directly from his lab. The trend in infrared photography continues to gain momentum with the success of photographer Richard Mosse and many practitioners around the world.

 

Digital camera sensors are inherently sensitive to infrared light, which would interfere with the normal photography by confusing the autofocus calculations or softening the image (because infrared light is focused differently from visible light), or oversaturating the red channel. Also, some clothing is transparent in the infrared, leading to unintended (at least to the manufacturer) uses of video cameras. Thus, to improve image quality and protect privacy, many digital cameras employ infrared blockers. Depending on the subject matter, infrared photography may not be practical with these cameras because the exposure times become overly long, often in the range of 30 seconds, creating noise and motion blur in the final image. However, for some subject matter the long exposure does not matter or the motion blur effects actually add to the image. Some lenses will also show a 'hot spot' in the centre of the image as their coatings are optimised for visible light and not for IR.

 

An alternative method of DSLR infrared photography is to remove the infrared blocker in front of the sensor and replace it with a filter that removes visible light. This filter is behind the mirror, so the camera can be used normally - handheld, normal shutter speeds, normal composition through the viewfinder, and focus, all work like a normal camera. Metering works but is not always accurate because of the difference between visible and infrared refraction. When the IR blocker is removed, many lenses which did display a hotspot cease to do so, and become perfectly usable for infrared photography. Additionally, because the red, green and blue micro-filters remain and have transmissions not only in their respective color but also in the infrared, enhanced infrared color may be recorded.

 

Since the Bayer filters in most digital cameras absorb a significant fraction of the infrared light, these cameras are sometimes not very sensitive as infrared cameras and can sometimes produce false colors in the images. An alternative approach is to use a Foveon X3 sensor, which does not have absorptive filters on it; the Sigma SD10 DSLR has a removable IR blocking filter and dust protector, which can be simply omitted or replaced by a deep red or complete visible light blocking filter. The Sigma SD14 has an IR/UV blocking filter that can be removed/installed without tools. The result is a very sensitive digital IR camera.

 

While it is common to use a filter that blocks almost all visible light, the wavelength sensitivity of a digital camera without internal infrared blocking is such that a variety of artistic results can be obtained with more conventional filtration. For example, a very dark neutral density filter can be used (such as the Hoya ND400), which passes a very small amount of visible light compared to the near-infrared it allows through. Wider filtration permits an SLR viewfinder to be used and also passes more varied color information to the sensor without necessarily reducing the Wood effect. Wider filtration is, however, likely to reduce other infrared effects such as haze penetration and darkened skies. This technique mirrors the methods of infrared film photographers, where black-and-white infrared film was often used with a deep red filter rather than a visually opaque one.

 

Another common technique with near-infrared filters is to swap the blue and red channels in software (e.g. Photoshop), which retains much of the characteristic 'white foliage' while rendering skies a glorious blue.
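As a minimal sketch of that channel swap (assuming the image is available as a flat list of (r, g, b) tuples, the layout libraries such as Pillow expose via `Image.getdata()`), the operation is just an exchange of the first and third channel values per pixel:

```python
def swap_red_blue(pixels):
    """Swap the red and blue channels of an RGB image.

    `pixels` is a flat list of (r, g, b) tuples. Near-infrared
    captures typically come out with a strong red cast; exchanging
    the red and blue channels shifts that cast so skies render blue
    while bright foliage stays near-white.
    """
    return [(b, g, r) for (r, g, b) in pixels]

# A reddish "sky" pixel becomes blue; a neutral gray pixel is unchanged.
print(swap_red_blue([(200, 40, 30), (180, 180, 180)]))
# → [(30, 40, 200), (180, 180, 180)]
```

In practice the swap is usually applied to the whole image at once (for instance by reordering channels in an array-based library), but the per-pixel form above shows the idea.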

 

Several Sony cameras had the so-called NightShot facility, which physically moves the blocking filter out of the light path, making the cameras very sensitive to infrared light. Soon after its introduction, this facility was 'restricted' by Sony to make it difficult for people to take photos that see through clothing: the iris is opened fully and the exposure duration is limited to long times of 1/30 second or more. It is still possible to shoot infrared, but neutral density filters must be used to reduce the camera's sensitivity, and the long exposure times mean that care must be taken to avoid camera-shake artifacts.

 

Fuji has produced digital cameras for use in forensic criminology and medicine which have no infrared-blocking filter. The first camera, designated the S3 PRO UVIR, also had extended ultraviolet sensitivity (digital sensors are usually less sensitive to UV than to IR). Optimum UV sensitivity requires special lenses, but ordinary lenses usually work well for IR. In 2007, FujiFilm introduced a new version of this camera, the IS Pro, based on the Nikon D200/FujiFilm S5 and also able to take Nikon lenses. Fuji had earlier introduced a non-SLR infrared camera, the IS-1, a modified version of the FujiFilm FinePix S9100. Unlike the S3 PRO UVIR, the IS-1 does not offer UV sensitivity. FujiFilm restricts the sale of these cameras to professional users, with an EULA specifically prohibiting "unethical photographic conduct".

 

Phase One digital camera backs can be ordered in an infrared modified form.

 

Remote sensing and thermographic cameras are sensitive to longer wavelengths of infrared (see Infrared spectrum#Commonly used sub-division scheme). They may be multispectral and use a variety of technologies which may not resemble common camera or filter designs. Cameras sensitive to longer infrared wavelengths including those used in infrared astronomy often require cooling to reduce thermally induced dark currents in the sensor (see Dark current (physics)). Lower cost uncooled thermographic digital cameras operate in the Long Wave infrared band (see Thermographic camera#Uncooled infrared detectors). These cameras are generally used for building inspection or preventative maintenance but can be used for artistic pursuits as well.

 

en.wikipedia.org/wiki/Infrared_photography

 

Device: Walton Primo s3 mini

.

Focal Length : 3.55mm

.

ISO: Auto

.

Exposure: Auto

 

For my upcoming Jabba's palace I've built some technical devices. I've made instructions to show how I used some SNOT techniques.

....as a child I would jump off a swing and pretend I was flying 😀

It's finally time to lift the lid on another artwork commission.

I can honestly say that it's been an absolute privilege to have been involved with all things art & design for New Device right from the get-go, and I was thrilled when the band approached me late last year to commission not one but four pieces of album artwork! Yes indeed, the below is no. 1 in a line of 4 releases the band will be doing this year, and the artwork will tie all 4 records together... That's as much as I can say at this point... stay tuned for more!

Everyone has a right to access our public lands, but few of Glacier's trails were created with accessibility in mind.

 

A first step to addressing limits to accessibility is to identify them.

 

Glacier and the National Park Service are using tools—like the orange, one-wheeled device pictured here in front of two people using hand cycles—to evaluate trails in the park using the High Efficiency Trail Assessment Process (HETAP).

 

HETAP identifies trail variables: grade, cross-slope, trail width, surface material, and more.

 

This data allows park managers to prioritize future trail improvements and visitors to make more informed decisions.

Floatation device @ Montgomery Place Pavilion pool :)

Image from a vintage asbestos abatement industry publication showing workers posing inside an apparent asbestos abatement work area while demonstrating cleaning activities. Abatement workers are depicted with disposable coveralls, supplied-air full-face respirators, and a portable air monitoring device (worker in background).

 

This advertising photo attempts to portray some basic aspects of asbestos abatement, but might have missed a few details for realism in this obviously staged set-up, such as the apparent absence of negative air pressurization acting on the polyethylene-sheet wall and floor barriers. Along the same line, the negative air machine (NAM) itself appears to be placed with its intake opening directly against the enclosure wall, hindering its ability to draw airflow (it is doubtful it was actually activated); the NAM intake should be directed toward the main portion of the work area. Additionally, there doesn't even seem to be an electrical cord leading to the NAM.

 

Further, there seems to be a distinct absence of a wetting agent and associated applicator (no water, hose, or reservoir container); everything appears to be "dry". One of the main factors in proper asbestos abatement dust control is ensuring materials are "adequately wet", which greatly reduces the potential for dust particles to become airborne; this is typically achieved by wetting materials and work-area surfaces before, during, and after ACM removal. Even the worker wiping the enclosure wall should be using a wet towel or damp rag, but where is the bucket of cleaning solution? Plus, such wiping activity is usually reserved for the "final cleaning" stage, well after bulk ACM debris has been removed and containerized.

 

In addition, the presumed "asbestos" debris on the floor should have been "promptly" containerized as it was removed, not allowed to accumulate where it could be further disturbed by trampling, by hoses and equipment haphazardly dragged over it, and so on, likely causing asbestos fibers to become airborne and further contaminate surfaces. Loose bulk debris also compounds cleaning efforts, since more time and resources must be spent decontaminating exposed equipment and supplies from excessive debris build-up. Further, the workers themselves appear to have managed to keep their coveralls and gloves perfectly spotless, an amazing feat inside an "active" asbestos abatement work area during bulk removal.

 

Not to mention that the personal air monitoring device is attached to the worker performing the least risky job function in this example, relative to airborne asbestos fiber exposure: wiping walls. The other workers are pictured vacuuming and shoveling apparent bulk friable insulation material, so air monitoring results would probably not be fully representative of the job tasks with the highest potential exposure risk.

 

A couple of other points: larger accumulations of bulk debris such as this are often cleaned using shovels rather than vacuums, since excessive bulk material shortens the service life of the vacuum's costly HEPA filter, tends to clog it more frequently, and fills the vacuum canister or bag quickly, requiring frequent emptying or bag replacement. HEPA vacuuming is typically employed for residual materials on surfaces, following substantial removal and cleanup of bulk debris.

 

Although perhaps a smidgeon of credit is due, since there doesn't appear to be evidence of a broom or brush inside the work area (at least not on camera). Dry-sweeping asbestos material is strictly prohibited. But, some asbestos abatement workers might have another opinion about that.

 

Also, the kneeling worker holding open the black waste bag does not appear to have an adequate fit "inside" his full-face respirator. The internal seal around his nose and mouth looks breached, consequently not providing the full level of protection these types of respirators are designed for.

 

Ah, but who's looking anyway?

Spring equinox is near in our northern hemisphere, it is time to check and harvest our cameras for this period.

More about solargraphy at: solarigrafia.com

24/7 live-in maid sissy barbie wearing a yellow satin uniform with matching cap, gloves and shoes. The uniform is trimmed with black satin and lace.

 

Close up of the matching yellow padlock through the zip's pull tab and through the two metal rings of the collar. This prevents sissy barbie removing her uniform without permission of Mistress Lady Penelope. Mistress has found this form of discipline highly effective. Only when a long hard day's work has been completed to Mistress' satisfaction might the padlock key and the key to the servant's quarters be given to the maid. Naturally the key to the maid's chastity device is retained by her Mistress. Once sissy barbie has returned to the servant's quarters she can unlock the padlock and remove her uniform. As her petticoat has an attached bodice and shoulder straps, she cannot remove that until the uniform has been removed. After that she may don her nightie and begin her night's sleep, though being a 24/7 live in maid, she is always on call and she may well have to serve her Mistress wearing her nightie.

 

Sometimes, when it is likely she will be called, she is not given the key to the padlock and has to sleep in uniform, ready to 'scramble' within seconds of being called to serve. This does tend to flatten her petticoat, unfortunately, and that has to be rectified the following day.

 

If the maid has been negligent in her duties in any way or does not pass Mistress Lady Penelope's inspection of her work and attire, she will not be given any keys but instead be locked in the sturdy steel cage in Mistress' dungeon as punishment. The cage has a hard floor and is too small for any more than a cat nap. All night the errant maid will be shifting from one uncomfortable position to another in the cold dark dungeon, not knowing the time. She will remain silent to avoid further punishment for waking her Mistress in her nice soft warm bed.

 

The maid will worry all night that she has angered her Mistress so much that Mistress Lady Penelope might decide to leave her locked up for two nights and the intervening day too. Sissy barbie knows that would be very severe punishment and will do everything humanly possible during her working day to avoid making her Mistress decide to do that. She also worries that perhaps Mistress Lady Penelope might be so busy that Mistress simply forgets to release her. The maid knows she is an insignificant convenience in Mistress Lady Penelope's life; Mistress might very well notice when she does not get her breakfast in bed, but then forget that her maid is not cleaning the house, doing the laundry and all the other things she does unseen. Mistress Lady Penelope might get so excited by her social life that she forgets that her maid is locked in a little cage in the dungeon, dutifully remaining silent, not knowing whether it is day or night but acutely feeling the passage of every minute and regretting what she did to be receiving such severe punishment.

 

Sissy barbie will be waiting to hear her Mistress' footsteps and, when she hears them, praying Mistress will come towards the dungeon door, unlock it and turn on the light. The maid will probably cower and quiver in case she is to be punished further, all the time hoping she is to be released, though she knows when that happens she will be required to begin a new day's work, after a visit to the bathroom, fluffing out her petticoat and correcting whatever she did wrong the day before. The loss of one night's sleep is bad, but two nights in a row is torture, so the errant maid will be very careful not to risk another and will perform her duties to the letter despite being very weary. She will be extremely grateful to her Mistress for her release. The maid will have lost all track of time, and if Mistress Lady Penelope is feeling magnanimous, she might tell her maid how long she has been caged.

 

If you are interested in maid training, look at Mistress Lady Penelope's excellent free web site

mistressladypenelope.com

You can make an appointment with Mistress Lady Penelope by calling 07970183024

 

NASAViz List of Stories on the iPhone

 

NASA Visualization Explorer Now Available For All iOS Devices

 

The popular NASA Visualization Explorer app, first launched for the iPad in July 2011, is now available for the iPhone and all devices running iOS 5.1 or later.

 

A new universal version of the app is now available for download in the iTunes app store. Click here: svs.gsfc.nasa.gov/nasaviz/ to download the app

 

The app, which features the data visualization work of NASA's Scientific Visualization Studio, Earth Observatory and others, publishes two stories per week about the full range of NASA's astrophysics, planetary, heliophysics and Earth science missions.

 

Read more:

1.usa.gov/1h9Bkf0

 

Join the NASAViz Community on Facebook: www.facebook.com/NasaViz

 

Follow us @NASAViz: twitter.com/#!/nasaviz

 

NASA's Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission.

 

Credit: NASA/Goddard

 

NASA image use policy.

 


 

Follow us on Twitter

 

Like us on Facebook

 

Find us on Instagram

This is a USB device that looks like a Canon 5D camera! Looks fun!

 

Photo by: Rouben Dickranian

There is a value to human error: the ability to self-destruct.

 

If you haven't noticed, I have a mild fascination with technophobia.

 

We live in a time when society is constantly creating ways to eliminate work; each update means another way of living, and these are multiplying rapidly. People grow to fear technology due to this vision of the future of society, in which people become less significant not only to each other but also to their devices.

 

Which brings up one of the central themes in futuristic sci-fi: we'll all be reduced to numbers as opposed to individuals. Serial codes belong on pieces of technology; however, our dependency on progress could eventually flip around so that humans become the puppets that are programmed.

 

In contrast to this idea is the fact that a machine's only purpose is to be utilized. Without humans, progress would barely continue and would gradually cease to exist.

 

It comes down to the only power humans would have left -- human error: the ability to make mistakes and to self-destruct.

This clever device is actually a pair of chopsticks for those who, like me, have never mastered the art of using real chopsticks. You grasp it beyond where the spring is, and you can pick up all sorts of delectable goodies. Will put up the grasp picture in a bit.

…….Btw, I have no clue where I got these- probably a yard sale or donation store! Either here or maybe in the Netherlands when I was visiting my DD.

A few of the devices I use. The newest being the HP 2133 Mini-Note PC running SUSE Linux with the Gnome desktop.

 

I'll be using the Mini-Note as part of a collaboration experiment at the Office 2.0 conference in San Francisco next week.

Sent from my mobile digital doodle device.

Corten Steel

4'x6'x4'

2008
