I was not prepared for the impact the tower would have on me. For someone who studied structural engineering, and who has an avid interest in art, it is for me a majestic symbol of the nineteenth century's optimism about the approaching twentieth.
From Wikipedia:
"Named after its designer, engineer Gustave Eiffel, the Eiffel Tower is the tallest building in Paris. More than 200,000,000 people have visited the tower since its construction in 1889, including 6,719,200 in 2006, making it the most visited paid monument in the world. Including the 24 m (79 ft) antenna, the structure is 325 m (1,063 ft) high (since 2000), which is equivalent to about 81 levels in a conventional building.
When the tower was completed in 1889 it was the world's tallest tower — a title it retained until 1930 when New York City's Chrysler Building (319 m — 1,047 ft tall) was completed. The tower is now the fifth-tallest structure in France and the tallest structure in Paris, with the second-tallest being the Tour Montparnasse (210 m — 689 ft), although that will soon be surpassed by Tour AXA (225.11 m — 738.36 ft).
The metal structure of the Eiffel Tower weighs 7,300 tonnes, while the entire structure, including non-metal components, is approximately 10,000 tonnes. Depending on the ambient temperature, the top of the tower may shift away from the sun by up to 18 cm (7 in) because of thermal expansion of the metal on the side facing the sun. The tower also sways 6–7 cm (2–3 in) in the wind. As a demonstration of the economy of the design, if the 7,300 tonnes of the metal structure were melted down, it would fill the 125 m × 125 m base to a depth of only 6 cm (2.36 in), assuming a metal density of 7.8 tonnes per cubic metre. The tower's mass is also less than that of the air contained in a cylinder of the same dimensions, 324 metres high and 88.3 metres in radius: 10,100 tonnes of tower against 10,265 tonnes of air.
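Both back-of-the-envelope figures in that paragraph can be checked in a few lines (a sketch; the air density of ~1.293 kg/m³ is my assumption, implied by the quoted air mass):

```python
import math

# Melt 7,300 tonnes of iron (density 7.8 t/m^3) into the 125 m x 125 m base.
iron_volume = 7300 / 7.8              # cubic metres
depth = iron_volume / (125 * 125)     # metres
print(f"depth: {depth * 100:.1f} cm")         # ~6 cm, as quoted

# Air in a cylinder 324 m high with an 88.3 m radius,
# at an assumed density of 1.293 kg/m^3 (dry air at 0 degrees C).
air_volume = math.pi * 88.3**2 * 324          # cubic metres
air_mass = air_volume * 1.293 / 1000          # tonnes
print(f"air mass: {air_mass:,.0f} tonnes")    # ~10,260 tonnes, close to the quoted 10,265
```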
The first and second levels are accessible by stairways and lifts. A ticket booth at the south tower base sells tickets to access the stairs which begin at that location. At the first platform the stairs continue up from the east tower and the third level summit is only accessible by lift. From the first or second platform the stairs are open for anyone to ascend or descend regardless of whether they have purchased a lift ticket or stair ticket. The actual count of stairs includes 9 steps to the ticket booth at the base, 328 steps to the first level, 340 steps to the second level and 18 steps to the lift platform on the second level. When exiting the lift at the third level there are 15 more steps to ascend to the upper observation platform. The step count is printed periodically on the side of the stairs to give an indication of progress of ascent. The majority of the ascent allows for an unhindered view of the area directly beneath and around the tower although some short stretches of the stairway are enclosed.
Maintenance of the tower includes applying 50 to 60 tonnes of paint every seven years to protect it from rust. In order to maintain a uniform appearance to an observer on the ground, three separate colors of paint are used on the tower, with the darkest on the bottom and the lightest at the top. On occasion the colour of the paint is changed; the tower is currently painted a shade of brownish-grey.
The structure was built between 1887 and 1889 as the entrance arch for the Exposition Universelle, a World's Fair marking the centennial celebration of the French Revolution. Eiffel originally planned to build the tower in Barcelona, for the Universal Exposition of 1888, but those responsible at the Barcelona city hall thought it was a strange and expensive construction, which did not fit into the design of the city. After the refusal of the Consistory of Barcelona, Eiffel submitted his draft to those responsible for the Universal Exhibition in Paris, where he would build his tower a year later, in 1889. The tower was inaugurated on 31 March 1889, and opened on 6 May. Three hundred workers joined together 18,038 pieces of puddled iron (a very pure form of structural iron), using two and a half million rivets, in a structural design by Maurice Koechlin. The risk of accident was great, for unlike modern skyscrapers the tower is an open frame without any intermediate floors except the two platforms. However, because Eiffel took safety precautions, including the use of movable stagings, guard-rails and screens, only one man died.
The tower was met with much criticism from the public when it was built, with many calling it an eyesore. Newspapers of the day were filled with angry letters from the arts community of Paris. One is quoted extensively in William Watson's US Government Printing Office publication of 1892, Paris Universal Exposition: Civil Engineering, Public Works, and Architecture. "And during twenty years we shall see, stretching over the entire city, still thrilling with the genius of so many centuries, we shall see stretching out like a black blot the odious shadow of the odious column built up of riveted iron plates." Signers of this letter included Meissonier, Gounod, Garnier, Gérôme, Bouguereau, and Dumas.
Novelist Guy de Maupassant — who claimed to hate the tower — supposedly ate lunch in the Tower's restaurant every day. When asked why, he answered that it was the one place in Paris where one could not see the structure. Today, the Tower is widely considered to be a striking piece of structural art.
One of the great Hollywood movie clichés is that the view from a Parisian window always includes the tower. In reality, since zoning restrictions limit the height of most buildings in Paris to 7 stories, only a very few of the taller buildings have a clear view of the tower.
Eiffel had a permit for the tower to stand for 20 years, meaning it would have had to be dismantled in 1909, when its ownership would revert to the City of Paris. The City had planned to tear it down (part of the original contest rules for designing a tower was that it could be easily demolished) but as the tower proved valuable for communication purposes, it was allowed to remain after the expiration of the permit. The military used it to dispatch Parisian taxis to the front line during the First Battle of the Marne, and it therefore became a victory statue of that battle.
Shape of the tower
At the time the tower was built, many people were shocked by its daring shape. Eiffel was criticised for the design and accused of trying to create something artistic, or inartistic according to the viewer, without regard to engineering. However, Eiffel and his engineers, as renowned bridge builders, understood the importance of wind forces and knew that if they were going to build the tallest structure in the world, they had to be certain it would withstand the wind. In an interview reported in the newspaper Le Temps, Eiffel said:
“ Now to what phenomenon did I give primary concern in designing the Tower? It was wind resistance. Well then! I hold that the curvature of the monument's four outer edges, which is as mathematical calculation dictated it should be (...) will give a great impression of strength and beauty, for it will reveal to the eyes of the observer the boldness of the design as a whole. ”
—translated from the French newspaper Le Temps of 14 February 1887.
The shape of the tower was therefore determined by mathematical calculation involving wind resistance. Several theories of this calculation have been proposed over the years, the most recent being a nonlinear integro-differential equation based on counterbalancing the wind pressure on any point of the tower with the tension between the construction elements at that point. The resulting shape is exponential. A careful plot of the tower's curvature, however, reveals two different exponentials, with the lower section shaped for stronger resistance to wind forces.
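A toy version of that counterbalancing idea can be demonstrated numerically (my sketch, not the actual published equation: assume the wind load per unit height is proportional to the local half-width f(s); for an exponential profile, the overturning moment of everything above height x then stays proportional to the width f(x) at that height, so every section is loaded in proportion to its own size):

```python
import math

lam = 50.0   # decay length of the profile, metres (illustrative value)

def f(s):
    """Exponential half-width profile f(s) = exp(-s / lam)."""
    return math.exp(-s / lam)

def wind_moment(x, H=10000.0, n=200000):
    """Moment about height x of a wind load proportional to f(s), midpoint rule."""
    total, ds = 0.0, (H - x) / n
    for i in range(n):
        s = x + (i + 0.5) * ds
        total += f(s) * (s - x) * ds
    return total

# The ratio moment/width is the same at every height (analytically, lam**2 = 2500).
for x in (0.0, 50.0, 100.0, 150.0):
    print(x, wind_moment(x) / f(x))
```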
"What if you get a writer’s block?” (That’s a favorite question.) I say, “I don’t ever get one precisely because I switch from one task to another at will. If I’m tired of one project, I just switch to something else which, at the moment, interests me more.” [From his memoir, In Joy Still Felt.]
Note Asimov's absolute sense of freedom and dominion (authority!) over his work, expressed not in grandiose terms but in the simple ability to do whatever he wants, whenever he wants. And, of course, the total lack of blame, shame, compulsion, and perfectionism.
Section 5.4 Write Nonlinearly: Leverage Your Project’s Easy Parts, in Chapter 5, Optimizing Your Writing Process, in the book, The Seven Secrets of the Prolific: The Definitive Guide to Overcoming Procrastination, Perfectionism, and Writer’s Block, by Hillary Rettig. Copyright © 2011 Hillary Rettig. All rights reserved. hillaryrettig.com
In chaos theory, the butterfly effect is the sensitive dependence on initial conditions in which a small change in one state of a deterministic nonlinear system can result in large differences in a later state.
~
"The flapping of the wings of a butterfly can be felt on the other side of the world." This Chinese proverb is the origin. This theory says that small actions are capable of generating large changes, positive or not.
~
Digitally drawn, manipulated via GIMP & Pixlr.
Red laser pointer, Instax Wide format film. Odd how the laser at its brightest turns out blue. Anyway, how did I do this?
1. Put the Instax camera (with film already loaded) and the laser pointer in a lightproof changing bag, having first noted how many shots you have left.
2. Zip up. Take off watch that glows in the dark!
3. Put arms in arm holes, locate camera and take out film pack.
4. Orient film pack correctly (sensitive area facing up).
5. Place laser pointer on film, switch on and guess-draw some sort of "interesting pattern".
6. Switch off laser pointer.
7. Replace film pack in camera.
8. Unzip and remove all articles from changing bag.
9. Switch on camera and press shutter to release/develop photo; the camera thinks the image is the darkslide that protects the film pack.
10. Marvel at the unreal colours.
11. Scan and post on Flickr for worldwide admiration.
I asked "an expert" why the red laser was turning up blue here and he said this:
"If I had to guess I'd speculate that this is what's going on. Film
emulsion contains three different sets of chemicals (possibly in
separate layers ?) which deal independently with the red, green and blue colours. The laser is monochromatic - it only emits red light - so you'd think that the blue and green processes would never get activated, and in general that's what we see. At very low intensities the red process works as we would expect and we get the nice red parts of the picture."
"At higher intensities we "burn out" the red process (the laser beam will be quite sharp-edged and the intensity where the black central line is could easily be hundreds of times higher than in the red surrounding region). Let's say ordinary low-intensity light turns chemical A into chemical B and it's chemical B which makes the red colour when it's developed. Too much red light turns B into some other chemical - say C, which doesn't develop to red. Or perhaps it produces an additional chemical (D) which somehow poisons the developing process. In any case we get burnout."
"The appearance of blue in some places will be something different again. It won't be the laser intensity which is varying (intensity is power per unit area and the laser power and beam size will be constant). The blue spots probably appeared where the artist stopped moving the laser beam for a few seconds. Holding a fixed intensity beam still will cause the local temperature of the emulsion to rise and it may be that it's the heating which is triggering the "blue" chemistry. Or, perhaps, there's some leaching of the copious amounts of chemical C (or D) from the heavily saturated red process into the blue layer ? But now I really am guessing."
"The one thing I can say is that there's unlikely to be any blue light involved. In principle it is possible to add two red photons together to make a blue one but this process (called "nonlinear optics") usually requires intensities many orders of magnitude higher than you can get from a hand-held laser."
A new attempt on the SH2-190 with a dragon eating its way out of the Heart Nebula.
7500 lightyears away in Cassiopeia.
Started with the SOH dataset from Telescope Live, with about 26 hours of data from the SPA-1 telescope in Oria, Spain.
Followed my now-standard workflow: WBPP, NSG, Drizzle x2 and NoiseXTerminator.
Continued to make SOH starless images using StarNet2, which for me currently works better than StarXTerminator. That may change rapidly with new upgrades to the AI engines of both programs.
Made an "RGB" Starimage using Sii and Oiii data
Got into nonlinear mode.
Created RGB Starless image with Pxelmath using a dynamic Palette from "The Coldest nights" site.
Added the "RGB"stars to the Starless RGB in Pixelmath
Final touching up in Lightroom.
Chaos is the science of surprises, of the nonlinear and the unpredictable. It teaches us to expect the unexpected. While most traditional science deals with supposedly predictable phenomena like gravity, electricity, or chemical reactions, Chaos Theory deals with nonlinear things that are effectively impossible to predict or control, like turbulence, weather, the stock market, our brain states.
Something different, just for fun. I just associated the word "chaos" with the hurricane season and we'll have maybe a major hurricane next week...total chaos!
Excerpt from winterstations.com:
One Canada
Design Team: University of Guelph, School of Environmental Design & Rural Development – Alex Feenstra, Megan Haralovich, Zhengyang Hua, Noah Tran, Haley White & Connor Winrow, led by Assistant Professor Afshin Ashari and Associate Professor Sean Kelly (Canada)
Description
The Indigenous Peoples in Canada are an inspirational example of resilience due to their ability to withstand adversity and persevere through generations of oppressive colonial policies. Historic injustices persist, including the effects of cultural genocide from the residential school system of Canada. Here we symbolize bridging the gap between Indigenous and non-Indigenous Peoples through gathering. This is accomplished through the support of the seven grandfather teachings, represented by the seven rings of the installation; they originated with the Anishnabae Peoples and have been passed down through generations to ensure the survival of all Indigenous Peoples: Wisdom, Love, Respect, Bravery, Honesty, Humility, and Truth. Orange represents the National Day of Truth and Reconciliation, and the reality that the support of non-Indigenous Peoples, as Indigenous Peoples assert rights to self-determination, will strengthen relations and begin to redress the historic wrongs. Orange is displayed in the ropes, where the pattern pays homage to the creation of drums, for which ropes were woven to honour culture. The installation's flow towards the lifeguard stand reinforces the strengthening of the relationship and the idea that the protection of Canada hinges on the unity between peoples. We aim to symbolize movement to a new relationship, one based on mutual respect that honours Indigenous treaties and rights. The road forward is long and nonlinear, but we commit to take the journey together.
Beautiful shots from EVERSPACE.
Shot using Fraps and in-game Action Freeze cam. Bottom cropped (Beta Build and EVERSPACE logo).
Beta (1.1.28051) access is finally here !
Crowdfunded and from the makers of the iconic Galaxy on Fire series comes a new breed of space shooter for PC and Xbox One, combining roguelike elements with top-notch visuals and a captivating story.
Here's a little "behind the scenes." I extracted these photos from the one picture I took shooting in RAW. A RAW file holds the image's entire range from light to dark, and from it you can extract as many pictures as you want, "reshooting" the shot by adjusting the settings and saving out that particular rendering. Change the exposure, temperature, etc. to get a shorter-exposure look like A or a longer-exposure look like D. Most cameras take in all this information, but based on the camera's settings they store only a small portion of what they see - say, shot B. The rest of the information is dumped and lost as the photo is saved as the JPG on your memory card.
Images have a standardized "range" of 256 shades of grey in each channel of RGB (Red, Green, Blue), together producing millions of beautiful colors. High Dynamic "Range" refers to the much greater range, beyond 256 levels, that the RAW contains. However, because 256 is the standardized number, in order to be displayed on a computer monitor, the web, or in a program, even my HDR image has to follow the rules and use only 256 shades in each channel. This requires some adjustment, and indeed allows for some reworking of the image. I pick and choose what parts I want and what parts I don't want. I don't want dark buildings; I like seeing the details. I like seeing the gradient of color in the sky instead of it being blown out. These are some of the decisions I get to make when producing a more dynamically spliced-together image using the great range of information stored in the RAW file.
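That squeeze from RAW's wide range down to 256 levels per channel looks roughly like this (a toy sketch with synthetic 14-bit-style values; the `develop` function and its parameters are my own illustration, not any camera's real pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
raw = rng.uniform(0, 16383, size=(4, 4))   # toy linear "RAW" data, 14-bit-style range

def develop(raw, exposure=1.0, gamma=2.2):
    """'Reshoot' the RAW: choose an exposure, then squeeze into 0..255."""
    linear = np.clip(raw * exposure / 16383.0, 0.0, 1.0)   # clipped highlights are lost
    return (255 * linear ** (1.0 / gamma)).astype(np.uint8)

shot_a = develop(raw, exposure=4.0)   # brighter rendering - highlights clip
shot_d = develop(raw, exposure=0.5)   # darker rendering - shadows crush
```

Each call renders a different 8-bit picture from the same stored data, which is exactly the "as many pictures as you want" point above.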
So the word "dynamic" refers to multiple things (to me anyway). It refers to the way programs can dynamically access and reshoot your shot. And it also refers to how the information stored is used by the photographer to dynamically build a nonlinear image.
Philosophically, this is really no different than how early film photographers spliced together, edited, cleaned-up, and reshot photos in their darkrooms. Computers and digital photography have just enabled everyone to go that much further in having fun with their hobbies. This is why you see Mr. PlumpChump playing around and experimenting with it.
It certainly gives me a strong respect for how amazing our eyes are. They are able to see a range FAR broader than the meager 256 shades.
Hopefully the 256 standard will be increased to 512 or 1024 someday to closer mimic what we really see. =)
1 stack of 35 60s images, Canon 800D at ISO 800, Canon 400mm f5.6 lens at f6.3, iOptron Skyguider Pro tracker. 50 darks, 120 biases. Processed in PixInsight as below
***** Integration
lightvortexastronomy tutorial (www.lightvortexastronomy.com/tutorial-pre-processing-cali...)
* CC defect list + master dark (sigma = 8)
15*(1-(FWHM-FWHMMin)/(FWHMMax-FWHMMin))
+ 35*(1-(Eccentricity-EccentricityMin)/(EccentricityMax-EccentricityMin))
+ 20*(SNRWeight-SNRWeightMin)/(SNRWeightMax-SNRWeightMin)
+ 30
img 0713 ref
* ESD integration
* drizzle integration, circular kernel, area containing the galaxies
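The subframe weighting expression above reads naturally as a small function (a sketch; the Min/Max values are the per-metric extremes across the whole set of subframes):

```python
def subframe_weight(fwhm, ecc, snr, fwhm_rng, ecc_rng, snr_rng):
    """Weight a subframe: lower FWHM and eccentricity score higher, higher
    SNR scores higher, and the +30 pedestal keeps every frame contributing."""
    f_min, f_max = fwhm_rng
    e_min, e_max = ecc_rng
    s_min, s_max = snr_rng
    return (15 * (1 - (fwhm - f_min) / (f_max - f_min))
            + 35 * (1 - (ecc - e_min) / (e_max - e_min))
            + 20 * (snr - s_min) / (s_max - s_min)
            + 30)

# The best possible frame scores 100; the worst scores the 30 pedestal.
print(subframe_weight(2.0, 0.30, 10.0, (2.0, 5.0), (0.30, 0.80), (1.0, 10.0)))
```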
*****Linear processing
*** Initial
* Crop
* DBE tolerance 3, manually placed points outside the dust clouds
* CC using a dust cloud as background, background ref upper limit 0.0025
*** Denoise
Using jonrista.com/the-astrophotographers-guide/pixinsights/eff... as implemented by EZSuite.
* TGV edge protection 5e-5, default MMT
***** Nonlinear processing
*** Initial stretch
* MaskedStretch
* extract luminance, histo stretch shadows 0.06 mids 0.02, use as mask
* range mask 0.3, smoothness 9, apply, boost saturation
* ACDNR: create star mask (save, will use later), curves boost, dilate x 2 convolve x2, apply to main image. ACDNR on chrominance only, w/ lightness mask, 4 stdev 6 iterations
*** Colors
* CC with white ref upper limit 1 and background ref upper limit 0.2
* invert, SCNR green 0.8 to fix magenta background hue, invert again
* SCNR green 0.8
*** Stars
* Using a copy of the star mask, reduce using EZReduce, morphological transformation
* extract luminance, curve to get white stars and dark background, use as mask, unsharpmask 0.7
*** Other processing
* break down image in starless and stars, erase dust marks from starless, sum starless and stars back to the old image
* Duplicate, LHE kernel 512 contrast limit 1.5, slight contrast curve in RGB/K and sat boost, 0.5 blend with original
* denoise using jonrista.com/the-astrophotographers-guide/pixinsights/eff... as implemented by EZSuite. TGV edge protection 2e-2, default MMT
* MMT sharpen 6 layers biases 0.025 0.5 0.025 0.025 0.012 0.005
SH2-64 is an emission nebula in the Serpens Molecular Cloud. About 1,400 light-years away, it is a massive star-forming region. Due to the high density of the dust clouds, X-ray, IR, and radio observations are needed to see through the dust and observe the star-forming processes happening within.
- Location: Remote Observatory (Bortle 1, SQM 21.99) near Fort Davis, TX
- Total Exposure Time: 16.65 Hours
Equipment:
- Scope: Esprit 100ED w/ 1x Flattener
- Imaging Camera: QHY 268M
- Filters: Chroma LRGB (36mm)
- Mount: Astro Physics Mach1GTO
- Guidescope: SVBony 50mm Guidescope
- Guide camera: ASI 120mm mini
- Focuser: Moonlite Nitecrawler WR35
- Accessories: Pegasus Ultimate Powerbox v2, QHY Polemaster, Optec Alnitak Flip Flat
------------------------------------------------------------
Software:
- N.I.N.A. for image acquisition, platesolving, and framing
- PHD2 for guiding
- PixInsight for processing
-------------------------------------------------------------
Acquisition:
- L: 196 x 3m
- R: 49 x 3m
- G: 44 x 3m
- B: 44 x 3m
- All images at Gain 56, Offset 25 (Readout mode 1) and -5C sensor temperature
- 20 flats per filter
- Master Dark & Bias from Library
- Nights: 5/30, 5/31, 6/1, 6/6, 6/7/22
--------------------------------------------------------------
Processing:
- BatchPreprocessing for calibration
- StarAlignment and ImageIntegration of all masters
RGB Processing (apply to each master):
- DynamicCrop
- DynamicBackgroundExtraction
- NoiseXterminator for linear NR
- StarAlign G and B masters to R
- ChannelCombination to combine into linear RGB image
- STF stretch to bring RGB to nonlinear
- ColorSaturation to selectively bump yellow and blue saturation
Lum Processing:
- DynamicCrop
- DynamicBackgroundExtraction
- NoiseXTerminator for linear NR
- GeneralizedHyperbolicStretch for initial stretch
- HistogramTransformation x3 for further stretch
- CurvesTransformation for contrast
Combine RGB and Luminance:
- StarAlign RGB to Luminance
- LRGBCombination with chrominance NR enabled
- NoiseXTerminator for slight luminance noise reduction
- ColorSaturation to boost the red in SH2 64
- Slight UnsharpMask via inverted luminance mask for sharpening
- CurvesTransformation with inverted luminance mask for contrast increase
- Final CurvesTransformation for 'c' curve bump
- SCNR green @ 0.3
- DynamicCrop to crop edges
- Save and Export
Didn't quite do the processing right - still some strange background gradients I haven't quite managed to remove
27 lights, Canon 800D at ISO 800, Samyang 16mm at f2.8, 1 minute exposures, Omegon Lx2 tracking mount. 30 darks, 120 biases. Processed in PixInsight as below
*****Linear processing
*** Integration:
lightvortexastronomy tutorial (www.lightvortexastronomy.com/tutorial-pre-processing-cali...),
15*(1-(FWHM-FWHMMin)/(FWHMMax-FWHMMin))
+ 15*(1-(Eccentricity-EccentricityMin)/(EccentricityMax-EccentricityMin))
+ 20*(SNRWeight-SNRWeightMin)/(SNRWeightMax-SNRWeightMin)
+ 50
Use frame #775 as reference.
*** Crop
*** Background extraction:
DBE tolerance 3, no points placed on the Milky Way
***Deconvolution
Created star mask for larger stars - large scale structure 2, small scale 1, noise threshold 0.1, scale 6,
Extracted luminance, STF autostretched, then histogram transformation with shadows 0.2, midtones 0.28, highlights 1 to get a range mask
Deconvolve with range mask on, 80 iterations, custom PSF, dark 0.01 bright 0.004, local deringing with star mask, wavelet regularization
*** Color calibration
SCNR applied with range mask on (inverted) to protect nebulas
Background neutralization
Color calibration
*** Star reduction (for small and mid stars)
Small star mask - noise 0.15, scale 4, small scale 3 comp 1, smoothness 8, binarize, midtones = 0.02
Range mask from that, 0.05-1
Apply, erosion operator 4 iterations 0.15
*** Linear noise reduction
jonrista.com/the-astrophotographers-guide/pixinsights/eff...
*TGV - small noise
Created TGV masks - extracted luminosity, standard stretch (tgv_luma_mask), curved it with black point at ~0.2 and white at ~0.5, moved histogram point to middle (tgv_mask)
apply tgv mask inverted to the image, give luma mask as local support
TGV chroma str 7 edge protection 2E-4 smoothness 2 iterations 500
TGV luma str 5 edge protection 1E-5 smoothness 2 iterations 500
*MMT - larger noise and TGV artifacts
Created MMT mask - extract luminosity, standard stretch, move histogram point to 75%, apply low range -0.5. Apply inverted
MMT mask - 8 layers, threshold 10 10 7 5 5 2.5 2 2 on rgb
*****Nonlinear
***Initial stretch
*Autostretch, apply to hist
*Create full star mask, max(star_mask_large, star_mask_small)
* HDR transform, 8 layers, B3 spline, star mask applied inverted, preserve hue, lightness mask
***MLT stretch
**Initial
* created a new multiscale linear transform, kept 4 layers using linear interpolation
* diffed from original image to create a "blurred" version of original image
* extracted luminance from original, used as mask on blurred version
* used curves to create s shape in luminance and saturation, inflection 3/4 up
* pixelmath sum the 3, rescaled, back to original image
**Second
* new multiscale linear transform, keep 5 layers
* diff from original
* extract luminance from blurred image, to use as a mask
* masked blurred image with its own luminance, gave it s-shaped RGB curve, slight boost in luminosity, big boost in saturation
* pixelmath sum the 3, rescaled, back to original image
**Third
* new multiscale linear transform, keep 8 layers
* diff from original
* extract luminance from blurred image, to use as a mask, hist stretch it (multi_8_substracted_L)
* luminosity increase (1 curve), saturation (even more)
* pixelmath sum the 3, rescaled, back to original image
*** Darken
* DarkStructureEnhancer, 8 layers, 0.7, 3x3
* DarkStructureEnhancer, 8 layers, 0.7, 5x5
*** Color saturation
* bumped reds strongly, green-blues less strong
*** Sharpen
* Sharpen with multiscale linear transform, bias layers 2-6 (0.05, 0.05, 0.025, 0.012, 0.006)
*** Final crop and resize
* rotate 90° clockwise
* crop bottom (slightly weird corner)
* Rescale back to normal
**** Not used
**Create star and bright nebulas mask
* subtract star_mask from luma to get a nebula mask
* exaggerate hugely with curve to get high contrast - RGB line going from 25% of horizontal to 50%
* apply said exaggeration to stars too
* sum them up in pixelmath, save as star_nebula_mask
* new multiscale linear transform, keep 5 layers
* diff from original
* apply inverted stars_nebula_mask
* Local histogram equalization, kernel 200, contrast 1.5
* Local histogram equalization, kernel 400, contrast 1.5
* Saturate with curves (slight s-shape, but mostly a heavy saturation boost)
Imaged the Heart Nebula located in the constellation Cassiopeia. This has been a lower priority target for me in the past but got bumped up once I went mono. I'm glad I was able to get a decent shot of it this year. My camera and scope combo give a good FOV on this faint target. Oiii was extremely faint, but luckily good narrowband processing techniques can mask that well.
Total exposure time for this image is: 29 hours.
Equipment:
- AT65EDQ Scope
- ZWO ASI1600mm-Pro Imaging Camera
- Belt Modded Orion Sirius EQ-G
- QHY miniGuideScope and QHY5L-ii mono guidecam
- Chroma Ha/Oiii/Sii filters
---------------------------------------------------------------
Software:
- N.I.N.A. for capture
- PHD2 for guiding
- PixInsight for Processing
---------------------------------------------------------------
Acquisition:
- 175 x 300" Ha - Chroma 5nm
- 69 x 300" Oiii - Chroma 3nm
- 104 x 300" Sii - Chroma 3nm
- 200 gain and 50 offset, -10C
- 20 flats and flat-darks per filter
- 30 darks from library
- Nights: 10/12, 10/14, 11/6, 11/7/20
---------------------------------------------------------------
Processing:
Each Master Image:
- Calibration, Integration, DrizzleIntegration
- DynamicCrop
- DynamicBackgroundExtraction
- Deconvolution (Ha only)
- TGVDenoise + MMT noise reduction using EZDenoise Script
- ArcsinhStretch (x2) to bring to nonlinear
- HistogramTransformation for further stretch
- CurvesTransformation to bring up background level
- StarAlign Oiii and Sii to Ha
- Starnet to remove stars from each master; duplicate starless Ha and set aside to use as Luminance layer
Combine Starless Masters via PixelMath:
- Duplicate Oiii and rename to 'f'. CurvesTransformation to boost signal of f and lower background
- R: f*Sii + ~f*Ha
- G: f*(0.7*Ha + 0.3*Sii) + ~f*Oiii
- B: Oiii
- Visit thecoldestnights.com/2020/06/pixinsight-dynamic-narrowban... for more information on Dynamic Narrowband Combinations
- CurvesTransformation to slightly reduce green and boost saturation
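The PixelMath combination above is a per-pixel blend driven by the boosted Oiii copy f; in PixelMath, `~f` means 1 - f. A numpy sketch with synthetic data (the 1.5x boost stands in for the CurvesTransformation step):

```python
import numpy as np

rng = np.random.default_rng(1)
ha, oiii, sii = (rng.uniform(0, 1, (8, 8)) for _ in range(3))
f = np.clip(oiii * 1.5, 0.0, 1.0)       # stand-in for the curves-boosted Oiii copy

r = f * sii + (1 - f) * ha                        # R: f*Sii + ~f*Ha
g = f * (0.7 * ha + 0.3 * sii) + (1 - f) * oiii   # G: f*(0.7*Ha + 0.3*Sii) + ~f*Oiii
b = oiii                                          # B: Oiii
rgb = np.stack([r, g, b], axis=-1)                # where f is strong, Sii dominates red
```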
Starless Ha Luminance Processing:
- CurvesTransformation for contrast
- RangeMask + LocalHistogramEqualization on Melotte 15 to bring back details
- DarkStructureEnhance script at 0.3
- UnsharpMask using a new RangeMask
Combine Luminance and Color:
- LRGBCombination with Luminance at 85% weight and chrominance noise reduction enabled
Add Back Color Stars and Final Processing:
- StarAlign linear Oiii and Sii masters to linear Ha master
- ArcsinhStretch each linear master just barely
- Duplicate each barely stretched master and Starnet each to remove stars
- PixelMath: Master_Stars - Master_Starless to get just the stars for each channel
- PixelMath: Combine the stars of each channel into a color star image:
- R: Ha_stars
- G: Sii_stars
- B: Oiii_stars
- PixelMath: RGB_Stars + RGB_Nebula to add stars into nebula image
- DynamicCrop to remove edges
- Save and Export
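The star-removal and add-back steps in that list are plain image arithmetic; a minimal numpy sketch with a synthetic frame (the threshold trick just plants fake "stars"):

```python
import numpy as np

rng = np.random.default_rng(2)
starless = rng.uniform(0.0, 0.4, (8, 8))            # nebula-only render (StarNet-style)
spikes = (rng.uniform(0, 1, (8, 8)) > 0.9) * 0.5    # fake stars at ~10% of pixels
master = np.clip(starless + spikes, 0.0, 1.0)       # original stretched master

stars = np.clip(master - starless, 0.0, 1.0)        # Master_Stars - Master_Starless
recombined = np.clip(starless + stars, 0.0, 1.0)    # RGB_Stars + RGB_Nebula
```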
1 stack of 58 60s images, Canon 800D at ISO 800, Canon 400mm f5.6 lens at f6.3, iOptron Skyguider Pro tracker. 120 darks, 120 biases. Processed in PixInsight as below
* CC defect list + master dark (sigma = 8)
* Subframe selection
30*(1-(FWHM-FWHMMin)/(FWHMMax-FWHMMin))
+ 30*(1-(Eccentricity-EccentricityMin)/(EccentricityMax-EccentricityMin))
+ 20*(SNRWeight-SNRWeightMin)/(SNRWeightMax-SNRWeightMin)
+ 20
Eccentricity < 0.7 && SNRWeight > 1 && FWHM < 5. Keep frames 7173 and 7177 even though they failed the rejection conditions, for meteors
* star alignment:
img 8411 ref, frame adaptation (let's see if this works)
* ESD integration, range exclude
* drizzle integration, gaussian kernel
*****Linear processing
*** Initial
* Crop
* DBE tolerance 3
*** CC
* PCC on Pleiades, default settings pixel size 2, background neutralization upper limit 0.0012
*** Decon
* star mask creation: extract luminance, get star mask with StarNet, then range mask 0.1-1 (1); from stretched luminance, range mask 0.55-1 (2); PixelMath max of the two as binary_star_mask; 4x dilate, 6x convolve
* PSF - ezdecon script
* background mask -ezdecon script and curves
* with background mask on, inverted, apply decon 100 iterations on luminance, deringing global dark 0.01 light 0.002 and local deringing with binary star mask, wavelet regularization
*** Denoise
Using jonrista.com/the-astrophotographers-guide/pixinsights/eff... as implemented by EZSuite.
* EZDenoise, TGV 1500 edge 1e-5, default MMT
***** Nonlinear
*** Initial stretch
* Masked stretch, default settings
* using a contrasty luminance as mask, curves stretch
*** Stretched color mods
* with binary_star_mask protecting stars, ACDNR on chrominance, lightness mask, stdev 4 iterations 6
* CC background ref upper limit 0.125
* invert, SCNR green 1, invert again (to fix magenta hues in whites)
* SCNR green 0.8
*** Star mods
* extract star mask with starnet, blur 3 layers of a trous, curve boost
* with it on, curve saturation upwards. Clone stamp out big stars - small_star_mask
* using that as a start, EZStarReduce using morphological transformation
* with small_star_mask on, unsharp mask 0.9, w/ deringing
***MLT stretch
www.stelleelettroniche.it/en/2014/09/astrophoto/m42-ngc19...
**Initial (fine details)
* created a new multiscale linear transform, kept 5 layers
* diffed from original image to create a "blurred" version of original image
* extracted luminance from original, s-shaped RGB curve used as mask on blurred version
* used curves to create s shape in RGB (asymmetrical, like a camera curve) and pump up saturation a lot
* pixelmath sum the 3, rescaled, back to original image
**Second (nebula)
* created a new multiscale linear transform, kept 7 layers, and diff from original
* extract luminance from diff, s-shaped RGB curve. Use as mask on blurred version
* slight boosts in sat and RGB
* pixelmath sum the 3, rescaled, back to original image
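The split/process/recombine idea in the MLT passes above can be sketched as follows. This is only the arithmetic: a 1-D box blur stands in for MultiscaleLinearTransform's kept layers, not the actual wavelet decomposition.

```python
import numpy as np

def box_blur(x, w=3):
    # Crude stand-in for the MLT "kept layers" (low-frequency component).
    kernel = np.ones(w) / w
    return np.convolve(x, kernel, mode="same")

img = np.array([0.1, 0.8, 0.2, 0.6, 0.3])  # illustrative pixel row
large = box_blur(img)        # kept-layer ("blurred") version
detail = img - large         # diff from original (fine details)

# ...curves / saturation tweaks would be applied per component here...

recombined = large + detail  # pixelmath sum, back to the original image
```

With no tweaks applied, summing the components reproduces the original exactly; the point of the split is that curves applied to one component leave the other untouched.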
*** Finishers
* EzDenoise, TGV 1500 edge 8e-3, default MMT
* LHE kernel 512 contrast 1.5, slight boost in sat, 50-50 mix with original
* LHE kernel 256 contrast 1.5, 20-80 mix with original
* dark structure enhance
* MMT sharpen, 6 layers, biases 0.05, 0.05, 0.025, 0.025, 0.012, 0.005, with luminosity mask s-curved for more contrast
Beautiful shots from EVERSPACE.
Shot using Fraps and in-game Action Freeze cam. Bottom cropped (Beta Build and EVERSPACE logo).
Beta (1.1.28051) access is finally here !
Crowdfunded and from the makers of the iconic Galaxy on Fire series comes a new breed of space shooter for PC and Xbox One, combining roguelike elements with top-notch visuals and a captivating story.
LDN 1251, in Lynd's Catalog of Dark Nebulae, is a dense star forming region located in the constellation Cepheus. Together it and other objects combine to form the Cepheus Flare, a large molecular cloud above the Milky Way. Many objects are visible in this image including the dust of LDN 1251, LDN 1243, and a few galaxies including PGC166755 and PGC69472.
This image is a 2-panel mosaic taken over 10 nights, with approximately 19 hours of exposure time per panel.
- Location: Remote Observatory (Bortle 1, SQM 21.99) near Fort Davis, TX
- Total Exposure Time: 38.75 Hours
Equipment:
- Scope: Esprit 100ED w/ 1x Flattener
- Imaging Camera: QHY 268M
- Filters: Chroma LRGB (36mm)
- Mount: Astro Physics Mach1GTO
- Guidescope: SVBony 50mm Guidescope
- Guide camera: ASI 120mm mini
- Focuser: Moonlite Nitecrawler WR35
- Accessories: Pegasus Ultimate Powerbox v2, QHY Polemaster, Optec Alnitak Flip Flat
------------------------------------------------------------
Software:
- N.I.N.A for image acquisition, platesolving, and framing
- PHD2 for guiding
- PixInsight for processing
-------------------------------------------------------------
Acquisition (Total between both panels):
- L: 380 x 3m
- R: 132 x 3m
- G: 131 x 3m
- B: 132 x 3m
- All images at Gain 56, Offset 25 (Readout mode 1) and -5C sensor temperature
- 20 flats per filter
- Master Dark, Flat & Bias from Library
- Nights: 6/23-6/27, 6/30, 7/1-7/3/22
--------------------------------------------------------------
Processing:
- BatchPreprocessing for all calibration
- ImageIntegration with PSFSignalWeights for integration of all masters
Combining panels for each master:
- DynamicCrop on each panel
- DynamicBackgroundExtraction on each panel
- ImageSolver to platesolve panels
- MosaicByCoordinates script to create mosaic templates
- PhotometricMosaic script to create 2-panel LRGB masters
RGB Processing (apply to each master):
- FastRotation 90 degrees
- DynamicCrop
- DynamicBackgroundExtraction
- NoiseXterminator for linear NR
- StarAlign B and G to R
- ChannelCombination to combine into linear RGB image
- DynamicBackgroundExtraction to remove color gradients
- STF for stretch to non-linear
- CurvesTransformation to balance color
Luminance Processing:
- FastRotation 90 degrees
- DynamicCrop
- DynamicBackgroundExtraction
- NoiseXterminator for linear NR
- GeneralizedHyperbolicStretch for initial stretch
- HistogramTransformation x2 for further stretch
- CurvesTransformation on starless copy to increase nebulosity
- PixelMath to combine pushed starless and original via: (1- (1-$T) * (1-star)*F)+($T*~F) where F=0.3
- CurvesTransformation for contrast
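The PixelMath blend in the luminance steps above works out numerically as follows (a toy sketch: $T is the pushed starless image, `star` the original stretched luminance, and the pixel values are illustrative; in PixelMath, ~F means 1 - F, and values above 1 are rescaled or clipped by PixInsight):

```python
import numpy as np

F = 0.3
T = np.array([0.6, 0.2])     # pushed starless copy (illustrative values)
star = np.array([0.9, 0.1])  # original luminance with stars

# (1 - (1-$T) * (1-star) * F) + ($T * ~F)
blended = (1 - (1 - T) * (1 - star) * F) + (T * (1 - F))
```

The first term is an attenuated screen blend (brightening where either image is bright); the second mixes back a share of the pushed starless data.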
Combine RGB and Luminance:
- StarAlign RGB to Luminance
- LRGBCombination with RGB channel weights set to 0.9, chrominance NR enabled, and saturation boost
- SCNR green @ 0.2
Further Processing:
- CurvesTransformation for contrast
- NoiseXterminator for slight nonlinear NR @ 0.2
- Starnet2 to create starmask
- MorphologicalTransformation for slight star reduction
- ColorSaturation to saturate brown dust
- CurvesTransformation for contrast
- DynamicCrop to remove edges
Created Annotated Version:
- Duplicate final image and StarAlign to initial linear Luminance
- ImageSolver script on luminance
- Transfer FITSHeader from solved luminance to staraligned final image
- AnnotateImage script to label objects
- DynamicCrop to remove edges
- IntegerResample to downsample 2x
- Save as PNG
Finally completed a new image and the first one of 2022. There have been a lot of changes in the setup, including the main imaging scope, and we had enough clear new moon nights in January for me to put a decent amount of time into an image from the backyard. This will likely be my last time imaging from my backyard as there are some big changes (for the better) coming for my AP journey!
Messier 42, the Orion Nebula, is probably the most prominent beginner DSO that you see imaged by amateurs (followed closely by M31). Bright enough to be viewed naked eye from heavily light polluted areas, this emission nebula is an active star forming region surrounded by tons of dust and Hydrogen Alpha emission. Because the core of the nebula is so bright, it is often blown out in many images (including this one! - I was too lazy and did not optimize my Luminance images to prevent this). Easy to see/capture, difficult to properly process - even for experienced folks. A challenge for new and veteran imagers alike.
- Location(s): Bortle 6/7 backyard
- Total Integration Time: 14.25 Hours
Equipment:
- Scope: Esprit 100ED w/ 1x Flattener
- Imaging Camera: QHY 268M
- Filters: Chroma LRGB (36mm)
- Mount: Skywatcher EQ6-R Pro & Astro Physics Mach1GTO
- Guidescope: SVBony 50mm Guidescope
- Guide camera: ASI 120mm mini
- Accessories: Moonlite NiteCrawler WR35, Pegasus Ultimate Powerbox v2, QHY Polemaster
------------------------------------------------------------
Software:
- N.I.N.A for image acquisition, platesolving, and framing
- PHD2 for guiding
- PixInsight and Lightroom for processing
-------------------------------------------------------------
Acquisition:
- L: 122 x 3m
- R: 63 x 3m
- G: 40 x 3m
- B: 60 x 3m
- All images at Gain 56, Offset 25 (Readout mode 1) and -5C sensor temperature
- 20 flats per filter
- Master Bias from Library
- Nights: 1/6, 1/7, 1/28, 1/29/22
--------------------------------------------------------------
Processing:
- BatchPreProcessing to calibrate all images
- SubFrameSelector to reject/approve subs and weigh approved
- ImageIntegration with SFS weights
Luminance Processing:
- DynamicCrop
- MureDenoise
- DynamicBackgroundExtraction
- Deconvolution via Oke's method
- GeneralizedHyperbolicStretch for initial stretch
- CurvesTransformation for contrast + stretch of dust
- RangeMask on core + LocalHistogramEqualization to bring back dust details (trapezium blown out)
RGB Processing (apply to each master):
- DynamicCrop
- DynamicBackgroundExtraction
- StarAlign RGB masters to Luminance
- ChannelCombination to combine into color image
- PhotometricColorCalibration
- BackgroundNeutralization
- MaskedStretch to bring to nonlinear
- HistogramTransformation for further stretch
- CurvesTransformation for contrast
- SCNR green at 0.25
- CurvesTransformation on red channel to make dust more brown
- Invert -> SCNR green @ 0.25 -> invert back
Combine into LRGB and further processing:
- LRGBCombination with Chrominance NR enabled
- Slight CurvesTransformation for contrast
- Original RangeMask + CurvesTransformation for slight saturation boost in emission and smaller saturation boost in background
- RangeMask + UnsharpMask for sharpening of nebula details and dust lanes
- DynamicCrop
Bring into Adobe LightRoom:
- 3x radial filters to color balance left and right regions of image
The Cone Nebula is one of those somewhat rare targets that are really interesting in both narrowband and broadband, and I've always wanted to take a shot at a broadband image but haven't had the skies for it until recently. From an astronomical perspective, it's fairly run of the mill: it's a hydrogen-dominated emission nebula with some blue reflection elements located about 2,700 light years away in the constellation Monoceros. It is also known as the Christmas Tree Nebula (presumably because of the shape, though it may be because of the date of its initial discovery). The wavy region just to the upper right of the reflection nebula is called the Fox Fur Nebula, again, for reasons I'm not 100% sure on. Either way, there's lots of interesting color and structure going on, and that's what I tried to bring out in the image.
My processing here was primarily focused on bringing out all the structure, which is deliciously complex. To do that, I took an approach I rarely do, and blended in Ha with LRGB data, because just looking at it, the Ha contained detail just not readily visible in broadband. Rather than just use the NRGB script like I've done in the past, I tried a new approach, learned from Adam Block, wherein I used Herbert Walter's Blend script set to Screen blend, and blended in starless, stretched Ha data into both the nonlinear luminance data and the extracted red channel from the permanently stretched, color-calibrated RGB image. I then recombined the now modified red channel with the other color channels and applied the new blended luminance. The net result contained significantly more detail than the LRGB alone, though it did take some iteration on how much of the Ha to blend into the red channel so the whole thing didn't look like a big red blob: the blue reflection element adds a lot of visual interest, and I really didn't want to lose it. I think I ended up attenuating it by maybe 20 - 25%.
All in all, it was a fun experiment and yielded a not-terrible result, though could always be better. I'm putting this one down for now, though I may do an all-narrowband version at some point in the future. As always, comments and criticisms welcome.
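The screen blend used above to fold the starless Ha into the luminance and red channels can be sketched like this (the helper name, attenuation parameter, and sample values are mine, not the Blend script's):

```python
def screen(base, overlay, k=1.0):
    """Screen blend: brightens base by overlay; k < 1 attenuates the overlay."""
    return 1 - (1 - base) * (1 - overlay * k)

# Attenuating the Ha by roughly 20-25% before blending into red, per the text:
red_blended = screen(0.4, 0.5, k=0.78)
```

Screen blending can only brighten (black overlay pixels leave the base unchanged, and the result never exceeds 1), which is why it suits adding emission detail without crushing the blue reflection component.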
Imaging telescopes or lenses: Orion optics UK AG12
Imaging cameras: zwo optical ASI 2600MM Pro
Mounts: Software Bisque Paramount MX+
Guiding telescopes or lenses: Orion optics UK AG12
Guiding cameras: ZWO ASI290 Mini
Software: PIXINSIGHT 1.8 · Sequence Generator Pro
Filters: Chroma Ha 5nm 36mm · Chroma B 36mm · Chroma G 36mm · Chroma R 36mm · Chroma L 36mm
Accessory: StarLight Instruments Feather Touch Focuser · Pegasus Astro ultimate powerbox v2
Dates:Nov. 5, 2021 , Nov. 7, 2021 , Nov. 8, 2021 , Nov. 10, 2021
Frames:
Chroma B 36mm: 180x120" (6h) (gain: 100.00) bin 1x1
Chroma G 36mm: 180x120" (6h) (gain: 100.00) bin 1x1
Chroma Ha 5nm 36mm: 7x600" (1h 10') (gain: 100.00) bin 1x1
Chroma L 36mm: 360x60" (6h) (gain: 100.00) bin 1x1
Chroma R 36mm: 180x120" (6h) (gain: 100.00) bin 1x1
Integration: 25h 10'
..of course he loves me! 😍
He Loves Me... He Loves Me Not (French: À la folie... pas du tout) is a 2002 French psychological drama film directed by Lætitia Colombani. The film focuses on a Fine Arts student, played by Audrey Tautou, and a married cardiologist, played by Samuel Le Bihan, with whom she is dangerously obsessed. The film studies the condition of erotomania and is both an example of the nonlinear and "unreliable narrator" forms of storytelling.
The title refers to the last two lines of the French game of Effeuiller la Marguerite (Fr., "to pluck the daisy") of pulling petals off a flower, in which one seeks to determine whether the object of their affection returns that affection and to what extent: un peu ("a little"), beaucoup ("a lot"); passionnément ("passionately"): à la folie ("to madness"); pas du tout ("not at all").
Smile on Saturday: Yellow on White
Happy Easter weekend everyone!!
Thank you for your visits, kind comments and faves. Always greatly appreciated.
Copyright 2019 © Gloria Sanvicente
The little crisscrossing waves approaching the beach appear to be solitons, a kind of wave behavior that has application in widely divergent areas of physics like nonlinear optics and novel states of matter.
"First, prove your bona fides. Give me something linear, but good," said Rob.
"Do da Dunk do Dunk do da DUNK. Do da Dunk do Dunk do da dunk," responded Cherry.
"Good opening riff! Controlled!! Memorable. But your plucking is still in the realm of the norm in a linear world," said Rob. "Now surprise me. Sing me something nonlinear, glissando-like. Bend the rules of time and space. Wake me up. Break me outta this world!"
"HhhhhaaaaaAaaAaaAaAAAAAAYYYYYYYY, DO it nooow."
"Yeah!" shouted Rob. "That's what I'm talking about!! Lift off. Now scream it. PLAY that funky music."
Description: I developed this image of the North America Nebula (NGC 7000) from 60x300s subs, or 5.0 hours of total exposure time. I used the Optolong L-eXtreme Dual Bandpass Light Pollution Filter, which has two 7nm passbands centered on the H-alpha and OIII wavelengths. With a one-shot color (OSC) camera and L-eXtreme filter combination, the red signal from the H-alpha tends to dominate. In the nonlinear postprocessing phase I applied Histogram Transformation, Local Histogram Equalization and Curves Transformation in small doses over multiple passes.
Date / Location: 20, 25, 26 June 2022 / Washington D.C.
Equipment:
Scope: WO Zenith Star 81mm f/6.9 with WO 6AIII Flattener/Focal Reducer x0.8
OSC Camera: ZWO ASI 2600 MC Pro at 100 Gain and 50 Offset
Mount: iOptron GEM28-EC
Guide Scope: ZWO ASI 30mm f/4
Guide Camera: ZWO ASI 120mm mini
Light Pollution Filter: Optolong L-eXtreme Dual Bandpass
Processing Software: Pixinsight
Processing Steps:
Preprocessing: I preprocessed 60x300s subs (= 5.0 hours) in Pixinsight to get an integrated image using the following process steps: Image Calibration > Cosmetic Correction > Subframe Selector > Debayer > Select Reference Star and Star Align > Image Integration.
Linear Postprocessing: Dynamic Crop > Dynamic Background Extractor (both subtraction to remove light pollution gradients and division for flat field corrections) > Background Neutralization > Color Calibration.
Nonlinear Postprocessing and additional steps: Histogram Transformation > Noise Xterminator > Histogram Transformation (small doses in multiple passes) > Local Histogram Equalization > Curves Transformation (small doses in multiple passes).
Description: NGC 6960, the Western Veil Nebula, an emission nebula in the Cygnus Loop, is well suited to imaging with a narrowband filter such as the Optolong L-eXtreme dual-band filter, which provides two 7nm passbands, one for H-alpha at 656nm and another for OIII at ~501nm. Accordingly, using a combination of a one-shot color (OSC) camera and the Optolong L-eXtreme filter, I generated 152x300s subs for a total exposure time of 12.67 hours. NGC 6960 appears to show intertwined substructures, possibly a consequence of being a supernova remnant. As a side note, to mitigate the distraction created by the numerous stars around the nebula I applied a morphological transformation; since such a transformation can affect contrast and saturation, I followed it with another round of Curves Transformation.
Date / Location: 7-9, 13, 17, 19 and 21-23 November 2022 / Washington D.C.
Equipment:
Scope: WO Zenith Star 81mm f/6.9 with WO 6AIII Flattener/Focal Reducer x0.8
OSC Camera: ZWO ASI 2600 MC Pro at 100 Gain
Mount: iOptron GEM28-EC
Guide Scope: WO 50mm Uniguide Scope
Guide Camera: ZWO ASI 290mm
Focuser: ZWO EAF
Light Pollution Filter: Optolong L-eXtreme Dual Bandpass LPF
Processing Software: Pixinsight
Processing Steps:
Preprocessing: I preprocessed 152x300s subs (= 12.67 hours) in Pixinsight to get an integrated image using the following processes: Image Calibration > Cosmetic Correction > Subframe Selector > Debayer > Select Reference Star and Star Align > Image Integration.
Linear Postprocessing: Fast Rotation > Dynamic Crop > Dynamic Background Extractor (subtraction to remove light pollution gradients and division for flat field corrections) > Background Neutralization > Color Calibration > Noise Xterminator.
Nonlinear Postprocessing: Histogram Transformation > Local Histogram Equalization > Curves Transformation > SCNR Noise Reduction > Morphological Transformation > Noise Xterminator > Curves Transformation.
The Cygnus Loop Supernova Remnant (Optolong L-Enhance)
This is the latest capture from Grand Mesa Observatory’s system 4 (available on our subscriptions from October 1st), using a QHY367C full-frame one-shot color CMOS camera on a Takahashi E-180 F2.8 Astrograph with an Optolong L-Enhance dual-bandpass filter that we are testing for Optolong.
I wanted to compare the performance of the Optolong L-Enhance filter with the William Optics STC. The difference between the filters is that the L-Enhance is claimed to capture H-alpha, H-beta and OIII, whereas the William Optics STC is claimed to capture H-alpha and OIII.
For the capture, the seeing conditions were 4 out of 5 under Bortle 2 skies. The WO STC was captured on a moonless night, but the Optolong capture was made with the moon at 60% illumination (over half full).
For the comparison I acquired the exact same number and length of exposures. Both sets were preprocessed in PixInsight using dark, bias and flat frames for each filter; no further work was carried out in PixInsight, and following the stacking I saved each file as a 16-bit TIF. I then reprocessed the earlier WO STC capture and post-processed it in Photoshop side by side with the Optolong capture: first linear stretching and balancing in Levels, followed by identical nonlinear stretching in Curves, then Shadows/Highlights and Match Color. Following this, on each image I used StarNet star removal and replaced the stars for this final result.
In conclusion, on this particular target both filters are producing some incredible signal with the QHY367C one-shot color CMOS camera, comparable with images captured using monochrome cameras. It is my opinion that the William Optics filter has far more signal in OIII than the Optolong L-Enhance, but the stronger H-alpha and H-beta signal is obvious with the Optolong L-Enhance.
When the moon wanes I’ll be capturing the Veil once again in Broadband for a further comparison (with only the UV/IR cut filter necessary on my sample of the QHY367C to prevent star bloating).
For comparison in high resolution:
WO STC Version www.flickr.com/photos/terryhancock/48683636067/in/album-7...
Total Acquisition Time 4.08 hours
Image capture details
By Terry Hancock
Location: GrandMesaObservatory.com Purdy Mesa, Colorado
Dates: 7th September 2019
Color 245 min, 49 x 300 sec
Camera: QHY367C
Offset 76, Gain 2850; calibrated with Flat, Dark & Bias
Optics: Takahashi E-180 F2.8 Astrograph
Filter: Optolong L-Enhance Duo-Narrowband
Mount: Paramount GT1100S
Image Acquisition software Maxim DL6.0
Pre Processed using Pixinsight
Post Processed using Photoshop CC
Some of my previous images of The Cygnus Loop
www.flickr.com/photos/terryhancock/albums/72157656480303673
About The Cygnus Loop
Containing many components such as the Eastern & Western Veil Nebulae (NGC 6960, NGC 6992, NGC 6995, IC 1340), Pickering's Triangle, NGC 6974 and NGC 6979, the Cygnus Loop is a supernova remnant: the expanding cloud of diverse elements created in the most powerful of explosions, a supernova.
As a massive star nears the end of its life, it runs out of hydrogen fuel and begins fusing helium. After exhausting its supply of helium it begins to fuse heavier elements until finally the star's core can no longer exert enough outward pressure and it collapses. A shock wave rebounds through the star so fierce that the star is shredded, leaving behind a small but extremely dense body: either a neutron star or a black hole.
The progenitor of this supernova remnant exploded more than 5,000 years ago, and over the course of the past five millennia the material has been racing away in all directions. The Cygnus Loop now occupies a vast region of sky, equal to 36 full moons!
A supernova seeds the interstellar medium with all types of heavy elements. In fact, every single atom of elements heavier than iron was created in this type of event, including many in your own body.
To advance beyond the somewhat colorless result of combining an OSC camera with a broadband LPF, the integrated image was first separated into starless and stars-only components. The starless image was then split into its RGB components, which were individually weighted and recombined using LRGB Combination, followed by further processing.
Scope: WO Zenith Star 81mm f/6.9 with WO 6AIII Flattener/Focal Reducer x0.8
OSC Camera: ZWO ASI 2600 MC Pro at 100 Gain and 50 Offset
Mount: iOptron GEM28-EC
Guider: ZWO Off-Axis Guider
Guide Camera: ZWO ASI 174mm mini
Light Pollution Filter: Chroma LoGlow Broadband LPF
Date: 30-31 March 2023 and 2-5 April 2023
Location: Washington D.C.
Exposure: 244x300s subs (= 20.3 hours)
Software: Pixinsight
Processing Steps:
Preprocessing: FITS data > Image Calibration > Cosmetic Correction > Subframe Selector > Debayer > Select Reference Star and Star Align > Image Integration.
Linear Postprocessing: Integrated image > Rotation > Dynamic Crop > Dynamic Background Extractor (subtraction to remove light pollution gradients and division for flat field corrections) > Background Neutralization > Color Calibration > Blur Xterminator > Noise Xterminator.
Nonlinear Postprocessing: Linear postprocessed image > Histogram Transformation > Star Xterminator to separate into Starless and Stars Only images.
Starless image > Histogram Transformation > Noise Xterminator > Local Histogram Equalization > Split RGB Channels > Weight the original channels and use Pixel Math to generate new modified RGB channels.
Apply HDR Multiscale Transform to the L channel (= R channel for broad band image) and the new modified RGB channels.
LRGB combination > LRGB image.
LRGB image > Curves Transformation using color masks > Histogram Transformation (multiple steps as needed) > Local Histogram Equalization (multiple steps as needed) > Final Starless image.
Pixel Math to combine the Final Starless Image and the new Stars Only image > Rejoined image.
Rejoined image > Dark Structure Enhancement > New rejoined image.
New rejoined image > Topaz AI > AI image.
Pixel Math to combine New rejoined image and AI image > Final result.
This nebula is also known as IC 443 and Sh2-248. Decided to go for a spoopy looking palette on this in celebration of Halloween. [Also made a starless version](i.imgur.com/HlsYOwV.jpg) to better show the fainter nebulosity in the image. Captured over 6 nights in September and October, 2021 from a Bortle 6 zone.
---
**[Equipment:](i.imgur.com/6T8QNsv.jpg)**
* TPO 6" F/4 Imaging Newtonian
* Orion Sirius EQ-G
* ZWO ASI1600MM-Pro
* Skywatcher Quattro Coma Corrector
* ZWO EFW 8x1.25"/31mm
* Astronomik LRGB+CLS Filters- 31mm
* Astrodon 31mm Ha 5nm, Oiii 3nm, Sii 5nm
* Agena 50mm Deluxe Straight-Through Guide Scope
* ZWO ASI-290mc for guiding
* Moonlite Autofocuser
**Acquisition:** 17 hours 5 minutes (Camera at Unity Gain, -15°C)
* Ha- 89x360"
* Oiii- 86x360"
* Darks- 30
* Flats- 30 per filter
**Capture Software:**
* Captured using [N.I.N.A.](nighttime-imaging.eu) and PHD2 for guiding and dithering.
**[PixInsight Processing:](www.youtube.com/watch?v=u7FuApFSGuA)**
* BatchPreProcessing
* SubframeSelector
* StarAlignment
* [Blink](youtu.be/sJeuWZNWImE?t=40)
* ImageIntegration
* DrizzleIntegration
**Linear:**
* DynamicCrop
* AutomaticBackgroundExtractor
* DynamicBackgroundExtraction
* EZ Decon + Denoise (per channel)
* STF applied via HistogramTransformation to bring each channel nonlinear
**Combining Channels:**
* Pixelmath to make RGB image using /u/dreamsplease's palette:
> R=iif(Ha > .15, Ha, (Ha*.8)+(Oiii*.2))
> G=iif(Ha > 0.5, 1-(1-Oiii)*(1-(Ha-0.5)), Oiii*(Ha+0.5))
> B=iif(Oiii > .1, Oiii, (Ha*.3)+(Oiii*.2))
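The palette above can be sketched in scalar form (iif(c, a, b) is PixelMath's ternary operator: a where c is true, else b; the sample pixel values are illustrative):

```python
def palette(Ha, Oiii):
    """Per-pixel Ha/Oiii -> RGB mapping from the PixelMath expressions above."""
    R = Ha if Ha > 0.15 else Ha * 0.8 + Oiii * 0.2
    G = 1 - (1 - Oiii) * (1 - (Ha - 0.5)) if Ha > 0.5 else Oiii * (Ha + 0.5)
    B = Oiii if Oiii > 0.1 else Ha * 0.3 + Oiii * 0.2
    return R, G, B

# A bright-Ha, faint-Oiii pixel lands mostly in red:
r, g, b = palette(0.6, 0.05)
```

The conditionals route strong signal straight through while blending the two channels in fainter regions, which keeps the background from going monochrome red.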
**Nonlinear:**
* Extract synthetic luminance channel > LRGBCombination for chrominance noise reduction
* LRGBCombination with Ha as luminance
* Shitloads of CurveTransformations to adjust lightness, contrast, saturation, hues, etc.
* MultiscaleLinearTransform for noise reduction
* LocalHistogramEqualization
* More curves
* EZ Star reduction
* NoiseGenerator to add noise back into reduced stars
* Another round of MLT
* Even more curves
* Resample to 60%
* Annotation
Recently took a phenomenal trip out to the town of Marathon, TX in Big Bend country. While we hiked the park during the day, the Marathon Motel was absolutely perfect at night to do astrophotography from the concrete pads and other AP-related facilities they have.
On the third night, I decided to bust out my trusty H-Alpha modded Canon T3i and the good old SkyWatcher Star Adventurer to do a widefield image on the Orion constellation. I've been wanting an image like this for almost 4 years now, ever since I took my first astrophotos in almost the same area as Marathon. This one was a quick and easy process but boy does the modded camera bring out Barnard's Loop well.
Total Exposure Time: 2.6 hours
Equipment:
- Modified Canon T3i (600D)
- EF 35mm f/2 lens (set to f/4)
- Skywatcher Star Adventurer
- Intervalometer
--------------------------------------------
Acquisition:
- 53 x 180" lights @ ISO 1600
- 25 flats
---------------------------------------------
Processing:
- Calibrated and integrated in PixInsight
- NoiseEvaluation script to determine channel with least noise
- Split image into RGB channels and LinearFit B and G channels to red to get rid of color cast
- ChannelCombination to recombine the channels
- DynamicCrop to crop edges
- DynamicBackgroundExtraction
- ArcsinhStretch to bring to nonlinear
- Duplicate image and remove stars from duplicate via Starnet
- CurvesTransformation on starless image for contrast and further curves for saturation boost
- PixelMath: Original image - Starnet'd image to get stars_only image
- PixelMath: Stars_only*0.65 + Starless image to add back in stars
- Save and Export
The World Economic Forum: “The only way to stop the exponential propagation of a COVID-like cyber threat is to fully disconnect the millions of vulnerable devices [social distancing and quarantining of all electronic devices] from one another and from the internet.” Great, a repackaged plandemic made in a computer lab! Build (the internet) Back Better! The Great (computer) Reset! The official narrative: a nonlinear, three-dimensional, AI virus has attacked the internet! This fake Cyber Pandemic will shut down the internet, and they will swap it out for the Internet 2.0; the full rollout of the real metaverse; a tightly controlled internet/virtual world that will be linked to a Universal Digital ID number and Social Credit Score System. This new (non-infected) internet will be the safest space ever, with no misinformation or disinformation (just NWO propaganda). Ironically, its critics will nickname it: the Ministry of Truth.
Eventually the New World Order will announce: everyone must be microchipped! In order to save the planet, we must monitor everyone’s carbon footprint—as you can see, we are in a Climate Pandemic! This pandemic is causing worldwide famines and plagues. Therefore we must use smartchip biometric surveillance technology to track the rationing of water, food, and medicine. Through AI monitoring, we will know that everything is being rationed out properly. This will not only ensure equality for all, but it will also cut out all mismanagement and theft. Transhumanism, here we come! 666 smartchip hacks: neural monitoring by AI-driven mind control technology. 666 smartchip jab injuries: “And the agony they suffered was like that of the sting of a scorpion when it strikes. During those days people will seek death but will not find it; they will long to die, but death will elude them.”
BTW The Beast will be immune to all lawsuits! And those who refuse the smartchip jab will not be able to buy or sell. In fact, they will be enemies of the state. They will be hunted down and sentenced to death. They will face the guillotine, as the transhumans cheer: death to humanity! “And I saw the souls of those who had been beheaded because of their testimony about Jesus and because of the word of God. They had not worshiped the Beast or his Image and had not received his Mark on their foreheads or their hands.”
“China is in the process of fulfilling what Stalin, Hitler and Mao could only dream about: The flawless totalitarian state, powered by digital technology, where the individual has nowhere to flee from the all-seeing eye of the Communist state.” The same thing will happen worldwide as countries implement a Central Bank Digital Currency, a Biometric Digital ID, and a Chinese style Social Credit Score System, which will ultimately lead to the microchipping of humanity (transhumanism).
NGC 6914 is just [a reflection nebula in Cygnus](i.imgur.com/P7R1lmn.jpg), although since this is a narrowband image, the reflection nebula doesn't show up as well compared to the emission nebulae surrounding it. I was just zooming around in Stellarium looking for something cool to shoot while the moon was up and settled on this region. I also made a [starless version](i.imgur.com/E2vfQl9.jpg) to better show off the fainter nebulosity. Captured over 6 nights from November 10th to 19th, 2021 from a Bortle 6 zone.
---
**[Equipment:](i.imgur.com/6T8QNsv.jpg)**
* TPO 6" F/4 Imaging Newtonian
* Orion Sirius EQ-G
* ZWO ASI1600MM-Pro
* Skywatcher Quattro Coma Corrector
* ZWO EFW 8x1.25"/31mm
* Astronomik LRGB+CLS Filters- 31mm
* Astrodon 31mm Ha 5nm, Oiii 3nm, Sii 5nm
* Agena 50mm Deluxe Straight-Through Guide Scope
* ZWO ASI-290mc for guiding
* Moonlite Autofocuser
**Acquisition:** 15 hours 54 minutes (Camera at Unity Gain, -15°C)
* Ha- 57x360"
* Oiii- 52x360"
* Sii- 50x360"
* Darks- 30
* Flats- 30 per filter
**Capture Software:**
* Captured using [N.I.N.A.](nighttime-imaging.eu) and PHD2 for guiding and dithering.
**[PixInsight Processing:](www.youtube.com/watch?v=u7FuApFSGuA)**
* BatchPreProcessing
* StarAlignment
* [Blink](youtu.be/sJeuWZNWImE?t=40)
* ImageIntegration
* DrizzleIntegration (2x, Var β=1.5)
**Linear:**
* DynamicCrop
* DynamicBackgroundExtraction
* EZ Decon + Denoise (Ha only)
* STF applied via HistogramTransformation to bring each channel nonlinear
**Combining Channels:**
* Pixelmath to make RGB image using /u/CrazyPanda741_'s palette:
>R= iif(Sii > .1, Sii, (Ha\*.7))
>G= iif(Ha > 0.7, 1-(1-Oiii)\*(1-(Ha-0.3)), Oiii \*(Ha+0.7))
>B= iif(Oiii > .1, Oiii, (Ha\*.3)+(Oiii\*.2))
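For anyone wanting to prototype this palette outside PixInsight, the iif expressions translate directly to element-wise conditionals. A minimal numpy sketch (the toy pixel values and the [0, 1] normalization are my assumptions; PixelMath's iif(cond, a, b) behaves like numpy.where):

```python
import numpy as np

# Toy 2x2 "channel masters" normalized to [0, 1] -- placeholder values, not real data
ha   = np.array([[0.9, 0.2], [0.5, 0.05]])
oiii = np.array([[0.3, 0.05], [0.6, 0.2]])
sii  = np.array([[0.2, 0.05], [0.4, 0.02]])

# PixelMath's iif(cond, a, b) is an element-wise conditional, i.e. np.where
R = np.where(sii > 0.1, sii, ha * 0.7)
G = np.where(ha > 0.7, 1 - (1 - oiii) * (1 - (ha - 0.3)), oiii * (ha + 0.7))
B = np.where(oiii > 0.1, oiii, ha * 0.3 + oiii * 0.2)

rgb = np.clip(np.dstack([R, G, B]), 0, 1)  # stack channels into an HxWx3 image
```

This makes it easy to eyeball what each branch does to a pixel: e.g. a strong-Ha pixel (ha = 0.9, oiii = 0.3) lands on the screen-style G branch, 1 − 0.7 × 0.4 = 0.72.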
**Nonlinear:**
* LRGBCombination with Ha as luminance
* Shitloads of CurveTransformations to adjust lightness, contrast, saturation, hues, etc.
* SCNR to partially remove greens
* Extract L channel > LRGBC again for chrominance noise reduction
* ACDNR
* ColorSaturation
* LocalHistogramEqualization
> two rounds of this, one at size 16 kernel for the finer ‘feathery’ details, and one at 200 for larger structures
* more curves
* DarkStructureEnhancement
* EZ Star reduction
* NoiseGenerator to add noise back into reduced stars
* Resample to 60%
* Annotation
Most of them are gas and dust clouds, but this is a rare capture of an elusive oil-type nebula - I was doing some mount tracking tests on Sirius a few days ago and didn't bother taking off, or cleaning, the UV filter I use to keep the thing from getting damaged when I crawl through bushes.
Single 120s shot on an iOptron SkyGuider Pro mount that finally works right now that I took a screwdriver to it, Canon 800D at ISO 800, Canon 400mm f5.6 lens at f6.3, 10bux no-name filter I touched as I was putting the camera on the mount. Processed in PixInsight as follows:
***Linear processing
*Debayer
*DBE
*PCC
*Denoise using Jon Rista's method (jonrista.com/the-astrophotographers-guide/pixinsights/eff...) and the EZDenoise script
***Nonlinear
*Initial stretch: EZSoftStretch
**Denoising:
*ACDNR denoise - chrominance only, lightness mask 2 wavelet layers, mids 0.4 shadows 0.2 highs 0.8. Stdev 4 iterations 6 structure 5
*EZDenoise - TGV str 5 edge protection -3, MMT default
**Starless/stars processing:
* split with StarNet into starless + stars; the split looks super clean, so we will process these separately
**Starless:
* LHE kernel 64 contrast limit 1.5, 25% mix with original
* curves: s-shaped contrast in RGB/K, sat boost
* histogram: shadows 0.1
* denoise: for a mask, extract luminance, make it only cover Sirius and the flare. Then apply MMT with EzDenoise settings, on chrominance only
**Stars:
* SCNR green
* boost saturation heavily
* star reduction, Adam Block method implemented by EZSuite, using own luminance as mask
* curves contrast and sat boost
* Merge
* MMT sharpen layers 2-6 (biases 0.05 0.025 0.025 0.012 0.005), 0.2 mix with original
Final tweaking of nebula colors and extra denoising in Darktable
In its simple definition, Chaos defines a state of confusion and disorder. We need only look at US politics currently to see examples!
“Chaos” is also the name given to the void that was before the ordered universe came to be – it is the opposite of “cosmos”.
The more recent use of the word is as the description of “. . . the science of surprises, of the nonlinear and the unpredictable.” - The Fractal Foundation
Edward Lorenz (who coined the term “The Butterfly Effect”) said, “Chaos: When the present determines the future, but the approximate present does not approximately determine the future.” So, the flapping of a butterfly’s wings – a very small influence on initial conditions -- can have large effects on the later outcome.
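Lorenz's definition can be demonstrated in a few lines with the logistic map x → r·x·(1 − x), a standard toy model of chaos (the parameter r = 4 and the two starting values below are illustrative choices of mine, not anything from the sources quoted here):

```python
def logistic_orbit(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x) and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two "approximately equal" presents, differing by one part in a billion
a = logistic_orbit(0.200000000)
b = logistic_orbit(0.200000001)

# The map is fully deterministic, yet the tiny initial difference is amplified
# roughly exponentially, so the two futures soon bear no resemblance.
gap_start = abs(a[0] - b[0])
gap_end = abs(a[-1] - b[-1])
```

The present determines the future exactly (same x0 always gives the same orbit), but the approximate present does not approximately determine it, which is Lorenz's point in miniature.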
“Chaos” is the title of the book that is on the top of my library “place a hold” list. Written by James Gleick in 1987, it records the birth of a new science. It was a finalist for both The Pulitzer Prize and The National Book Award in 1987, and is still widely read today.
So, for The words hatch today I give you “chaos”. The Hereios are visiting this group, which was created “as a tribute to the little miracle that takes place whenever we find that ‘good word’.”
Is your 365 project in chaos? Get some new ideas from The Hereios at We’re Here!
Submitted to the “Spotlight Your Best” challenge, Book Titles .
Submitted to Vivid Art Challenge No. 1:
www.flickr.com/groups/2817915@N22/discuss/72157665385904265/
Description: This image of the Elephant’s Trunk Nebula IC 1396 was developed from 37x300s subs, or 3.08 hours of total exposure time. A dual bandpass integrated image was first separated into Starless and Stars Only images. The Starless image was split into its RGB components, which were individually boosted as appropriate, followed by the application of appropriate weighting factors to the individual RGB channels and then LRGB Combination. The resulting image was then recombined with the Stars Only image, and the result was post-processed with various color masks using Curves Transformation to generate the final image.
Date / Location: 12 July 2023 / Washington D.C.
Equipment:
Scope: WO Zenith Star 81mm f/6.9 with WO 6AIII Flattener/Focal Reducer x0.8
OSC Camera: ZWO ASI 2600 MC Pro at 100 Gain and 50 Offset
Mount: iOptron GEM28-EC
Guider: ZWO Off-Axis Guider
Guide Camera: ZWO ASI 174mm mini
Focuser: ZWO EAF
Light Pollution Filter: Optolong L-eXtreme Dual Bandpass
Processing Software: Pixinsight
Processing Steps:
Preprocessing:
I preprocessed 37x300s subs (= 3.08 hours) in Pixinsight to get an integrated image using the following process steps: Image Calibration > Cosmetic Correction > Subframe Selector > Debayer > Select Reference Star and do a Star Align > Image Integration.
Linear Postprocessing:
Dynamic Crop > Dynamic Background Extractor (doing subtraction to remove light pollution gradients and division for flat field correction) > Background Neutralization > Color Calibration > Blur Xterminator > Noise Xterminator.
Nonlinear Postprocessing and additional steps:
Histogram Transformation > Star Xterminator to create Starless and Stars Only Images.
Starless Image > Noise Xterminator > Local Histogram Equalization > Multiscale Median Transform > Curves Transformation to boost O(III) and H-alpha signals > Split RGB channels > Create new green and blue channels > Boosted the channels as appropriate > LRGB Combination > Curves Transformation using various color masks.
Stars Only Image > Morphological transformation.
Pixel Math to combine the Starless Image with the Stars Only Image to get a Rejoined Image.
Rejoined Image > Dark Structure Enhancement > Topaz AI.
Pixel Math to combine the non-AI Rejoined Image with the Topaz AI Image to get a final image.
Scope: WO Zenith Star 81mm f/6.9 with WO 6AIII Flattener/Focal Reducer x0.8
OSC Camera: ZWO ASI 2600 MC Pro at 100 Gain and 50 Offset
Mount: iOptron GEM28-EC
Guider: ZWO Off-Axis Guider
Guide Camera: ZWO ASI 174mm mini
Light Pollution Filter: Chroma LoGlow Broadband
Date: 14, 18 February 2023
Location: Washington D.C.
Exposure: 81x300s subs (= 6.75 hours)
Software: Pixinsight
Processing Steps:
Preprocessing: FITS data > Image Calibration > Cosmetic Correction > Subframe Selector > Debayer > Select Reference Star and Star Align > Image Integration.
Linear Postprocessing: Dynamic Crop > Dynamic Background Extractor (subtraction to remove light pollution gradients and division for flat field corrections) > Background Neutralization > Color Calibration > Blur Xterminator > Noise Xterminator.
Nonlinear Postprocessing: Histogram Transformation > Star Xterminator to decompose into Starless and Stars Only images.
Starless image > Histogram Transformation > Noise Xterminator > Local Histogram Equalization.
Apply a First Curves Transformation as appropriate to boost the blue signal from the galaxy's arms. Apply an RGB Split. After adjusting the weights for the individual RGB components (noting that the R serves as both the L channel and the red channel when using an OSC camera), apply LRGB Combination to get a blue boosted image.
Apply a Second Curves Transformation as appropriate to boost the red signal from the galaxy's core. Apply an RGB Split. After adjusting the weights for the individual RGB components (noting that the R serves as both the L channel and the red channel when using an OSC camera), apply LRGB Combination to get a red boosted image.
Use Pixel Math to combine 0.5 x red boosted image + 0.5 x blue boosted image to get a Composite image.
Use Pixel math again to combine 0.75 x Composite image + 0.25 x an HDR Multiscale Transform-modified Composite image to get a New Composite image.
Process the New Composite image with Curves Transformation using color masks.
Apply Histogram Transformation and Local Histogram Equalization to get a Final Starless image.
Use Pixel Math to rejoin the Final Starless image with the Stars Only image (modified by a Morphological Transformation if needed) to get a rejoined image.
Rejoined image > Topaz Labs > DeNoise AI > Gigapixel AI.
Use Pixel Math to combine 0.25 x non-AI Composite + 0.75 x Gigapixel AI = Final Result.
WR-134 is a Wolf-Rayet star located in Cygnus. The star, located about 6000 light years from Earth, is surrounded by an Oxygen-dominated nebula that is being blown around by radiation and winds from the WR star itself. The star is about 5 times the radius of our sun and was one of the first three stars in Cygnus observed to have intense emission lines rather than continuous or absorption lines; this class of stars is now known as Wolf-Rayet stars.
This image is an HOO bicolor image totaling about 34.7 hours of exposure time taken over 7 nights.
This image will eventually be mosaicked with the Crescent Nebula, which I am currently halfway done gathering data for.
---------------------------------------
Equipment:
- Scope: AstroTech AT65EDQ Quad APO
- Mount: Rowan Belt Modded Orion Sirius EQ-G
- Imaging Camera: ZWO ASI1600mm-Pro
- Guidescope: QHY MiniGuideScope
- Guide Cam: QHY5L-II Mono
----------------------------------------
Software:
- EQMOD/StellariumScope for mount control
- APT for image capture and platesolving
- PHD2 for guiding
- PixInsight for Processing
----------------------------------------
Acquisition:
- Location: Sugar Land, TX
- Gain: 200; Offset: 50; Camera @ -10C
- 195 x 300" Ha - Chroma 5nm
- 221 x 300" OIII - Chroma 3nm
- 30 flats, flat darks, and darks (from library) per session
- Nights: 7/30/20, 8/3/20, 8/4/20, 8/6/20, 8/7/20, 8/10/20, 8/11/20
- Total Integration Time: 34.7 Hours
----------------------------------------
Processing (Apply to each Ha and OIII master):
- ImageIntegration
- DrizzleIntegration
- DynamicCrop to get rid of edges
- DBE
- Deconvolution
- Linear noise reduction via EZDenoise Script
- HistogramTransformation for Stretch
- HistogramTransformation for contrast
- CurvesTransformation to bring out nebulosity
- DarkStructureEnhance script (Ha only)
-----------------------------------------
Combining Ha and OIII Masters via the following Pixelmath Combination:
- R: iif(ha>.15,ha,(ha*.8)+(oiii*.2))
- G: iif(ha>0.5,1-(1-oiii)*(1-(ha-0.5)),oiii*(ha+0.5))
- B: iif(oiii>.1,oiii,(ha*.3)+(oiii*.2))
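A detail worth noting about this combination: the G expression is piecewise but seamless, because both branches reduce to plain oiii exactly at the ha = 0.5 switchover. A quick pure-Python check (iif written as an if/else; the pixel values are made up):

```python
def hoo_green(ha, oiii):
    """Green channel per pixel: PixelMath's iif(cond, a, b) as an if/else."""
    if ha > 0.5:
        # screen-style blend that brightens G where Ha is strong
        return 1 - (1 - oiii) * (1 - (ha - 0.5))
    return oiii * (ha + 0.5)

# At ha = 0.5 the upper branch gives 1 - (1 - oiii) * 1 = oiii,
# and the lower branch gives oiii * 1.0 = oiii, so there is no visible seam.
oiii_val = 0.4
upper = 1 - (1 - oiii_val) * (1 - (0.5 - 0.5))
lower = oiii_val * (0.5 + 0.5)
```

The same continuity check applies to the palette's R and B branches only approximately, which is why small threshold tweaks can show faint edges in those channels.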
-------------------------------------------
Further Processing:
- Invert -> SCNR Green -> Invert back to remove magenta cast
- Export to Photoshop for slight color tweaking
- Bring back to PI for curves on hue and contrast
- Pushed nebulosity by creating a starless version of the image and then using the following Pixelmath Expression:
- 1- (1-$T)*(1-$T)
- Recombined this enhanced nebulosity version with the original version via the following Pixelmath Expression:
- F=0.4; (1- (1-$T)*(1-starless)*F)+($T*~F)
- Additional CurvesTransformation for contrast
- LocalHistogramEqualization on ring nebula with RangeMask
- Create star mask via Starnet++, binarize and convolve the star mask
- MorphologicalTransformation to reduce star size
- CurvesTransformation to shift the star hue from rusty red to red
- Extract blue channel, apply it as a mask, and use CurvesTransformation to emphasize the OIII emissions and balance some color
- Nonlinear noise reduction via ACDNR and luminance mask
- Final Export
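The nebulosity push above is a screen blend, screen(a, b) = 1 − (1 − a)(1 − b): screening the image with itself lifts faint signal far more than highlights, and the F term then mixes the result back at reduced opacity. A numpy sketch (toy pixel values; I am reading ~F as PixelMath's 1 − F and taking F as the opacity of the screened term, which is my parenthesization of the expression, not a verbatim copy):

```python
import numpy as np

T = np.array([0.05, 0.2, 0.6, 0.9])   # placeholder pixel values in [0, 1] ($T)
starless = T * 0.95                   # stand-in for the starless version

# Screening an image with itself brightens faint signal more than highlights
boosted = 1 - (1 - T) * (1 - T)

# Mix the screened (image x starless) result back at strength F
F = 0.4
blended = F * (1 - (1 - T) * (1 - starless)) + (1 - F) * T
```

Screening never darkens: 1 − (1 − t)² = t(2 − t) ≥ t on [0, 1], with the largest absolute gain in the mid-tones, which is why it works as a nebulosity push without clipping highlights.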