All Photos Tagged Nonlinear
M97 (aka the Owl Nebula) and M108 (aka the Surfboard Galaxy) just happen to appear roughly the same size when viewed from Earth, even though M108 is about 46 million light years outside of our galaxy (M97 is ~2,600 ly away). I ended up shooting these two almost 3 years ago from this same driveway, but I wanted to see how much of an improvement my monochrome camera would be over a DSLR. Captured on March 11, 21, and 22, 2021 from a Bortle 6 zone.
---
**[Equipment:](i.imgur.com/6T8QNsv.jpg)**
* TPO 6" F/4 Imaging Newtonian
* Orion Sirius EQ-G
* ZWO ASI1600MM-Pro
* Skywatcher Quattro Coma Corrector
* ZWO EFW 8x1.25"/31mm
* Astronomik LRGB+CLS Filters- 31mm
* Astrodon 31mm Ha 5nm, Oiii 3nm, Sii 5nm
* Agena 50mm Deluxe Straight-Through Guide Scope
* ZWO ASI-120MC for guiding
* Moonlite Autofocuser
**Acquisition:** 5 hours 56 minutes (Camera at Unity Gain, -15°C)
* Lum - 91x120"
* Red - 29x120"
* Green - 29x120"
* Blue - 29x120"
* Darks- 30
* Flats- 30 per filter
**Capture Software:**
* Captured using [N.I.N.A.](nighttime-imaging.eu) and PHD2 for guiding and dithering.
**PixInsight Processing:**
* BatchPreProcessing
* StarAlignment
* [Blink](youtu.be/sJeuWZNWImE?t=40)
* ImageIntegration
* DrizzleIntegration (2x, Var β=1.5)
**Linear:**
* StarAlign undrizzled R, G, and B stacks to Drizzled Lum
* DynamicCrop
* DynamicBackgroundExtraction
**Luminance:**
* EZ Denoise (Lum only)
* HistogramTransformation to stretch nonlinear
**RGB**
* Channel Combination
* DBE again
* PhotometricColorCalibration
* SCNR green
* HistogramTransformation to stretch nonlinear
**Nonlinear:**
* LRGBCombination with Lum as luminance
* CurveTransformations to adjust lightness, saturation, contrast, hues, etc.
* ACDNR
* EZ Star Reduction
* More Curves
* Invert > SCNR > Invert to remove magenta glow around a star
* Resample to 60%
* Annotation
Scope: WO Zenith Star 81mm f/6.9 with WO 6AIII Flattener/Focal Reducer x0.8
OSC Camera: ZWO ASI 2600 MC Pro at 100 Gain and 50 Offset
Mount: iOptron GEM28-EC
Guide Scope: ZWO ASI 30mm f/4
Guide Camera: ZWO ASI 120mm mini
Light Pollution Filter: Optolong L-eXtreme Dual Bandpass
Date: 20, 25, 26 June 2022
Location: Washington D.C.
Exposure: 60x300s subs (= 5.0 hours)
Software: Pixinsight
Processing Steps:
Preprocessing: FITS data > Image Calibration > Cosmetic Correction > Subframe Selector > Debayer > Select Reference Star and Star Align > Image Integration.
Linear Postprocessing: Integrated image > Dynamic Background Extractor (subtraction to remove light pollution gradients and division for flat field corrections) > Background Neutralization > Color Calibration > Blur Xterminator > Noise Xterminator.
Nonlinear Postprocessing: Linear postprocessed image > Histogram Transformation > Star Xterminator to decompose into Starless and Stars Only images.
Starless image > Histogram Transformation > Noise Xterminator > Local Histogram Equalization > Curves Transformation to boost the blue and green colors for the O(III) signal > Split RGB Channels > Adjust the weights of the RGB channels and use Pixel Math to generate new modified RGB channels.
Apply LRGB Combination to the new modified RGB channels with the L channel unchecked and the RGB channels checked with the new modified B and G being the sources of the G and B channels, respectively, to get a new RGB color image.
Apply HDR multiscale transform to both the L channel image (= the R channel image for an OSC camera) and the new RGB color image.
Reapply LRGB combination specifically to the new RGB color image this time with the L channel checked and the RGB channels unchecked to get a new Starless image.
Process the new Starless image with Curves Transformation using color masks.
Use Pixel Math to rejoin the processed new Starless image with the Stars Only image to get a rejoined image.
Rejoined image > Topaz Labs > DeNoise AI > Gigapixel AI.
Use Pixel Math to combine 25% x Rejoined image + 75% x AI image = Final Result.
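For readers who want to see the arithmetic behind that final blending step, here is a minimal numpy sketch (not PixInsight itself; it assumes both images are already registered float arrays scaled to [0, 1]):

```python
import numpy as np

def blend_final(rejoined: np.ndarray, ai_processed: np.ndarray) -> np.ndarray:
    """Equivalent of the Pixel Math expression 0.25*Rejoined + 0.75*AI,
    i.e. a simple per-pixel weighted average."""
    return np.clip(0.25 * rejoined + 0.75 * ai_processed, 0.0, 1.0)
```

The same pattern (a*imageA + (1 - a)*imageB) covers the other weighted Pixel Math combinations mentioned in these notes.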
If you have followed my blog at all, you have certainly noticed that one specific aspect of my pictures is connected to colors. For some unknown reason I've always been very drawn to colors when dealing with photography (I cannot do b&w stuff). I could even say that because of this preference my early experiences with digital cameras were mostly disappointing: I couldn't stand the standard look of the early digital cameras because the colors were so different and flat compared to the golden era of film photography. And because of this disappointment, 'getting good colors' became the thing that I kept chasing for years.
It's one thing to recognize that the flat look and 'the standard and objective' colors, which most current digital cameras provide out of the box, don't actually carry any resemblance to the visual legacy left from the era of film photography. Just take a look at, for example, old Kodachrome slides and compare them to your JPEGs: you should see two very different interpretations of colors, and also become aware that these differences are not only about 'technological advancement' but also about artistic choices. Behind Kodachrome (and other films) there is an artistic interpretation of 'what looks good' and how colors should be reproduced. JPEG, on the other hand, is much more 'objective' (read: flat), with some minor contrast and skin color correction thrown in. No wonder standard JPEGs from the camera look so boring (though they have really been getting better with time).
While it's easy to point a finger at JPEGs, it's an entirely different thing to determine what 'good colors' are. For example, while I like film-era colors, I don't think we should concentrate on duplicating them as they were – instead I think we should bring in some influences from that visual era and then continue to define what good colors are in 21st-century photography. Rather than ready-made Lightroom presets, this calls for a cultivated taste regarding colors, which is much harder. I would love to transfer something from the film-era legacy to today's photography, but at the same time I don't want photographs to look like they were taken 20 or 30 years ago (read: faded look), as I think it is intellectually dishonest to add a feeling of nostalgia to a picture with a digital filter. Like I said, it's a difficult question.
So how have I solved this 'getting good colors' so far? I believe everyone has to develop their own 'theory of good colors' and a 'methodology' to get there eventually. For me it's a three-part response: Zeiss glass, VSCO presets as a starting point, and editing. I used to search for clear and bright colors and ultimately found my answer in Zeiss lenses. I'll be the first to admit that there are other manufacturers out there who deliver great equipment color-wise, but for some reason I found Zeiss to provide those small nuances which made the difference to me (from my current setup I think the Batis 2/25 is the best, followed by the Touit 2.8/50M). Then I use VSCO presets as a starting point. They give me an easy way to explore a whole bunch of different looks which I would never come across without them. Do they look like film? In some cases yes, but I usually erase the vintage look by editing. If I were to recommend some of their film packs, I would say that Film Pack 04 is great and Film Pack 07 provides pretty nice starting points for general exploration as well – the other ones are way too vintage for my taste. But even with Zeiss glass and VSCO presets it comes down to editing. Sometimes it's easy to see what the picture asks for, but sometimes it takes much longer to realize what is wrong with the colors or how I should adjust them. It looks like I will never get rid of this task, as much as I would like to. Like I said, colors are a complicated thing once you step outside the supposition of objective colors.
Ps. For this particular picture I used the Provia 100F emulation from VSCO Film Pack 04. One of my favorites, but it most often provides way too much contrast, which is frustrating with some images. It also shifts the white balance in a nonlinear way that is difficult to edit. I've also noticed that one cannot just slap a slide film preset on any picture, as the end result would look just bad. Slide film emulations, such as the Provia 100F, work as a good starting point when used in conditions similar to the real-world use of that particular film – in this case that means a lot of sunlight.
Days of Zeiss: www.daysofzeiss.com
Description: This image of the Heart Nebula IC 1805 was developed from 110x300s subs or 9.17 hours of total exposure time. The use of a dual bandpass Light Pollution Filter with an OSC camera, yields an image in which the red from the H-alpha tends to swamp out the teal from the O(III). In order to isolate and develop the O(III) signal, the dual bandpass image was first split into its RGB components, followed by the application of appropriate weighting factors to the green and red channels, further followed by LRGB Combination. The resulting image was post processed using Curves Transformation with various color masks to generate a teal O(III) signal in the final image.
Date / Location: 26-28 January 2023 / Washington D.C.
Equipment:
Scope: WO Zenith Star 81mm f/6.9 with WO 6AIII Flattener/Focal Reducer x0.8
OSC Camera: ZWO ASI 2600 MC Pro at 100 Gain and 50 Offset
Mount: iOptron GEM28-EC
Guider: ZWO Off-Axis Guider
Guide Camera: ZWO ASI 174mm mini
Focuser: ZWO EAF
Light Pollution Filter: Optolong L-eXtreme Dual Bandpass
Processing Software: Pixinsight
Processing Steps:
Preprocessing:
I preprocessed 110x300s subs (= 9.17 hours) in Pixinsight to get an integrated image using the following process steps: Image Calibration > Cosmetic Correction > Subframe Selector > Debayer > Select Reference Star and do a Star Align > Image Integration.
Linear Postprocessing:
Rotation > Dynamic Background Extractor (doing subtraction to remove light pollution gradients and division for flat field correction) > Background Neutralization > Color Calibration > Blur Xterminator > Noise Xterminator.
Nonlinear Postprocessing and additional steps:
Histogram Transformation > Star Xterminator to create Starless and Stars Only Images.
Starless Image > Noise Xterminator > Local Histogram Equalization > Multiscale Median Transform > Curves Transformation to boost O(III) signal > Split RGB channels > Create new green and blue channels > LRGB Combination > Curves Transformation using various color masks.
Stars Only Image > Morphological transformation.
Pixel Math to combine the Starless Image with the Stars Only Image to get a Reinstated Image.
Reinstated Image > Dark Structure Enhancement > Topaz AI.
Pixel Math to combine the non-AI Reinstated Image with the Topaz AI Image to get a final image.
At 60 hours, this is currently the most exposure time I have on a single target, nearly double my previous record on the Cave Nebula.
Sh2-216 is the largest planetary nebula in the sky in terms of angular size, about 3x as wide as the full moon (~1.5 degrees). It's also the second-closest planetary nebula to Earth. Captured over 14 nights from mid-November 2020 to mid-January 2021 from a Bortle 6 zone.
---
**[Equipment:](i.imgur.com/6T8QNsv.jpg)**
* TPO 6" F/4 Imaging Newtonian
* Orion Sirius EQ-G
* ZWO ASI1600MM-Pro
* Skywatcher Quattro Coma Corrector
* ZWO EFW 8x1.25"/31mm
* Astronomik LRGB+CLS Filters- 31mm
* Astrodon 31mm Ha 5nm, Oiii 3nm, Sii 5nm
* Agena 50mm Deluxe Straight-Through Guide Scope
* ZWO ASI-120MC for guiding
* Moonlite Autofocuser
**Acquisition:** 60 hours 10 minutes (Camera at Unity Gain, -20°C)
* Ha- 221x600"
* Oiii- 140x600"
* Darks- 30
* Flats- 30 per filter
**Capture Software:**
* Captured using [N.I.N.A.](nighttime-imaging.eu/) and PHD2 for guiding and dithering.
**PixInsight Processing:**
* BatchPreProcessing
* StarAlignment
* [Blink](youtu.be/sJeuWZNWImE?t=40)
* ImageIntegration
* DrizzleIntegration per panel (2x, Var β=1.5)
**Linear:**
* DynamicCrop
* DynamicBackgroundExtraction
* EZ Denoise (Ha only)
**Combining Channels:**
* LinearFit Oiii to Ha
* PixelMath to map stacks to RGB channels (sketch after this list)
* R= Ha
* G= 0.35 * Ha + 0.65 * Oiii
* B= 0.2 * Ha + 0.8 * Oiii
* ArcsinhStretch + HistogramTransformation to stretch to nonlinear
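A minimal numpy sketch of that bi-color mapping, assuming the linear-fitted Ha and Oiii stacks are loaded as single-channel float arrays in [0, 1] (this is just the arithmetic of the PixelMath expressions above, not PixInsight itself):

```python
import numpy as np

def hoo_to_rgb(ha: np.ndarray, oiii: np.ndarray) -> np.ndarray:
    """Map the two narrowband stacks to an RGB cube with the weights listed above."""
    r = ha
    g = 0.35 * ha + 0.65 * oiii
    b = 0.20 * ha + 0.80 * oiii
    return np.clip(np.dstack([r, g, b]), 0.0, 1.0)
```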
**Stretching Ha:** (method courtesy of /u/xanthine_oxidase; conceptual sketch after this list)
* MaskedStretch to 0.1 background
* Starnet++ starmask made, subtracted from 0.3 Gray image and convolved
* Previous image used as a mask to stretch nebulosity without stretching stars
* Previous two steps were repeated 3X with incremental HistogramTransformations
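The star-protected stretch above can be sketched conceptually as follows; this is only a numpy illustration of how a PixInsight mask limits a stretch (the MTF stand-in and the 0.25 midtones value are assumptions, not the exact MaskedStretch/HistogramTransformation settings):

```python
import numpy as np

def mtf(img: np.ndarray, m: float = 0.25) -> np.ndarray:
    """Midtones transfer function (the core of HistogramTransformation);
    m < 0.5 brightens the image."""
    return (m - 1.0) * img / ((2.0 * m - 1.0) * img - m)

def apply_through_mask(original, processed, mask):
    """PixInsight-style masking: white pixels take the processed value,
    black pixels keep the original, gray pixels blend proportionally."""
    return mask * processed + (1.0 - mask) * original

# One iteration of the method above (repeated ~3x with stronger stretches):
# nebula_mask = convolved (0.3 gray minus star mask) image, values in [0, 1]
# ha = apply_through_mask(ha, mtf(ha), nebula_mask)
```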
**Nonlinear:**
* Stretched Ha added as luminance to stretched RGB
* Several [Curve](i.imgur.com/lyJJ8RD.jpg)Transformations to adjust hue, lightness, saturation, etc.
* ACDNR
* LocalHistogramEqualization
* EZ Star Reduction
* Stars slightly desaturated using Starnet++ mask
* More Curves
* Resample to 60%
* Annotation
Bortle 8 and [balcony roofs](gfycat.com/femaleunrealistichellbender) suck :-(
The galaxies (from left to right) are NGC 5981, 5982, and 5985. Captured between September 12th-15th, 2022.
---
**[Equipment:](i.imgur.com/ejpKkwU.jpg)**
* TPO 6" F/4 Imaging Newtonian
* Orion Sirius EQ-G
* ZWO ASI1600MM-Pro
* Skywatcher Quattro Coma Corrector
* ZWO EFW 8x1.25"/31mm
* Astronomik LRGB+CLS Filters- 31mm
* Astrodon 31mm Ha 5nm, Oiii 3nm, Sii 5nm
* Agena 50mm Deluxe Straight-Through Guide Scope
* ZWO ASI-120mc for guiding
* Moonlite Autofocuser
**Acquisition:** 13 hours 13 minutes (Camera at 75/15 gain/offset, -15°C)
* Lum- 355x30"
* Red- 97x90"
* Green- 99x60"
* Blue- 96x60"
* Darks- 30
* Flats- 30 per filter
**Capture Software:**
* Captured using [N.I.N.A.](nighttime-imaging.eu) and PHD2 for guiding and dithering.
**[PixInsight Processing:](www.youtube.com/watch?v=E6vj_SEZ79k)**
* BatchPreProcessing
* StarAlignment
* [Blink](youtu.be/sJeuWZNWImE?t=40)
* ImageIntegration
**Linear:**
* DynamicCrop
* Dynamic Background extraction
**RGB:**
* ChannelCombination to map monochrome R, G, and B images into a color image
* PhotometricColorCalibration
* Slight SCNR green
* HSV repair
* ArcsinhStretch + HistogramTransformation to bring nonlinear
**Luminance:**
* NoiseXTerminator
* ArcsinhStretch + HistogramTransformation to bring nonlinear
**Nonlinear:**
* LRGBCombination with stretched L as luminance
* Several curves transformations to adjust lightness, contrast, saturation, etc
* SCNR green
* NoiseX again
* More Curves
* Annotation
No idea why it's called this but it's what Stellarium has named it ¯\_(ツ)_/¯. It's also at the very end of the [Kemble's Cascade asterism](www.reddit.com/r/astrophotography/comments/jzm9gp/kembles...), which is one of my favorite objects to view through binoculars (it's basically a straight line of stars over 2 degrees long)
Captured on April 9-11th, 2023 from my bortle 8 apartment balcony.
---
**[Equipment:](i.imgur.com/ejpKkwU.jpg)**
* TPO 6" F/4 Imaging Newtonian
* Orion Sirius EQ-G
* ZWO ASI1600MM-Pro
* Skywatcher Quattro Coma Corrector
* ZWO EFW 8x1.25"/31mm
* Astronomik LRGB+CLS Filters- 31mm
* Astrodon 31mm Ha 5nm, Oiii 3nm, Sii 5nm
* Agena 50mm Deluxe Straight-Through Guide Scope
* ZWO ASI-120mc for guiding
* Moonlite Autofocuser
**Acquisition:** 2 hours 52 minutes (Camera at half Unity Gain, -15°C)
* L- 90x60"
* R - 28x90"
* G - 28x90"
* B - 26x90"
* Darks- 30
* Flats- 30 per filter
**Capture Software:**
* Captured using [N.I.N.A.](nighttime-imaging.eu) and PHD2 for guiding and dithering.
**PixInsight Processing**
* BatchPreProcessing
* StarAlignment
* [Blink](youtu.be/sJeuWZNWImE?t=40)
* ImageIntegration
**Luminance Linear:**
* DynamicCrop
* DynamicBackgroundExtraction
* NoiseXterminator
* ArcsinhStretch+HistogramTransformation to bring nonlinear
**RGB Linear:**
* DynamicCrop
* DynamicBackgroundExtraction
* ChannelCombination
* SpectrophotometricColorCalibration
* HSV Repair
* Slight SCNR green
* ArcsinhStretch + HistogramTransformation to stretch nonlinear
**Nonlinear:**
* added stretched luminance to stretched RGB via LRGBCombination
* DeepSNR
* shitloads of CurveTransformations to adjust hue, lightness, saturation, etc. (some with star masks)
* annotation
In fluid dynamics, dispersion of water waves generally refers to frequency dispersion, which means that waves of different wavelengths travel at different phase speeds. Water waves, in this context, are waves propagating on the water surface, and forced by gravity and surface tension. As a result, water with a free surface is generally considered to be a dispersive medium.
Surface gravity waves, moving under the forcing by gravity, propagate faster for increasing wavelength. For a given wavelength, gravity waves in deeper water have a larger phase speed than in shallower water. In contrast, capillary waves, forced only by surface tension, propagate faster for shorter wavelengths.
Besides frequency dispersion, water waves also exhibit amplitude dispersion. This is a nonlinear effect, by which waves of larger amplitude have a different phase speed from small-amplitude waves.
en.wikipedia.org/wiki/Dispersion_(water_waves)
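For reference (a standard result, not part of the excerpt above), the linear dispersion relation behind those statements can be written as:

```latex
% Linear dispersion relation for gravity-capillary waves on water of depth h
% (g: gravity, \sigma: surface tension, \rho: density, k = 2\pi/\lambda):
\omega^{2} = \left( g k + \frac{\sigma k^{3}}{\rho} \right) \tanh(kh),
\qquad c = \frac{\omega}{k}.
% Deep-water limits: gravity waves have c \approx \sqrt{g/k} (longer waves are
% faster), capillary waves have c \approx \sqrt{\sigma k/\rho} (shorter waves
% are faster); in shallow water \tanh(kh) \approx kh gives c \approx \sqrt{gh}.
```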
Shot this data exactly one year ago, started processing it, forgot about it, and just finished it now. I'd consider this an improvement over my last attempt at this trio in 2019. Captured over 3 nights in December 2021 from a Bortle 6 zone
---
**[Equipment:](i.imgur.com/ejpKkwU.jpg)**
* TPO 6" F/4 Imaging Newtonian
* Orion Sirius EQ-G
* ZWO ASI1600MM-Pro
* Skywatcher Quattro Coma Corrector
* ZWO EFW 8x1.25"/31mm
* Astronomik LRGB+CLS Filters- 31mm
* Astrodon 31mm Ha 5nm, Oiii 3nm, Sii 5nm
* Agena 50mm Deluxe Straight-Through Guide Scope
* ZWO ASI-290mc for guiding
* Moonlite Autofocuser
**Acquisition:** 5 hours 50 minutes (Camera at Unity Gain, -20°C)
* L- 89x120"
* R- 29x120"
* G- 29x120"
* B- 29x120"
* Darks- 30
* Flats- 30 per filter
**Capture Software:**
* Captured using [N.I.N.A.](nighttime-imaging.eu) and PHD2 for guiding and dithering.
**PixInsight Processing:**
* BatchPreProcessing
* SubframeSelector
* StarAlignment
* [Blink](youtu.be/sJeuWZNWImE?t=40)
* ImageIntegration
* DrizzleIntegration (2x, Var β=1.5)
**Linear:**
* DynamicCrop
* DynamicBackgroundExtraction
* EZ Decon/Denoise
* ArcsinhStretch + HistogramTransformation to stretch to nonlinear
**RGB:**
* Channelcombination to combine monochrome images into RGB image
* PhotometricColorCalibration
* HSV repair
* ArcsinhStretch + HistogramTransformation to stretch nonlinear
**Nonlinear:**
* LRGBCombination with luminance
* LRGBCombination with luminance again, inverted background mask used to protect galaxy from chrominance noise reduction
* Shitloads of CurveTransformations to adjust lightness, saturation, contrast, hues, etc.
* MLT noise reduction
* Unsharp mask to further sharpen the galaxy
* LocalHistogramEqualization
* EZ star reduction
* NoiseGenerator to add noise back into star reduced areas
* More curves
* NoiseXTerminator
* few more curves
* Resample to 60%
* annotation
At 101 hours 6 minutes, this is the longest exposure time I've put into a single object (albeit across 6 panels), beating out my previous record of 84 hours. I had shot just the Elephant Trunk itself last year, and this year I decided to revisit it and shoot a mosaic of the entire nebula. Due to my limited horizons at my new apartment (just a sliver of the northwest sky) and a stupidly clear fall, I made 100 hours of exposure time my arbitrary goal for this pic.
This photo was downsampled to 65% of its original 276-megapixel resolution, which puts it *just* under Flickr's 200MB upload limit. All panels were shot over the course of 33 nights spanning September to November 2022 from my Bortle 8 apartment balcony.
---
**[Equipment:](i.imgur.com/ejpKkwU.jpg)**
* TPO 6" F/4 Imaging Newtonian
* Orion Sirius EQ-G
* ZWO ASI1600MM-Pro
* Skywatcher Quattro Coma Corrector
* ZWO EFW 8x1.25"/31mm
* Astronomik LRGB+CLS Filters- 31mm
* Astrodon 31mm Ha 5nm, Oiii 3nm, Sii 5nm
* Agena 50mm Deluxe Straight-Through Guide Scope
* ZWO ASI-120mc for guiding
* Moonlite Autofocuser
**Acquisition:** 101 hours 6 minutes (Camera at Unity Gain, -15°C)
* Ha - Roughly 5.5 hours of 360" exposures per panel
* Oiii - Roughly 5.5 hours of 360" exposures per panel
* Sii - Roughly 5.5 hours of 360" exposures per panel
> [Exact exposure breakdown here](i.imgur.com/pXgopjI.png)
* Darks- 30
* Flats- 30 per filter
**Capture Software:**
* Captured using [N.I.N.A.](nighttime-imaging.eu) and PHD2 for guiding and dithering.
**PixInsight Processing:**
* BatchPreProcessing
* SubframeSelector
* StarAlignment
* [Blink](youtu.be/sJeuWZNWImE?t=40)
* ImageIntegration
* DrizzleIntegration (2x, Var β=1.5)
**Creating the Mosaic:**
> For whatever reason MosaicByCoordinates refuses to work for me despite updating/reinstalling PixInsight, or even downloading the script directly from others.
* DynamicCrop to remove stacking artifacts
* Several rounds of automatic and dynamic background extractions to remove gradients
* LinearFit
* Exported photos as tiff, stitched in Microsoft ICE per filter
* StarAlign stitched Oiii and Sii images to stitched Ha
**Linear:**
* DynamicCrop
* DynamicBackgroundExtraction
* EZ Decon
* NoiseXTerminator
* STF applied via HistogramTransformation to stretch nonlinear
**Combining Channels:**
* PixelMath to map Sii, Ha, and Oiii to R, G, and B, respectively
**Nonlinear:**
> I did this over the course of several days and a couple crashes so this isn't exhaustive, but I believe this is mostly what I did in order
* HistogramTransformation to pull back the green channel and slightly boost red
* LRGBCombination with extracted L as luminance
* Invert > SCNR > Invert to remove some background magentas
* Shitloads of CurveTransformations to adjust lightness, saturation, contrast, hues, etc. with various masks
* NoiseXTerminator
* More Curves
* LocalHistogramEqualization
> Two rounds of this: one at size 16 for the finer 'feathery' details and one at size 500 for large-scale structures
* DarkStructureEnhance
* Even more curves
* ColorSaturation to better bring out the Oiii regions and desaturate background reds/magentas a little
* More NoiseX
* curves
* Inverted SCNR with a starmask to remove magentas from stars
* EZ star reduction
* NoiseGenerator to add noise into reduced star areas
* guess what more curves
* Resample to 65% (any image larger than this is too big for flickr)
* Annotation
Dates: 23-24, 26-28 April 2025
Location: Washington D.C.
Equipment:
ASI 2600MM Pro (monochrome) camera
Chroma 36mm LRGB Filter Set
WO Fluorostar 91mm f/5.9 triplet APO refractor with Adjustable Field Flattener 68III
iOptron GEM28-EC mount
Data and exposure times:
Data was acquired as LRGB images with the following exposure times:
14.11 hours (242x210s subs) with Luminance filter (L).
3.50 hours (60x210s subs) with Red filter (R).
3.56 hours (61x210s subs) with Green filter (G).
3.44 hours (59x210s subs) with Blue filter (B).
Atmospheric conditions:
The image was developed from data acquired in a Bortle class 8 area (i.e. an environment with the degree of light pollution typical of a city), where both transparency (the level of atmospheric clarity) and seeing (the level of atmospheric turbulence) varied from average to below average during the observations.
Processed in PixInsight.
Preprocessing notes:
Created LRGB "masters" by Calibration, Cosmetic Correction, Weighted Subframes, Star Alignment, and Integration.
Postprocessing notes:
a. Dynamic Cropping of LRGB masters each to the same dimensions having a 3:2 aspect ratio.
b. Applied a Screen Transfer Function to view the resulting images.
c. For the L master: Applied a Dynamic Background Extractor and saved the settings to be used later when applying a DBE on the RGB masters.
d. Applied BlurXT and NoiseXT.
e. Applied a Histogram Transformation. This step generated a nonlinear image which was saved as a postprocessed L image.
f. "Built" a color image from the R, G and B masters by using LRGB Combination and applied a DBE to the color image using the same DBE settings as used for the L master.
g. Since a color image is involved, this necessitated the application of Background Neutralization and Color Calibration to the result from step f above.
h. Applied BlurXT, NoiseXT and a Histogram Transformation. Saved the nonlinear result as a postprocessed RGB image.
i. Used LRGB Combination to "apply" an instance from the postprocessed L image to the postprocessed RGB image (a rough sketch of the idea follows these notes).
j. Applied StarXterminator to create starless (i.e. containing the target image - in this case M51) and stars-only images.
k. Processed the starless image, after applying a range selection mask to protect the background area, using Local Histogram Equalization, Curves Transformation and Color Saturation. Curves Transformation was used only to boost the saturation whereas Color Saturation was used to enhance specific color hues.
l. Applied SCNR (Subtractive Chromatic Noise Reduction). Removed mask and used an expression in Pixel Math to combine the result from step k above with the stars-only image from step j.
m. As a final step, after protecting the target image with a Star Mask, applied a (star reduction) Morphological Transformation to the result from step l above.
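As a rough illustration of step i (not PixInsight's actual LRGB Combination, which works in a CIE color space with chrominance controls), the basic idea of replacing the lightness of a color image with a separately processed L can be sketched in numpy like this:

```python
import numpy as np

def lrgb_combine_sketch(rgb: np.ndarray, new_l: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Scale the color image so its per-pixel luminance matches the processed
    L channel, keeping the color ratios (chrominance) roughly unchanged.
    rgb: HxWx3 float array in [0, 1]; new_l: HxW float array in [0, 1]."""
    old_l = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    scale = new_l / np.maximum(old_l, eps)
    return np.clip(rgb * scale[..., None], 0.0, 1.0)
```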
Moscú - Moscow - Москва
A happening (from the English word meaning an event or occurrence) is any experience that arises from the sequence provocation-participation-improvisation. It originated in the 1950s and is considered a multidisciplinary artistic manifestation. Although happenings have been associated with pop art and the hippie movement, they fall within the broader field of performance art.
Initially, the artistic happening was an attempt to produce a work of art born from the act being organized, with the participation of the "spectators" (who would thus abandon their position as passive subjects and free themselves through emotional expression and collective representation). Although the happening is often confused with so-called artistic action, the former differs from the latter in its improvisation.
The happening, as a multifaceted artistic manifestation that seeks the spontaneous participation of the public, is usually ephemeral. For this reason happenings are usually staged in public places, breaking into everyday life.
es.wikipedia.org/wiki/Happening
A happening is a performance, event, or situation meant to be considered art, usually as performance art. The term was first used by Allan Kaprow during the 1950s to describe a range of art-related event or multiple events.
Happenings occur anywhere and are often multi-disciplinary, with a nonlinear narrative and the active participation of the audience. Key elements of happenings are planned but artists sometimes retain room for improvisation. This new media art aspect to happenings eliminates the boundary between the artwork and its viewer.
In the late 1960s, perhaps due to the depiction in films of hippie culture, the term was used much less specifically to mean any gathering of interest from a pool hall meetup or a jamming of a few young people to a beer blast or fancy formal party.
Two Quadrantids and a short meteor of unknown origin (not sure if there's any Canes Venatici shower). The Beehive Cluster can be seen at top right, and the Coma Berenices cluster at bottom left.
60 lights in 2 stacks that weren't properly aligned, so I needed to do a mosaic. Canon 800D at ISO 800, Samyang 16mm at f/2.8, 1-minute exposures, Omegon LX2 tracking mount. 30 darks, 120 biases. Processed in PixInsight as below
***** Integration:
*lightvortexastronomy tutorial (www.lightvortexastronomy.com/tutorial-pre-processing-cali...),
** CC defect list + master dark
** weighting: (15*(1-(FWHM-FWHMMin)/(FWHMMax-FWHMMin)) + 15*(1-(Eccentricity-EccentricityMin)/(EccentricityMax-EccentricityMin)) + 20*(SNRWeight-SNRWeightMin)/(SNRWeightMax-SNRWeightMin))+50
Meteors: #823, #852
Last image of first stack #836, first image of second stack #838
** Stack 1 - align on #836, since fav framing, drop #807
** Stack 2 - align on #842, since best closest to stack 1, drop #843, #856
*** Meteor stacking:
* Redid the stacking with no rejection, maximum combination. Rescaled to 2x
* Meteor trail coordinates:
Stack 1:
y1 = 475 x1 = 3812, y2 = 754 x2 = 3896, r = 10px
y1 = 5807 x1 = 6265, y2 = 6074 x2 = 7432 , r = 8px (let's do 10 tho for safety)
Stack 2:
y1 = 4610 x1 = 8128, y2 = 5142 x2 = 9914, r = 10px
* Pixelmath the meteors onto the main image (conceptual sketch below):
Stack 1: iif((d2seg(3812, 475, 3896, 754) < 0.012)||(d2seg(6265, 5807, 7432, 6074) < 0.018), max(meteors1, drizzle1), drizzle1)
Stack 2: iif(d2seg(8128, 4610, 9914, 5142) < 0.025, max(meteors2, drizzle2), drizzle2)
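A conceptual numpy sketch of what those iif(d2seg(...) < r, max(...), ...) expressions do, namely pasting the meteor-preserving stack back in only within a thin corridor around each trail. The pixel-unit radius here is an assumption, since the thresholds above are in PixelMath's own coordinate scale:

```python
import numpy as np

def dist_to_segment(x, y, x0, y0, x1, y1):
    """Distance from pixel coordinates (x, y) to the segment (x0,y0)-(x1,y1);
    a stand-in for PixelMath's d2seg()."""
    px, py = x1 - x0, y1 - y0
    t = np.clip(((x - x0) * px + (y - y0) * py) / (px * px + py * py), 0.0, 1.0)
    return np.hypot(x - (x0 + t * px), y - (y0 + t * py))

def paste_meteor(base, meteor_stack, seg, radius_px=10):
    """Take the brighter of the two single-channel stacks, but only inside a
    corridor of radius_px around the meteor trail; elsewhere keep the base."""
    h, w = base.shape
    y, x = np.mgrid[0:h, 0:w]
    near = dist_to_segment(x, y, *seg) < radius_px
    return np.where(near, np.maximum(base, meteor_stack), base)

# e.g. paste_meteor(drizzle1, meteors1, seg=(3812, 475, 3896, 754))
```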
*****Linear processing (separate)
***Crop
***Background extraction
* DBE tolerance 2, subtracted
*****Mosaic stack
www.lightvortexastronomy.com/tutorial-preparing-a-mosaic....
***Star align, register union = mosaic, thin plate splines, distortion correction, local distortion, frame adaptation to create the alignment frame
***Star align, register match, thin plate splines, distortion correction, local distortion to align each frame to the alignment frame
***Pixelmathed to black out the non-useful area of frame 2 using iif(y() - 6175 > ( (1098 - 6175)/(10410 - 505) ) * (x() - 505), _02_04_DBE_r, 0)
***GradientMosaicMerge, Overlay, shrink 10, feather 50
***Re-add the missing meteor with pixelmath
iif(d2seg(4683, 5806, 5848, 6078) < 8, max(meteor1, mosaic), mosaic)
*****Linear processing (single image)
***Crop
***Color
*SCNR 0.5 green
*Background neutralization
*Color calibration
***Masks
star_mask_large - large scale structure 2, small scale 1, noise threshold 0.1, scale 6,
star_mask_small (initial)- noise 0.15, scale 4, small scale 3 comp 1, smoothness 8, binarize, midtones = 0.02
meteor_mask: Pixelmath iif(d2seg(1664, 478, 1756, 764) < 12 || d2seg(4118, 5808, 5270, 6068) < 12 || d2seg(7192, 6042, 8928, 6394) < 12, star_mask_small, 0)
star_mask_small(final) - star_mask_small - meteor_mask
***Star shrink
Morphological transformation, erosion operator 4 iterations 0.15, star mask on
*** Linear noise reduction
jonrista.com/the-astrophotographers-guide/pixinsights/eff...
*TGV - small noise
Created TGV masks - extracted luminosity, standard stretch (luminance_mask), curved it with black point at ~0.2 and white at ~0.5, moved histogram point to middle (tgv_mask)
apply tgv mask inverted to the image, give luma mask as local support
TGV chroma str 7 edge protection 2E-4 smoothness 2 iterations 500
TGV luma str 5 edge protection 1E-5 smoothness 2 iterations 500
*MMT - larger noise and TGV artifacts
Created MMT mask - extract luminosity, standard stretch, move histogram point to 75%, apply low range -0.5. Apply inverted
MMT mask - 8 layers, threshold 10 10 7 5 5 2.5 2 2 on rgb
*****Nonlinear
***Stretch
Histogram stretch, STF shadows -2 target background 0.08
***Fix dots
*Curves mask upped the bottom end a bit to erase 2 black dots
***General curves work
*No mask, slight Canon DSLR-like curve to RGB/K
*Applied luminance mask inverted, upped saturation
*** Sharpen
* Sharpen with multiscale linear transform, bias layers 2-6 (0.05, 0.05, 0.025, 0.012, 0.006)
At the Feria del Ram in Mallorca. The attraction was a kind of giant human pendulum (60 meters tall) that swung back and forth with a progressively larger arc until it made a couple of full turns. That is the moment this photo captures.
IF YOU OPEN THE IMAGE AT FULL SIZE, THE MOIRÉ EFFECT DISAPPEARS
In optics, a moiré pattern is an interference pattern that forms when two grids of lines are superimposed at a certain angle, or when such grids have slightly different sizes.
The lines may be the textile fibers in moiré silk (which give the effect its name), or simply lines on a computer screen; the effect appears in both cases. The human visual system creates the illusion of dark and light horizontal bands superimposed on the fine lines that actually make up the pattern. More complex moiré patterns can likewise be formed by superimposing complex figures made of curved, interlaced lines.
The term comes from moiré, a particular type of silk textile that has a rippling or fluctuating appearance thanks to the interference patterns formed by the structure of the weave itself.
es.wikipedia.org/wiki/Patr%c3%b3n_de_moir%c3%a9
---------------------------------------------------------------------------------------------------
At the Ram Fair in Mallorca. The attraction was a kind of giant human pendulum (60 meters) that swung with a progressively larger arc until it made a couple of full turns. That moment is what this picture shows.
IF YOU OPEN THE IMAGE AT FULL SIZE, THE MOIRÉ EFFECT WILL DISAPPEAR
Moiré pattern - In physics, a moiré pattern is an interference pattern created, for example, when two grids are overlaid at an angle, or when they have slightly different mesh sizes.
The term originates from moire (or moiré in its French form), a type of textile, traditionally of silk but now also of cotton or synthetic fiber, with a rippled or 'watered' appearance.
The lines could represent fibers in moiré silk, or lines drawn on paper or on a computer screen. The nonlinear interaction of the optical patterns of lines creates a real and visible pattern of roughly parallel dark and light bands, the moiré pattern, superimposed on the lines.
More complex line moiré patterns are created if the lines are curved or not exactly parallel. Moiré patterns revealing complex shapes, or sequences of symbols embedded in one of the layers (in form of periodically repeated compressed shapes) are created with shape moiré, otherwise called band moiré patterns. One of the most important properties of shape moiré is its ability to magnify tiny shapes along either one or both axes, that is, stretching. A common 2D example of moiré magnification occurs when viewing a chain-link fence through a second chain-link fence of identical design. The fine structure of the design is visible even at great distances.
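As a concrete illustration of why tiny differences become visible (a standard textbook relation, not taken from the excerpt): for two superimposed gratings of parallel lines with slightly different periods, the moiré band spacing is

```latex
% Moiré band spacing for two parallel line gratings of periods p_1 and p_2:
p_{m} = \frac{p_{1}\, p_{2}}{\lvert p_{1} - p_{2} \rvert}
% Example: p_1 = 1.00\,\text{mm} and p_2 = 1.05\,\text{mm} give
% p_m = 21\,\text{mm}, i.e. the 0.05 mm mismatch is magnified about 20 times,
% which is the "moiré magnification" described above.
```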
Went out to a dark site for an observing session with my local astronomy club, and brought along [my astrophotography equipment](i.imgur.com/q577d0M.jpg). Despite the forecasts we had intermittent clouds rolling through, and eventually it fogged over, so I had to toss a lot of my data. Still, I'm pleased with the results from just 2.5 hours of integration time; this target has been on my list to shoot for a while, but rarely can you get to a dark site on a new-moon weekend when the clouds are gone. Captured on September 10th, 2021 from a Bortle 4 zone.
---
**[Equipment:](i.imgur.com/6T8QNsv.jpg)**
* TPO 6" F/4 Imaging Newtonian
* Orion Sirius EQ-G
* ZWO ASI1600MM-Pro
* Skywatcher Quattro Coma Corrector
* ZWO EFW 8x1.25"/31mm
* Astronomik LRGB+CLS Filters- 31mm
* Astrodon 31mm Ha 5nm, Oiii 3nm, Sii 5nm
* Agena 50mm Deluxe Straight-Through Guide Scope
* ZWO ASI-290mc for guiding
* Moonlite Autofocuser
**Acquisition:** 2 hours 36 minutes (Camera at Unity Gain, -15°C)
* Lum - 26x180"
* Red - 10x180"
* Green - 8x180"
* Blue - 8x180"
* Darks- 30
* Flats- 30 per filter
**Capture Software:**
* Captured using [N.I.N.A.](nighttime-imaging.eu) and PHD2 for guiding and dithering.
**PixInsight Processing:**
* BatchPreProcessing
* SubframeSelector
* StarAlignment
* [Blink](youtu.be/sJeuWZNWImE?t=40)
* ImageIntegration
* DynamicCrop
* DynamicBackgroundExtraction
**Luminance:**
* EZ Denoise
* EZ Soft Stretch
**RGB:**
* ChannelCombination to combine monochrome R, G, B stacks into a color image
* PhotometricColorCalibration
* SCNR green
* HSV Repair
* EZ Soft Stretch
**Nonlinear:**
* LRGBCombination with nonlinear L as luminance
* HistogramTransformation to lower black point
* Shitloads of CurveTransformations to adjust lightness, contrast, colors, saturation, etc. (various masks used)
* ACDNR
* EZ StarReduction
* NoiseGenerator to add noise back into reduced stars
* More Curves
* Extract Synthetic Luminance > LRGBC for large scale chrominance noise reduction
* ColorSaturation
* Even more curves
* Fast rotation
* Resample to 75%
* Annotation
Shot these nearly a year ago, and didn't get around to processing it until now. I really like how the Ha addition helped with these galaxies. Captured over 4 nights in January/February, 2022 from a Bortle 6 zone
**Places where I host my other images:**
[Instagram](www.instagram.com/leftysastrophotography/) | [Flickr](www.flickr.com/people/leftysastrophotography/)
---
**[Equipment:](i.imgur.com/ejpKkwU.jpg)**
* TPO 6" F/4 Imaging Newtonian
* Orion Sirius EQ-G
* ZWO ASI1600MM-Pro
* Skywatcher Quattro Coma Corrector
* ZWO EFW 8x1.25"/31mm
* Astronomik LRGB+CLS Filters- 31mm
* Astrodon 31mm Ha 5nm, Oiii 3nm, Sii 5nm
* Agena 50mm Deluxe Straight-Through Guide Scope
* ZWO ASI-290mc for guiding
* Moonlite Autofocuser
**Acquisition:** 11 hours 29 minutes (Camera at Unity Gain, -20°C)
* L- 164x90"
* R- 55x90"
* G- 54x90"
* B- 53x90"
* Ha - 40x300"
* Darks- 30
* Flats- 30 per filter
**Capture Software:**
* Captured using [N.I.N.A.](nighttime-imaging.eu) and PHD2 for guiding and dithering.
**PixInsight Processing:**
* BatchPreProcessing
* SubframeSelector
* StarAlignment
* [Blink](youtu.be/sJeuWZNWImE?t=40)
* ImageIntegration
* DrizzleIntegration (2x, Var β=1.5)
**Linear:**
* DynamicCrop
* DynamicBackgroundExtraction
* EZ Decon (lum only)
* NoiseXTerminator (lum only)
* ArcsinhStretch + HistogramTransformation to bring nonlinear
**RGB:**
* Channelcombination to combine monochrome images into RGB image
* PhotometricColorCalibration
* SCNR green
* HSV Repair
**Adding Ha:**
> I followed this tutorial which had great results on some [prior](www.reddit.com/r/astrophotography/comments/q8ogec/m33_the...) HaLRGB [galaxy pics](www.reddit.com/r/astrophotography/comments/ml2os3/m51_the...):
> www.arciereceleste.it/tutorial-pixinsight/cat-tutorial-en...
* PixelMath to make Clean Ha. This effectively [isolates just the Ha regions](i.imgur.com/Aob3UEO.png) from the red continuum spectrum
> Ha-Q * (Red-med (Red))
> Q=0.16
* PixelMath to combine Clean Ha
* PixelMath to add Ha to RGB image ($T)
> R= $T+B*(Ha_Clean - med(Ha_Clean))
> G= $T
> B= $T+B\*0.2*(Ha_Clean - med(Ha_Clean))
> B=1
**HaRGB:**
* Slight SCNR
* ArcsinhStretch + HistogramTransformations to stretch nonlinear
**Nonlinear:**
* LRGBCombination with stretched luminance
* Shitloads of CurveTransformations to adjust lightness, saturation, contrast, hues, etc.
* color saturation to slightly desaturate the Ha regions
* More curves/color saturation adjustments
* Extract L --> LRGBCombination for background chrominance noise reduction
* even more curves
* NoiseXTerminator
* LocalHistogramEqualization
> Two rounds of this: one at size 16 for the finer 'feathery' details and one at size 200 for large-scale structures
* EZ Star Reduction
* noise generator to add noise back into star reduced areas
* Invert > SCNR > invert to remove some magentas from the galaxy
* More curves
* MLT for large scale chrominance noise reduction
* guess what even more curves
* Resample to 70%
* Annotation
Sadly LoTr doesn't mean Lord of The rings, but is instead short for the names of the two people who discovered this nebula in 1980, Longmore-Tritton. Wikipedia lists the thing at 1650 light years away from us, and 1.8 ly in diameter. It also happens to be [close to the galaxy NGC 4725 in the sky](www.reddit.com/r/astrophotography/comments/mz5kqm/ngc_472...). I decided to keep this image looking darker overall when processing given how dim the nebula is. Captured over 9 nights from January through March, 2022 from a Bortle 6 zone.
---
**[Equipment:](i.imgur.com/ejpKkwU.jpg)**
* TPO 6" F/4 Imaging Newtonian
* Orion Sirius EQ-G
* ZWO ASI1600MM-Pro
* Skywatcher Quattro Coma Corrector
* ZWO EFW 8x1.25"/31mm
* Astronomik LRGB+CLS Filters- 31mm
* Astrodon 31mm Ha 5nm, Oiii 3nm, Sii 5nm
* Agena 50mm Deluxe Straight-Through Guide Scope
* ZWO ASI-120mc for guiding
* Moonlite Autofocuser
**Acquisition:** 32 hours 36 minutes (Camera at Unity Gain, -20°C)
* Oiii- 184x600"
* R- 26x90"
* G- 25x90"
* B 26x90"
* Darks- 30
* Flats- 30 per filter
**Capture Software:**
* Captured using [N.I.N.A.](nighttime-imaging.eu) and PHD2 for guiding and dithering.
**PixInsight Processing:**
* BatchPreProcessing
* SubframeSelector
* StarAlignment
* [Blink](youtu.be/sJeuWZNWImE?t=40)
* ImageIntegration
* DrizzleIntegration (2x, Var β=1.5) (Oiii only)
**Linear:**
* DynamicCrop
* StarAlign undrizzled RGB stacks to Oiii
* DynamicBackgroundExtraction 2x
**RGB**
* LinearFit RGB stacks to Oiii
* ChannelCombination to make color image from the RGB stacks
* PixelMath to add Oiii to RGB image
> R = $T
> G = 0.1\*Oiii + 0.9\*$T
> B = 0.5\*Oiii + 0.5\*$T
* PhotometricColorCalibration + HSV repair for star color
* Second RGB image created and ran through PCC and HSV repair, for fixing star color later
* ArcsinhStretch + HistogramTransformation to stretch RGB images and Oiii stack to nonlinear
**Nonlinear:**
* LRGBCombination with Oiii as luminance
* Shitloads of CurveTransformations to adjust lightness, contrast, saturation, hues, etc.
* PixelMath to replace the stars with the more color accurate ones from the stars-only image (modified starnet mask used)
> *shockingly* adding the Oiii directly into the G and B channels fucks up star colors, hence this step
* more curves
* MLT noise reduction
* even more curves
* MMT De-blotching
* Resample to 60%
* DynamicCrop
* Annotation
Scope: WO Zenith Star 81mm f/6.9 with WO 6AIII Flattener/Focal Reducer x0.8
OSC Camera: ZWO ASI 2600 MC Pro at 100 Gain
Mount: iOptron GEM28-EC
Guide Scope: ZWO ASI 30mm f/4
Guide Camera: ZWO ASI 120mm-mini
Light Pollution Filter: ZWO Duo-Band Light Pollution Filter
Date: 24-25, 29-30 April and 2, 4-5 May 2022
Location: Washington D.C.
Exposure: 261x300s subs (= 21.75 hours)
Software: Pixinsight
Processing Steps:
Preprocessing: FITS data > Image Calibration > Cosmetic Correction > Subframe Selector > Debayer > Select Reference Star and Star Align > Image Integration.
Linear Postprocessing: Integrated image > Dynamic Crop > Dynamic Background Extractor (subtraction to remove light pollution gradients and division for flat field corrections) > Background Neutralization > Color Calibration > Blur Xterminator > Noise Xterminator.
Nonlinear Postprocessing: Histogram Transformation > Star Xterminator to decompose into Starless and Stars Only images.
Starless image > Histogram Transformation > Noise Xterminator > Local Histogram Equalization.
Apply a First Curves Transformation to boost the blue signal from the galaxy's arms. Apply an RGB Split. After adjusting the weights for the individual RGB components (noting that the R serves as both the L channel and the red channel when using an OSC camera), apply LRGB Combination to get a blue boosted image.
Apply a Second Curves Transformation to boost the red signal from the galaxy's core. Apply an RGB Split. After adjusting the weights for the individual RGB components (noting that the R serves as both the L channel and the red channel when using an OSC camera), apply LRGB Combination to get a red boosted image.
Use Pixel Math to combine 0.4 x red boosted image + 0.6 x blue boosted image to get a Composite image. These weights were determined by experimentation to produce an optimum balance.
Use Pixel Math again to combine 0.6 x Composite image + 0.4 x an HDR Multiscale Transform-modified Composite image to get a New Composite image.
New Composite > Curves Transformation using color masks > Histogram Transformation > Local Histogram Equalization > Final Starless image.
Use Pixel Math to rejoin the Final Starless image with the Stars Only image to get a rejoined image.
Rejoined image > Topaz Labs > DeNoise AI > Gigapixel AI.
Use Pixel Math to combine 25% x Rejoined image + 75% x AI image = Final Result.
Beautiful shots from EVERSPACE.
Shot using Fraps and in-game Action Freeze cam. Bottom cropped (Beta Build and EVERSPACE logo).
Beta (1.1.28051) access is finally here !
Crowdfunded and from the makers of the iconic Galaxy on Fire series comes a new breed of space shooter for PC and Xbox One, combining roguelike elements with top-notch visuals and a captivating story.
The Indigenous Peoples in Canada are an inspirational example of resilience due to their ability to withstand adversity and persevere through generations of oppressive colonial policies. Historic injustices persist, including the effects of cultural genocide from Canada's residential school system. Here we symbolize bridging the gap between Indigenous and non-Indigenous Peoples through gathering. This is accomplished through the support of the seven grandfather teachings, represented by the seven rings of the installation, which originated with the Anishnabae Peoples and have been passed down through generations to ensure the survival of all Indigenous Peoples: Wisdom, Love, Respect, Bravery, Honesty, Humility, and Truth. Orange represents the National Day of Truth and Reconciliation, and the reality that the support of non-Indigenous Peoples, as Indigenous Peoples assert rights to self-determination, will strengthen relations and begin to redress the historic wrongs. Orange is displayed in the ropes, where the pattern pays homage to the creation of drums, in which ropes were weaved to honour culture. The installation's flow towards the lifeguard stand reinforces the strengthening of the relationship and that the protection of Canada hinges on the unity between peoples. We aim to symbolize movement to a new relationship, one based on mutual respect that honours Indigenous treaties and rights. The road forward is long and nonlinear, but we commit to take the journey together. - from www.winterstations.com
Design Team: University of Guelph, School of Environmental Design & Rural Development – Alex Feenstra, Megan Haralovich, Zhengyang Hua, Noah Tran, Haley White & Connor Winrow, Lead by Assistant Professor Afshin Ashari and Associate Professor Sean Kelly (Canada)
This is a **false color** image using the SHO Hubble Palette. Monochrome images are taken through three different filters that isolate the specific wavelengths of light that certain elements in space emit. In this case Sulfur-II, Hydrogen-alpha, and Oxygen-III are mapped to R, G, and B respectively to make a false color image. Because all of the channels are almost equal in brightness you get some crazy rainbow colors. Captured on October 23rd and 26th, and November 13th and 16th, 2020 from a bortle 6 zone.
--
**[Equipment:](i.imgur.com/6T8QNsv.jpg)**
* TPO 6" F/4 Imaging Newtonian
* Orion Sirius EQ-G
* ZWO ASI1600MM-Pro
* Skywatcher Quattro Coma Corrector
* ZWO EFW 8x1.25"/31mm
* Astronomik LRGB+CLS Filters- 31mm
* Astrodon 31mm Ha 5nm, Oiii 3nm, Sii 5nm
* Agena 50mm Deluxe Straight-Through Guide Scope
* ZWO ASI-120MC for guiding
* Moonlite Autofocuser
**Acquisition:** 10 hours 0 minutes (Camera at Unity Gain, -15°C)
* Ha- 42x300"
* Oiii- 39x300"
* Sii- 39x300"
* Darks- 30
* Flats- 30 per filter
**Capture Software:**
* Captured using [N.I.N.A.](nighttime-imaging.eu/) and PHD2 for guiding and dithering.
**PixInsight Processing:**
* BatchPreProcessing
* SubframeSelector
* StarAlignment
* [Blink](youtu.be/sJeuWZNWImE?t=40)
* ImageIntegration
**SuperLuminance:**
* Ha, Oiii, and Sii images were integrated using ImageIntegration and then drizzled (2x, Var β=1.5) to make the superluminance.
**Linear:**
* DynamicCrop
* DynamicBackgroundExtraction
* [EZ Decon and Denoise](darkarchon.internet-box.ch:8443/) (Superluminance only)
* STF applied via HistogramTransformation to bring each channel nonlinear
**Combining Channels:**
* PixelMath to make classic SHO to RGB image
**Nonlinear:**
* LRGBCombination with Superluminance
* Shitloads of [Curve](i.imgur.com/ms5h9bO.jpg)Transformations to adjust lightness, saturation, contrast, hues, etc
* RangeMasked SCNR to adjust background colors
* ACDNR
* LocalHistogramEqualization
* DarkStructureEnhance
* More Curves
* EZ Star Reduction
* Resample to 70%
* DynamicCrop to 2880x2160
* Annotation
This marks the third year in a row that I've photographed M40. Out of the 110 objects in the Messier Catalog, these two stars are by far the most awe-inspiring, and sexy objects in space. Captured on April 25th, 2021 from a Bortle 6 zone.
---
**[Equipment:](i.imgur.com/6T8QNsv.jpg)**
* TPO 6" F/4 Imaging Newtonian
* Orion Sirius EQ-G
* ZWO ASI1600MM-Pro
* Skywatcher Quattro Coma Corrector
* ZWO EFW 8x1.25"/31mm
* Astronomik LRGB+CLS Filters- 31mm
* Astrodon 31mm Ha 5nm, Oiii 3nm, Sii 5nm
* Agena 50mm Deluxe Straight-Through Guide Scope
* ZWO ASI-120MC for guiding
* Moonlite Autofocuser
**Acquisition:** 2 hours 44 minutes (Camera at Unity Gain, -15°C)
* Lum - 128x120"
* Red - 18x120"
* Green - 18x120"
* Blue - 18x120"
* Darks- 30
* Flats- 30 per filter
**Capture Software:**
* Captured using [N.I.N.A.](nighttime-imaging.eu) and PHD2 for guiding and dithering.
**PixInsight Processing:**
* BatchPreProcessing
* StarAlignment
* [Blink](youtu.be/sJeuWZNWImE?t=40)
* ImageIntegration
* DynamicCrop
* AutomaticBackgroundExtraction
**Luminance:**
* EZ Denoise
* ArcsinhStretch + HistogramTransformation to bring nonlinear
**RGB:**
* ChannelCombination to combine monochrome R, G, B stacks into a color image
* PhotometricColorCalibration
* Slight SCNR
* HSV Repair
* ArcsinhStretch + HistogramTransformation to bring nonlinear
**Nonlinear:**
* LRGBCombination with stretched L as luminance
* Several CurveTransformations to adjust lightness, contrast, colors, saturation, etc.
* ACDNR
* EZ StarReduction
* NoiseGenerator to add noise back into star reduced areas
* Final Curves
* Annotation
Yup, I fucked up. Should have closed this lens down to f2.8 at least, and should have taken far more care with the processing - unholy frequency separation HDR methods live and die on the quality of the masking you use, and I was sloppy. However, I can now see the molecular clouds cobwebbing the more visible nebulas, and I'm happy about it. Next time I'll be in good focus and won't mess up the masks either :)
2 stacks of 30 images, slightly out of focus. Canon 800D at ISO 800, Canon 50mm f/1.8 lens at f/2.2, 1-minute exposures, Omegon LX2 tracking mount. 30 darks, 120 biases. Processed in PixInsight as below
*****Linear processing
* integrate per lightvortexastronomy tutorial (www.lightvortexastronomy.com/tutorial-pre-processing-cali...), image ref 711 (highest quality one in the stack with the better framing)
* rotate
* crop
* DBE subtract, tolerance = 1.2
* background neutralization
* color calibration
*** Deconvolution
* create star mask, noise threshold 0.01, scale 7, structure growth large 2 small 1, compensation 2
* deconvolution Richardson-Lucy, external PSF, using global dark 0.02, global bright 0.005, local deringing with above star mask, amount 0.9, 70 iterations
*****Nonlinear processing
*** Initial stretch
* histogram transformation shadows 0.03 highlights 1 mids 0.22
* removed m42 core from the star mask
* HDR transform, 8 layers, B3 spline, star mask applied inverted, preserve hue, lightness mask
*** Multiscale transformation (small bright nebulae)
www.stelleelettroniche.it/en/2014/09/astrophoto/m42-ngc19...
* created a new multiscale linear transform, kept 4 layers using linear interpolation (see the sketch after this block)
* diffed from original image to create a "blurred" version of original image
* extracted luminance from original, used as mask on blurred version
* used curves to create s shape in luminance and saturation, inflection 3/4 up
* pixelmath sum the 3, rescaled, back to original image
* SCNR green
* new multiscale linear transform, keep 5 layers
* diff from original
* extract luminance from blurred image, to use as a mask
* masked blurred image with its own luminance, gave it s-shaped RGB curve, slight boost in luminosity, big boost in saturation
* pixelmath sum the 3, rescaled, back to original image
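A crude numpy sketch of the scale-separation idea used above; a Gaussian blur stands in for "keep the first MLT layers" (MLT itself is a wavelet-style decomposition, so this only shows the general shape of the operation on a single-channel image):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_scales(img: np.ndarray, sigma: float = 8.0):
    """Split a single-channel image into small-scale detail plus a blurred
    large-scale remainder; sigma is an arbitrary stand-in for the layer count."""
    large = gaussian_filter(img, sigma=sigma)
    small = img - large
    return small, large

def recombine(small: np.ndarray, large_processed: np.ndarray) -> np.ndarray:
    """PixelMath-style sum of the processed large scale and the untouched
    small-scale detail, clipped back to [0, 1]."""
    return np.clip(small + large_processed, 0.0, 1.0)
```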
www.deepskycolors.com/archive/2011/09/08/star-size-reduct...
*** Creating a star mask for star reduction:
* extracted luminance
* applied to it HDRMultiscaleTransform, 3 layers, 2 iterations, all protections off
* histogram stretched it to shadows 0.19 mids 0.44
* created star mask from it: noise threshold 0.12, scale 6, growth 2/1/2, smoothness 5, midtones 24, highs 90
* histogram stretched it to midtones 0.19
* apply mask to main image, morphological transformation - selection 0.10, 5-element circle
*** Multiscale linear transform, large clouds
* repeat process above up until we create the star mask
* subtract star mask from luminance, exaggerate curves to get a nebula and bright stars mask (here I messed up by having a far smaller mask than I should have had, which bloated my nebulas)
* rescale add star mask to nebula and bright star mask to get stars and nebula mask
* multiscale linear transform, keep 5 layers as small scale
* subtract small scale from full image to get large scale
* apply star and nebula mask to large scale
* local histogram equalization, kernel 350 contrast 1.5
* curves, pump saturation, increase luminosity a bit as well
* pixelmath add back, normalized
*** Darken
* DarkStructureEnhancer, 8 layers, 0.7, 3x3
* curved lightness darker in the bottom 1/4 of the curve
*** Noise reduction and sharpening
* Create luminance mask: extract luma, cut shadows quite a bit (we want to protect stars and bright nebulas so 0.25)
* TGVDenoise CieLAB mode
- chroma: 7 str 3e-3 edge protection 2 smoothness 1k iterations autoconvergence, using luminance mask as local support
- luma: 3 str 3e-3 edge protection 2 smoothness 200 iterations, using luminance mask as local support
* Sharpen with multiscale linear transform, bias layers 2-6 (0.1, 0.1, 0.05, 0.025, 0.012)
* Rescale back to normal
* sharpened with multiscale linear transform, no NR, detail layer schema 0, 0.025, 0.025, 0.012
This mosaic was a pain to process. I originally captured the data in September 2020, but I struggled to find a way to combine the narrowband Ha stack as a luminance channel with the RGB data without having the [stars look weird](i.imgur.com/hqVhONg.png). Every now and then I'd fiddle with something, but it never worked out well. It also doesn't help that drizzled mosaics are *huge* files, with the final uncompressed image coming in around 2GB in size. I also ran out of disk space several times due to huge PixInsight swap files and an ungodly amount of fiddling with masks. Huge thanks to /u/jimmythechicken1 and his HaRGB workflow that he shared the other day (see below); I wouldn't have been able to finish this image if it wasn't for your work! [I also made a starless version to better show off the faint nebulosity](i.imgur.com/g9npP7i.jpg). Captured over 5 nights from September 3-30th, 2020 from a Bortle 6 zone.
---
**[Equipment:](i.imgur.com/6T8QNsv.jpg)**
* TPO 6" F/4 Imaging Newtonian
* Orion Sirius EQ-G
* ZWO ASI1600MM-Pro
* Skywatcher Quattro Coma Corrector
* ZWO EFW 8x1.25"/31mm
* Astronomik LRGB+CLS Filters- 31mm
* Astrodon 31mm Ha 5nm, Oiii 3nm, Sii 5nm
* Agena 50mm Deluxe Straight-Through Guide Scope
* ZWO ASI-290mc for guiding
* Moonlite Autofocuser
**Acquisition:** 15 hours 4 minutes (Camera at Unity Gain, -15°C)
Panel 1:
* Ha- 29x600"
* Red- 27x120"
* Green- 27x120"
* Blue- 27x120"
Panel 2:
* Ha- 29x600"
* Red- 27x120"
* Green- 27x120"
* Blue- 27x120"
Calibration Frames
* Darks- 30
* Flats- 30 per filter
**Capture Software:**
* Captured using [N.I.N.A.](nighttime-imaging.eu) and PHD2 for guiding and dithering.
**PixInsight Processing:**
* BatchPreProcessing
* SubframeSelector
* StarAlignment
* [Blink](youtu.be/sJeuWZNWImE?t=40)
* ImageIntegration
* DrizzleIntegration (2x, Var β=1.5)
**Making the mosaic:**
* StarAlign left Ha panel to right (Register/union mosaic mode) to make master mosaic
* StarAlign all panels from all filters to master mosaic (register/match mode)
* GradientMergeMosaic to combine aligned panels into single mosaic image per filter
**Linear:**
* DynamicCrop
* AutomaticBackgroundExtraction
* DynamicBackgroundExtraction
* EZ Decon (Ha only)
* ChannelCombination to make color image from RGB filter stacks
* PhotometricColorCalibration + SCNR
**Adding Ha to RGB:**
> I'm basically just copy/pasting Jimmy's workflow with some slight tweaks (a rough numpy sketch of the combination follows this list)
- Start with a full broadband image in linear, combine RGB channels.
- Open the Histogram Transformation preview, and adjust the stretch such that it looks adequate for star color and broadband image color. Save this to the PixInsight Desktop. This will be used later to stretch the NBRGB image
- Open PixelMath, use the formula Ha-A*(Red-med(Red)) with A being an arbitrary value to sufficiently blacken all stars but not removing any of the H-alpha structure. [Do the same for any other narrowband channels as well and name them appropriately, use Red for Sii and Blue for Oiii] Save this image as 'Subtracted_Ha'
- Duplicate 'Subtracted_Ha' and apply an STF to it, rename this copy to 'F'. This will be a scale factor.
- In a new Pixelmath instance, to combine the NB and BB, use the formula as follows
> R: Red + R\*(Subtracted_Ha\*F)
> G: Green
> B: Blue + (0.025\*R)*(Subtracted_Ha\*F)
> Symbols: R=[insert value, discussed in next step],
- The symbol associated with the scaling of the channel addition should be changed to taste. To preview image, STF preview should not be used, but rather the preview window from the Histogram Transformation previously saved off.
> in my case R=0.5 was the best R value
* PhotometricColorCalibration for the NBRGB image
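For readers who want to see the arithmetic behind the PixelMath steps above, here is a hedged numpy sketch of the same idea: continuum-subtract the Ha frame against the red channel, then add a scaled amount of the result back into red (and a small fraction into blue). The channel arrays, the subtraction factor `A`, the normalisation standing in for the stretched 'F' image, and the blend weight `r_weight` are all placeholders; the real values are chosen by eye as described above.

```python
import numpy as np

def add_ha_to_rgb(red, green, blue, ha, A=1.0, r_weight=0.5, blue_leak=0.025):
    """Blend a continuum-subtracted Ha frame into a linear RGB image (sketch).

    red/green/blue/ha : 2-D float arrays, already registered and still linear.
    A         : factor used to blacken stars in the subtracted Ha (tune by eye).
    r_weight  : how strongly the subtracted Ha is added to the red channel.
    blue_leak : small fraction added to blue so Ha regions don't go pure red.
    """
    # Ha - A*(Red - med(Red)): removes the broadband (continuum) contribution
    subtracted_ha = ha - A * (red - np.median(red))
    # Normalised copy standing in for the stretched scale-factor image 'F'
    f = (subtracted_ha - subtracted_ha.min()) / np.ptp(subtracted_ha)

    new_red = red + r_weight * (subtracted_ha * f)
    new_green = green
    new_blue = blue + (blue_leak * r_weight) * (subtracted_ha * f)
    return np.clip(np.dstack([new_red, new_green, new_blue]), 0.0, 1.0)
```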
**Nonlinear:**
* HistogramTransformation to stretch Ha and NBRGB images
* LRGBCombination to add Ha as luminance to color NBRGB image
> Getting a good star mask is CRITICAL for this step. I used a copy of the 'F' image above, modified with a stretch and convolution. This took a while to get right.
* SCNR Green
* Shitloads of CurveTransformations to adjust lightness, saturation, contrast, hues, etc.
* Extract L > LRGBCombination again for larger scale chrominance noise reduction
* LocalHistogramEqualization
> two rounds of this, one at size 16 kernel for the finer 'feathery' details, and one at 100 for larger structures
* MLT noise reduction
* DarkStructureEnhance
* EZ Star Reduction (2 rounds of this)
* MMT noise reduction
* even more curves
* IntegerResample to 50%
* Annotation
Sensitive dependence on initial conditions: a small change at one place in a deterministic nonlinear system can result in large differences in a later state.
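A minimal Python sketch of that idea, using the logistic map as a stand-in deterministic nonlinear system (the map, the parameter r = 4.0, and the one-part-in-a-billion perturbation are illustrative choices, not something taken from the photo):

```python
# Sensitive dependence on initial conditions: two almost identical starting
# points of a deterministic nonlinear map end up far apart after a few dozen steps.

def logistic(x, r=4.0):
    """One step of the logistic map, a classic deterministic nonlinear system."""
    return r * x * (1.0 - x)

x_a, x_b = 0.2, 0.2 + 1e-9   # initial conditions differing by one part in a billion
for step in range(1, 61):
    x_a, x_b = logistic(x_a), logistic(x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: x_a={x_a:.6f}  x_b={x_b:.6f}  |diff|={abs(x_a - x_b):.2e}")
```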
Strobist… 580exii lastolite ezybox camera left 1/2 24mm. 580exii lastolite STU camera right 1/2 50mm.
I have been posting, and will post several more Philly shots. This one is too, but what the heck, it's the weekend, so why not go AbStRaCt.
The pattern has a weak fractal form, and is made "interesting" because of the quasi-periodic loops. If we consider the way it was made, it can give us more general insight into how certain patterns come about... The picture was made by moving the camera in a fast, circular motion while aiming at street lights below my hotel window. The "rectangular" regularity of the street lights induced self-similarity amongst the loops, and the jerkiness of my motion made the loops less-than-perfect. While not technically a chaotic "strange" attractor, it depicts a low dimensional nonlinear dynamical system, so that's close enough...
#348 in Explore, 8.11.2007
Beautiful shots from EVERSPACE.
Shot using Fraps and in-game Action Freeze cam. Bottom cropped (EVERSPACE logo).
Although it has a few hours less exposure time than [my last attempt at it](www.reddit.com/r/astrophotography/comments/b2w8dn/m106_14...), I'd consider this an improvement, mostly due to less light pollution and better post-processing skills. I love how much more visible the Ha 'tendrils' are in the core region of the galaxy compared to the old pic. Also made [an annotated version](i.imgur.com/hY2AorC.jpg), which highlights most of the galaxies in the background. Captured over 5 nights in May 2022 from a Bortle 6 zone.
---
**[Equipment:](i.imgur.com/ejpKkwU.jpg)**
* TPO 6" F/4 Imaging Newtonian
* Orion Sirius EQ-G
* ZWO ASI1600MM-Pro
* Skywatcher Quattro Coma Corrector
* ZWO EFW 8x1.25"/31mm
* Astronomik LRGB+CLS Filters- 31mm
* Astrodon 31mm Ha 5nm, Oiii 3nm, Sii 5nm
* Agena 50mm Deluxe Straight-Through Guide Scope
* ZWO ASI-290mc for guiding
* Moonlite Autofocuser
**Acquisition:** 18 hours 48 minutes (Camera at Unity Gain, -20°C)
* L- 92x120"
* R- 31x120"
* G- 31x120"
* B- 30x120"
* Ha - 51x300"
* Darks- 30
* Flats- 30 per filter
**Capture Software:**
* Captured using [N.I.N.A.](nighttime-imaging.eu) and PHD2 for guiding and dithering.
**PixInsight Processing:**
* BatchPreProcessing
* SubframeSelector
* StarAlignment
* [Blink](youtu.be/sJeuWZNWImE?t=40)
* ImageIntegration
* DrizzleIntegration (2x, Var β=1.5)
**Linear:**
* DynamicCrop
* DynamicBackgroundExtraction
* EZ Decon (lum only)
* NoiseXTerminator (lum only)
* ArcsinhStretch + HistogramTransformation to bring nonlinear
**RGB:**
* Channelcombination to combine monochrome images into RGB image
* PhotometricColorCalibration
* SCNR green
* HSV Repair
**Adding Ha:**
> I followed this tutorial which had great results on some [prior](www.reddit.com/r/astrophotography/comments/q8ogec/m33_the...) HaLRGB [galaxy pics](www.reddit.com/r/astrophotography/comments/ml2os3/m51_the...):
> www.arciereceleste.it/tutorial-pixinsight/cat-tutorial-en...
* PixelMath to make Clean Ha. This effectively [isolates just the Ha regions](i.imgur.com/Aob3UEO.png) from the red continuum spectrum
> Ha - Q\*(Red - med(Red))
> Q=0.12
* PixelMath to combine Clean Ha
* PixelMath to add Ha to RGB image ($T)
> R= $T + B\*(Ha_Clean - med(Ha_Clean))
> G= $T
> B= $T + B\*0.2\*(Ha_Clean - med(Ha_Clean))
> B=2.5
**HaRGB:**
* Slight SCNR
* ArcsinhStretch + HistogramTransformations to stretch nonlinear
**Nonlinear:**
* LRGBCombination with stretched luminance
* Shitloads of CurveTransformations to adjust lightness, saturation, contrast, hues, etc.
* color saturation to slightly desaturate the Ha regions
* More curves/color saturation adjustments
* Extract L --> LRGBCombination for background chrominance noise reduction
* even more curves
* NoiseXTerminator
* MLT chrominance noise reduction
* UnsharpMask to further sharpen the galaxy details
* LocalHistogramEqualization
* EZ Star Reduction
* noise generator to add noise back into star reduced areas
* Invert > SCNR > invert to remove magentas from the galaxy
* More curves
* MLT for large scale chrominance noise reduction
* guess what even more curves
* Resample to 70%
* Annotation
MIDIA ARTE / MEDIA ART
[fladry + jones]: Robb Fladry and Barry Jones – The War is Over 2007
Agricola de Cologne – One Day on Mars
Alan Bigelow – “When I Was President”
Alessandra Ribeiro Parente Paes, Daniel Fernandes Gamez, Glauber Kotaki Rodrigues, Igor Albuquerque Bertolino, Karina Yuko Haneda, Marcio Pedrosa Tirico da Silva Junior. Orientadores: Mércia Albuquerque e Mauro Baptista – Reativo
Alessandro Capozzo – Talea
Alex Hetherington – Untitled (sexback, folly artist)
Alexandre Campos, Bruno Massara e Lucilene Soares Alves – Novos Olhares sobre a Mobilidade
Alexandre Cardoso Rodrigues Nunes, Bruno Coimbra Franco, Diego Filipe Braga R. Nascimento, Fábio Rinaldi Batistine, Yumi Dayane Shimada. Orientadores: Mércia Albuquerque e Carlos Alberto Barbosa – Abra Sua Gaveta
All: Alcione de Godoy, ADILSON NG, CAMILLO LOUVISE COQUEIRO, MARINA QUEIROZ MAIA, RODOLFO ROSSI JULIANI, VINÍCIUS NAKAMURA DE BRITO – Vita Ex Machina
Andreas Zingerele – Extension of Human Sight
Andrei R. Thomaz – O Tabuleiro dos Jogos que se bifurcam
Andrei R. Thomaz – First Person Movements
Andrei R. Thomaz e Marina Camargo – Eclipses
Brit Bunkley – Eclipses
Brit Bunkley – Spin
cali man – appendXship
Carlindo da Conceição Barbosa, Kauê de Oliveira Souza, Guilherme Tetsuo Takei, Renato Michalischen, Ricardo Rodrigues Martins, Tassia Deusdara Manso, Thalyta de Almeida Barbosa.
Orientadores: Mauricio Mazza e Carlos Alberto Barbosa – Da Música ao Caos
Christoph Korn – waldstueck
Corpos Informáticos: Bia Medeiros – UAI 69
Daniel Duda – do pixel ao pixel
Daniel Kobayashi, Felipe Crivelli Ayub, Fernando Boschetti, Luiz Felipe M. Coelho, Marcelo Knelsen, Mauro Falavigna, Rafael de A. Campos, Wellington K. Guimarães Bastos. Orientadores: Mércia Albuquerque e Gizela Belluzo de Campos – A Casa Dentro da Porta
David Clark - 88 Constellations for Wittgenstein
Thais Paola Galvez, Josias Silva, Diego Abrahão Modesto, Nilson Benis, Vinicius Augusto Naka de Vasconcelos, Wilson Ruano Junior, Marcela Moreira da Silva. Orientadores: Paulo Costa e Mauricio Mazza – Rogério Caos
Diogo Fuhrmann Misiti, Guilherme Pilz, João Henrique - Caleidoscópio Felliniano: 8 1/2
Agence TOPO: Elene Tremblay, Marcio Lana-Lopez, Maryse Larivière, Marie-Josée Hardy, James Prior – Mês / My contacts
Eliane Weizmann, Fernando Marinho e Leocádio Neto – Story teller
Fabian Antunes Silva – Pusada Recant Abaetuba
Edgar Franco e Fabio FON - Freakpedia - A verdadeira enciclopédia livre
Fernando Aquino – UAI-Justiça
Henry Gwiazda – claudia and paul
Henry Gwiazda – a doll`s house is……
Henry Gwiazda – there`s whispering......
Architecture in Metaverse: Hidenori Watanave - "Archidemo" - Architecture in Metaverse
Isabel Margarida Aranda Mansilla – Cyber Birds Dance
Dana Sperry - Sketch for an Intermezzo for the Masses, no. 7
Jorn Ebner - sans femme et sans aviateur
Josephine Anstey e Dave Pape – Office Diva
Joshua Andrew Fishburn – Layers
Joshua Andrew Fishburn – Waiting
Joshua Andrew Fishburn – Peculiaris
Kevin Evensen – Veils of Light
lemeh42: santini michele e paoloni lorenza - Study on human form and humanity #01
Linda Hilfling Nielsen – misspelling generator
Lisa Link – If I worked for 493 years
Marcelo Padre – Estro
Martha Carrer Cruz Gabriel – Locative Painting
Martin John Callanan – I Wanted to See All of the News From Today
Mateus Knelsen, Ana Clara, Felipe Vasconcelos, Rafael Jacobsen, Ronaldo Silva - A pós-modernidade em recortes: Tide Hellmeister e as relações Design e cultura
Mateus Knelsen, Felipe Szulc, Mileine Assai Ishii, Pamela Cardoso, Tânia Taura – Homo ex Machina
Michael Takeo Magruder – Sequence-n (labyrinth)
Michael Takeo Magruder – Sequence-n (horizon)
Michael Takeo Magruder + Drew Baker + David Steele – The Vitruvian World
Nina Simões - Rehearsing Reality ( An interactive non-linear docufragmentary)
Nurit Bar-Shai – Nothing Happens
projectsinge: Blanquet jerome – Monkey_Party
QUBO GAS: Jef Ablézot, Morgan Dimnet & Laura Henno – WATERCOULEUR PARK
rachelmauricio castro – 360
rachelmauricio castro – R.G.B.
rachelmauricio castro – tybushwacka
Rafael Rozendaal – future physics
Regina Célia Pinto – Ninhos e Magia
Ronaldo Ribeiro da Silva (Roni Ribeiro) – Bípedes
Rubens Pássaro – ISTO NÃO É PARANÓIA
Rui Filipe Antunes – xTNZ
Selcuk ARTUT & Cem OCALAN – NewsPaperBox
Stuart Pound – Dominat Culture
Stuart Pound – SF
Stuart Pound – Life on Mars
Susan Jowsey and Marcus Williams – The Trouble
Tanja Vujinovic – Osciloo
Weinberger Hannah – “Without Title”
CINEMA DOCUMENTA
Antonello Matarazzo – Interferenze
Bruno Natal - Dub Echoes
Carlo Sansolo - Panoramika Eletronika
Kevin Logan – Recitation
Kodiak Bachine e Apollo 9 – Nuncupate
Linda Hilfing Nielsen - Participation 0.0
Maren Sextro e Holger Wick - Slices, Pioneers of Electronic Music – Vol.1 – Richie
Matthew Bate - What The Future Sounded Like
Thomas Ziegler, Jason Gross e Russell Charmo - OHM+ the early gurus of electronic music
CINEMA DIGITAL
A Study of 4D Julia sets
Baraka / Baraka from DVD to 4K / Baraka with the monkey
Beatbox360
Enquanto a noite não chega (While we wait for the night - first Brazilian film in 4K) / (primeiro filme brasileiro em 4k)
Era la Notte
Flight to the Center of the Milky Way
Growth by aggregation 2
Jet Instabilities in a stratified fluid flow
Keio University Concert
Manny Farber (Tribute to)
Scalable City
The Nonlinear Evolution of the Universe
The Prague train
FILE INOVAÇÃO / FILE INNOVATION
Interface Cérebro-Computador – Eduardo Miranda
Sistema comercial de Reconhecimento Automático - Genius Instituto de Tecnologia
Robô de visão omnidirecional – Jun Okamoto
Loo Table: mesa interativa - André V. Perrotta, Erico Cheung e Luis Stateri dos Santos, da empresa Loodik
Simulador de Ondas e Simulador de Turbilhão - Steger produção de efeitos especiais ltda.
GAMES INSTALAÇÕES / INSTALLATIONS GAMES
Giles Askham – Aquaplayne
Jonah Warren & Steven Sanborn – Transpose
Jonah Warren & Steven Sanborn – Full Body Games
Fabiano Onça e Coméia – Tantalus Quest
Julian Oliver - levelHead
GAMES
Andreas Zecher – Understanding Games
Andrei R. Thomaz – Cubos de Cor
Arvi Teikari – Once In Space
Fabrício Fava – Futebolando
Golf Question Mark – Golf
Introversion.co.uk – Darwinia
Jens Andersson and Ida Rödén – Rorschach
Jonatan Söderström – CleanAsia!
Jonatan Söderström – AdNauseum2
Jorn Ebner – sans femme et sans aviateur
Josh Nimoy – BallDroppings
Josiah Pisciotta – Gish
Marek Walczak and Martin Wattenberg – Thinking Machine 7
Mariana Rillo – Desmanche
Mark Essen - Punishment: The punishing
Mark Essen - RANDY BALMA: MUNICIPAL ABORTIONIST
Playtime – SFZero
QUBO GAS: Jef Ablézot, Morgan Dimnet & Laura Henno - WATERCOULEUR PARK
QueasyGames - Jonathan Mak – Everyday Shooter
R-S-G: Radical Software Group - Kriegspiel - Guy Debord's Game of War
Shalin Shodhan (www.experimentalgameplay.com) – On a Rainy Day
Shalin Shodhan (www.experimentalgameplay.com) – Cytoplasm
Shalin Shodhan (www.experimentalgameplay.com) – Particle Rain
Tales of Tales: Auriea Harvey & Michaël Samyn - The Graveyard
Tanja Vujinovic – Osciloo
ThatGameCompany – Jenova Chen – Clouds
ThatGameCompany – Jenova Chen - flOw
JOGOS BR
JOGOS BR 1
Ayri - Uma Lenda Amazônica - Sylker Teles da Silva / Outline Interactive
Capoeira Experience - Andre Ivankio Hauer Ploszaj / Okio Serviços de Comunicação Multimídia Ltda.
Cim-itério - Wagner Gomes Carvalho / Green Land Studios
Incorporated (Emprego Maluco) - Tiago Pinheiro Teixeira / Interama Jogos Eletrônicos
Iracema Aventura – Odair Gaspar / Perceptum Software Ltda.
Nevrose: Sangue e Loucura Sob o Sol do Sertão - Rodrigo Queiroz de Oliveira
/ Gamion Realidade Virtual & Games
Raízes do Mal – Marcos Cruz Alves / Ignis Entretenimento e Informática Ltda.
JOGOS BR 2 – Jogos Completos / Complete Games
Cave Days - Winston George A. Petty / Insolita Studios
Peixis!
(JOGO EM DESENVOLVIMENTO) - Wallace Santos Lages / Ilusis Interactive Graphics
JOGOS BR 2 – Demos Jogáveis / Playable Demos
Brasilia Tropicalis - Thiago Salgado Aiache de Moraes / Olympya Games
Conspiração Dumont - Guilherme Mattos Coutinho
Flora - Francisco Oliveira de Queiroz
Fórmula Galaxy – Artur Corrêa / Vencer Consultoria e Projetos Ltda.
Inferno - Alexandre Vrubel / Continuum Entertainment Ltda
Lex Venture - Tiago Pinheiro Teixeira / Interama Jogos Eletrônicos
Trem de Doido (DEMO EM DESENVOLVIMENTO) - Marcos André Penna Coutinho
Zumbi, o rei dos Palmeiras - Nicholas Lima de Souza
HIPERSÔNICA / HIPERSONICA
Hipersônica Performance
Andrei Thomaz, Francisco Serpa, Lílian Campesato e Vitor Kisil – Sonocromática
Bernhard Gal – Gal Live
+Zero: Fabrizio Augusto Poltronieri, Jonattas Marcel Poltronieri, Raphael Dall'Anese - +Zero do Brasil
Luiz duVa - Concerto para duo de laptops
Henrique Roscoe (a.k.a. 1mpar) – HOL
Jose Ignacio Hinestrosa e Testsu Kondo – Fricciones
Alexandre Fenerich e Giuliano Obici – Nmenos1
Orqstra de Laptops de São Paulo - EvEnTo 3 Movimentos para Orquestra
Hipersônica Participantes
Agricola de Cologne - soundSTORY - sound as a tool for storytelling
Jen-Kuan Chang – Drishti II
Jen-Kuan Chang – Discordance
Jen-Kuan Chang – Nekkhamma
Jen-Kuan Chang - She, Flush, Vegetable, Lo Mein, and Intolerable Happiness
Jerome Soudan – Mimetic
Matt Lewis e Jeremy Keenan – Animate Objects
Robert Dow - Precipitation within sight
Tetsu Kondo – Dendraw
Tomas Phillips – Drink_Deep
Hipersônica Screening
Art Zoyd's: Amandine Top – EYECATCHER 1
Art Zoyd's: Amandine Top – EYECATCHER 2, Man with a movie câmera
Art Zoyd's: Amandine Top – Movie-Concert for The Fall of the Usher House
Flames aka Flames: Raphael Freire - Performance Audiovisual Sincronizada: Sociedade Pós-Moderna, Novas Tecnologias e Espaço Urbano
Celia Eid e Sébastien Béranger – Gymel
Duprass: Liora Belford e Ido Govrin – Free Field
Duprass: Liora Belford e Ido Govrin – Pink / Noise
Citrullo International: Carlo Hitermann – H2O
1mpar: Henrique Roscoe – HOL
Jay Needham - Narrative Half-life
Daniel Carvalho - butterbox – diving
Daniel Carvalho - OUT_FLOW PART I
Bernard Loibner – Meltdown
Soundsthatmatter – trotting
Soundsthatmatter – briji
Fernando Velázquez – Nómada
Bjorn Erik Haugen – Regress
Audiobeamers: Felipe Frozza - Paesaggi Liquidi II
David Muth - You Are The Sony Of My Life
Dennis Summers – Phase Shift Videos
INSTALAÇÕES / INSTALLATIONS
Anaisa Franco – Connected Memories
Andrei Thomaz & Sílvia Laurentiz – 1º Subsolo
Graffiti Research Lab – Various
Hisako K. Yamakawa – Kodama
r3nder.net+i2off.org – is.3s
Jarbas Jacome – Crepúsculo dos Ídolos
Julio Obelleiro & Alberto García – Magnéticos
Julio Obelleiro & Alberto García – The Magic Torch
Mariana Manhães – Liquescer (Jarrinho)
Mariana Manhães – Liquescer (Jarrinho Azul)
Rejane Cantoni e Leonardo Crescenti – PISO
Sheldon Brown – Scalable City
Soraya Braz e Fábio FON – Roaming
Takahiro Matsuo – Phantasm
Ursula Hentschlaeger – Outer Space IP
Ursula Hentschlaeger – Phantasma
Ursula Hentschlaeger – Binary Art Site
SYMPOSIUM
Agnus Valente
Anaisa Franco
Andre Thomaz e Silvia Laurentiz
Christin Bolewski
Giles Askham
Graffiti Research Lab: James Powderly
Hidenori Watanave
Ivan Ivanoff e Jose Jimenez
Jarbas Jácome
João Fernando Igansi Nunes
Marcos Moraes
Mediengruppe Bitnik; Carmen Weisskopf, Domagoj Smoljo, Silvan Leuthold, Sven König [SWI]
Mesa Redonda (LABO) - Cicero Silva, Lev Manovich (teleconferencia) e Noah Wardrip-Fruin
Mesa Redonda [BRA] – (Hipersônica) Renata La Rocca, Gabriela Pereira Carneiro, Ana Paula Nogueira de Carvalho, Clarissa Ribeiro Pereira de Almeida. Mediação: Vivian Caccuri
Mesa Redonda [BRA] - [Ministro da Cultura: Gilberto Gil | Secretário do Audiovisual do Ministério da Cultura: Sílvio Da-Rin | Secretário de Políticas Culturais do Ministério da Cultura: Alfredo Manevy ]
Mesa Redonda [BRA] - Inovação - Lala Deheinzelin, Gian Zelada, Alessandro Dalla, Ivandro Sanches, Eduardo Giacomazzi. Mediação: Joana Ferraz
Mesa Redonda 4k - Jane de Almeida, Sheldon Brown, Mike Toillion, Todd Margolis, Peter Otto
Nardo Germano
Nori Suzuki
Sandra Albuquerque Reis Fachinello
Satoru Tokuhisa
Sheldon Brown
Soraya Braz e Fabio FON
Suzete Venturelli, Mario Maciel e bolsistas do CNPq/UnB (Johnny Souza, Breno Rocha, João Rosa e Samuel Castro) [BRA]
Ursula Hentschlaeger
Valzeli Sampaio
Description: This is my image of the Andromeda Galaxy M31 based on about 15 hours of total exposure time. The image identifies the two satellite galaxies M32 and M110. The angular size of M31 is a huge 178x63 arcminutes which occupies a significant portion of the APS-C sensor of my camera. Since there are also numerous background stars, finding a relatively star-free area to do a Background Neutralization is a bit of a challenge. I also found achieving a proper color balance to be another challenge. Various sources indicate the presence of an outer bluish halo encompassing the core. I tried to achieve my objective by applying a series of Curves Transformations while protecting the background with a mask. As a side note, while numerous stars are present, I decided against applying a Morphological Transformation to reduce their brightness because in doing so I detected an undesirable ringing effect. One possible solution is to apply Multiscale Linear Transform with deringing selected. However, I have not tested that option.
Date / Location: 21-23 September and 8-10 October 2022 / Washington D.C.
Equipment:
Scope: WO Zenith Star 81mm f/6.9 with WO 6AIII Flattener/Focal Reducer x0.8
OSC Camera: ZWO ASI 2600 MC Pro at 100 Gain
Mount: iOptron GEM28-EC
Guide Scope: WO 50mm Uniguide Scope
Guide Camera: ZWO ASI 290mm
Focuser: ZWO EAF
Light Pollution Filter: Chroma LoGlow Broadband
Processing Software: Pixinsight
Processing Steps:
Preprocessing: I preprocessed 184x300s subs (= 15.3 hours) in Pixinsight to get an integrated image using the following steps: Image Calibration > Cosmetic Correction > Subframe Selector > Debayer > Select Reference Star and Star Align > Image Integration.
Linear Postprocessing: Rotation > Dynamic Crop > Dynamic Background Extractor (subtraction to remove light pollution gradients and division for flat field corrections; see the background-correction sketch after these steps) > Background Neutralization > Color Calibration > Noise Xterminator.
Nonlinear Postprocessing: First Histogram Transformation > Second Histogram Transformation > First Local Histogram Equalization > Second Local Histogram Equalization > First Curves Transformation > Second Curves Transformation > Third Curves Transformation > SCNR Noise Reduction.
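The Dynamic Background Extractor step above is used in two modes: subtraction for additive light-pollution gradients and division for multiplicative flat-field (vignetting) errors. A small numpy sketch of why the two modes differ, with a generic smooth `background_model` standing in for DBE's fitted surface (the pedestal and normalisation choices are illustrative):

```python
import numpy as np

def correct_background(image, background_model, mode="subtract"):
    """Remove a smooth background model from a linear image (sketch).

    mode="subtract": for additive gradients (sky glow, light pollution);
                     adding the model's median keeps the result positive.
    mode="divide":   for multiplicative errors (vignetting / flat-field);
                     the model is normalised so overall brightness is preserved.
    """
    if mode == "subtract":
        return image - background_model + np.median(background_model)
    if mode == "divide":
        return image * np.median(background_model) / background_model
    raise ValueError("mode must be 'subtract' or 'divide'")
```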
I love the Baker table and look of this room.
What the Domino editors say: "Stepped circles—the table's narrow base and generous top, plus the two-level antiqued-brass fixture—telegraph a nonlinear intimacy that establishes an airy feel in small quarters."
Photo by Annie Schlechter, Domino, March 2009.
Table: 60" x 30 1/2" "Round," $2,842, bakerstudio.com for stores.
Chandelier: "Simone 9L," $1,499, arteriorshome.com for stores.
Chairs: "Pissarro," $1,435 each, (as shown), Andrew Martin International, (212) 688-4498.
Flooring: 4 1/2" x 36" vinyl "Essentials Collection," from $5.80/square foot (plus installation), amtico.com for stores.
Been going back and reshooting the [Messier objects](www.reddit.com/r/astrophotography/comments/hp45n2/the_mes...) I photographed in my DSLR days. I last shot this galaxy (and the many other background galaxies) [in 2018](www.reddit.com/r/astrophotography/comments/8kj0ns/m49_and...). Captured on April 19th and 20th, 2021 from a Bortle 6 zone
**[Equipment:](i.imgur.com/6T8QNsv.jpg)**
* TPO 6" F/4 Imaging Newtonian
* Orion Sirius EQ-G
* ZWO ASI1600MM-Pro
* Skywatcher Quattro Coma Corrector
* ZWO EFW 8x1.25"/31mm
* Astronomik LRGB+CLS Filters- 31mm
* Astrodon 31mm Ha 5nm, Oiii 3nm, Sii 5nm
* Agena 50mm Deluxe Straight-Through Guide Scope
* ZWO ASI-120MC for guiding
* Moonlite Autofocuser
**Acquisition:** 5 hours 54 minutes (Camera at Unity Gain, -15°C)
* Lum - 108x180"
* Red - 23x180"
* Green - 23x180"
* Blue - 23x180"
* Darks- 30
* Flats- 30 per filter
**Capture Software:**
* Captured using [N.I.N.A.](nighttime-imaging.eu) and PHD2 for guiding and dithering.
**PixInsight Processing:**
* BatchPreProcessing
* StarAlignment
* [Blink](youtu.be/sJeuWZNWImE?t=40)
* ImageIntegration
* DrizzleIntegration (2x, Var β=1.5) (Lum only)
* StarAlign R, G, B stacks to drizzled L
* DynamicCrop
* AutomaticBackgroundExtraction
**Luminance:**
* EZ Denoise
* MMT de-splotching
* ArcsinhStretch + HistogramTransformation to bring nonlinear
**RGB:**
* ChannelCombination to combine monochrome R, G, B stacks into color image
* PhotometricColorCalibration
* Slight SCNR
* HSV Repair
* ArcsinhStretch + HistogramTransformation to bring nonlinear (an arcsinh stretch sketch follows this list)
**Nonlinear:**
* LRGBCombination with stretched L as luminance
* Several CurveTransformations to adjust lightness, contrast, colors, saturation, etc. (some with star and/or lum masks)
* ACDNR
* EZ StarReduction
* NoiseGenerator to add noise back into star-reduced areas
* Final Curves
* Annotation
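The "ArcsinhStretch + HistogramTransformation to bring nonlinear" steps above compress the huge dynamic range of the linear stacks while preserving star colour better than a plain gamma curve. A minimal numpy sketch of an arcsinh stretch (the stretch factor and black point are illustrative, not the values used for this image, and this is a simplification of what the PixInsight process does):

```python
import numpy as np

def arcsinh_stretch(linear, stretch=100.0, black_point=0.0):
    """Arcsinh-stretch a linear image with values in [0, 1] (sketch).

    Faint signal is boosted strongly while bright pixels are compressed gently,
    which is why this kind of stretch preserves star colour better than a
    power-law (gamma) curve.
    """
    x = np.clip(linear - black_point, 0.0, None)
    return np.arcsinh(stretch * x) / np.arcsinh(stretch)
```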
Description: This is an image of the Needle Galaxy NGC 4565, an edge-on spiral galaxy in Coma Berenices, developed from a total exposure time of 6.75 hours. Of interest is the structure in the central portion of the disk, as shown in the magnification inset. To reduce the brightness of the stellar background, I applied a morphological transformation in postprocessing.
Date/Location: 14, 18 February 2023 / Washington D.C.
Equipment:
Scope: WO Zenith Star 81mm f/6.9 with WO 6AIII Flattener/Focal Reducer x0.8
OSC Camera: ZWO ASI 2600 MC Pro at 100 Gain and 50 Offset
Mount: iOptron GEM28-EC
Guider: ZWO Off-Axis Guider
Guide Camera: ZWO ASI 174mm mini
Focuser: ZWO EAF
Light Pollution Filter: Chroma LoGlow Broadband
Processing Software: Pixinsight
Processing Steps:
Preprocessing: I preprocessed 81x300s subs (= 6.75 hours) in Pixinsight to get an integrated image using the following steps: Image Calibration > Cosmetic Correction > Subframe Selector > Debayer > Select Reference Star and Star Align > Image Integration.
Linear Postprocessing: Rotation > Dynamic Crop > Dynamic Background Extractor (both subtraction to remove light pollution gradients and division for flat field corrections) > Background Neutralization > Color Calibration > Noise Xterminator.
Nonlinear Postprocessing: First Histogram Transformation > Local Histogram Equalization > First Curves Transformation > SCNR Noise Reduction > Second Histogram Transformation > Morphological Transformation > Second Curves Transformation > Third Histogram Transformation.
Global inequality is growing, with half the world’s wealth now in the hands of just 1% of the population, according to a new report. Each of the remaining 383m adults – 8% of the population – has wealth of more than $100,000. This number includes about 34m US dollar millionaires. About 123,800 of these individuals have more than $50m, and nearly 45,000 have more than $100m.

There is overwhelming agreement among economists that the Second World War was responsible for decisively ending the Great Depression. When asked why the wars in Iraq and Afghanistan are failing to make the same impact today, they often claim that the current conflicts are simply too small to be economically significant.
There is, of course, much irony here. No one argues that World War II, with its genocide, tens of millions of combatant casualties, and wholesale destruction of cities and regions, was good for humanity. But the improved American economy of the late 1940s seems to illustrate the benefits of large-scale government stimulus. This conundrum may be causing some to wonder how we could capture the good without the bad.
If one believes that government spending can create economic growth, then the answer should be simple: let's have a huge pretend war that rivals the Second World War in size. However, this time, let's not kill anyone.
Most economists believe that massive federal government spending on tanks, uniforms, bullets, and battleships used in World War II, as well as the jobs created to actually wage the War, finally put an end to the paralyzing "deflationary trap" that had existed since the Crash of 1929. Many further argue that war spending succeeded where the much smaller New Deal programs of the 1930s had fallen short.
The numbers were indeed staggering. From 1940 to 1944, federal spending shot up more than six times from just $9.5 billion to $72 billion. This increase led to a corresponding $75 billion expansion of US nominal GDP, from $101 billion in 1940 to $175 billion by 1944. In other words, the war effort caused US GDP to increase close to 75% in just four years!
The War also wiped out the country's chronic unemployment problems. In 1940, eleven years after the Crash, unemployment was still at a stubbornly high 8.1%. By 1944, the figure had dropped to less than 1%. The fresh influx of government spending and deployment of working-age men overseas drew women into the workforce in unprecedented numbers, thereby greatly expanding economic output. In addition, government spending on wartime technology produced a great many breakthroughs that impacted consumer goods production for decades.
So, why not have the United States declare a fake war on Russia (a grudge match that is, after all, long overdue)? Both countries could immediately order full employment and revitalize their respective manufacturing sectors. Instead of live munitions, we could build all varieties of paint guns, water balloons, and stink bombs.
Once new armies have been drafted and properly outfitted with harmless weaponry, our two countries could stage exciting war games. Perhaps the US could mount an amphibious invasion of Kamchatka (just like in Risk!). As far as the destruction goes, let's just bring in Pixar and James Cameron. With limitless funds from Washington, these Hollywood magicians could surely produce simulated mayhem more spectacular than Pearl Harbor or D-Day. The spectacle could be televised, with advertising revenue going straight to the government.
The competition could be extended so that the winner of the pseudo-conflict could challenge another country to an all-out fake war. I'm sure France or Italy wouldn't mind putting a few notches in the 'win' column. The stimulus could be never-ending.
If the US can't find any willing international partners, we could always re-create the Civil War. Missed the Monitor vs. the Merrimack the first time? No worries, we'll do it again!
But to repeat the impact of World War II today would require a truly massive effort. Replicating the six-fold increase in the federal budget that was seen in the early 1940s would result in a nearly $20 trillion budget today. That equates to $67,000 for every man, woman, and child in the country. Surely, the tremendous GDP growth created by such spending would make short work of the so-called Great Recession. The big question is how to pay for it.

To a degree that will surprise many, the US funded its World War II effort largely by raising taxes and tapping into Americans' personal savings. Both of those avenues are nowhere near as promising today as they were in 1941. Current tax burdens are now much higher than they were before the War, so raising taxes today would be much more difficult. The "Victory Tax" of 1942 sharply raised income tax rates and allowed, for the first time in our nation's history, taxes to be withheld directly from paychecks. The hikes were originally intended to be temporary but have, of course, far outlasted their purpose. It would be unlikely that Americans would accept higher taxes today to fund a real war, let alone a pretend one.

That leaves savings, which was the War's primary source of funding. During the War, Americans purchased approximately $186 billion worth of war bonds, accounting for nearly three quarters of total federal spending from 1941-1945. Today, we don't have the savings to pay for our current spending, let alone any significant expansions. Even if we could convince the Chinese to loan us a large chunk of the $20 trillion (on top of the $1 trillion we already owe them), how could we ever pay them back?

If all of this seems absurd, that's because it is. War is a great way to destroy things, but it's a terrible way to grow an economy. What is often overlooked is that war creates hardship, and not just for those who endure the violence. Yes, US production increased during the Second World War, but very little of that was of use to anyone but soldiers. Consumers can't use a bomber to take a family vacation. The goal of an economy is to raise living standards. During the War, as productive output was diverted to the front, consumer goods were rationed back home and living standards fell. While it's easy to see the numerical results of wartime spending, it is much harder to see the civilian cutbacks that enabled it.

The truth is that we cannot spend our way out of our current crisis, no matter how great a spectacle we create. Even if we spent on infrastructure rather than war, we would still have no means to fund it, and there would still be no guarantee that the economy would grow as a result. What we need is more savings, more free enterprise, more production, and a return of American competitiveness in the global economy. Yes, we need Rosie the Riveter - but this time she has to work in the private sector making things that don't explode. To do this, we need less government spending, not more.
The existing literature identifies natural resource wealth as a major determinant of civil war. The dominant causal link is that resources provide finance and motive (the “looting rebels” model). Others see natural resources as causing “political Dutch disease,” which in turn weakens state capacity (the “state capacity” model). In the looting rebels model, resource wealth first increases, but then decreases the risk for civil war as very large wealth enables governments to constrain rebels, whereas in the state capacity model, large resource wealth is unambiguously related to higher risk of war. This research note uses a new dataset on natural resource rents that are disaggregated as mineral and energy rents for addressing the resources-conflict relationship. We find that neither a dummy variable for major oil exporters nor our resource rents variables predict civil war onset with a 1000-battle-death threshold coded by Fearon and Laitin (2003, American Political Science Review 97(1): 1–16) in the period after 1970 for which rents data are available. However, using a lower threshold of 25 battle deaths, we find that energy wealth, but not mineral wealth, increases the risk for civil war onset. We find no evidence for a nonlinear relationship between either type of resources and civil war onset. The results tentatively support theories built around state capacity models and provide evidence against the looting rebels model of civil war onset.
www.businessinsider.com/lets-pretend-to-have-another-seco...
A considerable body of poetical work has been attributed to Saint Kabir. And while two of his disciples, Bhāgodās and Dharmadās, did write much of it down, "...there is also much that must have passed, with expected changes and distortions, from mouth to mouth, as part of a well-established oral tradition."
In that Place There Is No Happiness or Unhappiness,
No Truth or Untruth
Neither Sin Nor Virtue.
There Is No Day or Night, No Moon or Sun,
There Is Radiance Without Light.
There Is No Knowledge or Meditation
No Repetition of Mantra or Austerities,
Neither Speech Coming From Vedas or Books.
Doing, Not-Doing, Holding, Leaving
All These Are All Lost Too In This Place.
No Home, No Homeless, Neither Outside or Inside,
Micro and Macrocosm Are Non-Existent.
Five Elemental Constituents and the Trinity Are Both Not There
Witnessing Un-struck Shabad Sound is Also Not There.
No Root or Flower, Neither Branch or Seed,
Without a Tree Fruits are Adorning,
Primordial Om Sound, Breath-Synchronized Soham,
This and That - All Are Absent, The Breath Too Unknown
Where the Beloved Is There is Utterly Nothing
Says Kabir I Have Come To Realize.
Whoever Sees My Indicative Sign
Will Accomplish the Goal of Liberation.
Kabir
What is seen is not the Truth
What is cannot be said
Trust comes not without seeing
Nor understanding without words
The wise comprehends with knowledge
To the ignorant it is but a wonder
Some worship the formless God
Some worship His various forms
In what way He is beyond these attributes
Only the Knower knows
That music cannot be written
How can then be the notes
Says Kabir, awareness alone will overcome illusion
Kabir
There is a common trunk that carries energy from the Earth to the cosmos, a kind of Milky Way, fruit of the teats of a sacred cow. The link between the body of light and the physical body is a silver rope invisible to mortals. It would be necessary to die first to be reborn in a spiritual world. Attachment to material values divides us, and world war is the result of an oversized human ego. Thus, we must digest our reptilian impulses to live detached from the roots of evil and once again become a sacred fruit of the Tree of Life.

In this early spring, he seems happy to be in Paris. It was there that, in 2006, his career took a truly international turn. For the Nuit Blanche, Subodh Gupta had been invited to produce a work: "Very Hungry God". This monumental skull of gleaming kitchen utensils was shown at Saint-Bernard church in the Goutte-d'Or district, where the battle of the undocumented had taken place ten years earlier. Struck by this paradoxical image of prosperity and death, François Pinault immediately bought the sculpture, then installed it in front of his Venetian foundation, at the Palazzo Grassi. This skull became one of the most famous vanitas images of contemporary art, along with the one Damien Hirst made in diamonds a year later.

Born in a village in northern India, marked in his childhood by the presence of a theatre company and by film screenings his mother took him to, Subodh Gupta experienced a meteoric rise. First trained in figurative painting, he put this technique aside to make videos and assemblages of objects, often kitchen utensils, which have been his signature for about ten years. This is the case of "People Tree", a giant tree created especially to be presented in one of the Mint's courtyards. Subodh Gupta has a sense of sharing and loves to cook. It is for him an essential reference: he willingly compares the kneading of bread dough to the artistic gesture. His works also tell the story of travel and exile, like his boat overflowing with metal amphorae, evoking the fate of migrants.
He is interested in the cosmos and the philosopher's stone, a mysterious substance known to turn lead into gold.
Faced with success, he had to produce a lot. The size of his studios kept increasing every year to accommodate more assistants, and he said he sometimes made weaker pieces. So, for some time now, his work has taken a more meditative turn. He is interested in the cosmos and the philosopher's stone, a mysterious substance reputed to turn lead into gold, cure diseases, prolong human life... He has also returned to painting. Through works, often colossal, installed in 18th-century salons, the exhibition shows how far he has come.
Subodh Gupta spent a week working in the Mint's workshops to make a medal himself. The exchanges with the engravers seem to have been spontaneous, in this place which is one of the oldest factories in Paris. It was as an alchemist that he thought about his project: the idea came to him to associate the preciousness of spices with that of metal by placing the key ingredients of a good curry, garam masala, on modelling clay. The assembly will be scanned and pressed onto a metal disc. A reminder that in Vasco da Gama's time...
fr.pressfrom.com/actualite/culture/-95491-subodh-gupta-un...
While Gupta, an artist based in New Delhi, often uses form and content emanating from an Indian milieu as initial points of reference, these works are far from nostalgic, nativist or even culturally specific. They serve instead as universally relatable ruminations on the physical, the metaphysical, and their interconnections.
The exhibition's title, In This Vessel Lies the Philosopher’s Stone, is a citation from the writings of the 15th-century Indian poet Kabīr, one of India’s most celebrated mystics, venerated by Hindus and Muslims alike.
Kabīr identifies a humble vessel, a trope for the human body, to be the carrier of everything – the earth, the universe, and the divine. Subodh Gupta’s most recent works are a meditative exploration of both the literal and metaphorical implications of these verses. The quotidian pantry has long been Gupta’s artistic realm, where he finds material and meaning. But rather than expressing earthly horrors and delights, he has moved into capturing the cosmic in the everyday, resulting in a body of work that is simultaneously minimalist and exaggerated. For Gupta, the steam that escapes a boiling kettle suggests a new galaxy emerging, the sparks that scatter out of a wood stove appear to represent the birth of new stars, and the metallic banging of a hammer crushing aluminum suggests the celestial big bang. As the domestic is superimposed on the cosmic, astrophysical wonders are minimized to the mundane, and mundane earthly objects expand into interstellar awe.
The phrase paaras or paarasmani, mentioned in the verses by Kabir, refers to an oddly universal mythological object that is able to transmute ordinary materials into precious metals or imbue them with extraordinary powers. The western equivalent of this mystical gem is known as the philosopher’s stone. The power of the philosopher’s stone is uncannily similar to an artist’s power to elevate an ordinary object into a prized possession, simply by rendering it in an artwork. Subodh Gupta’s work is particularly analogous to this alchemical act of transmutation, and this is evident throughout his works, most literally perhaps in the work titled Only One Gold, which shows a humble potato seemingly transformed into a lump of gold.
www.itsliquid.com/subodh-gupta-in-this-vessel-lies-the-ph...
Hearst Memorial Mining Building, University of California, Berkeley
Revisiting one of my favorite buildings.
Description: M109 (or NGC 3992) is a beautiful barred spiral galaxy in Ursa Major; one of its interesting features is its ring structure. By integrating 298x300s subs for a total exposure of 24.8 hours, I was able to obtain a reasonably well-defined ring structure.
Date / Location: 24, 26-28 February 2023; 1, 4-6 March 2023 / Washington D.C.
Equipment:
Scope: WO Zenith Star 81mm f/6.9 with WO 6AIII Flattener/Focal Reducer x0.8
OSC Camera: ZWO ASI 2600 MC Pro at 100 Gain and 50 Offset
Mount: iOptron GEM28-EC
Guider: ZWO Off-Axis Guider
Guide Camera: ZWO ASI 174mm mini
Focuser: ZWO EAF
Light Pollution Filter: Chroma LoGlow Broadband
Processing Software: Pixinsight
Processing Steps:
Preprocessing: I preprocessed 298x300s subs (= 24.8 hours) in Pixinsight to get an integrated image using the following processes: Image Calibration > Cosmetic Correction > Subframe Selector > Debayer > Select Reference Star and Star Align > Image Integration.
Linear Postprocessing: Dynamic Background Extractor (both subtraction to remove light pollution gradients and division for flat field corrections) > Background Neutralization > Color Calibration > Noise Xterminator.
Nonlinear Postprocessing: Histogram Transformation > Local Histogram Equalization > Curves Transformation.
I also inserted, as needed, additional Noise Xterminator steps as well as an SCNR Noise Reduction step in postprocessing.
Throughout our lives, certain archetypes shape our sense of self, the world, the road we’re on, and the goals we seek. Our idea of good and evil, male and female, leaders, parents, mentors, friends, and more are framed in the stories of the Bible. The picture’s not always pleasant, but it never fails to be instructive and is sometimes downright revelatory. Mirror, mirror on the wall: what’s the purpose of us all?
Topics of the Day:
Sunday, Day 1: “Introduction” and “Your Life as Revelatory Source.” How did you get to be who you are? Your life is a sacred text read by all.
Monday, Day 2: “The Character of God” and “The Male/Female Thing.” You and I meet God in sacramental and sacred encounters. But in Scripture, we meet the God who is one character among many in the remarkable story of faith. And our second topic — Gender is complicated. Adam and Eve were just the beginning of the conflict. Gender issues remain with us in secular and sacred realms.
Tuesday, Day 3: “Follow the Leader” and “The Parent Trap”
Leadership styles come and go. From biblical patriarchs and kings to modern-day presidents and celebrities, we follow the leaders we invent and choose. And our 2nd topic – The Ten Commandments bid us to honor our father and our mother. Jesus says we should hate our parents. Please explain!
Wednesday, Day 4: “The Guiding Light” and “You’ve Got a Friend.” Elisha had Elijah. Timothy and Titus had Paul. Thank God for mentors: those significant folks along the way who show us how life works. Our second topic — It’s not good for us to be alone, as Genesis attests. Famous friendships help us explore the role of holy companioning.
Thursday, Day 5: “Who’s Your Devil?” and “What a Wonderful World.” Everyone fears the Dark Side. Who’s the enemy, and where does it reside? And our second topic — The universe is beautiful. Earth is our home. The Bible and science agree it will come to an end one day. What’s our relationship to a fragile planet?
Friday, Day 6: “And the Purpose of It All Is.” We’re born, we live, and we die. For most of us, that’s a pretty full plate of responsibilities. What should we do with this “one wild and precious life”?

What qualities are we looking for in the aspirants at Saint-Sulpice?
Parish Experience - Before an aspirant joins the Society, Sulpicians want to ensure that he has completed at least two years of parish work, which will have allowed him to develop a strong sense of belonging to the diocese and an attachment to parish ministry. Indeed, they need priests who live and love their priesthood and who wish to assist the bishops in the service of seminarians and diocesan priests.
Ability to work in a team - Sulpicians are looking for candidates who are able to work in a community environment and to work collegially on a mission, in consultation with fellow priests as well as with lay people or religious.
To know how to share one's faith through a life of prayer that nourishes a true enthusiasm for Christ and his Gospel, for the Church and the priesthood. The apostolic spirit that animated their founder, Jean-Jacques Olier, is the source of this sharing.
Special gifts that open the way to a quality intellectual and professional preparation in several fields: spiritual accompaniment, teaching of philosophy or theology, pastoral animation. This presupposes openness to learning a constantly renewed Sulpician pedagogy.

How can a priest become a Sulpician?
Prerequisites - To be a diocesan priest incardinated in a diocese, to have completed at least two years of parish ministry in the diocese of origin, and to be available for service in the Canadian Province of the Priests of Saint-Sulpice.
Initial recognition - If a priest meets these prerequisites, he can contact the Sulpician vocations officer for his region (see list below). The officer will inform him about the regular meetings organized for aspirants to the Society and will be in charge of this first experience with Saint-Sulpice until the Provincial Council accepts him as a candidate. His participation in these meetings will give him sufficient information about the Society and the demands of Sulpician life. This priest will also be accompanied spiritually in discerning his possible Sulpician vocation.
Candidature - After this time of discernment, with the support of the Sulpician vocations officer in his region, he asks his bishop for written authorization to make an experience in Saint-Sulpice. The aspirant then applies by contacting the provincial superior or the provincial delegate in writing.
First experience in Saint-Sulpice - If formally accepted and admitted as a candidate, he becomes the direct responsibility of the Provincial Council for his experience with the Priests of Saint-Sulpice. The Council then takes charge of him and gives him a first appointment to a team in the Canadian Province from the moment his bishop relieves him of his duties. Usually, this first experience in the Society lasts at least two years.

The expression "art Saint-Sulpice" is misleading, because it encompasses very different periods and artists under the same name and the same discredit, and because it confuses an art of reproduction and wide circulation with the search for an authentic sacred art that has been continuous for nearly two centuries.
In the proper sense, Sulpician art refers to the objects that are sold in the specialized shops that surround the church of the same name in Paris: industrial and economic art, of poor quality, where the mimicry and the fading of style reassure and somehow carry the seal of an official art, orthodox and without excess. Thus understood, Sulpician art is of all times, and every effort to renew religious art naturally secretes its own counterfeit: the virgins and saints with their white eyes and pale air, descended from Ary Scheffer and his Raphaelism; the statues of the Virgin of Lourdes, a poor translation of the mediocre model of the pious sculptor Cabuchet; the overly sentimental effigies of Thérèse de Lisieux or Saint Anthony of Padua; even the neo-Byzantine works, a pale reflection. In fact, the interest of Sulpician art is not only sociological; it is also, as a countertype, revealing of the interest that religious art has never ceased to arouse, against all appearances.
Holy Mirror!
We first address the problem of simultaneous image segmentation and smoothing by approaching the paradigm from a curve evolution perspective. In particular, we let a set of deformable contours define the boundaries between regions in an image where we model the data via piecewise smooth functions.
www.vallombrosa.org/the-holy-mirror-discovering-ourselves...
Origin of the Holy Mirrors!
Mirrors have been regarded as sacred at least since the Han Dynasty in China. Many of these mirrors, and mirrors from the subsequent Wei dynasty, have been found in Japan. They bore images of gods and sacred animals, particularly the Chinese dragon (1, 2). They were very popular, and possibly later manufactured, in Japan. The bronze mirrors are found in great number in ancient (kofun period) burial mounds in Japan. In the biggest archeological find of 33 mirrors, the mirrors were placed surrounding the coffin such that their reflective surface faced the deceased. The Han mirrors were "magic" in that while they reflected, they were also able to project an image, usually of the deities and animals on the back, and were referred to as "light passing mirrors" (透明鑑) (Needham, 1965, p. xlic; Needham & Wang, 1977, pp. 96-97).

This magic property is due to their method of construction. When polishing the reflective face of the mirror, the pattern on the back influences the pressure brought to bear on the reflective surface and changes the extent to which it is concave. Muraoka also claims that differences in the (slight) "inequality of curvature" (Ayrton & Perry, 1878, p. 139; see also Thompson, 1897, and Needham & Wang, 1977, p. 96 for a diagram) of the mirror result in the mirror reflecting light bearing the pattern shown on the reverse. More recent research has elucidated the precise mathematical model describing the optics of these mirrors as a Laplacian image (Berry, 2006), a type of spatial filter today used for edge detection and to blend two images together. It is not known whether the mirrors popular in ancient Japan were also able to project, but later, during the Nara period, mirrors were found to conceal magic Buddhist images, and during the Edo period, concealed Christians (Kirishitan) concealed images of the cross or of the Holy Mary within their bronze "magic" mirrors. Mirrors in Japan continued to be made of brass until the arrival of Western glass mirrors, and were "magic" in that they displayed the pattern on their reverse when reflecting sunlight or another powerful light source (Thompson, 1897). Ayrton (Ayrton & Perry, 1878; Ayrton & Pollock, 1879) claims that in Japan mirror vendors were unaware of the "light passing" quality, and that there is no mention of this 'magical' quality, known to the Han Chinese, in Japanese texts. Even a Japanese mirror maker was unaware of how to make magic mirrors, though he had inadvertently made one himself by extensively polishing a mirror with a design on its back (Ayrton & Perry, 1878, p. 135).

Unlike the ancient Korean mirror top right (3), the ancient Han and Japanese mirrors were made to be rotated, displaying images in the four directions of the compass. The reason for the hole in the central "breast" (or nipple) is unclear, but it is pierced with a hole (of varying shape depending upon the manufacturer) from which the mirror was suspended by a rope. Bearing in mind that the images on the mirrors required that the mirrors be rotated, the central nodule might also have enabled the mirrors to be spun like a top. I am not sure why someone would want to spin a mirror, but my son does (see the toy explained later). I would very much like to see what the reflected "magic" image becomes when spun. The creatures on the reverse will be merged in the reflected image, but probably not in a Laplacian way - just as concentric circles. If anyone has a magic mirror, I would like them to try spinning it to see.
Skipping the holy mirrors in shrines, mirror rice cakes, and the mirror held by the Japanese version of Saint Peter at the Pearly Gates, King Enma, which holds a record of one's life, and jumping to the present day... Mirrors are popular in the transformational items used by Japanese superheroes. The early 1970s Mirror Man transformed using a Shinto amulet in front of any mirror or reflecting surface. Shinkenja, a Super Sentai (Power Rangers) team that transforms thanks to its members' ability to write and then spin Chinese characters in the air, also transforms with the aid of an Inro Maru (4) upon which an inscribed disk is affixed. When the disk is attached to the mirror, the superhero inside the mirror is displayed. Transformation (henshin) by means of a mirror is popular too among Japanese female superheroes, notably Himitsu no Akko Chan (Secret Akko), who could change into many things that were displayed in her mirror, Sailor Moon, and OshareMajo (6). The female superheroes' mirrors usually make noises rather than contain inscriptions. The latest and greatest Kamen Rider OOO sometimes transforms by means of his Taja-Spina, which spins three of his totem-badge "coins" inside a mirror (video).

In this ancient tradition we see recurrence of the following themes: 1) mirrors being of great benefit to the bearer, enabling him to transform; 2) mirrors containing hidden deities; 3) mirrors being associated with symbols: iconic marks and incantations; 4) mirrors being made to be rotated or spun.

Thanks to James Ewing for the Mirror Man (Mira-man) reference, to Tomomi Noguchi for the Ojamajo Doremi reference, and to Taku Shimonuri and my son Ray for getting me interested in Japanese superheroes.

Addendum: One of my students (a Ms. Tanaka), and a book about the cute in Japan, pointed out that the Japanese are into round things, and it seems to me that this Japanese preference for the round may originate in the mirror.
Anpanman and Doraemon and many "characters" have round faces
The Japanese flag features a circle representing the sun and the mirror
Japanese coats of arms (kamon)
Japanese holy mirrors are round
"Mirror rice cakes", and many other kinds of rice cake, are round
The Sumo ring is round
Pictures of the floating world (Ukiyo-e) often portray the sitter in a round background
Japanese groups always have to end up by standing in a round
The Japanese are fond of domes and have many of the biggest
The Japanese are fond of seals (inkan), which are round
Japanese groups just can't help standing in a round
The taiko drum is round
The mitsudomoe is round
Mount Fuji is round
But then there are probably round things in every culture?
Cast and polished bronze mirrors, made in China and Japan for several thousand years, exhibit a curious property [1–4], long regarded as magical. A pattern embossed on the back
is visible in the patch of light projected onto a screen from the reflecting face when this is illuminated by a small source, even though no trace of the pattern can be discerned
by direct visual inspection of the reflecting face. The pattern on the screen is not the result of the focusing responsible for conventional image formation, because its sharpness is independent of distance, and also because the magic mirrors are slightly convex. It was established long ago that the effect results from the deviation of rays by weak undulations on the reflecting surface, introduced during the manufacturing process and too weak to see directly, that reproduce the much stronger relief embossed on the back. Such ‘Makyoh imaging’ (from the Japanese for ‘wonder mirror’) has been applied to detect small asperities on nominally flat semiconductor surfaces [5–8]. My aim here is to draw attention (section 2) to a simple and beautiful fact, central to
the optics of magic mirrors, that has not been emphasized—either in the qualitative accounts or in an extensive geometrical-optics analysis : in the optical regime relevant to
magic mirrors, the image intensity is given, in terms of the height function h(r) of the relief.on the reflecting surface, by the Laplacian ∇2 h(r) (here r denotes position in the mirror plane: r = {x, y}). The Laplacian image predicts striking effects for patterns, such as those on magic mirrors, that consist of steps ; these predictions are supported by experiment
The detailed study of reflection from steps throws up an unresolved problem concerning the relation between the pattern embossed on the back and the relief on the reflecting surface. The Laplacian image is an approximation to geometrical optics, which is itself an approximation to physical optics. The appendix contains a discussion of the Laplacian image starting from the wave integral representing Fresnel diffraction from the mirror surface. Geometrical optics and the Laplacian image If we measure the height h(r) from the convex surface of the mirror (figure 3), assumed to
have radius of curvature R₀, then the deviation of the surface undulations from a reference plane (figure 3) is
η(r) = −r²/(2R₀) + h(r).
The specularly reflected rays of geometrical optics are determined by the stationary value(s) of the optical path length L from the source (distance H from the reference plane) to the position R on the screen (distance D from the reference plane) via the point r on the mirror. This is
L = √[(H − η(r))² + r²] + √[(D − η(r))² + (R − r)²] ≈ H + D + Φ(r, R),   (2)
where in the second line we have employed the paraxial approximation (all ray angles small), with
Φ(r, R) = r²/(2H) + (R − r)²/(2D) + r²/R₀ − 2h(r).
In applying the stationarity condition ∇ᵣΦ(r, R) = 0, it is convenient to define the magnification M, the reduced distance Z, and the demagnified observation position r′ referred to the mirror surface:
M ≡ 1 + D/H + 2D/R₀,   Z ≡ 2D/M,   r′ ≡ R/M.
We note an effect of the convexity that will be important later: as the source and screen distances increase, Z approaches the finite asymptotic value R₀.
With these variables, the position r(r′, Z) on the mirror of rays reaching the screen position r′ is the solution of
r′ = r − Z∇h(r).
The focusing and defocusing responsible for the varying light intensity at r′ involves the Jacobian determinant of the transformation from r = (x, y) to r′ = (x′, y′), giving, after a short calculation,
I_geom(r′, Z) = constant × |∂x′/∂x ∂y′/∂y − ∂x′/∂y ∂y′/∂x|⁻¹ = |1 − Z∇²h(r) + Z²(∂²h/∂x² ∂²h/∂y² − (∂²h/∂x∂y)²)|⁻¹,
with both expressions evaluated at r → r(r′, Z), and where the result has been normalized to I_geom = 1 for the convex mirror without surface relief (i.e. h(r) = 0). So far, this is standard geometrical optics. In general, more than one ray can reach r′ (that is, the ray equation can have several solutions r), and the boundaries of regions reached by different numbers of rays are caustics. In magic mirrors, however, we are concerned with a limiting regime satisfying
Z/R_min ≪ 1,   (8)
where R_min is the smallest radius of curvature of the surface irregularities. Then there is only one ray, the ray equation simplifies to r ≈ r′ (9), and the intensity simplifies to
I_Laplacian(r′, Z) = 1 + Z∇²h(r′).
This is the Laplacian image. Changing Z affects only the contrast of the image and not its form, which explains why the sharpness of the image is independent of screen position, provided (8) holds. The intensity is a linear function of the surface irregularities h, which
is not the case in general geometrical optics (i.e. when (8) is violated), where, as has been emphasized, the relation is nonlinear. And, as already noted, for a distant source and screen Z approaches the value R₀, implying that (8) holds for any distance of the screen if R₀ ≪ R_min, that is, provided the irregularities are sufficiently gentle or the mirror is sufficiently convex. Alternatively stated, the convexity of the mirror can compensate any concavity of the irregularity h, in which case there are no caustics for any screen position.
The theory based on the Laplacian image accords well with observation, at least for the mirror studied here. The key insight is that the image of a step is neither a dark line nor a bright line, as sometimes reported, but is bright on one side and dark on the other. It is possible that there are different types of magic mirror, where for example the relief is etched directly onto the reflecting surface and protected by a transparent film, but these do not seem to be common. Sometimes the pattern reflected onto a screen is different from that on the back, but this is probably a trick, achieved by attaching a second layer of bronze, differently embossed, to the back of the mirror.
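As a rough numerical illustration of the Laplacian-image prediction for a step (a minimal Python sketch; the smoothed tanh profile and every parameter value below are assumptions chosen for demonstration, not measurements from the mirror discussed here):

```python
import numpy as np

# Laplacian image I = 1 + Z * laplacian(h) of a smoothed step relief.
h0 = 400e-9        # step height (m), same order as the relief discussed in the text
w = 0.5e-3         # step smoothing width (m), assumed
Z = 1.0            # reduced distance (m), assumed small compared with R_min
dx = 1e-4          # grid spacing (m)

x = np.arange(-0.02, 0.02, dx)
X, Y = np.meshgrid(x, x, indexing="ij")
h = 0.5 * h0 * (1.0 + np.tanh(X / w))    # a single step running along y

# discrete Laplacian via repeated gradients (one-sided at the edges, so no wrap-around artefacts)
lap = np.gradient(np.gradient(h, dx, axis=0), dx, axis=0) \
    + np.gradient(np.gradient(h, dx, axis=1), dx, axis=1)

I = 1.0 + Z * lap
# The Laplacian of a step changes sign across its centre, so the image shows a
# bright line beside a dark line rather than a single bright or dark line.
print("intensity range:", I.min(), I.max())
```

The sign change of ∇²h across the step centre is what produces the paired bright and dark lines described above.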
Pre-focal ray concentrations leading to Laplacian images are familiar in other contexts, though they are not always recognized as such. An example based on refraction occurs in old windows, where a combination of age and poor manufacture has distorted the glass. The distortion is not evident in views seen through the window when standing close to it. However, when woken by the low morning sun shining through a gap in the curtains onto an opposite wall, one often sees the distortions magnified as a pattern of irregular bright and dark lines. If the equivalent of (8) is satisfied, that is if the distortions and propagation distance are not too large, the intensity is the Laplacian image of the window surface. (When the condition is not satisfied, the distortions can generate caustics.)
Only the optics of the mirror has been studied here. The manner in which the pattern embossed on the back gets reproduced on the front has not been considered. This involves the sign of the coefficient a in the relation between h_back and h. There have been several speculations about the formation of the relief. One is that the relief is generated while the mirror is cooling, by unequal contraction of the thick and thin parts of the pattern; it is not clear what sign of a this leads to. Another is that cooling generates stresses, and that during vigorous grinding and polishing the thin parts yield more than the thick parts, leading to the thick parts being worn down more; this leads to a > 0: bright (dark) lines on the image, indicating low (high) sides of the steps on the reflecting face, are associated with the low (high) sides of the steps on the back, not the reverse (figure 7(b)). This suggests two avenues for further research. First, the sign of a should be determined by direct measurement of the profile of the reflecting surface; I predict a > 0. Second, whatever the result, the mechanism should be investigated by which the process of manufacture reproduces onto the reflecting surface the
pattern on the back.
The fact that h₀ = 378 nm is smaller than the wavelengths of visible light does not imply that the Laplacian image is the small-κ limit of (A.3), namely the perturbation limit corresponding to infinitely weak relief. Indeed it is not: the perturbation limit, obtained by expanding the exponential in (A.3) and evaluating the integral over τ, with a renormalized denominator to incorporate the known limit I = 1 for ξ = ±∞, is
ψ_pert(ξ, ζ, κ) = 1 − iκ erf(ξ/√(1 + iζ/κ)) / √(1 + κ²).
For the gentlest steps, this predicts low-contrast oscillatory images, very different from the Laplacian images of geometrical optics; this is illustrated in figure 8(b), calculated for κ = 0.05, corresponding to h₀ = 5.2 nm.
Berry, M. V. (2006). Oriental magic mirrors and the Laplacian image. European Journal of Physics, 27, 109. Retrieved from www.phy.bris.ac.uk/people/Berry_mv/the_papers/berry383.pdf
Spatial Filters - Laplacian/Laplacian of Gaussian. (n.d.). Retrieved April 19, 2012, from homepages.inf.ed.ac.uk/rbf/HIPR2/log.htm
Thompson, S. P. (1897). Light Visible and Invisible: A Series of Lectures Delivered at the Royal Institution of Great Britain. Macmillan. Retrieved from www.archive.org/stream/lightvisibleinvi00thomuoft#page/50...
In the industrial and materialist period that began in the 19th century, Catholicism, even though it had to give ground on its official positions, underwent a glorious revival. In the years 1830-1880, an attempt was made to revive an authentic religious art, in the image of a restored faith, through the example of medieval art. The Gothic cathedral in its 13th-century purity and Fra Angelico, the painter who paints on his knees, would be the models endlessly consulted and translated through the teaching of Ingres.
The Fish Head Nebula is also known as IC 1795. It is a small part of the [much larger Heart Nebula](i.imgur.com/Hf11FR9.png), which makes it look like a small peen with huge balls. This image I posted is just a 4k crop of what I find to be the most interesting part of the frame, but [here is a link to the original full sized image](live.staticflickr.com/65535/51710880437_fb8e66e09c_o.png). Also made a [starless version](i.imgur.com/yOQwhaI.jpg) to better show off the fainter structures in the image. Captured over 4 nights from November 16-23rd, 2021 from a Bortle 6 zone.
---
**[Equipment:](i.imgur.com/6T8QNsv.jpg)**
* TPO 6" F/4 Imaging Newtonian
* Orion Sirius EQ-G
* ZWO ASI1600MM-Pro
* Skywatcher Quattro Coma Corrector
* ZWO EFW 8x1.25"/31mm
* Astronomik LRGB+CLS Filters- 31mm
* Astrodon 31mm Ha 5nm, Oiii 3nm, Sii 5nm
* Agena 50mm Deluxe Straight-Through Guide Scope
* ZWO ASI-290mc for guiding
* Moonlite Autofocuser
**Acquisition:** 17 hours 12 minutes (Camera at Unity Gain, -15°C)
* Ha- 62x360"
* Oiii- 55x360"
* Sii- 55x360"
* Darks- 30
* Flats- 30 per filter
**Capture Software:**
* Captured using [N.I.N.A.](nighttime-imaging.eu) and PHD2 for guiding and dithering.
**[PixInsight Processing:](www.youtube.com/watch?v=u7FuApFSGuA)**
* BatchPreProcessing
* SubframeSelector
* StarAlignment
* [Blink](youtu.be/sJeuWZNWImE?t=40)
* ImageIntegration
* DrizzleIntegration
**Linear:**
* DynamicCrop
* AutomaticBackgroundExtraction
* DynamicBackgroundExtraction
* EZ Decon (Ha only)
* STF applied via HistogramTransformation to bring each channel nonlinear
**Combining Channels:**
* PixelMath to make classic SHO to RGB image
* SCNR to partially remove excess greens and magentas
* Pixelmath to make RGB image using [ForaxX's palette](thecoldestnights.com/2020/06/pixinsight-dynamic-narrowban...)
>R= (Oiii\^~Oiii)\*Sii + ~(Oiii\^~Oiii)\*Ha
>G= ((Oiii\*Ha)\^~(Oiii\*Ha))\*Ha + ~((Oiii\*Ha)\^~(Oiii\*Ha))\*Oiii
>B= Oiii
* Pixelmath to blend classic SHO and ForaxX SHO images 50:50
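For anyone who wants to prototype the palette math outside PixInsight, here is a rough numpy sketch of the expressions above (my own illustration, not PixInsight code; it reads PixelMath `^` as exponentiation and `~x` as 1 − x, and assumes the Ha, Oiii and Sii channels are already stretched and scaled to [0, 1]):

```python
import numpy as np

def foraxx_sho(ha, oiii, sii):
    """Approximate numpy rendering of the ForaxX PixelMath listed above."""
    w_r = oiii ** (1.0 - oiii)        # (Oiii^~Oiii): dynamic blend weight
    r = w_r * sii + (1.0 - w_r) * ha
    oh = oiii * ha
    w_g = oh ** (1.0 - oh)            # ((Oiii*Ha)^~(Oiii*Ha))
    g = w_g * ha + (1.0 - w_g) * oiii
    return np.stack([r, g, oiii], axis=-1)

def classic_sho(ha, oiii, sii):
    """Classic SHO mapping: R=Sii, G=Ha, B=Oiii."""
    return np.stack([sii, ha, oiii], axis=-1)

# 50:50 blend of the two palettes, as in the workflow above (toy data stands in
# for the real stacked channels).
ha, oiii, sii = (np.random.rand(64, 64) for _ in range(3))
rgb = 0.5 * classic_sho(ha, oiii, sii) + 0.5 * foraxx_sho(ha, oiii, sii)
print(rgb.shape)
```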
**Nonlinear:**
* LRGBCombination with Ha as luminance
* Shitloads of CurveTransformations to adjust lightness, contrast, saturation, hues, etc.
* MLT noise reduction
* LocalHistogramEqualization
> two rounds of this, one at size 16 kernel for the finer 'feathery' details, and one at 100 for larger structures
* Invert>SCNR>Invert with star mask to remove magentas from stars
* DarkStructureEnhancement
* More curves
* Extract L channel > LRGBC again for chrominance noise reduction
* EZ Star reduction
* NoiseGenerator to add noise back into reduced stars
* Clone stamp to remove 2 ringed stars I forgot about during deconvolution
* Resample to 70%
* Crop to 3840x2160 resolution
* Annotation
This is also known as the "eye of god nebula" or NGC 7293, and at 650 light years distance it is the closest planetary nebula to Earth. Wanted to capture the faint outer shells of the nebula, so I ended up getting over 33 hours of exposure time on it. Took a bit of effort in processing to balance the bright core with the outer shells, but overall I'm pleased with the final result. [Also made a starless version for the hell of it](i.imgur.com/uo4TJDk.jpg)
Thanks to trees and the fact this nebula only gets 33 degrees up here, I could only photograph it for a maximum of 3 hours per night. Also because it's low in the southern sky I had to [turn off the streetlamp at the end of my driveway](gfycat.com/madeverlastingindianpangolin) with a laser pointer while shooting it. Captured over 14 nights from September 2nd to October 20th, 2021 from a Bortle 6 zone.
---
**[Equipment:](i.imgur.com/6T8QNsv.jpg)**
* TPO 6" F/4 Imaging Newtonian
* Orion Sirius EQ-G
* ZWO ASI1600MM-Pro
* Skywatcher Quattro Coma Corrector
* ZWO EFW 8x1.25"/31mm
* Astronomik LRGB+CLS Filters- 31mm
* Astrodon 31mm Ha 5nm, Oiii 3nm, Sii 5nm
* Agena 50mm Deluxe Straight-Through Guide Scope
* ZWO ASI-290mc for guiding
* Moonlite Autofocuser
**Acquisition:** 33 hours 36 minutes (Camera at Unity Gain, -15°C)
* Ha- 175x360"
* Oiii- 161x360"
* Darks- 30
* Flats- 30 per filter
**Capture Software:**
* Captured using [N.I.N.A.](nighttime-imaging.eu) and PHD2 for guiding and dithering.
**[PixInsight Processing:](www.youtube.com/watch?v=u7FuApFSGuA)**
* BatchPreProcessing
* StarAlignment
* [Blink](youtu.be/sJeuWZNWImE?t=40)
* ImageIntegration
* DrizzleIntegration (2x, Var β=1.5)
**Linear:**
* DynamicCrop
* DynamicBackgroundExtraction 2x
* EZ Decon + Denoise per channel
* EZ soft stretch per channel to bring nonlinear
**Combining Channels:**
* Pixelmath to make RGB image using [ForaxX's HOO palette](thecoldestnights.com/2020/06/pixinsight-dynamic-narrowban...):
>R= Ha
>G= ((Oiii\*Ha)\^~(Oiii\*Ha))\*Ha + ~((Oiii\*Ha)\^~(Oiii\*Ha))\*Oiii
>B= Oiii
**Creating Synthetic Luminance:**
* linear Ha and Oiii channels stretched slightly with ArcsinhStretch
> This was used to preserve the details in the core of the nebula, which were a little blown out from EZ soft stretch
* Pixelmath to combine the core stretch image with the soft stretch image 70:30 to make a Synthetic Luminance
> mask used just to add this to the brightest parts of the soft stretch image
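In case it helps to see the blend written out, here is a small numpy sketch of how I read those synthetic-luminance steps (the smooth brightness-threshold mask and the 70% core / 30% soft weighting order are my interpretation of the notes above, not the exact PixelMath used):

```python
import numpy as np

def synthetic_luminance(soft, core, threshold=0.7):
    """Masked 70:30 blend of a core-preserving stretch with a soft stretch."""
    # mask ramps up only in the brightest parts of the soft-stretched image
    mask = np.clip((soft - threshold) / (1.0 - threshold), 0.0, 1.0)
    blend = 0.7 * core + 0.3 * soft            # the 70:30 combination
    return mask * blend + (1.0 - mask) * soft  # applied only where the mask is bright

# toy data standing in for the EZ soft stretch and ArcsinhStretch images
soft = np.random.rand(32, 32)
core = np.random.rand(32, 32)
print(synthetic_luminance(soft, core).shape)
```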
**Nonlinear:**
* LRGBCombination with Synthetic Luminance
* Shitloads of [Curve](i.imgur.com/zN7Wsvb.jpg)Transformations to adjust lightness, contrast, saturation, hues, etc.
* EZ HDR
* HistogramTransformation to further stretch the image
* MLT noise reduction
* CloneStamp to remove a severely ringed star that I forgot about during Deconvolution
* more curves
* ColorSaturation to boost the blue in the core
* SCNR Green
* AutomaticBackgroundExtraction
* even more curves
* EZ Star Reduction
* noise generator to add noise back into reduced stars
* another round of MLT noise reduction
* Extract Luminance --> LRGBCombination for background chrominance noise reduction
* MMT noise reduction to reduce background splotchiness
* LocalHistogramEqualization (nebula core only)
* More curves
* Resample to 80%
* Annotation
This nebula is also called the 'popped balloon nebula' and 'the garlic nebula'
This is certainly one of my longer projects, coming in at just under 50 hours of exposure time total. Due to how faint this target is I couldn't shoot it when the moon was up at all, so it took me over 3 months to get enough exposure time in on it. It was also pretty difficult to process, particularly when stretching: I had to bring out the faint details in the nebula without overstretching the noise, while keeping the stars reasonably balanced (iirc I tried about 5 different stretching workflows on it before settling on the one described below). Captured over 19 nights from September through December, 2021 from a Bortle 6 zone.
---
**[Equipment:](i.imgur.com/6T8QNsv.jpg)**
* TPO 6" F/4 Imaging Newtonian
* Orion Sirius EQ-G
* ZWO ASI1600MM-Pro
* Skywatcher Quattro Coma Corrector
* ZWO EFW 8x1.25"/31mm
* Astronomik LRGB+CLS Filters- 31mm
* Astrodon 31mm Ha 5nm, Oiii 3nm, Sii 5nm
* Agena 50mm Deluxe Straight-Through Guide Scope
* ZWO ASI-290mc for guiding
* Moonlite Autofocuser
**Acquisition:** 49 hours 54 minutes (Camera at Unity Gain, -15°C)
* Ha- 184x600"
* Oiii- 89x600" + 44x360"
> Accidentally had the wrong Oiii sequence template for a few nights, but it didn't seem to harm the stack by including the 360" data
* Darks- 30
* Flats- 30 per filter
**Capture Software:**
* Captured using [N.I.N.A.](nighttime-imaging.eu) and PHD2 for guiding and dithering.
**PixInsight Processing:**
* BatchPreProcessing
* SubframeSelector
* StarAlignment
* [Blink](youtu.be/sJeuWZNWImE?t=40)
* ImageIntegration
* DrizzleIntegration (2x, Var β=1.5)
**Linear:**
* DynamicCrop
* AutomaticBackgroundExtraction
* DynamicBackgroundExtraction
* EZ Denoise per channel
**Stretching to nonlinear:** (per channel)
> Probably the hardest part of processing this image. I tried several other workflows before settling on this one, which I used for my photo of [sh2-216 earlier this year](www.reddit.com/r/astrophotography/comments/l57hmg/sh2216_...).
* MaskedStretch to 0.1 background
* Starnet++ starmask made, subtracted from 0.3 Gray image and convolved
* Previous image used as a mask to stretch nebulosity without stretching stars
* Previous two steps were repeated 2X with incremental HistogramTransformations
* One final unmasked HistogramTransformation
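For anyone curious how the star-protected stretching above works conceptually, here is a toy numpy sketch (not the PixInsight internals; the midtones function is the standard MTF formula, the mask construction follows my reading of the steps above, and all parameter values are made up):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def midtones_transfer(x, m):
    """PixInsight-style midtones transfer function (the core of a histogram stretch)."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def masked_stretch_step(img, star_mask, m=0.25, sigma=2.0):
    """One star-protected stretch iteration, loosely following the recipe above."""
    # protection mask: 0.3 gray minus the star map, clipped, blurred and normalised,
    # so stars receive little or no additional stretch
    protect = gaussian_filter(np.clip(0.3 - star_mask, 0.0, 1.0), sigma)
    protect = protect / protect.max()
    stretched = midtones_transfer(img, m)
    return protect * stretched + (1.0 - protect) * img

# toy example: two incremental masked stretches, then one unmasked transformation
img = np.random.rand(64, 64) * 0.2
stars = (np.random.rand(64, 64) > 0.99).astype(float)   # stand-in for a StarNet++ star mask
for _ in range(2):
    img = masked_stretch_step(img, stars)
img = midtones_transfer(img, 0.25)
print(img.min(), img.max())
```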
**Combining Channels:**
* Monochrome Ha and Oiii images combined into a color image using [ForaxX's HOO palette](thecoldestnights.com/2020/06/pixinsight-dynamic-narrowban...):
>R= Ha
>G= ((Oiii\*Ha)\^~(Oiii\*Ha))\*Ha + ~((Oiii\*Ha)\^~(Oiii\*Ha))\*Oiii
>B= Oiii
**Nonlinear:**
* Create Synthetic luminance with Pixelmath:
> Max(Ha, Oiii)
* LRGBCombination using synthetic luminance for chrominance noise reduction
* Shitloads of CurveTransformations to adjust lightness, saturation, contrast, hues, etc.
* MLT noise reduction
* LocalHistogramEqualization
* EZ Star Reduction
* even more curves
> probably a few other processes should go here that I'm forgetting about. I completely redid the nonlinear processing 3 times and didn't bother to save the image histories, so I'm going off memory for this.
* IntegerResample to 50%
* Annotation
Description: This image of the North American Nebula NGC 7000 was developed from 60x300s subs or 5.0 hours of total exposure time. In order to isolate and develop the O(III) signal, the nonlinear post processed image was first split into its RGB components, followed by the application of appropriate weighting factors to the green and red channels, further followed by LRGB Combination. The resulting image was post processed using Curves Transformation with various color masks.
Date / Location: 20, 25, 26 June 2022 / Washington D.C.
Equipment:
Scope: WO Zenith Star 81mm f/6.9 with WO 6AIII Flattener/Focal Reducer x0.8
OSC Camera: ZWO ASI 2600 MC Pro at 100 Gain and 50 Offset
Mount: iOptron GEM28-EC
Guide Scope: ZWO ASI 30mm f/4
Guide Camera: ZWO ASI 120mm mini
Light Pollution Filter: Optolong L-eXtreme Dual Bandpass
Processing Software: Pixinsight
Processing Steps:
Preprocessing:
I preprocessed 60x300s subs (= 5.0 hours) in Pixinsight to get an integrated image using the following process steps: Image Calibration > Cosmetic Correction > Subframe Selector > Debayer > Select Reference Star and do a Star Align > Image Integration.
Linear Postprocessing:
Dynamic Background Extractor (doing subtraction to remove light pollution gradients and division for flat field correction) > Background Neutralization > Color Calibration > Blur Xterminator > Noise Xterminator.
Nonlinear Postprocessing and additional steps:
Histogram Transformation > Star Xterminator to create Starless and Stars Only Images.
Starless Image > Noise Xterminator > Local Histogram Equalization > Multiscale Median Transform > Curves Transformation to boost O(III) signal > Split RGB channels > Create new green and blue channels > LRGB Combination > Curves Transformation using various color masks.
Stars Only Image > Morphological transformation.
Pixel Math to combine the Starless Image with the Stars Only Image to get a Reinstated Image.
Reinstated Image > Dark Structure Enhancement > Topaz AI.
Pixel Math to combine the (non-AI) Reinstated Image with the Topaz AI Image to get a final image.
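The "Pixel Math to combine the Starless Image with the Stars Only Image" step is often done with a screen blend; since the post does not give the exact expression, here is one plausible reading as a small numpy sketch (simple addition, starless + stars, is the other common choice):

```python
import numpy as np

def screen_blend(starless, stars):
    """Screen-blend recombination: 1 - (1 - starless) * (1 - stars),
    the numpy equivalent of the PixelMath idiom ~((~starless)*(~stars))."""
    return 1.0 - (1.0 - starless) * (1.0 - stars)

starless = np.clip(np.random.rand(32, 32, 3), 0, 1)      # toy stand-ins
stars = np.clip(np.random.rand(32, 32, 3) * 0.3, 0, 1)
reinstated = screen_blend(starless, stars)
print(reinstated.max() <= 1.0)   # unlike addition, a screen blend cannot exceed 1
```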
The Butterfly Nebula is the name of [this nebula right next to the star Sadr in Cygnus](i.imgur.com/TiZ9kJ8.png) and I opted to go for a fiery look on this one compared to true color red or a rainbow SHO image. This is my first time processing on an IPS monitor so I'm not entirely used to the colors on it compared to my old VA one. Also made a [starless version using StarNet++](i.imgur.com/Eu3bgKt.jpg). Captured on October 6th and 7th, 2020 from a Bortle 6 zone.
---
**[Equipment:](i.imgur.com/6T8QNsv.jpg)**
* TPO 6" F/4 Imaging Newtonian
* Orion Sirius EQ-G
* ZWO ASI1600MM-Pro
* Skywatcher Quattro Coma Corrector
* ZWO EFW 8x1.25"/31mm
* Astronomik LRGB+CLS Filters- 31mm
* Astrodon 31mm Ha 5nm, Oiii 3nm, Sii 5nm
* Agena 50mm Deluxe Straight-Through Guide Scope
* ZWO ASI-120MC for guiding
* Moonlite Autofocuser
**Acquisition:** 9 hours 48 minutes (Camera at Unity Gain, -15°C)
* Ha- 46x360"
* Oiii- 26x360"
* Sii- 26x360"
* Darks- 30
* Flats- 30 per filter
**Capture Software:**
* Captured using [N.I.N.A.](nighttime-imaging.eu/) and PHD2 for guiding and dithering.
**PixInsight Processing:**
* BatchPreProcessing
* SubframeSelector
* StarAlignment
* [Blink](youtu.be/sJeuWZNWImE?t=40)
* ImageIntegration
* DrizzleIntegration (2x, Var β=1.5)
**Linear:**
* DynamicCrop
* AutomaticBackgroundExtraction
* DynamicBackgroundExtraction
* [EZ Decon and Denoise](darkarchon.internet-box.ch:8443/) (Ha only)
* STF applied via HistogramTransformation to bring each channel nonlinear
**Combining Channels:**
* PixelMath to make classic SHO to RGB image
* Greens and magentas nuked using SCNR
* Pixelmath to make RGB image using [ForaxX's palette](thecoldestnights.com/2020/06/pixinsight-dynamic-narrowban...)
> R= (Oiii\^~Oiii)\*Sii + ~(Oiii\^~Oiii)\*Ha
> G= ((Oiii\*Ha)\^~(Oiii\*Ha))\*Ha + ~((Oiii\*Ha)\^~(Oiii\*Ha))\*Oiii
> B= Oiii
* Pixelmath to blend classic and ForaxX SHO images 50:50
**Nonlinear:**
* LRGBCombination with Ha as luminance
* Invert>SCNR>Invert with star mask to remove magenta stars.
* Shitloads of [Curve](i.imgur.com/vCjArSv.jpg)Transformations to adjust lightness, saturation, contrast, hues, etc
* LocalHistogramEqualization
* ACDNR
* DarkStructureEnhance
* More curves
* SCNR to remove greens
* EZ Star Reduction
* DarkStructureEnhance
* Resample to 63%
* Annotation
The Eiffel Tower (French: La Tour Eiffel, nickname La dame de fer, the iron lady) is an iron lattice tower located on the Champ de Mars in Paris, named after the engineer Gustave Eiffel, whose company designed and built the tower. Erected in 1889 as the entrance arch to the 1889 World's Fair, it has become both a global cultural icon of France and one of the most recognizable structures in the world. The tower is the tallest structure in Paris and the most-visited paid monument in the world; 7.1 million people ascended it in 2011. The third level observatory's upper platform, at 279.11 m, is the highest accessible to the public in the European Union and the highest in Europe as long as the platform of the Ostankino Tower, at 360 m, remains closed as a result of the fire of August 2000. The tower received its 250 millionth visitor in 2010.
The tower stands 320 metres (1,050 ft) tall, about the same height as an 81-storey building. During its construction, the Eiffel Tower surpassed the Washington Monument to assume the title of the tallest man-made structure in the world, a title it held for 41 years, until the Chrysler Building in New York City was built in 1930. However, because of the addition, in 1957, of the antenna atop the Eiffel Tower, it is now taller than the Chrysler Building. Not including broadcast antennas, it is the second-tallest structure in France, after the Millau Viaduct.
The tower has three levels for visitors. Tickets can be purchased to ascend, by stairs or lift (elevator), to the first and second levels. The walk from ground level to the first level is over 300 steps, as is the walk from the first to the second level. The third and highest level is accessible only by lift - stairs exist but they are not usually open for public use. Both the first and second levels feature restaurants.
The tower has become the most prominent symbol of both Paris and France, often in the establishing shot of films set in the city.
History
Origin
First drawing of the Eiffel Tower by Maurice Koechlin
The design of the Eiffel Tower was originated by Maurice Koechlin and Émile Nouguier, two senior engineers who worked for the Compagnie des Établissements Eiffel after discussion about a suitable centrepiece for the proposed 1889 Exposition Universelle, a World's Fair which would celebrate the centennial of the French Revolution. In May 1884 Koechlin, working at his home, made an outline drawing of their scheme, described by him as "a great pylon, consisting of four lattice girders standing apart at the base and coming together at the top, joined together by metal trusses at regular intervals". Initially Eiffel himself showed little enthusiasm, but he did sanction further study of the project, and the two engineers then asked Stephen Sauvestre, the head of the company's architectural department, to contribute to the design. Sauvestre added decorative arches to the base, a glass pavilion to the first level and other embellishments. This enhanced version gained Eiffel's support, and he bought the rights to the patent on the design which Koechlin, Nouguier and Sauvestre had taken out, and the design was exhibited at the Exhibition of Decorative Arts in the autumn of 1884 under the company name. On 30 March 1885 Eiffel read a paper on the project to the Société des Ingénieurs Civils: after discussing the technical problems and emphasising the practical uses of the tower, he finished his talk by saying that the tower would symbolise "not only the art of the modern engineer, but also the century of Industry and Science in which we are living, and for which the way was prepared by the great scientific movement of the eighteenth century and by the Revolution of 1789, to which this monument will be built as an expression of France's gratitude."
Little happened until the beginning of 1886, when Jules Grévy was re-elected as President and Édouard Lockroy was appointed as Minister for Trade. A budget for the Exposition was passed and on 1 May Lockroy announced an alteration to the terms of the open competition which was being held for a centerpiece for the exposition, which effectively made the choice of Eiffel's design a foregone conclusion: all entries had to include a study for a 300 m (980 ft) four-sided metal tower on the Champ de Mars. On 12 May a commission was set up to examine Eiffel's scheme and its rivals and on 12 June it presented its decision, which was that all the proposals except Eiffel's were either impractical or insufficiently worked out. After some debate about the exact site for the tower, a contract was finally signed on 8 January 1887. This was signed by Eiffel acting in his own capacity rather than as the representative of his company, and granted him one and a half million francs toward the construction costs: less than a quarter of the estimated cost of six and a half million francs. Eiffel was to receive all income from the commercial exploitation of the tower during the exhibition and for the following twenty years. Eiffel later established a separate company to manage the tower, putting up half the necessary capital himself.
The "Artists Protest"
Caricature of Gustave Eiffel comparing the Eiffel tower to the Pyramids.
The projected tower had been a subject of some controversy, attracting criticism both from those who did not believe that it was feasible and also from those who objected on artistic grounds. Their objections were an expression of a longstanding debate about the relationship between architecture and engineering. This came to a head as work began at the Champ de Mars: a "Committee of Three Hundred" (one member for each metre of the tower's height) was formed, led by the prominent architect Charles Garnier and including some of the most important figures of the French arts establishment, including Adolphe Bouguereau, Guy de Maupassant, Charles Gounod and Jules Massenet; a petition was sent to Charles Alphand, the Minister of Works and Commissioner for the Exposition, and was published by Le Temps.
"We, writers, painters, sculptors, architects and passionate devotees of the hitherto untouched beauty of Paris, protest with all our strength, with all our indignation in the name of slighted French taste, against the erection…of this useless and monstrous Eiffel Tower … To bring our arguments home, imagine for a moment a giddy, ridiculous tower dominating Paris like a gigantic black smokestack, crushing under its barbaric bulk Notre Dame, the Tour Saint-Jacques, the Louvre, the Dome of les Invalides, the Arc de Triomphe, all of our humiliated monuments will disappear in this ghastly dream. And for twenty years … we shall see stretching like a blot of ink the hateful shadow of the hateful column of bolted sheet metal"
Gustave Eiffel responded to these criticisms by comparing his tower to the Egyptian Pyramids: "My tower will be the tallest edifice ever erected by man. Will it not also be grandiose in its way? And why would something admirable in Egypt become hideous and ridiculous in Paris?" These criticisms were also masterfully dealt with by Édouard Lockroy in a letter of support written to Alphand, ironically saying "Judging by the stately swell of the rhythms, the beauty of the metaphors, the elegance of its delicate and precise style, one can tell that … this protest is the result of collaboration of the most famous writers and poets of our time", and going on to point out that the protest was irrelevant since the project had been decided upon months before and was already under construction. Indeed, Garnier had been a member of the Tower Commission that had assessed the various proposals, and had raised no objection. Eiffel was similarly unworried, pointing out to a journalist that it was premature to judge the effect of the tower solely on the basis of the drawings, that the Champ de Mars was distant enough from the monuments mentioned in the protest for there to be little risk of the tower overwhelming them, and putting the aesthetic argument for the Tower: "Do not the laws of natural forces always conform to the secret laws of harmony?"
Some of the protestors were to change their minds when the tower was built: others remained unconvinced. Guy de Maupassant[20] supposedly ate lunch in the Tower's restaurant every day. When asked why, he answered that it was the one place in Paris where one could not see the structure. Today, the Tower is widely considered to be a striking piece of structural art.
Construction
Foundations of the Eiffel Tower
Eiffel Tower under construction between 1887 and 1889
Work on the foundations started in January 1887. Those for the east and south legs were straightforward, each leg resting on four 2 m (6.6 ft) concrete slabs, one for each of the principal girders of each leg. The other two, being closer to the river Seine, were more complicated: each slab needed two piles installed by using compressed-air caissons 15 m (49 ft) long and 6 m (20 ft) in diameter driven to a depth of 22 m (72 ft)[21] to support the concrete slabs, which were 6 m (20 ft) thick. Each of these slabs supported a limestone block with an inclined top to bear a supporting shoe for the ironwork. Each shoe was anchored into the stonework by a pair of bolts 10 cm (4 in) in diameter and 7.5 m (25 ft) long. The foundations were complete by 30 June and the erection of the ironwork began. The very visible work on-site was complemented by the enormous amount of exacting preparatory work that was entailed: the drawing office produced 1,700 general drawings and 3,629 detailed drawings of the 18,038 different parts needed.
The task of drawing the components was complicated by the complex angles involved in the design and the degree of precision required: the position of rivet holes was specified to within 0.1 mm (0.004 in) and angles worked out to one second of arc. The finished components, some already riveted together into sub-assemblies, arrived on horse-drawn carts from the factory in the nearby Parisian suburb of Levallois-Perret and were first bolted together, the bolts being replaced by rivets as construction progressed. No drilling or shaping was done on site: if any part did not fit it was sent back to the factory for alteration. In all there were 18,038 pieces of puddle iron joined by two and a half million rivets.
At first the legs were constructed as cantilevers but about halfway to the first level construction was paused in order to construct a substantial timber scaffold. This caused a renewal of the concerns about the structural soundness of the project, and sensational headlines such as "Eiffel Suicide!" and "Gustave Eiffel has gone mad: he has been confined in an Asylum" appeared in the popular press. At this stage a small "creeper" crane was installed in each leg, designed to move up the tower as construction progressed and making use of the guides for the lifts which were to be fitted in each leg. The critical stage of joining the four legs at the first level was complete by March 1888. Although the metalwork had been prepared with the utmost precision, provision had been made to carry out small adjustments in order to precisely align the legs: hydraulic jacks were fitted to the shoes at the base of each leg, each capable of exerting a force of 800 tonnes, and in addition the legs had been intentionally constructed at a slightly steeper angle than necessary, being supported by sandboxes on the scaffold.
No more than three hundred workers were employed on site, and because Eiffel took safety precautions, including the use of movable stagings, guard-rails and screens, only one man died during construction.
Inauguration and the 1889 Exposition
The 1889 Exposition Universelle for which the Eiffel Tower was built
The main structural work was completed at the end of March 1889 and on the 31st Eiffel celebrated this by leading a group of government officials, accompanied by representatives of the press, to the top of the tower. Since the lifts were not yet in operation, the ascent was made by foot, and took over an hour, Eiffel frequently stopping to make explanations of various features. Most of the party chose to stop at the lower levels, but a few, including Nouguier, Compagnon, the President of the City Council and reporters from Le Figaro and Le Monde Illustré completed the climb. At 2.35 Eiffel hoisted a large tricolore, to the accompaniment of a 25-gun salute fired from the lower level. There was still work to be done, particularly on the lifts and the fitting out of the facilities for visitors, and the tower was not opened to the public until nine days after the opening of the Exposition on 6 May, and even then the lifts had not been completed.
The tower was an immediate success with the public, and lengthy queues formed to make the ascent. Tickets cost 2 francs for the first level, 3 for the second and 5 for the top, with half-price admission on Sundays, and by the end of the exhibition there had been nearly two million visitors.
Eiffel had a permit for the tower to stand for 20 years; it was to be dismantled in 1909, when its ownership would revert to the City of Paris. The City had planned to tear it down (part of the original contest rules for designing a tower was that it could be easily demolished) but as the tower proved valuable for communication purposes, it was allowed to remain after the expiry of the permit. In the opening weeks of the First World War the powerful radio transmitters using the tower were used to jam German communications, seriously hindering their advance on Paris and contributing to the Allied victory at the First Battle of the Marne.
Subsequent events
10 September 1889 Thomas Edison visited the tower. He signed the guestbook with the following message— To M Eiffel the Engineer the brave builder of so gigantic and original specimen of modern Engineering from one who has the greatest respect and admiration for all Engineers including the Great Engineer the Bon Dieu, Thomas Edison.
19 October 1901 Alberto Santos-Dumont in his Dirigible No.6 won a 10,000-franc prize offered by Henri Deutsch de la Meurthe for the first person to make a flight from St Cloud to the Eiffel tower and back in less than half an hour.
1910 Father Theodor Wulf measured radiant energy at the top and bottom of the tower. He found more at the top than expected, incidentally discovering what are today known as cosmic rays.[28]
4 February 1912 Austrian tailor Franz Reichelt died after jumping 60 metres from the first deck of Eiffel tower with his home-made parachute.
1914 A radio transmitter located in the tower jammed German radio communications during the lead-up to the First Battle of the Marne.
1925 The con artist Victor Lustig "sold" the tower for scrap metal on two separate, but related occasions.
1930 The tower lost the title of the world's tallest structure when the Chrysler Building was completed in New York City.
1925 to 1934 Illuminated signs for Citroën adorned three of the tower's four sides, making it the tallest advertising space in the world at the time.
1940–1944 Upon the German occupation of Paris in 1940, the lift cables were cut by the French so that Adolf Hitler would have to climb the steps to the summit. The parts to repair them were allegedly impossible to obtain because of the war. In 1940 German soldiers had to climb to the top to hoist the swastika, but the flag was so large it blew away just a few hours later, and was replaced by a smaller one. When visiting Paris, Hitler chose to stay on the ground. It was said that Hitler conquered France, but did not conquer the Eiffel Tower. A Frenchman scaled the tower during the German occupation to hang the French flag. In August 1944, when the Allies were nearing Paris, Hitler ordered General Dietrich von Choltitz, the military governor of Paris, to demolish the tower along with the rest of the city. Von Choltitz disobeyed the order. Some say Hitler was later persuaded to keep the tower intact so it could later be used for communications. The lifts of the Tower were working normally within hours of the Liberation of Paris.
3 January 1956 A fire damaged the top of the tower.
1957 The present radio antenna was added to the top.
1980s A restaurant and its supporting iron scaffolding midway up the tower was dismantled; it was purchased and reconstructed on St. Charles Avenue and Josephine Street in the Garden District of New Orleans, Louisiana, by entrepreneurs John Onorio and Daniel Bonnot, originally as the Tour Eiffel Restaurant, later as the Red Room and now as the Cricket Club (owned by the New Orleans Culinary Institute). The restaurant was re-assembled from 11,000 pieces that crossed the Atlantic in a 40-foot (12 m) cargo container.
31 March 1984 Robert Moriarty flew a Beechcraft Bonanza through the arches of the tower.
1987 A.J. Hackett made one of his first bungee jumps from the top of the Eiffel Tower, using a special cord he had helped develop. Hackett was arrested by the Paris police upon reaching the ground.
27 October 1991 Thierry Devaux, along with mountain guide Hervé Calvayrac, performed a series of acrobatic bungee jumps (without authorization) from the second floor of the Tower. Facing the Champ de Mars, Devaux used an electric winch between each jump to go back up. When firemen arrived, he stopped after the sixth jump.
New Year's Eve 1999 The Eiffel Tower played host to Paris's Millennium Celebration. On this occasion, flashing lights and four high-power searchlights were installed on the tower, and fireworks were set off all over it. An exhibition above a cafeteria on the first floor commemorates this event. Since then, the light show has become a nightly event. The searchlights on top of the tower make it a beacon in Paris's night sky, and the 20,000 flash bulbs give the tower a sparkly appearance every hour on the hour.
28 November 2002 The tower received its 200,000,000th guest.
2004 The Eiffel Tower began hosting an ice skating rink on the first floor each winter.
Design of the tower
Material
The Eiffel Tower from below
The puddle iron structure of the Eiffel Tower weighs 7,300 tonnes, while the entire structure, including non-metal components, is approximately 10,000 tonnes. As a demonstration of the economy of design, if the 7,300 tonnes of the metal structure were melted down it would fill the 125-metre-square base to a depth of only 6 cm (2.36 in), assuming the density of the metal to be 7.8 tonnes per cubic metre. Depending on the ambient temperature, the top of the tower may shift away from the sun by up to 18 cm (7.1 in) because of thermal expansion of the metal on the side facing the sun.
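The 6 cm figure follows directly from the stated assumptions; a quick back-of-envelope check (my own, not from the source):

```python
# 7,300 tonnes of iron at 7.8 t/m^3 spread over a 125 m x 125 m base
mass_t = 7300.0
density_t_per_m3 = 7.8
base_side_m = 125.0

volume_m3 = mass_t / density_t_per_m3       # about 936 m^3 of metal
depth_m = volume_m3 / (base_side_m ** 2)    # spread over 15,625 m^2
print(round(depth_m * 100, 1), "cm")        # about 6.0 cm, matching the text
```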
Wind considerations
At the time the tower was built many people were shocked by its daring shape. Eiffel was criticised for the design and accused of trying to create something artistic, or inartistic according to the viewer, without regard to engineering. Eiffel and his engineers, however, as experienced bridge builders, understood the importance of wind forces and knew that if they were going to build the tallest structure in the world they had to be certain it would withstand the wind. In an interview reported in the newspaper Le Temps, Eiffel said:
Now to what phenomenon did I give primary concern in designing the Tower? It was wind resistance. Well then! I hold that the curvature of the monument's four outer edges, which is as mathematical calculation dictated it should be […] will give a great impression of strength and beauty, for it will reveal to the eyes of the observer the boldness of the design as a whole.[37]
Researchers have found that Eiffel used empirical and graphical methods accounting for the effects of wind rather than a specific mathematical formula. Careful examination of the tower shows a basically exponential shape; actually two different exponentials, the lower section overdesigned to ensure resistance to wind forces. Several mathematical explanations have been proposed over the years for the success of the design; the most recent is described as a nonlinear integral equation based on counterbalancing the wind pressure on any point on the tower with the tension between the construction elements at that point. As a demonstration of the tower's effectiveness in wind resistance, it sways only 6–7 cm (2–3 in) in the wind.
Accommodation
When built, the first level contained two restaurants: an "Anglo-American Bar", and a 250 seat theatre. A 2.6 m (8 ft 6 in) promenade ran around the outside.
On the second level, the French newspaper Le Figaro had an office and a printing press, where a special souvenir edition, Le Figaro de la Tour, was produced. There was also a pâtisserie.
On the third level were laboratories for various experiments and a small apartment reserved for Gustave Eiffel to entertain guests. This is now open to the public, complete with period decorations and lifelike models of Gustave and some guests.
Engraved names
Gustave Eiffel engraved on the tower seventy-two names of French scientists, engineers and other notable people. This engraving was painted over at the beginning of the twentieth century but restored in 1986–1987 by the Société Nouvelle d'exploitation de la Tour Eiffel, a company contracted to operate business related to the Tower.
Maintenance
Maintenance of the tower includes applying 50 to 60 tonnes of paint every seven years to protect it from rust. The height of the Eiffel Tower varies by 15 cm due to temperature.
Aesthetic considerations
In order to enhance the impression of height, three separate colours of paint are used on the tower, with the darkest on the bottom and the lightest at the top. On occasion the colour of the paint is changed; the tower is currently painted a shade of bronze. On the first floor there are interactive consoles hosting a poll for the colour to use for a future session of painting.
The only non-structural elements are the four decorative grillwork arches, added in Stephen Sauvestre's sketches, which served to reassure visitors that the structure was safe, and to frame views of other nearby architecture.
One of the great Hollywood movie clichés is that the view from a Parisian window always includes the tower. In reality, since zoning restrictions limit the height of most buildings in Paris to 7 storeys, only a very few of the taller buildings have a clear view of the tower.
Popularity
More than 200,000,000 people have visited the tower since its construction in 1889, including 6,719,200 in 2006. The tower is the most-visited paid monument in the world.
Passenger lifts
Ground to the second level
The original lifts (elevators) to the first and second floors were provided by two companies. Both companies had to overcome many technical obstacles as neither company (or indeed any company) had experience with installing lifts climbing to such heights with large loads. The slanting tracks with changing angles further complicated the problems. The East and West lifts were supplied by the French company Roux Combaluzier Lepape, using hydraulically powered chains and rollers. The North and South lifts were provided by the American company Otis using car designs similar to the original installation but using an improved hydraulic and cable scheme. The French lifts had a very poor performance and were replaced with the current installations in 1897 (West Pillar) and 1899 (East Pillar) by Fives-Lille using an improved hydraulic and rope scheme. Both of the original installations operated broadly on the principle of the Fives-Lille lifts.
The Fives-Lille lifts from ground level to the first and second levels are operated by cables and pulleys driven by massive water-powered pistons. The hydraulic scheme was somewhat unusual for the time in that it included three large counterweights of 200 tonnes each sitting on top of hydraulic rams which doubled up as accumulators for the water. As the lifts ascend the inclined arc of the pillars, the angle of ascent changes. The two lift cabs are kept more or less level and indeed are level at the landings. The cab floors do take on a slight angle at times between landings.
The principle behind the lifts is similar to the operation of a block and tackle but in reverse. Two large hydraulic rams (over 1 metre diameter) with a 16 metre travel are mounted horizontally in the base of the pillar; they push a carriage (the French word for it translates as chariot, and this term will be used henceforth to distinguish it from the lift carriage) with 16 large triple sheaves mounted on it. There are 14 similar sheaves mounted statically. Six wire ropes are rove back and forth between the sheaves such that each rope passes between the 2 sets of sheaves 7 times. The ropes then leave the final sheaves on the chariot and pass up through a series of guiding sheaves to above the second floor and then through a pair of triple sheaves back down to the lift carriage again passing guiding sheaves.
This arrangement means that the lift carriage, complete with its cars and passengers, travels 8 times the distance that the rams move the chariot, the 128 metres from the ground to the second floor. The force exerted by the rams also has to be 8 times the total weight of the lift carriage, cars and passengers, plus extra to account for various losses such as friction. The hydraulic fluid was water, normally stored in three accumulators, complete with counterbalance weights. To make the lift ascend, water was pumped using an electrically driven pump from the accumulators to the two rams. Since the counterbalance weights provided much of the pressure required, the pump only had to provide the extra effort. For the descent, it was only necessary to allow the water to flow back to the accumulators using a control valve. The lifts were operated by an operator perched precariously underneath the lift cars. His position (with a dummy operator) can still be seen on the lifts today.
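As a quick arithmetic illustration of the 8:1 reverse block-and-tackle described above (my own back-of-envelope check, using only figures already quoted in the text):

```python
ram_stroke_m = 16.0
ratio = 8                                    # carriage travel per unit of chariot travel

print(ram_stroke_m * ratio, "m of carriage travel")   # 128 m, ground to second floor

# The rams must therefore exert roughly 8 times the suspended weight, plus losses.
# Using the 22-tonne figure quoted in the next paragraph for the modernised
# carriage, cars and a full passenger load:
suspended_weight_t = 22.0
print(suspended_weight_t * ratio, "tonnes-force at the rams, before losses")
```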
The Fives-Lille lifts were completely upgraded in 1986 to meet modern safety requirements and to make the lifts easier to operate. A new computer-controlled system was installed which completely automated the operation. One of the three counterbalances was taken out of use, and the cars were replaced with a more modern and lighter structure. Most importantly, the main driving force was removed from the original water pump such that the water hydraulic system provided only a counterbalancing function. The main driving force was transferred to a 320 kW electrically driven oil hydraulic pump which drives a pair of hydraulic motors on the chariot itself, thus providing the motive power. The new lift cars complete with their carriage and a full 92 passenger load weigh 22 tonnes.
Owing to elasticity in the ropes and the time taken to get the cars level with the landings, each lift in normal service takes an average of 8 minutes and 50 seconds to do the round trip, spending an average of 1 minute and 15 seconds at each floor. The average journey time between floors is just 1 minute.
The original Otis lifts in the North and South pillars in their turn proved to be inferior to the new (in 1899) French lifts and were scrapped from the South pillar in 1900 and from the North pillar in 1913 after failed attempts to repower them with an electric motor. The North and South pillars were to remain without lifts until 1965 when increasing visitor numbers persuaded the operators to install a relatively standard and modern cable hoisted system in the north pillar using a cable-hauled counterbalance weight, but hoisted by a block and tackle system to reduce its travel to one third of the lift travel. The counterbalance is clearly visible within the structure of the North pillar. This latter lift was upgraded in 1995 with new cars and computer controls.
The South pillar acquired a completely new fairly standard electrically driven lift in 1983 to serve the Jules Verne restaurant. This was also supplied by Otis. A further four-ton service lift was added to the South pillar in 1989 by Otis to relieve the main lifts when moving relatively small loads or even just maintenance personnel.
The East and West hydraulic (water) lift works are on display and, at least in theory, are open to the public in a small museum located in base of the East and West tower, which is somewhat hidden from public view. Because the massive mechanism requires frequent lubrication and attention, public access is often restricted. However, when open, the wait times are much less than the other, more popular, attractions. The rope mechanism of the North tower is visible to visitors as they exit from the lift.
Second to the third level
The original spiral stairs to the third floor which were only 80 centimetres wide. Note also the small service lift in the background.
The original lifts from the second to the third floor were also of a water-powered hydraulic design supplied by Léon Edoux. Instead of using a separate counterbalance, the two lift cars counterbalanced each other. A pair of 81-metre-long hydraulic rams were mounted on the second level reaching nearly halfway up to the third level. A lift car was mounted on top of the rams. Ropes ran from the top of this car up to a sheave on the third level and back down to a second car. The result of this arrangement was that each car only travelled half the distance between the second and third levels and passengers were required to change lifts halfway walking between the cars along a narrow gangway with a very impressive and relatively unobstructed downward view. The ten-ton cars held 65 passengers each or up to four tons.
One interesting feature of the original installation was that the hoisting rope ran through guides to retain it on windy days and prevent it flapping and becoming damaged. The guides were mechanically moved out of the way of the ascending car by the movement of the car itself. In spite of some antifreeze being added to the water that operated this system, it nevertheless had to be closed to the public from November to March each year.
The original lifts complete with their hydraulic mechanism were completely scrapped in 1982 after 97 years of service. They were replaced with two pairs of relatively standard rope hoisted cars which were able to operate all the year round. The cars operate in pairs with one providing the counterbalance for the other. Neither car can move unless both sets of doors are closed and both operators have given a start command. The commands from the cars to the hoisting mechanism are sent by radio, obviating the need for a control cable. The replacement installation also has the advantage that the ascent can be made without changing cars and has reduced the ascent time from 8 minutes (including change) to 1 minute and 40 seconds. This installation also has guides for the hoisting ropes but they are electrically operated. Once a guide has moved out of the way of the ascending car, it automatically reverses after the car has passed, so that the mechanism cannot snag the car on the downward journey if it has failed to clear the car completely. Unfortunately these lifts do not have the capacity to move as many people as the three public lower lifts, and long lines to ascend to the third level are common. Most of the intermediate level structure present on the tower today was installed when the lifts were replaced and allows maintenance workers to take the lift halfway.
The replacement of these lifts allowed the restructuring of the criss-cross beams in the upper part of the tower and further allowed the installation of two emergency staircases. These replaced the dangerous winding stairs that were installed when the tower was constructed.
Restaurants
The tower has two restaurants: Le 58 tour Eiffel, on the first floor 311 ft (95 m) above sea level; and Le Jules Verne, a gastronomic restaurant on the second floor, with a private lift. This restaurant has one star in the Michelin Red Guide. In January 2007, the multi-Michelin star chef Alain Ducasse was brought in to run Jules Verne.
Attempted relocation
According to interviews given in the early 1980s, Montreal Mayor Jean Drapeau negotiated a secret agreement with French President Charles de Gaulle for the tower to be dismantled and temporarily relocated to Montreal to serve as a landmark and tourist attraction during Expo 67. The plan was allegedly vetoed by the company which operated the tower out of fear that the French government could refuse permission for the tower to be restored to its original location.
Economics
The American TV show Pricing the Priceless speculates that in 2011 the tower would cost about $480,000,000 to build, that the land under the tower is worth $350,000,000, and that the scrap value of the tower is worth $3,500,000. The TV show estimates the tower makes a profit of about $29,000,000 per year, though it is unlikely that the Eiffel Tower is managed so as to maximize profit.
It costs $5,300,000 to repaint the tower, which is done once every seven years. The electric bill is $400,000 per year for 7.5 million kilowatt-hours.
The Tokyo Tower in Japan is a very similar structure of very similar size. It was finished in 1958 at a final cost of ¥2.8 billion ($8.4 million in 1958).
Source: Wikipedia