All Photos Tagged Computational

An abstract for your Sunday. Computational art using Synthetik's Studio Artist 5.5.

Taken and edited on the iPhone 11 Pro. This is where photography is headed.

a nomad is lost in the wasteland of cosmic computation

Computational portrait photography of this flower at Seattle Center—site of the 1962 World’s Fair.

Did you know you can dial in the aperture after the fact? Me neither. Next time I’ll layer tight aperture over wide, to get the full blossom like this, and a super-smooth background.
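
If you're curious what that layering looks like outside the phone, here's a minimal sketch in Python with OpenCV, assuming two already-aligned frames and a hand-painted subject mask; all of the file names are placeholders:

import cv2
import numpy as np

# Tight aperture: the whole blossom is sharp. Wide aperture: creamy background.
deep = cv2.imread("deep_dof.jpg").astype(np.float32)
shallow = cv2.imread("shallow_dof.jpg").astype(np.float32)

# Hypothetical hand-painted mask: white over the subject, black elsewhere.
mask = cv2.imread("subject_mask.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
mask = cv2.GaussianBlur(mask, (0, 0), 5)[..., None]  # feather so the seam hides

# Subject from the tight-aperture frame, background from the wide one.
out = mask * deep + (1.0 - mask) * shallow
cv2.imwrite("layered.jpg", out.astype(np.uint8))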

Playing around with the built-in computational ND filter in the OM-1, in Donard forest, Newcastle. Turns out it’s pretty good…
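
For anyone wondering what a computational ND actually does under the hood: it averages a burst of short exposures, so moving water smears the way it would in one long exposure while the static scene stays sharp. A toy sketch of the idea in Python (the burst folder is a placeholder, and a real in-camera version also aligns the frames before stacking):

import glob

import cv2
import numpy as np

frames = sorted(glob.glob("burst/*.jpg"))  # hypothetical burst of short exposures
acc = None
for path in frames:
    img = cv2.imread(path).astype(np.float64)
    acc = img if acc is None else acc + img

# The mean of N short frames approximates one exposure N times as long.
avg = (acc / len(frames)).astype(np.uint8)
cv2.imwrite("fake_long_exposure.jpg", avg)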

Does computational photography in a smartphone obviate fast glass on ILCs?

 

Happy Food Friday!

This may be the last one from Croatia. Some of the fans will be sighing in relief. There are still a few pretty pictures in the backlog; maybe I'll get desperate and raid the fridge in coming months.

Tri-X, Pyro48; Nikon N75, 18-55mm f/3.5-5.6G

A computational feature (in-camera focus stacking) was used to extend the depth of field and render both subjects in focus. The Olympus OM-1 is amazing!
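
For the curious, the core of focus stacking is simple enough to sketch. A toy Python version that keeps, for each pixel, the frame with the highest local contrast (file names are placeholders, and real in-camera stacking also aligns the frames and blends seams more gracefully):

import glob

import cv2
import numpy as np

stack = [cv2.imread(p) for p in sorted(glob.glob("stack/*.jpg"))]

# Per-frame sharpness map: smoothed absolute Laplacian of the grayscale image.
sharpness = []
for img in stack:
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F))
    sharpness.append(cv2.GaussianBlur(lap, (0, 0), 3))

best = np.argmax(np.stack(sharpness), axis=0)  # index of sharpest frame per pixel
out = np.zeros_like(stack[0])
for i, img in enumerate(stack):
    out[best == i] = img[best == i]
cv2.imwrite("focus_stacked.jpg", out)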

Follow me on my Facebook Public Page:

www.facebook.com/OlivieriDino/

 

"I've seen a world of oceans of burning lithium.."

 

Image generated by a handmade script (1/3 million iterations), split into 2 layers, chroma-shifted, transformed, overlaid, composed, luma-enhanced, and digitally oil-painted.

 

More amazing stories on www.onyrix.com

More productions on www.umamu.org

twitter: twitter.com/OlivieriDino

facebook: www.facebook.com/OlivieriDino

g+: plus.google.com/collection/s1KwZB

soundcloud: soundcloud.com/onyrix

bandcamp: onyrix.bandcamp.com/

instagram: www.instagram.com/olivieri.dino

youtube: www.youtube.com/user/onyrix/

vimeo: vimeo.com/onyrix/

One of the textbooks used by my friend Aaron while he was studying at James Cook University.

Abstract composition of the Engineering, Environment and Computing Building at Coventry University. A riot of futuristic geometric shapes.

This mode is a nice feature of one of my cameras. Here, the taillight of a bicycle riding down a gravel path.

It’s not fair but then again it’s no contest.

The ability to fake a long exposure on your phone with no ND filter or tripod is awesome, but the reality of it is... good but not great.

We use automated means for systematic computational analysis of wrongthink data and statistics. We surf the data ecosystem with machine learning classifiers, looking for wrongthink content. When we track it down, we measure its effectiveness. Once its narratives are diagnosed, we analytically target the data and run it through our narrative richness classifier—our machine learning algorithm. We've trained it to identify content that is representative of wrongthink. Our algorithm has learned the characteristics that perpetuate wrongthink.

In the information ecosystem we must use manipulation to win the hearts and minds of the people. Therefore we are constructing a periodic table of social systems, in which we can categorize groups of wrongthink offenders. With this table we will gain an understanding of these groups. Then we will be able to understand their reactions and interactions with conspiracy-theory wrongthink.

We can't have free-thinking, critical thinkers who can think for themselves. So we must control the information—the narrative. We use media narratives and entertainment to persuade, engage, and mobilize the masses. Like the devil, we twist the truth to suit our narratives. We use technology to manipulate you—Techno-Manipulation. We will tell you what is fact and what is misinformation.

If you are one of the wrongthink revolutionaries, we will eventually censor and deperson you. One day you will wake up and find yourself in a global mass Techno-Surveillance police state, where your every move will be monitored and analyzed in real time. Eventually, if you are found guilty of wrongthink hate crimes, you will be sent to a reeducation camp. If, however, you refuse to reform, you will be sentenced to death. Off with your head, you wrongthink terrorist! We must stamp out terrorism (critical thinking)! Exterminate the useless thinkers/eaters—bring on another holocaust!

Welcome to the Techno-Dark Ages—the Techno-World Order—where you will take the Techno-Mark (666) and worship the Techno-Beast (Image of the Beast). Techno-Deindividuation and Techno-Eugenics: boy, the transhuman (666) future looks bright! Techno-Dystopia and the Internet of Bodies, here we come! Join the Fourth Industrial Revolution, join the Fourth Reich, follow the Führer! Give the Antichrist salute. Hail Beast! Hail Satan!

 

Daniel 7:7 “Then in my vision that night, I saw a fourth beast—terrifying, dreadful, and very strong. It devoured and crushed its victims with huge iron teeth and trampled their remains beneath its feet. It was different from any of the other beasts, and it had ten horns.”

 

Computationally Challenged, Week 19 - Visual Weight and Balance, Depth of Field

 

Pentax SMC 50mm f/1.2, taken at f/1.2

 

Explored 5/14/2025! Thanks for all the views, faves and kind comments!

No tripod or neutral density filter necessary? I love the idea of being able to, um, *fake* a long exposure but I wish it resembled a longer duration. These two shots of Great Falls turned out pretty well, but I've since photographed a number of other waterfalls and the Pixel 6 typically seems to mimic a duration of only 1/4 of a second or so, which in my book doesn't really count as a "long exposure." Even then, the effect often looks less like motion blur and more like someone sloppily brushed in an out-of-focus quality over the water. Consider me initially happy but ultimately disappointed.

Computational artifacts of a colorful kind.

Suntoucher, Groove Armada ♫

Studying computational dynamics with the Sony RX100.

How do you compute the volume of a cat? Dunking it in water doesn't work -- you get the volume of the rat-like creature that lives inside the cat, much like the feeble alien within a Dalek. (And if your answer had anything to do with contour integrals, get real.) Here is a method that works: using successive approximation, determine the smallest box that the cat will fully enclose itself in.
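
For completeness, the method in sketch-grade Python. The cat_fits() oracle is hypothetical hardware (one cat, assorted boxes), and the search is simplified to cubes:

def cat_fits(side_inches: float) -> bool:
    """Returns True if the cat will fully enclose itself in a cube this size."""
    raise NotImplementedError("requires one (1) cat and assorted boxes")

def cat_volume(lo: float = 1.0, hi: float = 36.0, tol: float = 0.5) -> float:
    # Successive approximation: binary-search the smallest cube the cat accepts.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if cat_fits(mid):
            hi = mid   # cat fits: try a smaller box
        else:
            lo = mid   # cat does not fit (or refuses): go bigger
    return hi ** 3     # volume of the smallest box that worked

A cube about 8.7 inches on a side comes out near the 648 cubic inches reported below.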

 

This cat is approximately 648 cubic inches in volume.

 

Related blog entry here

Computationally Challenged - Minimalist

The Flickr Lounge - Get Close

A dark night scene taken with a Pixel 4A using the "Night Sight" mode. This is "computational photography": a handheld mobile phone, a several-second exposure built from multiple frames that are processed to render what the computer (AI) believes the scene looks like. Not perfect, but far better than anything I could get without a tripod.

2016, Athens, Psirri, Greece

Artist: N_Grams

If academic disciplines were playing-card suits, then Computer Science would be the joker in the pack

www.cdyf.me/computing?q=joker#joker

 

Public domain image of the Jolly Joker, a vintage Masenghini Italian playing card via Wikimedia Commons w.wiki/35EW

  

Computational domes. The design is generated with shape grammars and the construction is adapted with a catenary-simulation. Scripted in Processing.

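
A minimal sketch of the catenary-simulation half of that pipeline, written here in plain Python rather than Processing: relax a hanging chain with verlet integration plus length constraints, then invert it into a standing, compression-only arch. The node count, gravity, and iteration counts are illustrative guesses:

import numpy as np

n = 40
pts = np.zeros((n, 2))
pts[:, 0] = np.linspace(0.0, 10.0, n)  # supports pinned at x = 0 and x = 10
prev = pts.copy()
rest = 12.0 / (n - 1)                  # chain longer than the span, so it sags
gravity = np.array([0.0, -0.01])

for _ in range(2000):
    new = 2 * pts[1:-1] - prev[1:-1] + gravity  # verlet step on interior nodes
    prev[1:-1] = pts[1:-1]
    pts[1:-1] = new
    for _ in range(5):                 # project the segment-length constraints
        d = pts[1:] - pts[:-1]
        L = np.linalg.norm(d, axis=1, keepdims=True)
        corr = 0.5 * (1.0 - rest / L) * d
        pts[:-1] += corr
        pts[1:] -= corr
        pts[0] = (0.0, 0.0)            # re-pin the supports each pass
        pts[-1] = (10.0, 0.0)

arch = pts * [1.0, -1.0]               # flip the hanging form into an arch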

Timnit Gebru wants to replace the U.S. Census (which costs $1B/year to implement) by simply analyzing the cars seen in Google Street View images.

 

After processing 22 million observed cars, she found some fascinating things, like the predictive power of "the sedan/truck ratio" for political party. Republicans sure like trucks! More findings in the comments below.

 

From the AI in Fintech Forum today at Stanford ICME.

This is a Computational Fluid Dynamics (CFD) computer-generated model of the Space Shuttle during re-entry. CFD has supplanted wind tunnels for many evaluations of aircraft. As computing power increases and computer models become more sophisticated, CFD will become an increasingly powerful tool for aeronautics research.

 

NASA Media Usage Guidelines

 

Credit: NASA

Image Number: L-1993-03205

Date: April 1, 1993

These components perform key computations for Tide Predicting Machine No. 2, a special purpose mechanical analog computer for predicting the height and time of high and low tides. The tide prediction formula implemented by the machine includes the addition of a series of cosine terms. The triangular metal pieces are part of slotted yoke cranks which convert circular motion to a vertical motion that traces a sinusoid. Each slotted yoke crank is connected by a shaft to a pulley, which causes the pulley to follow the sinusoidal motion. A chain going over and under pulleys sums each of their deflections to compute the tide. Along the top of the photo, connecting shafts drive slotted yoke cranks on both sides of the machine.
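
In other words, the machine mechanizes a harmonic sum, h(t) = H0 + sum_i A_i cos(w_i t + p_i), with one crank per cosine term and the pulley chain doing the addition. The same computation in a few lines of Python (the constituent speeds are the standard M2/S2/K1 values, but the amplitudes and phases are placeholders, not real harmonic constants for any port):

import math

H0 = 5.0  # mean water level (ft)
constituents = [           # (amplitude ft, speed deg/hr, phase deg)
    (2.1, 28.984, 110.0),  # M2, principal lunar semidiurnal
    (0.7, 30.000, 140.0),  # S2, principal solar semidiurnal
    (0.4, 15.041, 200.0),  # K1, lunisolar diurnal
]

def tide_height(t_hours: float) -> float:
    # Each slotted yoke crank traces one cosine; the chain over the pulleys sums them.
    return H0 + sum(A * math.cos(math.radians(w * t_hours + p))
                    for A, w, p in constituents)

for t in range(0, 24, 3):  # tabulate a day of predicted heights
    print(f"hour {t:2d}: {tide_height(t):5.2f} ft")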

 

The U.S. government used Tide Predicting Machine No. 2 from 1910 to 1965 to predict tides for ports around the world. The machine, also known as “Old Brass Brains,” uses an intricate arrangement of gears, pulleys, chains, slides, and other mechanical components to perform the computations.

 

A person using the machine would need 2-3 days to compute a year's tides at one location; doing the same calculations by hand would take hundreds of days. The machine is 10.8 feet (3.3 m) long, 6.2 feet (1.9 m) high, and 2.0 feet (0.61 m) wide, and weighs approximately 2,500 pounds (1,134 kg). The operator powers the machine with a hand crank.

 

The National Oceanic and Atmospheric Administration (NOAA) occasionally displays the machine at its facility in Silver Spring, Maryland.

For my birthday Sunday night, I hosted a dinner at work for the Institute for Protein Design — the bastion of intelligent design at the University of Washington. We discussed their brand-new coronavirus vaccine, a therapeutic cure in development, and a variety of even more amazing biologics in the pipeline.

 

Oh, and we should expect that 30-70% of all of us will get the coronavirus this time around, and the rest next winter.

 

TL;DR: How I learned to stop worrying and love the bug.

 

David Baker and Neil King use large pools of computers (GPU banks and Rosetta@Home) to computationally design functional proteins, for example, to bind to specific invariant surface coat proteins of a target virus (to be robust to evolutionary countermeasures) or to create a self-assembling nanocage decorated with a floral arrangement of epitopes to trigger a B-cell response (i.e., more potent and broad-spectrum vaccines). Baker had just learned that their crash effort on the new coronavirus vaccine had worked, and that it might be better than others. They got the genetic sequence while the outbreak was still limited to China and went from sequence to vaccine in 42 days.

• More info on their nanoparticle vaccines

• And a pre-print of their coronavirus strategy as recently applied for an HIV vaccine.

 

They are also computing novel nanoparticle binders to target sites on 2019-nCoV predicted to neutralize the virus or interfere with its ability to infect cells. This would be both a prophylactic and a therapeutic, and it might face a shorter regulatory-approval path than the 18 months expected for novel vaccines (where safety studies are paramount, given their use in healthy people). They have had earlier success with this approach against influenza.

• Earlier examples

• They are also using proteins to selectively bind semiconductors and other interesting inorganics (more info)

 

So, if we are all going to get the virus, why not worry? Their advice if you are showing symptoms: just stay home and ride it out. It might be more dangerous to visit the hospital for testing. And certainly more stressful.

 

[addendum: the initial low death rates did not come for free. ICU availability made a big difference. See this post]
