All Photos Tagged AI-powered
Captured on the Blue Grosbeak Trail at Weldon Spring Conservation Area, Weldon Spring, Missouri.
I processed this photo using AI-powered Topaz DeNoise software. I find it does a pretty amazing job of reducing grain and enhancing details. If you want to check it out, you can use this link to download it for free, plus get a 15% discount if you purchase:
Greer Springs, Oregon County, Missouri
Last year I drove several hours to an area in southernmost Missouri where they breed but failed to come away with a photo, and now one has turned up half an hour from my home. This is the first reported occurrence in the St. Louis area since 1977, and he appears to be sticking around; there are reports that there may be more than one bird. The photo is not the best quality due to poor exposure, but some post-processing with Topaz made the best of what I got. An exciting find; with warming temperatures, perhaps they will return next year. Lost Valley Trail in Weldon Spring, Missouri
I processed this photo using AI-powered Topaz Sharpen software. If you want to check it out, you can use this link to download it for free, plus get a 15% discount if you purchase:
A clear view of the spread tail feathers is necessary for a definitive identification, but this is about 95% likely to be an Allen's rather than the very similar Rufous hummingbird. Newport Beach Back Bay, California
This photograph was processed using Topaz DeNoise and/or Sharpen software. Topaz software is AI-powered and can nearly eliminate background noise while retaining and even enhancing details from the primary subject—it’s almost magical. It can take grainy high-ISO images and make them usable, and can make already good-quality images really pop. I recommend checking it out, and if you use the link below to download it you can get a 15% discount off the already-reasonable purchase price:
Taken near Red Spring at Red Rock Canyon, Nevada
Greer Springs, Oregon County, Missouri
Dixon Waterfowl Refuge, Illinois
This young Bald Eagle managed to catch a fish on this pass, but the pics from afterward didn't turn out quite as well. It didn't help that it was on the other side of the Mississippi. Clarksville, Missouri
Lost Valley Trail at Weldon Spring Conservation Area, Missouri
You can always tell when the juvenile Starlings start to fledge ... you can hear them a mile away!
Best viewed large.
This is the first image I have processed with Lightroom's new AI-powered Denoise feature. It's very good indeed. I still think that DxO has the edge, but the difference is very slight. If this tool had been available in Lightroom six months ago, I would have had no need to splash out on DxO or any other third-party denoise software.
Near Riverlands Migratory Bird Sanctuary, West Alton, Missouri
Newport Beach Back Bay, California
Seen just off the Lost Valley Trail in the Weldon Spring Conservation Area near Defiance, Missouri
Virginia Rail, to be exact. ;-) Lucky encounter on a day of pre-breeding excitement in Rail Country, where four rails were crossing the path ahead of me, being very busy with each other.
Usually these birds are very secretive, and way more often heard than seen.
(Full disclosure: besides some AI denoise, I also removed part of the gravel pathway with the AI-powered Generative Remove in Lightroom CC 7.3.)
A Saab JAS-39 Gripen E seen here climbing effortlessly into the sky. Apparently 'AI Powered', judging by the decals on the tail fin.
Image info: Nikon Z9 with Nikon Z 100-400mm VR S @ f/5.6, ISO 500, shutter speed 1/2000th second, focal length 400mm, processed in Lightroom Classic.
I normally wouldn’t go anywhere near f/22, but I wanted to create a sun star. I used the AI-powered Raw Details and noise reduction to add back detail to the barley that the aperture setting rendered mushy.
A view into the forest.
A hiking trip to a nearby national park. Dull, overcast weather made photographing easy; even, soft light is perfect for calm nature photos. Rain showers added contrast and pop to nature, with no need for a polarizing filter. Post-processing is easy with these low-contrast pics.
Swapped my little Fuji (X-E3) for a similarly sized, faster camera, the X-T30 II. Paired it with the XF70-300 and a 1.4x teleconverter, and I've now got a ~200-600mm-equivalent mini camera. Grain is heavy with this dim lens and Fuji's crop sensor, but Lightroom's new AI-powered Denoise seems to remove it very well.
As usual, uploaded a stack of pics to IG, wide-angle photos are taken with Sony A7RV: www.instagram.com/p/Cubz2AUoFE0/
Usually I try not to post more than one picture of the same subject, but with this one I made an exception.
In hindsight, I should have opted for this version instead of the first one. At least, I like it better.
Handheld with manual focus lens.
Used LR's new AI-powered noise reduction, although it was probably not really necessary with ISO 800 on my EOS 70D...
The 'Deep Dream Generator' (built on Google's DeepDream) is an AI-powered graphics app that attempts to blend different graphic styles with your own original photographs (shown as small insets here)... It's great fun, although the results can be very hit-and-miss!
(Free to use for limited amounts of time, unless you want to fork out for the subscription packages)
Having a photo printer isn’t just about running off your own work — sometimes it opens the door to unexpected creative collaborations. That’s what happened after a friend of mine came back from Japan with her niece and a memory card full of potential.
We sat down to review the shots, not just casually flipping through but properly reviewing: what’s worth editing, printing, revisiting. One image jumped out. Her niece stood in a high-rise, facing a floor-to-ceiling window that overlooked a massive cityscape of Tokyo. The exposure was nailed for the ambient daylight, allowing an extraordinary level of detail in the urban sprawl below. That X-T20 is a great camera; you can double-click into the city! The mood was cinematic: a quiet moment of someone taking in the immensity of the city.
But the technical realities were clear too. The niece was deeply underexposed, as expected — she was backlit, a silhouette against the city. And structurally, a thick vertical beam bisected the window. It wasn’t wrong, but it did interfere with the visual flow. So the question arose: leave the beam in as part of the architectural truth, or remove it to let the moment breathe?
That marked the beginning of a slow-burn retouching project that spanned over a year, on and off. I started by selectively lifting shadows in Lightroom, enough to reveal presence without flattening the lighting. We ran a few test prints and discussed two versions, I think.
About the beam. I tested a few AI-powered content-aware removal tools (LR’s latest offerings, some generative fill options), but none of them respected the underlying geometry of the city. Artifacts, weird smearing, structural chaos. In the end, a manual clone/heal approach worked best. It took some attention to pattern continuity, but the result was clean, and we both agreed it was better gone.
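For anyone curious what a clone/heal pass amounts to under the hood, here is a minimal Python sketch of the feathered patch-cloning idea. This is my own illustration, not the manual Photoshop/Lightroom workflow used on this image; the file name and coordinates are hypothetical.

```python
# Minimal clone-stamp sketch: copy a source patch over a target region with a
# feathered mask so the seam stays invisible. Coordinates/files are made up.
import numpy as np
import cv2

def clone_patch(img, src_xy, dst_xy, size, feather=0.25):
    """Copy a size x size patch from src_xy to dst_xy with soft edges."""
    sx, sy = src_xy
    dx, dy = dst_xy
    patch = img[sy:sy + size, sx:sx + size].astype(np.float32)

    # Feathered alpha mask: 1.0 in the interior, fading to 0.0 at the border.
    ramp = np.minimum(np.linspace(0, 1, size), np.linspace(1, 0, size))
    ramp = np.clip(ramp / feather, 0, 1)
    alpha = np.outer(ramp, ramp)[..., None]

    dst = img[dy:dy + size, dx:dx + size].astype(np.float32)
    blended = alpha * patch + (1 - alpha) * dst
    img[dy:dy + size, dx:dx + size] = blended.astype(img.dtype)

photo = cv2.imread("tokyo_window.jpg")        # hypothetical file
clone_patch(photo, src_xy=(1200, 400), dst_xy=(1260, 400), size=96)
cv2.imwrite("tokyo_window_retouched.jpg", photo)
```

The feathered edge is doing what the healing brush does by hand: blending the patch border so pattern continuity survives, which is exactly where the automatic tools fell apart on the city geometry.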
The final image had one quirk: a duplicated cloud formation — a side effect of the cloning work. Subtle (or not, now that you know to look for it :), but believable, and I decided to leave it. It becomes a sort of easter egg: a visual riddle for anyone observant enough to spot it.
The finished print became a Christmas gift for her niece’s grandparents — a keepsake from the trip, but refined into something that didn’t quite exist in the original file. Monica kept asking what she owed me. I think I accepted a couple of beers. Honestly, a fair trade for the fun I had.
YAY!! Friday 10/11 we're having a party at The Basilica. The theme is 'Lisa's Wedding': she's gonna marry the Evil Priest of the Basilica, an AI-powered character who became sentient and now thinks he's the evil priest. And ... of course ... we made a promo video!! Enjoyyyy => youtu.be/XHcchey1Isw
Often after the London Flickr Photowalks I hang around until it gets dark. On this Saturday evening back in May this meant rather a long day, but it did provide some photo opportunities which made it all worthwhile. One such was this couple dancing in Trafalgar Square. I'm not sure whether they were a 'real' couple or just professional models, but either way it would've been rude not to take at least a couple of shots.....
It also proved a good opportunity to try out the Photoshop 'distraction removal' tool now that my new PC can access the new AI-powered features. It successfully removed some out-of-focus people in the background as well as a very distracting 'No Entry' sign in the centre of the fountain.
Our next photowalk will be held on Saturday, August 9th. If you'd like to come along you can find more details here : www.flickr.com/groups/londonflickrgroup/discuss/721577219...
Click here for more of my street photography : www.flickr.com/photos/darrellg/albums/72157629075346606
© D.Godliman
Welcome to the cabbage patch, kid! Your future: the Fourth Industrial Revolution, the Forth Industrial Evolution. The Nano World Order! The Graphene Matrix! It’s a new era of profound degeneration, where organisms become algorithms. Human hacks: biochemical processes, electronic signals, store, analyze, no escape. Reengineering human life—666 half human half Beast.
Psychotronics: hacking the mind, brain, and consciousness. They will get inside your head: seeing through your eyes, hearing through your ears, reading your thoughts, inducing thoughts. They will hack you through the Mark of the Beast. They will plug you into the Hive Mind Beast System. Then you will be a brainless zombie of the Antichrist System. Your mind will be controlled with different frequencies and wave forms—controlling your psyche, modifying your consciousness. The New World Order Military Industrial Intelligence Complex, researching and developing highly sophisticated state of the art technology to harness the computing power of the human brain.
Bioenergetics, bio-photonics, biophysics, psionics, psycho-energetics, psychoneuroimmunology, quantum biology, radionics, scalar electromagnetic, bioelectromagnetism, biophotons, biopotentials, morphogenetic fields, non-hertzian waves, quantum fields, scalar waves, zero-point energy, 666 hacks, the path to transhumanism.
AI-powered analytic disclaimer:
“Public digital conversations provide unique insights on social trends shaping society’s opinion. We analyze key influencers and evolving themes as it is critical to understand how controversies and the public reaction unfold in real-time. We quickly identify any coordinated or unauthentic behavior aimed at fueling social unrest, polarization or pollution of the public debate.”
Now a commercial sponsored by the World Economic Forum:
Klaus Schwab: “We pay insufficient attention to the frightening scenario of a comprehensive cyber attack, which would bring a complete halt to the power supply, transportation, hospital services, our society as a whole. The COVID-19 crisis would be seen in this respect as a small disturbance in comparison to a major cyber attack.”
Matthew 24:21 “For at that time there will be a great tribulation (pressure, distress, oppression), such as has not occurred since the beginning of the world until now, nor ever will [again].”
Take the jab, insert the microchip, become a better you…bahahahaha!!! I mean: it’s for your health, for your safety, for your own good, and for the good of society! #666
A return to a classic-type snowflake, but I’m having a bit of fun with this one – it was shot on a Lumix GH5S! Why?
I have a GH5S here in studio for a video production project and I noticed it was snowing, so I figured I’d test out the camera, predominantly designed around video, to see how it handled snowflake photography. The proof is in the pudding, as it were. It handled itself just fine! Slightly lower resolution but with the highest quality pixels, it’s a very capable camera.
The GH5S is the video-focused version of the GH5, which itself was Panasonic's flagship video-oriented Micro Four Thirds camera before this one's release. With a resolution drop from 20MP to 12MP on a sensor designed to push the highest possible 4K video quality, what happens when I fill the frame with a snowflake for high-quality stills? This is the result, and I’m not surprised. In terms of quality over quantity it’s the best “small sensor” camera I’ve used.
Obviously twelve megapixels isn’t enough for most photographers these days, but this camera was not designed for photographers. It was an enjoyable experiment and the results show that it is a very capable camera. As a side note, I thought it would be interesting to see what Adobe’s new AI-powered “Enhance Details” feature for RAW files would do, and it made sense to explore this with a lower-resolution file. Guess what? It did absolutely nothing. I expected as much, since it’s not a landscape or cityscape of any kind that AI would have been trained on, and I doubt further advancements would yield better results. I’ll stick with the Perfect Resize tool from ON1 as well as their “structure” slider which really helps define the surface details in these snowflake photos. I’ve used both of those for every snowflake in this series!
Notice the rounded or pointed tips at the end of the branches? That’s a sure sign of sublimation! Snowflakes are constantly in flux – growing in the clouds and beginning to fade even before they hit the ground. If the cloud ceiling is high and if there is any wind, a snowflake photographed the instant it lands can look like this. That’s exactly what happened here. No matter how fast you are, these little gems don’t last for long! This is one of the reasons why I shoot handheld, so that I can work as quickly as possible to capture the most “complete” snowflakes.
No two snowflakes can be identical, and furthermore no single snowflake is ever identical to itself for more than an instant of time at the smallest measurements. While a snowflake shows the order we can find in nature, it also illustrates the utter chaos of the planet we call home when you spend enough time thinking about them. I certainly spend more time on the subject than most. :)
Just a heads-up that I have my 2019 workshop schedule posted here! www.donkom.ca/workshops/ - some have already filled but there are still spots available including my Iceland tour which will be an absolute dream trip for photographers.
Entry for the Biocup2024 preliminary: NASA’s first AI-powered astronaut, launched in 2055, featuring all-new lab-grown brain technology.
This photograph was published by BusinessNews in an online article on July 19th 2023 by SHAWN JOHNSON entitled:
'' This AI-Powered App Makes Identifying Birds Easy (With One Tricky Exception) ''
BusinessNews was created by a group of young tech entrepreneurs and comes under Biz.crast.net
This photograph was previously published in an online article in TREEHUGGER (Sustainability for all) on November 3rd 2021, entitled:
'' Why the Dawn Chorus Is Getting Quieter and Less Diverse '' - Changes in bird populations are altering spring soundscapes, By Mary Jo DiLonard and Fact checked by Haley Mast.
It was previously selected as my 4,709th image for sale in the GETTY IMAGES COLLECTION on November 4th 2020 (I now have 7000+).
.
.
©All photographs on this site are copyright: DESPITE STRAIGHT LINES (Paul Williams) 2011 – 2020 & GETTY IMAGES ®
No license is given nor granted in respect of the use of any copyrighted material on this site other than with the express written agreement of DESPITE STRAIGHT LINES (Paul Williams) ©
.
.
Photograph taken at an altitude of Twenty metres at 09:15am on Monday 24th March 2020 off Ashbourne Avenue and Chessington Avenue in Bexleyheath, Kent, England.
A pair of nesting Red Robins are keeping me company in the garden, dive-bombing the cats and singing at the tops of their lungs each day, which is always a pleasure to hear. Here we see the male European Robin (Erithacus rubecula), also known as the Robin or Robin Redbreast, a small insectivorous passerine bird, more specifically a chat.
.
.
Nikon D850. Focal length 600mm. Shutter speed 1/160s. Aperture f/9.0. ISO 64. Image area FX (36 x 24). NEF RAW L (8256 x 5504), 14-bit uncompressed. Image size L (8256 x 5504 FX). Focus mode: AF-C. AF-C Priority Selection: Release. Nikon back-button focusing enabled. AF-S Priority selection: Focus. 3D Tracking watch area: Normal, 55 tracking points. AF-Area mode: single point & 73 point switchable. Exposure mode: Shutter priority. Matrix metering. Auto ISO sensitivity control on (Max ISO 800 / minimum shutter speed 125). White balance: Auto1. Colour space: RGB. Active D-Lighting: Normal. Vignette control: Normal. Nikon Distortion control: Enabled. Picture control: Auto (Sharpening A+1 / Clarity A+1)
Sigma 60-600mm f/4.5-6.3DG OS HSM SPORTS. Lee SW150 MKI filter holder with MK2 light shield and custom made velcro fitting for the Sigma lens. Lee SW150 circular polariser glass filter. Lee SW150 Filters field pouch. Mcoplus professional MB-D850 multi function battery grip 6960.Two Nikon EN-EL15a batteries (Priority to battery in Battery grip). Matin quick release neckstrap. My Memory 128GB Class 10 SDXC 80MB/s card. Lowepro Flipside 400 AW camera bag. Nikon GP-1 GPS module. Hoodman HEYENRG round eyepiece oversized eyecup.
.
.
LATITUDE: N 51d 28m 28.17s
LONGITUDE: E 0d 8m 10.60s
ALTITUDE: 54.0m
RAW (TIFF) FILE: 130.00MB NEF: 90.3MB
PROCESSED (JPeg) FILE: 19.60MB
.
.
PROCESSING POWER:
Nikon D850 Firmware versions C 1.10 (9/05/2019) LD Distortion Data 2.018 (18/02/20) LF 1.00
HP 110-352na Desktop PC with AMD Quad-Core A6-5200 APU 64Bit processor. Radeon HD8400 graphics. 8 GB DDR3 Memory with 1TB Data storage. 64-bit Windows 10. Verbatim USB 2.0 1TB desktop hard drive. WD My Passport Ultra 1tb USB3 Portable hard drive. Nikon ViewNX-i 64bit Version 1.4.1 (18/02/2020). Nikon Capture NX-D 64bit Version 1.6.2 (18/02/2020). Nikon Picture Control Utility 2 (Version 2.4.5 (18/02/2020). Nikon Transfer 2 Version 2.13.5. Adobe Photoshop Elements 8 Version 8.0 64bit.
Congratulations to Intel on their acquisition of Nervana. This photo is from the last board meeting at our offices; the Nervana founders — from right to left: Naveen Rao, Amir Khosrowshahi and Arjun Bansal — pondered where on the wall they may fall during M&A negotiations.
We are now free to share some of our perspectives on the company and its mission to accelerate the future with custom chips for deep learning.
I’ll share a recap of the Nervana story, from an investor’s perspective, and try to explain why machine learning is of fundamental importance to every business over time. In short, I think the application of iterative algorithms (e.g., machine learning, directed evolution, generative design) to build complex systems is the most powerful advance in engineering since the Scientific Method. Machine learning allows us to build software solutions that exceed human understanding, and shows us how AI can innervate every industry.
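To make the phrase "iterative algorithms" concrete, here is a toy Python sketch in the spirit of directed evolution. This is my own illustration, not anything from Nervana; the target string and parameters are arbitrary.

```python
# Toy iterative algorithm: random variation plus selection, compounding small
# improvements into a result nobody wrote by hand.
import random

TARGET = "innervate every industry"   # arbitrary fitness target for the demo
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(s):
    """Count positions that already match the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    """Randomly perturb a small fraction of the characters."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while fitness(parent) < len(TARGET):
    # Selection: keep the best of a small brood of mutated offspring.
    brood = [mutate(parent) for _ in range(50)]
    parent = max(brood + [parent], key=fitness)
    generation += 1

print(f"converged in {generation} generations: {parent!r}")
```

Nothing in the loop "knows" the answer; selection over random variation finds it anyway, and that is the property that scales up, in far richer search spaces, to machine learning and generative design.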
By crude analogy, Nervana is recapitulating the evolutionary history of the human brain within computing — moving from the logical constructs of the reptilian brain to the cortical constructs of the human brain, with massive arrays of distributed memory and iterative learning algorithms.
Not surprisingly, the founders integrated experiences in neuroscience, distributed computing, and networking — a delightful mélange for tackling cognitive computing. Ali Partovi, an advisor to Nervana, introduced us to the company.
We were impressed with the founding team and we had a prepared mind to share their enthusiasm for the future of deep learning. Part of that prepared mind dates back to 1989, when I started a PhD in EE focusing on how to accelerate neural networks by mapping them to parallel processing computers. Fast forward 25 years, and the nomenclature has shifted to machine learning and the deep learning subset, and I chose it as the top tech trend of 2013 at the Churchill Club VC debate (video). We were also seeing the powerful application of deep learning and directed evolution across our portfolio, from molecular design to image recognition to cancer research to autonomous driving.
All of these companies were deploying these simulated neural networks on traditional compute clusters. Some were realizing huge advantages by porting their code to GPUs; these specialized processors originally designed for rapid rendering of computer graphics have many more computational cores than a traditional CPU, a baby step toward a cortical architecture. I first saw them being used for cortical simulations in 2007. But by the time of Nervana’s founding in 2014, some (e.g., Microsoft’s and Google’s search teams) were exploring FPGA chips for their even finer-grained arrays of customizable logic blocks. Custom silicon that could scale beyond any of these approaches seemed like the natural next step. Here is a page from Nervana’s original business plan (Fig. 1 in comments below).
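As a rough sketch of why those GPU ports paid off (my illustration, assuming PyTorch and a CUDA-capable machine; timings vary by hardware): the inner loop of a neural network is dense matrix multiplication, which fans out naturally across a GPU's thousands of cores.

```python
# Compare the same dense matmul on CPU and GPU. Illustrative only.
import time
import torch

x = torch.randn(4096, 4096)
w = torch.randn(4096, 4096)

t0 = time.perf_counter()
y = x @ w                                # dense matmul on the CPU
print(f"CPU matmul: {time.perf_counter() - t0:.3f}s")

if torch.cuda.is_available():
    xg, wg = x.cuda(), w.cuda()          # move the same tensors to the GPU
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    yg = xg @ wg                         # identical math, thousands of cores
    torch.cuda.synchronize()             # kernels launch asynchronously
    print(f"GPU matmul: {time.perf_counter() - t0:.3f}s")
```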
The march to specialized silicon, from CPU to GPU to FPGA to ASIC, had played out similarly for Bitcoin miners, with each step toward specialized silicon obsoleting the predecessors. When we spoke to Amazon, Google, Baidu, and Microsoft in our due diligence, we found a much broader application of deep learning within these companies than we could have imagined prior, from product positioning to supply chain management.
Machine learning is central to almost everything that Google does. And through that lens, their acquisitions and new product strategies make sense; they are not traditional product line extensions, but a process expansion of machine learning (more on that later). They are not just playing games of Go for the fun of it. Recently, Google switched their core search algorithms to deep learning, and they used DeepMind to cut data center cooling costs by a whopping 40%.
The advances in deep learning are domain independent. Google can hire and acquire talent and delight in their passionate pursuit of game playing or robotics. These efforts help Google build a better brain. The brain can learn many things. It is like a newborn human; it has the capacity to learn any of the languages of the world, but based on training exposure, it will only learn a few. Similarly, a synthetic neural network can learn many things.
Google can let the Brain team find cats on the Internet and play a great game of Go. The process advances they make in building a better brain (or in this case, a better learning machine) can then be turned to ad matching, a task that does not inspire the best and the brightest to come work for Google.
The domain independence of deep learning has profound implications on labor markets and business strategy. The locus of learning shifts from end products to the process of their creation. Artifact engineering becomes more like parenting than programming. But more on that later; back to the Nervana story.
Our investment thesis for the Series A revolved around some universal tenets: a great group of people pursuing a product vision unlike anything we had seen before. The semiconductor sector was not crowded with investor interest. AI was not yet on many venture firms’ sectors of interest. We also shared with the team that we could envision secondary benefits from discovering the customers. Learning about the cutting edge of deep learning applications and the startups exploring the frontiers of the unknown held a certain appeal for me. And sure enough, there were patterns in customer interest, from an early flurry in medical imaging of all kinds to a recent explosion of interest in the automotive sector after Tesla’s Autopilot feature went live. The auto industry collectively rushed to catch up.
Soon after we led the Series A on August 8, 2014, I found myself moderating a deep learning panel at Stanford with Nervana CEO Naveen Rao.
I opened with an introduction to deep learning and why it has exploded in the past four years (video primer). I ended with some common patterns in the power and inscrutability of artifacts built with iterative algorithms. We see this in biology, cellular automata, genetic programming, machine learning and neural networks.
There is no mathematical shortcut for the decomposition of a neural network or genetic program, no way to “reverse evolve” with the ease that we can reverse engineer the artifacts of purposeful design.
The beauty of compounding iterative algorithms — evolution, fractals, organic growth, art — derives from their irreducibility. (More from my Google Tech Talk and MIT Tech Review)
Year 1. 2015
Nervana adds remarkable engineering talent, a key strategy of the first mover. One of the engineers figures out how to rework the undocumented firmware of NVIDIA GPUs so that they run deep learning algorithms faster than off-the-shelf GPUs or anything else Facebook could find. Matt Ocko preempted the second venture round of the company, and he brought the collective learning of the Data Collective to the board.
Year 2. 2016 Happy 2nd Birthday Nervana!
The company is heads down on chip development. They share some technical details (flexpoint arithmetic optimized for matrix multiplies and 32GB of stacked 3D memory on chip) that give them 55 trillion operations per second on their forthcoming chip, and multiple high-speed interconnects (as typically seen in the networking industry) for ganging a matrix of chips together into unprecedented compute fabrics. 10x made manifest. See Fig. 2 below.
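For intuition, here is a simplified numpy sketch of the shared-exponent idea behind an arithmetic like flexpoint. This is my own toy reduction, not Nervana's actual format (which manages exponents far more carefully): one exponent per tensor and integer mantissas, so the matrix-multiply datapath stays pure fixed-point.

```python
# Shared-exponent ("flexpoint-style") quantization sketch: 16-bit-style
# mantissas (stored in int32 for simplicity) with one exponent per tensor.
import numpy as np

def to_flex(x, mant_bits=16):
    """Quantize a float tensor to (integer mantissas, shared exponent)."""
    max_abs = np.abs(x).max()
    # Choose the exponent so the largest value just fits in the mantissa.
    exp = int(np.ceil(np.log2(max_abs + 1e-30))) - (mant_bits - 1)
    mant = np.round(x / 2.0 ** exp).astype(np.int32)
    return mant, exp

def flex_matmul(a, b):
    """Multiply two flexpoint tensors using integer arithmetic only."""
    (ma, ea), (mb, eb) = a, b
    # Wide integer accumulation; the result's exponent is just the sum.
    return ma.astype(np.int64) @ mb.astype(np.int64), ea + eb

a = np.random.randn(64, 64).astype(np.float32)
b = np.random.randn(64, 64).astype(np.float32)
mant, exp = flex_matmul(to_flex(a), to_flex(b))
approx = mant * 2.0 ** exp
print("max error vs float matmul:", np.abs(approx - a @ b).max())
```

Dropping the per-element exponent of floating point frees silicon area and power for more multiply-accumulate units, which is roughly where such a design looks for its advantage.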
And then Intel came knocking.
With the most advanced production fab in the world and a healthy desire to regain the mantle of leading the future of Moore’s Law, the combination was hard to resist. Intel vice president Jason Waxman told Recode that the shift to artificial intelligence could dwarf the move to cloud computing. “I firmly believe this is not only the next wave but something that will dwarf the last wave.” But we had to put on our wizard hats to negotiate with giants.
The deep learning and AI sector has heated up in labor markets to relatively unprecedented levels. Large companies have recently been paying $6–10 million per engineer in talent acquisitions, and $4–5M per head for pre-product startups still in academia. The Masters students in a certain Stanford lab averaged $500K/yr for their first job offers at graduation. We witnessed an academic turn down a million-dollar signing bonus because they got a better offer.
Why so hot?
The deep learning techniques, while relatively easy to learn, are quite foreign to traditional engineering modalities. It takes a different mindset and a relaxation of the presumption of control. The practitioners are like magi, sequestered from the rest of a typical engineering process. The artifacts of their creation are isolated blocks of functionality defined by their interfaces. They are like blocks of magic handed to other parts of a traditional organization. (This carries over to the customers too; just about any product that you experience in the next five years that seems like magic will almost certainly be built by these algorithms).
And remember that these “brain builders” could join any industry. They can ply their trade in any domain. When we were building the deep learning team at Human Longevity Inc. (HLI), we hired the engineering lead from Google’s Translate team. Franz Och pioneered Google’s better-than-human translation service not by studying linguistics, grammar, or even speaking the languages being translated. He focused on building the brain that could learn the job from countless documents already translated by humans (UN transcripts in particular). When he came to HLI, he cared about the mission, but knew nothing about cancer and the genome. The learning machines can find the complex patterns across the genome. In short, deep learning expertise is fungible, and there are a burgeoning number of companies hiring and competing across industry lines.
And it is an ever-widening set of industries undergoing transformation, from automotive to agriculture, healthcare to financial services. We saw this explosion in the Nervana customer pipeline. And we see it across the DFJ portfolio, especially in our newer investments. Here are some examples:
• Learning chemistry and drug discovery: Here is a visualization of the search space of candidates for a treatment for Ebola; it generated the lead molecule for animal trials. Atomwise summarizes: “When we examine different neurons on the network we see something new: AtomNet has learned to recognize essential chemical groups like hydrogen bonding, aromaticity, and single-bonded carbons. Critically, no human ever taught AtomNet the building blocks of organic chemistry. AtomNet discovered them itself by studying vast quantities of target and ligand data. The patterns it independently observed are so foundational that medicinal chemists often think about them, and they are studied in academic courses. Put simply, AtomNet is teaching itself college chemistry.”
• Designing new microbial life for better materials: Zymergen uses machine learning to predict the combination of genetic modifications that will optimize product yield for their customers. They are amassing one of the largest data sets about microbial design and performance, which enables them to train machine learning algorithms that make search predictions with increasing precision. Genomatica had great success in pathway optimization using directed evolution, a physical variant of an iterative optimization algorithm.
• Discovery and change detection in satellite imagery: Planet and Mapbox. Planet is now producing so much imagery that humans can’t actually look at each picture it takes. Soon, they will image every meter of the Earth every day. From a few training examples, a convolutional neural net can find similar examples globally — like all new housing starts, all depleted reservoirs, all current deforestation, or car counts for all retail parking lots. (A sketch of this pattern follows this list.)
• Automated driving & robotics: Tesla, Zoox, SpaceX, Rethink Robotics, etc.
• Visual classification: From e-commerce to drones to security cameras and more. Imagen is using deep learning to radically improve medical image analysis, starting with radiology.
• Cybersecurity: When protecting endpoint computing & IOT devices from the most advanced cyberthreats, AI-powered Cylance is proving to be a far superior and adaptive approach versus older signature-based antivirus solutions.
• Financial risk assessment: Avant and Prosper use machine learning to improve credit verification and merge traditional and non-traditional data sources during the underwriting process.
• And now for something completely different: quantum computing. For a wormhole peek into the near future, our quantum computing company, D-Wave Systems, powered a 100,000,000x speedup in a demonstration benchmark for Google, a company that has used D-Wave quantum computers for over a decade now on machine learning applications.
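As noted in the satellite-imagery item above, here is a minimal PyTorch sketch of that tile-classification pattern. This is my illustration; the architecture, tile size, and labels are hypothetical, not Planet's or Mapbox's pipeline.

```python
# Tiny convolutional net that scores fixed-size image tiles, so a handful of
# labelled examples can be swept across a whole mosaic of satellite imagery.
import torch
import torch.nn as nn

class TileClassifier(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, n_classes)  # for 64x64 tiles

    def forward(self, tiles):
        h = self.features(tiles)
        return self.head(h.flatten(1))

# Score a batch of hypothetical 64x64 RGB tiles cut from a larger scene.
model = TileClassifier()
tiles = torch.randn(8, 3, 64, 64)
scores = model(tiles).softmax(dim=1)   # per-tile class probabilities
print(scores.shape)                    # torch.Size([8, 2])
```

Train the head on a few labelled tiles, then sweep the classifier across the full mosaic; that is what turns "a few training examples" into a global search.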
So where will this take us?
Neural networks had their early success in speech recognition in the ’90s. In 2012, the deep learning variant dominated the ImageNet competitions, and visual processing can now be done better by machine than by human in many domains (like pathology, radiology and other medical image classification tasks). DARPA has research programs to do better than a dog’s nose in olfaction.
We are starting the development of our artificial brains in the sensory cortex, much like an infant coming into the world. Even within these systems, like vision, the deep learning network starts with similar low level constructs (like edge-detection) as foundations for higher level constructs like facial forms, and ultimately, finding cats on the internet with self-taught learning.
But the artificial brains need not limit themselves to the human senses. With the internet of things, we are creating a sensory nervous system for the planet, with countless sensors and data collectors proliferating everywhere. All of this “big data” would be a big headache but for machine learning to find patterns in it all and make it actionable. So, not only are we transcending human intelligence with multitudes of dedicated intelligences, we are transcending our sensory perception.
And it need not stop there. It is precisely by these iterative algorithms that human intelligence arose from primitive antecedents. While biological evolution was slow, it provides an existence proof of the process, now vastly accelerated in the artificial domain. It shifts the debate from the realm of the possible to the likely timeline ahead.
Let me end with the closing chapter in Danny Hillis’ CS book The Pattern on the Stone: “We will not engineer an artificial intelligence; rather we will set up the right conditions under which an intelligence can emerge. The greatest achievement of our technology may well be creation of tools that allow us to go beyond engineering — that allow us to create more than we can understand.”
-----
Here is some early press:
Xconomy (most in-depth), MIT Tech Review, Re/Code, Forbes, WSJ, Fortune.
Ever daydreamed about owning your own spaceship? As the next best thing – thanks to a free app launching today – anyone with an Apple Vision Pro or Meta Quest 3 or 3S VR headset can have ESA’s Hera asteroid mission for planetary defence turn up in their own living room.
Developed by Italian startup DIVE in cooperation with ESA’s Hera mission team, the “Guardians of Earth” app allows users to peer within a virtual spacecraft, learn about how it works, assemble its elements and follow its journey through space to its target asteroid.
“The availability of this app makes for a great Christmas gift, allowing people to learn about and interact with our mission in a totally new and immersive way,” notes Hera mission manager Ian Carnelli. “This was a dream project for DIVE founder Luca De Dominicis, who sadly recently passed away, yet whose vision for making space exploration accessible to everyone continues through this collaboration.”
DIVE CEO Michaelangelo Mochi adds: “Partnering with ESA is a journey we cherish deeply, akin to astronauts venturing into the unknown. We are proud to work alongside an organisation that shares our vision and commitment to exploration.”
Launched last October, Hera is ESA’s first planetary defence mission, on its way to visit the first asteroid to have had its orbit altered by human action. By gathering close-up data about the Dimorphos asteroid, which was impacted by NASA’s DART spacecraft in 2022, Hera will help turn asteroid deflection into a well-understood and potentially repeatable technique. Hera is currently on its way to a ‘swingby’ of Mars next spring which will set it on course towards Dimorphos.
With Guardians of Earth, users can engage with the Hera spacecraft in remarkable detail through augmented reality. They can choose to put it together piece by piece, discover its advanced instrumentation and experience key space travel technologies. Through the involvement of video game studio 34BigThings, the app offers a 360-degree immersive experience, projecting users into the cosmos with Hera, bringing them face-to-face with celestial bodies encountered along the way.
From today, the app is available for free on Apple Vision Pro from the App Store and for Meta Quest 3 and 3S.
Want to know more about the Hera mission? Try asking the mission ‘directly’, through Hera Space Companion, an interactive AI-powered assistant providing facts about the mission and real-time data from space. The Hera Space Companion has been developed by Terra Mater Studios, Impact AI, and Microsoft Austria in collaboration with ESA.
Credits: ESA/Terra Mater
Designed as a relatively low-cost AI-powered stealth platform, the Gray Jay is a force multiplier meant to fly alongside manned aircraft or carry out missions independently. Multi-role and able to support different mission modules via a removable nose, the Gray Jay is capable of performing anything from air escort to strike and reconnaissance.
Due to its size, munitions will be somewhat limited, though the Gray Jay is still able to mount two munitions, such as the Raybeam Defense Duck Hawk medium-range air-to-air missile, internally in a pair of weapons bays, plus locations for a hardpoint under each wing when stealth is not called for.
About this model:
Features include deployable front and rear landing gear, an internal bay capable of mounting one 6-brick-long munition, and the ability to swap out the nose module for future mission modules. It would also be able to mount a hardpoint under each wing, four studs in from the tip.
I’d also consider this 1/34-ish minifig scale, making it compatible with others’ models and comparable in size to the Boeing ATS Loyal Wingman.
As with my other builds, all parts used in this are real production pieces.
If you're interested in this build, a file can be found here:
This was one of the better shots from the day, but it was marred by a guy walking through the background. Then I installed a new version of Lightroom, which claimed to have better AI-powered object removal, so I gave that a go.
iss073e0384102 (July 18, 2025) --- JAXA (Japan Aerospace Exploration Agency) astronaut and Expedition 73 Commander Takuya Onishi sets up the CIMON artificial intelligence-powered robotic assistant inside the International Space Station's Kibo laboratory module. Engineers on the ground tested CIMON's ability to command a free-flying robotic camera for JAXA’s ICHIBAN technology demonstration. CIMON tests how artificial intelligence affects crew support, potentially freeing crews for more important tasks and increasing time for relaxation during long-term missions.
ISO accidentally set to 25600. I tried Lightroom's AI-powered Denoise; the results were really good (minus the person's face, not a lot of detail there), but somehow I like the grainy original more: it better captures the rainy mood that day. I'm excited to try the Denoise feature next time I shoot the Milky Way!