Does computational photography in a smartphone obviate fast glass on ILCs?
Happy Food Friday!
This may be the last one from Croatia. Some of the fans will be sighing in relief. There are still a few pretty pictures in the backlog; maybe I'll get desperate and raid the fridge in the coming months.
Abstract composition of the Engineering, Environment and Computing Building at Coventry University. A riot of geometric futuristic shapes.
This mode is a nice feature of one of my cameras. Here it captures the taillight of a bicycle riding down a gravel path.
The ability to fake a long exposure on your phone with no ND filter or tripod is awesome, but the reality of it is... good but not great.
We use automated means for systematic computational analysis of wrongthink data and statistics. We surf the data ecosystem with machine learning classifiers, looking for content of wrongthink. When we track it down, we measure its effectiveness. Once its narratives are diagnosed, we analytically target the data and run it through our narrative richness classifier—our machine learning algorithm. We've trained it to identify content that is representative of wrongthink. Our algorithm has learned the characteristics that perpetuate wrongthink.

In the information ecosystem we must use manipulation to win the hearts and minds of the people. Therefore we are constructing a periodic table of social systems, in which we can categorize groups of wrongthink offenders. With this table we will gain an understanding of these groups. Then we will be able to understand their reactions and interactions with conspiracy theory wrongthink.

We can't have free-thinking, critical thinkers who can think for themselves. So we must control the information—the narrative. We use media narratives and entertainment to persuade, engage, and mobilize the masses. Like the devil, we twist the truth to suit our narratives. We use technology to manipulate you—Techno-Manipulation. We will tell you what is fact and what is misinformation. If you are one of the wrongthink revolutionaries, we will eventually censor and deperson you.

One day you will wake up and find yourself in a global mass Techno-Surveillance police state, where your every move will be monitored and analyzed in real time. Eventually, if you are found guilty of wrongthink hate crimes, you will be sent to a reeducation camp. If, however, you refuse to reform, you will be sentenced to death. Off with your head, you wrongthink terrorist! We must stamp out terrorism (critical thinking)! Exterminate the useless thinkers/eaters—bring on another holocaust! Welcome to the Techno-Dark Ages—the Techno-World Order—where you will take the Techno-Mark (666) and worship the Techno-Beast (Image of the Beast). Techno-Deindividuation and Techno-Eugenics: boy, the transhuman (666) future looks bright! Techno-Dystopia and the Internet of Bodies, here we come! Join the Fourth Industrial Revolution, join the Fourth Reich, follow the Führer! Give the Antichrist salute. Hail Beast! Hail Satan!
Daniel 7:7 “Then in my vision that night, I saw a fourth beast—terrifying, dreadful, and very strong. It devoured and crushed its victims with huge iron teeth and trampled their remains beneath its feet. It was different from any of the other beasts, and it had ten horns.”
Computational Challenge, Week 19 - Visual Weight and Balance, Depth of Field
Pentax SMC 50mm f/1.2, taken at f/1.2
Explored 5/14/2025! Thanks for all the views, faves and kind comments!
No tripod or neutral density filter necessary? I love the idea of being able to, um, *fake* a long exposure but I wish it resembled a longer duration. These two shots of Great Falls turned out pretty well, but I've since photographed a number of other waterfalls and the Pixel 6 typically seems to mimic a duration of only 1/4 of a second or so, which in my book doesn't really count as a "long exposure." Even then, the effect often looks less like motion blur and more like someone sloppily brushed in an out-of-focus quality over the water. Consider me initially happy but ultimately disappointed.
Modeled in Blender 2.79 and rendered using the LuxCoreRender 2.1 render engine.
This is a surface of constant potential, where source terms have been randomly allocated a position and size within a cube. Each source term is given the equation:
f(x,y,z) = e^(-a * ((x-xi)^2 + (y-yi)^2 + (z-zi)^2))
where (xi, yi, zi) is a random coordinate within the cube and a is a random positive constant.
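For anyone curious how such a surface can be built, here is a minimal sketch in Python (the original was modeled in Blender; the source count, decay constants, and iso level below are arbitrary assumptions of mine). It sums the Gaussian source terms on a grid and extracts a constant-potential surface with scikit-image's marching cubes:

```python
import numpy as np
from skimage.measure import marching_cubes  # scikit-image

rng = np.random.default_rng(0)

# Assumed setup: 20 source terms at random positions in a unit cube,
# each with a random positive decay constant a (this sets its "size").
n_sources = 20
centers = rng.uniform(0.0, 1.0, size=(n_sources, 3))   # (xi, yi, zi)
a = rng.uniform(10.0, 100.0, size=n_sources)           # random positive constants

# Sample the potential f on a regular grid.
n = 64
xs = np.linspace(0.0, 1.0, n)
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
f = np.zeros_like(X)
for (xi, yi, zi), ai in zip(centers, a):
    f += np.exp(-ai * ((X - xi) ** 2 + (Y - yi) ** 2 + (Z - zi) ** 2))

# Extract the constant-potential surface f = 0.5 (an arbitrary iso level)
# as a triangle mesh, e.g. for export to a renderer like Blender.
verts, faces, normals, values = marching_cubes(f, level=0.5,
                                               spacing=(xs[1] - xs[0],) * 3)
print(verts.shape, faces.shape)
```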
How do you compute the volume of a cat? Dunking it in water doesn't work: you get the volume of the rat-like creature that lives inside the cat, much like the feeble alien within a Dalek. (And if your answer had anything to do with contour integrals, get real.) Here is a method that works: using successive approximation, determine the smallest box that the cat will fully enclose itself in.
This cat is approximately 648 cubic inches in volume.
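In the same spirit, a tongue-in-cheek Python sketch of the successive-approximation method; the `cat_fits` oracle is hypothetical (you supply the cat and the boxes):

```python
def smallest_box_volume(cat_fits, lo=0.0, hi=24.0, tol=0.1):
    """Bisect on cube edge length (inches) to find the smallest box
    the cat will fully enclose itself in. `cat_fits(edge)` is a
    hypothetical oracle: offer the cat a box and see if it gets in."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if cat_fits(mid):
            hi = mid   # cat fits; try a smaller box
        else:
            lo = mid   # cat refused (or overflowed); need a bigger box
    return hi ** 3     # volume of the smallest accepted box

# A cat that fits any box with edge >= 8.66 in gives a volume a bit
# above 8.66**3, i.e. roughly 650 cubic inches.
print(round(smallest_box_volume(lambda e: e >= 8.66)))
```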
Related blog entry here
A dark night scene taken with a Pixel 4A using the "Night Sight" mode. This is "computational photography": a handheld mobile phone takes several seconds of multiple exposures, which are processed to render what the computer (AI) believes the scene looks like. Not perfect, but far better than anything I could get without a tripod.
Dr Gearenstein's new & improved Computation Terminal MK III, now 100% Tesla electric, no steam power required.
If academic disciplines are playing card suits, then Computer Science is the joker in the pack
www.cdyf.me/computing?q=joker#joker
Public domain image of the Jolly Joker, a vintage Masenghini Italian playing card via Wikimedia Commons w.wiki/35EW
Congratulations to Intel on their acquisition of Nervana. This photo is from the last board meeting at our offices; the Nervana founders — from right to left: Naveen Rao, Amir Khosrowshahi and Arjun Bansal — pondered where on the wall they may fall during M&A negotiations.
We are now free to share some of our perspectives on the company and its mission to accelerate the future with custom chips for deep learning.
I’ll share a recap of the Nervana story, from an investor’s perspective, and try to explain why machine learning is of fundamental importance to every business over time. In short, I think the application of iterative algorithms (e.g., machine learning, directed evolution, generative design) to build complex systems is the most powerful advance in engineering since the Scientific Method. Machine learning allows us to build software solutions that exceed human understanding, and shows us how AI can innervate every industry.
By crude analogy, Nervana is recapitulating the evolutionary history of the human brain within computing — moving from the logical constructs of the reptilian brain to the cortical constructs of the human brain, with massive arrays of distributed memory and iterative learning algorithms.
Not surprisingly, the founders integrated experiences in neuroscience, distributed computing, and networking — a delightful mélange for tackling cognitive computing. Ali Partovi, an advisor to Nervana, introduced us to the company.
We were impressed with the founding team and we had a prepared mind to share their enthusiasm for the future of deep learning. Part of that prepared mind dates back to 1989, when I started a PhD in EE focusing on how to accelerate neural networks by mapping them to parallel processing computers. Fast forward 25 years, and the nomenclature has shifted to machine learning and the deep learning subset, and I chose it as the top tech trend of 2013 at the Churchill Club VC debate (video). We were also seeing the powerful application of deep learning and directed evolution across our portfolio, from molecular design to image recognition to cancer research to autonomous driving.
All of these companies were deploying these simulated neural networks on traditional compute clusters. Some were realizing huge advantages by porting their code to GPUs; these specialized processors originally designed for rapid rendering of computer graphics have many more computational cores than a traditional CPU, a baby step toward a cortical architecture. I first saw them being used for cortical simulations in 2007. But by the time of Nervana’s founding in 2014, some (e.g., Microsoft’s and Google’s search teams) were exploring FPGA chips for their even finer-grained arrays of customizable logic blocks. Custom silicon that could scale beyond any of these approaches seemed like the natural next step. Here is a page from Nervana’s original business plan (Fig. 1 in comments below).
The march to specialized silicon, from CPU to GPU to FPGA to ASIC, had played out similarly for Bitcoin miners, with each step toward specialized silicon obsoleting the predecessors. When we spoke to Amazon, Google, Baidu, and Microsoft in our due diligence, we found a much broader application of deep learning within these companies than we could have previously imagined, from product positioning to supply chain management.
Machine learning is central to almost everything that Google does. And through that lens, their acquisitions and new product strategies make sense; they are not traditional product line extensions, but a process expansion of machine learning (more on that later). They are not just playing games of Go for the fun of it. Recently, Google switched their core search algorithms to deep learning, and they used DeepMind to cut data center cooling costs by a whopping 40%.
The advances in deep learning are domain independent. Google can hire and acquire talent and delight in their passionate pursuit of game playing or robotics. These efforts help Google build a better brain. The brain can learn many things. It is like a newborn human; it has the capacity to learn any of the languages of the world, but based on training exposure, it will only learn a few. Similarly, a synthetic neural network can learn many things.
Google can let the Brain team find cats on the Internet and play a great game of Go. The process advances they make in building a better brain (or in this case, a better learning machine) can then be turned to ad matching, a task that does not inspire the best and the brightest to come work for Google.
The domain independence of deep learning has profound implications on labor markets and business strategy. The locus of learning shifts from end products to the process of their creation. Artifact engineering becomes more like parenting than programming. But more on that later; back to the Nervana story.
Our investment thesis for the Series A revolved around some universal tenets: a great group of people pursuing a product vision unlike anything we had seen before. The semiconductor sector was not crowded with investor interest, and AI was not yet on many venture firms' lists of sectors of interest. We also shared with the team that we could envision secondary benefits from discovering their customers. Learning about the cutting edge of deep learning applications and the startups exploring the frontiers of the unknown held a certain appeal for me. And sure enough, there were patterns in customer interest, from an early flurry in medical imaging of all kinds to a recent explosion of interest in the automotive sector after Tesla's Autopilot feature went live. The auto industry collectively rushed to catch up.
Soon after we led the Series A on August 8, 2014, I found myself moderating a deep learning panel at Stanford with Nervana CEO Naveen Rao.
I opened with an introduction to deep learning and why it has exploded in the past four years (video primer). I ended with some common patterns in the power and inscrutability of artifacts built with iterative algorithms. We see this in biology, cellular automata, genetic programming, machine learning and neural networks.
There is no mathematical shortcut for the decomposition of a neural network or genetic program, no way to “reverse evolve” with the ease that we can reverse engineer the artifacts of purposeful design.
The beauty of compounding iterative algorithms — evolution, fractals, organic growth, art — derives from their irreducibility. (More from my Google Tech Talk and MIT Tech Review)
Year 1. 2015
Nervana adds remarkable engineering talent, a key strategy of the first mover. One of the engineers figures out how to rework the undocumented firmware of NVIDIA GPUs so that they run deep learning algorithms faster than off-the-shelf GPUs or anything else Facebook could find. Matt Ocko preempted the second venture round of the company, and he brought the collective learning of the Data Collective to the board.
Year 2. 2016 Happy 2nd Birthday Nervana!
The company is heads down on chip development. They share some technical details (flexpoint arithmetic optimized for matrix multiplies and 32GB of stacked 3D memory on chip) that give them 55 trillion operations per second on their forthcoming chip, and multiple high-speed interconnects (as typically seen in the networking industry) for ganging a matrix of chips together into unprecedented compute fabrics. 10x made manifest. See Fig. 2 below.
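To give a flavor of the arithmetic, here is a toy Python sketch of the general idea behind a shared-exponent format like flexpoint: integer mantissas per element with one exponent for the whole tensor, so matrix-multiply inner loops stay cheap integer operations. This is my illustration of the concept, not Nervana's actual spec:

```python
import numpy as np

def to_flex(x, mantissa_bits=16):
    """Quantize a tensor to a shared-exponent fixed-point format:
    one exponent for the whole tensor, integer mantissas per element.
    Illustrative only; the real flexpoint format differs in detail."""
    max_val = np.max(np.abs(x))
    # Choose the shared exponent so the largest element just fits.
    exp = int(np.ceil(np.log2(max_val))) - (mantissa_bits - 1)
    scale = 2.0 ** exp
    mantissas = np.clip(np.round(x / scale),
                        -(2 ** (mantissa_bits - 1)),
                        2 ** (mantissa_bits - 1) - 1).astype(np.int32)
    return mantissas, exp

def from_flex(mantissas, exp):
    """Reconstruct floating-point values from mantissas + shared exponent."""
    return mantissas.astype(np.float64) * 2.0 ** exp

x = np.random.randn(4, 4)
m, e = to_flex(x)
print(np.max(np.abs(x - from_flex(m, e))))  # small quantization error
```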
And then Intel came knocking.
With the most advanced production fab in the world and a healthy desire to regain the mantle of leading the future of Moore’s Law, the combination was hard to resist. Intel vice president Jason Waxman told Recode that the shift to artificial intelligence could dwarf the move to cloud computing. “I firmly believe this is not only the next wave but something that will dwarf the last wave.” But we had to put on our wizard hats to negotiate with giants.
The deep learning and AI sector has heated up in labor markets to relatively unprecedented levels. Large companies have recently been paying $6–10 million per engineer for talent acquisitions, and $4–5M per head for pre-product startups still in academia. The Master's students in a certain Stanford lab averaged $500K/yr for their first job offer at graduation. We witnessed an academic turn down a million-dollar signing bonus because they got a better offer.
Why so hot?
The deep learning techniques, while relatively easy to learn, are quite foreign to traditional engineering modalities. It takes a different mindset and a relaxation of the presumption of control. The practitioners are like magi, sequestered from the rest of a typical engineering process. The artifacts of their creation are isolated blocks of functionality defined by their interfaces. They are like blocks of magic handed to other parts of a traditional organization. (This carries over to the customers too; just about any product that you experience in the next five years that seems like magic will almost certainly be built by these algorithms).
And remember that these “brain builders” could join any industry. They can ply their trade in any domain. When we were building the deep learning team at Human Longevity Inc. (HLI), we hired the engineering lead from Google's Translate team. Franz Och pioneered Google's better-than-human translation service not by studying linguistics, grammar, or even speaking the languages being translated. He focused on building the brain that could learn the job from countless documents already translated by humans (UN transcripts in particular). When he came to HLI, he cared about the mission, but knew nothing about cancer and the genome. The learning machines can find the complex patterns across the genome. In short, deep learning expertise is fungible, and there are a burgeoning number of companies hiring and competing across industry lines.
And it is an ever-widening set of industries undergoing transformation, from automotive to agriculture, healthcare to financial services. We saw this explosion in the Nervana customer pipeline. And we see it across the DFJ portfolio, especially in our newer investments. Here are some examples:
• Learning chemistry and drug discovery: Here is a visualization of the search space of candidates for a treatment for Ebola; it generated the lead molecule for animal trials. Atomwise summarizes: “When we examine different neurons on the network we see something new: AtomNet has learned to recognize essential chemical groups like hydrogen bonding, aromaticity, and single-bonded carbons. Critically, no human ever taught AtomNet the building blocks of organic chemistry. AtomNet discovered them itself by studying vast quantities of target and ligand data. The patterns it independently observed are so foundational that medicinal chemists often think about them, and they are studied in academic courses. Put simply, AtomNet is teaching itself college chemistry.”
• Designing new microbial life for better materials: Zymergen uses machine learning to predict the combination of genetic modifications that will optimize product yield for their customers. They are amassing one of the largest data sets about microbial design and performance, which enables them to train machine learning algorithms that make search predictions with increasing precision. Genomatica had great success in pathway optimization using directed evolution, a physical variant of an iterative optimization algorithm.
• Discovery and change detection in satellite imagery: Planet and Mapbox. Planet is now producing so much imagery that humans can’t actually look at each picture it takes. Soon, they will image every meter of the Earth every day. From a few training examples, a convolutional neural net can find similar examples globally — like all new housing starts, all depleted reservoirs, all current deforestation, or car counts for all retail parking lots.
• Automated driving & robotics: Tesla, Zoox, SpaceX, Rethink Robotics, etc.
• Visual classification: From e-commerce to drones to security cameras and more. Imagen is using deep learning to radically improve medical image analysis, starting with radiology.
• Cybersecurity: When protecting endpoint computing & IoT devices from the most advanced cyberthreats, AI-powered Cylance is proving to be a far superior and adaptive approach versus older signature-based antivirus solutions.
• Financial risk assessment: Avant and Prosper use machine learning to improve credit verification and merge traditional and non-traditional data sources during the underwriting process.
• And now for something completely different: quantum computing. For a wormhole peek into the near future, our quantum computing company, D-Wave Systems, powered a 100,000,000x speedup in a demonstration benchmark for Google, a company that has used D-Wave quantum computers for over a decade now on machine learning applications.
So where will this take us?
Neural networks had their early success in speech recognition in the '90s. In 2012, the deep learning variant dominated the ImageNet competitions, and visual processing can now be done better by machine than human in many domains (like pathology, radiology, and other medical image classification tasks). DARPA has research programs to do better than a dog's nose in olfaction.
We are starting the development of our artificial brains in the sensory cortex, much like an infant coming into the world. Even within these systems, like vision, the deep learning network starts with similar low level constructs (like edge-detection) as foundations for higher level constructs like facial forms, and ultimately, finding cats on the internet with self-taught learning.
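To make "low level constructs (like edge-detection)" concrete, here is a minimal Python sketch: a naive 2-D convolution applied with a hand-built Sobel kernel, the sort of oriented edge filter that the first layer of a trained vision network tends to rediscover on its own. The image and kernel here are toy assumptions:

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' 2-D convolution, the core op in a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A Sobel kernel: a hand-designed vertical-edge detector, similar to
# the filters a trained network's first layer converges toward.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

image = np.zeros((8, 8))
image[:, 4:] = 1.0                  # a vertical light/dark boundary
edges = conv2d(image, sobel_x)
print(np.abs(edges).max(axis=0))    # response peaks at the boundary
```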
But the artificial brains need not limit themselves to the human senses. With the Internet of Things, we are creating a sensory nervous system for the planet, with countless sensors and data collectors proliferating everywhere. All of this "big data" would be a big headache but for machine learning to find patterns in it all and make it actionable. So not only are we transcending human intelligence with multitudes of dedicated intelligences, we are transcending our sensory perception.
And it need not stop there. It is precisely by these iterative algorithms that human intelligence arose from primitive antecedents. While biological evolution was slow, it provides an existence proof of the process, now vastly accelerated in the artificial domain. It shifts the debate from the realm of the possible to the likely timeline ahead.
Let me end with the closing chapter in Danny Hillis’ CS book The Pattern on the Stone: “We will not engineer an artificial intelligence; rather we will set up the right conditions under which an intelligence can emerge. The greatest achievement of our technology may well be creation of tools that allow us to go beyond engineering — that allow us to create more than we can understand.”
-----
Here is some early press:
Xconomy (most in-depth), MIT Tech Review, Re/Code, Forbes, WSJ, Fortune.
Computational domes. The design is generated with shape grammars and the construction is adapted with a catenary simulation. Scripted in Processing.
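For the curious, here is a rough idea of what a catenary simulation can look like; this sketch is in Python rather than Processing, and the node count, slack, and step counts are arbitrary choices of mine. It relaxes a hanging chain under gravity, then inverts the settled curve to get a self-supporting arch profile (the classic form-finding trick):

```python
import numpy as np

# Relax a hanging chain under gravity: damped verlet integration
# plus distance constraints between neighboring nodes.
n = 21
pos = np.stack([np.linspace(0.0, 1.0, n), np.zeros(n)], axis=1)
prev = pos.copy()
rest = (1.0 / (n - 1)) * 1.15          # 15% slack per segment
gravity = np.array([0.0, -1e-4])

for _ in range(4000):
    new = pos + 0.99 * (pos - prev) + gravity   # damped verlet step
    new[0], new[-1] = pos[0], pos[-1]           # pin the supports
    prev, pos = pos, new
    for _ in range(5):                          # project constraints
        for i in range(n - 1):
            seg = pos[i + 1] - pos[i]
            d = np.linalg.norm(seg)
            corr = 0.5 * (d - rest) / d * seg
            if i != 0:
                pos[i] += corr
            if i + 1 != n - 1:
                pos[i + 1] -= corr

arch = pos * np.array([1.0, -1.0])     # invert: hanging chain -> arch
print(f"rise of the arch: {arch[:, 1].max():.3f}")
```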
Timnit Gebru wants to replace the U.S. Census (which costs $1B/year to implement) by simply analyzing the cars seen in Google Street View images.
After processing 22 million observed cars, she found some fascinating things, like the predictive power of "the sedan/truck ratio" for political party. Republicans sure like trucks! More findings in the comments below.
From the AI in Fintech Forum today at Stanford ICME.
This is a Computational Fluid Dynamics (CFD) computer-generated model of the Space Shuttle during re-entry. CFD has supplanted wind tunnels for many evaluations of aircraft. As computing power increases and computer models become more sophisticated, CFD will become an increasingly more powerful tool for aeronautics research.
Credit: NASA
Image Number: L-1993-03205
Date: April 1, 1993
These components perform key computations for Tide Predicting Machine No. 2, a special purpose mechanical analog computer for predicting the height and time of high and low tides. The tide prediction formula implemented by the machine includes the addition of a series of cosine terms. The triangular metal pieces are part of slotted yoke cranks which convert circular motion to a vertical motion that traces a sinusoid. Each slotted yoke crank is connected by a shaft to a pulley, which causes the pulley to follow the sinusoidal motion. A chain going over and under pulleys sums each of their deflections to compute the tide. Along the top of the photo, connecting shafts drive slotted yoke cranks on both sides of the machine.
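The cosine sum the machine computes mechanically is the standard harmonic method, h(t) = H0 + Σ Hᵢ cos(ωᵢt + φᵢ). Here is a minimal Python sketch of that formula; the constituent speeds below are the standard astronomical values, but the amplitudes, phases, and mean level are made-up numbers for illustration (real values are fitted per port):

```python
import numpy as np

# Harmonic tide prediction: the same cosine sum Old Brass Brains
# computed mechanically, h(t) = H0 + sum_i H_i * cos(w_i * t + p_i).
constituents = [
    # name, speed (deg/hr), amplitude (ft), phase (deg)
    ("M2", 28.984, 2.3, 40.0),   # principal lunar semidiurnal
    ("S2", 30.000, 0.7, 65.0),   # principal solar semidiurnal
    ("K1", 15.041, 0.9, 110.0),  # lunisolar diurnal
    ("O1", 13.943, 0.5, 95.0),   # lunar diurnal
]

def tide_height(t_hours, mean_level=4.0):
    """Sum the cosine terms: one term per tidal constituent."""
    h = mean_level
    for _name, speed, amp, phase in constituents:
        h += amp * np.cos(np.radians(speed * t_hours + phase))
    return h

t = np.arange(0.0, 48.0, 0.5)          # two days, half-hour steps
heights = tide_height(t)
print(f"high {heights.max():.1f} ft, low {heights.min():.1f} ft")
```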
The U.S. government used Tide Predicting Machine No. 2 from 1910 to 1965 to predict tides for ports around the world. The machine, also known as “Old Brass Brains,” uses an intricate arrangement of gears, pulleys, chains, slides, and other mechanical components to perform the computations.
A person using the machine would require 2-3 days to compute a year’s tides at one location. A person performing the same calculations by hand would require hundreds of days to perform the work. The machine is 10.8 feet (3.3 m) long, 6.2 feet (1.9 m) high, and 2.0 feet (0.61 m) wide and weighs approximately 2,500 pounds (1134 kg). The operator powers the machine with a hand crank.
The National Oceanic and Atmospheric Administration (NOAA) occasionally displays the machine at its facility in Silver Spring, Maryland.
For my birthday Sunday night, I hosted a dinner at work for the Institute for Protein Design — the bastion of intelligent design at the University of Washington. We discussed their brand-new coronavirus vaccine, a therapeutic cure in development, and a variety of even more amazing biologics in the pipeline.
Oh, and we should expect that 30-70% of all of us will get the coronavirus this time around, and the rest next winter.
TL;DR: How I learned to stop worrying and love the bug.
David Baker and Neil King use large pools of computers (GPU banks and Rosetta@Home) to computationally design functional proteins, for example, to bind to specific invariant surface coat proteins of a target virus (to be robust to evolutionary countermeasures) or to create a self-assembling nanocage decorated with a floral arrangement of epitopes to trigger a B-cell response (i.e., more potent and broad-spectrum vaccines). Baker had just learned that their crash-effort on the new coronavirus vaccine worked, and it might be better than others. They got the genetic sequence while the outbreak was still limited to China and went from sequence to vaccine in 42 days.
• More info on their nanoparticle vaccines
• And a pre-print of their coronavirus strategy as recently applied for an HIV vaccine.
They are also computing novel nanoparticle binders to target sites on 2019-nCoV predicted to neutralize the virus or interfere with its ability to infect cells. This would be both a prophylactic and a therapeutic, and it might have a shorter regulatory approval path than the 18 months expected for novel vaccines (where safety studies are paramount, given the use in healthy people). They have had earlier success using this approach with influenza.
• Earlier examples
• They are also using proteins to selectively bind semiconductors and other interesting inorganics (more info)
So, if we are all going to get the virus, why not worry? Their advice if you are showing symptoms: just stay home and ride it out. It might be more dangerous to visit the hospital for testing. And certainly more stressful.
[addendum: the initial low death rates did not come for free. ICU availability made a big difference. See this post]