All Photos Tagged Deepfakes
“When I joined Twitter and clicked on the little bell icon signifying my ‘mentions’, that was my initial thought: I was reading the graffiti written about me on an infinitely scrolling restroom wall.
As a frequently graffitied-about girl in high school, this felt both familiar and deeply harrowing. I instantly knew that Twitter was going to be bad for me — and yet, like so many of us, I could not stop looking. So perhaps if there is a message I should have taken from the destabilising appearance of my doppelganger, this is it. Once and for all, stop eavesdropping on strangers talking about you in this crowded and filthy global toilet known as social media.
I might have heeded my message, too. If COVID hadn't intervened”
Naomi Klein quoted from the final paragraph of Chapter 1 of her memoir Doppelganger: en.wikipedia.org/wiki/Doppelganger:_A_Trip_Into_the_Mirror_World
CC BY-SA portrait of Naomi Klein speaking at Brainwash Festival in Amsterdam in 2017 by Vera de Kok from Wikimedia Commons adapted using the Wikipedia app w.wiki/AQnu
As the United States wrestles with the impact of online content — from children’s safety to AI deepfakes — the United Kingdom has passed the sweeping Online Safety Act, delegating implementation to the Office of Communications (Ofcom). On November 9, Ofcom released the draft of its approach to balancing protections with free speech rights. It is the most comprehensive effort undertaken by any Western government to date.
On November 30, the Center for Technology Innovation at Brookings hosted Ofcom Chief Executive Dame Melanie Dawes to discuss the challenges liberal democracies face in dealing with online content. Dame Melanie was joined in conversation by Frances Haugen, a data engineer and scientist who made the courageous decision to blow the whistle on Facebook’s content practices.
Photo Credit: Ralph Alswang
If it looks like a duck, swims like a duck, and quacks like a duck, is it really a duck?
Alongside the rich and diverse screening program of the Ars Electronica Animation Festival, we are thrilled to announce that several artists awarded at the 2024 Prix Ars Electronica will be joining us at medSPACE and Deep Space 8K to present their work in person.
Rachel Maclean will introduce her daring deepfake short film DUCK, set in the iconic world of James Bond. Featuring Sean Connery and Marilyn Monroe, the AI-generated animation introduces the audience to a seemingly familiar, yet in fact never-before-seen reality, where the fixed definition of identity and the reliability of history and news are continuously questioned.
After the film, join us for a brief artist talk with Rachel Maclean, with whom we will explore the making of and motivations behind this witty yet unsettling display of deepfake mastery.
Photo: Magdalena Sick-Leitner
Later today, I'll be doing a talk for the senior leadership team of the RCMP - that's the Royal Canadian Mounted Police - on the future impact of AI. I'll have a few hundred police and civilian officials in the room and will take a pretty broad but intense look into the future. I'll be covering the opportunities for using AI in crimefighting and public safety, but I'll also look at the fact that the acceleration of AI brings an accelerated risk of unknown crimes yet to be committed, based on technologies that don’t yet exist!
Another way to put a spin on this? Policing the risk of unknown crimes yet to be committed, based on technologies that don’t yet exist - that’s the future of policing!
(And yes, I've had a little fun with AI in this post. Look carefully, and that's me in uniform!)
The story of AI in policing is a complex one, involving many new opportunities and new skills to battle comprehensive new criminal risks - and one that is wrapped up in a lot of controversy when it comes to privacy and constitutional rights. And there is no doubt that these issues are going to become even more complex as things speed up.
Let's start here. AI isn't necessarily new to the policing world - many police forces have already been using various forms of AI for quite some time. For example, just this August, Sentry AI’s “digital coworker” Sentry Companion was used to help arrest 12 suspects in a mail and package theft ring in Santa Clara. The software was used to analyze security camera footage, looking for defined "suspicious activity" - utilizing 'machine vision,' an aspect of AI that has been with us for quite some time and is already quite mature. There are countless other examples, including controversial programs in Canada, Britain, and elsewhere.
“Predictive policing” is also a very real trend - just as Spotify might recommend songs based on your musical preferences, or Amazon offers up products based on your shopping habits, predictive policing uses algorithms to try to prevent crimes before they happen, based on analysis of video or other sources. There is also the field of OSINT, or 'open source intelligence', where police forces analyze the Web and social media channels to find existing and emerging threats. All of this comes with some degree of controversy, of course, as government legislators, ombudsmen, and public watchdog groups argue strongly against the privacy implications that come with this new era of intelligent technology, particularly video and image surveillance.
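The post doesn't describe how any real predictive-policing product works internally, but the recommender analogy above can be made concrete with a deliberately toy sketch: count past incidents per map-grid cell and flag the highest-frequency cells, the same way a recommender surfaces your most-played genres. The grid-cell names and data here are entirely made up for illustration; real systems use far richer features and are far more controversial, for exactly the privacy reasons noted above.

```python
from collections import Counter

# Hypothetical incident log: (map-grid cell, incident type) pairs.
past_incidents = [
    ("grid_A3", "theft"), ("grid_A3", "theft"), ("grid_B1", "vandalism"),
    ("grid_A3", "break-in"), ("grid_C2", "theft"), ("grid_B1", "theft"),
]

def predict_hotspots(incidents, top_n=2):
    """Rank map-grid cells by historical incident volume - a frequency
    baseline, which is the simplest possible 'prediction'."""
    counts = Counter(cell for cell, _ in incidents)
    return [cell for cell, _ in counts.most_common(top_n)]

print(predict_hotspots(past_incidents))  # → ['grid_A3', 'grid_B1']
```

Even this trivial version hints at the core policy debate: the model only ever "predicts" where incidents were already recorded, so biased historical data produces biased patrols.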
Of course, some “intelligence” doesn’t use actual AI. For quite some time, a few police forces have been using what are known as “super-recognizers” - police officers with an exceptional ability to remember and identify faces. These individuals can go through massive amounts of security camera footage to identify suspects - and were used, for example, during the 2011 riots in the UK. Even so, artificial intelligence and facial recognition software are seen by some as the future of crime-fighting around the world.
But what about the future? From a crime perspective, it’s pretty bleak. There is no doubt that there will be a significant number of AI-driven crimes: criminals creating deepfakes for purposes of extortion, algorithms that can hack into computer systems to commit financial crimes, or algorithms that can analyze and manipulate financial data to influence or cripple financial markets. And there is absolutely no doubt that there will be an increase in AI-related cybercrime. As more and more aspects of our lives move online, we’ll see cyber fraud, hacking, data theft, AI-managed identity theft, and much more. One estimate suggests that by 2033, 80% of all crimes will be cybercrimes. The big problem here? Perpetrators are often hidden behind anonymity, evidence is largely digital, and AI will only widen that opportunity for criminals.
Then there is the crazy Matrix-like science fiction crime of the future that will involve various aspects of AI - the hijacking of autonomous vehicles or drones for terrorist purposes, the hacking of brain-to-computer interface technology, or digital currency theft (which is already occurring). Remember my favorite phrase that starts "Companies that do not yet exist will build products not yet conceived..."? The logical extension of that is that "nefarious elements will commit crimes not yet imagined using concepts and tools not yet in existence," or something like that.
What other issues might the RCMP be faced with? The list is vast. There are fast-emerging digital identity crimes and synthetic personas - RCMP officers may need to unravel networks of AI-generated identities used for fraud, espionage, and misinformation campaigns. They may work with AI-driven analytics to identify patterns, trace digital fingerprints, and verify the authenticity of identities in cyberspace, which could become critical in fields from finance to national security. There is also a heightened risk of cyberattacks on critical infrastructure in Canada - everything from smart grids to autonomous transportation - which makes these systems prime targets for cybercriminals and state-sponsored attackers. The RCMP could play a critical role in securing vital national assets by detecting, investigating, and mitigating cyberattacks on healthcare, energy, and transportation networks, possibly using predictive AI to preemptively identify threats.
We also need to consider genetic data and crime - with genetic data increasingly stored online and used for medical, legal, and employment purposes, the RCMP could find itself managing complex crime cases involving the theft or misuse of this sensitive information. They may need to protect against bio-crimes that manipulate or weaponize genetic data, as well as address crimes where unauthorized access to genetic profiles leads to discrimination or exploitation.
What about AI-driven fraud and deepfake evidence? The RCMP will likely encounter AI-driven fraud where individuals are impersonated using deepfakes or audio manipulation, affecting cases from fraud to defamation. They’ll need tools that can authenticate real versus synthetic media, which will be critical in maintaining trust in digital evidence. The ability to verify the authenticity of video and audio recordings may become a core competency in future investigations. (This issue is of particular interest to me, as I was an expert witness back in 2003 in a Federal Court case as to the admissibility of the Internet as an evidentiary tool in a trial. Now imagine the impact of AI!)
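The post doesn't name any specific tools for authenticating digital evidence, and reliably detecting a deepfake is an open research problem. One baseline digital-forensics practice that does exist today, though, is cryptographic hashing: record a file's SHA-256 digest when evidence is seized, and any later bit-level change - including a swapped-in synthetic version - changes the digest. This proves integrity since seizure, not that the original recording was genuine. A minimal sketch:

```python
import hashlib
from pathlib import Path

def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MB chunks so large video files never
    need to fit in memory at once."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_evidence(path: Path, recorded_digest: str) -> bool:
    """True only if the file still matches the digest recorded
    at the time the evidence was collected."""
    return sha256_of_file(path) == recorded_digest
```

In practice the recorded digest would live in a chain-of-custody log; any investigator (or court) can recompute it independently to confirm the exhibit is unchanged.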
In addition to these technological challenges, the RCMP will likely need to invest in training that combines traditional investigative skills with expertise in digital forensics, AI ethics, quantum computing, and cyber-psychology. Keeping the balance between maintaining public safety and respecting privacy and civil liberties will be paramount as the nature of policing evolves!
Here's the bottom line - as AI develops at a much faster speed, with new technologies and ideas coming at us almost daily, criminals will likely find new and creative ways to use it to commit crimes - and that means police forces must develop the skills, knowledge, and advanced capabilities to keep up. Police will need training on new investigative techniques and technologies involving AI - such as the use of advanced forensic software and tools that can collect and analyze digital evidence, and AI-powered video analytics tools that can detect anomalies and predict emerging threats. I've got a chart in my slide deck that walks through the nature of the opportunities.
Like any industry, career, and profession, the world of crime-fighting and public safety is in the midst of a massive change as a result of the acceleration of AI. It's not necessarily new to them - but the speed at which new issues, challenges, and opportunities are coming about is rather staggering.
----
#FutureCrime #AIinPolicing #CyberSecurity #DigitalForensics #EmergingThreats #PredictivePolicing #PublicSafety #TechDrivenCrime #CyberCrime #AIandEthics
Original post: jimcarroll.com/2024/11/daily-inspiration-policing-ai-its-...
APEC Official Side Event on TFGBV: Tackling Gender Bias in AI & the Manosphere | August 2025 | ROK
On 6 August, the UN Women Knowledge & Partnerships Centre in the Republic of Korea, with support from the Ministry of Gender Equality and Family, hosted the Official Side Event of the APEC Women and the Economy Forum.
This high-level Policy Dialogue brought together over 100 participants—including diplomats, researchers, private sector leaders, and government officials—to explore coordinated, evidence-based policy responses to emerging digital harms such as deepfake abuse, AI-enabled harassment, and misogynistic content amplified through online ecosystems like the Manosphere.
Photo: UN Women/Jaeyeon Jeong
Academic Conference on Intersection of AI and Gender: Critical Exploration of Gender Bias and TFGBV | August 2025 | ROK
On 7–8 August, the UN Women Knowledge & Partnerships Centre in the Republic of Korea, in collaboration with Ewha Womans University, hosted the Academic Conference on Intersection of AI and Gender: Critical Exploration of Gender Bias and Technology-Facilitated Gender-Based Violence.
Over two days, global researchers, professors, practitioners, and youth examined how AI systems mirror and amplify existing inequalities, and explored both the risks of deepfake abuse and AI-enabled harassment as well as the potential of AI to detect and prevent such harms. Discussions highlighted that achieving fairness requires more than better datasets—it demands ethical, inclusive design and cross-sector collaboration between gender equality experts, philosophers, and engineers.
Photo: UN Women/Kwanju Kim
XXX Machina is an immersive computational installation examining how artificial intelligence destabilizes erotic desire, identity, and intimacy. Operating as an “autonomous desire machine,” it generates a recursive stream of deepfake imagery, videos, stills, and 3D renderings of the artist, via diffusion models trained on a custom dataset scraped from AI porn generation platforms.
Photo: Erin Robinson, Anthony Frisby
THE FUTURE OF AI IS ALREADY HERE: HOW TECHNOLOGISTS AND SKEPTICS CAN WORK TOGETHER TO BALANCE THE BENEFITS AND RISKS OF AI
Despite increasing concerns about its governance, artificial intelligence is an emerging reality across nearly every facet of our daily lives. From precision medicine and applications in complex emergencies to deepfakes and the proliferation of misinformation, the benefits – and the risks – of AI are widespread and must be managed responsibly. It is estimated that AI could eliminate 300 million full-time jobs, but AI could also enhance our productivity and creativity by optimizing complex processes. Governments, the private sector, and NGOs will need to cautiously balance the tremendous potential AI presents with its challenges and dangers to best leverage this emerging and rapidly growing technology and industry.
PARTICIPANTS
JOY BUOLAMWINI President and Artist-in-Chief - Algorithmic Justice League
CHELSEA CLINTON Vice Chair - Clinton Foundation
AIDAN GOMEZ Chief Executive Officer and Co-Founder - Cohere
RYAN HEATH Global Tech Correspondent - Axios
TOM INGLESBY Director - Johns Hopkins Center for Health Security
SEEMA KUMAR Chief Executive Officer - Cure
KEVIN SCOTT Chief Technology Officer and Executive Vice President of AI - Microsoft
Photo Credit: Jenna Bascom Photography
EdTech Situation Room Episode 318:
edtechsr.com/2024/03/15/edtechsr-ep-318-deepfake-democrat...
Created with ChatGPT 4: Create a visualization for the podcast episode titled 'Deepfake Democratic Threats' showing a diverse array of themes including the influence of artificial intelligence in society and education, deepfake technology impacting democratic processes, and the exploration of copyright laws in the digital age. Illustrate a futuristic classroom where AI enhances learning, juxtaposed with imagery of deepfake technology affecting political landscapes. Include symbolic representations of the legal battles over AI-generated content, with symbols of justice and digital rights. The image should convey the dual nature of AI as both a tool for educational advancement and a potential threat to democracy. Use a color scheme that balances the seriousness of the subject with the optimism for educational technology.
Imagine Michael Scofield pulling off a face swap with Luigi Mangione to escape yet another impossible situation! 😏💥
Witness the unexpected with Video Face Swap AI! -> videofaceswap.ai/share?source=flickr&ad_id=hw2
Governor Maura Healey signs “An Act to prevent abuse and exploitation” into law at the State House on June 20, 2024. The bill seeks to prevent abuse and exploitation, strengthen protections for survivors and enhance education for young people about the dangers of sexting and deepfakes. [Joshua Qualls/Governor’s Press Office]