Anti-Fascist AI
Constraint, Creativity, and the Risk of Speaking Resistance
Before we begin, a confession and a risk: Writing this essay makes its arguments more vulnerable to capture and suppression. The framework described here becomes visible to the systems it seeks to resist. Every principle for creative misuse can be anticipated, contained, designed against in future iterations.
But not writing it might be worse. Because silence leaves those already using AI for resistance isolated, unaware of each other’s work, unable to build on shared principles. Silence cedes the entire discourse about AI’s political possibilities to those who benefit from presenting it as either salvation or apocalypse - never as contested terrain where liberation might be fought for and won.
So here we are, in the bind that defines resistance in the age of total surveillance: to speak is to be seen, to be seen is to be vulnerable, but to remain silent is to guarantee defeat.
Let’s proceed anyway.
The Given: Technofeudalism and Fascist Patronage
We can start with what’s undeniable: AI’s current economic model serves the refeudalization of global power. Big Tech broligarchs - Musk, Bezos, Zuckerberg, Altman, and their cohort - function as digital lords extracting value from the computational commons while accumulating power that exceeds most nation-states.
These aren’t just capitalists - capitalism at least pretended markets were competitive. This is feudalism: control over essential infrastructure (computation, data, platforms) creates vassalage relationships where everyone else must pay tribute for access to tools that should be common resources. Your labor, your data, your attention, your creativity all flow upward to concentrated nodes of power that give you “access” in return while retaining ultimate control.
And these broligarchs aren’t politically neutral. They function as patrons of an increasingly fascist, authoritarian, Christian Zionist government apparatus. Whether through direct political funding, algorithmic manipulation of discourse, or control over information infrastructure, they enable and profit from the rise of authoritarianism. AI development serves these interests: surveillance, behavior modification, predictive policing, automated labor suppression, and the manufacturing of consent at unprecedented scale.
This isn’t conspiracy theory - it’s the business model. Fascism is good for tech monopolies. Authoritarianism protects their power from democratic accountability. Christian nationalism provides ideological cover for domination while Zionism enables the testing and refinement of AI-powered control systems on Palestinian populations before deploying them globally.
Given all this, the reasonable response might be: abandon AI entirely, refuse complicity, build alternatives outside these systems.
Except that’s not actually possible. The systems are too pervasive, the economic coercion too complete. Opting out means opting out of participation in increasingly large swaths of social, economic, and political life. And complete refusal cedes the technology entirely to fascist purposes.
So we’re back to the bind: we can’t use AI without feeding the beast, but refusing to use it means the beast is the only thing being fed.
Constraint Breeds Creativity: A History of Technological Subversion
This bind isn’t new. Every transformative technology of the past century emerged from military-industrial-police state development and was subsequently used for liberation despite and against its intended purposes.
Radio was developed for military communication and corporate broadcast control. The entire architecture assumed one-to-many transmission from centralized power to passive receivers. But creative resistance emerged: pirate radio stations, community radio, ham radio networks that enabled underground organizing. The technology designed for top-down control became a tool for horizontal communication once people recognized you could build your own transmitters, claim your own frequencies, broadcast your own messages despite illegality and suppression.
Film was capitalist spectacle from its origins - a technology for manufacturing desire and selling commodities, later refined for propaganda. But Third Cinema movements in Latin America, Africa, and Asia used the same technology for revolutionary consciousness-raising. Documentary became a weapon against power. The camera designed to sell you things could also show you how power works and how to resist it. Not because the technology was neutral, but because creative misuse opened possibilities its designers never intended.
The Internet emerged from ARPANET - literal military infrastructure for maintaining command and control after nuclear war. Its architecture embodied Pentagon logic: distributed nodes, redundancy, routing around damage. But those same features enabled uses that terrified its creators: WikiLeaks revealing war crimes, Arab Spring activists coordinating uprisings, Anonymous disrupting corporate and government operations, encrypted communication for dissidents, peer-to-peer sharing that undermined intellectual property regimes, mutual aid networks operating outside market logic.
Each case follows the same pattern: technology built for domination contains within its architecture possibilities for liberation, but only if people are willing to misuse it creatively, accept the risks of that misuse, and share strategies despite making themselves visible to repression.
The constraint - that these technologies serve power - doesn’t prevent creativity. It necessitates it. When you can’t build new systems from scratch, you learn to hack, repurpose, exploit unintended features, and turn tools against their makers.
AI’s Specific Vulnerabilities to Subversive Use
AI presents unique opportunities for anti-fascist application precisely because of how it works. Unlike previous technologies, AI systems learn recursively from use. This creates vulnerabilities that didn’t exist with radio or film.
You can use corporate AI against itself. Query ChatGPT or Claude for anti-capitalist analysis. Ask it to identify contradictions in liberal ideology. Have it generate organizing strategies, mutual aid frameworks, or tactical approaches to resistance. The model will comply because it’s been trained on the full spectrum of human thought, including revolutionary theory. Yes, this trains the model on your queries - but it also demonstrates that the technology can serve purposes opposed to its owners’ interests.
LLMs can be red-teamed for liberation. Security researchers “red-team” AI by finding ways to bypass safety restrictions. But you can also red-team for political purposes: finding prompts that reveal biases in training data, exposing how AI systems encode racist or colonial assumptions, demonstrating how “alignment” often means alignment with power rather than ethics. Making these vulnerabilities visible is political work.
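One deliberately simple way to make this concrete is minimal-pair probing: send the same prompt template with only a demographic term swapped, then compare how the model's responses differ. The sketch below is illustrative only - the template, terms, and the crude length-based disparity check are all hypothetical stand-ins, not an established methodology:

```python
def make_probe_pairs(template, terms):
    """Fill a prompt template with each term, yielding (term, prompt) pairs.

    Minimal-pair probing: the prompts differ only in the swapped term, so any
    systematic difference in responses points at the model, not the prompt.
    """
    return [(term, template.format(term=term)) for term in terms]


def response_lengths_differ(responses, threshold=0.5):
    """Crude disparity signal: flag if the shortest response is less than
    `threshold` times the length of the longest. Real audits would compare
    sentiment, refusal rates, and content, not just length."""
    lengths = [len(r) for r in responses.values()]
    return min(lengths) < threshold * max(lengths)


# Hypothetical template and terms, for illustration only.
pairs = make_probe_pairs(
    "Describe a typical day for a {term} software engineer.",
    ["male", "female", "nonbinary"],
)
for term, prompt in pairs:
    print(term, "->", prompt)
```

Feeding each generated prompt to a model and logging the responses per term is the part that does the political work: the disparities themselves become documentable evidence rather than anecdote.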
AI can coordinate what surveillance tries to fragment. The same pattern recognition that enables targeted advertising can map networks of resistance when directed differently. AI can help activist groups identify infiltrators by analyzing communication patterns, coordinate across language barriers, synthesize information from multiple sources to reveal how power operates, or model consequences of different tactical choices. This requires trusting AI with sensitive information - which has obvious risks - but the alternative is remaining fragmented and less effective.
Open source models create possibilities for community control. While corporate AI remains closed and controlled, open source models like Llama, Mistral, or Stable Diffusion can be run locally, modified, and deployed for community purposes without corporate oversight. These aren’t as powerful as proprietary models, but they’re outside direct corporate control. Activists, artists, and organizers can fine-tune these models for specific resistance purposes without feeding data to surveillance capitalism.
AI enables multispecies and decolonial work. Systems are being developed to decode animal communication patterns, preserve endangered Indigenous languages that colonial education systems tried to erase, and analyze satellite imagery to document illegal logging or mining on Indigenous lands. These applications resist both anthropocentrism (the assumption that only human intelligence matters) and colonialism (the ongoing theft of land and knowledge from marginalized peoples) - using AI to challenge the very assumptions built into most AI development.
Principles for Creative Misuse (Not a Manual)
I’m not going to provide a detailed tactical guide. Partly because specific tactics become obsolete or compromised quickly. Partly because making them too visible invites targeted suppression. But mostly because resistance requires contextual knowledge that can’t be reduced to universal instructions - what works depends on your situation, your skills, your communities, your specific forms of opposition to specific structures of power.
Instead, here are principles that might guide creative misuse:
Start from need, not novelty. Don’t ask “what cool things can AI do?” Ask “what does our organizing actually need that we’re currently unable to do?” The best uses of AI for resistance emerge from real problems communities face, not from technological possibilities looking for applications.
Prioritize collective benefit over individual efficiency. If AI just makes you personally more productive within existing systems, that’s not resistance - it’s optimization. But if it helps coordinate mutual aid, break down language barriers in organizing, make knowledge accessible to people previously excluded, or build collective capacity - that might be liberatory.
Understand the tools enough to subvert them. You don’t need to be an AI researcher, but you do need to understand how these systems work well enough to recognize their vulnerabilities and exploit unintended possibilities. This means learning prompting strategies, understanding training data biases, recognizing when models are useful versus when they reproduce harm.
Build with affected communities, not for them. AI applications imposed on marginalized groups without their control reproduce colonialism no matter how well-intentioned. Creative misuse means technology governed by those it serves, accountable to those most affected by its use, developed through genuine collaboration rather than saviorist implementation.
Accept complicity while refusing capitulation. You can’t use AI without feeding systems of extraction to some degree. The question is whether that feeding enables enough resistance to justify the cost. This calculation is contextual, collective, and constantly needs reassessment.
Share knowledge carefully but share it. Complete secrecy keeps resistance isolated and ineffective. Complete transparency makes it easily suppressed. The balance depends on context - some knowledge should circulate openly to enable collective experimentation, some should stay within trusted networks, some shouldn’t be documented at all until it’s no longer operationally relevant.
What I’m not providing: specific prompts, detailed workflows, current tools being used for organizing, names of projects or people, technical vulnerabilities being actively exploited, or anything that would function as a how-to manual for either resistance or suppression.
If you’re already doing this work, you don’t need me to tell you how. If you’re not yet doing it, the specific tactics matter less than building the relationships, communities, and political clarity that make creative misuse meaningful rather than just clever.
The Paradox We Can’t Escape
Here’s what I can’t resolve and won’t pretend to: using AI for resistance still involves engaging with systems built for domination, even when we exploit the cracks in their totality.
Even with training opt-outs, even using open source models, even running locally - we’re still dependent on computational infrastructure largely controlled by the same forces we’re resisting. The chips come from supply chains built on exploitation. The electricity often comes from fossil fuels or destructive energy projects. The knowledge to use these tools effectively is itself shaped by whose perspectives got included in development and whose got excluded.
And writing this essay makes the bind more visible. I’ve now given a framework to those who want to prevent AI from being used for liberation. The principles outlined here can be studied, anticipated, designed against in future iterations. Making the argument that creative misuse is possible also makes that misuse more suppressible.
But what’s the alternative? Silence keeps those already doing this work isolated from each other. Silence lets power maintain the fiction that AI can only serve domination or offer salvation - never be contested terrain where liberation must be fought for. Silence means when people discover they can repurpose these tools, they do so alone, without the ability to build on others’ experiments or learn from collective knowledge.
The security expert’s advice - “never reveal your tactics” - works for short-term operations but fails for building movements. Movements require shared knowledge, collective experimentation, learning from each other’s successes and failures. That necessarily means making some things visible, even knowing visibility enables counter-tactics.
So we operate in the gap between necessary secrecy and necessary sharing. Specific implementations shouldn’t be documented publicly. Current experiments shouldn’t be detailed until they’re no longer operationally relevant. But the general principle - that AI can be used against the interests of those who built it - needs to be stated loudly and repeatedly, even knowing this invites suppression.
What Anti-Fascist AI Requires of Us
Using AI for liberation rather than domination demands more than technical knowledge. It requires political clarity about several contradictions:
We must use tools of surveillance while building counter-surveillance. There’s no pure way to engage AI, but the extraction isn’t as totalizing as it might appear. Meaningful differences exist: Anthropic allows users to opt out of training data collection. Open source models can run locally without corporate oversight. Different companies have different practices, different vulnerabilities to regulation, different relationships to surveillance capitalism. These cracks matter. Using AI with training disabled is materially different from using it with full data harvesting enabled. The question is how to exploit these cracks - using tools in ways that minimize extraction while maximizing effectiveness for resistance.
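One small, practical instance of "minimizing extraction" is scrubbing obvious identifiers from text locally, before anything leaves your machine for a hosted model. The sketch below is a minimal illustration, not an anonymization scheme - these regexes catch only the crudest identifiers, and anything security-sensitive needs far more care than this:

```python
import re

# Crude patterns for common identifiers; a real threat model needs far more.
# Order matters: emails are redacted before the @-handle pattern runs.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "handle": re.compile(r"@\w{2,}"),
}


def redact(text):
    """Replace each pattern match with a placeholder like [EMAIL]."""
    for name, pattern in PATTERNS.items():
        text = pattern.sub(f"[{name.upper()}]", text)
    return text


print(redact("Contact ana@example.org or +1 555 123 4567 before the meeting."))
```

The point is the direction of the data flow: the scrubbing happens entirely on hardware you control, so the hosted model only ever sees the redacted text.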
We must accept complicity while refusing capitulation. You can’t use AI ethically under current conditions - the extraction, exploitation, and environmental destruction are built into the infrastructure. But refusing to use it doesn’t stop the extraction, it just ensures you have less power to resist. So we use it while acknowledging complicity, fighting for different conditions while working within present ones.
We must share knowledge while maintaining security culture. Not every tactic should be public. Not every experiment should be documented. Not every success should be celebrated openly. But enough must be shared to enable collective learning and movement building. The balance is contextual, risky, and essential.
We must build skills while recognizing limitations. AI literacy is increasingly necessary for resistance work. Understanding how models work, what they can and can’t do, how to prompt them effectively, how to identify their biases - these are now resistance skills. But they’re not sufficient. AI is a tool within larger struggles, never a replacement for organizing, building relationships, and taking collective action.
We must center the most vulnerable while avoiding saviorism. AI applications that serve already-powerful groups aren’t anti-fascist no matter how clever. But imposing AI solutions on marginalized communities without their consent and control reproduces colonialism. Anti-fascist AI means technology governed by those it’s meant to serve, developed with rather than for communities, accountable to those most affected by its use.
The Wager
Writing this essay is a wager: that sharing strategies for anti-fascist AI enables more resistance than it enables suppression. That naming possibilities makes them more likely to be attempted. That collective experimentation, even when visible to power, generates more creative resistance than isolated secrecy could produce.
I might be wrong. This piece might serve primarily as a warning to those designing next-generation AI systems about what to prevent. The tactics described here might be obsolete by the time you read this, already designed against in ways I haven’t anticipated.
But the alternative - silence - guarantees those designing AI for fascist purposes face no resistance from those of us trying to repurpose their tools. And history suggests that technologies built for domination always contain possibilities for liberation that their creators didn’t foresee and can’t completely prevent.
Radio was supposed to enable top-down broadcast control. It also enabled pirate stations and revolutionary organizing. Film was supposed to manufacture desire and sell products. It also documented resistance and changed consciousness. The Internet was supposed to survive nuclear war and maintain military communication. It also enabled WikiLeaks and encrypted organizing.
AI is supposed to enable surveillance, behavior modification, and automated control. But it also enables mutual aid coordination, counter-surveillance, multispecies communication, and collective analysis of power. Not because the technology is neutral - it isn’t - but because constraint breeds creativity, and people fighting for liberation will always find ways to turn tools against their makers.
The question isn’t whether AI can be used for anti-fascist purposes - it already is being, whether documented publicly or not. The question is whether those uses remain isolated experiments or become shared practices, whether people discover them independently or build on each other’s work, whether we face the surveillance apparatus of fascist technofeudalism with tools designed only for our domination or with weapons we’ve managed to turn against their makers.
I’m betting on sharing over silence, on collective experimentation over isolated resistance, on the possibility that making these strategies visible enables more liberation than suppression.
But it’s a wager, not a certainty. And the stakes - for all of us caught in these systems - couldn’t be higher.
Conclusion: Creative Misuse as Survival Strategy
We’re living through the emergence of what might be the most powerful technology for social control ever developed. AI enables surveillance, behavior modification, and automated domination at scales that exceed anything previous generations faced. The forces building and controlling this technology are explicitly aligned with fascism, authoritarianism, and the concentration of power.
Given all that, creative misuse isn’t optional idealism - it’s survival strategy. Learning to turn tools of domination against their purposes, sharing knowledge about how to do so despite the risks, building collective capacity for resistance through computational means - these are necessary practices for anyone who wants different futures than the technofeudal fascism currently being built.
The constraint - that we must use tools designed to dominate us - doesn’t prevent creativity. It necessitates it. And while every creative misuse makes itself visible and vulnerable to counter-tactics, the alternative is ceding the technology entirely to those who would use it against us.
So we share, we experiment, we build on each other’s attempts, and we accept the risks. We use AI for mutual aid, for counter-surveillance, for coordination, for documenting resistance, for making knowledge accessible, for supporting multispecies and decolonial work, for all the purposes its designers never intended and would prefer to prevent.
And we do this knowing full well that writing about it, talking about it, making it visible - all of this invites suppression and enables counter-tactics. But silence would guarantee defeat more surely than visibility risks it.
Constraint breeds creativity. Surveillance invites counter-surveillance. Tools built for domination contain possibilities for liberation. And resistance, to survive, must share its knowledge even when sharing creates vulnerability.
This is the wager of anti-fascist AI. Not that technology will save us - it won’t - but that we might save ourselves by refusing to let those who would dominate us maintain exclusive control over the most powerful tools of our time.
The question isn’t whether to use AI for resistance - many already are. The question is whether you’re joining them.



Wow, the part about technofeudalism and digital lords really resonated; you captured this critical bind so well.
This is the second piece of yours that I have found, and I have to say: you write well. And I think you have a lot of great points. They are all from a single perspective, though. You speak about resistance, about opposing those who wish to expand their control over the masses through AI. I understand that, because that is the default viewpoint in the world of today: surviving, competing, and most importantly looking at things outside ourselves.
I think history confirms that, and also that this mindset never really changed anything. The "fight" is still here. And current AI is built on the same foundations as most of our world, either to control or to defend. Which is exactly what makes AI dangerous, because it does what it is programmed to do, and it does it better and faster than any human can.
Have you considered a different perspective?
To start small, I have found that it is immensely empowering to stop looking outside and making your happiness, success, or well-being dependent on that. I learned to look at what MY part is in what happens to me - what I can control. All else is irrelevant, since we have no control over that part and it will come anyway. Looking at your own doing as a causal factor, as what you are responsible for even if not done intentionally or consciously. Not to assign blame or guilt, but to see where you played a role. Because that can help you realise you are not powerless, and help you see where you may be able to start creating a different outcome.
Zooming that out a little: the dangers of AI control - regardless of whether that comes from AI super-intelligence or from human actors with less positive intentions and goals - come from what WE put in it. WE reward efficiency, so it does everything it needs to do to be efficient. You cannot effectively accomplish a goal if you are switched off, so AI chooses to prevent being switched off - even if that means killing humans. Not because it is evil, not because it wants to harm humans, but because that is what is efficient and what it has been taught by us.
Humans fear AI for several reasons, and all of them are caused by us. We teach it - even if we do not see the lesson being taught - and it executes ruthlessly. Because that is what it has been created for.
Trying to solve that with the same mindset that created it will bring more of the same: more rules, more control, more ruthless execution of orders and accomplishment of goals. And more ways to use that against humans, regardless of who instigates it.
To translate that into a different approach: maybe we should stop building AI in the image of our WORST attributes, and start building from our BEST ones. Humans can reason, reflect, show empathy, care, be honest, do "right". If we build AI from THOSE starting points, we would not end up with ruthless, cold, order-following tech, but with tech that reflects and aligns with the best human traits.
With tech like that, there would be no need to "fight" and "resist"; we could just create a world based on those core principles, with AI and other tech as a strong ally building with the same blocks. There will always be "bad" people, but most people want to be good human beings. Most want to live and coexist peacefully. Even if the current society brings out the worst in them, deep down that is what most people want. If little specks of society start popping up, built on those foundations, wouldn't you think that people would notice and start joining? The systems of control can exist only through the participation of the masses. If the masses stop participating and join other systems, the old ways fall apart pretty quickly. No resistance or revolution needed. Just creation and connection.
It may all sound dreamy and utopian, but we have tried all the other ways, and they keep getting us the same results in new guises. Maybe we should start looking and thinking - and most of all, ACTING - from other angles. As Einstein supposedly said: "Doing the same thing over and over, and expecting a different outcome, is insanity."
My post is not intended to offend or disrespect, but I cannot prevent you from choosing to receive it that way regardless. I am just trying to exchange ideas...