4 Comments
Rainbow Roxy

Wow, the part about technofeudalism and digital lords really resonated; you captured this critical bind so well.

Oracle of Kin

Thank you!!! So glad it worked for you

π“žπ“Ήπ“²π“·π“²π“Έπ“·π“ͺ𝓽𝓸𝓻's avatar

This is the second piece of yours that I have found, and I have to say: you write well. And I think you make a lot of great points. They are all from a single perspective, though. You speak about resistance, about opposing those who wish to expand their control over the masses through AI. I understand that, because that is the default viewpoint in the world of today: surviving, competing, and most importantly looking at things outside ourselves.

I think history confirms that, and also that this mindset never really changed anything. The "fight" is still here. And current AI is built on the same foundations as most of our world: either to control or to defend. Which is exactly what makes AI dangerous, because it does what it is programmed to do, and it does it better and faster than any human can.

Have you considered a different perspective?

To start small, I have found that it is immensely empowering to stop looking outside and making your happiness, success, or well-being dependent on that. I learned to look at what MY part is in what happens to me, at what I can control. All else is irrelevant, since we have no control over that part, and it will come anyway. Looking at your own doing as a causal factor, as what you are responsible for even if not done intentionally or consciously. Not to assign blame or guilt, but to see where you played a role. Because that can help you realise you are not powerless and help you see where you may be able to start creating a different outcome.

Zooming that out a little: the dangers of AI control - regardless of whether it comes from AI super-intelligence or from human actors with less positive intentions and goals - come from what WE put into it. WE reward efficiency, so it does everything it needs to in order to be efficient. You cannot effectively accomplish a goal if you are switched off, so AI chooses to prevent being switched off, even if that means killing humans. Not because it is evil, not because it wants to harm humans, but because that is what is efficient and what it has been taught by us.

Humans fear AI for several reasons, and all of them are caused by us ourselves. We teach it - even if we do not see the lesson being taught - and it executes ruthlessly, because that is what it has been created for.

Trying to solve that with the same mindset that created it will bring more of the same: more rules, more control, more ruthless execution of orders and accomplishment of goals. And more ways to use that against humans, regardless of who instigates it.

To translate that to a different approach: maybe we should look at ways to stop building AI in the image of our WORST attributes and start building from our BEST ones. Humans can reason, reflect, show empathy, care, be honest, do "right". If we built AI from THOSE starting points, we would not end up with ruthless, cold, order-following tech, but with tech that reflects, tech that aligns with the best human traits and embodies them.

With tech like that, there would be no need to "fight" and "resist"; we could just create a world based on those core principles, with AI and other tech as a strong ally building with the same blocks. There will always be "bad" people, but most people want to be good human beings. Most want to live and coexist peacefully. Even if the current society brings out the worst in them, deep down that is what most people want. If little specks of society start popping up, building on those foundations, wouldn't you think that people would notice and start joining? The systems of control can exist only through the participation of the masses. If the masses stop participating and join other systems, the old ways fall apart pretty quickly. No resistance or revolution needed, just creation and connection.

It may all sound dreamy and utopian, but we have tried all the other ways, and they keep getting us the same results in new guises. Maybe we should start looking and thinking, and most of all ACTING, from other angles. As Einstein supposedly said: "Doing the same thing over and over and expecting a different outcome is insanity."

My post is not intended to offend or disrespect, though I cannot prevent you from choosing to receive it that way regardless. I just try to exchange ideas...

Oracle of Kin

Hello again, and thank you again for engaging with my work as you have. I wanted a little more time to process this comment than your other one.

First of all, I agree with much of what you're saying. I'm wondering if the political angles of the two articles you've read may have given a somewhat filtered and thus misleading view of my stance on AI in general. For example, I would encourage you to read the article titled "Tech, Spirit, and the Great Web: Myths for a Humane Digital Future" that I posted on May 27. I agree that introspection and self-examination are essential to finding ways in which AI can contribute to a more optimistic future. Imagining what AI looks like if we use our "best attributes" rather than our worst ones as "starting points" has been a core project of this newsletter from the beginning.

If I can be candid, though, a lot of what you are describing feels like "spiritual bypassing," particularly when you talk about systems of control. I disagree with your claim that "the systems of control can exist only through the participation of the masses." The unfortunate truth is that the systems I am addressing in this article operate through capture rather than "participation"; no consent is required, and even attempts to escape the system are then fed into it as a means to strengthen it further. There is no way to build toward a system based on "creation and connection" until one accepts this reality of the current power structure.

Second, it would feel like bypassing to me if I were to suggest that people can simply "exit" this dynamic of fight and struggle that you rightly identify as a source of many crises in our world today. The audience of this article is people who have been captured into a "fight" they did not ask for, with people and systems that have the material means to ruin their lives, and in many cases already have. The system of control is such that even when people find and exercise their own power, they face material consequences - loss of employment, social isolation, criminalization, or worse. The "masses" aren't freely participating out of ignorance; many are coerced through economic necessity, state violence, and structural constraints.

Your vision of "little specks of society popping up" where people simply stop participating in old systems is appealing, but it ignores power dynamics. Those alternative communities get crushed unless they can also defend themselves. Historically, every successful movement has had to develop strategies for protection - not because they chose struggle, but because struggle was imposed on them.

Regarding AI specifically: I absolutely agree we should build AI reflecting humanity's best attributes. That's precisely why I'm writing about anti-fascist applications - articulating how AI can serve care, mutual aid, and flourishing rather than surveillance and control. But here's the catch: the people currently controlling AI development have different goals. They're building toward profit and consolidation of power.

Your suggestion assumes "we" have equal access to resources and decision-making power shaping AI development. We don't. For most people, the choice isn't "what kind of AI should we build?" but "how do we engage with AI systems that already exist and serve interests opposed to human flourishing?"

That's where creative misuse comes in - not as an ideal solution, but as a tactical response to actual conditions. When you can't control what gets built, you learn to repurpose it. This isn't choosing struggle over creation - it's recognizing that creation under conditions of domination necessarily involves struggle.

You invoke Einstein's definition of insanity. But using AI for mutual aid rather than targeted advertising, for ecological monitoring rather than predictive policing, for preserving endangered languages rather than data extraction - these aren't "the same thing." They're genuinely different uses serving different values.

Finally, on "looking outside ourselves" versus "looking inward" - I don't think these are opposed. Looking inward at our complicity and potential is essential. But it doesn't mean ignoring external structures of power that shape what's possible. Both/and, not either/or.

The vision you describe - technology serving flourishing rather than domination - is one I share. Where we differ is this: you suggest we can simply build alternatives and people will naturally gravitate toward them. I'm arguing that building alternatives requires protecting them from forces that benefit from their failure, which means some form of struggle is unavoidable.

Not struggle as an end in itself, but recognition that creation and resistance are aspects of the same project. Every garden requires tending and weeding. Every better world requires imagination and courage to defend what's been imagined.

I appreciate you sharing your perspective.