Why strengthen community digital archives in the face of AI and deepfakes

At a historical moment in which artificial intelligence can produce images, audio, and video practically indistinguishable from what a camera records, the work we promote from the Laboratorio Popular de Medios Libres (LPML), together with WITNESS and multiple allied organizations in Escuela Común, takes on an urgent dimension. We are not facing only a technical challenge; we are facing a profound transformation of the conditions under which public truth is built, territories are defended, and the lives of those who report abuses are protected.

For decades, Indigenous, peasant, and urban communities have documented police violence, evictions, threats from armed actors, and private security operations linked to extractive projects. As is happening today in Trump’s United States, the cell-phone camera becomes a tool of defense, a mechanism to say: this happened, here is the evidence. The expansion of generative AI, however, disrupts that basic premise. As Sam Gregory of WITNESS puts it: “Seeing is no longer believing.”

The dominant response to this problem usually focuses on developing deepfake-detection technologies or on new regulations that force platforms to label or remove content. In India, for example, recent updates to information technology rules require companies to act on complaints of digital manipulation with enormous speed, in some cases within just a couple of hours to evaluate and remove problematic material. The intention is understandable: to reduce harm. But this approach also places gigantic power in corporate intermediaries that do not necessarily understand local contexts or the dynamics of risk faced by communities.

In the current climate, any content uncomfortable for those in power can quickly be flagged as a possible deepfake; and if communities lack the technical tools, traceability, or preservation processes to reliably demonstrate authenticity, that mere suspicion may be enough to justify deletion, silencing, or a drastic reduction of reach. The problem is aggravated by an evident asymmetry: the capacity to manufacture forgeries often advances faster than the technologies that try to detect them. In that race, the margin of error almost always turns against those who denounce, leaving room for the false to be legitimized while the true is cast into doubt. Added to this is the fact that metadata, contact networks, and usage patterns hosted on corporate infrastructure can reveal extremely sensitive information about organizational strategies or identities that need to remain protected.

For this reason, the main effort in Escuela Común is for communities to develop the capacity to document, safeguard, and sustain their own digital memory autonomously. This means learning to use free software, deploying their own servers, and understanding how the integrity of a file is preserved and how its authenticity can be demonstrated over time. It also means discussing care protocols, evaluating what information can be made public and what must circulate in a restricted manner, and understanding that digital security is inseparable from physical security. In territories where documenting can cost lives, the responsible management of evidence is a matter of vital importance.
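To make the idea of file integrity concrete, here is a minimal sketch in Python of one common approach: recording a SHA-256 hash for every file in an archive and later re-checking them. The directory and manifest names (`archivo-comunitario`, `manifest.json`) are hypothetical, and this is an illustration of the general technique, not the specific tooling any of these organizations use:

```python
import hashlib
import json
from pathlib import Path

ARCHIVE_DIR = Path("archivo-comunitario")   # hypothetical archive directory
MANIFEST = Path("manifest.json")            # hypothetical manifest file

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest() -> None:
    """Record a hash for every file currently in the archive."""
    hashes = {str(p.relative_to(ARCHIVE_DIR)): sha256_of(p)
              for p in sorted(ARCHIVE_DIR.rglob("*")) if p.is_file()}
    MANIFEST.write_text(json.dumps(hashes, indent=2, ensure_ascii=False))

def verify_manifest() -> bool:
    """Re-hash every file and report anything altered or missing."""
    recorded = json.loads(MANIFEST.read_text())
    intact = True
    for rel, digest in recorded.items():
        path = ARCHIVE_DIR / rel
        if not path.is_file() or sha256_of(path) != digest:
            print(f"ALTERED OR MISSING: {rel}")
            intact = False
    return intact

if __name__ == "__main__":
    if MANIFEST.exists():
        print("Archive intact." if verify_manifest() else "Integrity check failed.")
    else:
        build_manifest()
        print("Manifest created.")
```

Sharing the manifest (or even just its own hash) with allied organizations at the time of recording makes it far harder to claim later that the material was fabricated after the fact.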

The existence of community archives managed on their own infrastructure changes the terrain of the debate about truthfulness. In the face of the permanent suspicion instilled by deepfakes, communities can sustain traceability, context, and a clear chain of custody for their information. They do not depend on the goodwill of a company to preserve their history, nor on automated forms to avoid the deletion of crucial testimony. The proof is not delegated: it is governed.
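One way such traceability can be sustained in practice is an append-only custody log in which each entry incorporates the hash of the previous one, so that editing any past entry breaks the whole chain. The sketch below assumes hypothetical event fields (actor, action, file hash) purely for illustration:

```python
import hashlib
import json
import time

def chain_hash(prev_hash: str, entry: dict) -> str:
    """Hash the previous link together with the new entry's content."""
    payload = prev_hash + json.dumps(entry, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def append_event(log: list, actor: str, action: str, file_hash: str) -> None:
    """Add a custody event; its hash depends on everything before it."""
    entry = {"time": time.time(), "actor": actor,
             "action": action, "file": file_hash}
    prev = log[-1]["hash"] if log else "genesis"
    log.append({"entry": entry, "hash": chain_hash(prev, entry)})

def verify_log(log: list) -> bool:
    """Recompute every link; any edited entry invalidates the rest."""
    prev = "genesis"
    for item in log:
        if chain_hash(prev, item["entry"]) != item["hash"]:
            return False
        prev = item["hash"]
    return True

# Hypothetical usage: record who captured and who archived a video.
log: list = []
append_event(log, "documenter", "captured", "e3b0c44298fc1c14...")
append_event(log, "archivist", "stored-offline", "e3b0c44298fc1c14...")
assert verify_log(log)
```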

Autonomy does not imply isolation. Escuela Común is a network of shared learning in which organizations from different countries exchange methodologies, accompany one another in solving technical problems, and strengthen a common vision: technology must serve processes of justice. Within this framework, WITNESS brings long experience in the use of video and audiovisual evidence for the defense of human rights; the Guardian Project contributes tools such as ProofMode; and efforts are coordinated with the Centro de Autonomía Digital (CAD), which promotes the development of the MAIA model. All these collaborations are reimagined under the common horizon of technological sovereignty for the defense of human rights.

The era of artificial intelligence forces us to move the conversation beyond amazement or fear in the face of increasingly convincing synthetic images. The strategic question is who has the capacity to affirm what is real, under what criteria, and with what infrastructure. If that capacity remains concentrated in a few platforms, communities in struggle will continue to be at a disadvantage. If, instead, it is multiplied through training processes, support networks, and free technologies, concrete possibilities open up to protect both truth and life.

From the Laboratorio Popular de Medios Libres (LPML) we want to reaffirm that our memory is not just another piece of data on someone else’s servers, but a territory that also deserves to be cared for and governed by those who inhabit it.
