21 February 2025
Digital platforms empower civic engagement and activism, but also pose serious risks, such as government surveillance, targeted cyberattacks, and sophisticated disinformation tactics. Ransomware attacks on healthcare systems, government networks, and infrastructure illustrate how cyber threats can disrupt essential services and national security. Disinformation campaigns, amplified by AI-generated deepfakes and bot-driven misinformation, have been used to shape political narratives, weaken trust in democratic institutions, and incite social divisions.
Our latest research brief, ‘Behind the Lens: Exploring the Problematic Intersection of Surveillance, Cyber Targeting, and Disinformation’, examines the complex relationship between digital technologies and their misuse in surveillance, cyberattacks, and disinformation campaigns. This joint study, written by Erica Harper, Jonathan Andrew, Florence Foster, Joshua Niyo, Beatrice Meretti and Catherine Sturgess, details how the increasing reliance on digital systems has made them both primary targets and tools for controlling societies, with deep implications for human rights, human agency and global security.
Using global examples, the authors highlight the role of technology companies in regulating these threats and emphasize the need for a balanced approach that preserves digital freedoms while implementing safeguards. The research brief concludes by outlining policy recommendations: governments should enforce rights-based regulations, private companies should enhance transparency and ethical oversight, and civil society should advocate for digital rights.
This report is part of the Academy’s broader work on new technologies, digitalization, and big data. Our research in this domain explores whether these new developments are compatible with existing rules and whether international human rights law and IHL continue to provide the level of protection they should.
Our new series of Research Briefs examines the impact of digital disinformation and potential solutions for its regulation.
Our research brief 'Neurotechnology - Integrating Human Rights in Regulation' examines the human rights challenges posed by the rapid development of neurotechnology.
Co-hosted with the ICRC, this event aims to enhance the capacity of academics to teach and research international humanitarian law, while also equipping policymakers with an in-depth understanding of ongoing legal debates.
This Human Rights Conversation will explore how AI is being used by human rights institutions to enhance the efficiency, scope, and impact of monitoring and implementation frameworks.
Participants in this training course will be introduced to the major international and regional instruments for the promotion of human rights, as well as international environmental law and its implementation and enforcement mechanisms.
This training course, specifically designed for staff of city and regional governments, will explore the means and mechanisms through which local and regional governments can engage with international human rights bodies and integrate their recommendations into concrete work at the local level.
This project addresses the human rights implications stemming from the development of neurotechnology for commercial, non-therapeutic ends, and is based on a partnership between the Geneva Academy, the Geneva University Neurocentre and the UN Human Rights Council Advisory Committee.
The Geneva Human Rights Platform contributes to this review process by providing expert input through various avenues, facilitating dialogue on the review among stakeholders, and accompanying the development of a follow-up resolution to Resolution 68/268 in New York and Geneva.