Gnuxie & Draupnir: March 2025
Table of Contents
The immediate priorities for Draupnir, and our progress towards them
After 2.0 was released, and with the situation in Matrix's open federation and ecosystem evolving, we decided that we urgently needed to improve things for Matrix homeserver administrators. This work is a divergence from the road-map and goals set out for the NLnet grant. But with the 2.0 release and its bug hunting having come to a close, we had the opportunity to develop bespoke protections for server admins, and we have done so. We will be expanding the available list further.
As a reminder, you can keep track of which issues have been worked on here.
We include at the end of this blog update an assessment of the current situation which can serve as a justification for the current prioritisation described below.
Progress
- We have updated the documentation to include a specific page covering all Draupnir features that are relevant to homeserver administration, which you should read if you are operating a Matrix homeserver with open registration.
- We have developed a new room takedown protection for homeserver administrators. It allows admins to use policy rooms to block and purge rooms from their Matrix homeservers, including declining open invitations from those rooms to users on the homeserver.
- We have implemented both MSC4204: Takedown moderation policy recommendation and MSC4205: Hashed moderation policy entities to directly support the room takedown protection. These MSCs ensure that policy rooms cannot be used by malicious actors as a directory of abuse (a sketch of how hashed entities achieve this follows this list).
- We have added support for Tulir's synapse-http-antispam to replace the legacy Mjolnir synapse module, which is incompatible with Synapse workers. This module allows us to proactively block room invitations from users marked on watched policy rooms1. We expect to have to wait for matrix-docker-ansible-deploy to package this module before room takedown policies can be organised from the Community Moderation Effort.
- Crucially, with the help of Bea, cdesai, and Cat, we have resolved a bug experienced by matrix-docker-ansible-deploy users that stopped sqlite from creating temporary files. This opens the door for us to use sqlite for much more than the room state backing store, and we have begun to do so.
- All of these features are ready to use in v2.3.0-beta.0.
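To illustrate how hashed policy entities keep a policy room from doubling as a directory of abuse, here is a minimal TypeScript sketch. The event shape and field names (`recommendation`, `hashes.sha256`) are my own illustrative assumptions rather than a quotation of MSC4205; the idea is simply that the list stores a SHA-256 digest of the entity, so a reader can test rooms it already knows about without learning new room IDs from the list.

```typescript
import { createHash } from "node:crypto";

// Illustrative shape of a policy whose entity is published only as a hash
// (field names are assumptions, not quoted from MSC4205).
interface HashedPolicy {
  recommendation: string;          // e.g. a takedown recommendation
  reason?: string;
  hashes: { sha256: string };      // base64 SHA-256 digest of the entity
}

function sha256Base64(entity: string): string {
  return createHash("sha256").update(entity).digest("base64");
}

/**
 * Returns the policies that match a room the server already knows about.
 * Because the list only stores digests, reading it does not reveal room IDs
 * that the reader has not encountered elsewhere.
 */
function matchingPolicies(roomId: string, policies: HashedPolicy[]): HashedPolicy[] {
  const digest = sha256Base64(roomId);
  return policies.filter(policy => policy.hashes.sha256 === digest);
}

// Usage: check a locally known room against a watched policy list.
const policies: HashedPolicy[] = [
  { recommendation: "takedown", hashes: { sha256: sha256Base64("!example:example.com") } },
];
console.log(matchingPolicies("!example:example.com", policies).length > 0); // true
```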
Where we are going in the next two months
- We are currently working with feedback from server moderators to develop a protection that automatically suspends local users who end up on watched policy rooms (a rough sketch of the idea follows this list). https://github.com/the-draupnir-project/Draupnir/pull/799
- We will be provisioning a test environment to gather feedback from third parties unfamiliar with Matrix. This is to assist with obligations to receive an accessibility and security review. We might like to keep it though.
- We are implementing better policy list filtering to reduce the risk of watching policy rooms while enabling protections with irreversible actions such as the room takedown protection. This will cover the goal for an Explicit agreement watch mode for policy lists.
- We will also likely cover the goal for policy list subscription previews while we are here.
- MSC4273: Approve and Disapprove ratings for moderation policies will also likely be used as the basis for these features.
- We are working towards proactive moderation and improving the on-boarding experience for users that are entirely new to a community. The Draupnir protection we are developing under the participation metric user story will be a game changer for room moderation on Matrix.
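As a rough illustration of the suspension idea referenced above, the sketch below reacts to a local user appearing on a watched policy list by calling a homeserver admin endpoint. The `/_synapse/admin/v1/suspend/` path and payload are assumptions based on Synapse's experimental account suspension support, not the implementation in the linked pull request.

```typescript
// Sketch only: suspend a local user when a watched policy room lists them.
// The admin endpoint below is an assumption based on Synapse's experimental
// suspension API, not the implementation in the linked pull request.
async function suspendLocalUser(
  homeserverUrl: string,
  adminAccessToken: string,
  userId: string
): Promise<void> {
  const response = await fetch(
    `${homeserverUrl}/_synapse/admin/v1/suspend/${encodeURIComponent(userId)}`,
    {
      method: "PUT",
      headers: {
        Authorization: `Bearer ${adminAccessToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ suspend: true }),
    }
  );
  if (!response.ok) {
    throw new Error(`Failed to suspend ${userId}: ${response.status}`);
  }
}

// A protection would call this only when a matching policy targets a local user:
function isLocalUser(userId: string, serverName: string): boolean {
  return userId.endsWith(`:${serverName}`);
}
```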
A focus on on-boarding users to a community
Why on-boarding is the best place to focus
The overwhelming majority of bans that land on the Community Moderation Effort's policy list come from throwaway user accounts that have never been encountered by a community before they send abuse.
Despite new users being an obvious vector for abuse, Matrix provides new users with the same access to interact with Matrix rooms as any other present user2. This reality is actually a lot worse than it sounds: Matrix takes a giant leap backwards here due to the nature of the Matrix DAG and its crude "power levels" access control system. In an open federation context, new users have the access to interact with a room before the room moderators even know that the new user exists, and before the room moderators' homeserver even knows the abuser's homeserver exists3. This means that all moderation in Matrix at the room level, within the context of the DAG, is fundamentally reactive.
This is a violation of the principle of least authority, and it is actually a software vulnerability. It is THE MOST common vulnerability in the OWASP Top 10, which is referenced in the security considerations of the Matrix specification change process, and it is explicitly described at the very top2.
Fortunately, most Matrix homeservers apply rate limits to incoming events from federation, which in most situations will buy time for our reactive tooling.
Gradual Access Control: How we can take matters into our own hands
We're going to introduce something new into Draupnir: gradual access control. This means restricting a new user's access to a community until they have become familiar with the community's active users and moderators. In practice, Draupnir will have to implement an access control system of its own on top of Matrix's power levels, enforced with redactions and direct communication with community members.
To do this, Draupnir must implement what we have been calling the "participation metric", which we will use to gauge whether a given user has become familiar to the community.
The way this works is by requiring new users to navigate through a series of levels that grant them progressively more access. A threshold of interaction must be met before they transition to the next level, representing the increase in familiarity they have with the community.
At the moment the design of this system is as follows. I will want explicit credit if you appropriate this idea, but don't let that stop you from copying4.
- Level 0: The user has been encountered by the community. They do not have the ability to do anything except interact with a welcome notice describing the rules, code of conduct, and an FAQ. This is an extreme level of protection that can be skipped if the active threat level is low.
- Level 1: The user has acknowledged the welcome notice and is granted an initial set of capabilities to interact with the community. By default this means they will only be able to use text-based media: no images, links, videos, user mentions, or stickers. These restrictions are expressed in terms of what is allowed, rather than what is not allowed.
- Level 2: The user has sent enough messages that have been seen by moderators and active community members to qualify for an expanded set of capabilities. Users at level 2 can now send more media such as images and links. These are met with scrutiny and are applied under a global rate limit.
- Level 3: The user has sent enough messages containing media that have been seen by moderators and active community members. They no longer need to be watched as closely and safeguards can optionally be removed. They can now mention other users.
- Level 4: Users at level 4 are considered to be integral to the community and can use redact or report functionality against users of level 2 and below. (Although, for now, implementing redact or report functionality is out of scope.)
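To make the shape of this ladder concrete, here is a minimal TypeScript sketch of how the levels, allowed capabilities, and promotion thresholds could be represented. Every name and threshold number here is a placeholder of my own choosing, not Draupnir's eventual design.

```typescript
// Sketch of a gradual access control ladder. Capability names and thresholds
// are placeholders, not Draupnir's eventual configuration.
type Capability =
  | "interact_with_welcome_notice"
  | "send_text"
  | "send_images_and_links"
  | "mention_users"
  | "redact_or_report_lower_levels";

interface AccessLevel {
  level: number;
  allowedCapabilities: Capability[]; // expressed as an allow list
  promotionThreshold: number;        // interactions seen by the community
}

const ladder: AccessLevel[] = [
  { level: 0, allowedCapabilities: ["interact_with_welcome_notice"], promotionThreshold: 1 },
  { level: 1, allowedCapabilities: ["interact_with_welcome_notice", "send_text"], promotionThreshold: 20 },
  { level: 2, allowedCapabilities: ["send_text", "send_images_and_links"], promotionThreshold: 50 },
  { level: 3, allowedCapabilities: ["send_text", "send_images_and_links", "mention_users"], promotionThreshold: 200 },
  { level: 4, allowedCapabilities: ["send_text", "send_images_and_links", "mention_users", "redact_or_report_lower_levels"], promotionThreshold: Infinity },
];

/** Promote a user once their counted interactions cross the current level's threshold. */
function nextLevel(currentLevel: number, interactionsSeen: number): number {
  const current = ladder[currentLevel];
  if (current === undefined || interactionsSeen < current.promotionThreshold) {
    return currentLevel;
  }
  return Math.min(currentLevel + 1, ladder.length - 1);
}
```

Making the thresholds part of the configuration is what would let a support room, for example, lower the bar to level 2.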
This system is going to fundamentally make it much harder to send spam to a community, and it will be able to work automatically. The only concern I have is that communities with support rooms may need to figure out how to move users to level 2 quickly after some initial messages (since these rooms typically require new users to upload screenshots of bugs). But I aim to make the thresholds configurable.
We will implement this as defensively as possible by writing predicates for which events are allowed; anything that is unknown or unexpected will be redacted. This could lead to false positives until we discover all the edge cases, but it's the safest way to do this.
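As a sketch of what this defensive approach could look like, the predicate below only passes event shapes it positively recognises for the user's current capabilities and returns a redaction reason for everything else. The event shape and capability names are simplified placeholders of my own, not Draupnir's internal types.

```typescript
// Simplified Matrix event shape for the sketch; real events carry more fields.
interface MatrixEvent {
  type: string;
  sender: string;
  content: {
    msgtype?: string;
    url?: string;
    "m.mentions"?: unknown;
  };
}

type Verdict = { allow: true } | { allow: false; reason: string };

// Allow predicate: anything not explicitly recognised is rejected,
// so unknown or unexpected events default to redaction.
function checkEvent(event: MatrixEvent, capabilities: Set<string>): Verdict {
  if (event.type !== "m.room.message") {
    return { allow: false, reason: `unrecognised event type ${event.type}` };
  }
  if (event.content["m.mentions"] !== undefined && !capabilities.has("mention_users")) {
    return { allow: false, reason: "user mentions not yet permitted" };
  }
  if (event.content.msgtype === "m.text" && capabilities.has("send_text")) {
    return { allow: true };
  }
  if (
    (event.content.msgtype === "m.image" || event.content.url !== undefined) &&
    capabilities.has("send_images_and_links")
  ) {
    return { allow: true };
  }
  return { allow: false, reason: `msgtype ${event.content.msgtype ?? "unknown"} not in allow list` };
}
```

The `reason` attached to each rejection is also what would feed the record of redactions described next.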
We will also have to record the number of redactions sent and why events were redacted, so that the new user can be reminded of the rules and the system, and so that they can be removed if we notice there are no moderators present and it's clear they are gaming the system.
While this is a very simple system to create, it is the edge cases and customization that communities will expect to be covered that will make this a complex tool.
Assessment of the active situation
Why are the attacks happening now?
Matrix.org decided to move to a curated room directory at the end of February5. Prior to this decision, the matrix.org room directory contained shell rooms that served as fronts for communities dedicated to extremist or illegal content. These communities have since reacted to the unlisting of their rooms by actively engaging in abuse targeting the foundation and communities close to it6.
The Matrix federation has a problem where homeserver admins have very little oversight into which rooms their users are joining and causing the homeserver to participate in7. This has been allowing the aforementioned communities to thrive, which is why they have responded so harshly to the foundation making a move to clean up its own server.
So I don't think I need to elaborate much on why we've decided it's urgent to give server admins features to support them here.
Things have escalated in the past few weeks, however.
The context for why abuse is so accessible
The abuse and attack vectors that are being exploited on Matrix are not new. Even the volume of abuse and the variety of actors involved are nothing Matrix has not experienced before; Matrix has seen greater volume in its history. This includes the software vulnerabilities, which were not only known but have also been exploited in the wild before. Materially, for the defenders, not much has changed since 2019. The Community Moderation Effort and the Matrix foundation itself remain the only lines of defence for room moderators.
The ease with which malicious actors can propagate abuse on Matrix is NOT acceptable for any platform and is inexcusable. Decentralization is NOT an excuse for a lack of consideration for safety. The reality of the protocol and its available tooling is that all attempts to stop abuse are reactive. Matrix is a risky and unsafe platform for any open community to call home. The only reason communities call Matrix home despite this is an ideological commitment to FOSS and decentralization. And this has always been the case.
This ideological commitment has blinded Matrix's own community and developers to how unsafe this protocol is and has led to a lack of resource prioritisation on safety. In Matrix's history there has been an abundance of resources, with over $80M in total spent on core development according to representatives of both the foundation and Element as of 2024. Yet safety has consistently been left with the dregs, the bare minimum required to meet legal obligations. And this has been catastrophically poor for a protocol that exists as a social network and even aims to be a universal one.
Any software that is primarily social must have safety as a top priority, and this means cross-functional communication and representation of safety concerns MUST exist within any organisation, in much the same way as a product team would in a corporate enterprise8. Matrix and its vendors have certainly never made the commitment to do this, and unfortunately it is unclear now if they will ever be able to.
Speaking of which: while editing this update I'm heavily disappointed to hear that Element X Android does not have redact or ban functionality. And I found this out from a room moderator who has just had their room attacked.
Hello philanthropists, how're you doing baby? Please don't use that money to exploit people. And certainly, if you ever find yourself creating a social media startup, make sure that there is organisation and structure in place dedicated to safety which cuts into each team, to ensure that safety is considered throughout the tech you are creating. But also, again, DO NOT, because you will not have control for long. In the words of Dmytri Kleiner:
Whatever portion of our productivity we allow to be taken from us will return in the form of our own oppression.
Despite these overwhelming problems, with Draupnir's Gradual Access Control we aim to plug the gap at the room level which is being exploited. We hope it will provide a new and proactive line of defence against abuse that will make a huge difference to the experience of running a community on Matrix.
Why contributors are so important
There is a very important lesson to learn from Matrix: an optimistic approach to contributors and to fostering contribution is something that has never materialised for Matrix, due to the corporate management of Element's repositories and that same culture bleeding into the foundation.
A constant source of pessimism on the governing board is the lack of budget that the foundation has to spend. And consistently whenever a solution is sought, it can be dismissed because someone would have to be paid to implement it.
This pessimism completely ignores that Matrix has a thriving, although currently disorganised, contributor community. And over the years, Matrix has had a number of volunteers push progress, contributing to a variety of projects to implement MSCs, only to be burnt because they were treated as lower-than-second-class to Element's priorities.
The narrative that Element is telling the ecosystem at the moment is that Matrix cannot survive, let alone succeed without them. This narrative is toxic and if we do not combat it, we risk ensuring that the Matrix community ends when Element ends.
This will NOT be the case9. Element's failure to solve its problems will cause an existential crisis for the foundation that will be very rough for everyone. It will not be a desirable outcome. However, it will not be the end of Matrix. Despite a lack of funds available to the foundation, there are now more contributors working on Matrix than there ever have been in Matrix's history. And increasingly, third parties are taking responsibility for maintaining more complex and critical technology.
The best example we have of this is the Community Moderation Effort itself, which provides a service to the Matrix community that would cost hundreds of thousands of dollars to run if it were all done on the accountant's books. And that would account for just the handful of moderators needed to provide round-the-clock coverage, not the range of volunteers and the software infrastructure that the CME does have available to assist communities.
Increasing costs for the foundation at this point in time also increases the foundation's dependency on the vendors for direct financial support. The reality is that they have always had offers from volunteers to assist, but not the organisation to enable them while still allowing their work to be carried out autonomously.
Breaking the negativity spiral and how to rebuild Matrix
There is a huge cloud of negativity hovering over Matrix foundation related rooms at the moment. Anyone with foundation responsibilities is visibly unhappy, as are a lot of community members. Everyone who is reading will know what I am talking about.
At the moment, when people talk about Matrix's problems, they are told to use official channels that people have very little confidence in, and which are often also very opaque. One of the reasons I suspect these interactions between complainers and representatives go so badly is that the representatives know themselves how ineffective and hopeless their response is.
These channels have rarely demonstrated their effectiveness, particularly when it comes to technical matters. And this makes things worse: people feel as though these appeals are a suppression of their complaint, or otherwise have the effect of making them feel as though they are not cared for.
The foundation's own bodies are dysfunctional. Even the SCT does not have a transparent system in place that they can point to when there are complaints, or when they need to demonstrate that progress can be made before it actually is made. They also do not have anything close to objective insight into the direction of community projects, or into what their concerns or struggles are with the protocol.
All these things add up across the board, from developers to contributors of all kinds to users. They make for a very heavy and negative environment to work within.
The reason this happens is that there is no organisation showing people what the issues are, how progress is being made, or how the priorities are derived from triaging. I do not think there is a way of addressing complaints without referring to a transparent organisation that people can have confidence in. Honesty about the state of things, as a part of that, goes a long way too.
This turns the initial negativity from complaints into something productive, something that can be used to show hope and build confidence. Being able to invert someone's negativity in a chat room at the drop of a link that shows they can relate to you is very powerful.
Draupnir would have all these problems too if we weren't able to show people the issue tracker and explain to them how things can be fixed.
Ultimately it's now my belief that an organisation needs to exist to assess the issues that Matrix has from a community perspective. The information from this organisation could then be used freely by developers or the foundation itself to provide insight into what features or fixes they could choose to focus on that would be beneficial to the systems they are maintaining in the context of the ecosystem as a whole.
The same organisation could have its responsibilities expanded later to encourage contribution with calls for participation to certain projects upon request of their maintainers, and to serve as a way for different Matrix projects to communicate with each other about the direction they are heading.
I hate having to be the one to pick up the slack, but I am going to try to show what this will look like regardless of whether it will be accepted as an official foundation working group. I would really appreciate it if someone else took over though, so please get in touch if you are interested.
Closing
Footnotes:
Special thank you to Tulir for working on this module in conjunction with his own moderation bot, Meowlnir.
This is incredibly ironic because the matrix-spec-proposals template links to the OWASP Top Ten in its security considerations section https://github.com/matrix-org/matrix-spec-proposals/blob/main/proposals/0000-proposal-template.md#security-considerations and the first consideration is about broken access control. If you look at the description of broken access control, violation of the principle of least privilege is listed not only as a vulnerability but as the first of the common access control vulnerabilities https://owasp.org/Top10/A01_2021-Broken_Access_Control/.
Cos f'knows where I've drawn inspiration from, the toilet?
Switching to Curated Room Directories, matrix.org 2025.02.20, source: https://matrix.org/blog/2025/02/curated-room-directories/
Matrix - A Pit of Abuse with Government Ties, Upper Echelon June 2024, source: https://www.youtube.com/watch?v=W8KEuAEYjQ4. Note: Despite a number of technical inaccuracies in this video, the premise of the video is correct.
DO NOT found a for-profit tech company.
Although currently, it is unclear whether there is a way to rescue the matrix.org homeserver or its infrastructure as it exists. Other than this, things would not change too much: Element already prioritises its own things, and progress is generally not being made on issues that are immediately important to the community. OIDC and sliding sync are very nice and impressive achievements but are largely not relevant to the most pressing problems Matrix has.