Feature Planning

This document is a work in progress. Everything in this document is subject to change.


This page is for brainstorming requirements and long-term goals for the system behind the fork. It may eventually be sorted into separate pages.

This is not an official document. These features have not all been vetted for security/abuse potential.

Safety

For Instances

  • Management of existing blocklists
  • Granular control over blocking instances and users
  • Configurable restrictions for registration/invitation
  • Robust moderation tools
  • One-click user banning from the timeline interface
  • Ability to block specific IP addresses from creating new accounts
  • Better prevention of spambot sign-ups
  • Chain-banning (i.e. banning someone and all of their followers; see the sketch after this list)
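
As a rough illustration of chain-banning, here is a minimal sketch. The chain_ban function and the followers_of/suspend callbacks are invented for this page and don't correspond to anything in Mastodon's actual codebase:

    # Illustrative sketch only; none of these names exist in Mastodon.
    def chain_ban(target, followers_of, suspend):
        """Suspend target and every account that follows it (one hop)."""
        suspend(target)
        for follower in followers_of(target):
            suspend(follower)

    # Tiny in-memory demo.
    follows = {"spambot@bad.example": ["bot1@bad.example", "bot2@bad.example"]}
    banned = set()
    chain_ban("spambot@bad.example", lambda a: follows.get(a, []), banned.add)
    print(sorted(banned))  # all three accounts are now suspended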

For Users

  • Notes when blocking someone (e.g. reason for blocking)
  • Control of where blocked/muted content is shown (allowing "nowhere")
  • Time-based blocking and muting (see the expiry sketch after this list)
  • Display of mutual followers
  • Conformance to post privacy
  • Control of who can reply to a post; "don't @ me" mode
  • Configurable blocking of federation of certain posts
  • Some way to address block evasion (e.g. making an account on a new instance)
  • Two-way user blocking (a block hides each party from the other)
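
A minimal sketch of how time-based muting could work, assuming an invented in-memory store where each mute carries an optional expiry timestamp and an expired mute simply stops applying:

    from datetime import datetime, timedelta, timezone

    mutes = {}  # account id -> expiry datetime, or None for an indefinite mute

    def mute(account_id, duration=None):
        mutes[account_id] = (datetime.now(timezone.utc) + duration) if duration else None

    def is_muted(account_id):
        if account_id not in mutes:
            return False
        expiry = mutes[account_id]
        return expiry is None or expiry > datetime.now(timezone.utc)

    mute("loud@remote.example", timedelta(days=7))
    print(is_muted("loud@remote.example"))  # True, for the next seven days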

Accessibility

  • Interface that works on (almost) all devices (consider a stripped-down view that's usable without JavaScript? might be infeasible)
  • Screen-reader accessibility (including persistent suggestion of image descriptions, addition of descriptions for emoji, and handling of ASCII art)
    • A way to convert ASCII art into an image?
    • Options to disable emoji, provide screen-reader descriptions for emoji, or offer alternate transcriptions of posts containing emoji?
  • Motion sensitivity control (ensuring that flashing/fast movement doesn't happen by default, and allowing users to configure this)
  • Differentiation between subjects (currently Mastodon's "CW" feature) and content warnings in general, and encouragement for users to add subjects
  • Keyword-based hide and block (see the filter sketch after this list)
  • How could these accessibility tools be misused as a harassment vector?
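
A minimal sketch of keyword-based hiding, assuming whole-word, case-insensitive matching; the function name and matching policy are illustrative choices, not a spec:

    import re

    def matches_muted_keyword(text, keywords):
        """Return True if the post text contains any muted keyword as a whole word."""
        return any(
            re.search(rf"\b{re.escape(kw)}\b", text, re.IGNORECASE)
            for kw in keywords
        )

    print(matches_muted_keyword("Election results are in", ["election"]))  # True
    print(matches_muted_keyword("Selection bias", ["election"]))           # False (whole words only)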

Deployability

  • Ability to migrate an existing Mastodon instance to the fork's code, up to a certain Mastodon version
  • Ability to serve media via S3 and similar services

High immediate priority

  • Visually appealing interface
  • Local-instance-only privacy option
  • Mutuals-only privacy option
  • Configuration of federation (blacklists like Mastodon versus whitelists like awoo.space; see the sketch after this list)
  • Follow request notifications
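
To make the blacklist/whitelist distinction concrete, here is a sketch with invented config keys; it is not Mastodon's or awoo.space's actual configuration format:

    config = {
        "federation_mode": "whitelist",   # or "blacklist"
        "blacklist": {"spam.example"},
        "whitelist": {"friendly.example"},
    }

    def may_federate(domain, cfg=config):
        if cfg["federation_mode"] == "whitelist":
            return domain in cfg["whitelist"]   # awoo.space style: allow only listed instances
        return domain not in cfg["blacklist"]   # Mastodon style: allow all but listed instances

    print(may_federate("friendly.example"))  # True
    print(may_federate("random.example"))    # False under whitelist mode

Whichever mode an instance runs, the decision lives in one place, so switching modes wouldn't require touching the rest of the federation code.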

Long-term planned

Usability

  • Performant web UI
  • Low server resource usage
  • Ease of deployment
  • Inline translation of toots

Technical

  • Conformance to ActivityPub standard
  • Creation of a spec for features not in ActivityPub, to ensure fediverse health
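
For reference, a minimal ActivityPub "Create" activity wrapping a Note looks roughly like this (built on the ActivityStreams 2.0 vocabulary; the URLs are placeholders):

    import json

    activity = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "id": "https://instance.example/activities/1",
        "type": "Create",
        "actor": "https://instance.example/users/alice",
        "to": ["https://www.w3.org/ns/activitystreams#Public"],
        "object": {
            "id": "https://instance.example/notes/1",
            "type": "Note",
            "attributedTo": "https://instance.example/users/alice",
            "content": "Hello, fediverse!",
        },
    }
    print(json.dumps(activity, indent=2))

Features not covered by this vocabulary are exactly where a supplementary spec would be needed.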

General Post Features

  • More advanced bios (e.g. follower-specific notes, pinned posts as part of bios, longer bios)
  • Boosting/pinning posts of any privacy level (while preserving privacy)
  • Controlling boostability separate from privacy (e.g. boostable private posts, non-boostable public posts)
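
One way to model boostability separately from privacy is to store them as independent fields; this sketch uses invented field names:

    from dataclasses import dataclass

    @dataclass
    class Post:
        visibility: str   # "public", "unlisted", "private", "direct"
        boostable: bool   # author's choice, independent of visibility

    def can_boost(post, viewer_can_see):
        return viewer_can_see and post.boostable

    print(can_boost(Post("public", False), True))   # False: public but boost-locked
    print(can_boost(Post("private", True), True))   # True: private but boostable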

Privacy

  • Public-only followers

Desired

  • Robust list functionality
  • Ability to credit custom emojis to their authors

If we can get to it

  • Total separation of frontend and backend (e.g. backend-only installations, swappable frontends)
  • Purging of locally cached remote content (posts, media) and retrieval on demand from remote instances (see the sketch after this list)
  • Subject line in posts (semantically different from content warnings)
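
A rough sketch of the purge-and-refetch idea, with an invented retention window and a stand-in fetch function; cached remote content older than the window is treated as purged and re-fetched from its origin:

    from datetime import datetime, timedelta, timezone

    RETENTION = timedelta(days=14)
    cache = {}  # url -> (payload, cached_at)

    def get_remote(url, fetch_from_origin):
        now = datetime.now(timezone.utc)
        entry = cache.get(url)
        if entry and now - entry[1] < RETENTION:
            return entry[0]                 # still within the retention window
        payload = fetch_from_origin(url)    # drop the expired copy and refetch
        cache[url] = (payload, now)
        return payload

    print(get_remote("https://remote.example/media/1", lambda url: b"media bytes"))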

Rejected Features

  • If a person you follow blocks someone and includes a reason, you get a pop-up with that reason when you try to interact with the blocked person. You could also set a threshold: if X people I follow manually block this person, block them automatically for me and notify me.
An abuser could use this to isolate somebody, sending mass alerts to their followers that <target> is bad and should be blocked, encouraging pile-ons and ostracism for accusations that may or may not even be true. It's the same as posting or boosting false information that would get said person blocked, just more insidious.
More formally, let's say Bob blocks Anne. Bob can enter whatever he wants as the reason for the block. If Bob's followers trust him enough to follow him, they're likely to trust what he writes (whether or not it's true) and block Anne too, increasing her isolation. This reduces to the problem of spreading false information. The blocking threshold can easily be overcome when X is too low (whether relatively or absolutely), or when a false call-out post gets enough attention. Combining both features would have far-reaching effects across networks: users with a low X value would contribute toward surpassing the higher X values set by other users.
This would also be a source of information leakage: a fake account could be created to follow a particular person and interact with different accounts to "test" if said person has blocked them.
  • Let block activities be routinely published and federated through the fediverse.
A malicious person could start a custom instance with software for aggregating just those lists and publish them as harassment honeypots. They could even do the collection using what seem like ordinary Mastodon or Pleroma instances and publish them separately/anonymously, so that it isn't clear who is doing the collecting. Any group concerned about harassment would have to be extremely careful about what leaves the boundaries of an instance, and which instances they put their trust in. Any block/blocklist publication would become a feature waiting to be abused.