Nextcloud Ethical AI Rating: A transparent approach to privacy-first AI

The adoption of AI is growing rapidly, but so are concerns around data privacy, bias, and control. That’s because many AI systems today are developed and operated in centralized environments, where transparency into training data, model behavior, and data usage is limited.

So, how can you ensure that your organization can integrate AI into its everyday workflows while also maintaining full data control and transparency?

At Nextcloud, we have asked ourselves the same question, leading us to the development of the Ethical AI Rating: a simple way to evaluate how transparent, open, and trustworthy an AI solution really is.

Nextcloud Ethical AI Rating

The problem with Big Tech AI platforms and the challenges of open source AI technologies

The development of AI is moving fast, and many of the new capabilities face ethical and even legal challenges. As we explored in our article on Big Tech AI privacy concerns, many AI systems rely on large-scale data collection, often without clear transparency or user control.

This can lead to issues with:

  • Use of data without permission
  • Discrimination and biases
  • Data theft and leakage

What’s more, the mere use of open source code is no longer enough to claim that you are in control of your data or that the software is safe or ethical.

This is particularly true for neural network-based AI technologies.

The set of data and the software used in the training, as well as the availability of the final model, are all factors that determine the amount of freedom and control a user has.

Your guide to ethical AI: Nextcloud’s Ethical AI Rating

Not all AI solutions are equal: some prioritize openness and user control, while others rely on opaque models and centralized data processing.

To help users and administrators make informed decisions, we developed the Nextcloud Ethical AI Rating: a rating system designed to give a quick insight into the ethical implications of a particular integration of AI in Nextcloud.

This rating aims to provide a quick, transparent overview of how much control and insight you have over an AI system.

This is especially important as organizations evaluate AI tools not just for performance, but for compliance, privacy, and long-term data sovereignty.

Users can still look more deeply into the specific solution they use, but the Nextcloud Ethical AI Rating can simplify the choice for the majority of users and customers.

The Nextcloud Ethical AI Rating in practice: What you need to know

The rating has four levels:

  • Red 🔴
  • Orange 🟠
  • Yellow 🟡
  • Green 🟢

The level is determined by how many of the following conditions are met:

  • Transparency of the code: Is the software open source, both for inferencing and training?
  • Self-hosting options: Is the trained model freely available for self-hosting?
  • Availability of training data: Is the training data available and free to use?

This leads us to the following ranking system:

  • If all three conditions are met, we give the AI solution a green label 🟢
  • If two conditions are met, the label is yellow 🟡
  • If one condition is met, it receives an orange label 🟠
  • If no conditions are met, it gets a red label 🔴

These colors give an immediate overview of the AI solution for factors such as sovereignty, transparency, and data control.
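As a minimal sketch, the mapping from conditions to labels described above could be expressed like this (the function and parameter names are illustrative assumptions, not part of any Nextcloud code):

```python
def ethical_ai_label(open_source: bool, self_hostable: bool, open_training_data: bool) -> str:
    """Map the three rating conditions to a color label.

    Conditions (per the Nextcloud Ethical AI Rating):
    - open_source: is the software open source, for both inferencing and training?
    - self_hostable: is the trained model freely available for self-hosting?
    - open_training_data: is the training data available and free to use?
    """
    # Count how many of the three conditions are met (0 to 3)
    met = sum([open_source, self_hostable, open_training_data])
    # More conditions met means a more open, controllable solution
    return {3: "green 🟢", 2: "yellow 🟡", 1: "orange 🟠", 0: "red 🔴"}[met]

# Example: open source and self-hostable, but the training data is closed
print(ethical_ai_label(True, True, False))  # yellow 🟡
```

The point of the sketch is that the rating is deliberately simple: a count of met conditions, not a weighted score, so anyone can verify a label themselves.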

Caveat: Why critical thinking is still important to ensure ethical AI

We add one additional note to the rating: bias.

Bias remains a known challenge in AI systems. While it is difficult to guarantee complete neutrality, the rating highlights known issues where they exist, helping users make informed decisions.

So when we discover major biases in the data set or in the expression of the model at the time of our last check, you will see this mentioned in the rating.

This includes, for example, discrimination on race or gender in a face recognition technology.

There are other ethical considerations for AI, of course.

Think of legal challenges around the use of data sets, in particular copyright, as well as the considerable energy usage of deep neural networks.

Unfortunately, those concerns are extremely hard to quantify in an objective manner, and while we intend to warn users of any open issues, we cannot (yet) include them in our rating.

For that reason, we recommend that users investigate for themselves what the consequences of using AI are for their individual case, with the Nextcloud Ethical AI Rating as a starting point.

Ethical AI in Nextcloud Assistant

Nextcloud’s core approach to AI is that it should never be tied to any particular provider. In other words, administrators can choose between different providers, including self-hosted options.

What’s more, organizations can decide where their models run, which models are used, and what happens to their data.

By doing so, your organization can benefit from AI-supported collaboration while staying in control of, and responsible for, its data.

Our latest release of Nextcloud Hub 26 Winter continues to build on this foundation.

AI is still optional and configurable, instead of a mandatory layer imposed on all users or workflows. This allows organizations to adopt AI at their own pace, align it with internal policies, and decide which use cases make sense in their environment.

When AI is enabled, it becomes part of the collaboration environment instead of an external dependency. It integrates into existing workflows without breaking governance or compliance frameworks.

What can the Nextcloud Assistant do for you in practice?

  • Improve and generate texts, media, and documents
  • Answer questions based on organizational data
  • Summarize meetings and conversations in Nextcloud Talk
  • Provide live transcription and translation for multilingual collaboration
  • Integrate AI capabilities directly into email, chat, meetings, and file workflows

Nextcloud Hub 26 Winter also makes compliance easier: You can generate images and documents in various apps and automatically label content with watermarks. This ensures your organization is in line with the latest regulations, such as the AI Act in the EU.

In short: privacy-first AI solutions such as the Nextcloud Assistant give organizations the efficiency and convenience of AI, while keeping governance, compliance, and data ownership exactly where they belong: under their control.

Regain your digital autonomy with Nextcloud Hub 26 Winter

Our latest release of Nextcloud Hub 26 Winter is here! Discover the latest Nextcloud features.