The adoption of AI is growing rapidly, but so are concerns around data privacy, bias, and control. That’s because many AI systems today are developed and operated in centralized environments, where transparency into training data, model behavior, and data usage is limited.
With new regulations like the EU AI Act, organizations are under increasing pressure to adopt AI safely. So how can your organization integrate AI into its everyday workflows while maintaining full data control and transparency?
At Nextcloud, we asked ourselves the same question, which led us to develop the Ethical AI Rating: a simple way to evaluate how transparent, open, and trustworthy an AI solution really is.
The problem with Big Tech AI platforms and the challenges of open source AI technologies
The development of AI is moving fast, and many of the new capabilities face ethical and even legal challenges. As we explored in our article on Big Tech AI privacy concerns, many of these tools rely on large-scale data collection, often without clear transparency or user control.
This can lead to issues with:
Use of data without permission
Discrimination and biases
Data theft and leakage
What’s more, merely using open source code is no longer enough to claim that you are in control of your data or that the software is safe or ethical.
This is particularly true for neural network-based AI technologies.
The set of data and the software used in the training, as well as the availability of the final model, are all factors that determine the amount of freedom and control a user has.
Your guide to privacy-first AI: Nextcloud’s Ethical AI Rating
Not all AI models are equal: some prioritize openness and user control, while others rely on opaque models and centralized data processing.
To help users and administrators make informed decisions, we developed the Nextcloud Ethical AI Rating: a rating system designed to give a quick insight into the ethical implications of a particular integration of AI in Nextcloud.
This rating aims to provide a quick, transparent review of how much control and insight you have over these tools.
This is especially important as organizations evaluate AI tools not just for performance, but for compliance, privacy, and long-term data sovereignty.
Users can still evaluate solutions in more detail, but the Nextcloud Ethical AI Rating can simplify the choice for the majority of users and customers.
The Nextcloud Ethical AI rating in practice: What you need to know
The rating is based on these factors:
Open source licensing
Is the software open source for both inference and training?
Self-hosting options
Is the trained model freely available for self-hosting?
Availability of training data
Is the training data available and free to use?
And has four levels:
Red 🔴
Orange 🟠
Yellow 🟡
Green 🟢
This leads us to the following ranking system:
If all three conditions are met, we give the AI solution a green label 🟢
If two conditions are met, the label is yellow 🟡
If one condition is met, it receives an orange label 🟠
If none of the conditions are met, it gets a red label 🔴
In other words, if you have full control over the AI tool you’re using, you’ll see a green label. If you have no control and a lot of dependency, the label will be red.
These colors give an immediate overview of how an AI solution scores on factors such as sovereignty, transparency, and data control.
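The rating rules above boil down to counting how many of the three conditions a solution meets. As an illustrative sketch (our own reading of the rules, not code from Nextcloud), the mapping could look like this:

```python
def ethical_ai_rating(open_source: bool,
                      model_self_hostable: bool,
                      training_data_free: bool) -> str:
    """Map the three Ethical AI factors to a color label.

    Illustrative only: the parameter names are our own labels for the
    three conditions described above, not a Nextcloud API.
    """
    conditions_met = sum([open_source, model_self_hostable, training_data_free])
    labels = {3: "green", 2: "yellow", 1: "orange", 0: "red"}
    return labels[conditions_met]

# A fully open, self-hostable model with freely available training data
# meets all three conditions and scores green:
print(ethical_ai_rating(True, True, True))  # green
```

Note that the label depends only on how many conditions are met, not on which ones, which is what makes the rating quick to read at a glance.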
Caveat: Why critical thinking is still important to ensure ethical AI
We add one additional note to the rating: bias.
Bias remains a known challenge in AI systems. While it is difficult to guarantee complete neutrality, the rating highlights known issues where they exist, helping users make informed decisions.
If we discover major biases in the data set or in the model's behavior at the time of our last check, this is mentioned in the rating. An example would be a face recognition model that discriminates on the basis of race or gender.
There are other ethical considerations for AI, of course.
Think of legal challenges related to the use of datasets, in particular copyright issues, as well as the significant energy consumption of deep neural networks.
Unfortunately, those concerns are extremely hard to quantify objectively. While we intend to warn users of any open issues, we cannot (yet) include them in our rating.
For that reason, we recommend that users investigate the consequences of using AI in their individual case, with the Nextcloud Ethical AI Rating as a starting point.
Ethical AI in Nextcloud Assistant
Of course, we try to practice what we preach: this approach is not just theoretical, but also shapes how AI is implemented in Nextcloud.
At the core of Nextcloud’s AI approach is the principle that you should never be locked into a single provider. In other words, administrators can choose between different providers, including self-hosted options.
What’s more, organizations can decide where their models run, which models are used, and what happens to their data.
This way, your organization can benefit from AI-supported collaboration while staying in control of, and responsible for, its data.
AI is still optional and configurable, instead of a mandatory layer imposed on all users or workflows. This allows organizations to adopt AI at their own pace, align it with internal policies, and decide which use cases make sense in their environment.
When AI is enabled, it becomes part of the collaboration environment instead of an external dependency. It integrates into existing workflows without breaking governance or compliance frameworks.
For example, you can use AI in Nextcloud to:
Summarize meetings and conversations in Nextcloud Talk
Provide live transcription and translation for multilingual collaboration
Integrate AI capabilities directly into email, chat, meetings, and file workflows
Nextcloud Hub 26 Winter also makes compliance easier: you can generate images and documents in various apps and automatically label the content with watermarks, helping your organization stay in line with the latest regulations, such as the EU AI Act.
In short: privacy-first AI solutions such as the Nextcloud Assistant give organizations the efficiency and convenience of AI, while keeping governance, compliance, and data ownership exactly where they belong: under their control.
Regain your digital autonomy with Nextcloud Hub 26 Winter
Our latest release, Nextcloud Hub 26 Winter, is here! Discover the latest Nextcloud features.